The problem is that the lexer could inadvertently skip over the
end of a query being parsed if it encountered a malformed multibyte
character. A specially crafted query string could cause the lexer
to jump up to six bytes past the end of the query buffer. Another
problem was that the lexer could use unfiltered user input as
a signed array index for the parser maps (which have lower and
upper bounds of 0 and 256, respectively).
The solution is to ensure that the lexer only skips over well-formed
multibyte characters and that the index value of the parser maps
is always an unsigned value.
Problem 1:
When the 'Using index' optimization is used, the optimizer may still decide,
after cost-based optimization, to use another index in order to avoid using
a temporary table. But when this happened, the flag telling the storage engine
to read the index only (not the table) was still set. Fixed by resetting the
flag in the storage engine and the TABLE structure in the above scenario,
unless the new index allows for the same optimization.
Problem 2:
When a 'ref' access method was chosen by the cost-based optimizer (when the
column is non-NULLable), it was assumed that the 'quick' access methods needed
no initialization, since they are based on range scans. When the ORDER BY
optimization overrides that decision, however, it expects 'quick' to have been
initialized, and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when 'ref'
access is used.
when partition is reorganized.
Problem was that table->timestamp_field_type was not changed
before copying rows between partitions.
Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing
in fast_alter_partition_table, so that all if-branches are covered.
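As a rough illustration of the affected operation (a minimal sketch; table,
column, and partition names are made up):

  CREATE TABLE t1 (a INT,
                   ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                                ON UPDATE CURRENT_TIMESTAMP)
  PARTITION BY RANGE (a)
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN MAXVALUE);
  -- Before the fix, copying rows during the reorganization could
  -- silently overwrite the stored ts values with the current time:
  ALTER TABLE t1 REORGANIZE PARTITION p0, p1
    INTO (PARTITION p_all VALUES LESS THAN MAXVALUE);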
column on partitioned table
The assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if the query
is executed with an index containing a double column on a partitioned table.
The problem is that the assertion expects all the fields which are read
to be in the read_set.
In this query only the field 'a' is in the read_set, as the tables in
the query are joined by the field 'a', so the assertion fails when it
encounters the other field 'b'.
Since the function cmp() merely compares the two parameters passed to it,
the assertion is not required.
Fixed by removing the assertion in the double fields comparison
function and also fixed the index initialization to do an ordered
index scan with RW LOCK, which ensures all the fields from a key are in
the read_set.
Note: this bug is not reproducible with other datatypes because the
assertion doesn't exist in the comparison functions for other
datatypes.
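A hypothetical shape of such a query (names are made up; the point is that
the join reads only 'a' while the index scan compares full (a, b) keys):

  CREATE TABLE t1 (a INT, b DOUBLE, KEY (a, b))
    PARTITION BY HASH (a) PARTITIONS 2;
  CREATE TABLE t2 (a INT);
  -- Only 'a' is in the read_set here, so the assertion fired on 'b':
  SELECT t1.a FROM t1, t2 WHERE t1.a = t2.a;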
The test case fails sporadically on Windows while trying to overwrite an unused
binary log. The problem stems from the fact that MySQL on Windows does not
immediately unlock/release a file while the process that opened and closed it is
still running. In BUG 38603, this issue was circumvented by stopping the MySQL
process, copying the file and then restarting the MySQL process.
Unfortunately, such facilities are not available in 5.0. Other approaches,
such as stopping the slave and issuing CHANGE MASTER, do not work because the
relay log file and index are not closed when a slave is stopped. So to fix the
problem, we simply do not run the failing part of the test on Windows.
when used with --tab
1) New syntax: added a CHARACTER SET clause to
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE); see the example after this list.
mysqldump is updated to use this in --tab mode.
2) The ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument, however SELECT ... INTO OUTFILE
silently ignored the rest of a multi-character argument.
For symmetry with LOAD DATA INFILE, the
server has been modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) Currently LOAD DATA INFILE recognizes field/line separators
"as is", without converting them from the client character set
to the data file character set. So it is assumed that the
input file of LOAD DATA INFILE consists of data in one
character set and separators in another. For compatibility
with that [buggy] behaviour, the SELECT INTO OUTFILE
implementation has been kept "as is" too, but a new warning
message has been added:
Non-ASCII separator arguments are not fully supported
This message warns about field/line separators that contain
non-ASCII characters.
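For example, with the new clause a data file can be written in an explicit
character set (file and table names are illustrative):

  SELECT a, b INTO OUTFILE '/tmp/t1.txt'
    CHARACTER SET utf8
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
  FROM t1;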
We disallow the partitioning of a log table. You could, however,
partition a table first and then point logging to it. This is
not only against the docs, it also crashes the server.
We catch this case now.
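A minimal sketch of the now-rejected sequence (assuming logging to tables;
the surrounding statements may vary):

  SET GLOBAL general_log = 'OFF';
  ALTER TABLE mysql.general_log ENGINE = MyISAM;
  -- This is now refused instead of crashing the server later:
  ALTER TABLE mysql.general_log
    PARTITION BY HASH (thread_id) PARTITIONS 2;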
A REPLACE in the MERGE engine is actually a REPLACE
into one (FIRST or LAST) of the underlying MyISAM
tables. So in effect the server works on the meta
data of the MERGE table, while the real insert happens
in the MyISAM table.
The MERGE table has no index, while MyISAM has a
unique index. When a REPLACE into a MERGE table is done
and the REPLACE conflicts with a duplicate in a
child table, we try to access the duplicate
key information for the MERGE table. This information
does not actually exist, hence this results in a crash.
The problem can be resolved by modifying the MERGE
engine to provide us the duplicate key information
directly, instead of just returning the MyISAM index
number as the error key. Then the SQL layer (or "the
server") does not try to access the key_info of the
MERGE table, which does not exist.
The current patch modifies the MERGE engine to provide
the position for a record where a unique key violation
occurs.
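A minimal sketch of the scenario (table names are illustrative):

  CREATE TABLE t1 (c1 INT NOT NULL, UNIQUE KEY (c1)) ENGINE=MyISAM;
  CREATE TABLE t2 (c1 INT NOT NULL, UNIQUE KEY (c1)) ENGINE=MyISAM;
  CREATE TABLE m1 (c1 INT NOT NULL, UNIQUE KEY (c1))
    ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=LAST;
  INSERT INTO t1 VALUES (1);
  -- The duplicate lives in t1 while the insert targets t2; fetching the
  -- duplicate key information through the MERGE table used to crash:
  REPLACE INTO m1 VALUES (1);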
an assertion in a debug build.
The reason is that the C API doesn't support multiple result sets for prepared
statements, and attempting to execute a stored routine which returns multiple
result sets sometimes leads to a network error. The network error sets the
diagnostics area prematurely, which later leads to the assert when an attempt
is made to set a second server state.
This patch fixes the issue by changing the scope of the error code returned by
sp_instr_stmt::execute() to include any error which happened during the
execution. To ensure that Diagnostic_area::is_sent really means that the
message was sent, all network-related functions are checked for their return
status.
Those keywords do nothing in 5.1 (they are meant for future versions, for
example featuring the Maria engine), so they are removed from the syntax here.
Adding those keywords to future versions when needed is covered by:
- WL#5034 "Add TRANSACTIONA=0|1 and PAGE_CHECKSUM=0|1 clauses to CREATE TABLE"
- WL#5037 "New ROW_FORMAT value for CREATE TABLE: PAGE"
One of the tests introduced for this bug was failing
because of a path size restriction on Windows.
Moved the test case to a new test which is disabled on Windows.
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in a case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an 'ok'
nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or to an
assertion failure in 5.1.
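A hypothetical way to provoke this (the value and table names are made up;
the idea is to make the implicit MyISAM table's MAX_ROWS very small so the
conversion target overflows):

  SET GLOBAL myisam_data_pointer_size = 2;  -- tiny MAX_ROWS for new tables
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  -- The GROUP BY forces an internal temporary table; once it is converted
  -- from HEAP to MyISAM it can hit HA_ERR_RECORD_FILE_FULL:
  INSERT IGNORE INTO t1
    SELECT b1.a FROM big b1, big b2 GROUP BY b1.a, b2.a;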
Problem was that a failing rename just left the partitions in the state
they were in at the time of the failure.
Solution was to try to revert the started rename if a failure occurred.
> ------------------------------------------------------------
> revno: 2792
> revision-id: sergey.glukhov@sun.com-20090703083500-jq8vhw0tqr37j7te
> parent: bernt.johnsen@sun.com-20090703083610-o7l4s8syz05rc4w0
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Fri 2009-07-03 13:35:00 +0500
> message:
> Bug#45806 crash when replacing into a view with a join!
> The crash happened because for views which are joins
> we have table_list->table == 0, and any
> table_list->table->'method' call leads to a crash.
> The fix is to call the table_list->table->file->extra()
> method for all tables belonging to the view.
> ------------------------------------------------------------
> revno: 2772
> revision-id: joro@sun.com-20090615133815-eb007p5793in33p5
> parent: joro@sun.com-20090612140659-4hj1tta9p8wvcw4k
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B44810-5.0-bugteam
> timestamp: Mon 2009-06-15 16:38:15 +0300
> message:
> Bug #44810: index merge and order by with low sort_buffer_size
> crashes server!
>
> The problem affects the scenario when index merge is followed by a filesort
> and the sort buffer is not big enough for all the sort keys.
> In this case the filesort function will read the data to the end through the
> index merge quick access method (and thus closing the cursor etc),
> but will leave the pointer to the quick select method in place.
> It will then create a temporary file to hold the results of the filesort and
> will add it as a sort output file (in sort.io_cache).
> Note that filesort will copy the original 'sort' structure into an automatic
> variable and restore it after it's done.
> As a result, at exit from filesort() we have sort.io_cache filled in and
> nothing else (a consequence of the cursors being closed at the end of reading
> data through index merge).
> Now create_sort_index() will note that there is a select and will clean it up
> (as it has already been used by filesort() to read the data in). While doing
> that, a special case in the index merge destructor will clean up the
> sort.io_cache, assuming it's an output of the index merge method and is not
> needed anymore.
> As a result the code that tries to read the data back from the filesort output
> will find no data in either memory or on disk and will crash.
>
> Fixed similarly to how filesort() does it : by copying the sort.io_cache structure
> to a local variable, removing the pointer to the io_cache (so that it's not freed
> by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
> structure (together with the valid pointer) after the cleanup is done.
> This is a safe thing to do because all the structures are already cleaned up by
> hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
> and the cleanup code being written in a way that tolerates repeating cleanups.
> ------------------------------------------------------------
> revno: 2763
> revision-id: sergey.glukhov@sun.com-20090602063813-33mh88cz5vpa2jqe
> parent: alexey.kopytov@sun.com-20090601124224-zgt3yov9wou590e9
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Tue 2009-06-02 11:38:13 +0500
> message:
> Bug#45152 crash with round() function on longtext column in a derived table
> The crash happens due to a wrong max_length value which is set at the
> Item_func_round::fix_length_and_dec() stage. The value is set to
> args[0]->max_length, which is too big in the case of LONGTEXT(LONGBLOB)
> fields. The fix is to set max_length using the float_length() function.
> ------------------------------------------------------------
> revno: 2733
> revision-id: gshchepa@mysql.com-20090430192037-9p1etcynkglte2j3
> parent: aelkin@mysql.com-20090430143246-zfqaz0t7uoluzdz2
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Fri 2009-05-01 00:20:37 +0500
> message:
> Bug #37362: Crash in do_field_eq
>
> EXPLAIN EXTENDED of a nested query containing an error:
>
> 1054 Unknown column '...' in 'field list'
>
> may cause a server crash.
>
>
> A parse error like the one described above forces a call to
> JOIN::destroy() on the malformed subquery.
> That JOIN::destroy function closes and frees temporary
> tables. However, temporary fields of these tables
> may be listed in the st_select_lex::group_list of the outer
> query, and that st_select_lex may not clean them up
> properly. So, after the JOIN::destroy call, that
> st_select_lex::group_list may have Item_field
> objects with dangling pointers to freed temporary
> table Field objects. That caused a crash.
When an item is moved to the upper select during optimization,
the item's context is left unchanged. This caused wrong results in
PS/SP mode.
Item_ident::remove_dependence_processor now sets the context
to that of the select to which the item is moved.
In a subselect, all fields from outer selects are marked as dependent on
the selects they belong to. In some cases the optimizer substitutes such a
subselect with an equivalent expression. For example "a_field IN (SELECT
outer_field)" is substituted with "a_field = outer_field". As we moved the
outer_field to the upper select, it's not really outer anymore, but it was
left marked as outer. If an index exists over a_field, the optimizer chose
a wrong execution plan and thus returned a wrong result.
Now the Item_in_subselect::single_value_transformer function removes the
dependent marking from fields when a subselect is optimized away.
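A hypothetical query shape illustrating the transform (names are made up):

  SELECT * FROM t1
   WHERE t1.a IN (SELECT t2.a FROM t2
                  WHERE t2.b IN (SELECT t1.b));
  -- The innermost 'SELECT t1.b' is optimized away into 't2.b = t1.b',
  -- so t1.b must no longer be marked as an outer field.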
table
The MERGE table storage engine does not support the HA_CAN_SQL_HANDLER feature,
and any attempt to open the MERGE table will fail with ER_ILLEGAL_HA.
After an error occurred, the tables that were opened must be closed again,
or they will be left in an inconsistent state. However, the assumption
made in the code for closing and registering handler tables was that only
one table would be opened, and this is not true for MERGE tables, which
cause multiple tables to be opened.
The next time a SELECT operation was issued on the MERGE table it
caused the system to freeze.
This patch fixes this issue by making sure that all tables which
are opened are also closed in the event of an error.
match against.
Server crashes when executing a prepared statement with duplicate
MATCH() function calls in its SELECT and ORDER BY expressions, e.g.:
SELECT MATCH(a) AGAINST('test') FROM t1 ORDER BY MATCH(a) AGAINST('test')
This query gets optimized by the server, so the value returned
by MATCH() from the SELECT list is reused for ORDER BY purposes.
To make this optimization the server compares items from the
SELECT and ORDER BY lists. We were getting a server crash because
the comparison function for the MATCH() item is not intended to be
called at this point of execution.
In 5.0 and 5.1 this problem is worked around by resetting the MATCH()
item to the state it was in during PREPARE.
In 6.0 a correct comparison function will be implemented, and
duplicate MATCH() items from the ORDER BY list will be
optimized away.
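For example, wrapping the query above in a prepared statement (a table t1
with a FULLTEXT index on column a is assumed):

  PREPARE s FROM 'SELECT MATCH(a) AGAINST(''test'') FROM t1
                  ORDER BY MATCH(a) AGAINST(''test'')';
  EXECUTE s;
  EXECUTE s;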
without error
When using quick access methods for searching rows in UPDATE or
DELETE, there was no check whether a fatal error had already been sent
to the client while evaluating the quick condition.
As a result a false OK (following the error) was sent to the
client, and the error was thus transformed into a warning.
Fixed by checking for errors sent to the client during
SQL_SELECT::check_quick() and treating them as real errors.
Fixed a wrong test case in group_min_max.test
Fixed a wrong return code in mysql_update() and mysql_delete()