Problem 1:
When the 'Using index' optimization is used, the optimizer may still - after
cost-based optimization - decide to use another index in order to avoid using
a temporary table. But when this happened, the flag telling the storage engine
to read the index only (not the table) was still set. Fixed by resetting the
flag in the storage engine and in the TABLE structure in the above scenario,
unless the new index allows for the same optimization.
Problem 2:
When the 'ref' access method was employed by the cost-based optimizer (i.e. when
the column is non-NULLable), it was assumed that the 'quick' access method needed
no initialization (since it is based on a range scan). When the ORDER BY
optimization overrides the decision, however, it expects 'quick' to have been
initialized, and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when 'ref'
access is used.
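As a hedged illustration only (the table, columns, and indexes here are
assumptions, not taken from the actual test case), a query hitting Problem 1
could have this shape:
  CREATE TABLE t1 (a INT NOT NULL, b INT, KEY k_a (a), KEY k_b (b));
  -- The optimizer may first pick the covering index k_a for the WHERE
  -- clause ('Using index'), then switch to k_b to avoid a temporary
  -- table/filesort for the ORDER BY. Since k_b does not cover the query,
  -- the index-only-read flag must be cleared again.
  SELECT a FROM t1 WHERE a BETWEEN 1 AND 10 ORDER BY b;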
mysql-test/r/order_by.result:
Bug#46454: Test result.
mysql-test/t/order_by.test:
Bug#46454: Test case.
sql/sql_select.cc:
Bug#46454:
Problem 1 fixed in make_join_select()
Problem 2 fixed in test_if_skip_sort_order()
sql/table.h:
Bug#46454: Added comment to field.
when a partition is reorganized.
Problem was that table->timestamp_field_type was not changed
before copying rows between partitions.
Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing
in fast_alter_partition_table(), so that all if-branches are covered.
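A hedged sketch of the scenario (the column and partition names are made up
for illustration):
  CREATE TABLE t1 (
    a INT,
    ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  ) PARTITION BY RANGE (a)
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t1 (a) VALUES (1), (20);
  -- Copying rows between partitions must not auto-update 'ts';
  -- before the fix the copied rows could get a fresh timestamp.
  ALTER TABLE t1 REORGANIZE PARTITION p0, p1 INTO
    (PARTITION q0 VALUES LESS THAN (15),
     PARTITION q1 VALUES LESS THAN MAXVALUE);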
column on partitioned table
The assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if a query
is executed using an index containing a double column on a partitioned table.
The problem is that the assertion expects all the fields which are read
to be in the read_set.
In this query only the field 'a' is in the read_set, as the tables in
the query are joined by the field 'a', and so the assertion fails when it
expects the other field 'b'.
Since the function cmp() just compares the two parameters passed to it,
the assertion is not required.
Fixed by removing the assertion in the double field comparison
function, and also fixed the index initialization to do an ordered
index scan with an RW lock, which ensures all the fields from a key are in
the read_set.
Note: this bug is not reproducible with other datatypes because the
assertion does not exist in the comparison functions for other
datatypes.
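A hedged illustration of the failing shape (all names are assumptions, and
whether the index is actually chosen depends on the data):
  CREATE TABLE t1 (a INT, b DOUBLE, KEY k (b, a))
    PARTITION BY HASH (a) PARTITIONS 2;
  CREATE TABLE t2 (a INT);
  -- Only 'a' is needed by the join, so only 'a' ends up in the read_set,
  -- while the ordered index scan over (b, a) still compares 'b' values
  -- across partitions, which tripped the assertion in Field_double::cmp().
  SELECT t1.a FROM t1, t2 WHERE t1.a = t2.a;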
mysql-test/r/partition.result:
Testcase for BUG#45816
mysql-test/t/partition.test:
Testcase for BUG#45816
sql/field.cc:
Removed the assertion ASSERT_COLUMN_MARKED_FOR_READ in the Field_double::cmp()
function.
sql/ha_partition.cc:
Fixed the index_init() method to make it initialize the read_set properly if
an ordered index scan with an RW lock is requested.
The install procedure does not copy *.inc files located under the mysql-test/t directory.
Therefore, this patch moves rpl_trigger.inc to the mysql-test/include directory.
The test case fails sporadically on Windows while trying to overwrite an unused
binary log. The problem stems from the fact that MySQL on Windows does not
immediately unlock/release a file while the process that opened and closed it is
still running. In BUG#38603, this issue was circumvented by stopping the MySQL
process, copying the file and then restarting the MySQL process.
Unfortunately, such facilities are not available in 5.0. Other approaches,
such as stopping the slave and issuing CHANGE MASTER, do not work because the
relay log file and index are not closed when a slave is stopped. So to fix the
problem, we simply do not run the failing part of the test on Windows.
engine to the partition_csv test. Also remove a test case that was
duplicated. Fix the connection procedure with the embedded server.
mysql-test/r/partition.result:
Update test case result.
mysql-test/r/partition_csv.result:
Update test case result.
mysql-test/t/partition.test:
Move test cases to the partition_csv test.
mysql-test/t/partition_csv.test:
Move tests from partition.test and remove duplicate.
Tweaked the connection procedure to work with embedded.
when used with --tab
1) New syntax: added CHARACTER SET clause to the
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE).
mysqldump is updated to use this in --tab mode.
2) The ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument, however SELECT .. INTO OUTFILE
silently ignored the rest of multi-character arguments.
For symmetry with LOAD DATA INFILE, the
server has been modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) Currently LOAD DATA INFILE recognizes field/line separators
"as is", without converting from the client charset to the data
file charset. So it is assumed that the input file of
LOAD DATA INFILE consists of data in one charset and
separators in another charset. For compatibility with
that [buggy] behaviour the SELECT INTO OUTFILE implementation
has been kept "as is" too, but a new warning message
has been added:
Non-ASCII separator arguments are not fully supported
This message warns about field/line separators that contain
non-ASCII symbols.
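For example, the new clause can be used like this (the file name and table are
placeholders); mysqldump --tab now generates statements of this form using the
--default-character-set value:
  SELECT a, b FROM t1
    INTO OUTFILE '/tmp/t1.txt'
    CHARACTER SET utf8
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';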
client/mysqldump.c:
mysqldump has been updated to call SELECT ... INTO OUTFILE
statement with a charset from the --default-charset command
line parameter.
mysql-test/r/mysqldump.result:
Added test case for bug #30946.
mysql-test/r/outfile_loaddata.result:
Added test case for bug #30946.
mysql-test/t/mysqldump.test:
Added test case for bug #30946.
mysql-test/t/outfile_loaddata.test:
Added test case for bug #30946.
sql/field.cc:
String conversion code has been moved from check_string_copy_error()
to convert_to_printable() for reuse.
sql/share/errmsg.txt:
New WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED message has been added.
sql/sql_class.cc:
The select_export::prepare() method has been modified to:
1) raise the ER_WRONG_FIELD_TERMINATORS error on multi-character
ENCLOSED BY/ESCAPED BY field arguments, like LOAD DATA INFILE does;
2) warn with a new WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED
message on non-ASCII field or line separators.
The select_export::send_data() method has been modified to
convert item data to the output charset (see the new SELECT INTO OUTFILE
syntax). By default the BINARY charset is used for backward
compatibility.
sql/sql_class.h:
The select_export::write_cs field has been added to keep the output
charset.
sql/sql_load.cc:
mysql_load has been modified to warn on non-ASCII field or
line separators with a new WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED
message.
sql/sql_string.cc:
New global function has been added: convert_to_printable()
(common code has been moved from check_string_copy_error()).
sql/sql_string.h:
New String::is_ascii() method and new global convert_to_printable()
function have been added.
sql/sql_yacc.yy:
New syntax: added CHARACTER SET clause to the
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE). By default the BINARY charset is used for
backward compatibility.
We disallow the partitioning of a log table. You could however
partition a table first, and then point logging to it. This is
not only against the docs, it also crashes the server.
We catch this case now.
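A hedged sketch of the sequence involved (the exact steps in the original
report may differ; thread_id is a column of the 5.1 general_log table):
  SET GLOBAL general_log = 'OFF';
  ALTER TABLE mysql.general_log ENGINE = MyISAM;
  ALTER TABLE mysql.general_log PARTITION BY HASH (thread_id) PARTITIONS 2;
  -- Before the fix, switching logging back on to the now-partitioned
  -- table could crash the server; this case is now caught and refused.
  SET GLOBAL general_log = 'ON';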
mysql-test/r/partition.result:
results for 40281
mysql-test/t/partition.test:
test for 40281: show that trying to log to a partitioned table fails rather
than crashing the server.
sql/ha_partition.cc:
Signal that we no longer support logging to partitioned tables,
as per the docs.
sql/sql_partition.cc:
Some commands like "USE ..." have no select, yet we may try
to parse partition info after their execution if the user has set a
partitioned table as the log target. This shouldn't lead to a
NULL-deref/crash.
A REPLACE in the MERGE engine is actually a REPLACE
into one (FIRST or LAST) of the underlying MyISAM
tables. So in effect the server works on the meta
data of the MERGE table, while the real insert happens
in the MyISAM table.
The MERGE table has no index, while the MyISAM tables have a
unique index. When a REPLACE into a MERGE table is done
(and the REPLACE conflicts with a duplicate in a
child table), we try to access the duplicate
key information for the MERGE table. This information
does not actually exist, hence this results in a crash.
The problem can be resolved by modifying the MERGE
engine to provide us the duplicate key information
directly, instead of just returning the MyISAM index
number as the error key. Then the SQL layer (or "the
server") does not try to access the key_info of the
MERGE table, which does not exist.
The current patch modifies the MERGE engine to provide
the position for a record where a unique key violation
occurs.
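An illustrative reproduction shape (table names are placeholders), matching the
description above: the children carry the unique key, the MERGE table itself
declares none:
  CREATE TABLE c1 (a INT NOT NULL, UNIQUE KEY (a)) ENGINE=MyISAM;
  CREATE TABLE c2 (a INT NOT NULL, UNIQUE KEY (a)) ENGINE=MyISAM;
  CREATE TABLE m (a INT NOT NULL)
    ENGINE=MERGE UNION=(c1, c2) INSERT_METHOD=LAST;
  INSERT INTO c2 VALUES (1);
  -- The REPLACE hits the duplicate in child table c2; looking up the
  -- duplicate key information on the MERGE table (which has no keys)
  -- used to crash the server.
  REPLACE INTO m VALUES (1);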
include/myisammrg.h:
Bug#45800 crash when replacing into a merge table and there is a duplicate
Add a member to the st_mymerge_info structure that will
store the duplicate key offset in the MERGE table. This
offset will be the sum of the record offset of the MyISAM
table within the MERGE table and the offset of the record
within the MyISAM table.
mysql-test/r/merge.result:
Bug#45800 crash when replacing into a merge table and there is a duplicate
Result file for the test case.
mysql-test/t/merge.test:
Bug#45800 crash when replacing into a merge table and there is a duplicate
Added test case for both REPLACE and INSERT...ON DUPLICATE UPDATE.
storage/myisammrg/ha_myisammrg.cc:
Bug#45800 crash when replacing into a merge table and there is a duplicate
The info method now will process the HA_STATUS_ERRKEY flag
and will return the index and the offset of the duplicate
key.
storage/myisammrg/ha_myisammrg.h:
Bug#45800 crash when replacing into a merge table and there is a duplicate
Set the HA_DUPLICATE_POS flag to indicate that the duplicate
key information is now available in the MERGE storage engine.
storage/myisammrg/myrg_info.c:
Bug#45800 crash when replacing into a merge table and there is a duplicate
We modify the myrg_status function to return the position of the
duplicate key. The duplicate key position in the MERGE table will
be the MyISAM file_offset and the offset within the MyISAM table
of the start position of the records.
an assertion in a debug build.
The reason is that the C API doesn't support multiple result sets for prepared
statements, and attempting to execute a stored routine which returns multiple
result sets sometimes leads to a network error. The network error sets the
diagnostics area prematurely, which later leads to the assert when an attempt is
made to set a second server state.
This patch fixes the issue by changing the scope of the error code returned by
sp_instr_stmt::execute() to include any error which happened during the execution.
To ensure that Diagnostics_area::is_sent really means that the message was sent,
all network-related functions are checked for their return status.
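A hedged sketch of the kind of routine involved (the name is a placeholder);
executing it as a prepared statement through the C API is what could end in the
network error described above:
  DELIMITER //
  CREATE PROCEDURE p_two_results()
  BEGIN
    SELECT 1;  -- first result set
    SELECT 2;  -- second result set, not supported by the C API for
               -- prepared statements
  END//
  DELIMITER ;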
libmysqld/lib_sql.cc:
* Changed prototype to return success/failure status on net_send_error_packet(),
net_send_ok(), net_send_eof(), write_eof_packet().
mysql-test/r/sp_notembedded.result:
* Added test case for bug 44521
mysql-test/t/sp_notembedded.test:
* Added test case for bug 44521
sql/protocol.cc:
* Changed prototype to return success/failure status on net_send_error_packet(),
net_send_ok(), net_send_eof(), write_eof_packet().
sql/protocol.h:
* Changed prototype to return success/failure status on net_send_error_packet(),
net_send_ok(), net_send_eof(), write_eof_packet().
sql/sp_head.cc:
* Changed prototype to return success/failure status on net_send_error_packet(),
net_send_ok(), net_send_eof(), write_eof_packet().
those keywords do nothing in 5.1 (they are meant for future versions, for example ones featuring the Maria engine),
so they are removed from the syntax here. Adding those keywords back in future versions, when needed, is covered by:
- WL#5034 "Add TRANSACTIONAL=0|1 and PAGE_CHECKSUM=0|1 clauses to CREATE TABLE"
- WL#5037 "New ROW_FORMAT value for CREATE TABLE: PAGE"
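For example, statements like the following are now rejected with a syntax error
instead of being accepted:
  CREATE TABLE t1 (a INT) TRANSACTIONAL=1;
  CREATE TABLE t2 (a INT) PAGE_CHECKSUM=1;
  CREATE TABLE t3 (a INT) ROW_FORMAT=PAGE;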
mysql-test/r/create.result:
test that syntax is not accepted
mysql-test/t/create.test:
test that syntax is not accepted
sql/handler.cc:
remove ROW_FORMAT=PAGE
sql/handler.h:
Mark unused objects, but I don't remove them for fear of breaking any plugin which includes this file
(see also table.h)
sql/lex.h:
removing syntax
sql/sql_show.cc:
removing output of noise keywords in SHOW CREATE TABLE and INFORMATION_SCHEMA.TABLES
sql/sql_table.cc:
removing TRANSACTIONAL
sql/sql_yacc.yy:
removing syntax
sql/table.cc:
removing TRANSACTIONAL, PAGE_CHECKSUM. Their place in the frm file is not reclaimed,
for compatibility with older 5.1.
sql/table.h:
Mark unused objects, but I don't remove them for fear of breaking any plugin which includes this file
(and there are several engines which use the content of TABLE_SHARE and thus rely on a certain binary
layout of this structure).
consumption (CPU) of upgrading a large log table can be intense.
Therefore, truncate the general_log table beforehand when running
the mysql_upgrade test with Valgrind.
mysql-test/t/mysql_upgrade.test:
Truncate log table if running with Valgrind.
One of the tests introduced for this bug was failing
because of a path size restriction on Windows.
Moved the test case to a new test which is disabled on Windows.
mysql-test/r/partition_not_embedded.result:
updated test results after removing a test case.
mysql-test/r/partition_rename_longfilename.result:
Test result for partition_rename_longfilename
mysql-test/t/partition_not_embedded.test:
Removed the test case which renames a partitioned table such that
the file name is 255 characters long.
mysql-test/t/partition_rename_longfilename.test:
Test case for renaming a partitioned table such that
the file name is 255 characters long.
Moved from partition_not_embedded.
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in a case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an 'ok'
nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or an
assertion failure in 5.1.
mysql-test/r/insert_select.result:
Added a test case for bug #46075.
mysql-test/t/insert_select.test:
Added a test case for bug #46075.
sql/sql_select.cc:
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Problem was that a failing rename just left the partitions in the state
they were in at the time of the failure.
Solution was to try to revert the started rename if a failure occurred.
mysql-test/r/partition_not_embedded.result:
Bug#30102: Rename table does corrupt tables with partition files on failure
New result file
mysql-test/t/partition_not_embedded.test:
Bug#30102: Rename table does corrupt tables with partition files on failure
New test file
(list_files does not report the files in embedded)
sql/ha_partition.cc:
Bug#30102: Rename table does corrupt tables with partition files on failure
Better error handling when renaming partitions (the started rename
operation is reverted).
Files are now deleted in a different order.
sql/handler.cc:
Bug#30102: Rename table does corrupt tables with partition files on failure
Tries to remove as many table files as possible
if the first delete succeeds.
> ------------------------------------------------------------
> revno: 2792
> revision-id: sergey.glukhov@sun.com-20090703083500-jq8vhw0tqr37j7te
> parent: bernt.johnsen@sun.com-20090703083610-o7l4s8syz05rc4w0
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Fri 2009-07-03 13:35:00 +0500
> message:
> Bug#45806 crash when replacing into a view with a join!
> The crash happend because for views which are joins
> we have table_list->table == 0 and
> table_list->table->'any method' call leads to crash.
> The fix is to perform table_list->table->file->extra()
> method for all tables belonging to view.
> ------------------------------------------------------------
> revno: 2772
> revision-id: joro@sun.com-20090615133815-eb007p5793in33p5
> parent: joro@sun.com-20090612140659-4hj1tta9p8wvcw4k
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B44810-5.0-bugteam
> timestamp: Mon 2009-06-15 16:38:15 +0300
> message:
> Bug #44810: index merge and order by with low sort_buffer_size
> crashes server!
>
> The problem affects the scenario when index merge is followed by a filesort
> and the sort buffer is not big enough for all the sort keys.
> In this case the filesort function will read the data to the end through the
> index merge quick access method (and thus closing the cursor etc),
> but will leave the pointer to the quick select method in place.
> It will then create a temporary file to hold the results of the filesort and
> will add it as a sort output file (in sort.io_cache).
> Note that filesort will copy the original 'sort' structure in an automatic
> variable and restore it after it's done.
> As a result at exiting filesort() we have a sort.io_cache filled in and
> nothing else (as a result of close of the cursors at end of reading data
> through index merge).
> Now create_sort_index() will note that there is a select and will clean it up
> (as it's been used already by filesort() reading the data in). While doing that
> a special case in the index merge destructor will clean up the sort.io_cache,
> assuming it's an output of the index merge method and is not needed anymore.
> As a result the code that tries to read the data back from the filesort output
> will get no data in both memory and disk and will crash.
>
> Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
> to a local variable, removing the pointer to the io_cache (so that it's not freed
> by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
> structure (together with the valid pointer) after the cleanup is done.
> This is a safe thing to do because all the structures are already cleaned up by
> hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
> and the cleanup code being written in a way that tolerates repeating cleanups.
> ------------------------------------------------------------
> revno: 2763
> revision-id: sergey.glukhov@sun.com-20090602063813-33mh88cz5vpa2jqe
> parent: alexey.kopytov@sun.com-20090601124224-zgt3yov9wou590e9
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Tue 2009-06-02 11:38:13 +0500
> message:
> Bug#45152 crash with round() function on longtext column in a derived table
> The crash happens due to a wrong max_length value which is set at the
> Item_func_round::fix_length_and_dec() stage. The value is set to
> args[0]->max_length which is too big in case of LONGTEXT(LONGBLOB) fields.
> The fix is to set max_length using float_length() function.
> ------------------------------------------------------------
> revno: 2733
> revision-id: gshchepa@mysql.com-20090430192037-9p1etcynkglte2j3
> parent: aelkin@mysql.com-20090430143246-zfqaz0t7uoluzdz2
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Fri 2009-05-01 00:20:37 +0500
> message:
> Bug #37362: Crash in do_field_eq
>
> EXPLAIN EXTENDED of a nested query containing an error:
>
> 1054 Unknown column '...' in 'field list'
>
> may cause a server crash.
>
>
> A parse error like the one described above forces a call to
> JOIN::destroy() on the malformed subquery.
> That JOIN::destroy function closes and frees temporary
> tables. However, temporary fields of these tables
> may be listed in the st_select_lex::group_list of the outer
> query, and that st_select_lex may not clean them up
> properly. So, after the JOIN::destroy call, that
> st_select_lex::group_list may contain Item_field
> objects with dangling pointers to freed temporary
> table Field objects. That caused a crash.
When an item is moved to the upper select during optimization,
the item's context is left unchanged. This caused wrong results in
PS/SP mode.
The Item_ident::remove_dependence_processor now sets the context
to that of the select to which the item is moved.
mysql-test/r/subselect.result:
The test case for the bug#46051 is adjusted.
mysql-test/t/subselect.test:
The test case for the bug#46051 is adjusted.
sql/item.cc:
Bug#46051: Incorrectly marked field caused wrong result.
The Item_ident::remove_dependence_processor now sets the context
to that of the select to which the item is moved.
In a subselect, all fields from outer selects are marked as dependent on the
selects they belong to. In some cases the optimizer substitutes such a subselect
with an equivalent expression. For example "a_field IN (SELECT outer_field)" is
substituted with "a_field = outer_field". As the outer_field is moved to the
upper select, it is not really outer anymore, but it was left marked as outer.
If an index exists over a_field, the optimizer could choose a wrong execution
plan and thus return a wrong result.
Now the Item_in_subselect::single_value_transformer function removes the
dependent marking from fields when a subselect is optimized away.
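A hedged illustration of the transformation (tables, columns, and data are
assumptions): the innermost subselect returns only a field of an outer select,
so the IN predicate is rewritten into an equality, and the moved field must
lose its dependent/outer marking:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT, KEY (b));
  -- "t2.b IN (SELECT t1.a)" is substituted with "t2.b = t1.a"; with the
  -- stale 'dependent' marking and the index on t2.b, a wrong plan (and
  -- a wrong result) could be produced.
  SELECT a FROM t1
  WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.b IN (SELECT t1.a));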
mysql-test/r/subselect.result:
Added a test case for the bug#46051.
mysql-test/t/subselect.test:
Added a test case for the bug#46051.
sql/item_subselect.cc:
Bug#46051: Incorrectly marked field caused wrong result.
Now the Item_in_subselect::single_value_transformer function removes the
dependent marking from fields when a subselect is optimized away.
table
The MERGE table storage engine does not support the HA_CAN_SQL_HANDLER feature,
so any attempt to open a MERGE table via HANDLER fails with ER_ILLEGAL_HA.
After an error has occurred, the tables that were opened must be closed again,
or they will be left in an inconsistent state. However, the assumption
made in the code for closing and registering handler tables was that only
one table would be opened, and this is not true for MERGE tables, which
cause multiple tables to be opened.
The next time a SELECT operation was issued on the merge table, it
caused the system to freeze.
This patch fixes the issue by making sure that all tables which
are opened are also closed in the event of an error.
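A hedged sketch of the sequence described above (names are placeholders):
  CREATE TABLE c1 (a INT) ENGINE=MyISAM;
  CREATE TABLE c2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m (a INT) ENGINE=MERGE UNION=(c1, c2);
  -- Fails with ER_ILLEGAL_HA; before the fix the child tables were left
  -- half-opened in this session.
  HANDLER m OPEN;
  -- This follow-up statement then used to freeze.
  SELECT * FROM m;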
mysql-test/r/merge.result:
Added test case for bug 45781
mysql-test/t/merge.test:
Added test case for bug 45781
sql/sql_handler.cc:
* mysql_ha_open() was never meant to open more than one table. If we encounter more tables, we should
close all tables related to the current substatement and raise an exception.
match against.
Server crashes when executing a prepared statement with duplicate
MATCH() function calls in the SELECT and ORDER BY expressions, e.g.:
SELECT MATCH(a) AGAINST('test') FROM t1 ORDER BY MATCH(a) AGAINST('test')
This query gets optimized by the server, so the value returned
by MATCH() from the SELECT list is reused for ORDER BY purposes.
To apply this optimization the server compares items from the
SELECT and ORDER BY lists. We were getting a server crash because the
comparison function for the MATCH() item is not intended to be called
at this point of execution.
In 5.0 and 5.1 this problem is worked around by resetting the MATCH()
item to the state it was in during PREPARE.
In 6.0 a correct comparison function will be implemented and
duplicate MATCH() items in the ORDER BY list will be
optimized away.
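The crash shows up when such a query is executed (and re-executed) as a
prepared statement, e.g. (table name and data are illustrative):
  CREATE TABLE t1 (a VARCHAR(200), FULLTEXT KEY (a)) ENGINE=MyISAM;
  PREPARE stmt FROM
    'SELECT MATCH(a) AGAINST(''test'') FROM t1
     ORDER BY MATCH(a) AGAINST(''test'')';
  EXECUTE stmt;
  -- The next execution reuses the cleaned-up MATCH() item; this is
  -- where the crash occurred before the fix.
  EXECUTE stmt;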
mysql-test/r/fulltext.result:
Updated with the test case for Bug#37740
mysql-test/t/fulltext.test:
A test case for Bug#37740.
sql/item_func.h:
True initialization of 'table' happens in ::fix_fields(). As
Item_func_match::eq() may be called before ::fix_fields(), it is
expected that 'table' is initialized to 0 when it is reused.
This mostly affects prepared statements, where the same item
doesn't get destroyed, but is rather cleaned up and reused.