- Changed to still use bcmp() in certain cases (see the sketch after this list) because it is:
- Faster than memcmp() for short unaligned strings
- Better behaved when running under valgrind
- Changed to use my_sprintf() instead of sprintf() for better portability on old systems
- Changed code to use MariaDB version of select->skip_record()
- Removed -%::SCCS/s.% from the Makefile.am files to silence automake warnings
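Below is a minimal sketch of the comparison pattern the first item refers to. buffers_equal() and buffers_order() are illustrative names rather than functions from the MariaDB source, and the comments restate the reasoning above (short-string speed, valgrind friendliness) rather than independently measured facts.

  #include <cstddef>
  #include <cstring>     // memcmp()
  #include <strings.h>   // bcmp() (POSIX)

  // Equality-only check: bcmp() only reports whether the buffers
  // differ, which per the notes above is faster for short unaligned
  // strings and behaves better under valgrind than a three-way compare.
  static inline bool buffers_equal(const void *a, const void *b, size_t len)
  {
    return bcmp(a, b, len) == 0;
  }

  // Ordering check: memcmp() is still used where the sign of the
  // result (<0, 0, >0) actually matters.
  static inline int buffers_order(const void *a, const void *b, size_t len)
  {
    return memcmp(a, b, len);
  }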
to allow temp table operations) -- prerequisite patch #1.
Move the piece of code that initializes a TABLE instance
after it has been successfully opened into a separate function.
This function will be reused in the following patches.
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour
BUG#47132, BUG#47442, BUG#49494, BUG#23992 and BUG#48814 will disappear
automatically after this patch.
BUG#55617 is fixed by this patch too.
This is the 5.5 part.
It implements:
- A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement will not insert
or binlog anything if the table already exists; it only generates a
warning that the table already exists (see the sketch after this list).
- A couple of test cases for the behavior change.
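A hedged, self-contained sketch of the behaviour change described above; the enum and function below are illustrative stand-ins, not the server's actual types or call chain.

  #include <cstdio>
  #include <string>

  // Illustrative outcome codes; not server error codes.
  enum CreateSelectResult { CREATED_AND_FILLED, SKIPPED_WITH_WARNING, FAILED };

  // CREATE TABLE IF NOT EXISTS ... SELECT after the patch: if the table
  // already exists and IF NOT EXISTS was given, only a warning is
  // produced -- no rows are inserted and nothing is binlogged.
  CreateSelectResult create_select(bool table_exists, bool if_not_exists,
                                   const std::string &table_name)
  {
    if (table_exists)
    {
      if (if_not_exists)
      {
        std::fprintf(stderr, "Note: Table '%s' already exists\n",
                     table_name.c_str());
        return SKIPPED_WITH_WARNING;   // skip both INSERT and binlogging
      }
      return FAILED;                   // plain CREATE ... SELECT: hard error
    }
    // Table did not exist: create it, insert the SELECT result and
    // binlog the statement as before (not shown here).
    return CREATED_AND_FILLED;
  }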
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour
BUG#47132, BUG#47442, BUG49494, BUG#23992 and BUG#48814 will disappear
automatically after the this patch.
BUG#55617 is fixed by this patch too.
This is the 5.5 part.
It implements:
- 'CREATE TABLE IF NOT EXISTS ... SELECT' statement will not insert
anything and binlog anything if the table already exists.
It only generate a warning that table already exists.
- A couple of test cases for the behavior changing.
corruption on ADD PARTITION and LOCK TABLE
Bug#53770: Server crash at handler.cc:2076 on
LOAD DATA after timed out COALESCE PARTITION
5.5 fix for:
Bug#51042: REORGANIZE PARTITION can leave table in an
inconsistent state in case of crash
Needs to be back-ported to 5.1
5.5 fix for:
Bug#50418: DROP PARTITION does not interact with
transactions
The main problem was that non-persistent operations were done
before the metadata lock was taken (53770 + 53676).
Bug 53676 also required keeping the table/partitions open and locked
while copying the data to the new partitions (the resulting order of
operations is sketched below).
Also added thorough tests to catch additional bugs in the ddl_log code,
which could otherwise leave the .frm file and the partitions in an
inconsistent state.
Collapsed patch; includes all fixes requested by the reviewers.
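A rough, hypothetical outline of the ordering the fix enforces in fast_alter_partition_table(); the step functions below are stubs standing in for the real server calls named later in this message (wait_while_table_is_used, the ddl_log writes, the data copy and alter_close_tables).

  // Stub steps; each returns true on error, as the server's helpers do.
  static bool take_mdl_and_wait_while_table_is_used() { return false; }
  static bool write_ddl_log_entries()                 { return false; }
  static bool create_and_fill_new_partitions()        { return false; }
  static void close_old_table_instances()             {}
  static bool drop_old_partitions()                   { return false; }

  // Order of operations after the fix: the metadata lock comes before
  // any persistent operation (including the ddl_log entries that would
  // be replayed after a crash), the table and its partitions stay open
  // and locked while data is copied, and they are closed only after the
  // create/read/write phase and before the drop phase.
  bool fast_alter_partition_outline()
  {
    if (take_mdl_and_wait_while_table_is_used())
      return true;                     // error: nothing persistent done yet
    if (write_ddl_log_entries())
      return true;
    if (create_and_fill_new_partitions())
      return true;
    close_old_table_instances();       // alter_close_tables() equivalent
    return drop_old_partitions();
  }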
mysql-test/r/partition_innodb.result:
updated result with new test
mysql-test/suite/parts/inc/partition_crash.inc:
crash test include file
mysql-test/suite/parts/inc/partition_crash_add.inc:
test all states in fast_alter_partition_table
ADD PARTITION branch
mysql-test/suite/parts/inc/partition_crash_change.inc:
test all states in fast_alter_partition_table
CHANGE PARTITION branch
mysql-test/suite/parts/inc/partition_crash_drop.inc:
test all states in fast_alter_partition_table
DROP PARTITION branch
mysql-test/suite/parts/inc/partition_fail.inc:
recovery test including injecting errors
mysql-test/suite/parts/inc/partition_fail_add.inc:
test all states in fast_alter_partition_table
ADD PARTITION branch
mysql-test/suite/parts/inc/partition_fail_change.inc:
test all states in fast_alter_partition_table
CHANGE PARTITION branch
mysql-test/suite/parts/inc/partition_fail_drop.inc:
test all states in fast_alter_partition_table
DROP PARTITION branch
mysql-test/suite/parts/inc/partition_mgm_crash.inc:
include file that runs all crash and failure injection tests.
mysql-test/suite/parts/r/partition_debug_innodb.result:
new test result file
mysql-test/suite/parts/r/partition_debug_myisam.result:
new test result file
mysql-test/suite/parts/r/partition_special_innodb.result:
updated result
mysql-test/suite/parts/r/partition_special_myisam.result:
updated result
mysql-test/suite/parts/t/partition_debug_innodb-master.opt:
opt file for use with the crash tests of partitioned innodb
mysql-test/suite/parts/t/partition_debug_innodb.test:
partitioned innodb test that requires debug builds
mysql-test/suite/parts/t/partition_debug_myisam-master.opt:
opt file for use with the crash tests of partitioned myisam
mysql-test/suite/parts/t/partition_debug_myisam.test:
partitioned myisam test that requires debug builds
mysql-test/suite/parts/t/partition_special_innodb-master.opt:
added innodb-file-per-table to make it easier to verify partition status on disk
mysql-test/suite/parts/t/partition_special_innodb.test:
added test case
mysql-test/suite/parts/t/partition_special_myisam.test:
added test case
mysql-test/t/partition_innodb.test:
added test case
sql/sql_base.cc:
Moved alter_close_tables to sql_partition.cc
sql/sql_base.h:
removed declarations of some non-existent and duplicated functions.
sql/sql_partition.cc:
fast_alter_partition_table:
Split abort_and_upgrade_lock_and_close_table
into its parts (wait_while_table_is_used and
alter_close_tables) and now always call
wait_while_table_is_used before any persistent
operations (including logs, which will be executed
on failure) and alter_close_tables after
create/read/write operations and before
drop operations.
Moved alter_close_tables here from sql_base.cc.
Added error injections for better test coverage.
write_log_final_change_partition:
Fixed a log_entry linking bug (delete_frm was not
linked to change/drop partition).
Also, drop partition must be executed before
change partition (change partition can rename a
partition to an old name, as in REORGANIZE p1 INTO (p1, p2)).
write_log_add_change_partition:
Needs to use drop_frm first, relinking that entry
and reusing its execute entry.
sql/sql_table.cc:
added initialization of next_active_log_entry.
sql/table.h:
removed a duplicate declaration.
This was triggered by innodb.innodb_multi_update, where we had a static-length row without nulls and XtraDB did not fill in the delete-marker byte.
include/my_bitmap.h:
Added prototype for bitmap_union_is_set_all()
mysys/my_bitmap.c:
Added function to check if union of two bit maps covers all bits.
sql/mysql_priv.h:
Updated prototype for compare_record()
sql/sql_insert.cc:
Pass compare_record() a flag indicating whether all fields are used.
sql/sql_select.cc:
Set share->null_bytes_for_compare.
sql/sql_update.cc:
In compare_record(), don't use the fast cmp_record() (which is basically a memcmp()) if we don't know that all fields are used (see the sketch after this file list).
Don't compare the null_bytes if there is no data there.
sql/table.cc:
Store in share->null_bytes_for_compare the number of bytes that hold null or bit fields (but not the delete marker).
Store in can_cmp_whole_record whether we can use memcmp() (assuming all fields are read) to compare rows in compare_record().
sql/table.h:
Added two elements in table->share to speed up checking how updated rows can be compared.
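A simplified, hypothetical sketch of the comparison strategy described for compare_record(); the record layout and field descriptors are stand-ins, not the server's TABLE/Field structures.

  #include <cstddef>
  #include <cstring>
  #include <vector>

  // Minimal stand-in for one column's location inside a record buffer.
  struct FieldRef { size_t offset; size_t length; bool read; };

  // Compare the old and new row images.  When every field was read, a
  // single memcmp() over the record is enough (the server additionally
  // skips the null/bit prefix when null_bytes_for_compare says it
  // carries no data); otherwise compare only the fields actually read.
  bool records_differ(const unsigned char *old_rec,
                      const unsigned char *new_rec,
                      size_t rec_length,
                      bool all_fields_read,
                      const std::vector<FieldRef> &fields)
  {
    if (all_fields_read)
      return memcmp(old_rec, new_rec, rec_length) != 0;   // fast path

    for (const FieldRef &f : fields)                      // field-by-field
      if (f.read &&
          memcmp(old_rec + f.offset, new_rec + f.offset, f.length) != 0)
        return true;
    return false;
  }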
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the mysql server version (e.g. /*!50200 ... */),
is a special comment whose enclosed query is executed only on servers
whose version is higher than the version appearing in the comment.
This leads to a security issue when the slave's version is higher
than the master's: a malicious user can escalate his privileges on
the slave, because the slave SQL thread runs with SUPER privileges
and can therefore execute queries for which he has no privileges on
the master.
This bug is fixed with the logic below (a sketch follows the example):
- Replace '!' with ' ' in the magic comments which were not applied on
the master, so they become ordinary comments and are not applied on
the slave either.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/
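A self-contained sketch of the rewrite rule from the list above. It is not the lexer code itself (which works on the raw query buffer through Lex_input_stream and yyUnput); it only applies the same '!'-to-space substitution to conditional comments whose 5-digit version is above the current server's, i.e. comments that were not applied on the master.

  #include <cctype>
  #include <string>

  // Turn "/*!NNNNN ..." into "/* NNNNN ..." for every conditional
  // comment whose version is greater than server_version, so the
  // skipped part is binlogged as an ordinary comment and cannot be
  // executed by a slave running a higher version.
  std::string neutralize_skipped_conditional_comments(std::string query,
                                                      unsigned long server_version)
  {
    for (std::string::size_type pos = 0;
         (pos = query.find("/*!", pos)) != std::string::npos; pos += 3)
    {
      std::string::size_type digits = pos + 3, end = digits;
      while (end < query.size() && end < digits + 5 &&
             isdigit((unsigned char) query[end]))
        end++;
      if (end != digits + 5)           // not a 5-digit versioned comment
        continue;
      unsigned long version = std::stoul(query.substr(digits, 5));
      if (version > server_version)    // comment was not applied here
        query[pos + 2] = ' ';          // "/*!99999" -> "/* 99999"
    }
    return query;
  }

Applied to the INSERT example above on any 5.x master, /*!10000 ...*/ is left intact and only the '!' of /*!99999 ...*/ is blanked, matching the binlogged form shown.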
mysql-test/suite/rpl/t/rpl_conditional_comments.test:
Test the patch for this bug.
sql/mysql_priv.h:
Renamed inBuf to rawBuf and removed the const limitation.
sql/sql_lex.cc:
Replace '!' with ' ' in the magic comments which were not applied on
the master.
sql/sql_lex.h:
Remove the const limitation on parameter buff, as it can be modified in the function since
this patch.
Added the member function yyUnput to Lex_input_stream. It puts a character back into the query buffer.
sql/sql_parse.cc:
Renamed inBuf to rawBuf and removed the const limitation.
sql/sql_partition.cc:
Remove the const limitation on parameter part_buff, as it can be modified in the function since
this patch.
sql/sql_partition.h:
Remove the const limitation on parameter part_buff, as it can be modified in the function since
this patch.
sql/table.h:
Remove the const limitation on variable partition_info, as it can be modified since
this patch.
TABLES <list> WITH READ LOCK are incompatible".
The problem was that a FLUSH TABLES <list> WITH READ LOCK
statement issued while another connection held the global
read lock (acquired with FLUSH TABLES WITH READ LOCK) was
blocked and had to wait until the global read lock was released.
This issue stemmed from the fact that the FLUSH TABLES <list>
WITH READ LOCK implementation acquired X metadata locks
on the tables to be flushed. Since these locks require the
global IX lock to be acquired first, the statement was
incompatible with the global read lock.
This patch addresses the problem by using SNW metadata locks
for the tables to be flushed by FLUSH TABLES <list> WITH
READ LOCK. It is OK to acquire them without the global IX lock
as long as we never try to upgrade those locks. Since SNW
locks allow concurrent statements to use the same table,
FLUSH TABLES <list> WITH READ LOCK now has to wait, after
acquiring the metadata locks, until the old versions of the
tables to be flushed go away. Since such waiting can lead to
deadlocks, the MDL deadlock detector was extended to take
waits for flushes into account and resolve such deadlocks
(a sketch of the shared wait-edge abstraction follows this
description).
As a bonus, the code in open_tables() which was responsible
for waiting for old versions of tables to go away was
refactored. Now, when we encounter an old version of a table
in open_table(), we don't back off and wait for all old
versions to go away, but instead wait for this particular
table to be flushed. This approach, supported by deadlock
detection, should reduce the number of scenarios in which
FLUSH TABLES aborts concurrent multi-statement transactions.
Note that an active FLUSH TABLES <list> WITH READ LOCK still
blocks a concurrent FLUSH TABLES WITH READ LOCK statement,
as the former keeps tables open and thus prevents the
latter from doing the flush.
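A bare-bones sketch of the shared wait-edge abstraction mentioned above. The class names (Wait_for_edge, MDL_ticket, Flush_ticket, Deadlock_detection_visitor) mirror the ones in this message, but the method name and bodies are placeholders, not the server implementation.

  // The deadlock detector walks a graph whose edges are "waits".  After
  // this patch an edge can represent either a wait for a metadata lock
  // or a wait for a table flush, so both kinds share one interface.
  class Deadlock_detection_visitor;    // forward declaration only

  class Wait_for_edge
  {
  public:
    virtual ~Wait_for_edge() {}
    // Follow this edge during graph traversal; return true if a cycle
    // (deadlock) is found.  Illustrative signature only.
    virtual bool accept_visitor(Deadlock_detection_visitor *visitor) = 0;
  };

  // Waiting for a metadata lock (the pre-existing kind of edge).
  class MDL_ticket : public Wait_for_edge
  {
  public:
    bool accept_visitor(Deadlock_detection_visitor *) override
    { return false; /* would inspect owners of conflicting MDL locks */ }
  };

  // Waiting for old versions of a table to be flushed (the new kind).
  class Flush_ticket : public Wait_for_edge
  {
  public:
    bool accept_visitor(Deadlock_detection_visitor *) override
    { return false; /* would inspect contexts still using the old share */ }
  };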
mysql-test/include/handler.inc:
Adjusted test case after changing status which is set
when FLUSH TABLES waits for tables to be flushed from
"Flushing tables" to "Waiting for table".
mysql-test/r/flush.result:
Added test which checks that "flush tables <list> with
read lock" is compatible with active "flush tables with
read lock" but not vice-versa. This test also covers
bug #52044 "FLUSH TABLES WITH READ LOCK and FLUSH TABLES
<list> WITH READ LOCK are incompatible".
mysql-test/r/mdl_sync.result:
Extended the MDL deadlock detector coverage with scenarios in which
waiting for a table to be flushed causes deadlocks.
mysql-test/suite/perfschema/r/dml_setup_instruments.result:
Adjusted test results after removal of COND_refresh
condition variable.
mysql-test/suite/perfschema/r/server_init.result:
Adjusted test and its results after removal of COND_refresh
condition variable.
mysql-test/suite/perfschema/t/server_init.test:
Adjusted test and its results after removal of COND_refresh
condition variable.
mysql-test/t/flush.test:
Added test which checks that "flush tables <list> with
read lock" is compatible with active "flush tables with
read lock" but not vice-versa. This test also covers
bug #52044 "FLUSH TABLES WITH READ LOCK and FLUSH TABLES
<list> WITH READ LOCK are incompatible".
mysql-test/t/kill.test:
Adjusted test case after changing status which is set
when FLUSH TABLES waits for tables to be flushed from
"Flushing tables" to "Waiting for table".
mysql-test/t/lock_multi.test:
Adjusted test case after changing status which is set
when FLUSH TABLES waits for tables to be flushed from
"Flushing tables" to "Waiting for table".
mysql-test/t/mdl_sync.test:
Extended the MDL deadlock detector coverage with scenarios in which
waiting for a table to be flushed causes deadlocks.
sql/ha_ndbcluster.cc:
Adjusted code after adding one more parameter for
close_cached_tables() call - timeout for waiting for
table to be flushed.
sql/ha_ndbcluster_binlog.cc:
Adjusted code after adding one more parameter for
close_cached_tables() call - timeout for waiting for
table to be flushed.
sql/lock.cc:
Removed COND_refresh condition variable. See comment
for sql_base.cc for details.
sql/mdl.cc:
Now MDL deadlock detector takes into account information
about waits for table flushes when searching for deadlock.
To implement this change:
- Declaration of enum_deadlock_weight and
Deadlock_detection_visitor were moved to mdl.h header
to make them available to the code in table.cc which
implements deadlock detector traversal through edges
of waiters graph representing waiting for flush.
- Since an MDL_context may now wait not only for a metadata
lock but also for a table to be flushed, an abstract
Wait_for_edge class was introduced. Its descendants
MDL_ticket and Flush_ticket encapsulate the specifics
of inspecting the waiters graph when following an
edge representing a wait of a particular type.
We no longer require the global IX metadata lock when acquiring
SNW or SNRW locks. That lock is needed only when metadata
locks of these types are upgraded to X locks. This allows
SNW locks to be used in the FLUSH TABLES <list> WITH READ LOCK
implementation and keeps the latter compatible with the global
read lock.
sql/mdl.h:
Now MDL deadlock detector takes into account information
about waits for table flushes when searching for deadlock.
To implement this change:
- Declaration of enum_deadlock_weight and
Deadlock_detection_visitor were moved to mdl.h header
to make them available to the code in table.cc which
implements deadlock detector traversal through edges
of waiters graph representing waiting for flush.
- Since an MDL_context may now wait not only for a metadata
lock but also for a table to be flushed, an abstract
Wait_for_edge class was introduced. Its descendants
MDL_ticket and Flush_ticket encapsulate the specifics
of inspecting the waiters graph when following an
edge representing a wait of a particular type.
- Deadlock_detection_visitor now has an m_table_shares_visited
member which makes it possible to support recursive locking
of LOCK_open. This is required when the deadlock detector
inspects a waiters graph which contains several edges
representing waits for flushes, or needs to traverse
such an edge more than once.
sql/mysqld.cc:
Removed COND_refresh condition variable. See comment
for sql_base.cc for details.
sql/mysqld.h:
Removed COND_refresh condition variable. See comment
for sql_base.cc for details.
sql/sql_base.cc:
Changed the approach to how threads wait for a table
to be flushed. Now a thread that wants to wait for an old
table to go away subscribes for notification by adding a
Flush_ticket to the table's share and waits using the
MDL_context::m_wait object. Once the table gets flushed
(i.e. all tables are closed and the table share is ready
to be destroyed) all such waiters are notified
individually.
Thanks to this change MDL deadlock detector can take
such waits into account.
To implement this, and as a result of this change:
- tdc_wait_for_old_versions() was replaced with
tdc_wait_for_old_version() which waits for an individual
old share to go away and which is called by open_table()
after finding out that the share is outdated. We don't
need to perform a back-off before such waiting thanks
to the fact that the deadlock detector now sees such waits.
- As a result Open_table_ctx::m_mdl_requests became
unnecessary and was removed. We no longer allocate
copies of MDL_request objects on the MEM_ROOT when the
MYSQL_OPEN_FORCE_SHARED/SHARED_HIGH_PRIO flags are
in effect.
- close_cached_tables() and tdc_wait_for_old_version()
share the code which implements waiting for a share to be
flushed - both use the TABLE_SHARE::wait_until_flush()
method. Thanks to this, close_cached_tables() supports
timeouts and has an extra parameter for this.
- The Open_table_context::OT_MDL_CONFLICT enum element was
renamed to OT_CONFLICT as it is now also used in cases
when a back-off is required to resolve a deadlock caused
by waiting for a flush and not for a metadata lock.
- In cases when we discover that the current connection tries
to open tables from different generations, we now simply
back off and restart the process of opening tables. To
support this the Open_table_context::OT_REOPEN_TABLES enum
element was added.
- The COND_refresh condition variable became unnecessary and
was removed.
- mysql_notify_thread_having_shared_lock() no longer wakes
up connections waiting for a flush, as all such connections
can be woken up by the deadlock detector if necessary.
sql/sql_base.h:
- close_cached_tables() now has one more parameter -
timeout for waiting for table to be flushed.
- The Open_table_context::OT_MDL_CONFLICT enum element was
renamed to OT_CONFLICT as it is now also used in cases
when a back-off is required to resolve a deadlock caused
by waiting for a flush and not for a metadata lock.
Added the new OT_REOPEN_TABLES enum element to be used in
cases when we need to restart the open tables process even
in the middle of a transaction.
- Open_table_ctx::m_mdl_requests became unnecessary and
was removed.
sql/sql_class.h:
Added assert ensuring that we won't use LOCK_open mutex
with THD::enter_cond(). Otherwise deadlocks can arise in
MDL deadlock detector.
sql/sql_parse.cc:
Changed FLUSH TABLES <list> WITH READ LOCK to take SNW
metadata locks instead of X locks on tables to be flushed.
Since we no longer require the global IX lock to be taken
when SNW locks are taken, this makes the statement
compatible with the FLUSH TABLES WITH READ LOCK statement.
Since SNW locks allow other connections to have the table
open, FLUSH TABLES <list> WITH READ LOCK now has to
wait during open_tables() for the old version to go away.
Such waits can lead to deadlocks, which will be detected
by the MDL deadlock detector, which now takes waits for a
table to be flushed into account.
Also adjusted code after adding one more parameter for
close_cached_tables() call - timeout for waiting for
table to be flushed.
sql/sql_yacc.yy:
FLUSH TABLES <list> WITH READ LOCK now needs only SNW
metadata locks on tables.
sql/sys_vars.cc:
Adjusted code after adding one more parameter for
close_cached_tables() call - timeout for waiting for
table to be flushed.
sql/table.cc:
Implemented the new approach to how threads wait for a
table to be flushed. Now a thread that wants to wait for
an old table to go away subscribes for notification by
adding a Flush_ticket to the table's share and waits using
the MDL_context::m_wait object. Once the table gets flushed
(i.e. all tables are closed and the table share is ready
to be destroyed) all such waiters are notified
individually. This change makes such waits visible to
the MDL deadlock detector.
To do it:
- Added list of waiters/Flush_tickets to TABLE_SHARE
class.
- Changed free_table_share() to postpone freeing of
share memory until last waiter goes away and to
wake up subscribed waiters.
- Added TABLE_SHARE::wait_until_flushed() method which
implements subscription to the list of waiters for
table to be flushed and waiting for this event.
Implemented an interface which exposes waits for
flushes to the MDL deadlock detector:
- Introduced the Flush_ticket class, a descendant of
the Wait_for_edge class.
- Added TABLE_SHARE::find_deadlock() method which allows
deadlock detector to find out what contexts are still
using old version of table in question (i.e. to find
out what contexts are waited for by owner of
Flush_ticket).
sql/table.h:
In order to support new strategy of waiting for table flush
(see comment for table.cc for details) added list of
waiters/Flush_tickets to TABLE_SHARE class.
Implemented an interface which exposes waits for
flushes to the MDL deadlock detector:
- Introduced the Flush_ticket class, a descendant of
the Wait_for_edge class.
- Added TABLE_SHARE::find_deadlock() method which allows
deadlock detector to find out what contexts are still
using old version of table in question (i.e. to find
out what contexts are waited for by owner of
Flush_ticket).
prepared statements
Using GROUP_CONCAT() together with the WITH ROLLUP modifier
could crash the server.
The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER
objects representing the columns in the ORDER BY clause of
GROUP_CONCAT().
2. find_order_in_list() called from
Item_func_group_concat::setup() modifies the ORDER objects so
that their 'item' member points to the arguments list
allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of
the original Item_func_group_concat object could be created by
using the Item_func_group_concat::Item_func_group_concat(THD
*thd, Item_func_group_concat *item) copy constructor. The
latter essentially creates a shallow copy of the source
object. Memory for the arguments array is allocated on
thd->mem_root, but the pointers for arguments and ORDER are
copied verbatim.
What happens in the test case is that when executing the query
for the first time, after a copy of the original
Item_func_group_concat object has been created by
JOIN::rollup_make_fields(), find_order_in_list() is called for
this new object. It then resolves ORDER BY by modifying the
ORDER objects so that they point to elements of the arguments
array which is local to the cloned object. When thd->mem_root
is freed upon completing the execution, pointers in the ORDER
objects become invalid. Those ORDER objects, however, are also
shared with the original Item_func_group_concat object which is
preserved between executions of a prepared statement. So the
first call to find_order_in_list() for the original object on
the second execution tries to dereference an invalid pointer.
The solution is to create copies of the ORDER objects when
copying an Item_func_group_concat, so as not to leave stale
pointers in other instances with different lifecycles (see the
sketch below).
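A simplified illustration of the fix (deep-copying the ORDER descriptors in the copy constructor and re-pointing them at the copy's own argument array); the Order and GroupConcat types below are stand-ins for the server's ORDER struct and Item_func_group_concat, and the code assumes each Order::item points into the owner's args array, as described above.

  #include <cstddef>
  #include <utility>
  #include <vector>

  struct Order
  {
    int *item;          // points into the owning object's args array
    bool descending;
  };

  struct GroupConcat
  {
    std::vector<int>   args;    // argument list owned by this object
    std::vector<Order> order;   // ORDER BY descriptors

    GroupConcat(std::vector<int> a, std::vector<Order> o)
      : args(std::move(a)), order(std::move(o)) {}

    // Copy constructor after the fix: the Order descriptors are copied
    // (not shared) and re-pointed at this copy's own args array, so no
    // other instance is left holding pointers into freed memory when a
    // ROLLUP clone goes away between prepared-statement executions.
    GroupConcat(const GroupConcat &other)
      : args(other.args), order(other.order)
    {
      for (std::size_t i = 0; i < order.size(); i++)
      {
        std::size_t idx = other.order[i].item - other.args.data();
        order[i].item = args.data() + idx;   // point into *our* args
      }
    }
  };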
mysql-test/r/func_gconcat.result:
Test case for bug #54476.
mysql-test/t/func_gconcat.test:
Test case for bug #54476.
sql/item_sum.cc:
Copy the ORDER objects pointed to by the elements of the
'order' array in the copy constructor of
Item_func_group_concat.
sql/table.h:
Removed the unused 'item_copy' member of the ORDER class.
For queries with ORDER BY clauses that employed filesort, the use of
virtual column references in select lists could trigger assertion
failures. This happened because a wrong vcol_set bitmap was used for
filesort; it turned out that filesort required its own vcol_set bitmap.
Made the management of the vcol_set bitmaps similar to the management
of the read_set and write_set bitmaps.
If the expression for a virtual column of a table contained a datetime
comparison, then the execution of the second query that used this
virtual column caused a crash. It happened because the execution
of the first query that used this virtual column inserted a cached
item into the expression tree. The cached item was allocated in
the statement memory while the expression tree was allocated in
the table memory.
Now the cached items that are inserted into expressions for virtual
columns with datetime comparisons are always allocated in the same
mem_root as the expressions for the virtual columns. So the inserted
cached items are now valid for any query that uses these virtual
columns (see the illustration below).
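A self-contained illustration (not server code) of the allocation rule described above: the cached item spliced into a virtual-column expression must come from the arena that owns the expression tree (the table's mem_root), not from the per-statement arena that is freed after each execution.

  #include <cstddef>
  #include <memory>
  #include <new>
  #include <vector>

  // Tiny stand-in for a MEM_ROOT: everything allocated from it stays
  // valid until the arena itself is cleared.
  struct Arena
  {
    std::vector<std::unique_ptr<unsigned char[]>> blocks;
    void *alloc(std::size_t n)
    {
      blocks.emplace_back(new unsigned char[n]);
      return blocks.back().get();
    }
    void clear() { blocks.clear(); }   // frees every allocation at once
  };

  struct Node { Node *cached_arg = nullptr; };

  // The fix in one line: allocate the cache node from the arena that
  // owns 'expr'.  Allocating it from the statement arena would leave a
  // dangling pointer inside the expression tree once that arena is
  // cleared at the end of the first execution.
  Node *wrap_with_cache(Node *expr, Arena *expr_owner_arena)
  {
    Node *cache = new (expr_owner_arena->alloc(sizeof(Node))) Node();
    cache->cached_arg = expr;
    return cache;
  }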
There were two problems that caused the wrong results reported with this bug.
1. In some cases stored (persistent) virtual columns were not marked
in the write_set and vcol_set bitmaps.
2. If the list of fields in an insert command was empty, then the values of
the stored virtual columns were set to their defaults.
To fix the first problem the function st_table::mark_virtual_columns_for_write
was modified. Now the function has a parameter that says whether the virtual
columns are to be marked for insert or for update.
To fix the second problem special handling of empty insert lists was
added to the function fill_record(). (Both fixes are sketched below.)
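A hedged, self-contained sketch of both fixes. The bitmap and field types are simplified stand-ins; only the new insert/update parameter of mark_virtual_columns_for_write is taken from the message, and the update-path condition (marking only stored virtual columns that depend on an updated base column) is an assumption about the intent, not the server's exact rule.

  #include <cstddef>
  #include <vector>

  struct VField
  {
    bool is_virtual;
    bool is_stored;                   // stored (persistent) virtual column
    bool depends_on_updated_column;   // only relevant for UPDATE
  };

  // Mark stored virtual columns in simplified write/vcol bitmaps.  For
  // INSERT every stored virtual column is marked; for UPDATE only those
  // assumed to depend on an updated base column.
  void mark_virtual_columns_for_write(const std::vector<VField> &fields,
                                      bool insert_fl,
                                      std::vector<bool> &write_set,
                                      std::vector<bool> &vcol_set)
  {
    for (std::size_t i = 0; i < fields.size(); i++)
    {
      const VField &f = fields[i];
      if (!f.is_virtual || !f.is_stored)
        continue;
      if (insert_fl || f.depends_on_updated_column)
      {
        write_set[i] = true;   // value will be generated and written
        vcol_set[i]  = true;   // expression will be evaluated
      }
    }
  }

  // Empty INSERT field list ("INSERT INTO t VALUES (...)"): treat it as
  // covering all columns so stored virtual columns are still computed
  // instead of silently falling back to their defaults.
  void fill_record_for_empty_field_list(const std::vector<VField> &fields,
                                        std::vector<bool> &write_set,
                                        std::vector<bool> &vcol_set)
  {
    mark_virtual_columns_for_write(fields, /* insert_fl= */ true,
                                   write_set, vcol_set);
  }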
libmysqld/Makefile.am:
The new file added.
mysql-test/r/index_merge_myisam.result:
subquery_cache optimization option added.
mysql-test/r/myisam_mrr.result:
subquery_cache optimization option added.
mysql-test/r/subquery_cache.result:
The subquery cache tests added.
mysql-test/r/subselect3.result:
Subquery cache switched off to avoid changing read statistics.
mysql-test/r/subselect3_jcl6.result:
Subquery cache switched off to avoid changing read statistics.
mysql-test/r/subselect_no_mat.result:
subquery_cache optimization option added.
mysql-test/r/subselect_no_opts.result:
subquery_cache optimization option added.
mysql-test/r/subselect_no_semijoin.result:
subquery_cache optimization option added.
mysql-test/r/subselect_sj.result:
subquery_cache optimization option added.
mysql-test/r/subselect_sj_jcl6.result:
subquery_cache optimization option added.
mysql-test/t/subquery_cache.test:
The subquery cache tests added.
mysql-test/t/subselect3.test:
Subquery cache switched off to avoid changing read statistics.
sql/CMakeLists.txt:
The new file added.
sql/Makefile.am:
The new files added.
sql/item.cc:
Expression cache item (Item_cache_wrapper) added.
Item_ref and Item_field fixed for correct usage of the result field and fast resolving in SPs.
sql/item.h:
Expression cache item (Item_cache_wrapper) added.
Item_ref and Item_field fixed for correct usage of the result field and fast resolving in SPs.
sql/item_cmpfunc.cc:
Subquery cache added.
sql/item_cmpfunc.h:
Subquery cache added.
sql/item_subselect.cc:
Subquery cache added.
sql/item_subselect.h:
Subquery cache added.
sql/item_sum.cc:
Registration of subquery parameters added.
sql/mysql_priv.h:
subquery_cache optimization option added.
sql/mysqld.cc:
subquery_cache optimization option added.
sql/opt_range.cc:
Fix due to subquery cache.
sql/opt_subselect.cc:
Parameters of the function changed.
sql/procedure.h:
.h file guard added.
sql/sql_base.cc:
Registration of subquery parameters added.
sql/sql_class.cc:
Option to allow adding indexes to a temporary table.
sql/sql_class.h:
Item iterators added.
Option to allow adding indexes to a temporary table.
sql/sql_expression_cache.cc:
Expression cache for caching subqueries added.
sql/sql_expression_cache.h:
Expression cache for caching subqueries added.
sql/sql_lex.cc:
Registration of subquery parameters added.
sql/sql_lex.h:
Registration of subqueries and subquery parameters added.
sql/sql_select.cc:
Subquery cache added.
sql/sql_select.h:
Subquery cache added.
sql/sql_union.cc:
A new parameter to the function added.
sql/sql_update.cc:
A new parameter to the function added.
sql/table.cc:
Procedures to manage temporary table indexes added.
sql/table.h:
Procedures to manage temporary table indexes added.
storage/maria/ha_maria.cc:
Fix of the handler to allow destroying a table in case of an error during table creation.
storage/maria/ha_maria.h:
.h file guard added.
storage/myisam/ha_myisam.cc:
Fix of the handler to allow destroying a table in case of an error during table creation.
use limit efficiently
Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a
filesort even if not required
Also two bugs reported after QA review (before the commit
of bugs above to public trees, no documentation needed):
Bug #53737: Performance regressions after applying patch
for bug 36569
Bug #53742: UPDATEs have no effect after applying patch
for bug 36569
Execution of single-table UPDATE and DELETE statements did not use the
same optimizer as was used in the compilation of SELECT statements.
Instead, it had an optimizer of its own that did not take into account
that you can omit sorting by retrieving rows using an index.
An extra optimization has been added: when applicable, single-table
UPDATE/DELETE statements use an existing index instead of filesort,
just as a corresponding SELECT query would.
Handling of DESC ordering expressions has also been added where a
reverse index scan is applicable.
From now on most single-table UPDATE and DELETE statements show the
same disk access patterns as the corresponding SELECT query. We verify
this by comparing the result of SHOW STATUS LIKE 'Sort%'.
Currently the get_index_for_order function (sketched below)
a) checks the quick select index (if any) for compatibility with the
ORDER expression list, or
b) chooses the cheapest available compatible index, but only if
the index scan is cheaper than filesort.
The second way is implemented by the new test_if_cheaper_ordering
function (an extracted part of test_if_skip_sort_order()).
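A schematic version of the two-step choice described above for get_index_for_order(); the cost values and index descriptors are simplified stand-ins, not the server's data structures or cost model.

  #include <vector>

  struct IndexCandidate
  {
    int    index_no;
    bool   compatible_with_order;   // can return rows in ORDER BY order
    double scan_cost;
  };

  // Returns the index to scan, or -1 to fall back to filesort.
  // Step a): if a quick select index exists, keep it only if it is
  // compatible with the ORDER BY list.
  // Step b): otherwise pick the cheapest compatible index, but only if
  // scanning it beats filesort (test_if_cheaper_ordering's job).
  int get_index_for_order_sketch(const IndexCandidate *quick_select_index,
                                 const std::vector<IndexCandidate> &indexes,
                                 double filesort_cost)
  {
    if (quick_select_index)
      return quick_select_index->compatible_with_order
                 ? quick_select_index->index_no : -1;

    int    best      = -1;
    double best_cost = filesort_cost;
    for (const IndexCandidate &idx : indexes)
      if (idx.compatible_with_order && idx.scan_cost < best_cost)
      {
        best      = idx.index_no;
        best_cost = idx.scan_cost;
      }
    return best;
  }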
mysql-test/r/log_state.result:
Updated result for optimized query, bug #36569.
mysql-test/r/single_delete_update.result:
Test case for bug #30584, bug #36569 and bug #53742.
mysql-test/r/update.result:
Updated result for optimized query, bug #30584.
Note:
"Handler_read_last 1" omitted, see bug 52312:
lost Handler_read_last status variable.
mysql-test/t/single_delete_update.test:
Test case for bug #30584, bug #36569 and bug #53742.
sql/opt_range.cc:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
* get_index_for_order() has been rewritten entirely and moved
to sql_select.cc
New QUICK_RANGE_SELECT::make_reverse method has been added.
sql/opt_range.h:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
* get_index_for_order() has been rewritten entirely and moved
to sql_select.cc
New functions:
* QUICK_SELECT_I::make_reverse()
* SQL_SELECT::set_quick()
sql/records.cc:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
* init_read_record_idx() has been modified to allow reverse index scan
New functions:
* rr_index_last()
* rr_index_desc()
sql/records.h:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
init_read_record_idx() has been modified to allow reverse index scan
sql/sql_delete.cc:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
mysql_delete: an optimization has been added to skip
unnecessary sorting with ORDER BY clause where select
result ordering is acceptable.
sql/sql_select.cc:
Bug #30584, bug #36569, bug #53737, bug #53742:
UPDATE/DELETE ... WHERE ... ORDER BY... always does a filesort
even if not required
The const_expression_in_where function has been modified
to accept both Item and Field pointers.
New functions:
* get_index_for_order()
* test_if_cheaper_ordering() has been extracted from
test_if_skip_sort_order() to share with get_index_for_order()
* simple_remove_const()
sql/sql_select.h:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
New functions:
* test_if_cheaper_ordering()
* simple_remove_const()
* get_index_for_order()
sql/sql_update.cc:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
mysql_update: an optimization has been added to skip
unnecessary sorting with ORDER BY clause where a select
result ordering is acceptable.
sql/table.cc:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
New functions:
* TABLE::update_const_key_parts()
* is_simple_order()
sql/table.h:
Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
always does a filesort even if not required
New functions:
* TABLE::update_const_key_parts()
* is_simple_order()