Fix for bug #55273 "FLUSH TABLE tm WITH READ LOCK for Merge
table causes assert failure".
Attempting to use the FLUSH TABLES table_list WITH READ LOCK
statement for a MERGE table led to an assertion failure if
one of its children was not present in the list of tables
to be flushed. The problem was not visible in non-debug builds.
The assertion failure was caused by the fact that in such
situations the FLUSH TABLES table_list WITH READ LOCK
implementation tried to use (e.g. lock) such child tables
without acquiring metadata locks on them. This happened because,
when opening tables, we assumed that metadata locks on all tables
had already been acquired earlier during statement execution, and
such an assumption was false for MERGE children.
This patch fixes the problem by ensuring at open_tables() time
that we try to acquire metadata locks on all tables to be opened.
For normal tables such requests are satisfied instantly since
locks are already acquired for them. For MERGE children, metadata
locks are acquired in the normal fashion.
Note that FLUSH TABLES merge_table WITH READ LOCK will lock for
read both the MERGE table and its children but will flush only
the MERGE table. To flush children one has to mention them in table
list explicitly. This is expected behavior and it is consistent with
usage patterns for this statement (e.g. in mysqlhotcopy script).
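For illustration, a minimal sketch of the resulting behavior
(table names are made up):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE tm (a INT) ENGINE=MERGE UNION=(t1, t2);

  FLUSH TABLES tm WITH READ LOCK;          # locks tm, t1 and t2 for read,
  UNLOCK TABLES;                           # but flushes only tm

  FLUSH TABLES tm, t1, t2 WITH READ LOCK;  # flushes the children too
  UNLOCK TABLES;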
mysql-test/r/flush.result:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
mysql-test/t/flush.test:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
sql/sql_base.cc:
Changed lock_table_names() to support newly introduced
MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag.
sql/sql_base.h:
Introduced MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag for
open_tables() and lock_table_names() which allows skipping
acquisition of global and schema-scope locks when SNW, SNRW or
X metadata locks are acquired.
sql/sql_reload.cc:
Changed "FLUSH TABLES table_list WITH READ LOCK" code not to
cause assert about missing metadata locks when MERGE table is
flushed without one of its underlying tables.
To achieve this we no longer call open_and_lock_tables() with
MYSQL_OPEN_HAS_MDL_LOCK flag so this function automatically
acquires metadata locks on MERGE children if such lock has
not been already acquired at earlier stage. Instead we call
this function with MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag to
suppress acquiring of global IX lock in order to keep FLUSH
TABLES table_list WITH READ LOCK compatible with FLUSH TABLE
WITH READ LOCK.
Also changed implementation to use lock_table_names() function
for pre-acquiring of metadata locks instead of custom code.
To implement this change moved setting of open_type member for
table list elements to parser.
sql/sql_yacc.yy:
Now we set the acceptable table type for FLUSH TABLES table_list
WITH READ LOCK at parsing time instead of execution time.
ALTER TABLE on a MERGE table could cause a deadlock with two
other connections if we reached a situation where:
1) A connection doing ALTER TABLE can't upgrade to MDL_EXCLUSIVE on the
parent table, but holds TL_READ_NO_INSERT on the child tables.
2) A connection doing DELETE on a child table can't get TL_WRITE on it
since ALTER TABLE holds TL_READ_NO_INSERT.
3) A connection doing SELECT on the parent table can't get TL_READ on
the child tables since TL_WRITE is ahead in the lock queue, but holds
MDL_SHARED_READ on the parent table preventing ALTER TABLE from upgrading.
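A sketch of the three-connection scenario, assuming a MERGE table
tm with child table t1 (names are illustrative):

  # connection 1:
  ALTER TABLE tm ADD COLUMN b INT;   # holds TL_READ_NO_INSERT on t1,
                                     # waits to upgrade to MDL_EXCLUSIVE on tm
  # connection 2:
  DELETE FROM t1 WHERE a = 1;        # waits for TL_WRITE on t1
  # connection 3:
  SELECT * FROM tm;                  # holds MDL_SHARED_READ on tm, waits for
                                     # TL_READ on t1 behind the pending TL_WRITE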
For regular tables, this deadlock is avoided by having ALTER TABLE
take an MDL_SHARED_NO_WRITE metadata lock on the table. This prevents
DELETE from acquiring MDL_SHARED_WRITE on the table before ALTER TABLE
tries to upgrade to MDL_EXCLUSIVE. In the example above, SELECT would
therefore not be blocked by the pending DELETE as DELETE would not be
able to enter TL_WRITE in the table lock queue.
This patch fixes the problem for merge tables by using the same metadata
lock type for child tables as for the parent table. The child tables will
in this case therefore be locked with MDL_SHARED_NO_WRITE, preventing
DELETE from acquiring a metadata lock and entering the table lock queue.
Change in behavior: By taking the same metadata lock for child tables
as for the parent table, LOCK TABLE on the parent table will now also
implicitly lock the child tables. Since LOCK TABLE on the parent table
now takes more than one metadata lock, it is possible for LOCK TABLE
... WRITE on the parent table or child tables to give ER_LOCK_DEADLOCK
error.
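For example (a sketch; tm is a MERGE table over t1 and t2):

  LOCK TABLES tm WRITE;   # now also takes metadata locks on t1 and t2,
                          # so concurrent DDL on the children is blocked
  UNLOCK TABLES;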
Test case added to mdl_sync.test.
Merge.test/.result has been updated to reflect the change to LOCK TABLE.
create data dir correctly in initial_database target on Windows
handle the case where INSTALL_MYSQLTESTDIR is empty (e.g. someone does not
want to install tests)
Problem:
- ORDER BY for utf8mb4_bin, utf16_bin and utf32_bin returned
results in a wrong order, because old functions
(supporting only the BMP range) were used to handle these collations.
- Additionally, utf16_bin did not sort supplementary characters
between U+D700 and U+E000, as the WL#1213 specification requires.
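An illustrative check for the utf8mb4_bin case (the shared tests live
in ctype_filesort2.inc): a supplementary character such as U+10000 must
sort after U+FFFF under a _bin collation.

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin);
  INSERT INTO t1 VALUES (0xF0908080), (0xEFBFBF);  # U+10000, U+FFFF
  SELECT HEX(c) FROM t1 ORDER BY c;                # expect U+FFFF first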
include/m_ctype.h:
Adding prototypes.
mysql-test/include/ctype_filesort2.inc:
Adding a new shared test file.
mysql-test/t/ctype_utf8mb4.test:
Adding tests.
strings/ctype-ucs2.c:
- Fixing my_strncoll[sp]_utf16_bin to compare
binary representation instead of code points,
to make columns with indexes sort correctly.
- Fixing my_collation_handler_utf32_bin and
my_collation_handler_utf16_bin to use new
functions.
strings/ctype-utf8.c:
- Adding my_strnxfrm[len]_unicode_fill_bin()
to handle utf8mb4_bin, utf16_bin and utf32_bin,
using 3 bytes per weight.
This function also performs special reordering in case of utf16_bin.
- Fixing my_collation_utf8mb4_bin handler to use the
new function.
from 5.1.50 to 5.5.6".
Debug builds of the server aborted due to an assertion
failure when a DROP DATABASE statement was run on an
installation which had an outdated or corrupt mysql.proc table.
In particular, this affected the mysql_upgrade tool, which is
run as part of a 5.1 to 5.5 upgrade.
The problem was that sp_drop_db_routines(), which was invoked
during dropping of the database, could have returned without
closing and unlocking mysql.proc table in cases when this
table was not up-to-date with the current server. As a result,
a further attempt to open and lock the mysql.event table, which
was necessary to complete dropping of the database, ended
with an assertion failure.
This patch solves this problem by ensuring that
sp_drop_db_routines() always closes mysql.proc table and
releases metadata locks on it. This is achieved by changing
open_proc_table_for_update() function to close tables and
release metadata locks acquired by it in case of failure.
This step also makes behavior of the latter function
consistent with behavior of open_proc_table_for_read()/
open_and_lock_tables().
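A sketch of the failure scenario (dropping the column below is just an
illustrative way to make mysql.proc look outdated; sp-destruct.test
exercises this more carefully):

  CREATE DATABASE db1;
  CREATE PROCEDURE db1.p1() SET @x = 1;
  ALTER TABLE mysql.proc DROP COLUMN body_utf8;  # simulate a pre-5.5 table
  DROP DATABASE db1;   # asserted on debug builds before this patch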
Test case for this bug was added to sp-destruct.test.
"Access compatibility" syntax
The "wild" "DELETE FROM table_name.* ... USING ..." syntax
for multi-table DELETE statements is documented but it was
lost in the fix for the bug 30234.
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
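The restored syntax, sketched:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  # "Access compatibility" form with the trailing .*:
  DELETE FROM t1.* USING t1, t2 WHERE t1.a = t2.a;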
mysql-test/r/delete.result:
Test case for bug #53034.
mysql-test/t/delete.test:
Test case for bug #53034.
sql/sql_yacc.yy:
Bug #53034: Multiple-table DELETE statements not accepting
"Access compatibility" syntax
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
Note: simply extending table_ident with opt_wild in
the table_alias_ref rule is not acceptable, because
a) it adds one more conflict and b) this conflict resolves
in an inappropriate way.
The lock_type is upgraded to TL_WRITE from TL_WRITE_DELAYED for
INSERT DELAYED when inserting multiple values in one statement.
This is safe, but it causes an unsafe warning in SBR.
Make INSERT DELAYED safe by logging it as INSERT without DELAYED.
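For example (a sketch), a multi-row delayed insert such as:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  INSERT DELAYED INTO t1 VALUES (1), (2), (3);
  # is now written to the binary log as:
  #   INSERT INTO t1 VALUES (1), (2), (3);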
mysql-test/extra/binlog_tests/binlog_insert_delayed.test:
Updated the test file to check that a multi-row INSERT DELAYED
statement no longer causes an unsafe warning and is binlogged
as INSERT without DELAYED.
mysql-test/extra/rpl_tests/create_recursive_construct.inc:
Updated for the patch of bug#54579.
mysql-test/suite/binlog/r/binlog_row_binlog.result:
Updated for the patch of bug#54579.
mysql-test/suite/binlog/r/binlog_statement_insert_delayed.result:
Test result for BUG#54579.
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Updated for the patch of bug#54579.
mysql-test/suite/binlog/r/binlog_unsafe.result:
Updated for the patch of bug#54579.
mysql-test/suite/binlog/t/binlog_unsafe.test:
Updated for the patch of bug#54579.
sql/sql_insert.cc:
Added code to generate a new query string that removes the
DELAYED keyword for multi-row INSERT DELAYED statements.
sql/sql_yacc.yy:
Added code to record the DELAYED keyword position and removed
the code that marked INSERT DELAYED statements as unsafe.
== MYSQL_TYPE_LONGLONG
A MIN/MAX() function with a subquery as its argument could lead
to an assertion failure on debug builds or wrong data on release
ones.
The problem was a combination of the following factors:
- Item_sum_hybrid::fix_fields() might use the argument
(args[0]) to calculate 'hybrid_field_type' which was later used
to decide how the data should be sent to the client.
- Item_sum::make_field() might use the argument again to
calculate the field's type when sending result set metadata to
the client.
- The argument could be changed in between these two calls via
Item::set_arg() leading to inconsistent metadata being
reported.
Here is what was happening for the bug's test case:
1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type
as MYSQL_TYPE_LONGLONG based on args[0] which is an
Item::SUBSELECT_ITEM at that time.
2. A temporary table is created to execute the
query. create_tmp_field_from_item() creates a Field_long object
according to the subselect's max_length.
3. The subselect item in Item_sum_hybrid is replaced by the
Item_field object referencing the newly created Field_long.
4. Item_sum::make_field() rightfully returns the
MYSQL_TYPE_LONG type when calculating the result set metadata.
5. When sending the actual data, Item::send() relies on the
virtual field_type() function which in our case returns
previously calculated hybrid_field_type == MYSQL_TYPE_LONGLONG.
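A sketch of a query shape that follows these steps (the actual test
case is in func_group.test):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  SELECT MAX((SELECT a FROM t1 LIMIT 1)) FROM t1;  # metadata reports LONG,
                                                   # data is sent as LONGLONG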
It looks like the only solution is to never refer to the
argument's metadata after the result metadata has been
calculated in fix_fields(), since the argument itself may be
different by then. In this sense, Item_sum::make_field() should
never be used, because it may rely on the argument's metadata
and is only called after fix_fields(). The "default"
implementation in Item::make_field() should be used instead as
it relies only on field_type(), but not on the argument's type.
Fixed by removing Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
mysql-test/r/func_group.result:
Added a test case for bug #54465.
mysql-test/t/func_group.test:
Added a test case for bug #54465.
sql/item_sum.cc:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
sql/item_sum.h:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
Problem: trailing spaces were stripped using 8-bit code,
so the truncation result length was incorrect, which led
to an assertion failure.
Fix: use multi-byte safe code.
After the fix for bug 39653, the shortest available secondary index was
used for full table scans. The primary clustered key was used only if no
secondary index could be used. However, when the chosen secondary index
includes all fields of the table being scanned, it is better to use the
primary index, since the amount of data to scan is the same but the
primary index is clustered.
Now the find_shortest_key function takes this into account.
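A sketch of the affected case:

  CREATE TABLE t1 (
    a INT NOT NULL,
    b INT NOT NULL,
    PRIMARY KEY (a),
    KEY k1 (a, b)              # secondary index covering all columns
  ) ENGINE=InnoDB;
  SELECT * FROM t1;            # full scan: now reads the clustered PK
                               # instead of the covering index k1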
mysql-test/suite/innodb/r/innodb_mysql.result:
Added a test case for bug#55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added a test case for bug#55656.
sql/sql_select.cc:
Bug #55656: mysqldump can be slower after bug #39653 fix.
The find_shortest_key function now prefers the clustered primary key
if the chosen secondary key includes all fields of the table.
Before this fix, the server did not recognize 'short' (as in -a)
options but only 'long' (as in --ansi) options
in the startup command line, due to earlier changes in 5.5
introduced for the performance schema.
The root cause is that handle_options() did not honor the
my_getopt_skip_unknown flag when parsing 'short' options.
The fix changes handle_options(), so that my_getopt_skip_unknown is
honored in all cases.
Note that there are limitations to this,
see the added doxygen documentation in handle_options().
The current usage of handle_options() by the server to
parse early performance schema options fits within the limitations.
This has been enforced by an assert for PARSE_EARLY options, for safety.
Before this fix, the ha_read_last_count counter was defined and
updated internally, but never exposed as a status variable.
This fix exposes it as "Handler_read_last",
for completeness of the Handler_read_* status variables interface.
Adjusted test results accordingly.
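After the fix the counter shows up with the rest of the family:

  SHOW GLOBAL STATUS LIKE 'Handler_read%';   # now includes Handler_read_last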
The 'mysqlhotcopy' tool is installed into the bin/ directory
from scripts/.
So check for it there in mysql-test-run.pl.
per-file comments:
mysql-test/mysql-test-run.pl
Check the bin/ directory for mysqlhotcopy presence.
Before this fix, some tests failed due to lack of instrumentation slots
in the performance schema, because the default sizing was too low.
Now that more code has been instrumented, the default sizing has to be adjusted
to match the current instrumentation consumption.
This change:
- increases the number of rwlock classes from 20 to 30,
- increases the number of rwlock and mutex instances to 1 million.
Both are to account for the volume of data instrumented
when the innodb storage engine is used (because of the innodb buffer pool).
Adjusted the test output accordingly.
with two connections doing LOCK TABLE+INSERT DELAYED".
Disabled --ps-protocol for this part of the test as INSERT
DELAYED simply doesn't work with it under LOCK TABLES.
Queries involving predicates of the form "const NOT BETWEEN
not_indexed_column AND indexed_column" could return wrong data
due to incorrect handling by the range optimizer.
For "c NOT BETWEEN f1 AND f2" predicates, get_mm_tree()
produces a disjunction of the SEL_ARG trees for "f1 > c" and
"f2 < c". If one of the trees is empty (i.e. one of the
arguments is not sargable) the resulting tree should be empty
as well, since the whole expression in this case is not
sargable.
The above logic is implemented in get_mm_tree() as follows. The
initial state of the resulting tree is NULL (aka empty). We
then iterate through arguments and compute the corresponding
SEL_ARG tree (either "f1 > c" or "f2 < c"). If the resulting
tree is NULL, it is simply replaced by the generated
tree. Otherwise it is replaced by a disjunction of itself and
the generated tree. The obvious flaw in this implementation is
that if the first argument is not sargable and thus produces a
NULL tree, the resulting tree will simply be replaced by the
tree for the second argument. As a result, "c NOT BETWEEN f1
AND f2" will end up as just "f2 < c".
Fixed by adding a check so that when the first argument
produces an empty tree for the NOT BETWEEN case, the loop is
aborted with an empty tree as a result. The whole idea of using
a loop for 2 arguments does not make much sense, but it was
probably used to avoid code duplication for several BETWEEN
variants.
The Item_cache_datetime::val_str function wasn't taking into account
that a time value could be negative, which led to a failed assertion.
Now Item_cache_datetime::val_str correctly converts negative time values
from integer to string representation.
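A sketch of the failing case (the actual test is in func_group.test):

  CREATE TABLE t1 (t TIME);
  INSERT INTO t1 VALUES ('-01:00:00');
  SELECT MIN(t), MAX(t) FROM t1;   # asserted on debug builds before the fix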
mysql-test/r/func_group.result:
Added a test case for bug#56120.
mysql-test/t/func_group.test:
Added a test case for bug#56120.
sql/item.cc:
Bug#56120: Failed assertion on MIN/MAX on negative time value
Now Item_cache_datetime::val_str correctly converts negative time values
from integer to string representation.
The problem was that deadlocks involving INSERT DELAYED were not detected.
The reason for this is that two threads are involved in INSERT DELAYED:
the connection thread and the handler thread. The connection thread would
wait while the handler thread acquired locks and opened the table.
In essence, this adds an edge to the wait-for-graph between the
connection thread and the handler thread that the deadlock detector is
unaware of. Therefore many deadlocks involving INSERT DELAYED were not
detected.
This patch fixes the problem by having the connection thread acquire the
metadata lock on the table before starting the handler thread. This allows
the deadlock detector to detect any possible deadlocks resulting from
trying to acquire the metadata lock on the table. If the metadata lock is
successfully acquired, the handler thread is started and given a copy of
the ticket representing the metadata lock. When the handler thread then
tries to lock and open the table, it will find that it already has the
metadata lock and will therefore not acquire any new metadata locks.
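A sketch of where the previously invisible wait-for edge appeared:

  # connection 1:
  LOCK TABLES t1 WRITE;
  # connection 2:
  INSERT DELAYED INTO t1 VALUES (1);
  # Before: connection 2 waited for its handler thread, and the handler
  # thread waited for t1's lock, an edge the deadlock detector could not
  # see. Now connection 2 waits for the metadata lock on t1 directly.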
Test cases added to delayed.test.
Problem: ENUM columns are sorted and distributed according to their
numeric value, but Field::hash() incorrectly passed the string
character set (utf32) in combination with the numeric value to the
hash function, which made an assertion fail.
Fix: pass "binary" character set in combination with numeric value
to the hash function.
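A sketch of the failing case (key partitioning hashes the ENUM field):

  CREATE TABLE t1 (a ENUM('a','b') CHARACTER SET utf32)
  PARTITION BY KEY (a) PARTITIONS 2;
  INSERT INTO t1 VALUES ('a'), ('b');   # hashing asserted on debug builds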
mysql-test/suite/parts/r/part_ctype_utf32.result:
Adding tests.
mysql-test/suite/parts/t/part_ctype_utf32.test:
Adding test.
sql/field.cc:
Pass correct character set pointer to the hash function.
The include/mysqlhotcopy.inc had an error in the 'if' condition, so it failed
if the mysqlhotcopy tool was found.
per-file comments:
mysql-test/include/mysqlhotcopy.inc
the test should proceed exactly when mysqlhotcopy was found.
mysql-test/mysql-test-run.pl
don't set the MYSQL_HOTCOPY variable if no mysqlhotcopy was found.