with open HANDLER
This patch changes the code for table renames to not drop metadata
locks. Since table renames are done as a part of ALTER DATABASE ...
UPGRADE, dropping metadata locks in the middle of execution can
result in wrong binlog order since it means that no locks are held
when the binlog is written to.
The RENAME TABLE statement is unaffected since it auto-commits and
therefore already drops metadata locks at the end of execution.
This patch also reverts the regression test for Bug#48940 back to
its original version. The test was temporarily changed due to the
issue mentioned above.
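For illustration, a sketch of the kind of statement during which these
renames happen (the database name is hypothetical):

  -- Upgrades a database whose directory still uses the pre-5.1 encoded
  -- name; the tables it contains are renamed as part of the statement,
  -- and metadata locks are now held until execution ends and the binlog
  -- has been written.
  ALTER DATABASE `#mysql50#a-b-c` UPGRADE DATA DIRECTORY NAME;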
Remove acquisition of LOCK_open around file system operations,
since such operations are now protected by metadata locks.
Rework table discovery algorithm to not require LOCK_open.
No new tests added since all MDL locking operations are covered
in lock.test and mdl_sync.test, and as long as these tests
pass despite the increased concurrency, consistency must be
unaffected.
FLUSH TABLES <list> WITH READ LOCK are incompatible" to
be pushed as separate patch.
Replaced thread state name "Waiting for table", which was
used by threads waiting for a metadata lock or table flush,
with a set of names which better reflect types of resources
being waited for.
Also replaced "Table lock" thread state name, which was used
by threads waiting on thr_lock.c table level lock, with more
elaborate "Waiting for table level lock", to make it
more consistent with other thread state names.
Updated test cases and their results according to these
changes.
Fixed the sys_vars.query_cache_wlock_invalidate_func test not to
wait for the timeout of the wait_condition.inc script.
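For illustration, a hedged way to observe the new state names (this query
is not part of the patch and assumes a blocked session exists):

  -- A thread blocked on a thr_lock.c table level lock now reports
  -- 'Waiting for table level lock' instead of the old 'Table lock'.
  SELECT ID, STATE, INFO
    FROM INFORMATION_SCHEMA.PROCESSLIST
   WHERE STATE LIKE 'Waiting for%';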
******
This patch fixes the following bugs:
- Bug#5889: Exit handler for a warning doesn't hide the warning in
trigger
- Bug#9857: Stored procedures: handler for sqlwarning ignored
- Bug#23032: Handlers declared in a SP do not handle warnings generated
in sub-SP
- Bug#36185: Incorrect precedence for warning and exception handlers
The problem was in the way warnings/errors during stored routine execution
were handled. Prior to this patch the logic was as follows:
- when a warning/an error happens: if we're executing a stored routine,
and there is a handler for that warning/error, remember the handler,
ignore the warning/error and continue execution.
- after a stored routine instruction is executed: check for a remembered
handler and activate one (if any).
This logic caused several problems:
- if one instruction generates several warnings (errors) it's impossible
to choose the right handler -- a handler for the first generated
condition was chosen and remembered for activation.
- conditions raised in scopes other than the current one could be handled
incorrectly.
- not putting generated warnings/errors into the Warning Info (Diagnostic
Area) is against the SQL standard.
The patch changes the logic as follows:
- the Diagnostic Area is cleared at the beginning of each statement that
either can generate warnings or works with tables.
- at the end of a stored routine instruction, Diagnostic Area is left
intact.
- the Diagnostic Area is checked after each stored routine instruction. If
an instruction generates several conditions, it is now possible to look at
all of them and determine an appropriate handler.
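For illustration, a minimal sketch of the kind of scenario these bugs cover
(table and procedure names are hypothetical; a non-strict sql_mode is
assumed so that the truncation below raises a warning rather than an error):

  CREATE TABLE t1 (c CHAR(1));
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE CONTINUE HANDLER FOR SQLWARNING SET done = 1;
    -- The INSERT raises a 'Data truncated' warning; with the new logic
    -- the warning stays in the Diagnostic Area, and the handler is chosen
    -- after looking at all conditions the instruction generated.
    INSERT INTO t1 VALUES ('ab');
    SELECT done;
  END//
  DELIMITER ;
  CALL p1();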
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the MySQL server version (e.g. /*!50200 ... */),
is a special comment: the query inside it is executed only on servers
whose version is not lower than the version appearing in the comment.
This leads to a security issue when the slave's version is higher than
the master's: because the slave SQL thread runs with SUPER privileges,
a malicious user can elevate his privileges on the slave by having it
execute queries that he does not have the privileges to run on the
master.
This bug is fixed with the logic below:
- Replace '!' with ' ' in the version comments that were not applied on
the master. They thus become ordinary comments and are not applied on
the slave either.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/'
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/'
DELETE statement
Single-table delete ordered by a field that has a hash-type index
may cause an assertion failure or a crash.
An optimization added by the fix for Bug#36569 forced the
optimizer to use ORDER BY-compatible indices when applicable.
However, the existence of unsorted indices (HASH index algorithm
for some engines such as MEMORY/HEAP, NDB) was ignored.
The test_if_order_by_key function has been modified to skip
unsorted indices.
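For illustration, a minimal sketch of the kind of statement affected
(table and column names are hypothetical):

  CREATE TABLE t1 (a INT, KEY (a) USING HASH) ENGINE=MEMORY;
  INSERT INTO t1 VALUES (1), (3), (2);
  -- The HASH index on 'a' cannot return rows in ORDER BY order; after
  -- the fix test_if_order_by_key skips such unsorted indices instead of
  -- letting the optimizer pick one for the ordered delete.
  DELETE FROM t1 ORDER BY a LIMIT 2;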
TABLES <list> WITH READ LOCK are incompatible".
The problem was that FLUSH TABLES <list> WITH READ LOCK, when issued
while another connection held the global read lock acquired with
FLUSH TABLES WITH READ LOCK, was blocked and had to wait until the
global read lock was released.
This issue stemmed from the fact that the FLUSH TABLES <list>
WITH READ LOCK implementation acquired X metadata locks
on the tables to be flushed. Since acquiring these locks requires
the global IX lock, the statement was incompatible with the global
read lock.
This patch addresses the problem by using the SNW metadata lock
type for the tables to be flushed by FLUSH TABLES <list> WITH
READ LOCK. It is OK to acquire them without the global IX lock
as long as we do not try to upgrade those locks. Since SNW
locks allow concurrent statements using the same table, FLUSH
TABLES <list> WITH READ LOCK now has to wait, after acquiring
metadata locks, until old versions of the tables to be flushed
go away. Since such waiting can lead to deadlocks, the MDL
deadlock detector was extended to take waits for flush into
account and resolve such deadlocks.
As a bonus, the code in open_tables() which was responsible for
waiting for old versions of tables to go away was refactored.
Now, when we encounter an old version of a table in open_table(),
we do not back off and wait for all old versions to go away,
but instead wait for this particular table to be flushed.
This approach, supported by deadlock detection, should reduce the
number of scenarios in which FLUSH TABLES aborts concurrent
multi-statement transactions.
Note that an active FLUSH TABLES <list> WITH READ LOCK still
blocks a concurrent FLUSH TABLES WITH READ LOCK statement,
as the former keeps tables open and thus prevents the
latter statement from flushing them.
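For illustration, the kind of two-connection scenario that now works
(the table name is hypothetical):

  -- connection 1
  FLUSH TABLES WITH READ LOCK;      -- acquires the global read lock
  -- connection 2
  FLUSH TABLES t1 WITH READ LOCK;   -- previously blocked on the global
                                    -- read lock; with SNW metadata locks
                                    -- it can now proceed
  UNLOCK TABLES;
  -- connection 1
  UNLOCK TABLES;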
This patch also fixes Bug#55452 "SET PASSWORD is
replicated twice in RBR mode".
The goal of this patch is to remove the release of
metadata locks from close_thread_tables().
This is necessary to not mistakenly release
the locks in the course of a multi-step
operation that involves multiple close_thread_tables()
or close_tables_for_reopen().
By the same token, move statement commit outside of
close_thread_tables().
Other cleanups:
Cleanup COM_FIELD_LIST.
Don't call close_thread_tables() in COM_SHUTDOWN -- there
are no open tables there that can be closed (we leave
the locked tables mode in THD destructor, and this
close_thread_tables() won't leave it anyway).
Make open_and_lock_tables() and open_and_lock_tables_derived()
call close_thread_tables() upon failure.
Remove the calls to close_thread_tables() that are now
unnecessary.
Simplify the back off condition in Open_table_context.
Streamline metadata lock handling in LOCK TABLES
implementation.
Add asserts to ensure correct life cycle of
statement transaction in a session.
Remove a piece of dead code that has also become redundant
after the fix for Bug 37521.
The problem was that the optimize method of the ARCHIVE storage
engine was not preserving the FRM embedded in the ARZ file when
rewriting the ARZ file for optimization. The ARCHIVE engine stores
the FRM in the ARZ file so it can be transferred from machine to
machine without also copying the FRM -- the engine restores the
embedded FRM during discovery.
The solution is to copy over the FRM when rewriting the ARZ file.
In addition, some initial error checking is performed to ensure
garbage is not copied over.
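For illustration, a minimal sketch of the affected operation (the table
name is hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
  INSERT INTO t1 VALUES (1), (2), (3);
  OPTIMIZE TABLE t1;  -- rewrites the ARZ file; the embedded FRM is now
                      -- copied over instead of being lost
  FLUSH TABLES;
  SELECT * FROM t1;   -- discovery can still restore the table definition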
table copy".
This patch only adds a test case, as the bug itself was addressed
by Ramil's fix for Bug#50946 "fast index creation still seems
to copy the table".
The first problem was that SHOW CREATE TRIGGER took a stronger metadata
lock than required. This caused the statement to be blocked when it was
not needed. For example, LOCK TABLE WRITE in one connection would block
SHOW CREATE TRIGGER in another connection.
Another problem was that a SHOW CREATE TRIGGER statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TRIGGER is an
information statement. The consequence was that SHOW CREATE TRIGGER
was able to block other connections from accessing the table
(e.g. using ALTER TABLE).
This patch fixes the problem by changing SHOW CREATE TRIGGER to take
an MDL_SHARED_HIGH_PRIO metadata lock, similar to what is already done
for SHOW CREATE TABLE. The patch also changes SHOW CREATE TRIGGER to
explicitly release any metadata locks taken by the statement after
it completes.
Test case added to show_check.test.
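For illustration, the kind of blocking scenario described above (the table
and trigger names are hypothetical):

  -- connection 1
  LOCK TABLE t1 WRITE;
  -- connection 2
  SHOW CREATE TRIGGER t1_bi;   -- previously blocked by the write lock;
                               -- with the weaker metadata lock it now
                               -- proceeds
  -- connection 1
  UNLOCK TABLES;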
A change in the default values of some config parameters
caused this test to fail, adjust the test and make it more
robust so it does not fail for the same reason in the future.
concurrent SHOW CREATE
The problem was that a SHOW CREATE TABLE statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TABLE is an
information statement.
The consequence was that SHOW CREATE TABLE was able to block other
connections from accessing the table (e.g. using ALTER TABLE).
This patch fixes the problem by explicitly releasing any metadata
locks taken by SHOW CREATE TABLE after the statement completes.
Test case added to show_check.test.
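For illustration, the kind of scenario described above (the table name is
hypothetical):

  -- connection 1
  START TRANSACTION;
  SHOW CREATE TABLE t1;              -- its metadata lock is now released
                                     -- when the statement completes
  -- connection 2
  ALTER TABLE t1 ADD COLUMN b INT;   -- previously this could block until
                                     -- connection 1 ended its transaction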
SHOW DATABASES LIKE ... was not converting the pattern to lowercase for
the comparison, as the documentation suggests it should.
Fixed it to behave similarly to SHOW TABLES LIKE ... and updated the
lowercase_table2 test case, which was failing on Mac OS X.
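For illustration, a hedged sketch of the intended behaviour (the database
name is hypothetical), assuming a system where database names are stored
in lowercase:

  CREATE DATABASE MixedCase;        -- stored as 'mixedcase' on such systems
  SHOW DATABASES LIKE 'MixedCase';  -- the pattern is now lowercased for the
                                    -- comparison, as SHOW TABLES LIKE does,
                                    -- so the database is found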
The reason for the bug above is unclear, but the following changes
- Modify pfs_upgrade so that its result is easier to analyze in case something fails.
- Fix several minor weaknesses which could cause a succeeding test (either an
already existing one or one to be developed later) to fail because of imperfect
cleanup, too slowly disconnecting sessions, etc.
should either fix the bug, reduce its probability, or at least
make the analysis of failures easier.
table with active trx
Essentially, the problem is that InnoDB does an implicit commit
when a cursor (table handler) is unlocked/closed, creating
a dissonance between the transaction state within the server
layer and the storage engine layer. Theoretically, a statement
transaction can encompass several table instances in a similar
manner to a multiple statement transaction, hence it does not
make sense to limit a statement transaction to the lifetime of
the table instances (cursors) used within it.
Since this particular instance of the problem is only triggerable
on 5.1 and is masked on 5.5 due to 2PC being skipped (the assertion is
in the prepare phase of 2PC), the solution (which is less risky) is
to explicitly end the transaction before the cached table is unlocked
on rename table.
The patch is to be null merged into trunk.
Problem: when SHOW BINLOG EVENTS was issued, it increased the value of
@@session.max_allowed_packet. This allowed a non-root user to increase
the amount of memory used by her thread arbitrarily. Thus, it removes
the bound on the amount of system resources used by a client, so it
presents a security risk (DoS attack).
Fix: it is correct to increase the value of @@session.max_allowed_packet
while executing SHOW BINLOG EVENTS (see Bug#30435). However, the
increase should only be temporary. Thus, the fix is to restore the value
when SHOW BINLOG EVENTS ends.
The value of @@session.max_allowed_packet is also increased in
mysql_binlog_send (i.e., the binlog dump thread). It is not clear if this
can cause any trouble, since normally the client that issues
COM_BINLOG_DUMP will not issue any other commands that would be affected
by the increased value of @@session.max_allowed_packet. However, we
restore the value just in case.
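For illustration, a way to observe the intended behaviour (the values shown
are hypothetical):

  SELECT @@session.max_allowed_packet;   -- e.g. 1048576
  SHOW BINLOG EVENTS;                    -- the value may be raised
                                         -- internally while this executes
  SELECT @@session.max_allowed_packet;   -- with the fix, the original
                                         -- value (1048576) is restored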
This bug is a design flaw of the fix for Bug#33546. That fix assumed that an
item can be used in only one comparison context, but actually this isn't the
case. Item_cache_datetime is used to store the result of MIN/MAX aggregate
functions. Because Arg_comparator always compares datetime values as INTs when
possible, Item_cache_datetime most of the time caches only the INT value. But
since all datetime values have the STRING result type, MIN/MAX functions are
asked for a STRING value when the result is being sent to a client.
Item_cache_datetime was designed to avoid conversions and to fetch the
INT/STRING values from an underlying item, but at the moment the value is
requested the underlying item no longer holds it, thus a wrong result is
returned.
Besides that, MIN/MAX aggregate functions were wrongly initializing the cached
result, and this also led to a wrong result.
The Item::has_compatible_context helper function is added. It checks whether
this item and a given item have the same comparison context or can be compared
as DATETIME values by Arg_comparator. The equality propagation optimization is
adjusted to take into account that items which are compared as DATETIME
can have different comparison contexts.
Item_cache_datetime now converts the cached INT value to a correct STRING
DATETIME value by means of the number_to_datetime and my_TIME_to_str functions.
The Arg_comparator::set_cmp_context_for_datetime helper function is added.
It sets the comparison context of items being compared as DATETIMEs to INT if
the items will be compared as longlong.
The Item_sum_hybrid::setup function now correctly initializes its result
value.
In order to avoid unnecessary conversions, Item_sum_hybrid now states that it
can provide a correct longlong value if the item being aggregated can do so
too.
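For illustration, a hedged sketch of the kind of query affected (the table
and column names are hypothetical): a MIN/MAX over a DATETIME column is
cached internally as an INT for comparisons, but must be delivered to the
client as a DATETIME string.

  CREATE TABLE t1 (a DATETIME);
  INSERT INTO t1 VALUES ('2001-01-01 10:00:00'), ('2001-01-02 10:00:00');
  -- The cached INT representation is now converted back to a proper
  -- DATETIME string when the result is sent to the client.
  SELECT MIN(a), MAX(a) FROM t1;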
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.