The patch for Bug 26379 (Combination of FLUSH TABLE and
REPAIR TABLE corrupts a MERGE table) fixed this bug too.
However, it revealed a new bug: flushing a MERGE table at the
moment when it is between open and attach of its children
crashed the server.
The flushing thread wants to abort locks on the flushed table.
It calls ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
on the TABLE object of the other thread.
Changed ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
to accept non-attached children. ha_myisammrg::lock_count() returns
the number of MyISAM tables in the MERGE table so that the memory
allocation done by get_lock_data() is done correctly, even if the
children become attached before ha_myisammrg::store_lock() is
called. ha_myisammrg::store_lock() will not return any lock if the
children are not attached.
This is, however, a change in the handler interface: lock_count()
can now return a higher number of locks than store_lock() actually
stores. This is safer than the reverse would be.
get_lock_data() in the SQL layer is adjusted accordingly. It sets
MYSQL_LOCK::lock_count based on the number of locks returned by
the handler::store_lock() calls, not based on the numbers returned
by the handler::lock_count() calls. The latter are only used for
allocation of memory now.
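A minimal sketch of the adjusted contract between get_lock_data()
and the handler (simplified signatures, not the actual server code):
lock_count() only sizes the allocation, and the real lock count is
taken from how far store_lock() advances the output pointer.

    struct THR_LOCK_DATA;  // opaque per-table lock descriptor

    struct handler {
        virtual unsigned lock_count() const = 0;  // upper bound only
        virtual THR_LOCK_DATA **store_lock(THR_LOCK_DATA **to) = 0;
        virtual ~handler() {}
    };

    struct MYSQL_LOCK {
        THR_LOCK_DATA **locks;
        unsigned lock_count;  // number of locks actually stored
    };

    MYSQL_LOCK *get_lock_data(handler **tables, unsigned n_tables) {
        unsigned upper_bound = 0;
        for (unsigned i = 0; i < n_tables; i++)
            upper_bound += tables[i]->lock_count();  // may overestimate

        MYSQL_LOCK *sql_lock = new MYSQL_LOCK;
        sql_lock->locks = new THR_LOCK_DATA *[upper_bound];

        THR_LOCK_DATA **to = sql_lock->locks;
        for (unsigned i = 0; i < n_tables; i++)
            to = tables[i]->store_lock(to);  // may store fewer locks

        // Count what was actually stored, not what was promised.
        sql_lock->lock_count = (unsigned)(to - sql_lock->locks);
        return sql_lock;
    }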
No test case. The test suite cannot reliably run FLUSH between
lock_count() and store_lock() of another thread. The bug report
contains a program that can repeat the problem with some
probability.
Change LAST_EXECUTED to the execution start time, instead of the
execution completion time. This ensures the END time is always the
same as or later than the LAST_EXECUTED time.
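A minimal sketch of the changed ordering (hypothetical field and
function names, not the actual event scheduler code):

    #include <ctime>

    struct Event {
        std::time_t last_executed;
        std::time_t end_time;
    };

    void execute_event(Event &ev, void (*body)()) {
        ev.last_executed = std::time(0);  // record the start time ...
        body();
        ev.end_time = std::time(0);       // ... so END never precedes it
    }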
Problem: when altering a table to a partitioned table, the server
does not see it as a change of engine.
Solution: if the ALTER includes partitioning, check whether it is
possible to change engines (e.g. is the table referenced by a FK?)
(also fixes the bugs: Bug#29320, Bug#29493 and Bug#30536)
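A minimal sketch of the added check (hypothetical types and names,
not the actual ALTER TABLE code path):

    struct Table {
        bool alter_changes_partitioning;
        bool referenced_by_foreign_key;
    };

    // An ALTER that changes partitioning is treated like an engine
    // change, which must be refused while a FK references the table.
    bool partitioning_alter_allowed(const Table &t) {
        if (t.alter_changes_partitioning && t.referenced_by_foreign_key)
            return false;  // changing engines would break the FK
        return true;
    }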
Problem: Partitioning did not handle unordered scans correctly
for engines with unordered read order.
Solution: do not stop scanning if a record is out of range, since
there can be more records within the range afterwards.
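A minimal sketch of the fixed scan loop (hypothetical stand-ins for
the handler interface, not the actual partitioning code):

    #include <vector>

    struct Row { int key; };

    static bool record_in_range(const Row &r) {
        return r.key >= 10 && r.key < 20;  // example range
    }

    // With unordered read order, a row outside the range must be
    // skipped, not treated as the end of the range.
    void scan(const std::vector<Row> &rows, std::vector<Row> &out) {
        for (const Row &r : rows) {
            if (!record_in_range(r))
                continue;  // was effectively a break -- the bug
            out.push_back(r);
        }
    }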
Note: this is the patch that fixes the bug, but since no storage
engine with unordered read order ships with MySQL 5.1 (Falcon comes
in 6.0), there are no test cases (they are in a separate patch that
only goes into 6.0).
numbers into char fields" and bug #12860 "Difference in zero padding of
exponent between Unix and Windows"
Rewrote the code that determines what 'precision' argument should be
passed to sprintf() to fit the string representation of the input number
into the field.
We get finer control over conversion by pre-calculating the exponent, so
we are able to determine which conversion format, 'e' or 'f', will be
used by sprintf().
We also remove the leading zero from the exponent on Windows to make it
compatible with the sprintf() output on other platforms.
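A minimal sketch of the idea (simplified, not the server's
Field_string conversion code; the sign is ignored for brevity):

    #include <cmath>
    #include <cstdio>

    // Fit the string representation of nr into field_length
    // characters, choosing between %f and %e based on the
    // pre-calculated decimal exponent.
    void double_to_field(double nr, char *buf, std::size_t buf_size,
                         int field_length) {
        int exp = 0;
        if (nr != 0.0)
            exp = (int) std::floor(std::log10(std::fabs(nr)));

        if (exp >= 0 && exp <= field_length - 2) {
            // Fixed-point fits: exp + 1 digits before the point,
            // one character for the point, the rest for the fraction.
            int precision = field_length - exp - 2;
            std::snprintf(buf, buf_size, "%.*f", precision, nr);
        } else {
            // Scientific notation: reserve room for "d." and "e+dd".
            int precision = field_length - 6;
            if (precision < 0) precision = 0;
            std::snprintf(buf, buf_size, "%.*e", precision, nr);
        }
    }

On Windows, sprintf() historically printed a three-digit exponent
(1e+007 instead of 1e+07); the patch strips that extra leading zero
so the output matches other platforms.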
The problem was that THD::killed was reset after a command was
read from the socket, but before it was actually handled. That led
to a race: if another KILL statement was issued for this connection
in the middle of reading from the socket and processing a command,
the THD::killed state would be cleared.
The fix is to move this cleanup into the net_send_error() function.
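A minimal sketch of the new placement (simplified, not the server's
actual net_send_error()): the flag is cleared only after the error
has been reported, so a KILL that arrives while the next command is
being read from the socket is not lost.

    #include <cstdio>

    struct THD {
        enum killed_state { NOT_KILLED, KILL_QUERY, KILL_CONNECTION };
        killed_state killed;
    };

    // Hypothetical stand-in for the packet-level error writer.
    static void net_send_error_packet(THD *, unsigned sql_errno,
                                      const char *err) {
        std::fprintf(stderr, "ERROR %u: %s\n", sql_errno, err);
    }

    void net_send_error(THD *thd, unsigned sql_errno, const char *err) {
        net_send_error_packet(thd, sql_errno, err);
        if (thd->killed == THD::KILL_QUERY)
            thd->killed = THD::NOT_KILLED;  // safe to reset only now
    }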
A sample test case exists in binlog_killed.test:
- connection 1: start a new transaction on table t1;
- connection 2: send query to the server (w/o waiting for the
result) to update data in table t1 -- this query will be blocked
since there is unfinished transaction;
- connection 1: kill query in connection 2 and finish the transaction;
- connection 2: get result of the previous query -- it should be
the "query-killed" error.
This test, however, contains a race condition which cannot be fixed
with the current protocol: there is no way to guarantee that the
server will receive and start processing the query in connection 2
(which is intended to get blocked) before the KILL command (sent in
the connection 1) will arrive. In other words, there is no way to
ensure that the following sequence will not happen:
- connection 1: start a new transaction on table t1;
- connection 1: kill query in connection 2 and finish the transaction;
- connection 2: send query to the server (w/o waiting for the
result) to update data in table t1 -- this query will be blocked
since there is unfinished transaction;
- connection 2: get result of the previous query -- the query will
succeed.
So, there is no test case for this bug, since it's impossible
to write a reliable test case under the current circumstances.
The parser rejects valid INTERVAL() expressions when they are
associated with arithmetic operators. The problem is that the way
the expression and interval grammar rules were organized caused
shift/reduce conflicts. The solution is to tweak the interval rules
to avoid the shift/reduce conflicts by removing the broken
interval_expr rule and explicitly specifying its content where
necessary.
Original fix by Davi Arnaut, revised and improved rules by Marc Alff
This bug is actually two bugs in one: one is CREATE TRIGGER under
LOCK TABLES, and the other is CREATE TRIGGER under LOCK TABLES
simultaneous with a FLUSH TABLES WITH READ LOCK (global read lock).
Both situations could lead to a server crash or deadlock.
The first problem arises from the fact that when under LOCK TABLES, if the
table is in the set of locked tables, the table is already open and it doesn't
need to be reopened (it is not a placeholder). Also in this case, if
the table is not write locked, an exclusive lock can't be acquired
because of a possible
deadlock with another thread also holding a (read) lock on the table. The
second issue arises from the fact that a thread should never wait for
a global read lock while holding any locked tables, because the global
read lock is waiting for these tables, and this leads to a circular
wait deadlock.
The solution for the first case is to check if the table is write
locked, upgrade the write lock to an exclusive lock, and fail for
tables that are not write locked. Grabbing the exclusive lock in
this case also ensures that the table is opened only by the calling
thread. The second
issue is partly fixed by not waiting for the global read lock if the thread
is holding any locked tables.
The second issue is only partly addressed in this patch because it turned
out to be much wider and also affects other DDL statements. Reported as
Bug#32395
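A minimal sketch of the two checks (hypothetical helper names, not
the actual trigger code path):

    struct THD { bool has_locked_tables; };
    struct TABLE { bool write_locked; };

    // First fix: under LOCK TABLES, CREATE TRIGGER may proceed only
    // on a write-locked table, whose lock is then upgraded to an
    // exclusive one; anything else risks a deadlock with readers.
    bool check_and_upgrade_lock(THD *thd, TABLE *table) {
        if (!thd->has_locked_tables)
            return true;   // normal path: not under LOCK TABLES
        if (!table->write_locked)
            return false;  // fail instead of risking a deadlock
        // ... upgrade the write lock to an exclusive lock here ...
        return true;
    }

    // Second fix: never wait for the global read lock while holding
    // locked tables -- the GRL is waiting for those tables, so the
    // wait would be circular.
    void wait_for_global_read_lock(THD *thd) {
        if (thd->has_locked_tables)
            return;  // skip the wait entirely
        // ... block here until the global read lock is released ...
    }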
Problem: passing a non-constant name to the NAME_CONST function results in a crash.
Fix: check the NAME_CONST name argument; return a fake item type
if we got non-constant argument(s).
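A minimal sketch of the fix (a simplified Item hierarchy, not the
server's item.cc):

    // Minimal stand-ins for the Item hierarchy.
    struct Item {
        enum Type { CONST_ITEM, FUNC_ITEM };
        virtual bool basic_const_item() const = 0;
        virtual Type type() const = 0;
        virtual ~Item() {}
    };

    struct Item_name_const : Item {
        Item *name_item;
        Item *value_item;
        Item_name_const(Item *n, Item *v)
            : name_item(n), value_item(v) {}

        bool basic_const_item() const { return true; }

        // Report a fake, harmless type when an argument is not a
        // constant, so no caller tries to read a literal value that
        // is not there.
        Type type() const {
            if (!name_item->basic_const_item() ||
                !value_item->basic_const_item())
                return FUNC_ITEM;
            return value_item->type();
        }
    };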