1. A bad error message was given when an attempt was made to use a
MERGE table with an InnoDB child table.
2. After selecting from a correct MERGE table and then altering one
of the children to InnoDB, incorrect results were returned.
These bugs have been fixed with the patch for bug 26379 (Combination
of FLUSH TABLE and REPAIR TABLE corrupts a MERGE table).
For verification, I added the test case from the bug report.
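For illustration, a minimal SQL sketch of the scenario (table and
column names are made up, not taken from the actual test case):

  CREATE TABLE t1 (c1 INT) ENGINE=MyISAM;
  CREATE TABLE t2 (c1 INT) ENGINE=MyISAM;
  CREATE TABLE m1 (c1 INT) ENGINE=MRG_MYISAM UNION=(t1,t2);
  SELECT * FROM m1;              -- works while all children are MyISAM
  ALTER TABLE t2 ENGINE=InnoDB;  -- child is no longer MyISAM
  SELECT * FROM m1;              -- must fail with a clear error, not
                                 -- return incorrect results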
filesort() calls file->estimate_rows_upper_bound() to allocate
internal buffers. If this function returns a value smaller than the
number of rows that will be returned later in find_all_keys(), the
server can crash.
The present estimation for FEDERATED always returns 0 if the link
points to a VIEW.
Fixed by implementing ha_federated::estimate_rows_upper_bound() to
return the maximum possible number of rows.
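A hedged sketch of a query pattern that could hit this (host, user,
and object names are hypothetical):

  -- on the remote server
  CREATE VIEW v1 AS SELECT * FROM t1;
  -- on the local server, a FEDERATED table linked to the remote view
  CREATE TABLE f1 (c1 INT) ENGINE=FEDERATED
    CONNECTION='mysql://user@remote_host/db/v1';
  SELECT * FROM f1 ORDER BY c1;  -- the ORDER BY goes through
                                 -- filesort(), which sized its
                                 -- buffers from the 0 estimate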
The parser rejected valid INTERVAL() expressions when they were
combined with arithmetic operators. The problem was that the way the
expression and interval grammar rules were organized caused
shift/reduce conflicts.
The solution is to tweak the interval rules to avoid shift/reduce
conflicts by removing the broken interval_expr rule and explicitly
specifying its contents where necessary.
Original fix by Davi Arnaut, revised and improved rules by Marc Alff
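An illustrative expression of the kind described (not necessarily
the exact case from the bug report):

  SELECT INTERVAL(3, 1, 5, 10) + 1;
  -- INTERVAL() the function, mixed with an arithmetic operator;
  -- this must parse, while temporal arithmetic such as
  -- '1997-12-31' + INTERVAL 1 DAY keeps working as well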
The following clarification should be made in The Manual:
Standard SQL is quite clear that, if new columns are added
to a table after a view on that table is created with
"select *", the new columns will not become part of the view.
In all cases, the view definition (view structure) is frozen
at CREATE time, so changes to the underlying tables do not
affect the view structure.
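For example (illustrative names):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT * FROM t1;  -- v1's column list is frozen here
  ALTER TABLE t1 ADD COLUMN b INT;
  SELECT * FROM v1;                    -- still returns only column a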
Default values of variables were not subject to upper/lower bounds
and step, while setting variables was. Bounds and step are now also
applied to defaults; defaults are corrected quietly, whereas values
given by the user are corrected with a warning as needed. Lastly,
very large values could wrap around, starting from 0 again. They are
now bounded at the maximum value for the respective data type if no
lower maximum is specified in the variable's definition.
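A hedged SQL illustration (sort_buffer_size is just one example of a
bounded variable; the exact minimum and warning text depend on the
server version):

  SET GLOBAL sort_buffer_size = 5;   -- below the variable's minimum
  SHOW WARNINGS;                     -- e.g. "Truncated incorrect
                                     -- sort_buffer_size value: '5'"
  SELECT @@global.sort_buffer_size;  -- corrected to the minimum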
This bug is actually two bugs in one: one is CREATE TRIGGER under
LOCK TABLES, and the other is CREATE TRIGGER under LOCK TABLES
running simultaneously with a FLUSH TABLES WITH READ LOCK (global
read lock). Both situations could lead to a server crash or a
deadlock.
The first problem arises from the fact that, under LOCK TABLES, if
the table is in the set of locked tables it is already open and does
not need to be reopened (it is not a placeholder). Also, in this
case, if the table is not write-locked, an exclusive lock can't be
acquired because of a possible deadlock with another thread that also
holds a (read) lock on the table. The second issue arises from the
fact that one should never wait for a global read lock while holding
any locked tables, because the global read lock is waiting for those
tables and this leads to a circular-wait deadlock.
The solution for the first case is to check whether the table is
write-locked and upgrade the write lock to an exclusive lock, and to
fail otherwise for non-write-locked tables. Grabbing the exclusive
lock in this case also ensures that the table is opened only by the
calling thread. The second issue is partly fixed by not waiting for
the global read lock if the thread is holding any locked tables.
The second issue is only partly addressed in this patch because it
turned out to be much wider and also affects other DDL statements.
Reported as Bug#32395.
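A hedged sketch of the first scenario (table and trigger names are
made up):

  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 READ;
  -- t1 is locked but not write-locked, so the exclusive lock needed
  -- by CREATE TRIGGER cannot be safely acquired; after the fix this
  -- statement fails cleanly instead of crashing or deadlocking
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 1;
  UNLOCK TABLES;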
Killing a CREATE TABLE ... LIKE source_table statement that was
waiting for a name-lock on the source table caused a bad lock
interaction.
mysql_create_like_table() had a bug: if the connection was killed
while waiting for the name-lock on the source table, it would jump to
the wrong error path and try to unlock the source table and
LOCK_open, although neither was locked.
The solution is to simply return when the name-lock request is
killed; this is safe because no lock was acquired and no cleanup is
needed.
The original bug report also describes other problems related to this
scenario, but they are either already fixed in 5.1 or will be
addressed separately (see the bug report for details).
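A hedged sketch of the interaction, assuming a read lock in another
session is what makes the name-lock request wait (three sessions;
names and the session id are placeholders):

  -- session 1: hold a lock that keeps t1 open
  LOCK TABLES t1 READ;
  -- session 2: blocks waiting for a name-lock on t1
  CREATE TABLE t2 LIKE t1;
  -- session 3: kill session 2's statement while it waits
  KILL QUERY <session-2-id>;
  -- before the fix, the killed thread took the wrong error path and
  -- tried to release locks it never held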
The bug was that, for ordered index scans, ha_partition::index_init()
did not put the index columns into table->read_set if the underlying
storage engine did not have the HA_PARTIAL_COLUMN_READ flag.
This caused an assertion failure when handle_ordered_index_scan()
tried to sort the records according to index order.
Fixed by making ha_partition::index_init() put the index columns into
table->read_set for all ordered scans.
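A hedged sketch of a query shape that exercises an ordered index scan
over a partitioned table (the schema is illustrative; whether it
triggered the assertion depended on the underlying engine's
HA_PARTIAL_COLUMN_READ setting):

  CREATE TABLE tp (a INT, b INT, KEY (b))
    PARTITION BY HASH (a) PARTITIONS 4;
  INSERT INTO tp VALUES (1,10),(2,20),(3,30);
  -- reading rows in index order across partitions goes through
  -- handle_ordered_index_scan()
  SELECT b FROM tp ORDER BY b;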
There's currently no way of knowing the determinism of a UDF, and the
optimizer and the sequence() UDF were making wrong assumptions about
what the is_const member means.
In addition, there was no implementation of update_used_tables(),
causing the optimizer to overwrite the information returned by the
<udf>_init function.
Fixed by aligning the assumptions about the semantics of is_const and
providing an implementation of update_used_tables().
Added a TODO item for the UDF API change needed for a better
implementation.
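A hedged illustration of the user-visible symptom (sequence() is the
example UDF shipped in udf_example.c; the table name is made up):

  CREATE FUNCTION sequence RETURNS INTEGER SONAME 'udf_example.so';
  -- sequence() returns an incrementing number per call; if the
  -- optimizer wrongly treats it as constant, every row gets the
  -- same value instead of 1, 2, 3, ...
  SELECT sequence() FROM t1;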
Problem: passing a non-constant name to the NAME_CONST function
resulted in a crash.
Fix: check the NAME_CONST name argument and return a fake item type
if we got non-constant argument(s).
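For illustration (the exact error message may differ by version):

  SELECT NAME_CONST('myname', 14);  -- OK: constant name and value
  SELECT NAME_CONST(RAND(), 14);    -- non-constant name: previously
                                    -- crashed the server, now rejected
                                    -- with an "Incorrect arguments to
                                    -- NAME_CONST" error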