SELECT ... WHERE ... IN (NULL, ...) does a full table scan,
even though the same query without the NULL uses an efficient range scan.
The fix for bug 18360 introduced an optimization:
if
1) all right-hand arguments of the IN function are constants
2) result types of all right argument items are compatible
enough to use the same single comparison function to
compare all of them to the left argument,
then
we can convert the right-hand list of constant items to an array
of equally-typed constant values for further
QUICK index access etc. (see Item_func_in::fix_length_and_dec()).
The Item_null constant item objects have the STRING_RESULT
result type, and Item_func_in::fix_length_and_dec() already knows
how to skip NULLs in the right-hand list, so this improvement
efficiently optimizes IN function calls with a mixed right-hand list
of NULLs and string constants. However, the optimization doesn't
affect mixed lists of NULLs and integers, floats etc., because the
differing result types leave no unique common comparator.
A new optimization has been added to ignore the result type
of NULL constants in the static analysis of mixed right-hand lists.
This is safe, because at the execution phase we take the presence
of NULLs into account anyway.
1. The collect_cmp_types() function has been modified to optionally
ignore NULL constants in the item list.
2. NULL-skipping code of the Item_func_in::fix_length_and_dec()
function has been modified to work not only with in_string
vectors but also with in_vectors of other types.
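For illustration, a minimal sketch of the affected pattern (table and
column names are hypothetical):

  CREATE TABLE t1 (a INT, KEY (a));
  -- Uses the range/QUICK access path (single comparator):
  SELECT * FROM t1 WHERE a IN (1, 2, 3);
  -- Before this change, the NULL's STRING_RESULT type broke the
  -- common-comparator analysis and forced a full table scan:
  SELECT * FROM t1 WHERE a IN (NULL, 1, 2, 3);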
The problem is that only one autoinc value is associated with
the query when binlogging. If more than one autoinc value is used
in the query, the autoinc values after the first one can be inserted
wrongly on the slave, so the autoinc values can become inconsistent
between master and slave.
The problem is resolved by marking all statements that invoke
a trigger, or call a function, that updates autoinc fields as unsafe;
in mixed mode such statements switch to row format. Strictly speaking,
a statement is safe if just one autoinc value is used in the
sub-statement, but it is impossible to check how many autoinc values
a sub-statement uses.
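A minimal sketch of the unsafe pattern, assuming hypothetical tables
t1/t2 and trigger tr1:

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  CREATE TABLE t2 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  CREATE TRIGGER tr1 AFTER INSERT ON t1
    FOR EACH ROW INSERT INTO t2 (a) VALUES (NEW.a);
  -- Uses two autoinc values (t1.id and t2.id), but only one is
  -- associated with the query in the binlog; with this fix the
  -- statement is marked unsafe and logged in row format in mixed mode.
  INSERT INTO t1 (a) VALUES (1);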
On Mac OS X or Windows, sending a SIGHUP to the server or an
asynchronous flush (triggered by flush_time) would cause the
server to crash.
The problem was that a hook used to detach client API handles
wasn't prepared to handle cases where the thread does not have
an associated session.
The solution is to verify whether the thread has an associated
session before trying to detach a handle.
'flush tables' crashes
The server crashes when 'show procedure status' and 'flush tables' are
run concurrently.
This is caused by the mysql.proc table being added twice to the
list of tables to lock, although the current locking API assumes
that this does not happen.
No test case is submitted because of the nature of the crash, which
is currently difficult to reproduce in a deterministic way.
This is a backport from 5.1.
Backport from 6.0 to 5.1.
Only those sync points that are used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
open_tables(...)
DEBUG_SYNC(thd, "after_open_tables");
lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Loop over waiting for the global condition until
the global value matches the wait-for signal.
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
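As a quick illustration, a test could activate the sync point shown
above like this (debug builds only; the signal names here are made up):

  SET DEBUG_SYNC= 'after_open_tables SIGNAL opened WAIT_FOR flushed';
  -- This connection signals 'opened' at the sync point, then blocks
  -- there until another connection runs:
  SET DEBUG_SYNC= 'now SIGNAL flushed';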
replication
The MySQL server uses the wrong lock type (always TL_READ instead of
TL_READ_NO_INSERT when appropriate) for tables used in
subqueries of an UPDATE statement. In some cases this leads to
broken replication, as statements are written to the binlog in the
wrong order.
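A hypothetical statement of the affected shape:

  UPDATE t1 SET a = a + 1 WHERE b IN (SELECT b FROM t2);
  -- t2 was locked with TL_READ, so a concurrent INSERT into t2 could
  -- slip in and be binlogged in the wrong order relative to the UPDATE.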
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as a table data modification,
preventing the immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was done against the max display length instead of the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two ENUM
or SET fields are compatible, by comparing the pack lengths and
performing a limited comparison of the member values.
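A sketch with a hypothetical table, assuming the fix is applied:

  CREATE TABLE t1 (c ENUM('a','b'));
  ALTER TABLE t1 MODIFY c ENUM('a','b','c');  -- appends a member:
                                              -- metadata-only, fast
  ALTER TABLE t1 MODIFY c ENUM('a','x','c');  -- changes an existing
                                              -- member: still copies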
The bug is not related to MERGE tables or triggers. A more correct
description would be 'assertion on multi-table UPDATE + NATURAL JOIN +
MERGEABLE VIEW'.
At the PREPARE stage (see the test case) we call the
mark_common_columns() function, which creates the ON condition for the
NATURAL JOIN and sets the appropriate table read_set bitmaps for the
fields used in the ON condition.
At the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field has
already been processed, and the related item already fixed, before
setup_conds() runs (because it is an updated field), so setup_conds()
cannot set the read_set bitmap for it.
The fix is to set the read_set bitmap for the appropriate table field
even if the Item_direct_view_ref item which represents a reference to
this field is already fixed.
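A rough sketch of the failing shape (not the actual test case; all
names are made up):

  CREATE TABLE t1 (f1 INT, f2 INT);
  CREATE TABLE t2 (f1 INT, f3 INT);
  CREATE VIEW v1 AS SELECT f1, f3 FROM t2;   -- mergeable view
  PREPARE s FROM 'UPDATE t1 NATURAL JOIN v1 SET f2 = 1';
  EXECUTE s;   -- asserted at the EXECUTE stage before the fix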
"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with IGNORE_SPACES and versioned comments.
We now completely resynthesize the query for LOAD DATA for binlog (which among other
things normalizes them somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
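A sketch of a statement that the old mix-and-match approach could
mangle (file and table names are made up):

  LOAD DATA /*!50000 CONCURRENT */ INFILE 'data.txt' INTO TABLE t1;
  -- The binlogged statement is now rebuilt entirely from the parsed
  -- representation, so the versioned comment cannot corrupt it.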
view definition
During SHOW CREATE VIEW there is no reason to 'anonymize'
errors that name objects that a user does not have access
to. Moreover, this was inconsistently implemented: for example,
base tables being referenced from a view appeared to be OK,
but views did not. The manual, on the other hand, is clear: if a
user has the SELECT and SHOW VIEW privileges, the view
definition is available to that user, period. The fix
changes the behavior to match the manual.
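For example (hypothetical user and view):

  GRANT SELECT, SHOW VIEW ON db1.v1 TO 'u1'@'localhost';
  -- As u1; now succeeds even if v1 references other views that u1
  -- cannot access:
  SHOW CREATE VIEW db1.v1;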
trigger, merge table
The problem with break statements is that they have very
local effects. Hence a break statement within the inner loop
of a nested-loops join caused execution to proceed to the
next table even though a serious error had occurred. The problem
was fixed by breaking out the inner loop into its own
method. The change allows all errors to terminate the
execution.
The errors that will now halt multi-DELETE execution
altogether are
- triggers returning errors
- handler errors
- server being killed
In RBR, a 'DROP TEMPORARY TABLE IF EXISTS ...' statement is binlogged
when the table does not exist.
In fact, a 'DROP TEMPORARY TABLE ...' statement should never be
binlogged in RBR, whether the table exists or not.
This patch addresses this by checking whether we are dropping a
temporary table or not, when building the custom drop statement.
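A sketch under RBR (table name is made up):

  SET SESSION binlog_format = 'ROW';
  -- Was written to the binlog before this patch; with the fix,
  -- DROP TEMPORARY TABLE is never binlogged in RBR:
  DROP TEMPORARY TABLE IF EXISTS tmp1;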
HA_ERR_WRONG_INDEX
In RBR, disabling keys on a slave table will break replication when
updating or deleting a record. When the slave thread tries to
find the row by searching in the storage engine, it checks
whether the table has a key. If it has one, the slave
thread uses it to search for the record.
Nonetheless, the slave only checks whether the key exists;
it does not verify that it is active. Should the key be
disabled (e.g., the DBA has issued ALTER TABLE ... DISABLE KEYS),
using it results in the error HA_ERR_WRONG_INDEX.
This patch addresses this issue by making the slave thread also
check whether the key is active or not before actually using it.
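For example, on the slave (hypothetical table):

  ALTER TABLE t1 DISABLE KEYS;
  -- Before this patch, a replicated UPDATE or DELETE on t1 would pick
  -- the disabled key and stop with HA_ERR_WRONG_INDEX; now the slave
  -- thread skips inactive keys.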
Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to quoting.
This is the server-side-only fix (in explain_filename); the change
in the InnoDB code from filename_to_tablename to explain_filename
must be done before the bug is fully fixed.
(when compiled with the Sun Studio compiler)
The Sun Studio compiler calls the destructors of stack
objects when pthread_exit() is called. That triggered an assertion
in the DBUG_ENTER()/DBUG_RETURN() validation logic (if DBUG_ENTER() is
used at the beginning of a function, all returns should be replaced
by DBUG_RETURN/DBUG_VOID_RETURN macros).
The fix is to use the DBUG_LEAVE macro explicitly.
The "socket" variable is not available on Windows even though
the --socket option can be used to specify the pipe name for
local connections that use a named pipe.
The solution is to ensure that the variable is always defined.
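For example, the following now returns a row on Windows as well,
reporting the configured pipe name where applicable:

  SHOW VARIABLES LIKE 'socket';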
checksum)"
The problem was that the checksum of the GEOMETRY type used memory
addresses in the computation, making it unrepeatable and thus useless.
(This patch is a backport from the 6.0 branch.)
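For example (hypothetical table with a GEOMETRY column):

  CHECKSUM TABLE t1;
  -- Before the fix, the result could differ from run to run on
  -- identical data because memory addresses leaked into the checksum.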
Despite copying the value of the old table's row type,
we don't always have to mark the row type as being specified.
InnoDB uses this to check whether it can do a fast ALTER TABLE
or not.
Fixed by correctly flagging the presence of row_type
only when it has actually changed.
Added a test case for bug 39200.
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and substituted it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it's not
set in all cases and doesn't contradict the check of depended_from).
Fixed by restoring the old condition as a complement to the
new one.
But there is no Last_IO_Error reported.
On the master, if a binary log event is larger than max_allowed_packet,
ER_MASTER_FATAL_ERROR_READING_BINLOG and the specific reason for this
error are sent to a slave when it requests a dump from the master,
leading the I/O thread to stop.
On a slave, the I/O thread stops when receiving a packet larger than
max_allowed_packet.
In both cases, however, no Last_IO_Error was reported.
This patch adds code to report Last_IO_Error and the exact reason
before stopping the I/O thread, and also reports the case where an
out-of-memory condition occurs while handling packets from the master.
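For example, the failure reason is now surfaced to the user
(illustrative, not exact output):

  SHOW SLAVE STATUS\G
  -- Last_IO_Error now carries the reason sent with
  -- ER_MASTER_FATAL_ERROR_READING_BINLOG instead of being empty.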
"insert into.. select * from"
When inserting into a partitioned table using 'insert into
<target> select * from <src>', read_buffer_size bytes of memory
are allocated for each partition in the target table.
This resulted in large memory consumption when the number of
partitions is high.
This patch introduces a new method which tries to estimate the
buffer size required for each partition and caps the total buffer
size used at 10 * read_buffer_size, or 11 * read_buffer_size in the
case of monotonic partition functions.
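For example, assuming the default read_buffer_size of 128K, inserting
into a table with 1024 partitions could previously allocate
1024 * 128K = 128M of buffers, whereas with the cap the total stays at
most 10 * 128K = 1.25M (or 11 * 128K for monotonic partition
functions).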