Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2
When killing a user session on the server, it's necessary to
interrupt (notify) the thread associated with the session that
the connection is being killed so that the thread is woken up
if waiting for I/O. On a few platforms (Mac, Windows and HP-UX)
where the SIGNAL_WITH_VIO_CLOSE flag is defined, this interruption
procedure is to asynchronously close the underlying socket of
the connection.
In order to enable this scheme, each connection-serving thread
registers its VIO (I/O interface) so that other threads can
access it and close the connection. But only the owner thread of
the VIO may delete it, so as to guarantee that other threads won't
see freed memory (the thread unregisters the VIO before deleting
it). A side note: closing the socket introduces a harmless race
that might cause a thread to attempt to read from a closed socket,
but this is deemed acceptable.
The problem is that this infrastructure was meant to be used only
by server threads, but the slave I/O thread was registering the
VIO of a mysql handle (a client API structure that represents a
connection to another server instance) as an active connection of
the thread. Under some circumstances, such as network failures,
the client API might destroy the VIO associated with a handle at
will, yet the VIO wouldn't be properly unregistered. This could
lead to accesses to freed data if a thread attempted to kill a
slave I/O thread whose connection was already broken.
There was an attempt to work around this by checking whether
the socket was being interrupted, but this hack didn't work as
intended due to the aforementioned race -- attempting to read
from the socket would yield a "bad file descriptor" error.
The solution is to add a hook to the client API that is called
from the client code before the VIO of a handle is deleted.
This hook allows the slave I/O thread to detach the active vio
so it does not point to freed memory.
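The sketch below is a minimal, standalone illustration of that hook
pattern; the type and function names (Vio, ConnHandle, before_vio_delete,
detach_active_vio) are hypothetical stand-ins, not the real client API
structures.

  // Hypothetical sketch of the "hook before VIO deletion" idea; not the
  // actual MySQL client/server code.
  #include <cstdio>

  struct Vio { int fd; };

  struct ConnHandle {
    Vio *vio;
    // Hook invoked by the client code just before it deletes the VIO, so the
    // thread that registered the VIO elsewhere can detach it first.
    void (*before_vio_delete)(ConnHandle *);
  };

  // Slave-I/O-thread side: the "active vio" registered for KILL handling.
  static Vio *active_vio = nullptr;

  static void detach_active_vio(ConnHandle *h) {
    if (active_vio == h->vio)
      active_vio = nullptr;        // stop pointing at soon-to-be-freed memory
  }

  // Client-API side: tear down the connection's VIO, calling the hook first.
  static void client_close_vio(ConnHandle *h) {
    if (h->before_vio_delete)
      h->before_vio_delete(h);     // let the owner unregister before the free
    delete h->vio;
    h->vio = nullptr;
  }

  int main() {
    ConnHandle h = { new Vio{42}, detach_active_vio };
    active_vio = h.vio;            // register as the thread's active connection
    client_close_vio(&h);          // e.g. the network-failure path in the client
    std::printf("active_vio %s\n", active_vio ? "dangles" : "was detached");
  }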
on SHOW CREATE TRIGGER + MERGE table
Problem: SHOW CREATE TRIGGER erroneously relies on the assumption
that a trigger has exactly one underlying table (which does not
hold for MERGE tables).
Fix: remove the erroneous assert().
In STATEMENT-based replication, a statement that failed on the master but
updated non-transactional tables is written to the binary log with the error
code appended to it. On the slave, the statement is executed and the same
error is expected. However, when an "expected error" did not happen on the
slave and was either ignored or related to a concurrency issue on the master,
the slave did not roll back the effects of the statement, so inconsistencies
might arise.
To fix the problem, we automatically roll back a statement that should have
failed on a slave but succeeded, and whose expected failure is either ignored
or stems from a concurrency issue on the master.
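A minimal sketch of that rollback decision, with hypothetical names; the
error numbers shown (1205, 1213) are the usual MySQL codes for lock wait
timeout and deadlock, used here only as an illustrative "concurrency error"
subset.

  // Hypothetical sketch, not the actual slave applier code.
  #include <cstdio>
  #include <set>

  // Illustrative subset of errors treated as concurrency issues on the master.
  static const std::set<int> concurrency_errors = {1205 /* lock wait timeout */,
                                                   1213 /* deadlock */};

  // expected_error: error code the master appended to the event in the binlog.
  // actual_error:   error (0 = success) raised while applying it on the slave.
  // is_ignored:     whether expected_error is covered by --slave-skip-errors.
  static bool should_rollback_stmt(int expected_error, int actual_error,
                                   bool is_ignored) {
    // The statement was expected to fail but succeeded on the slave: roll it
    // back if the expected failure is ignored or concurrency related.
    return expected_error != 0 && actual_error == 0 &&
           (is_ignored || concurrency_errors.count(expected_error) != 0);
  }

  int main() {
    // A deadlock was recorded on the master, but the slave succeeded: roll back.
    std::printf("%d\n", should_rollback_stmt(1213, 0, false));   // prints 1
  }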
There is an inconsistency between DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if the database, table or event does not exist. In
contrast, only CREATE EVENT IF NOT EXISTS is binlogged when the event
already exists.
This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.
"CREATE TABLE TRANSACTIONAL PAGE_CHECKSUM ROW_FORMAT=PAGE accepted,
does nothing".
Put back stubs for members of structures that are shared between
sql/ and pluggable storage engines, so as not to break the ABI
unnecessarily.
To be NULL-merged into 5.4, where we do break the ABI already.
The replication SQL thread does not properly set the database default charset
into thd->variables.collation_database when executing a LOAD DATA binlog
event. This bug can be reproduced by using the LOAD DATA command in STATEMENT
mode.
This patch adds code to find the default character set of the current database
and assign it to thd->db_charset when the slave server begins to execute a
relay log event. A test for this bug is added to rpl_loaddata_charset.test.
The test for the 45806 entry in our bug DB got applied twice,
in different places for the "view.test" and "view.result" files.
The fix is to simply remove the erroneous insertion.
Set the testcase and suite timeouts to one week.
Also set a one-day timeout for PID file creation (not currently needed in 5.1
but might become necessary, and is needed in azalea).
The problem is that the lexer could inadvertently skip over the
end of a query being parsed if it encountered a malformed multibyte
character. A specially crafted query string could cause the lexer
to jump up to six bytes past the end of the query buffer. Another
problem was that the lexer could use unfiltered user input as
a signed array index into the parser maps (which have lower and
upper bounds of 0 and 256, respectively).
The solution is to ensure that the lexer only skips over well-formed
multibyte characters and that the index value used for the parser
maps is always an unsigned value.
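Below is a standalone sketch of the two fixes, assuming a UTF-8-style
encoding in which the lead byte determines the sequence length; all names
are illustrative, not the server's actual lexer code.

  #include <cstddef>
  #include <cstdio>

  static unsigned char state_map[256];      // parser map: exactly 256 entries

  static int mb_char_len(unsigned char lead) {
    if (lead < 0x80)           return 1;
    if ((lead & 0xE0) == 0xC0) return 2;
    if ((lead & 0xF0) == 0xE0) return 3;
    if ((lead & 0xF8) == 0xF0) return 4;
    return 0;                                // invalid lead byte
  }

  // How many bytes to consume at *pos: the map is indexed with an unsigned
  // byte (never a negative index), and a multibyte sequence is skipped only
  // if it is complete and well formed within the buffer.
  static size_t next_step(const char *pos, const char *end) {
    unsigned char c = static_cast<unsigned char>(*pos);   // unsigned map index
    unsigned char state = state_map[c];
    (void) state;                            // state handling elided

    int len = mb_char_len(c);
    if (len <= 1)        return 1;           // single byte or invalid lead byte
    if (pos + len > end) return 1;           // truncated sequence: never overrun
    for (int i = 1; i < len; i++)
      if ((static_cast<unsigned char>(pos[i]) & 0xC0) != 0x80)
        return 1;                            // malformed continuation byte
    return static_cast<size_t>(len);         // well-formed multibyte character
  }

  int main() {
    const char buf[] = {'S', 'E', 'L', static_cast<char>(0xE2)}; // truncated char
    std::printf("%zu\n", next_step(buf + 3, buf + sizeof(buf))); // prints 1, not 3
  }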
Invalid (old?) table or database name in logs
Post-push patch.
The bug was that a non-partitioned table's file name was not
converted to system_charset (because table_name_len was not set).
A DBUG_RETURN was also missing.
InnoDB adds quotes after calling the function, so I added one more
mode in which explain_filename does not add quotes but still
appends the [sub]partition name as a comment.
Also caught a minor quoting bug: the character '`' was not escaped
within the identifier, so 'a`b' was quoted as `a`b` instead of
`a``b`. The fix is multibyte-character aware.
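As a quick illustration of the quoting rule (wrap the identifier in '`' and
double any embedded '`'), here is a minimal single-byte sketch; the real
explain_filename() is also multibyte aware.

  #include <cstdio>
  #include <string>

  // Quote an identifier for SQL output: a`b becomes `a``b`.
  // (Single-byte sketch only; the server's version handles multibyte charsets.)
  static std::string quote_identifier(const std::string &name) {
    std::string out = "`";
    for (char c : name) {
      if (c == '`')
        out += "``";              // double the quote character inside the name
      else
        out += c;
    }
    out += '`';
    return out;
  }

  int main() {
    std::printf("%s\n", quote_identifier("a`b").c_str());   // prints `a``b`
  }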
Problem 1:
When the 'Using index' optimization is used, the optimizer may still, after
cost-based optimization, decide to use another index in order to avoid using
a temporary table. But when this happened, the flag telling the storage engine
to read the index only (not the table) was still set. Fixed by resetting the
flag in the storage engine and the TABLE structure in the above scenario,
unless the new index allows for the same optimization.
Problem 2:
When a 'ref' access method was chosen by the cost-based optimizer (when the
column is non-NULLable), it was assumed that the 'quick' access methods needed
no initialization, since they are based on range scans. When the ORDER BY
optimization overrides that decision, however, it expects 'quick' to have been
initialized and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when 'ref'
access is used.
when partition is reorganized.
Problem was that table->timestamp_field_type was not changed
before copying rows between partitions.
Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing
in fast_alter_partition_table, so that all if-branches are covered.
column on partitioned table
The assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if a query is
executed using an index containing a double column on a partitioned table.
The problem is that the assertion expects all the fields that are read
to be in the read_set.
In this query only the field 'a' is in the read_set, as the tables in
the query are joined on the field 'a', so the assertion fails when it
expects the other field 'b' as well.
Since the function cmp() merely compares the two parameters passed to it,
the assertion is not required.
Fixed by removing the assertion from the double field comparison
function, and also fixed the index initialization to do an ordered
index scan with RW LOCK, which ensures that all the fields of a key
are in the read_set.
Note: this bug is not reproducible with other datatypes because the
assertion doesn't exist in the comparison functions for other
datatypes.
Bug in Perl
Scrap attempt to do this smartly on AIX, just drop the test and assume it's OK
This commit undoes the previous push and adds a line to ignore on AIX
The server shutdown and startup code triggered valgrind failures
within nptl_pthread_exit_hack_handler on Ubuntu 9.04, x86 (but not amd64),
in the rpl_trigger.test file.
To fix the bug, suppress valgrind failures within nptl_pthread_exit_hack_handler
on Ubuntu 9.04, x86 (but not amd64), since the server shutdown and startup
code is used heavily throughout the MySQL test suite.