'LOAD DATA CONCURRENT [LOCAL] INFILE ...' statements were binlogged only as
'LOAD DATA [LOCAL] INFILE ...' in SBR and MBR. As a result, if replication is on,
queries on slaves will be blocked by the replication SQL thread.
This patch writes 'CONCURRENT' into the log event if the 'CONCURRENT' option
is present in the original statement, in SBR and MBR.
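For example, a statement such as the following (file and table names are illustrative) is
now written to the binary log with the CONCURRENT keyword preserved:
load data concurrent local infile '/tmp/data.txt' into table t1;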
{PROCEDURE|FUNCTION} FROM ...'
The master would hit an assertion when the binary log was
active. This was due to the fact that the thread's diagnostics
area was being cleared before writing to the binlog,
regardless of whether mysql_routine_grant returned an error or
not. When mysql_routine_grant returned an error, the return
value and the diagnostics area contents would not
match. Consequently, neither my_ok would be called nor an
error would be signaled in the diagnostics area, eventually
triggering the assertion in net_end_statement.
We fix this by not clearing the diagnostics area at binlogging
time.
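For instance, a statement of the form (routine and account names are hypothetical):
revoke execute on procedure p1 from 'user1'@'localhost';
could trigger the assertion when the binary log was active and the statement returned an error.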
The 'slave_patternload_file' is assigned the real path of the load data file
when the Relay_log_info object is initialized. But the path of the load data
file is not converted to a real path when executing an event from the relay log. So an
error will be encountered if the path of the load data file is a symbolic link.
Likewise, the global 'opt_secure_file_priv' is not converted to a real path when
loading data from a file, so the same error can happen there too.
To fix these errors, the path of the load data file is now converted to a
real path when executing an event from the relay log, and 'opt_secure_file_priv'
is converted to a real path when loading data from an infile.
<tmp_tbl> with RBL
When binlogging the statement, the server always handles the existing
object as a table, even though it is a view. However, a view is
handled differently in other parts of the code, thus leading the
statement to crash in RBL if the view exists.
This happens because the underlying tables for the view are not opened
when we try to call store_create_info() on the view in order to build
a CREATE TABLE statement.
This patch only addresses the crash problem; other binlogging
problems related to CREATE TABLE IF NOT EXISTS LIKE when the existing
object is a view will be solved by BUG 47442.
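An illustrative sequence (table and view names are hypothetical) that triggered the
crash under row-based logging, since the existing object 'v1' is a view:
create table t1 (a int);
create view v1 as select a from t1;
create temporary table tmp1 (a int);
create table if not exists v1 like tmp1;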
In RBR, all statements operating on temporary tables should not be binlogged.
Despite this fact, after executing 'TRUNCATE ...' on a temporary table,
the command was still logged, even in row-based mode. Consequently, this raises
problems on the slave as the table may not exist, resulting in an
execution failure. Ultimately, this causes the slave to report
an error and abort.
After this patch, a 'TRUNCATE ...' statement on a temporary table is no longer
binlogged in RBR.
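For example (table name is illustrative), with binlog_format=ROW the TRUNCATE below is no
longer written to the binary log:
create temporary table tmp1 (a int);
truncate table tmp1;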
Problem: Some system functions that could return different values on
master and slave were not marked unsafe. In particular:
GET_LOCK
IS_FREE_LOCK
IS_USED_LOCK
MASTER_POS_WAIT
RELEASE_LOCK
SLEEP
SYSDATE
VERSION
Fix: Mark these functions unsafe.
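For example, a statement such as the following (table name is illustrative) is now treated
as unsafe: in MIXED mode it switches to row format, and in STATEMENT mode it produces an
unsafe-statement warning:
insert into t1 values (sleep(1));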
The 'rpl_get_master_version_and_clock' test verifies whether the slave I/O
thread tries to reconnect to the master when it tries to get the values of
UNIX_TIMESTAMP and SERVER_ID from the master under a network disconnection.
So the master server is restarted to create the transient network
disconnection. Restarting the master server can cause the following two problems:
1. A timeout error is encountered sporadically. The slave I/O thread tries
to reconnect to the master ten times, as set in my.cnf. So in the test
framework the slave I/O thread sometimes really stops when it cannot
reconnect to the master within those ten attempts before the master restarts,
and then a timeout error is encountered while waiting for the slave to
start.
2. Warnings and errors are produced in the server log file when
the slave I/O thread tries to get the values of UNIX_TIMESTAMP and
SERVER_ID from the master under the transient network disconnection.
To fix problem 1, increase the master retry count to sixty,
so that the slave I/O thread has enough time to reconnect to the master
successfully.
To fix problem 2, suppress these warnings and errors with mtr suppressions,
because they are expected.
Backporting BUG#43789 to mysql-5.1-bugteam
Replication was generating corrupted data, producing Valgrind warnings,
and aborting in debug mode while replicating a "null" value into a "not null" field.
Specifically, the unpack_row routine was considering the slave's table
definition and trying to retrieve a field value where there was nothing to be
retrieved, ignoring the fact that the value was defined as "null" by the master.
To fix the problem, we proceed as follows:
1 - If it is not STRICT sql_mode, implicit default values are used, regardless
of whether it is a multi-row or single-row statement.
2 - However, if it is STRICT mode, then we do as follows:
2.1 If it is a transactional engine, we do a rollback on the first NULL that is
to be set into a NOT NULL column and return an error.
2.2 If it is a non-transactional engine and it is the first row to be inserted
in a multi-row statement, we also return the error. Otherwise, we proceed with the
execution, use implicit default values and print out warning messages.
Unfortunately, the current patch cannot mimic the behavior shown by the master
for multi-table updates and multi-row inserts. This happens because such
statements are unfolded into different row events. For instance, consider the
following updates and strict mode:
(master)
create table t1 (a int);
create table t2 (a int not null);
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
t1 would have (10) and t2 would have (0) as this would be handled as a
multi-row update. On the other hand, if we had the following updates:
(master)
create table t1 (a int);
create table t2 (a int);
(slave)
create table t1 (a int);
create table t2 (a int not null);
(master)
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
On the master, t1 would have (10) and t2 would have (NULL). On
the slave, t1 would have (10) but the update on t2 would fail.
The problem is that there is only one autoinc value associated with
the query when binlogging. If more than one autoinc value is used
in the query, the autoinc values after the first one can be inserted
wrongly on the slave, so these autoinc values can become inconsistent between
master and slave.
The problem is resolved by marking all statements that invoke
a trigger or call a function that updates autoinc fields as unsafe;
they will switch to row format in mixed mode. Actually, the statement
is safe if just one autoinc value is used in the sub-statement, but it is
impossible to check how many autoinc values are used in the sub-statement.
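An illustrative case (table and trigger names are hypothetical): the trigger generates
autoinc values as a side effect of the top-level insert, so the statement is now marked
unsafe and switches to row format in MIXED mode:
create table t1 (a int);
create table t2 (id int auto_increment primary key, a int);
create trigger trg1 after insert on t1 for each row insert into t2 (a) values (new.a);
insert into t1 values (1), (2);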
"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with IGNORE_SPACES and versioned comments.
We now completely resynthesize the query for LOAD DATA for binlog (which among other
things normalizes them somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
All statements executed by mysql_upgrade were binlogged and then replicated to the slave.
This can result in errors; the report of this bug demonstrates some examples.
Master and slave should be upgraded separately. All statements executed by
mysql_upgrade will not be binlogged.
The --write-binlog and --skip-write-binlog options are added to mysql_upgrade.
These options control whether its SQL statements are binlogged or not.
HA_ERR_WRONG_INDEX
In RBR, disabling keys on a slave table will break replication when
updating or deleting a record. When the slave thread tries to
find the row by searching in the storage engine, it checks
whether the table has a key or not. If it has one, the slave
thread uses it to search for the record.
Nonetheless, the slave only checks whether the key exists or not;
it does not verify whether it is active. Should the key be
disabled (e.g., the DBA has issued an ALTER TABLE ... DISABLE KEYS),
then this results in the error HA_ERR_WRONG_INDEX.
This patch addresses the issue by making the slave thread also
check whether the key is active before actually using it.
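For example (table name is illustrative), issuing the following on the slave used to make a
subsequent replicated UPDATE or DELETE on that table fail with HA_ERR_WRONG_INDEX:
alter table t1 disable keys;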
But there is no Last_IO_Error reported.
On the master, if a binary log event is larger than max_allowed_packet,
ER_MASTER_FATAL_ERROR_READING_BINLOG and the specific reason for this error are
sent to a slave when it requests a dump from the master, thus leading
the I/O thread to stop.
On a slave, the I/O thread stops when receiving a packet larger than max_allowed_packet.
In both cases, however, there was no Last_IO_Error reported.
This patch adds code to report the Last_IO_Error and exact reason before stopping the
I/O thread, and also reports the case where an out-of-memory error pops up while
handling packets from the master.
The test case rpl_do_grant fails sporadically on PB2 with "Access
denied for user 'create_rout_db'@'localhost' ...". Inspecting the
test case, one may find that it issues a GRANT on the master
connection and immediately afterwards creates two new connections
(one to the master and one to the slave) using the credentials
set with the GRANT.
Unfortunately, there is no synchronization between master and
slave after the grant and before the connections are
established. This can result in the slave not having executed the
GRANT by the time the connection is attempted.
This patch fixes this by deploying a sync_slave_with_master
between the grant and the connection attempts.
The test case creates two temporary tables, then closes the
connection, waits for it to disconnect, then syncs the slave with
the master, checks for remaining open temporary tables on the
slave (which should be 0) and finally drops the used
database (mysqltest).
Unfortunately, sometimes the test fails with one open table on
the slave. This seems to be caused by the fact that waiting for
the connection to be closed is not sufficient. The test needs to
wait for the DROP event to be logged and only then synchronize
the slave with the master and proceed with the check. This is
caused by the asynchronous nature of the disconnect with respect to
binlogging of the DROP temporary table statement.
We fix this by deploying a call to wait_for_binlog_event.inc
in the test case, which makes execution wait for the DROP
temp tables event before synchronizing master and slave.
If an EVENT is created without the DEFINER clause set explicitly, or with it set
to CURRENT_USER, the master and slaves become inconsistent. This issue stems from
the fact that in both cases the DEFINER is set to the CURRENT_USER of the current
thread. On the master, the CURRENT_USER is the mysqld's user, while on the slave
the CURRENT_USER is empty for the SQL thread, which is responsible for executing
the statement.
To fix the problem, we do as follows. If the definer is not set explicitly,
a DEFINER clause is added when writing the query into the binlog; if CURRENT_USER is
used as the DEFINER, it is replaced with the value of the current user before
writing to the binlog.
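For instance, a statement such as the following (event name and action are illustrative):
create event e1 on schedule every 1 hour do delete from t1;
is now written to the binary log with an explicit definer, along the lines of
(the actual definer value depends on the current user):
create definer=`root`@`localhost` event e1 on schedule every 1 hour do delete from t1;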
binlog
Mixing transactional (T) and non-transactional (N) tables on behalf of a
transaction may lead to inconsistencies between masters and slaves in STATEMENT
mode. The problem stems from the fact that although modifications done to
non-transactional tables on behalf of a transaction become immediately visible
to other connections, they do not immediately get to the binary log, and therefore
consistency is broken. Although there may be issues in mixing T and N tables in
STATEMENT mode, there are safe combinations that clients find useful.
In this bug, we fix the following issue. Mixing N and T tables in multi-level
statements (e.g. a statement that fires a trigger) or multi-table statements (e.g.
update t1, t2 ...) was not handled correctly. In such cases, it was not possible
to distinguish when a T table was updated if the sequence of changes was N and T.
In a nutshell, the flag "modified_non_trans_table" alone was not enough to reflect
that both an N and a T table were changed. To circumvent this issue, we check whether an
engine is registered in the handler's list and changed something, which means that
a T table was modified.
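An illustrative statement in this class (table names are hypothetical; t_myisam uses a
non-transactional engine and t_innodb a transactional one):
update t_myisam, t_innodb set t_myisam.a = 1, t_innodb.a = 1;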
Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or
ROW modes completely safe.
In STATEMENT based replication, a statement that failed on the master but that
updated non-transactional tables is written to the binary log with the error code
appended to it. On the slave, the statement is executed and the same error is
expected. However, when an "expected error" did not happen on the slave and was
either ignored or was related to a concurrency issue on the master, the slave
did not roll back the effects of the statement, and as such inconsistencies might
happen.
To fix the problem, we automatically roll back a statement that should have
failed on a slave but succeeded and whose expected failure is either ignored or
stems from a concurrency issue on the master.
If the log_bin_trust_function_creators option is not enabled, creating a stored
function requires one of the modifiers DETERMINISTIC, NO SQL, or READS
SQL DATA. Executing a stored function should also follow the same rules in
STATEMENT mode. However, this was not happening and a wrong error was being
printed out: ER_BINLOG_ROW_RBR_TO_SBR.
The patch makes creation and execution compatible and prints out the correct
error, ER_BINLOG_UNSAFE_ROUTINE, when a stored function without one of the modifiers
above is executed in STATEMENT mode.
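For example, with binary logging enabled and log_bin_trust_function_creators disabled
(function name and body are illustrative), the first definition is rejected while the
second is accepted; after this patch, executing a function lacking these modifiers in
STATEMENT mode reports ER_BINLOG_UNSAFE_ROUTINE instead of ER_BINLOG_ROW_RBR_TO_SBR:
create function f1() returns int return 1;
create function f1() returns int deterministic return 1;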
to wrong result
When using MIXED mode and issuing 'CREATE TEMPORARY TABLE t_tmp',
the statement is logged if the current binlogging mode is
STATEMENT. This causes the slave to replay the instruction and
create the temporary table as well. If there is no switch to ROW
mode, and later on a 'DROP TEMPORARY TABLE t_tmp' is issued, then
this statement will also be logged and the slave will
remove/close the temporary table.
However, if there is a switch to ROW mode between the CREATE and
the DROP TEMPORARY TABLE, the DROP statement will not be logged,
leaving the slave with a dangling temporary table.
This patch addresses this by always logging a DROP TEMPORARY
TABLE IF EXISTS when in mixed mode and a drop statement is issued
for temporary table(s).
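For example (table name follows the description above), in MIXED mode:
drop temporary table t_tmp;
is now written to the binary log along the lines of:
drop temporary table if exists t_tmp;
so the slave no longer keeps a dangling temporary table if the session switched to row
format between the CREATE and the DROP.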
binlog
The fix for BUG 43929 introduced a regression. In a nutshell, when a
statement that changes a non-transactional table fails, it is written to the
binary log with the error code appended. Unfortunately, after BUG 43929, this
failure was flushing the transactional cache, causing a mismatch between execution
and logging histories. To fix this issue, we avoid flushing the transactional
cache when a commit or rollback is not issued.
The "get_master_version_and_clock(...)" function in sql/slave.cc ignores
errors and passes directly when queries fail, or when queries succeed
but the result retrieved is empty.
The "get_master_version_and_clock(...)" function should try to reconnect to the master
if queries fail because of transient network problems, and fail otherwise.
The I/O thread should print a warning if some system variables do not
exist on the master (very old master).