The result set for multi-row statements is not the same between STMT and
RBR, nor among different versions. Thus, to avoid test failures, we do not
print out such result sets. Note, however, that this does not impact
coverage or accuracy, since execution is able to continue without further
issues when an error is found on the master and that error is set to be
skipped.
RBR was not honoring the option --slave-skip-errors.
To fix the problem, we report the ignored error(s) as warnings, thus avoiding
stopping the SQL thread. Besides, this fixes the output of "SHOW VARIABLES LIKE
'slave_skip_errors'", which showed nothing when the value "all" was assigned
to --slave-skip-errors. An illustrative session is sketched below.
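A minimal illustration, assuming a slave started with --slave-skip-errors=all
(the exact output formatting may differ):

  SHOW VARIABLES LIKE 'slave_skip_errors';
  -- before the fix the Value column was empty;
  -- after the fix it reports the configured list, e.g.:
  -- +-------------------+-------+
  -- | Variable_name     | Value |
  -- +-------------------+-------+
  -- | slave_skip_errors | ALL   |
  -- +-------------------+-------+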
@sql/log_event.cc
skip RBR errors when the option --slave-skip-errors is set.
@sql/slave.cc
fixed the output of SHOW VARIABLES LIKE 'slave_skip_errors'.
@test-cases
fixed the output of rpl.rpl_idempotency
updated the test case rpl_skip_error
1. The test case was rewritten completely.
2. The test covers 3 cases (case (a) is sketched after this list):
a) create a deadlock on the slave, wait for the transaction retries, and
unlock the slave before the lock timeout;
b) create a deadlock on the slave and wait for the 'lock wait timeout
exceeded' error on the slave;
c) same as b) but with max_relay_log_size = 0;
3. Added inline comments.
4. Updated the result file.
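A minimal sketch of case (a), with hypothetical table and connection names:

  -- on slave: hold a row lock so the applier's transaction times out
  -- and is retried (per slave_transaction_retries)
  BEGIN;
  SELECT * FROM t1 WHERE a = 1 FOR UPDATE;
  -- on master: change the same row; the slave SQL thread now blocks
  UPDATE t1 SET b = b + 1 WHERE a = 1;
  -- on slave: release the lock before the retries are exhausted,
  -- so a retried transaction can succeed
  COMMIT;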
The MySQL server crashes because the unsafe-statement warning is wrongly
elevated to an error, which sets the error status of the thread's
Diagnostics_area in THD::binlog_query(). Yet the caller believes that binary
logging should not touch the status, so it will also set the status later,
via my_ok(), my_error() or my_message(), according to the execution result
of the statement or transaction.
But the status of the thread's Diagnostics_area is allowed to be set only once.
Fixed by clearing the error wrongly set by binary logging, while keeping the
warning message.
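An illustrative repro sketch (table names hypothetical): under
statement-based logging, a statement using LIMIT is flagged unsafe, which
was enough to exercise the wrongly-set error status:

  SET SESSION binlog_format = 'STATEMENT';
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- flagged as unsafe because the rows selected by LIMIT are not
  -- deterministic; this must stay a warning, not an error
  INSERT INTO t2 SELECT a FROM t1 LIMIT 1;
  SHOW WARNINGS;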
Test was flaky on some machines and showed spurious reds caused by races.
The new-and-improved test makes do with fewer statements, no
mysqltest variables, and no backticks. Should hopefully be more robust.
Heck, it's debatable whether we should have a test for this, anyway.
including modifications according to code review
+ backport of the fix for
Bug 41932 funcs_1: is_collation_character_set_applicability path
too long for tar
which was missing in 5.0 (just a renaming of two files)
The issue happened to be two-fold.
The table map event was recorded into the binlog with an incorrect size
when the number of columns exceeded 251 (the largest value that fits in a
single byte of the packed-integer encoding; larger column counts need a
multi-byte length field, which was not accounted for).
The row-based event also recorded and restored its m_width member
incorrectly under the same conditions.
Fixed by correcting m_data_size and m_width. A repro sketch follows.
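A repro sketch (names hypothetical): build a table with more than 251
columns, so the column count no longer fits in one byte of the
packed-integer encoding, then write a row under RBR:

  SET SESSION binlog_format = 'ROW';
  SET SESSION group_concat_max_len = 1024 * 1024;
  -- generate 'CREATE TABLE t252 (c1 INT,c2 INT,...,c252 INT)'
  SELECT CONCAT('CREATE TABLE t252 (',
                GROUP_CONCAT(CONCAT('c', seq, ' INT')), ')')
    INTO @ddl
    FROM (SELECT @n := @n + 1 AS seq
            FROM information_schema.collations a,
                 information_schema.collations b,
                 (SELECT @n := 0) init
           LIMIT 252) nums;
  PREPARE s FROM @ddl;
  EXECUTE s;
  -- the table map event written for this row had the wrong size
  INSERT INTO t252 (c1) VALUES (1);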
LOAD_FILE
LOAD_FILE is not safe to replicate in STATEMENT mode, because it
depends on a file (which is loaded on the master and may not exist on
the slave(s)). This leads to scenarios in which the slave replicates the
statement containing 'load_file' and tries to load the file from its local
file system. Given that the file may not exist in the slave's file system,
the operation will not succeed (probably returning NULL), causing
master and slave(s) to diverge. However, when using MIXED mode
replication, this can be made to work: if the statement including
LOAD_FILE is marked as unsafe, a switch to ROW mode is triggered,
meaning that the contents of the file are written to the binlog as row
events. Consequently, the contents of the file on the master will
reach the slave via the binlog.
This patch addresses the bug by marking the load_file function as
unsafe. When in mixed mode, issuing LOAD_FILE triggers a
switch to row mode. Furthermore, when in statement mode,
LOAD_FILE raises a warning that the statement is unsafe in that
mode.
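A minimal illustration (path and table names hypothetical):

  SET SESSION binlog_format = 'STATEMENT';
  CREATE TABLE docs (id INT, body TEXT);
  INSERT INTO docs VALUES (1, LOAD_FILE('/tmp/example.txt'));
  SHOW WARNINGS;  -- now warns that the statement is unsafe
  -- under binlog_format = 'MIXED', the same INSERT is logged as row
  -- events instead, so the file contents reach the slave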
The problem is that after a disconnect, the DROP TEMPORARY TABLE event had
not yet been written to the binlog. So after syncing with the slave, the
temporary table on the slave was not removed.
Fixed by waiting for the DROP TEMPORARY TABLE event to be written to the
binlog before syncing the slave with the master, as sketched below.
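A sketch of the waiting logic in mysqltest, assuming the suite's
include/wait_for_binlog_event.inc helper and a hypothetical connection name:

  --disconnect conn_tmp
  --connection master
  # wait until the implicit DROP shows up in SHOW BINLOG EVENTS
  --let $wait_binlog_event= DROP
  --source include/wait_for_binlog_event.inc
  --sync_slave_with_master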
Compiling with debug and assigning an invalid directory to --slave-load-tmpdir
was crashing the slave due to the assertion DBUG_ASSERT(! is_set() ||
can_overwrite_status). This assertion assumes that a thread can change its
state (i.e. ok, error, etc.) only once before aborting, cleaning up/resuming,
or completing its execution, unless the overwrite flag
(i.e. can_overwrite_status) is true.
Append_block_log_event::do_apply_event, which is responsible for creating
temporary file(s), was not cleaning the thread state. Thus a failure while
trying to create a file in an invalid temporary directory was causing the crash.
To fix the problem we check that the temporary directory is valid before
starting the SQL thread, and reset the thread state before creating a file in
Append_block_log_event::do_apply_event.
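A repro sketch for a debug build (paths hypothetical), with the slave
started with --slave-load-tmpdir=/nonexistent/dir:

  -- on master:
  CREATE TABLE t1 (a INT);
  LOAD DATA INFILE '/tmp/bug.txt' INTO TABLE t1;
  -- before the fix, the slave hit the assertion while applying the
  -- Append_block event for this load; after it, the SQL thread is not
  -- started and the invalid directory is reported instead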
The mysql.procs_priv table itself does not get replicated.
Inserting a routine privilege record into mysql.procs_priv is triggered
by CREATE FUNCTION/PROCEDURE statements, according to the current user's
privileges.
Because the current user of the SQL thread has GLOBAL_ACL, which exempts it
from any mysql.procs_priv privilege check when creating/altering/executing
routines, no routine privilege record is inserted into mysql.procs_priv when
the SQL thread creates a routine.
Fixed by switching the current user of the SQL thread to the definer user, if
the definer user exists on the slave. That populates procs_priv; otherwise
the SQL thread user is kept and procs_priv remains unchanged.
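An illustration with hypothetical names; on the master:

  CREATE USER 'u1'@'localhost';
  GRANT CREATE ROUTINE ON db1.* TO 'u1'@'localhost';
  CREATE DEFINER = 'u1'@'localhost' PROCEDURE db1.p1() SELECT 1;

  -- on the slave (after the fix, and provided 'u1'@'localhost' exists
  -- there), the grant row is now present as well:
  SELECT User, Routine_name, Proc_priv
    FROM mysql.procs_priv WHERE Routine_name = 'p1';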
The problem arises because we store the wrong start and stop positions of the
query string in the binlog. Those two values are stored as part of the header
of the query string.
When we parse the binlog, we first read the position values and then extract
the query string according to them. But the two values are not calculated
correctly after parsing by yacc.
We don't want to touch too much of the yacc grammar, because that may
influence other code. So we just add one space after the 'INTO' keyword when
parsing; this easily resolves the problem.
The method used to purge binary log files produced different results on some
platforms. The reason is that the purge time was calculated based on a
table's modified time, which cannot guarantee that master-bin.000002 is
purged on all platforms (e.g. Windows).
Use a new approach that sets the time to purge the binlog file to 1 second
after the last modified time of master-bin.000002.
This ensures that the file is always deleted on any platform.
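For reference, the purge statement takes an absolute cutoff; the test now
derives that cutoff from the file's own modification time plus one second
(the datetime below is illustrative):

  PURGE BINARY LOGS BEFORE '2009-06-01 12:00:01';
  SHOW BINARY LOGS;  -- master-bin.000002 is gone on every platform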
The problem is that creating an event could fail if the value of
the variable server_id did not fit in the originator column of
the event system table. The cause is two-fold: it was possible
to set server_id to a value outside the documented range (from
0 to 2^32-1), and the originator column of the event table didn't
have enough room for values in this range.
The log tables (general_log and slow_log) also lacked a proper
column type to store the server_id, and having a large server_id
value could prevent queries from being logged.
The solution is to ensure that all system tables that store the
server_id value have a proper column type (int unsigned) and that
the variable can't be set to a value outside the range.
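A brief illustration of the range check (values are illustrative; the exact
out-of-range behavior, truncation with a warning versus rejection, depends
on how the server handles system-variable bounds):

  SET GLOBAL server_id = 4294967295;  -- 2^32 - 1, the documented maximum
  SET GLOBAL server_id = 4294967296;  -- out of range, no longer accepted as-is
  SHOW VARIABLES LIKE 'server_id';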