If, before running the test rpl_mulit_engin, the mysqltest1 database exists
on the master but not on the slave, then the following statement:
create database if not exists mysqltest1;
would not be logged to the binary log, and so the database would not be
created on the slave. This would cause the test to fail, reporting that
the mysqltest1 database does not exist on the slave.
This patch fixes the problem by using the default test database instead
of a separate one; there is no reason for this test to use a separate
database.
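A minimal sketch of the failure mode (the database name is the one from
the test):

  -- on the master, where mysqltest1 already exists:
  create database if not exists mysqltest1;  -- no-op, not binlogged
  -- on the slave, mysqltest1 was never created, so the test's later
  -- statements against it fail with an unknown-database error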
Problem: Many test cases don't clean up after themselves (they fail
to drop tables or to reset variables). This implies that:
(1) check-testcase in the new mtr that currently lives in
5.1-rpl failed; (2) it may cause unexpected results in
subsequent tests.
Fix: make all tests clean up (a typical cleanup epilogue is sketched
after the notes below).
Also: cleaned away unnecessary output in rpl_packet.result
Also: fixed bug where rpl_log called RESET MASTER with a running
slave. This is not supposed to work.
Also: removed unnecessary code from rpl_stm_EE_err2 and made it
verify that an error occurred.
Also: removed unnecessary code from rpl_ndb_ctype_ucs2_def.
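To illustrate the cleanup rule, a typical test epilogue looks like the
following; the table name, variable, and connection are placeholders,
not taken from any specific test:

  connection master;
  drop table if exists t1;                   # drop what the test created
  set global max_allowed_packet = default;   # restore changed variables
  sync_slave_with_master;                    # ensure the slave is clean too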
Problem 1: BUG#36625: rpl_redirect doesn't do anything useful. It tests an
obsolete feature that was never fully implemented.
Fix 1: Remove rpl_redirect.
Problem 2: rpl_innodb_bug28430 and rpl_flushlog_loop are disabled even
though the bugs for which they were disabled have been fixed.
Fix 2: Re-enable rpl_innodb_bug28430 and rpl_flushlog_loop.
Removed the flag that disables innodb on slave in the default
configuration of replication tests. That made the explicit
--innodb flag in -slave.opt files redundant, so lots of -slave.opt
files could be removed. Also, -master.opt files containing a redundant
--innodb flag were removed (those were redundant even without
changing the default). Removing .opt files is good because .opt
files cause server restarts and make tests less readable.
Also fixed a bug where rpl_innodb_mixed_ddl unintentionally
used myisam on slave.
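For illustration, a typical redundant option file (file name
hypothetical, e.g. rpl_foo-slave.opt) contained nothing but:

  --innodb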
Problem: during a refactoring of mtr, a pattern for suppressing a warning from lowercase_table3 was lost.
Fix: re-introduce the suppression.
Problem 2: suppression was misspelt as supression. Fixed by adding a p.
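In the 5.1 framework, such suppressions are declared through
mtr.add_suppression(); a sketch of the mechanism, with an illustrative
pattern rather than the actual lowercase_table3 one:

  # near the top of the test file; the pattern text is illustrative only
  call mtr.add_suppression("text of the warning to ignore");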
The failure was caused by executing a CREATE-SELECT statement that creates a
table in a database other than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database name,
hence creating the table in the wrong database, which caused the following
inserts to fail since the table didn't exist in the intended database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and using it
in the calls that write the CREATE statement to the binary log. The database
name is only printed if it differs from the currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
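A sketch of the failing scenario under row-based logging (database and
table names are illustrative):

  use db1;
  create table db2.t1 (a int) select 1 as a;
  -- before the fix, the binlog contained "CREATE TABLE `t1` ..." without
  -- the db2 qualifier, so the slave created t1 in db1 and the subsequent
  -- row events for db2.t1 failed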
ha_statistic_increment for rpl_temporary
Problem: in some cases the master sends a special event to a reconnecting
slave in order to keep the slave's temporary tables (see #17284), and
those tables still hold references to the "old" SQL slave thread and use
them to access the thread's data.
Fix: in such cases, set the temporary tables' thread references to the
actual SQL slave thread.
The Blackhole engine did not support row-based replication,
since delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support to the
Blackhole engine by implementing the two functions mentioned
above, and by making the engine pretend that it has found the
correct row to delete or update when executed from the slave
SQL thread, by implementing the index and range searching functions.
It is necessary to pretend this only for the SQL thread, since
a SELECT executed on the Blackhole engine would otherwise never
return EOF, causing a livelock.
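A sketch of the setup this enables (table definition illustrative): the
slave's copy of a replicated table uses the Blackhole engine while the
master applies ordinary row changes:

  -- on the slave:
  create table t1 (a int primary key) engine = blackhole;
  -- on the master, with row-based binlogging, these now replicate
  -- cleanly because the slave's Blackhole pretends to find the row:
  insert into t1 values (1);
  update t1 set a = 2 where a = 1;
  delete from t1 where a = 2;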
The replication filtering rules were inappropriately applied when
executing the BINLOG pseudo-query. The rules are supposed to be active
only when the slave's SQL thread executes an event.
Fixed by correcting the condition so that the replication rules are
applied only if it is the slave SQL thread that executes the event.
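For illustration: with a filter such as --replicate-ignore-db configured,
applying a mysqlbinlog dump used to be silently filtered out; the payload
below is a placeholder for the base64 event data that mysqlbinlog emits:

  BINLOG '
  <base64-encoded event data, as produced by mysqlbinlog>
  ';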
INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication, but not with row-based replication.
These statements have no statement-based replication.
There was, however, row-based replication of the inserts and deletes
to and from the mysql.plugin table.
The fix is to suppress binlogging during inserts and deletes to
and from the mysql.plugin table.
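A sketch of the scenario (plugin name and library are illustrative; the
example storage-engine plugin ships with the server):

  -- with row-based binlogging, this used to replicate the row written to
  -- mysql.plugin without actually installing the plugin on the slave:
  install plugin example soname 'ha_example.so';
  -- after the fix, the mysql.plugin changes are not binlogged at all;
  -- plugins must be installed on each server explicitly
  uninstall plugin example;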
manually resolved conflicts:
Text conflict in client/mysqltest.c
Contents conflict in mysql-test/include/have_bug25714.inc
Text conflict in mysql-test/include/have_ndbapi_examples.inc
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/suite/parts/inc/partition_check_drop.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check1.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check2.inc
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_basic_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_symlink_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_engine_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_syntax_myisam.result
Text conflict in mysql-test/suite/rpl_ndb/t/disabled.def
Text conflict in mysql-test/t/disabled.def
The problem is that we need to get the list of tables to be updated
by a multi-table update statement, which requires opening all the
tables referenced by the statement and resolving all the fields
involved in the update in order to figure out the list of tables
to update. However, if there are replication filter rules, some
tables might not exist on the slave, resulting in a failure before
we could examine the filter rules.
I think the whole problem cannot be solved on the slave alone;
the master must record and send the information about the tables
involved in the update to the slave, so that the slave does not need
to open all the tables referenced by the multi-table update statement
to figure out which tables are updated.
So a status variable is added to Query_log_event to store the
value of the table map for update on the master. The slave tries
to get the value of this variable and uses it to examine the
filter rules without opening any tables; if the value is not
available, the old approach is used, and thus the bug will still
occur when replicating from old masters.
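A sketch of the failure (filter rule and table names are illustrative):
the slave runs with --replicate-ignore-table=db1.t2, and t2 exists only
on the master:

  -- executed on the master:
  update t1, t2 set t1.a = 1, t2.b = 2 where t1.id = t2.id;
  -- before the fix, the slave opened both tables to determine the update
  -- targets; opening t2 failed with "table doesn't exist" before the
  -- ignore rule could be consulted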
BUG#37884: rpl_row_basic_2myisam and rpl_row_basic_3innodb fail sporadically in pushbuild
These have been fixed in 5.1-rpl. Re-applying fix for BUG#37884
in 5.1-bugteam, and disabling rpl_flushlog_loop for BUG#37733 in
5.1-bugteam.
Problem: After START SLAVE, the Slave_IO_Running column of
SHOW SLAVE STATUS goes from No to Yes asynchronously. That
caused sporadic failures on pushbuild in rpl_stm_until, since
the test executes SHOW SLAVE STATUS right after START SLAVE.
Fix: Wait until Slave_IO_Running becomes Yes after each
START SLAVE.
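With the fix, the test pattern becomes something like the following
(the include file name follows the 5.1 test library):

  start slave;
  # wait until Slave_IO_Running is Yes before reading SHOW SLAVE STATUS
  --source include/wait_for_slave_io_to_start.inc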
Problem: the test waits for a 'DROP TEMPORARY TABLE' event to
appear in the master's binlog, and then checks on the slave whether
the number of temporary tables has decreased. The slave is not
synchronized at that point, causing a race.
Fix: check for the 'DROP TEMPORARY TABLE' event on the slave
instead of on the master.
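A sketch of the corrected wait (the include and its parameter follow the
test library's conventions; the pattern text is illustrative):

  connection slave;
  # wait until the DROP TEMPORARY TABLE event reaches the slave's binlog
  let $wait_binlog_event= DROP;
  --source include/wait_for_binlog_event.inc
  # only then is the temporary-table count stable
  show status like 'Slave_open_temp_tables';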