Removed the flag that disables InnoDB on the slave in the default
configuration of replication tests. That made the explicit
--innodb flag in -slave.opt files redundant, so many -slave.opt
files could be removed. Also removed -master.opt files containing a
redundant --innodb flag (those were redundant even without
changing the default). Removing .opt files is good because .opt
files cause server restarts and make tests less readable.
Also fixed a bug where rpl_innodb_mixed_ddl unintentionally
used MyISAM on the slave.
Problem: during a refactoring of mtr, a pattern for suppressing a warning from lowercase_table3 was lost.
Fix: re-introduce the suppression.
Problem 2: suppression was misspelt as supression. Fixed by adding a p.
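For reference, a minimal sketch of how such a suppression is added in
an mtr test (the warning pattern below is illustrative, not the actual
lowercase_table3 pattern):

    # Register an expected-warning pattern with mtr so the server log
    # check does not fail the test; the pattern here is a placeholder.
    call mtr.add_suppression("illustrative warning pattern");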
The failure was caused by executing a CREATE-SELECT statement that creates a
table in a database other than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database, hence
creating the table in the wrong database, causing the following inserts to
fail since the table didn't exist in the given database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and used that
in the calls that write the CREATE statement to the binary log. The database
name is only printed if it differs from the currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
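As an illustration (database and table names invented), a statement of
the failing kind and the form now written to the binary log:

    USE db1;
    CREATE TABLE db2.t1 SELECT * FROM t2;
    -- Before the fix the binary log contained "CREATE TABLE `t1` ...",
    -- which the slave executed in its current database. After the fix
    -- the logged statement reads "CREATE TABLE `db2`.`t1` ...".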
Adds --general-log-file and --slow-query-log-file command-
line options to match the system variables of the same names.
Deprecates the --log and --log-slow-queries command-line options
and the log and log_slow_queries system variables for v7.0; they
are superseded by general_log/general_log_file and
slow_query_log/slow_query_log_file, respectively.
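A sketch of the non-deprecated way to configure the logs at runtime
(file names invented); the SET statements correspond to the new
--general-log-file and --slow-query-log-file command-line options:

    SET GLOBAL general_log_file = 'general.log';
    SET GLOBAL slow_query_log_file = 'slow.log';
    SET GLOBAL general_log = 1;
    SET GLOBAL slow_query_log = 1;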
The Blackhole engine did not support row-based replication
since the delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support for the
Blackhole engine by implementing the two functions mentioned
above, and making the engine pretend that it has found the
correct row to delete or update when executed from the slave
SQL thread by implementing index and range searching functions.
It is necessary to only pretend this for the SQL thread, since
a SELECT executed on the Blackhole engine will otherwise never
return EOF, causing a livelock.
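A hedged illustration of what this enables (schema invented), assuming
the slave's copy of t1 has been changed to ENGINE=BLACKHOLE: with
row-based binlogging on the master, the logged row events now apply
cleanly on the slave:

    -- Executed on the master (binlog_format=ROW):
    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1);
    UPDATE t1 SET a = 2 WHERE a = 1;  -- slave pretends the row is found
    DELETE FROM t1 WHERE a = 2;       -- likewise succeeds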
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or roll back correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by a failing UPDATE or INSERT on a
transactional table, which causes an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
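One way to trigger the implicit rollback described above (schema
invented for illustration):

    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1);
    -- The duplicate key makes the statement fail and roll back
    -- implicitly; before the fix this left the transaction cache
    -- emptied but not reset.
    INSERT INTO t1 VALUES (2), (1);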
The replication filtering rules were inappropriately applied when
executing the BINLOG pseudo-query. The rules are supposed to be active
only when the slave's SQL thread executes an event.
Fixed by correcting the condition so that the replication rules are
applied only if the slave SQL thread executes the event.
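For illustration (the base64 payload is elided): a BINLOG statement
issued by a client, for example when replaying mysqlbinlog output,
must be applied even on a server started with a filter rule such as
--replicate-ignore-db:

    BINLOG '...';
    -- Applied unconditionally; the --replicate-* rules are now
    -- consulted only by the slave SQL thread.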
INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication only, but not with row-based replication.
There is no statement-based replication of these statements.
But there was row-based replication of the inserts and deletes
to and from the mysql.plugin table.
The fix is to suppress binlogging during insert and delete to
and from the mysql.plugin table.
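For illustration (using the bundled example plugin): under row-based
replication, the row changes these statements make to mysql.plugin are
no longer written to the binary log:

    INSTALL PLUGIN example SONAME 'ha_example.so';
    UNINSTALL PLUGIN example;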
ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION complains that
the partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (over mysql_alter_table instead of
mysql_admin_table) which differs in two ways:
1) ALTER TABLE opens the tables in a different way than the admin
commands do, resulting in returning with an error before the command
was even tried
2) ALTER TABLE does not start sending diagnostic rows to the client,
which the lower-level admin functions assume has happened, resulting
in an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
Partitioned tables will still not work with
REPAIR TABLE/PARTITION ... USE_FRM, since that would require moving
each partition to a table, running REPAIR TABLE t USE_FRM, checking
that the data still fulfills the partitioning function, and then
moving the table back to being a partition.
NOTE: I have removed the following functions from the handler
interface:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
since they are no longer needed.
THIS ALTERS THE STORAGE ENGINE API
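An example of the statements that are now remapped onto the
mysql_admin_table code path (table and partitioning invented; p0 and
p1 are the default partition names):

    CREATE TABLE t1 (a INT)
      PARTITION BY HASH (a) PARTITIONS 2;
    ALTER TABLE t1 ANALYZE PARTITION p0;
    ALTER TABLE t1 CHECK PARTITION p0, p1;
    ALTER TABLE t1 OPTIMIZE PARTITION p0;
    ALTER TABLE t1 REPAIR PARTITION p1;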
manually resolved conflicts:
Text conflict in client/mysqltest.c
Contents conflict in mysql-test/include/have_bug25714.inc
Text conflict in mysql-test/include/have_ndbapi_examples.inc
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/suite/parts/inc/partition_check_drop.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check1.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check2.inc
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_basic_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_symlink_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_engine_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_syntax_myisam.result
Text conflict in mysql-test/suite/rpl_ndb/t/disabled.def
Text conflict in mysql-test/t/disabled.def
The problem behind this bug is that we need to get the list of tables
to be updated for a multi-table update statement, which requires
opening all the tables referenced by the statement and resolving all
the fields involved in the update in order to figure out the list of
tables for update. However, if there are replication filter rules,
some tables might not exist on the slave, resulting in a failure
before we could examine the filter rules.
I think the whole problem cannot be solved on the slave alone;
the master must record and send the information about the tables
involved in the update to the slave, so that the slave does not need
to open all the tables referenced by the multi-table update statement
to figure out which tables are involved in the update.
So a status variable is added to Query_log_event to store the
value of the table map for update on the master. On the slave, it
tries to get the value of this variable and uses it to examine the
filter rules without opening any tables; if the value is not
available, the old approach is used, and thus the bug will still
occur when replicating from old masters.
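For illustration (names invented): with --replicate-ignore-table set
for db1.t2 on the slave and t2 absent there, a statement like the one
below used to fail before the filter rules could even be examined;
with the new status variable the slave decides from the event alone:

    UPDATE db1.t1, db1.t2
      SET t1.a = t2.a
      WHERE t1.id = t2.id;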
Problem: After START SLAVE, the Slave_IO_Running column of
SHOW SLAVE STATUS goes from No to Yes asynchronously. That
caused sporadic failures on pushbuild in rpl_stm_until, since
the test executes SHOW SLAVE STATUS right after START SLAVE.
Fix: Wait until Slave_IO_Running becomes Yes after each
START SLAVE.
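The resulting pattern in the test, sketched:

    START SLAVE;
    --source include/wait_for_slave_to_start.inc
    SHOW SLAVE STATUS;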
The reason for the failure is that the IO thread passes through a
sequence of state changes before it eventually ends up with the
expected running state of No.
It is unreasonable to wait for the running status when the whole idea
of the test is to get to the IO thread error.
Fixed by changing the waiting condition.
Problem: the master's binlog has 'create table t1'. The master's
binlog was removed before the slave could replicate it. In the test's
cleanup code, the master did 'drop table t1', which caused the slave
SQL thread to stop with an error, since the slave SQL thread did not
know about t1.
Fix: t1 is just an auxiliary construction, only needed on the
master. Hence, we turn off binlogging before t1 is created,
drop t1 as soon as we don't need it anymore, and then turn
binlogging on again.
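The cleanup-safe pattern, sketched (table definition invented):

    SET SESSION sql_log_bin = 0;
    CREATE TABLE t1 (a INT);
    -- use t1 on the master only, then:
    DROP TABLE t1;
    SET SESSION sql_log_bin = 1;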
Many dump threads can exist due to the way the new version of mtr
governs suites.
For this immediate problem the test is refined not to use I_S but
rather to reconnect explicitly, preserving the logic of the original
verification of the target bug fix.
Problem: the test set @@global.init_slave to garbage at a time
which was not guaranteed to be after the time when the slave's
SQL thread used it. That would cause the slave's SQL thread to
stop in rare cases.
Fix: The test does not care about the value of
@@global.init_slave, except that it should be different on
master and slave. Hence, we set @@global.init_slave to
something that is valid SQL.
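For example (the exact statement is immaterial; it only needs to be
valid SQL and to differ between master and slave):

    SET GLOBAL init_slave = 'SET @a = 1';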
Post-post-push fix. The result file for rpl_rbr_to_sbr needs to be
updated. While I was there, made rpl_rbr_to_sbr clean up after itself
by reverting @@binlog_format to the value it had before the test
started.
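The save-and-restore idiom, sketched:

    SET @saved_binlog_format = @@global.binlog_format;
    -- test body, switching between ROW and STATEMENT ...
    SET GLOBAL binlog_format = @saved_binlog_format;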
Problem 1: tests often fail in pushbuild with a timeout when waiting
for the slave to start/stop/receive error.
Fix 1: Updated the wait_for_slave_* macros in the following way:
- The timeout is increased by a factor of ten
- Refactored the macros so that wait_for_slave_param does the work for
the other macros.
Problem 2: Tests are often incorrectly written, lacking a
source include/wait_for_slave_to_[start|stop].inc.
Fix 2: Improved the chance to get it right by adding
include/start_slave.inc and include/stop_slave.inc, and updated tests
to use these.
Problem 3: The built-in test language command
wait_for_slave_to_stop is a misnomer (it does not wait for the slave
IO thread) and does not give as much debug info in case of failure as
the otherwise equivalent macro
source include/wait_for_slave_sql_to_stop.inc
Fix 3: Replaced all calls to the built-in command by a call to the
macro.
Problem 4: Some, but not all, of the wait_for_slave_* macros had an
implicit connection slave. This made some tests confusing to read,
and made it more difficult to use the macro in circular replication
scenarios, where the connection named master needs to wait.
Fix 4: Removed the implicit connection slave from all
wait_for_slave_* macros, and updated tests to use an explicit
connection slave where necessary.
Problem 5: The macros wait_slave_status.inc and wait_show_pattern.inc
were unused. Moreover, using them is difficult and error-prone.
Fix 5: remove these macros.
Problem 6: log_bin_trust_function_creators_basic failed when running
tests because it assumed @@global.log_bin_trust_function_creators=1,
and some tests modified this variable without resetting it to its
original value.
Fix 6: All tests that use this variable have been updated so that
they reset the value at end of test.
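A sketch of the resulting idiom in tests (connection names as in the
standard replication setup):

    connection slave;
    --source include/stop_slave.inc
    # reconfigure replication here
    --source include/start_slave.inc
    connection master;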