INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication only, but not with row-based replication.
These statements are never replicated statement-based, but the inserts
into and deletes from the mysql.plugin table were replicated row-based.
The fix is to suppress binlogging during the inserts into and deletes
from the mysql.plugin table.
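A minimal sketch of the expected behavior after the fix (uses the
bundled example storage engine; the exact .so name is platform-dependent):

    SET SESSION binlog_format = ROW;
    INSTALL PLUGIN example SONAME 'ha_example.so';
    # after the fix, no Write_rows event for mysql.plugin should appear
    SHOW BINLOG EVENTS;
    UNINSTALL PLUGIN example;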
partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took a different code path (through mysql_alter_table instead
of mysql_admin_table), which differs in two ways:
1) ALTER TABLE opens the tables differently than the admin commands do,
so it returned with an error before even attempting the command
2) ALTER TABLE does not send diagnostic rows to the client, while the
lower-level admin functions expect to keep using that protocol,
resulting in an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t, and added
a check in mysql_admin_table to set up the list of partitions that
should be used.
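For illustration, the remapped statements (table and partition names
hypothetical):

    ALTER TABLE t1 ANALYZE PARTITION p0;
    ALTER TABLE t1 CHECK PARTITION p0, p1;
    ALTER TABLE t1 OPTIMIZE PARTITION ALL;
    ALTER TABLE t1 REPAIR PARTITION p1;
    # these now take the same code path as
    ANALYZE TABLE t1;
    CHECK TABLE t1;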
Partitioned tables still do not work with REPAIR TABLE/PARTITION
USE_FRM, since that would require converting each partition into a
table, running REPAIR TABLE t USE_FRM on it, checking that the data
still satisfies the partitioning function, and then converting the
table back into a partition.
NOTE: I have removed the following functions from the handler
interface, since they are no longer needed:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
THIS ALTERS THE STORAGE ENGINE API
Problem: binlog_stm_binlog runs INSERT DELAYED queries, and
then prints the contents of the binlog. Before checking the
contents of the binlog, the test waits until the rows have
appeared in the table. However, this is not enough, since
INSERT DELAYED does not write rows to the binlog at the same
time as it writes them to the table. So there is a race.
Fix: Add a FLUSH TABLES before SHOW BINLOG EVENTS. That
waits until the insert_delayed thread is done.
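A sketch of the resulting test flow (table name hypothetical):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    INSERT DELAYED INTO t1 VALUES (1);
    # FLUSH TABLES blocks until the delayed-insert thread has flushed
    # its rows, so they are in the binlog before we read it
    FLUSH TABLES;
    SHOW BINLOG EVENTS;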
BUG#37884: rpl_row_basic_2myisam and rpl_row_basic_3innodb fail sporadically in pushbuild
These have been fixed in 5.1-rpl. Re-applying fix for BUG#37884
in 5.1-bugteam, and disabling rpl_flushlog_loop for BUG#37733 in
5.1-bugteam.
Problem: the test waits for a 'DROP TEMPORARY TABLE' event to
appear in the master's binlog, then checks on the slave whether
the number of temporary tables has decreased. The slave does
not sync, causing a race.
Fix: check for the 'DROP TEMPORARY TABLE' event on slave
instead of on master.
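A sketch of the corrected wait, assuming the standard
wait_for_binlog_event.inc helper:

    connection slave;
    let $wait_binlog_event= DROP TEMPORARY TABLE;
    --source include/wait_for_binlog_event.inc
    # only now is it safe to check the temporary-table count
    SHOW STATUS LIKE 'Slave_open_temp_tables';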
This is not a fix to the bug. It only adds debug info, so
that we can analyze the bug better next time it happens.
Please revert the patch after the bug is fixed.
The reason for the failure is that the IO thread passes through a
sequence of state changes before it eventually gets stuck in the
expected state, with Slave_IO_Running = No. It is unreasonable to wait
for a running status when the whole idea of the test is to reach the
IO thread error.
Fixed by changing the wait condition.
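A sketch of the changed wait, assuming the standard helper:

    connection slave;
    # wait for the IO thread to stop on its own rather than polling
    # for a transient Slave_IO_Running value
    --source include/wait_for_slave_io_to_stop.inc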
Problem: the master's binlog has 'create table t1'. The master's
binlog was removed before the slave could replicate it. In the test's
cleanup code, the master did 'drop table t1', which caused the slave's
SQL thread to stop with an error, since it did not know about t1.
Fix: t1 is just an auxiliary construction, only needed on
master. Hence, we turn off binlogging before t1 is created,
drop t1 as soon as we don't need it anymore, and then turn
on binlogging again.
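A minimal sketch of the fix (table definition hypothetical):

    SET SQL_LOG_BIN = 0;      # keep t1 out of the master's binlog
    CREATE TABLE t1 (a INT);
    # ... use t1 as the auxiliary construction ...
    DROP TABLE t1;
    SET SQL_LOG_BIN = 1;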
Many dump threads can exist due to the way the new version of mtr
governs suites. For this immediate problem the test is refined not to
use I_S but to reconnect explicitly, preserving the logic of the
original target bug-fix verification.
Problem: the test set @@global.init_slave to garbage at a point that
was not guaranteed to be after the slave's SQL thread had used it.
That would cause the slave's SQL thread to stop in rare cases.
Fix: The test does not care about the value of
@@global.init_slave, except that it should be different on
master and slave. Hence, we set @@global.init_slave to
something that is valid SQL.
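A sketch of the change (the exact value is hypothetical; anything that
is valid SQL and differs between master and slave will do):

    SET GLOBAL init_slave = 'SET @dummy = 0';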
Problem was that ha_partition had the HA_FILE_BASED flag set (since it
uses a .par file), but after open it uses the first partition's flags,
which results in different case handling for create and for open.
Solution was to change the underlying partition names so they are
consistent.
(Only happens when lower_case_table_names = 2, i.e. Mac OS X, with
storage engines without HA_FILE_BASED, like InnoDB and Memory.)
(Recommit after renaming check_lowercase_names to
get_canonical_filename and moving it from handler.h to mysql_priv.h.)
NOTE: if a mixed-case name for a partitioned table was created when
lower_case_table_names = 2, the table should be renamed or dropped
before using the updated version (see Bug#37402 for more info).
The problem is that relying on the output of the 'ls' command is not
portable, as its behavior differs between systems and the command may
not be available at all (e.g. on Windows).
So I added a list_files command that relies on the portable mysys
library instead (and also list_files_write_file and
list_files_append_file, since the test was using '--exec ls' in those
ways).
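A sketch of how a test might use the new commands (paths and patterns
hypothetical):

    --list_files $MYSQLTEST_VARDIR/tmp *.frm
    # write the listing to a file instead of echoing it
    --list_files_write_file $MYSQLTEST_VARDIR/tmp/files.txt $MYSQLTEST_VARDIR/tmp *.frm
    # or append to an existing file
    --list_files_append_file $MYSQLTEST_VARDIR/tmp/files.txt $MYSQLTEST_VARDIR/tmp *.MYI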
Problem was that ha_partition::records was not implemented, so the
default handler::records was used, which is not correct if the engine
does not support HA_STATS_RECORDS_IS_EXACT.
Solution was to implement ha_partition::records as a wrapper around the
underlying partitions' records.
The rows column in EXPLAIN PARTITIONS will now include the total
number of records in the partitioned table.
(Recommit after removing commented-out code.)
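An illustration of the visible effect (table name hypothetical):

    # the rows column now reports the total record count summed over
    # all partitions of t1
    EXPLAIN PARTITIONS SELECT * FROM t1;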
Problem was that auto_repair, is_crashed and check_and_repair were not
implemented in ha_partition.
Solution: implemented is_crashed and check_and_repair as loops over all
partitions, and auto_repair using the first partition.
(Recommit after fixing review comments.)
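An illustration of what this enables (table name hypothetical; assumes
a partitioned MyISAM table):

    # with MyISAM auto-repair enabled, a crashed partitioned table is
    # now detected (is_crashed) and repaired (check_and_repair) on
    # open; manual statements work across all partitions:
    CHECK TABLE t1;
    REPAIR TABLE t1;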
This bug has been fixed in two slightly different ways in
6.0-rpl and {5.1,6.0}-bugteam. To avoid future merge
problems, I'm now copying the 6.0-rpl fix to 5.1-bugteam.
The previous fix for the bug was incomplete. The test failed
because t2 did not exist on the slave (since the slave was
lagging) when the
wait_condition was executed. Fixed by inserting
sync_slave_with_master just after t2 was created.
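A sketch of the added synchronization (surrounding test context
hypothetical):

    connection master;
    CREATE TABLE t2 (a INT);
    # make sure t2 exists on the slave before the wait_condition runs
    sync_slave_with_master;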
Test was failing due to the addition of a '\x05' character in result
sets. Latest builds of the server have shown this problem to have
disappeared.
Removing the code within the test that disables it on Mac OS X.
Recommit due to a tree error on the earlier, approved patch.