INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication only, but not with row-based replication.
These statements are never replicated statement-based, but under
row-based replication the resulting inserts into and deletes from
the mysql.plugin table were written to the binary log.
The fix is to suppress binlogging during insert and delete to
and from the mysql.plugin table.
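For illustration only: the fix happens inside the server, but the
session-level equivalent of suppressing binlogging around the
mysql.plugin changes would look like this sketch (plugin name and
library are made up):

    SET SESSION sql_log_bin = 0;
    INSERT INTO mysql.plugin (name, dl) VALUES ('example', 'ha_example.so');
    DELETE FROM mysql.plugin WHERE name = 'example';
    SET SESSION sql_log_bin = 1;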
partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (through mysql_alter_table instead of
mysql_admin_table) which differs in two ways:
1) ALTER TABLE opens the tables in a different way than the admin
commands do, resulting in an error being returned before the command
was even attempted
2) ALTER TABLE does not start sending any diagnostic rows to the
client, while the lower-level admin functions continue to use them,
resulting in an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
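A minimal usage sketch of the remapped commands (table and partition
names are made up); both statements now go through mysql_admin_table:

    CREATE TABLE t (a INT)
      PARTITION BY HASH (a) PARTITIONS 2;
    ALTER TABLE t ANALYZE PARTITION p0;  -- per-partition admin command
    ANALYZE TABLE t;                     -- whole-table equivalent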
Partitioned tables will still not work with
REPAIR TABLE/PARTITION ... USE_FRM, since that would require moving
each partition into a table, running REPAIR TABLE t USE_FRM on it,
checking that the data still fulfills the partitioning function, and
then moving the table back to being a partition.
NOTE: I have removed the following functions from the handler
interface, since they are no longer needed:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
THIS ALTERS THE STORAGE ENGINE API
BUG#37884: rpl_row_basic_2myisam and rpl_row_basic_3innodb fail sporadically in pushbuild
These have been fixed in 5.1-rpl. Re-applying fix for BUG#37884
in 5.1-bugteam, and disabling rpl_flushlog_loop for BUG#37733 in
5.1-bugteam.
Problem: the test waits for a 'DROP TEMPORARY TABLE' event to
appear in the master's binlog, then checks on the slave whether
the number of temporary tables has decreased. The slave does
not sync, causing a race.
Fix: check for the 'DROP TEMPORARY TABLE' event on slave
instead of on master.
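A sketch of the corrected wait, assuming the standard
wait_for_binlog_event.inc helper from mysql-test/include:

    connection slave;
    let $wait_binlog_event= DROP TEMPORARY TABLE;
    source include/wait_for_binlog_event.inc;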
This is not a fix to the bug. It only adds debug info, so
that we can analyze the bug better next time it happens.
Please revert the patch after the bug is fixed.
The reason for the failure is that the IO thread passes through a
sequence of state changes before it eventually gets stuck with the
expected running state being NO.
It is unreasonable to wait for the running status when the whole idea
of the test is to get to the IO thread error.
Fixed by changing the waiting condition.
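A sketch of a waiting condition that targets the error state directly,
assuming the wait_for_slave_io_error.inc helper; the error number is a
placeholder:

    connection slave;
    # placeholder errno; the real test waits for its specific IO error
    let $slave_io_errno= 1236;
    source include/wait_for_slave_io_error.inc;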
Problem: master binlog has 'create table t1'. Master binlog
was removed before slave could replicate it. In test's cleanup
code, master did 'drop table t1', which caused slave sql
thread to stop with an error since slave sql thread did not
know about t1.
Fix: t1 is just an auxiliary construction, only needed on
master. Hence, we turn off binlogging before t1 is created,
drop t1 as soon as we don't need it anymore, and then turn
on binlogging again.
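A minimal sketch of the pattern (table definition is made up):

    set sql_log_bin= 0;
    create table t1 (a int);
    # ... use t1 as a master-only auxiliary table ...
    drop table t1;
    set sql_log_bin= 1;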
Many dump threads can exist due to the way the new version of mtr
governs suites. For this immediate problem the test is refined not to
use INFORMATION_SCHEMA but rather to reconnect explicitly, while
preserving the logic that verifies the fix for the original target bug.
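A sketch of the explicit reconnect in mysqltest (connection name and
parameters are made up):

    # reconnect instead of querying INFORMATION_SCHEMA.PROCESSLIST
    disconnect master_check;
    connect (master_check,127.0.0.1,root,,test,$MASTER_MYPORT);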
Problem: the test set @@global.init_slave to garbage at a time
which was not guaranteed to be after the time when the slave's
SQL thread used it. That would cause the slave's SQL thread to
stop in rare cases.
Fix: The test does not care about the value of
@@global.init_slave, except that it should be different on
master and slave. Hence, we set @@global.init_slave to
something that is valid SQL.
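For example, something like the following on the slave; the exact
statement chosen is an assumption, any valid SQL that differs from the
master's value works:

    set @@global.init_slave= 'set @a= 1';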
This bug has been fixed in two slightly different ways in
6.0-rpl and {5.1,6.0}-bugteam. To avoid future merge
problems, I'm now copying the 6.0-rpl fix to 5.1-bugteam.
The previous fix for the bug was incomplete. The test failed
because t2 did not exist on the slave (since the slave was
lagging) when the
wait_condition was executed. Fixed by inserting
sync_slave_with_master just after t2 was created.
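A sketch of the added synchronization (table definition is made up):

    connection master;
    create table t2 (a int);
    sync_slave_with_master;  # t2 now exists on the slave
    connection master;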
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and this
has been extended to four bytes per character in 6.0),
an extra two bits have been encoded in the field metadata
word for fields of type Field_string (i.e., CHAR fields).
Since the metadata word is filled, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise xor with the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length <256, the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
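A worked example of the encoding, derived from the description above
(the exact shifts used in the server code are an assumption):
Field_string's real type is 254 (0xFE), and a CHAR(255) column in utf8
has field_length = 3 * 255 = 765 bytes, which does not fit in one byte:

    SELECT
      254 ^ ((765 & 0x300) >> 4) AS metadata_byte0,  -- 222: upper nibble no longer 1111
      765 & 0xFF AS metadata_byte1;                  -- 253: low byte of the length

An old slave reads metadata_byte0 as the type and does not recognize
it, which is why it stops for lengths of 256 or more.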
Problem: rpl_switch_stm_row_mixed did not wait until row events generated by
INSERT DELAYED were written to the master binlog before it synchronized slave
with master. This caused sporadic errors where these rows were missing on
slave.
Fix: wait until all rows appear on the slave.
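A sketch of the added wait, using the standard wait_condition.inc
helper (table name and row count are placeholders):

    connection slave;
    let $wait_condition= SELECT COUNT(*) = 5 FROM t1;
    source include/wait_condition.inc;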
This is a backport, applying the same fix to 5.1-bugteam as was
previously applied to 6.0-rpl.
On a slow environment like valgrind the test is vulnerable
because it does not check whether the slave has stopped by the
time the new session requests `start slave;' -- disabling the
test till it is fixed.
The test is vulnerable because it does not check whether the slave has
stopped by the time the new session requests `start slave;'.
Fixed by explicitly deploying the wait_for_slave_to_stop
synchronization macro.
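A sketch of the macro's use, assuming the standard helper from
mysql-test/include:

    source include/wait_for_slave_to_stop.inc;
    start slave;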
Problem: If INSERT is immediately followed by SELECT in another thread,
the newly inserted rows may not be returned by the SELECT statement, if
ENGINE=myisam and @@concurrent_insert=1. This caused sporadic errors in
rpl_insert_id.
Fix: The test now uses ENGINE=$engine_type when creating tables (so that
innodb is used). It also turns off @@concurrent_insert around the critical
place, so that it works if someone in the future writes a test that sets
$engine_type=myisam before sourcing extra/rpl_tests/rpl_insert_id.test.
It also adds ORDER BY to all SELECTs so that the result is deterministic.
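A sketch of the two stabilizing measures (table name is made up):

    set @old_concurrent_insert= @@global.concurrent_insert;
    set @@global.concurrent_insert= 0;
    insert into t1 values (1);
    select a from t1 order by a;  # ORDER BY makes the result deterministic
    set @@global.concurrent_insert= @old_concurrent_insert;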
When flushing tables, there was a slight chance that the flush occurred
between the processing of two table map events. Since the tables are
opened one by one, this could leave the tables invalid, so that
subsequent locking of tables would cause the slave to crash.
The problem is solved by opening and locking all tables at once using
simple_open_n_lock_tables(). The patch also contains a change to
open_tables() so that pre-locking only takes place when the
trg_event_map is not zero, which was not the case before (this caused
the lock to be placed in thd->locked_tables instead of thd->lock,
since the assumption was that triggers would be called later and
therefore the tables should be pre-locked).
Temporarily checking in an incorrect test case. Rationale: the impact of
this bug is negligible (it's almost a feature request). We need 5.1 to be
stable, and making a real fix is a bit risky. So the fix is postponed
to 6.0.
The test suite/rpl/t/rpl_innodb_bug28430.test was disabled because of
BUG#32247, but not re-enabled when BUG#32247 was fixed. I've re-enabled
it. The test and result file needed to be updated too.
The test's SELECT could not perform deterministically, because the
table remains updatable by the running event handler.
Fixed by changing the verification to use logical values instead of
comparison with pre-recorded results.
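A sketch of the reworked verification (names are made up): instead of
diffing full rows against a recorded result, the test asserts a
condition that stays stable while the event handler keeps updating the
table:

    select count(*) >= 1 as event_has_fired from t1;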