The previous fix for the bug was incomplete. The test failed
because t2 did not exist on the slave (since the slave was
lagging) when the
wait_condition was executed. Fixed by inserting
sync_slave_with_master just after t2 was created.
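A minimal sketch of the pattern, assuming the standard include/wait_condition.inc polling macro (table and condition are illustrative):

    connection master;
    CREATE TABLE t2 (a INT);
    # Make sure t2 has replicated before anything polls it on the slave.
    # sync_slave_with_master also leaves the current connection on the
    # slave, so the wait_condition below runs where t2 now exists.
    sync_slave_with_master;
    let $wait_condition= SELECT COUNT(*) = 0 FROM t2;
    --source include/wait_condition.inc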
Problem: rpl_switch_stm_row_mixed did not wait until the row events generated by
INSERT DELAYED were written to the master binlog before it synchronized the
slave with the master. This caused sporadic errors where these rows were
missing on the slave.
Fix: wait until all rows appear on the slave.
This is a backport, applying the same change to 5.1-bugteam that was
previously applied to 6.0-rpl.
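A hedged sketch of the waiting pattern, assuming include/wait_condition.inc (table name and row count are illustrative):

    connection master;
    INSERT DELAYED INTO t1 VALUES (1), (2);
    # The delayed-insert handler thread writes these rows to the binlog
    # asynchronously, so syncing on the master's current position can run
    # too early; instead, poll the slave until every row has arrived.
    connection slave;
    let $wait_condition= SELECT COUNT(*) = 2 FROM t1;
    --source include/wait_condition.inc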
In a slow environment such as valgrind, the test is vulnerable because it
does not check whether the slave has stopped by the time a new session
issues `start slave;' -- disabling the test till it is fixed.
The test is vulnerable because it does not check whether the slave has
stopped by the time a new session issues `start slave;'.
Fixed by explicitly deploying the wait_for_slave_to_stop synchronization
macro.
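A minimal sketch of the added synchronization, assuming the standard include/wait_for_slave_to_stop.inc macro and the usual extra slave connection set up by master-slave.inc:

    connection slave;
    STOP SLAVE;
    # Block until both replication threads have actually stopped ...
    --source include/wait_for_slave_to_stop.inc
    # ... before another session restarts replication.
    connection slave1;
    START SLAVE;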
When flushing tables, there was a slight chance that the flush occurred
between the processing of two table map events. Since the tables are opened
one by one, this could leave the tables in an invalid state, and subsequent
locking of the tables would then crash the slave.
The problem is solved by opening and locking all tables at once using
simple_open_n_lock_tables(). The patch also changes open_tables() so that
pre-locking only takes place when trg_event_map is non-zero, which was not
the case before (this caused the lock to be placed in thd->locked_tables
instead of thd->lock, since the assumption was that triggers would be called
later and the tables should therefore be pre-locked).
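For illustration, this is the kind of statement that puts two consecutive Table_map events at the head of a single event group under row-based logging; a FLUSH TABLES on the slave landing between the processing of the two maps hit this window (tables are hypothetical):

    connection master;
    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    CREATE TABLE t2 (a INT) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1);
    INSERT INTO t2 VALUES (1);
    # A multi-table update emits Table_map(t1) and Table_map(t2)
    # back to back, followed by the row events for both tables.
    UPDATE t1, t2 SET t1.a = 2, t2.a = 2;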
Temporarily checking in an incorrect test case. Rationale: the impact of
this bug is negligible (it's almost a feature request). We need 5.1 to be
stable, and making a real fix is a bit risky. So the fix is postponed
to 6.0.
The test suite/rpl/t/rpl_innodb_bug28430.test was disabled because of
BUG#32247, but not re-enabled when BUG#32247 was fixed. I've re-enabled
it. The test and result file needed to be updated too.
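Re-enabling amounts to deleting the test's line from the suite's disabled list; the removed entry in suite/rpl/t/disabled.def would have had roughly this shape (the trailing comment is a placeholder):

    rpl_innodb_bug28430 : BUG#32247 <reason>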
Of the two claimed artifacts, the critical one is that the Table map of a
query following a CREATE-SELECT that failed with a duplicate key error was
skipped from instantiation (and thus from binlogging). That led to sending a
"chopped" group of data row events, without the table map at its head, to
the slave.
The slave cannot apply such bare data row events.
It is not easy to force the slave to react with an error in this case (the
second complaint on the bug report), because a missing table in
Rows_log_event::do_apply_event, the data row event handler, is a common
situation that normally indicates the event has to be filtered out based on
the replication do/ignore rules.
Fixed: table map creation and binlogging are restored by deploying the
standard cleanup call in select_create::abort().
No error is reported if by chance the table map was not binlogged. This is
left open, to be resolved together with the question of how to combine the
do/ignore rules with the situation where the Table_map is erroneously not
written to the binlog.
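A sketch of the triggering pattern (names are illustrative; assumes row-based binlogging, e.g. via include/have_binlog_format_row.inc):

    connection master;
    CREATE TABLE t1 (a INT PRIMARY KEY);
    # Fails part-way with a duplicate key error; before the fix, the
    # missing cleanup in the abort path caused the next query's Table_map
    # to be skipped from instantiation and binlogging.
    --error ER_DUP_ENTRY
    CREATE TABLE t2 (a INT PRIMARY KEY) SELECT 1 AS a UNION ALL SELECT 1;
    # The row events of this following statement could then reach the
    # slave as a "chopped" group, without their Table_map at the head.
    INSERT INTO t1 VALUES (1);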
Disabled 'rpl_redirect'; the failure is sporadic and the test is superfluous.
rpl_packet.test, rpl_packet.result:
Removing race conditions from rpl_packet that caused the test to fail.
If a binlog file is manually replaced with a namesake directory, the
internal purging did not handle the file-deletion error, so eventually a
post-execution guard fired an assert.
Fixed by reusing a snippet of code from bug@18199 to tolerate the lack of
the file, but no other error, at an attempt to delete it.
The same fix was applied to the index file deletion.
The cset carries pieces of manual merging.
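A hedged sketch of the scenario (binlog file names are illustrative):

    # Outside the server, replace a binlog file with a namesake directory:
    #   rm master-bin.000001 && mkdir master-bin.000001
    # Before the fix, the purge below could fire a post-execution assert;
    # after it, a missing file is tolerated while any other deletion error
    # (such as hitting this directory) is reported instead of asserting.
    PURGE BINARY LOGS TO 'master-bin.000002';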