BUG#20850: Assert during slave shutdown in many rpl_* tests.
This was caused by a race condition at the end of handle_slave_io
which under some circumstances allowed the cleanup to proceed before
the thread had completed.
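For illustration only (this is not the server code; all names below are
invented), the pattern that avoids this class of race is to make the
shutdown path wait on a condition variable until the worker thread has
flagged completion, instead of letting cleanup run concurrently with the
thread's final steps:

  #include <pthread.h>

  /* Hypothetical sketch: cleanup must not start until the worker thread
     has really finished. */
  static pthread_mutex_t run_lock=  PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  stop_cond= PTHREAD_COND_INITIALIZER;
  static bool running= false;

  static void *worker(void *)
  {
    /* ... the thread's real work would happen here ... */

    /* Very last step: flag completion under the mutex and wake waiters. */
    pthread_mutex_lock(&run_lock);
    running= false;
    pthread_cond_broadcast(&stop_cond);
    pthread_mutex_unlock(&run_lock);
    return 0;
  }

  static void wait_for_worker_to_stop()
  {
    /* The shutdown path blocks here instead of racing ahead with cleanup. */
    pthread_mutex_lock(&run_lock);
    while (running)
      pthread_cond_wait(&stop_cond, &run_lock);
    pthread_mutex_unlock(&run_lock);
  }

  int main()
  {
    pthread_t t;
    running= true;                  /* mark as running before the thread starts */
    pthread_create(&t, 0, worker, 0);
    wait_for_worker_to_stop();      /* only now is it safe to clean up */
    pthread_join(t, 0);
    return 0;
  }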
Fix random failures in test 'wait_timeout' that depend on exact timing.
1. Force a reconnect initially if necessary, as otherwise a slow startup
might already have caused a connection timeout before the test can even start.
2. Explicitly disconnect the first connection, to remove any ambiguity about
which connection is aborted by the timeout; that ambiguity was causing test
failures. (A client-side sketch of the reconnect idea follows.)
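The actual changes are in the mysqltest file for 'wait_timeout'; purely as
an illustration of the reconnect idea, here is a minimal C API client sketch
(connection parameters are placeholders) that forces a reconnect if the
server has already dropped the connection, and then disconnects explicitly:

  #include <mysql.h>
  #include <stdio.h>

  int main()
  {
    MYSQL *conn= mysql_init(NULL);
    my_bool reconnect= 1;
    /* Ask the client library to reconnect automatically if the server
       has already closed the connection (e.g. wait_timeout expired). */
    mysql_options(conn, MYSQL_OPT_RECONNECT, &reconnect);

    if (!mysql_real_connect(conn, "localhost", "root", "", "test", 0, NULL, 0))
      return 1;

    /* mysql_ping() notices a dead connection and, with MYSQL_OPT_RECONNECT
       set, re-establishes it before the real statements run. */
    if (mysql_ping(conn))
    {
      fprintf(stderr, "reconnect failed: %s\n", mysql_error(conn));
      return 1;
    }

    /* ... run the statements under test here ... */

    /* Disconnect explicitly, so there is no doubt later about which
       connection was aborted by the timeout. */
    mysql_close(conn);
    return 0;
  }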
The bug was that if the server was running in mixed binlogging mode, and
an INSERT DELAYED used components that require row-based logging, like
UUID(), the server binlogged the statement statement-based instead of
row-based, and so the correct data was not inserted on the slave.
With this changeset, when a delayed_insert thread is created and the
server's global binlog mode is "mixed", that thread uses row-based logging.
This also fixes BUG#20633 "INSERT DELAYED RAND() or @user_var does not
replicate statement-based": we don't fix it in statement-based mode (that
would require bookkeeping of the rand seeds and user variables used by each
row), but at least it now works in mixed mode (as row-based logging is used).
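As a hedged sketch of the rule described above (the types and names below
are invented for illustration, not the server's real ones): when the
delayed-insert handler thread is created and the global format is MIXED, it
simply switches itself to ROW, since it cannot know per row whether the
client statement used UUID(), RAND() or a user variable:

  #include <stdio.h>

  /* Hypothetical enum mirroring the binlog formats, not the server's type. */
  enum binlog_format { BINLOG_STATEMENT, BINLOG_MIXED, BINLOG_ROW };

  struct delayed_insert_thread
  {
    binlog_format format;

    delayed_insert_thread(binlog_format global_format)
    {
      /* In MIXED mode the delayed_insert thread plays it safe and always
         logs its rows row-based, so unsafe components replicate correctly. */
      format= (global_format == BINLOG_MIXED) ? BINLOG_ROW : global_format;
    }
  };

  int main()
  {
    delayed_insert_thread t(BINLOG_MIXED);
    printf("delayed insert logs %s\n",
           t.format == BINLOG_ROW ? "row-based" : "statement-based");
    return 0;
  }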
We re-enable rpl_switch_stm_row_mixed.test to test the fixes (so BUG#18590,
which was about re-enabling this test, can be closed).
Between when it was disabled and now, some good changes to row-based
binlogging (no generation of table map events for unchanged tables)
have changed the test's result file.
By default we never run disabled tests (even if they're
explicitly listed on the command line). We add an option --enable-disabled
which runs tests even though they are disabled, and prints, for each
such test, the comment explaining why it was disabled.
The reason for the change is the case where you want to run, for example,
"all tests which are about NDB": mysql-test-run.pl t/*ndb*.test used to run
some disabled NDB tests, causing failures and subsequent investigations.
Code amended and approved by Kent.
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on the auto_increment_* variables, and this
value was bigger than the column's maximum possible value, then that maximum
possible value was inserted (after issuing a warning). But this didn't honour
the auto_increment_* variables (and so could cause conflicts in a
master-master replication setup where one master is supposed to generate
only even numbers and the other only odd numbers), so now we "round down"
this maximum possible value to honour the auto_increment_* variables before
inserting it.
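The "round down" step is just arithmetic on the sequence offset,
offset + increment, offset + 2*increment, ...; a small sketch under assumed
variable names (mirroring auto_increment_increment and auto_increment_offset,
not the server's code):

  #include <stdio.h>

  /* Hypothetical helper: the largest value <= col_max that still lies on
     the sequence offset, offset + inc, offset + 2*inc, ... */
  static unsigned long long
  round_down_to_sequence(unsigned long long col_max,
                         unsigned long long inc,
                         unsigned long long offset)
  {
    if (col_max < offset)
      return 0;                     /* nothing in the sequence fits */
    return offset + ((col_max - offset) / inc) * inc;
  }

  int main()
  {
    /* Example: a master generating only even numbers (inc=2, offset=2)
       with a TINYINT UNSIGNED column (max 255) now inserts 254, not 255. */
    printf("%llu\n", round_down_to_sequence(255, 2, 2));
    return 0;
  }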
before update trigger on NDB table".
Two main changes:
- We use TABLE::read_set/write_set bitmaps for marking the fields used by
a statement, instead of Field::query_id, in 5.1.
- Now, when we mark the columns used by a statement, we take into account
the columns used by the table's triggers, instead of marking all columns as
used whenever the table has triggers (a simplified sketch follows).
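A simplified model of the idea (hypothetical code, not the server's TABLE
structure): read_set/write_set are per-table column bitmaps, and the columns
touched by triggers are OR-ed in individually instead of marking every
column of the table as used:

  #include <bitset>
  #include <stdio.h>

  /* Hypothetical, simplified stand-in for a table's column bitmaps. */
  static const size_t MAX_COLS= 64;

  struct table_marks
  {
    std::bitset<MAX_COLS> read_set;
    std::bitset<MAX_COLS> write_set;
  };

  int main()
  {
    table_marks t;

    /* Columns named in the statement itself,
       e.g. UPDATE t SET c0=... WHERE c1=... */
    t.write_set.set(0);
    t.read_set.set(1);

    /* Columns used by the BEFORE UPDATE trigger are added individually,
       instead of flagging every column of the table. */
    t.read_set.set(2);
    t.write_set.set(3);

    printf("columns read: %zu, written: %zu\n",
           t.read_set.count(), t.write_set.count());
    return 0;
  }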