TRUNCATE TABLE fails to replicate when stmt-based binlogging is not supported.
There were two separate problems with the code, both of which are fixed with
this patch:
1. An error was printed by InnoDB for TRUNCATE TABLE in statement mode under
the isolation levels READ COMMITTED and READ UNCOMMITTED, since InnoDB does
not permit statement-based replication of DML statements at those levels.
However, TRUNCATE TABLE is not transactional; it is DDL and should therefore
be allowed to replicate as a statement.
2. The statement was not logged in mixed mode because of the error above, but
the error was not reported to the client.
This patch fixes the problem by treating TRUNCATE TABLE as DDL, that is,
always logging it as a statement, and by not reporting an error from InnoDB
for TRUNCATE TABLE.
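For illustration, a minimal standalone C++ sketch of the decision described
above; the enum values and the function are hypothetical stand-ins, not the
server's actual logging code:

  #include <cstdio>

  enum sql_command   { SQLCOM_INSERT, SQLCOM_UPDATE, SQLCOM_TRUNCATE };
  enum isolation_lvl { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ };

  // Decide whether a statement may be written to the binary log as a
  // statement. InnoDB refuses statement-based logging of DML under
  // READ COMMITTED/READ UNCOMMITTED, but TRUNCATE TABLE is DDL and is
  // always statement-logged.
  bool statement_logging_allowed(sql_command cmd, isolation_lvl iso)
  {
    if (cmd == SQLCOM_TRUNCATE)
      return true;                 // DDL: always logged as a statement
    return iso != READ_COMMITTED && iso != READ_UNCOMMITTED;
  }

  int main()
  {
    std::printf("TRUNCATE under READ COMMITTED: %d\n",
                statement_logging_allowed(SQLCOM_TRUNCATE, READ_COMMITTED));
    std::printf("UPDATE under READ COMMITTED:   %d\n",
                statement_logging_allowed(SQLCOM_UPDATE, READ_COMMITTED));
  }
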
The next number (AUTO_INCREMENT) field of the table was not initialized
when applying write rows events, causing some engines (InnoDB) to not
update the table's auto_increment value correctly.
This patch fixes the problem by honoring the next number field if present.
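A toy C++ sketch of why the counter stays behind when the field is left
unset; the names mirror the server's fields but the types here are
stand-ins, not the actual handler interface:

  #include <cstdio>

  struct Field { /* the AUTO_INCREMENT column */ };

  struct TABLE {
    Field *found_next_number_field = nullptr; // set if the table has AUTO_INCREMENT
    Field *next_number_field = nullptr;       // what the engine consults on insert
  };

  struct toy_engine {
    unsigned long long next_auto_inc = 1;

    void write_row(TABLE *t, unsigned long long inserted_value)
    {
      // Engines only maintain their counter when next_number_field is set;
      // with it left null, rows applied from the binlog do not advance it.
      if (t->next_number_field && inserted_value >= next_auto_inc)
        next_auto_inc = inserted_value + 1;
    }
  };

  int main()
  {
    Field auto_inc_col;
    TABLE t;
    t.found_next_number_field = &auto_inc_col;

    toy_engine engine;
    // The idea of the fix: before applying write rows events, point
    // next_number_field at the AUTO_INCREMENT column, and reset it after.
    t.next_number_field = t.found_next_number_field;
    engine.write_row(&t, 42);
    t.next_number_field = nullptr;

    std::printf("engine next auto_increment: %llu\n", engine.next_auto_inc);
  }
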
In certain situations, a scan of the table will return the error
code HA_ERR_RECORD_DELETED, and this error code is not
correctly caught in the Rows_log_event::find_row() function, which
causes an error to be returned for this case.
This patch fixes the problem by adding code to either ignore the
record and continue with the next one, in the case of a table
scan, or change the error code to HA_ERR_KEY_NOT_FOUND, in the case
of a key lookup.
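A simplified standalone C++ sketch of that handling; the read interface is a
stand-in for the handler calls, and the numeric error values are only
illustrative (the real ones live in my_base.h):

  #include <cstdio>

  enum ha_error {
    HA_OK                 = 0,
    HA_ERR_KEY_NOT_FOUND  = 120,
    HA_ERR_RECORD_DELETED = 134,
    HA_ERR_END_OF_FILE    = 137,
  };

  typedef ha_error (*fetch_fn)();   // stand-in for the engine read calls

  // Table scan: ignore records flagged as deleted and continue scanning.
  ha_error scan_until_found(fetch_fn next_record)
  {
    ha_error err;
    do {
      err = next_record();
    } while (err == HA_ERR_RECORD_DELETED);
    return err;                     // HA_OK, end of file, or a real error
  }

  // Key lookup: a deleted record means the key is effectively not there,
  // so report HA_ERR_KEY_NOT_FOUND instead of the internal scan code.
  ha_error lookup_by_key(fetch_fn index_read)
  {
    ha_error err = index_read();
    return err == HA_ERR_RECORD_DELETED ? HA_ERR_KEY_NOT_FOUND : err;
  }

  // Fake engine read: the first row happens to be marked deleted.
  static int calls = 0;
  ha_error fake_read() { return ++calls == 1 ? HA_ERR_RECORD_DELETED : HA_OK; }

  int main()
  {
    std::printf("scan:   %d\n", scan_until_found(fake_read));  // 0 (row found)
    std::printf("lookup: %d\n", lookup_by_key(fake_read));     // 0 here
  }
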
The code that reads the value of a system variable was extracting the value
at the PREPARE stage and substituting it (as a constant) into the parse tree.
Note that this must be a reversible transformation, i.e. it must be reversed before
each re-execution.
Unfortunately this cannot be reliably done using the current code, because there are
other non-reversible source tree transformations that can interfere with this
reversible transformation.
Fixed by not resolving the value at PREPARE, but at EXECUTE (as the rest of the
functions do). Added a cache of the value (so that it's constant throughout
the execution of the query). Note that the cache also caches NULL values.
Updated an obsolete related test suite (variables-big) and the code to test the
result type of system variables (as per bug 74).
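A simplified C++ sketch of that caching behaviour against a toy variable
store; the class and the store are illustrative, not the server's Item
classes:

  #include <cstdio>
  #include <map>
  #include <optional>
  #include <string>

  // Stand-in for the global system-variable store.
  std::map<std::string, std::optional<long long>> g_sysvars = {
      {"max_join_size", 1000000}, {"some_nullable_var", std::nullopt}};

  class cached_sysvar_ref {
    std::string name_;
    bool cached_ = false;             // resolved yet for this execution?
    std::optional<long long> value_;  // cached value; may legitimately be NULL

  public:
    explicit cached_sysvar_ref(std::string name) : name_(std::move(name)) {}

    // Called at EXECUTE time; the first call reads the variable, later calls
    // reuse the cache so the value stays constant for the whole statement.
    // NULL is cached too, so a NULL result does not trigger a re-read.
    std::optional<long long> value()
    {
      if (!cached_) {
        value_ = g_sysvars[name_];
        cached_ = true;
      }
      return value_;
    }

    // Called between executions of a prepared statement: drop the cache so
    // the next EXECUTE sees the variable's current value.
    void cleanup() { cached_ = false; }
  };

  int main()
  {
    cached_sysvar_ref v("max_join_size");
    std::printf("first read : %lld\n", v.value().value_or(-1));
    g_sysvars["max_join_size"] = 5;   // changed mid-statement...
    std::printf("same stmt  : %lld\n", v.value().value_or(-1));  // still cached
    v.cleanup();                      // next EXECUTE
    std::printf("re-execute : %lld\n", v.value().value_or(-1));  // new value
  }
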
The Blackhole engine did not support row-based replication
since the delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support for the
Blackhole engine by implementing delete_row() and update_row(), and by
implementing the index and range searching functions so that the engine
pretends it has found the correct row to delete or update when executed
from the slave SQL thread.
It is necessary to only pretend this for the SQL thread, since
a SELECT executed on the Blackhole engine will otherwise never
return EOF, causing a livelock.
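A minimal standalone C++ sketch of that behaviour; the struct is a stand-in
for the handler (in the server the "slave SQL thread" test is made against
the THD, here it is just a flag), not the real ha_blackhole class:

  #include <cstdio>

  const int HA_ERR_END_OF_FILE = 137;   // illustrative value

  struct sketch_blackhole {
    bool in_slave_sql_thread;

    int write_row()  { return 0; }      // accept and discard everything
    int update_row() { return 0; }
    int delete_row() { return 0; }

    // Row lookup used by the row-based applier to locate the row to change.
    int index_read()
    {
      // Pretend the row was found so the applier can "update"/"delete" it,
      // but only for the slave SQL thread: a normal SELECT must see EOF,
      // otherwise the scan never terminates (a livelock).
      return in_slave_sql_thread ? 0 : HA_ERR_END_OF_FILE;
    }
    int rnd_next() { return index_read(); }
  };

  int main()
  {
    sketch_blackhole applier{true}, reader{false};
    std::printf("slave lookup: %d, client scan: %d\n",
                applier.index_read(), reader.index_read());
  }
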
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or rollback correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by a failing UPDATE or INSERT on a
transactional table, causing an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
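A toy C++ illustration of the state bug, under the assumption that the cache
keeps a "pending" event next to its buffer; this is not the server's binlog
cache code:

  #include <cstdio>
  #include <optional>
  #include <string>
  #include <vector>

  struct toy_trx_cache {
    std::vector<std::string> buffer;      // events already written to the cache
    std::optional<std::string> pending;   // last row event, not yet written

    void flush_pending()
    {
      if (pending) { buffer.push_back(*pending); pending.reset(); }
    }

    // Buggy rollback: empties the buffer but forgets the pending event.
    void rollback_buggy() { buffer.clear(); }

    // Fixed rollback: flush the pending event first so the whole state resets.
    void rollback_fixed() { flush_pending(); buffer.clear(); }

    bool empty() const { return buffer.empty() && !pending; }
  };

  int main()
  {
    toy_trx_cache c;
    c.pending = "Write_rows event";
    c.rollback_buggy();
    std::printf("after buggy rollback, cache empty: %d\n", c.empty());  // 0
    c.rollback_fixed();
    std::printf("after fixed rollback, cache empty: %d\n", c.empty());  // 1
  }
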
BUG#37884: rpl_row_basic_2myisam and rpl_row_basic_3innodb fail sporadically in pushbuild
These have been fixed in 5.1-rpl. Re-applying fix for BUG#37884
in 5.1-bugteam, and disabling rpl_flushlog_loop for BUG#37733 in
5.1-bugteam.
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and this
has been extended to four bytes per character in 6.0),
an extra two bits have been encoded in the field metadata
word for fields of type Field_string (i.e., CHAR fields).
Since the metadata word is filled, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise xor of the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length <256, the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
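A standalone C++ sketch of this encoding; the helper names are illustrative,
and the arithmetic simply follows the description above:

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  const uint8_t TYPE_STRING = 0xFE;  // CHAR real type; its upper nibble is 1111

  // Pack a CHAR column's byte length (< 1024) into the two metadata bytes:
  // the two high bits of the length are xor-ed into the type byte and the
  // low eight bits go into the second byte.
  uint16_t pack_char_metadata(unsigned field_length)
  {
    assert(field_length < 1024);
    uint8_t high = TYPE_STRING ^ ((field_length & 0x300) >> 4);
    uint8_t low  = field_length & 0xFF;
    return (uint16_t)((high << 8) | low);
  }

  // Recover the byte length on the slave from the packed metadata word.
  unsigned unpack_char_length(uint16_t metadata)
  {
    return (((metadata >> 4) & 0x300) ^ 0x300) + (metadata & 0x00FF);
  }

  int main()
  {
    unsigned lengths[] = { 10, 255, 256, 300, 765 };
    for (unsigned len : lengths) {
      uint16_t m = pack_char_metadata(len);
      // For len < 256 the high byte stays 0xFE (the old encoding); for
      // len >= 256 it changes, which a pre-5.1.26 slave sees as an
      // unrecognized type.
      std::printf("len=%3u  metadata=0x%04X  decoded=%u\n",
                  len, (unsigned) m, unpack_char_length(m));
    }
  }
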
Problem: If INSERT is immediately followed by SELECT in another thread,
the newly inserted rows may not be returned by the SELECT statement, if
ENGINE=myisam and @@concurrent_insert=1. This caused sporadic errors in
rpl_insert_id.
Fix: The test now uses ENGINE=$engine_type when creating tables (so that
innodb is used). It also turns off @@concurrent_insert around the critical
place, so that it works if someone in the future writes a test that sets
$engine_type=myisam before sourcing extra/rpl_tests/rpl_insert_id.test.
It also adds ORDER BY to all SELECTs so that the result is deterministic.
The bug allowed multiple executing transactions working with non-transactional
tables to interfere with each other by interleaving the events of different
transactions.
The bug is fixed by writing non-transactional events to the transaction cache and
flushing the cache to the binary log at statement commit. To mimic the behavior
of normal statement-based replication, we flush the transaction cache in row-based
mode when there are no committed statements in the transaction cache,
which means we are committing the first one. This means that it will be written
to the binary log as a "mini-transaction" with just the rows for the statement.
Note that the changes here do not take effect when building the server with
HAVE_TRANSACTIONS set to false, but it is not clear if this was possible before
this patch either.
For row-based logging with AUTOCOMMIT=1, the code now always generates a
BEGIN/COMMIT pair for single statements, or a BEGIN/ROLLBACK pair in the
case of non-transactional changes in a statement that was rolled back. Note that
for the case where changes to a non-transactional table cause a rollback due
to an error, the statement will now be logged with a BEGIN/ROLLBACK pair, even
though some changes have been committed to the non-transactional table.
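A toy C++ sketch, under assumptions, of the wrapping described above for
row-based logging with AUTOCOMMIT=1; it only illustrates the BEGIN/COMMIT
(or BEGIN/ROLLBACK) "mini-transaction" shape, not the server's binlog code:

  #include <cstdio>
  #include <string>
  #include <vector>

  std::vector<std::string> binlog;  // the binary log, as a list of event strings

  // Flush one statement's row events from the transaction cache to the binlog.
  void flush_statement(const std::vector<std::string> &row_events,
                       bool rolled_back)
  {
    binlog.push_back("BEGIN");
    for (const auto &e : row_events)
      binlog.push_back(e);
    // Non-transactional changes cannot be undone, so the rows are still
    // logged, but the wrapper records that the statement was rolled back.
    binlog.push_back(rolled_back ? "ROLLBACK" : "COMMIT");
  }

  int main()
  {
    flush_statement({"Table_map", "Write_rows"}, /*rolled_back=*/false);
    flush_statement({"Table_map", "Write_rows (non-transactional part)"},
                    /*rolled_back=*/true);
    for (const auto &e : binlog)
      std::printf("%s\n", e.c_str());
  }
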
There was a failure in that SHOW SLAVE STATUS displayed a wrong message
when the slave stopped while processing a row event inserting into a column
without a default value.
The problem seems to have ceased after recent fixes in the RBR code.
However, the test was not updated to cover the case, which was commented out.
Uncommenting and editing the test.
Note that Bug#23907 is most probably a duplicate of this one.
irrelevant to execute since the charset information does not
affect row-based replication. The row-based
versions of the tests were removed, and the statement-based
version of the test was made executable in all three modes.
This involves removing any lines that cause the test to depend
on the contents of the binary log; instead, we
just check that replication works as it should.
rpl_ndb_rep_ignore
Reason: previous test, rpl_ndb_2multi_eng, does not sync slave with master
after cleanup, so tables are sometimes left on slave
Fix: sync_slave_with_master
The error message issued for a missing default value on an extra field
was not as informative as it should have been.
Fixed by improving the scheme for gathering, propagating, and reporting
errors when applying rows events.
The scheme is as follows.
Any error encountered while processing a row event is
registered with my_error().
At the end, Rows_log_event::do_apply_event() invokes rli->report() with a
message to display consisting of all the errors.
This mimics the display of `show warnings'.
A simple test checks three errors in processing an event.
Two hunks - a user-level error and pushing it onto the list -
are devoted to the already fixed Bug#31702.
Some open issues relating to this artifact are listed on the BUG#21842 page and
on WL#3679.
Todo: synchronize the statement in the test comments (that Update and Delete
events may not stop when an extra field does not have a default) with the WL#3228 spec.
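A small C++ sketch, under assumptions, of the gather-then-report scheme; the
functions are illustrative stand-ins for my_error() and rli->report(), and
the message strings are only examples:

  #include <cstdio>
  #include <string>
  #include <vector>

  std::vector<std::string> error_list;  // errors raised while applying the event

  void toy_my_error(const std::string &msg) { error_list.push_back(msg); }

  void toy_report(const std::string &msg)
  {
    std::fprintf(stderr, "Slave SQL: %s\n", msg.c_str());
  }

  void toy_do_apply_event()
  {
    // Several things can go wrong while applying one row event...
    toy_my_error("Error in Write_rows event");
    toy_my_error("Field 'a' doesn't have a default value");

    // ...and they are reported together at the end, not one by one,
    // much like `show warnings' lists all accumulated conditions.
    if (!error_list.empty()) {
      std::string all;
      for (const auto &e : error_list)
        all += (all.empty() ? "" : "; ") + e;
      toy_report(all);
      error_list.clear();
    }
  }

  int main() { toy_do_apply_event(); }
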
columns (default datatype value is assigned).
The mysql_update function has been modified so that trying to set a NOT NULL
field to NULL generates an error, rather than a warning, in the
set_field_to_null_with_conversions function.
without PK
Bug#31609 Not all RBR slave errors reported as errors
bug#32468 delete rows event on a table with foreign key constraint fails
The first two bugs comprise idempotency issues.
First, there was no error code reported under the conditions of the bug
description, although the slave SQL thread halted.
Second, execution differed with and without the presence of a primary key in
the table.
Third, there was no way to instruct the slave whether to ignore an error
and skip to the following event or to halt.
Fourth, there are handler errors which might happen due to idempotent
applying of the binlog, but those were not listed in the "idempotent" error
list.
All the named issues are addressed.
With respect to the third issue, there is a new global system variable, changeable
at run time, which controls the slave SQL thread behaviour.
The new variable allows further extensions to mimic the sql_mode
session/global variable.
To address the fourth issue, the new bug#32468 had to be fixed, as it was standing
in the way.
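A hypothetical C++ sketch of the resulting behaviour: a list of handler
errors that can legitimately occur when the binlog is applied idempotently,
and a run-time mode deciding whether such an error is ignored or halts the
slave SQL thread. All names and numeric values here are illustrative; the
text does not give the actual variable name:

  #include <cstdio>
  #include <set>

  enum slave_mode { MODE_STRICT, MODE_IDEMPOTENT };  // changeable at run time

  // Illustrative numeric codes; the real ones come from my_base.h.
  const int ERR_KEY_NOT_FOUND  = 120;
  const int ERR_FOUND_DUPP_KEY = 121;
  const int ERR_END_OF_FILE    = 137;

  const std::set<int> idempotent_errors = {
      ERR_KEY_NOT_FOUND,    // row to delete/update already gone
      ERR_FOUND_DUPP_KEY,   // row to insert already there
      ERR_END_OF_FILE,      // scan found nothing left to change
  };

  // True if the slave should skip to the next event, false if it should
  // halt and report the error.
  bool ignore_row_event_error(int error, slave_mode mode)
  {
    return mode == MODE_IDEMPOTENT && idempotent_errors.count(error) != 0;
  }

  int main()
  {
    std::printf("idempotent, dup key: %d\n",
                ignore_row_event_error(ERR_FOUND_DUPP_KEY, MODE_IDEMPOTENT));
    std::printf("strict, dup key:     %d\n",
                ignore_row_event_error(ERR_FOUND_DUPP_KEY, MODE_STRICT));
  }
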