BUG#11758262 - 50439: MARK INSERT...SEL...ON DUP KEY UPD,REPLACE...SEL,CREATE...[IGN|REPL] SEL
Problem: The following statements can cause the slave to go out of sync
if logged in statement format:
INSERT IGNORE...SELECT
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE
REPLACE ... SELECT
UPDATE IGNORE
CREATE ... IGNORE SELECT
CREATE ... REPLACE SELECT
Background: Since the order of the rows returned by the SELECT
statement (or otherwise) may differ on master and slave, the
above statements may cause the slave to go out of sync with
the master.
Fix:
Issue a warning when statements like the above are executed and
the binlog format is STATEMENT. If the logging format is MIXED,
use row-based logging. Marking a statement as unsafe is done in
sql/sql_parse.cc instead of sql/sql_yacc.cc, because when one token
has been parsed we cannot be sure that the other tokens have been
parsed as well.
Six new warning messages have been added, one for each unsafe statement.
binlog.binlog_unsafe.test has been updated to incorporate these additional unsafe statements.
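A minimal sketch of the kind of statement that now triggers the warning,
assuming the binary log is enabled (table names are hypothetical):

  SET SESSION binlog_format = 'STATEMENT';
  CREATE TABLE t1 (a INT PRIMARY KEY);
  CREATE TABLE t2 (a INT);
  INSERT INTO t2 VALUES (1), (2);
  -- Unsafe: which of two duplicate-conflicting rows wins under IGNORE
  -- depends on the order in which the SELECT returns them.
  INSERT IGNORE INTO t1 SELECT a FROM t2;
  SHOW WARNINGS;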
Before BUG#28796, an empty host was used to identify that an instance was no
longer a slave. However, BUG#28796 changed this behavior and one cannot set
an empty host. Moreover, RESET SLAVE only cleans up the information on the
next event to retrieve from the master, disables SSL, and resets the
heartbeat period.
So a call to SHOW SLAVE STATUS after issuing a RESET SLAVE still returns some
valid information, such as host, port, user and password.
To fix this problem, we have introduced the command RESET SLAVE ALL that does
what a regular RESET SLAVE does and also clears host, port, user and password
information thus allowing users to identify when an instance is no longer a
slave.
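A minimal sketch of the difference, as seen from a slave (output elided):

  STOP SLAVE;
  RESET SLAVE;
  SHOW SLAVE STATUS;   -- still shows Master_Host, Master_Port, Master_User
  RESET SLAVE ALL;
  SHOW SLAVE STATUS;   -- host, port, user and password are now cleared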
- add support for choosing the engine of the test
table (t1) with $engine_type (a sketch follows this list)
- add a primary key to the test table (t1) to support
replication of BLOB/TEXT (also with ENGINE=ndb)
- change the suppression, since the warning printed to the error log
now says "Column 1"
The slave was not able to find the correct row in the InnoDB
table, because the row fetched from the InnoDB table would not
match the before image. This happened because the (don't-care)
bytes in the NULLed fields would change once the row was stored
in the storage engine (from zero to the default value). This
would make the bulk memory comparison (using memcmp) fail.
We fix this by taking a preventive measure and avoiding memcmp
for tables that contain nullable fields, thereby protecting the
slave search routine from engines that return arbitrary values
for don't-care bytes (in the NULLed fields). Instead, the slave
thread only checks the null_bits and those fields that are not
set to NULL when comparing the before image against the storage
engine row.
mysql-test/extra/rpl_tests/rpl_record_compare.test:
Added test case to the include file so that this is tested
with more than one engine.
mysql-test/suite/rpl/r/rpl_row_rec_comp_innodb.result:
Result update.
mysql-test/suite/rpl/r/rpl_row_rec_comp_myisam.result:
Result update.
mysql-test/suite/rpl/t/rpl_row_rec_comp_myisam.test:
Moved the include file last, so that the result from
BUG#11766865 is not intermixed with the result for
BUG#11760454.
sql/log_event.cc:
Skips memory comparison if the table has nullable
columns and compares only non-nulled fields in the
field comparison loop.
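A minimal sketch of the scenario the fix addresses (schema assumed):

  CREATE TABLE t1 (c1 INT, c2 INT DEFAULT 42) ENGINE=InnoDB;  -- no PK, nullable c2
  SET SESSION binlog_format = 'ROW';
  INSERT INTO t1 VALUES (1, NULL);
  -- Before the fix, the slave could fail to find this row: the before image
  -- holds don't-care bytes for the NULLed c2, while the engine may return the
  -- default value in those bytes, so a raw memcmp of the records differs.
  UPDATE t1 SET c1 = 2 WHERE c1 = 1;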
Currently, rpl_semi_sync is failing in PB due to the warning message:
"Slave SQL: slave SQL thread is being stopped in the middle of applying of a group having updated a non-transaction table; waiting for the group completion ..."
The problem started happening after the fix for BUG#11762407, which was
automatically suppressing some warning messages.
To fix the current issue, we suppress the aforementioned warning message
and take the opportunity to make the sentence clearer.
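A sketch of the kind of suppression added to the test (exact message
pattern assumed):

  CALL mtr.add_suppression("Slave SQL thread is being stopped in the middle of applying a group");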
FAILED DROP DATABASE CAN BREAK STATEMENT BASED REPLICATION
The first phase of DROP DATABASE is to delete the tables in the database.
If deletion of one or more of the tables fails (e.g. due to a FOREIGN KEY
constraint), DROP DATABASE will be aborted. However, some tables could
still have been deleted. The problem was that nothing would be written
to the binary log in this case, so any slaves would not delete these tables.
Therefore the master and the slaves would get out of sync.
This patch fixes the problem by making sure that DROP TABLE is written
to the binary log for the tables that were in fact deleted by the failed
DROP DATABASE statement.
Test case added to binlog.binlog_database.test.
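A minimal sketch of the failure scenario (schema assumed):

  CREATE DATABASE db1;
  CREATE TABLE db1.parent (a INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE db1.t1 (x INT) ENGINE=InnoDB;
  CREATE TABLE test.child (b INT,
    FOREIGN KEY (b) REFERENCES db1.parent(a)) ENGINE=InnoDB;
  -- Fails on db1.parent because of the foreign key, but db1.t1 may already be
  -- gone; the patch makes the master binlog a DROP TABLE for db1.t1 so that
  -- the slave drops it too.
  DROP DATABASE db1;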
- Fixed some issues with partitions and connection_string, which also fixed lp:716890 "Pre- and post-recovery crash in Aria"
- Fixed wrong assert in Aria
Now need to merge with latest xtradb before pushing
sql/ha_partition.cc:
Ensure that m_ordered_rec_buffer is not freed before close.
sql/mysqld.cc:
Changed to use opt_stack_trace instead of opt_pstack.
Removed references to pstack.
sql/partition_element.h:
Ensure that connect_string is initialized
storage/maria/ma_key_recover.c:
Fixed wrong assert