This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some InnoDB options
etc.
Changes:
- Don't print out the value of system variables as one can't depend on
  them being constant.
- Don't set global variables to 'default' as the default may not be the
  same value the test was started with if there was an additional option
  file. Instead, save the original value and restore it at the end of the
  test (see the sketch after this list).
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that required specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
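A minimal sketch of the save/restore pattern referred to above (the
variable name is just an example):

SET @save_table_open_cache= @@global.table_open_cache;
SET GLOBAL table_open_cache= 10;
# ... test body ...
SET GLOBAL table_open_cache= @save_table_open_cache;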
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95
Description:
============
In row based replication, when replicating from a table with a field whose
character set is utf8mb3 to the same table with the same field's character
set being utf8mb4, I get a confusing error message:
For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4'
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"
Similar issue with CHAR type as well.
Issue with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - Error message displays incorrect blob type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type
'tinyblob'"
For BINARY to BINARY - Error message displays incorrect type for master side
field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type
'binary(10)'"
Similar issue exists for VARBINARY type. It is displayed as 'VARCHAR'.
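A hypothetical repro sketch for the VARCHAR case, assuming a server version
that accepts utf8mb3 as a character set name and table definitions that
differ only in the column character set:

# On the master:
CREATE TABLE t1 (c VARCHAR(1) CHARACTER SET utf8mb3);
# On the slave:
CREATE TABLE t1 (c VARCHAR(1) CHARACTER SET utf8mb4);
# On the master; the row event then fails to apply on the slave with the
# confusing 'varchar(3)' to 'varchar(1)' message quoted above:
INSERT INTO t1 VALUES ('a');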
Analysis:
=========
In row based replication, charset information is not sent as part of the
metadata from master to slave.
For a VARCHAR field, its character length is converted into the equivalent
number of octets/bytes and stored internally. At the time of displaying the
data to the user it is converted back to the original character length.
For example:
VARCHAR(2) with charset utf8mb3 is stored as 2*3 = VARCHAR(6).
At the time of displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).
At present the internally converted octet length is sent from master to slave
without the charset information. On the slave side, if the type conversion
fails, the 'show_sql_type' function is used to get the type-specific
information from the metadata. Since no charset information is available,
the field type is displayed as VARCHAR(6).
This results in a confusing error message.
For CHAR fields:
CHAR(1) with utf8mb3 - CHAR(3)
CHAR(1) with utf8mb4 - CHAR(4)
The 'show_sql_type' function, which retrieves type information from the
metadata, uses (bytes / local charset length) to get the actual character
length. If the slave's charset is 'utf8mb4' then
CHAR(3/4) --> CHAR(0)
CHAR(4/4) --> CHAR(1).
This results in a confusing error message.
Analysis for BLOB type issue:
A BLOB's length is represented in two forms:
1. Actual length, i.e.
(length < 256) type= MYSQL_TYPE_TINY_BLOB;
(length < 65536) type= MYSQL_TYPE_BLOB; ...
2. packlength - the number of bytes used to represent the length of the blob:
1 - tinyblob
2 - blob ...
In row based replication only the packlength is written to the binary log.
On the slave side this packlength is interpreted as the actual length of
the blob. Hence the length is always < 256 and the type is displayed as
tinyblob.
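A sketch of the BLOB case under the same assumptions:

# On the master:
CREATE TABLE t1 (b LONGBLOB);  -- packlength 4
# On the slave:
CREATE TABLE t1 (b TINYBLOB);  -- packlength 1
# A replicated row event then fails with the misleading 'tinyblob' to
# 'tinyblob' message quoted above, because the packlength written to the
# binary log is interpreted on the slave as the blob's actual length.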
Analysis for BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char
or binary. Since the master-side character set information is not available
on the slave side, both binary and char fields are displayed as char.
Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both source
and target fields. For the target field, display the charset information if
it is relevant.
For BLOB types, the code was changed to use the packlength and display the
appropriate blob type in the error message.
For BINARY and VARBINARY fields, use the slave-side character set as a
reference to map them to binary or varbinary fields.
Problem:
=======
A DROP TABLE IF EXISTS statement was killed. The table still exists on
the master, but the DDL was logged anyway.
Analysis:
=========
During the execution of a DROP TABLE command, the "ha_delete_table" call is
invoked to delete the table. If the query is killed at this point, the kill
is not handled within the code. This results in two issues:
1) A table which was not dropped also gets written into the binary log.
2) The code continues further even after receiving 'KILL QUERY'.
Fix:
===
Upon receiving the KILL command, the query should stop its current
execution. Tables which were successfully dropped prior to the KILL command
should be included in the binary log.
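A schematic sketch of the intended behaviour (connection ids and table
names are illustrative):

# connection A:
DROP TABLE IF EXISTS t1, t2, t3;
# connection B, while the DROP is still executing:
KILL QUERY <id of connection A>;
# After the fix: execution stops at the kill, and only the tables that were
# actually dropped before the KILL are written to the binary log.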
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE unsafe for statement-based
replication when there are multiple unique indexes. This correctly fixes
something whose attempted fix in MySQL 5.7
in mysql/mysql-server@c93b0d9a97
caused lock conflicts. That change was reverted in MySQL 5.7.26
in mysql/mysql-server@066b6fdd43
(with a substantial amount of other changes).
In MDEV-17073 we already disabled the unfortunate MySQL change when
statement-based replication was not being used. Now, thanks to MDEV-17614,
we can actually remove the change altogether.
This reverts commit 8a346f31b9 (MDEV-17073)
and mysql/mysql-server@c93b0d9a97 while
keeping the test cases.
- mysqltest didn't free read_command_buf.
- wait_for_slave_param wrote different things to the log if valgrind
  was used.
- The table open cache test should not write the initial variable value as
  it can depend on the configuration or on whether valgrind is used.
- A variable in GetResult was used uninitialized.
Problem:-
When MySQL executes INSERT ... ON DUPLICATE KEY UPDATE, the storage engine
checks if the inserted row would generate a duplicate key error. If yes, it
returns the existing row to the server, the server updates it and sends it
back to the storage engine. When the table has more than one unique or
primary key, this statement is sensitive to the order in which the storage
engine checks the keys. Depending on this order, the storage engine may
return different rows to the server, and hence the server can update
different rows. The order in which the storage engine checks keys is not
deterministic. For example, InnoDB checks keys in an order that depends on
the order in which indexes were added to the table: the first added index
is checked first. So if master and slave have added their indexes in
different orders, the slave may go out of sync.
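A sketch of the non-determinism with two unique keys (table and values are
illustrative):

CREATE TABLE t1 (a INT UNIQUE, b INT UNIQUE);
INSERT INTO t1 VALUES (1,10),(2,20);
-- (1,20) conflicts with row (1,10) on key 'a' and with row (2,20) on key
-- 'b'; whichever key the engine checks first decides which row is updated:
INSERT INTO t1 VALUES (1,20) ON DUPLICATE KEY UPDATE b=b+1;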
Solution:-
Make INSERT ... ON DUPLICATE KEY UPDATE unsafe when using statement or
mixed format and there is more than one unique key. There are two
exceptions:
1. An auto-increment key is not counted, because InnoDB will take a gap
lock for the failed insert and a concurrent insert will get the next
increment value. But if the user supplies the auto-increment value
explicitly, it can be unsafe.
2. Only unique keys for which insertion is performed are counted.
This patch also addresses MySQL bug id #72921.
- The commit ab6dd77408 wrongly set the condition inside
  innobase_srv_conc_enter_innodb(). The problem is that InnoDB makes the
  thread sleep indefinitely if it is a replication slave thread.
Thanks to Sujatha Sivakumar for contributing the replication test case.
Problem:
=======
A failed CREATE OR REPLACE TEMPORARY TABLE statement, which dropped the
table but failed at a later stage of creating the temporary table, is not
written to the binary log in row based replication. This causes the slave
to diverge.
Analysis:
========
CREATE OR REPLACE statements work as shown below.
CREATE OR REPLACE TABLE table_name (a int);
is basically the same as:
DROP TABLE IF EXISTS table_name;
CREATE TABLE table_name (a int);
Hence every CREATE OR REPLACE TABLE command which dropped a table should be
written to the binary log, even when the following CREATE TABLE part fails.
In order to achieve this, during the execution of a CREATE OR REPLACE
command, when a table is dropped the 'thd->log_current_statement' flag is
set. When table creation results in an error within 'mysql_create_table',
the error handling code looks for this flag. If it is set, the failed
CREATE OR REPLACE statement is written into the binary log in spite of the
error. This ensures that the slave doesn't diverge from the master. In the
case of row based replication the error handling code returns very early if
the table is temporary. This is done based on the assumption that temporary
tables are not replicated in row based replication.
It fails to handle the cases where a temporary table was created as part of
statement based replication at an earlier stage and the binary log format
was changed to row because of an unsafe statement. In this case, when a
CREATE OR REPLACE statement is executed on this temporary table, it will be
dropped but the query will not be written to the binary log. Hence the
slave diverges.
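A schematic sketch of the diverging scenario (the failing CREATE part is
hard to trigger without error injection, so the last step is illustrative
only):

SET SESSION binlog_format=MIXED;
CREATE TEMPORARY TABLE t1 (a VARCHAR(36));     -- logged as a statement
INSERT INTO t1 VALUES (UUID());                -- unsafe: this statement is
                                               -- logged in row format
CREATE OR REPLACE TEMPORARY TABLE t1 (a INT);  -- if the CREATE part fails
                                               -- here, the DROP was not
                                               -- logged before the fix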
Fix:
===
In the error handling code, check the return status of the create table
operation. If it is successful, the replication mode is row based and the
table is temporary, then return. Otherwise proceed further to the code
which checks the thd->log_current_statement flag and does the appropriate
logging.
The MDEV-5589 commit set up a policy to skip DROP TEMPORARY TABLE binary
logging in case the target table has not been "CREATEed" in the binlog (no
CREATE Query-log-event was logged into the binary log).
It turns out that
1. the rule did not cover a non-existing table DROPped with the IF EXISTS
clause. The logged-create knowledge for the non-existing one does not even
need the MDEV-5589 patch, and
2. connection close disobeys it and triggers automatic DROP-IF-EXISTS
binlogging.
Either 1 or 2, or even both, is/are also responsible for the unexpected
binlog records observed in MDEV-17863, actually rendering the referred
@@global.read_only irrelevant as far as the described stored procedure
definition *and* the ROW binlog-format are concerned.
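A schematic illustration of case 2 (ROW format, so no CREATE event is ever
logged for the temporary table):

SET SESSION binlog_format=ROW;
CREATE TEMPORARY TABLE tmp1 (a INT);  -- not written to the binary log
-- On disconnect, before the fix, the server still wrote an automatic
-- DROP TEMPORARY TABLE IF EXISTS `tmp1` event to the binary log.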
Problem:
=======
Executing the test with the following options will result in test failure.
./mtr rpl.kill_race_condition{,,,,,,,,,,} --repeat=10 --par 12 --mem
Fix:
====
The test simulates an applier thread kill scenario while applying a row
event, but it doesn't wait for the applier to catch the error and stop.
Added: wait_for_slave_sql_error.inc to catch the error.
The test uses START SLAVE as a final step and doesn't wait for both threads
to start.
Added: start_slave.inc
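A sketch of the synchronization pattern using the standard mysqltest
includes (the error number is illustrative):

--let $slave_sql_errno= 1032
--source include/wait_for_slave_sql_error.inc
# ... later, instead of a bare START SLAVE:
--source include/start_slave.inc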
The test allowed non-deterministic execution due to the non-resettable
status variable Slave_connections.
Fixed by expecting a correct value for Slaves_connected.
Problem:
=======
rpl_blackhole.test fails when executed with the following options:
mysqld=--binlog_annotate_row_events=1, mysqld=--replicate_annotate_row_events=1
Test output:
------------
worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019
rpl.rpl_blackhole_bug 'mix' [ pass ] 791
rpl.rpl_blackhole_bug 'row' [ fail ]
Replicate_Wild_Ignore_Table
Last_Errno 1032
Last_Error Could not execute Update_rows_v1 event on table test.t1; Can't find
record in 't1', Error_code: 1032; handler error HA_ERR_END_OF_FILE; the event's
master log master-bin.000001, end_log_pos 1510
Analysis:
=========
Enabling "replicate_annotate_row_events" on slave, Tells the slave to write
annotate rows events received from the master to its own binary log. The
received annotate events are applied after the Gtid event as shown below.
thd->query() will be set to the actual query received from the master, through
annotate event. Annotate_rows event should not be deleted after the event is
applied as the thd->query will be used to generate new Annotate_rows event
during applying the subsequent Rows events. After the last Rows event has been
applied, the saved Annotate_rows event (if any) will be deleted.
In the blackhole engine all the DML operations are no-ops, as it does not
store any data. They simply return success without doing any operation. But
the existing code strictly expects thd->query() to be NULL to identify that
row based replication is in use. This assumption fails when row annotations
are enabled, as the query is not NULL. Hence various row based operations
like 'update', 'delete' and 'index lookup' fail when row annotations are
enabled.
Fix:
===
Extend the row based replication check to include row annotations as well,
i.e. either thd->query() is NULL, or thd->query() points to a query and row
annotations are in use.
Simulation of a big-sized event in rpl.rpl_semi_sync_skip_repl did not
clean up after itself, which screwed up the last binlog event offset, which
could jump backwards.
The test is refined to rotate a binlog file with the simulation and use the
next one for the logic of the test, including master-slave synchronization.
Problem:
========
We have a master/master setup on two servers, but are only writing to one
of those servers (so it is essentially master/slave). We upgraded from
10.1.* to 10.2.22 last week and, starting with the upgrade, we are getting
duplicate key errors on the slave. binlog_format=MIXED.
Analysis:
=========
This issue happens with the LOCK TABLES and binlog_format=MIXED
combination. When an unsafe statement is encountered in 'MIXED' mode, it is
logged in 'ROW' format. The table maps of all the tables that are part of
the LOCK TABLES list are written into the binary log. For each table in the
list a check is done to see if the 'check_table_binlog_row_based_done' flag
is set or not. If it is not set, a check process is initiated to see if the
table qualifies for row based binary logging, and
'check_table_binlog_row_based_done' is set. This flag is cleared at the
time of closing the thread's tables.
But there can be special cases where LOCK TABLES locks more tables than the
unsafe query actually uses.
For example: LOCK TABLES locks t1, t2 and t3, but the unsafe statement
makes use of only two tables, t1 and t3. In this case the
'check_table_binlog_row_based_done' flag is enabled for table 't2' while
writing the table maps, but the 'close_thread_tables' function call will
not reset this flag. Since the flag is not cleared for table 't2', even a
safe statement which uses t2 will be logged in row format.
This leads to an assert on debug builds and causes duplicate entries in
release builds, because in release builds a statement is logged in both ROW
and STATEMENT format. This causes the slave to fail with a duplicate key
error.
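A sketch of the failing sequence (tables and values are illustrative; t1
is assumed to have a string column suitable for UUID()):

SET SESSION binlog_format=MIXED;
LOCK TABLES t1 WRITE, t2 WRITE, t3 WRITE;
INSERT INTO t1 VALUES (UUID());  -- unsafe: logged as row; the flag is set
                                 -- for t1, t2 and t3
INSERT INTO t2 VALUES (1);       -- safe, but t2's flag was never cleared,
                                 -- so before the fix it was also logged in
                                 -- row format
UNLOCK TABLES;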
Fix:
===
During 'close_thread_tables', when LOCK TABLE modes are active, 'ha_reset'
is done for all the tables which were part of the current statement. As in
the example above, 'ha_reset' is called for tables 't1' and 't3', which
clears their 'check_table_binlog_row_based_done' flag. At this point, add a
check for the rest of the tables to see if
'check_table_binlog_row_based_done' is enabled, and if so clear the flag.
Problem:
=======
When the rpl.rpl_row_mysqlbinlog test is executed as shown below, it fails
with a result content mismatch.
perl mtr rpl_row_mysqlbinlog --mysqld=--binlog-annotate-row-events=1
Analysis:
=========
When row annotations are enabled, the actual query is written into the
binlog, which helps users understand the query even when row based
replication is in use.
For example, a simple insert in row based replication looks as shown below.
#190402 16:31:27 server id 1 end_log_pos 526 Annotate_rows:
#Q> insert into t values (10)
#190402 16:31:27 server id 1 end_log_pos 566 Table_map: `test`.`t` mapped to number 19
# at 566
#190402 16:31:27 server id 1 end_log_pos 600 Write_rows: table id 19 flags: STMT_END_F
BINLOG '
B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
'/*!*/;
# at 600
The test creates some binary log events and redirects them into an SQL
file, executes RESET MASTER, sources the SQL file back on the clean master
and verifies that the data is available. Please refer to the following
steps.
../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 > test.sql
../client/mysql -uroot -S./var/tmp/mysqld.1.sock -Dtest < test.sql
../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 -v > row.sql
When the row based replication specific SQL file is sourced once again on
the master, the newly generated binlog will treat the entire BASE64-encoded
event as a query and write it into the binary log.
Output from 'row.sql':
#Q> BINLOG '
#Q> B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
#Q> B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
#190402 16:31:27 server id 1 end_log_pos 657 Table_map: `test`.`t` mapped to number 23
# at 657
#190402 16:31:27 server id 1 end_log_pos 691 Write_rows: table id 23 flags: STMT_END_F
BINLOG '
B0GjXBMBAAAAKAAAAJECAAAAABcAAAAAAAEABHRlc3QAAXQAAQMAAQ==
B0GjXBcBAAAAIgAAALMCAAAAABcAAAAAAAEAAQH+CgAAAA==
### INSERT INTO `test`.`t`
### SET
### @1=10
'/*!*/;
# at 691
This is expected behaviour, as we cannot extract the query from BASE64
encoded input. This causes more binary logs to be generated when the test
is executed with row annotations.
The following lines from the test assume that only two binary logs contain
the entire data.
--echo --- Test 4 Second Remote test --
---exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1
--port=$MASTER_MYPORT master-bin.000001 > $MYSQLTEST_VARDIR/tmp/remote.sql
---exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1
--port=$MASTER_MYPORT master-bin.000002 >> $MYSQLTEST_VARDIR/tmp/remote.sql
When row annotations are enabled the data gets spread across four binary
logs. As the test uses only the first two binary log files, the data
available in the other binary logs gets missed. Hence the test fails with a
result content mismatch, as less data is available.
Fix:
====
Use "-to-the-last" option of "mysqlbinlog" tool which will ensure that all the
available binary log specific contents are included in .sql file.
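A sketch of the adjusted invocation, mirroring the remote-test lines quoted
above:

--exec $MYSQL_BINLOG --read-from-remote-server --to-last-log --user=root
--host=127.0.0.1 --port=$MASTER_MYPORT master-bin.000001 >
$MYSQLTEST_VARDIR/tmp/remote.sql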
Problem:
========
When attempting to delay a Slave attached with GTID, there appears to be an
extra delay applied initially. For example, this output reflects a Slave that is
already delayed by 43200 seconds. When switching to GTID replication,
replication is paused until SQL_Remaining_Delay counts down to 0:
CHANGE MASTER TO master_use_gtid=current_pos; CHANGE MASTER TO
MASTER_DELAY=43200;
Seconds_Behind_Master: 44847
Using_Gtid: Current_Pos
SQL_Delay: 43200
SQL_Remaining_Delay: 43089
Slave_SQL_Running_State: Waiting until MASTER_DELAY seconds after master
executed event
Analysis:
=========
When the slave initiates a GTID based connection request to the master, the
master sends two GTID_LIST events. The first one is the actual GTID_LIST
event and the second one is a fake GTID_LIST event, sent by the master to
provide its current binary log file position. The fake GTID_LIST events
have ev->when=0; 'when' (the timestamp) is set to 0 so that the slave can
distinguish between real and fake Rotate events.
On the slave side, when MASTER_DELAY is configured to "X", the applier
ensures that there is a time delay of "X" seconds before the event is
applied.
General behaviour of MASTER_DELAY example:-
Master
timestamp of event e1=10
timestamp of event e2=11
On slave MASTER_DELAY=5
Event e1 will be applied at = 15
e2 will be applied at =16
In the bug scenario:-
On master, with GTIDs:
timestamp of event e1=10
timestamp of event e2=0
On slave:
e1 will be applied at = 10 + 5 = 15
For e2, since "e2->when=0", e2->when is set to the current timestamp.
That is, since e2->when and the current timestamp on the slave are the
same, the applier waits for an additional master_delay=5 seconds. The
ev->when contributes to "rli->last_master_timestamp":
rli->last_master_timestamp= ev->when + (time_t) ev->exec_time;
Fake events should not update "ev->when" to the "current timestamp" on the
slave.
Fix:
===
Remove the assignment of current timestamp to "ev->when" when "ev->when=0".
select from I_S
Problem:
========
When the applier thread tries to access 'variable_name' of the
INFORMATION_SCHEMA.SESSION_VARIABLES table through triggers, it results in
an abnormal exit of the slave server.
Analysis:
========
At the time of replication of stored routines and triggers, their
associated security context is sent by the master. The applier thread on
the slave server uses this information to set the required security context
for the execution of the stored routines and triggers. This is achieved as
follows:
-> The stored routine object has a member named 'm_security_ctx' which
holds the security context received from the master.
-> The applier thread's security_ctx is stored into a 'backup' object.
-> The applier thread's security_ctx is set to 'm_security_ctx'.
-> Upon completion of the stored routine execution, the original security
context of the applier thread is restored from the backup.
During the above process the 'm_security_ctx' object is not initialized
properly. Hence the 'external_user' member of 'm_security_ctx' holds an
invalid value, and accessing it results in an abnormal exit of the server.
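A hypothetical repro sketch, assuming slave_run_triggers_for_rbr is enabled
on the slave and an equivalent trigger exists there (names and the trigger
body are illustrative):

CREATE TABLE t1 (a INT, b VARCHAR(64));
CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW
  SET NEW.b = (SELECT variable_name
               FROM information_schema.SESSION_VARIABLES
               WHERE variable_name = 'SERVER_ID');
INSERT INTO t1 (a) VALUES (1);  -- before the fix, applying this row event
                                -- through the trigger could crash the slave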
Fix:
===
Invoke the Security_context::init() call from the constructor of the stored
routine so that 'm_security_ctx' gets initialized properly.
32 bit int
The row-based slave applier could not correctly parse the table id when the
value exceeded the maximum of a 32-bit unsigned int.
The reason turned out to be that the placeholder for the parsed value was
sized as 4 bytes.
The type is fixed to ulonglong.
Additionally, the patch works around the 4-byte size of
Rows_log_event::m_table_id on 32-bit platforms. In case the last_table_id
value overflows the 4-byte maximum, a zero value for m_table_id will never
be generated and the first wrapped-around value is one; this is achieved by
excluding UINT_MAX32 + 1 from TABLE_SHARE::table_map_id.