columns (default datatype value is assigned).
The mysql_update function has been modified to generate
an error, rather than a warning, when an attempt is made to set a
NOT NULL field to NULL in the set_field_to_null_with_conversions function.
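A minimal sketch (illustrative; the exact behaviour can also depend on
the SQL mode):
CREATE TABLE t1 (a INT NOT NULL);
INSERT INTO t1 VALUES (1);
UPDATE t1 SET a = NULL;  -- now raises an error instead of a warning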
without PK
Bug#31609 Not all RBR slave errors reported as errors
Bug#32468 delete rows event on a table with foreign key constraint fails
The first two bugs involve idempotency issues.
First, no error code was reported under the conditions described in
the bug report, even though the slave SQL thread halted.
Second, execution differed depending on whether or not the table had
a primary key.
Third, there was no way to instruct the slave whether to ignore an error
and skip to the following event or to halt.
Fourth, some handler errors that can occur during idempotent application
of the binlog were missing from the "idempotent" error list.
All the named issues are addressed.
With respect to the 3rd, there is a new global system variable,
changeable at run time, which controls the slave SQL thread's
behaviour.
The new variable allows further extensions to mimic the sql_mode
session/global variable.
To address the 4th, the new Bug#32468 had to be fixed first, as it
stood in the way.
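As an illustration of the new variable (assuming the name
slave_exec_mode, which is not stated in this changeset):
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';  -- ignore idempotency errors
SET GLOBAL slave_exec_mode = 'STRICT';      -- halt on such errors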
DROP DATABASE statement writes changes to mysql.proc table under RBR
When replicating a DROP DATABASE statement with a database holding
stored procedures, the changes to the mysql.proc table were recorded
in the binary log under row-based replication.
With this patch, the thread uses statement-logging format for the
duration of the DROP DATABASE statement. The logging format is
(already) reset at the end of the statement, so no additional code
for resetting the logging format is necessary.
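A sketch of the scenario (illustrative):
SET SESSION binlog_format = 'ROW';
CREATE DATABASE db1;
CREATE PROCEDURE db1.p1() SELECT 1;
DROP DATABASE db1;  -- the mysql.proc cleanup is now statement-logged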
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Bug 25038 - Waiting TRUNCATE
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Bug 19627 - temporary merge table locking
Bug 27660 - Falcon: merge table possible
Bug 30273 - merge tables: Can't lock file (errno: 155)
The problems were:
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
1. A thread trying to lock a MERGE table performs busy waiting while
REPAIR TABLE or a similar table administration task is ongoing on
one or more of its MyISAM tables.
2. A thread trying to lock a MERGE table performs busy waiting until all
threads that did REPAIR TABLE or similar table administration tasks
on one or more of its MyISAM tables in LOCK TABLES segments do UNLOCK
TABLES. The difference from problem #1 is that the busy waiting
takes place *after* the administration task. It is terminated by
UNLOCK TABLES only.
3. Two FLUSH TABLES within a LOCK TABLES segment can invalidate the
lock. This does *not* require a MERGE table. The first FLUSH TABLES
can be replaced by any statement that requires other threads to
reopen the table. In 5.0 and 5.1 a single FLUSH TABLES can provoke
the problem.
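For problem #3, a minimal sketch (illustrative):
LOCK TABLES t1 WRITE;
FLUSH TABLES;  -- or any statement forcing other threads to reopen t1
FLUSH TABLES;  -- before the fix, this could invalidate the lock on t1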
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Trying DML on a MERGE table which has a child locked and
repaired by another thread sent the server into an infinite loop.
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Locking a MERGE table and its children in parent-child order
and flushing the child deadlocked the server.
Bug 25038 - Waiting TRUNCATE
Truncating a MERGE child, while the MERGE table was in use,
let the truncate fail instead of waiting for the table to
become free.
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Repairing a child of an open MERGE table corrupted the child.
It was necessary to FLUSH the child first.
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Flushing and optimizing locked MERGE children crashed the server.
Bug 19627 - temporary merge table locking
Use of a temporary MERGE table with non-temporary children
could corrupt the children.
Temporary tables are never locked. So we now prohibit
non-temporary children of a temporary MERGE table.
Bug 27660 - Falcon: merge table possible
It was possible to create a MERGE table with non-MyISAM children.
Bug 30273 - merge tables: Can't lock file (errno: 155)
This was a Windows-only bug. Table administration statements
sometimes failed with "Can't lock file (errno: 155)".
These bugs are fixed by a new implementation of MERGE table open.
When opening a MERGE table in open_tables() we now add the
child tables to the list of tables to be opened by open_tables()
(the "query_list"). The children are not opened in the handler at
this stage.
After opening the parent, open_tables() opens each child from the
now extended query_list. When the last child is opened, we remove
the children from the query_list again and attach the children to
the parent. This behaves similarly to the old open. However it does
not open the MyISAM tables directly, but grabs them from the already
open children.
When closing a MERGE table in close_thread_table() we detach the
children only. Closing of the children is done implicitly because
they are in thd->open_tables.
For more detail see the comment at the top of ha_myisammrg.cc.
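For example, with a plain MERGE setup (illustrative):
CREATE TABLE t1 (c1 INT) ENGINE=MyISAM;
CREATE TABLE t2 (c1 INT) ENGINE=MyISAM;
CREATE TABLE m1 (c1 INT) ENGINE=MRG_MYISAM UNION=(t1,t2);
SELECT * FROM m1;  -- t1 and t2 are now opened via the extended
                   -- query_list and then attached to m1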
Changed from open_ltable() to open_and_lock_tables() in all places
that can be relevant for MERGE tables. The latter can handle tables
added to the list on the fly. Where open_ltable() was used in a loop
over a list of tables, the list must now be temporarily terminated
after every table for open_and_lock_tables().
table_list->required_type is set to FRMTYPE_TABLE to avoid opening
special tables. Handling of derived tables is suppressed.
These details are handled by the new function
open_n_lock_single_table(), which has nearly the same signature as
open_ltable() and can replace it in most cases.
In reopen_tables() some of the tables open by a thread can be
closed and reopened. When a MERGE child is affected, the parent
must be closed and reopened too. Closing of the parent is forced
before the first child is closed. Reopen happens in the order of
thd->open_tables. MERGE parents do not attach their children
automatically at open. This is done after all tables are reopened.
So all children are open when attaching them.
Special lock handling like mysql_lock_abort() or mysql_lock_remove()
needs to be suppressed for MERGE children or forwarded to the parent.
This depends on the situation. In loops over all open tables one
suppresses child lock handling. When a single table is touched,
forwarding is done.
Behavioral changes:
===================
This patch changes the behavior of temporary MERGE tables.
Temporary MERGE must have temporary children.
The old behavior was wrong. A temporary table is not locked. Hence
even non-temporary children were not locked. See
Bug 19627 - temporary merge table locking.
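For example (illustrative):
CREATE TEMPORARY TABLE t1 (c1 INT) ENGINE=MyISAM;
CREATE TEMPORARY TABLE t2 (c1 INT) ENGINE=MyISAM;
CREATE TEMPORARY TABLE m1 (c1 INT) ENGINE=MRG_MYISAM UNION=(t1,t2);
A temporary MERGE table over non-temporary children is now rejected
with an error.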
You cannot change the union list of a non-temporary MERGE table
when LOCK TABLES is in effect. The following does *not* work:
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ...;
LOCK TABLES t1 WRITE, t2 WRITE, m1 WRITE;
ALTER TABLE m1 ... UNION=(t1,t2) ...;
However, you can do this with a temporary MERGE table.
You cannot create a MERGE table with CREATE ... SELECT, neither
as a temporary MERGE table, nor as a non-temporary MERGE table:
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ... SELECT ...;
This gives the error message: table is not BASE TABLE.
Actually, the failure happened with 3innodb as well. Most probably
the reason is a failure to delete a binlog file on __NT__, so that
the master increments the index of the binlog file.
The test results hide a valuable warning that Windows could generate
about that.
The scope of this fix is to make sure we get such a warning and to
lessen the chances of the binlog file being held at the time of
closing. The dump thread now gets a good chance to leave and release
the file for its successful deletion.
We shall watch over the two tests, as a regression is not excluded. In
that case we would have extra info possibly explaining why the __NT__
environment cannot close/delete the file.
However, regardless of that reason, there is always the workaround of
masking out the non-deterministic binlog index number.
is possible):
When skipping the beginning of a transaction starting with BEGIN, the
OPTION_BEGIN flag was not set correctly, which caused the slave not to
recognize that it was inside a group. This patch sets the OPTION_BEGIN
flag for BEGIN, COMMIT, ROLLBACK, and XID events. It also adds checks
for being inside a group before decreasing the slave skip counter to
zero.
Begin_query_log_event was not marked as unable to end a group; this is
now corrected.
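The usual skip procedure is unchanged; what changes is that the counter
can no longer reach zero in the middle of a group (illustrative):
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;  -- skipping now ends only at a group boundary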
Refactoring code to add parameter to pack() and unpack() functions with
purpose of indicating if data should be packed in little-endian or
native order. Using new functions to always pack data for binary log
in little-endian order. The purpose of this refactoring is to allow
proper implementation of endian-agnostic pack() and unpack() functions.
Eliminating several versions of the virtual pack() and unpack()
functions in favor of a single virtual function which is overridden
in subclasses.
Implementing pack() and unpack() functions for some field types that
packed data in native format regardless of the value of the
st_table_share::db_low_byte_first flag.
The field types that were packed in native format regardless of that
flag are:
Field_real, Field_decimal, Field_tiny, Field_short, Field_medium,
Field_long, Field_longlong, and Field_blob.
Before the patch, row-based logging wrote the rows incorrectly on
big-endian machines where the storage engine defined its own
low_byte_first() to be FALSE (the default is TRUE), while
little-endian machines wrote the fields in the correct order. The
only known storage engine that does this is NDB. In effect,
this means that row-based replication from or to a big-endian
machine where the table was using NDB as storage engine failed if the
other engine was either non-NDB or on a little-endian machine.
With this patch, row-based logging is now always done in little-endian
order, while ORDER BY uses the native order if the storage engine
defines low_byte_first() to return FALSE for big-endian machines.
In addition, the max_data_length() function available in Field_blob
was generalized to the entire Field hierarchy to give the maximum
number of bytes that Field::pack() will write.
Delete: mysql-test/suite/rpl/t/rpl_stm_extraColmaster_ndb.test
Delete: mysql-test/suite/rpl/r/rpl_row_extraColmaster_ndb.result
Delete: mysql-test/suite/rpl/t/rpl_row_extraColmaster_ndb.test
Many files:
merged and cleaned up test cases
using TPC-B):
Problem: An RBR event can contain incomplete row data (only the key
value and fields which have been changed). In that case, when the row
is unpacked into the record and written to a table, the missing fields
get incorrect NULL values, leading to master-slave inconsistency.
Solution: Use values found in the slave's table for columns which are
not given in the rows event. The code for writing a single row uses
the following algorithm:
1. unpack row_data into table->record[0],
2. try to insert record,
3. if duplicate record found, fetch it into table->record[0],
4. unpack row_data into table->record[0],
5. write table->record[0] into the table.
Where row_data is the row as stored in the data area of a rows event.
Thus:
a) unpacking of row_data happens at the time the row is written into
a table,
b) when unpacking (in step 4), only columns present in row_data are
overwritten - all other columns remain as they were found in the table.
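As an illustration (hypothetical tables; the slave has an extra
column):
master: CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
slave:  CREATE TABLE t1 (a INT PRIMARY KEY, b INT, c INT DEFAULT 7);
A write of the row (a=1, b=2) arrives with no value for c. If a row
with a=1 already exists on the slave, step 3 fetches it, step 4
overwrites only columns a and b, and step 5 writes the result back -
column c keeps the value found in the slave's row.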
Since all data needed for the above algorithm is stored inside the
Rows_log_event class, functions which locate and write rows are turned
into methods of that class.
replace_record() -> Rows_log_event::write_row()
find_and_fetch_row() -> Rows_log_event::find_row()
Both methods take row data from the event's data buffer - the row
being processed is pointed to by m_curr_row. They unpack the data as
needed into the table's record buffers record[0] or record[1]. When a
row is unpacked, m_curr_row_end is set to point at the next row in the
data buffer.
Other changes introduced in this changeset:
- Change the signature of unpack_row(): don't report errors and don't
set up the table's rw_set here. Errors can happen only when setting
default values in the prepare_record() function and are detected there.
- In Rows_log_event and derived classes, don't pass arguments to
the execution primitives (do_...() member functions) but use class
members instead.
- Move old row handling code into log_event_old.cc to be used by
*_rows_log_event_old classes.
Also, a new test rpl_ndb_2other is added which tests basic replication
from a master using ndb tables to a slave storing the same tables in a
(possibly) different engine (myisam, innodb).
The test is based on the existing tests rpl_ndb_2myisam and
rpl_ndb_2innodb. However, these tests don't work for various reasons
and are currently disabled (see BUG#19227).
The new test differs from the ones it is based on as follows:
1. A single test covers replication with different storage engines on
the slave (myisam, innodb, ndb).
2. The include file extra/rpl_tests/rpl_ndb_2multi_eng.test containing
the original tests is replaced by
extra/rpl_tests/rpl_ndb_2multi_basic.test, which omits the tests using
partitioned tables, as these don't work currently. Instead, it tests
replication to a slave which has more or fewer columns than the master.
3. The include file include/rpl_multi_engine3.inc is replaced with
include/rpl_multi_engine2.inc. The latter differs by performing
slightly different operations (updating more than one row in the
table) and clearing the table with a "TRUNCATE TABLE" statement
instead of "DELETE FROM", as replication of "DELETE" doesn't work well
in this setting.
4. The slave must use the option --log-slave-updates=0, as otherwise
execution of replication events generated by ndb fails if the table
uses a different storage engine on the slave (see BUG#29569); see the
sketch below.
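A slave option file along these lines achieves this (hypothetical file
name, following the mysql-test-run convention):
# rpl_ndb_2other-slave.opt (hypothetical)
--log-slave-updates=0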
This patch adds functionality to row-based replication to ensure that
the slave's column sizes are >= those of the master.
It also includes some refactoring for the code from WL#3228.
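For example (illustrative):
master: CREATE TABLE t1 (a VARCHAR(64));
slave:  CREATE TABLE t1 (a VARCHAR(32));
With the check in place, the slave SQL thread reports an error instead
of applying row events to the smaller column.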
Fixing tests and results to work when replicating to fewer columns on
the slave than on the master. One test that previously was expected to
fail now works, and some log positions have changed as a result of
adding metadata to the events.