The size of Innodb_buffer_pool_pages differs by one byte between the
row-based and statement-based logs, so neuter the last position of the
stringified decimal representation. Innobase says the size isn't very
important in any case.
Also, split out the "mixed" format to its own file, as mtr seems to dislike having only
stm and row but not mix.
The problem was a mutex added in Bug#27405 to solve a problem with
auto_increment in partitioned InnoDB tables
(in ha_partition::write_row over the partitions' file->ha_write_row).
The solution is to use the patch for Bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of Bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, multi-row INSERTs not
handled properly, etc.)
Changed the auto_increment handling for partitioning:
Added an ha_data variable in TABLE_SHARE for storage-engine-specific
data, such as the auto_increment value handling in partitioning (also
see WL#4305), and used ha_data->mutex to lock around the read + update.
The idea is this:
Store the table's reserved auto_increment value in the TABLE_SHARE and
use a mutex to lock it, read and update it, and unlock it in one
block. All partitions are accessed only when the value is not yet
initialized.
Also allow reservation of ranges; if no one has made a reservation
afterwards, lower the reservation to what was actually used once the
statement is done (via release_auto_increment from WL#3146).
The lock is kept from the first reservation when using statement-based
replication with a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (INSERT SELECT or
LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need for mutex
protection around write_row in all cases) and work with any local
storage engine.
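A minimal sketch of the intended locking pattern (the type and
function names below are illustrative stand-ins, not the server's
actual ha_data types):

    #include <pthread.h>

    // Stand-in for the engine-specific data kept in TABLE_SHARE::ha_data.
    struct Ha_data_partition {
      pthread_mutex_t mutex;             // guards the two fields below
      bool initialized;                  // have all partitions been scanned?
      unsigned long long next_auto_inc;  // first value not yet handed out
    };

    // Reserve nb_values consecutive auto_increment values for one table.
    unsigned long long reserve_auto_inc(Ha_data_partition *d,
                                        unsigned long long nb_values,
                                        unsigned long long max_in_partitions)
    {
      pthread_mutex_lock(&d->mutex);
      if (!d->initialized) {
        // Only the first reservation pays the cost of asking every
        // partition for its current maximum.
        d->next_auto_inc = max_in_partitions + 1;
        d->initialized = true;
      }
      unsigned long long first = d->next_auto_inc;  // read ...
      d->next_auto_inc += nb_values;                // ... and update
      pthread_mutex_unlock(&d->mutex);              // ... in one block
      return first;
    }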
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or rollback correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by a failing UPDATE or INSERT on a
transactional table, which causes an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
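A rough sketch of the missing step, with made-up names (the real
transaction cache is more involved; the point is that the pending
event must be flushed so the state is reset, not merely emptied):

    // Mock of the binlog transaction cache, names invented for clarity.
    struct Trx_cache {
      const void *pending_event;  // last row event, not yet flushed
      void flush_pending() { pending_event = nullptr; }     // reset the state
      void truncate() { /* discard the buffered events */ }
    };

    void binlog_rollback(Trx_cache *cache) {
      cache->flush_pending();  // the step the fix adds unconditionally
      cache->truncate();       // emptying alone left stale pending state
    }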
The replication filtering rules were inappropriately applied when
executing the BINLOG pseudo-query. The rules are supposed to be active
only when the slave's SQL thread executes an event.
Fixed by correcting the condition so that the replication rules are
applied only when the slave SQL thread executes the event.
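In sketch form, the corrected guard looks like this (THD::slave_thread
is the server's existing flag; the surrounding function and the mock
THD are invented for illustration):

    // Mock THD carrying only the flag that matters here.
    struct THD { bool slave_thread; };

    // Replication filter rules apply only inside the slave SQL thread,
    // never when a client replays an event via the BINLOG statement.
    bool apply_replication_filters(const THD *thd) {
      return thd->slave_thread;
    }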
When executing a DROP DATABASE statement in ROW mode while temporary
tables are open, the existence of the temporary tables prevented the
server from switching back to row mode after temporarily switching to
statement mode to handle the logging of the statement.
Fixed the problem by removing the code that switches to statement mode
and adding code to temporarily disable the binary log while dropping
the objects in the database.
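The shape of the fix, sketched with mocked types (the server does this
with its tmp_disable_binlog()/reenable_binlog() macros, which save,
clear, and restore the OPTION_BIN_LOG bit; the helper below is a
made-up stand-in for the actual drop code):

    struct THD { unsigned long long options; };
    static const unsigned long long OPTION_BIN_LOG = 1ULL << 0;

    void drop_database_objects(THD *, const char *) { /* drop tables etc. */ }

    void mysql_rm_db_sketch(THD *thd, const char *db) {
      unsigned long long saved = thd->options;
      thd->options &= ~OPTION_BIN_LOG;  // like tmp_disable_binlog(thd)
      drop_database_objects(thd, db);   // nothing here reaches the binlog
      thd->options = saved;             // like reenable_binlog(thd)
    }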
Details of the fix:
- wrong command and state in processlist -> inserted a poll routine
- unexpected additional session -> abort if an unexpected session is found
INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication only, but not with row-based replication.
There is no statement-based replication of these statements.
But the inserts into and deletes from the mysql.plugin table were
replicated row-based.
The fix is to suppress binlogging during the insert into and delete
from the mysql.plugin table.
The failure was caused by executing a CREATE-SELECT statement that
creates a table in a database other than the current one. In row-based
logging, the CREATE statement was written to the binary log without
the database name, hence creating the table in the wrong database and
causing the following inserts to fail since the table didn't exist in
the given database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and used
that in the calls that write the CREATE statement to the binary log.
The database name is printed only if it differs from the currently
selected database.
The output of SHOW CREATE TABLE has not changed and is still printed
without the database name.
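A self-contained sketch of the naming rule (function and parameter
names are assumptions; the real change adds a flag parameter to
store_create_info()):

    #include <cstring>
    #include <string>

    // Qualify the table name with its database only for the binlog, and
    // only when the table's database differs from the current one.
    std::string table_name_for_log(const char *current_db,
                                   const char *table_db,
                                   const char *table_name,
                                   bool for_binlog)
    {
      std::string s;
      if (for_binlog &&
          (current_db == nullptr || std::strcmp(current_db, table_db) != 0)) {
        s += "`"; s += table_db; s += "`.";  // print the database part
      }
      s += "`"; s += table_name; s += "`";
      return s;
    }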
pb notices differences in results at the very beginning of the test.
Absence of mysql.ndb_apply_status should be benign in any case, but the
warning should not happen if have_ndb.inc is invoked ahead of
ndb_master-slave.
Fixed by relocating the macros.
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (through mysql_alter_table instead of
mysql_admin_table), which differs in two ways:
1) ALTER TABLE opens the tables differently than the admin commands
do, resulting in an error being returned before the command was even
tried;
2) ALTER TABLE does not start sending diagnostic rows to the client,
which the lower-level admin functions expect, resulting in an
assertion crash.
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
Partitioned tables will still not work with REPAIR TABLE/PARTITION
USE_FRM, since that would require moving each partition to a table,
running REPAIR TABLE t USE_FRM on it, checking that the data still
fulfills the partitioning function, and then moving the table back to
being a partition.
NOTE: I have removed the following functions from the handler
interface, since they are no longer needed:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
THIS ALTERS THE STORAGE ENGINE API
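In sketch form, the remapping means both statement forms end up in one
admin entry point, differing only in an optional partition list (all
names below are stand-ins, not the server's actual signatures):

    #include <string>
    #include <vector>

    // Empty partition list = operate on the whole table.
    int admin_table(const std::string &table,
                    const std::vector<std::string> &partitions)
    {
      // 1. Open the table the way admin commands do (not the ALTER way).
      // 2. If `partitions` is non-empty, restrict the operation to them.
      // 3. Stream diagnostic rows to the client as the engine reports.
      (void)table; (void)partitions;
      return 0;
    }

    int main() {
      admin_table("t", {});            // ANALYZE TABLE t
      admin_table("t", {"p0", "p1"});  // ALTER TABLE t ANALYZE PARTITION p0, p1
    }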
manually resolved conflicts:
Text conflict in client/mysqltest.c
Contents conflict in mysql-test/include/have_bug25714.inc
Text conflict in mysql-test/include/have_ndbapi_examples.inc
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/suite/parts/inc/partition_check_drop.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check1.inc
Text conflict in mysql-test/suite/parts/inc/partition_layout_check2.inc
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_1_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter1_2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter2_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_alter3_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_innodb.result
Text conflict in mysql-test/suite/parts/r/partition_basic_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_basic_symlink_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_engine_myisam.result
Text conflict in mysql-test/suite/parts/r/partition_syntax_myisam.result
Text conflict in mysql-test/suite/rpl_ndb/t/disabled.def
Text conflict in mysql-test/t/disabled.def
The problem in this bug is that we need to get the list of tables to
be updated for a multi-table UPDATE statement, which requires opening
all the tables referenced by the statement and resolving all the
fields involved in the update in order to figure out the list of
tables to update. However, if there are replication filter rules, some
tables might not exist on the slave, resulting in a failure before we
could examine the filter rules.
I think the whole problem cannot be solved on the slave alone: the
master must record and send the information about which tables are
involved in the update, so that the slave does not need to open all
the tables referenced by the multi-table UPDATE statement to figure
out which tables are updated.
So a status variable is added to Query_log_event to store the value of
the table map for update on the master. On the slave, it will try to
get the value of this variable and use it to examine the filter rules
without opening any tables; if this value is not available, the old
approach is used, and thus the bug will still occur when replicating
from old masters.
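One plausible shape of the slave-side check, with invented names and a
simplified bitmap semantics (only the existence of the status variable
and the fallback are given by the description above):

    #include <cstdint>

    struct Query_event_sketch {
      bool     has_table_map_for_update;  // old masters don't send it
      uint64_t table_map_for_update;      // bit i set => table i is updated
    };

    // filtered_mask: tables the replication filter rules ignore.
    // Returns true when the event can be skipped without opening tables.
    bool ignore_by_filter(const Query_event_sketch &ev, uint64_t filtered_mask)
    {
      if (!ev.has_table_map_for_update)
        return false;  // fall back to the old open-all-tables approach
      // Skip only if every updated table is covered by the filter rules.
      return (ev.table_map_for_update & ~filtered_mask) == 0;
    }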