The next number (AUTO_INCREMENT) field of the table is not initialized
for Write_rows events, which causes some engines (InnoDB) to fail to
update the table's auto_increment value correctly.
This patch fixes the problem by honoring the next number field if present.
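As a minimal, self-contained C++ model of the idea (illustrative types
and names, not the actual server code): the engine only advances its
auto-increment counter when the applier points next_number_field at the
AUTO_INCREMENT column before the write.

    #include <cstdio>

    struct Field { long long value; };

    struct Table {
      Field  auto_inc_col;        // the AUTO_INCREMENT column
      Field *next_number_field;   // engine consults this during a write
      long long engine_counter;   // engine-side auto_increment counter

      void write_row(long long v) {
        auto_inc_col.value = v;
        // The engine bumps its counter only if next_number_field is set.
        if (next_number_field && v >= engine_counter)
          engine_counter = v + 1;
      }
    };

    int main() {
      Table t{{0}, nullptr, 1};
      t.write_row(10);                        // bug: counter stays at 1
      std::printf("unpatched: %lld\n", t.engine_counter);
      t.next_number_field = &t.auto_inc_col;  // the fix: honor the field
      t.write_row(20);
      std::printf("patched:   %lld\n", t.engine_counter);
    }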
Problem: When an Incident_log_event contains a bad incident number on disk,
the server crashes with an assertion.
Fix: Don't validate input with assertions. Use errors.
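A sketch of the principle in C++ (illustrative constants and names; the
real check lives in the Incident_log_event code): data read from a log
on disk is untrusted input, so an out-of-range incident number must
surface as an error the caller can report, not as an assertion that
kills the server.

    #include <cstdio>

    enum incident_number { INCIDENT_NONE = 0, INCIDENT_LOST_EVENTS = 1,
                           INCIDENT_COUNT = 2 };

    // After the fix: reject corrupt input with an error code instead of
    // asserting (an assertion crashes debug builds on bad disk data).
    int read_incident(int number) {
      if (number < INCIDENT_NONE || number >= INCIDENT_COUNT)
        return 1;  // report an error up the stack; do not crash
      return 0;
    }

    int main() {
      std::printf("bad incident number -> error %d\n", read_incident(42));
    }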
after rollback on master
When starting a transaction with a statement containing changes
to both transactional and non-transactional tables, the
statement is considered non-transactional and is therefore
written directly to the binary log. This behaviour was present
in 5.0, and has propagated to 5.1.
If a trigger containing a change to a non-transactional table is
added to a transactional table, any changes to the transactional
table are "tainted" as non-transactional.
This patch solves the problem by removing the existing "hack" that
allows non-transactional statements appearing first in a transaction
to be written directly to the binary log. Instead, anything inside
a transaction is treated as part of the transaction and not written
to the binary log until the transaction is committed.
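A toy C++ model of the changed behaviour (not the server's binlog
code): once a transaction is open, every statement is buffered in the
transaction cache and reaches the binary log only on COMMIT, whether
or not it is transactional.

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Binlog {
      std::vector<std::string> log;        // the binary log itself
      std::vector<std::string> trx_cache;  // per-transaction cache
      bool in_transaction = false;

      void begin() { in_transaction = true; }

      void write(const std::string &stmt) {
        if (in_transaction)
          trx_cache.push_back(stmt);  // new rule: cache everything in a
        else                          // trx, even "non-transactional"
          log.push_back(stmt);        // statements appearing first
      }

      void commit() {  // the cache hits the binary log only here
        log.insert(log.end(), trx_cache.begin(), trx_cache.end());
        trx_cache.clear();
        in_transaction = false;
      }
    };

    int main() {
      Binlog b;
      b.begin();
      b.write("UPDATE myisam_t ...");  // no longer bypasses the cache
      b.write("UPDATE innodb_t ...");
      b.commit();
      std::printf("%zu events logged at commit\n", b.log.size());
    }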
Problem 1: BUG#36625: rpl_redirect doesn't do anything useful. It tests an
obsolete feature that was never fully implemented.
Fix 1: Remove rpl_redirect.
Problem 2: rpl_innodb_bug28430 and rpl_flushlog_loop are disabled even though
the bugs for which they were disabled have been fixed.
Fix 2: Re-enable rpl_innodb_bug28430 and rpl_flushlog_loop.
The partitioning clause is output as a single very long line, which is
very hard for a human to interpret. This patch breaks the partitioning
syntax into one line for the partitioning type, and one line per
partition/subpartition.
In certain situations, a scan of the table will return the error
code HA_ERR_RECORD_DELETED, and this error code is not
correctly caught in the Rows_log_event::find_row() function, which
causes an error to be returned for this case.
This patch fixes the problem by adding code to either ignore the
record and continue with the next one, in the event of a table
scan, or to change the error code to HA_ERR_KEY_NOT_FOUND, in the
event that a key lookup is attempted.
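A compact C++ sketch of the two cases (the error constants carry the
real handler names but the code is otherwise illustrative; this is not
the actual find_row() code):

    #include <cstdio>

    enum { OK = 0, HA_ERR_KEY_NOT_FOUND = 120, HA_ERR_RECORD_DELETED = 134,
           HA_ERR_END_OF_FILE = 137 };

    // Table scan: a deleted record is not an error; skip it, keep going.
    int scan_next(int (*read_next)()) {
      int err;
      do {
        err = read_next();
      } while (err == HA_ERR_RECORD_DELETED);
      return err;  // OK, HA_ERR_END_OF_FILE, or a real error
    }

    // Key lookup: a deleted record means the key is effectively absent.
    int key_lookup(int (*read_key)()) {
      int err = read_key();
      return err == HA_ERR_RECORD_DELETED ? HA_ERR_KEY_NOT_FOUND : err;
    }

    int main() {
      std::printf("lookup on deleted row -> %d\n",
                  key_lookup([] { return (int) HA_ERR_RECORD_DELETED; }));
    }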
The code to read the value of a system variable was extracting its value
at the PREPARE stage and substituting the value (as a constant) into the parse tree.
Note that this must be a reversible transformation, i.e. it must be reversed before
each re-execution.
Unfortunately this cannot be reliably done using the current code, because there are
other non-reversible parse tree transformations that can interfere with this
reversible transformation.
Fixed by resolving the value not at PREPARE but at EXECUTE (as the rest of the
functions do). Added a cache of the value (so that it stays constant throughout
the execution of the query). Note that the cache also caches NULL values.
Updated an obsolete related test suite (variables-big) and the code to test the
result type of system variables (as per bug 74).
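A self-contained C++ sketch of the caching scheme (illustrative types;
the real object lives in the server's Item hierarchy): the value is
read at EXECUTE time, cached on first use, NULL included, and the
cache is cleared before re-execution.

    #include <cstdio>

    struct SysVarRef {
      long long (*read_live)(bool *is_null);  // reads the live variable
      bool cached = false;
      bool cached_null = false;               // NULL is cached too
      long long cached_value = 0;

      long long val(bool *is_null) {
        if (!cached) {            // first read during this EXECUTE:
          cached_value = read_live(&cached_null);
          cached = true;          // snapshot; later reads see the same value
        }
        *is_null = cached_null;
        return cached_value;
      }
      void cleanup() { cached = false; }  // called before re-execution
    };

    int main() {
      SysVarRef v{[](bool *n) { *n = false; return 42LL; }};
      bool null_flag;
      std::printf("value: %lld\n", v.val(&null_flag));
    }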
The failure was caused by executing a CREATE-SELECT statement that creates a
table in another database than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database, hence
creating the table in the wrong database, causing the following inserts to
fail since the table didn't exist in the given database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and by using
that parameter in the calls that write the CREATE statement to the binary
log. The database name is only printed if it is different from the
currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
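A small C++ sketch of the qualification rule (a hypothetical function,
not the real store_create_info() signature): the database prefix is
emitted only when requested and only when the target database differs
from the current one.

    #include <cstdio>
    #include <cstring>

    void print_create(const char *current_db, const char *db,
                      const char *table, bool show_database) {
      if (show_database && std::strcmp(db, current_db) != 0)
        std::printf("CREATE TABLE `%s`.`%s` (...)\n", db, table);
      else
        std::printf("CREATE TABLE `%s` (...)\n", table);
    }

    int main() {
      print_create("test", "other", "t1", true);   // binlog: qualified
      print_create("test", "test",  "t1", true);   // binlog: same db, bare
      print_create("test", "other", "t1", false);  // SHOW CREATE: unchanged
    }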
Adds --general-log-file and --slow-query-log-file command-
line options to match system variables of the same names.
Deprecates the --log and --log-slow-queries command-line options
and the log and log_slow_queries system variables for v7.0; they
are superseded by general_log/general_log_file and
slow_query_log/slow_query_log_file, respectively.
The Blackhole engine did not support row-based replication
since the delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support for the
Blackhole engine by implementing the two functions mentioned
above, and making the engine pretend that it has found the
correct row to delete or update when executed from the slave
SQL thread by implementing index and range searching functions.
It is necessary to pretend this only for the SQL thread, since
a SELECT executed on the Blackhole engine will otherwise never
return EOF, causing a livelock.
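A minimal C++ sketch of the behaviour (an illustrative class, not the
real ha_blackhole handler): writes succeed without storing anything,
and lookups "find" a row only when running in the slave SQL thread.

    enum { HA_ERR_END_OF_FILE = 137 };

    struct blackhole_sketch {
      bool slave_sql_thread;  // true while applying replicated row events

      int write_row(const unsigned char *)  { return 0; }  // discard
      int delete_row(const unsigned char *) { return 0; }  // pretend success
      int update_row(const unsigned char *, unsigned char *) { return 0; }

      int index_read(unsigned char *, const unsigned char *) {
        // The SQL thread must believe the row exists so the row event
        // can be applied; a plain SELECT must see EOF or it never stops.
        return slave_sql_thread ? 0 : HA_ERR_END_OF_FILE;
      }
    };

    int main() {
      blackhole_sketch h{true};
      return h.index_read(nullptr, nullptr);  // 0: "found" for the applier
    }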
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or rollback correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by having a failing UPDATE or INSERT,
on a transactional table, causing an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
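A toy C++ model of the state bug (illustrative; the real state lives in
the binlog transaction cache): clearing the buffer without first
flushing the pending row event leaves stale state behind, which the
assertion catches at shutdown.

    #include <cassert>
    #include <vector>

    struct TrxCache {
      std::vector<int> buffer;    // buffered events
      bool has_pending = false;   // a row event not yet in the buffer

      void add_pending() { has_pending = true; }
      void flush_pending() {      // move the pending event into the buffer
        if (has_pending) { buffer.push_back(1); has_pending = false; }
      }

      void rollback() {
        flush_pending();          // the fix: always reset pending state
        buffer.clear();           // then discard the buffered events
      }
    };

    int main() {
      TrxCache c;
      c.add_pending();            // a failing UPDATE left a pending event
      c.rollback();               // implicit rollback
      assert(!c.has_pending && c.buffer.empty());  // clean at shutdown
    }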
The replication filtering rules were inappropriately applied when
executing the BINLOG pseudo-query. The rules are supposed to be active
only when the slave's SQL thread executes an event.
Fixed by correcting the condition so that the replication rules are
applied only if the slave SQL thread executes the event.
INSTALL PLUGIN and UNINSTALL PLUGIN worked with statement-based and
mixed-mode replication only, but not with row-based replication.
There is no statement-based replication of these statements.
But there was row-based replication of the inserts and deletes
to and from the mysql.plugin table.
The fix is to suppress binlogging during insert and delete to
and from the mysql.plugin table.
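A C++ sketch of the suppression (the server brackets the affected code
with a save/disable/restore of the thread's binlog option; the RAII
guard here is an illustrative stand-in):

    struct BinlogSuppressor {
      bool &option_bin_log;  // the thread's binlogging flag
      bool saved;

      explicit BinlogSuppressor(bool &flag)
          : option_bin_log(flag), saved(flag) {
        option_bin_log = false;  // rows written below are not binlogged
      }
      ~BinlogSuppressor() { option_bin_log = saved; }  // restore on exit
    };

    int main() {
      bool binlogging = true;
      {
        BinlogSuppressor guard(binlogging);
        // ... insert into / delete from mysql.plugin here ...
      }
      return binlogging ? 0 : 1;  // restored after the scope
    }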
partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (through mysql_alter_table instead of
mysql_admin_table) which differs in two ways:
1) alter table opens the tables in a different way than the admin
commands do, resulting in an error being returned before the command
was even tried
2) alter table does not start sending diagnostic rows to the client,
which the lower-level admin functions continue to rely on, resulting
in an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
Partitioned tables will still not work with
REPAIR TABLE/PARTITION USE_FRM, since that would require moving each
partition to a table, running REPAIR TABLE t USE_FRM on it, checking
that the data still fulfills the partitioning function, and then
moving the table back to being a partition.
NOTE: I have removed the following functions from the handler
interface:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
since they are no longer needed.
THIS ALTERS THE STORAGE ENGINE API
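A toy C++ model of the remapping (illustrative names; the real entry
point is mysql_admin_table): the per-partition admin commands now take
the table-level path, with an extra partition list restricting the work.

    #include <cstdio>
    #include <string>
    #include <vector>

    // mysql_admin_table-style entry point: an empty list means the whole
    // table, a non-empty list restricts the command to those partitions.
    void admin_table(const std::string &cmd, const std::string &table,
                     const std::vector<std::string> &parts) {
      std::printf("%s TABLE %s", cmd.c_str(), table.c_str());
      for (const auto &p : parts) std::printf(" [partition %s]", p.c_str());
      std::printf("\n");  // diagnostic rows go to the client from here
    }

    int main() {
      admin_table("ANALYZE", "t", {});            // ANALYZE TABLE t
      admin_table("ANALYZE", "t", {"p0", "p1"});  // ALTER TABLE t ANALYZE
    }                                             // PARTITION p0, p1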
The reason for the failure is that the IO thread passes through a sequence
of state changes before it eventually reaches the expected running status
of NO.
It is unreasonable to wait for the running status when the whole idea of
the test is to get to the IO thread error.
Fixed by changing the waiting condition.
Problem: master binlog has 'create table t1'. Master binlog
was removed before the slave could replicate it. In the test's
cleanup code, the master did 'drop table t1', which caused the
slave SQL thread to stop with an error since it did not
know about t1.
Fix: t1 is just an auxiliary construction, only needed on
master. Hence, we turn off binlogging before t1 is created,
drop t1 as soon as we don't need it anymore, and then turn
on binlogging again.
Many dump threads can exist due to the way the new version of mtr governs
suites. For this immediate problem the test is refined not to use I_S but
rather to reconnect explicitly, while preserving the logic that verifies
the original target bug fix.
Problem: the test set @@global.init_slave to garbage at a time
which was not guaranteed to be after the time when the slave's
SQL thread used it. That would cause the slave's SQL thread to
stop in rare cases.
Fix: The test does not care about the value of
@@global.init_slave, except that it should be different on
master and slave. Hence, we set @@global.init_slave to
something that is valid SQL.
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and
have been extended to four bytes per character in 6.0),
an extra two bits have been encoded in the field metadata
word for fields of type Field_string (i.e., CHAR fields).
Since the metadata word is full, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise XOR of the real type and the extra two bits. Since
the upper 4 bits of the real type are always 1111 for
Field_string, this
means that for fields of length <256, the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
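A worked C++ example of the encoding, reconstructed from the
description above (0xFE is the type code whose upper nibble is 1111):
byte lengths up to 1023 need 10 bits, so the low 8 go in the length
byte and the top 2 are XORed into the type byte.

    #include <cstdio>

    void encode(unsigned field_length, unsigned char meta[2]) {
      // XOR length bits 8-9 into bits 4-5 of the type byte (0xFE).
      meta[0] = 0xFE ^ ((field_length & 0x300) >> 4);
      meta[1] = field_length & 0xFF;  // low 8 bits of the byte length
    }

    int main() {
      unsigned char m[2];
      encode(255, m);  // e.g. latin1 CHAR(255): type byte stays 0xFE,
                       // identical to the pre-5.1.26 encoding
      std::printf("len 255 -> type 0x%02X, len byte 0x%02X\n", m[0], m[1]);
      encode(765, m);  // e.g. utf8 CHAR(255) = 765 bytes: type byte
                       // becomes 0xDE, which an old slave does not
                       // recognize, so it stops
      std::printf("len 765 -> type 0x%02X, len byte 0x%02X\n", m[0], m[1]);
    }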