Previously, the arrival of the same GTID twice in multi-source replication
would cause a double-apply, or an error in GTID strict mode.
Keep that behaviour, but add an option --gtid-ignore-duplicates which
allows duplicates to be handled correctly, ignoring all but the first.
This relies on the user ensuring correct configuration so that
sequence numbers are strictly increasing within each replication
domain; then duplicates can be detected simply by comparing the
sequence numbers against what is already applied.
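As an illustration only, here is a minimal standalone C++ sketch of that duplicate
check (the names Gtid, Applied_state, etc. are hypothetical and not taken from the
server source):

  #include <cstdint>
  #include <map>

  struct Gtid { uint32_t domain_id; uint32_t server_id; uint64_t seq_no; };

  class Applied_state
  {
    std::map<uint32_t, uint64_t> last_seq_no;   // highest applied seq_no per domain
  public:
    // A GTID is a duplicate if its seq_no is not above what is already
    // applied in the same replication domain.
    bool is_duplicate(const Gtid &g) const
    {
      auto it= last_seq_no.find(g.domain_id);
      return it != last_seq_no.end() && g.seq_no <= it->second;
    }
    void record_applied(const Gtid &g)
    {
      uint64_t &s= last_seq_no[g.domain_id];
      if (g.seq_no > s)
        s= g.seq_no;
    }
  };

This is also why the scheme only works when the user guarantees strictly
increasing sequence numbers within each domain.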
Only one master connection (but possibly multiple parallel worker
threads within that connection) is allowed to apply events within
one replication domain at a time; any other connection that
receives a GTID in the same domain either discards it (if it is
already applied) or waits for the other connection to not have
any events to apply.
Intermediate patch, as proof-of-concept for testing. The main limitation
is that currently it is only implemented for parallel replication,
@@slave_parallel_threads > 0.
- Removed double call to trans_begin() for GTID BEGIN event
- Don't set OPTION_BEGIN before calling trans_begin() as this causes extra work in trans_begin()
sql/log_event.cc:
Removed double call to trans_begin for GTID BEGIN event.
Don't set OPTION_BEGIN before calling trans_begin() as this causes extra work in trans_begin().
This was done by removing the parsing of "BEGIN" and instead executing trans_begin() directly.
This is much faster, but we lose the ability to log possibly slow "BEGIN" statements. As this should never happen, it can be ignored.
The problem was when a GTID event was part of a group commit, and so contained
a commit id. The code that replaces GTID with a BEGIN event for old slaves did
not correctly handle this case.
Fix the code so that a GTID with a commit id can also be properly replaced
with a BEGIN query event. In the BEGIN event, the extra two bytes are replaced
with a dummy, empty time zone string.
With parallel replication, there can be any number of events queued on
in-memory lists in the worker threads.
For normal STOP SLAVE, we want to skip executing any remaining events on those
lists and stop as quickly as possible.
However, for START SLAVE UNTIL, when the UNTIL position is reached in the SQL
driver thread, we must _not_ stop until all already queued events for the
workers have been executed - otherwise we would stop too early, before the
actual UNTIL position had been completely reached.
The code did not handle UNTIL correctly, stopping too early due to not
executing the queued events to completion. Fix this, and also implement that
an explicit STOP SLAVE in the middle (when the SQL driver thread has reached
the UNTIL position but the workers have not) _will_ cause an immediate stop.
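To make the intended behaviour concrete, here is a hedged sketch of the stop
decision in the SQL driver thread (all names are hypothetical, not the actual
rpl_parallel code):

  enum class Stop_action { NONE, STOP_NOW, STOP_AFTER_DRAIN };

  struct Slave_state
  {
    bool stop_requested;   // explicit STOP SLAVE was issued
    bool until_reached;    // driver thread has reached the UNTIL position
    bool workers_idle;     // all events already queued to workers are executed
  };

  Stop_action decide_stop(const Slave_state &s)
  {
    if (s.stop_requested)
      return Stop_action::STOP_NOW;              // skip remaining queued events
    if (s.until_reached)
      return s.workers_idle ? Stop_action::STOP_NOW
                            : Stop_action::STOP_AFTER_DRAIN;  // finish queued work first
    return Stop_action::NONE;
  }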
This is a port of the fix for MySQL BUG#17647863.
revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM
Rename test() macro to MY_TEST() to avoid conflict with libc++.
As a side-effect of purge_relay_logs(), sql_slave_skip_counter
was silently ignored in GTID mode.
But sql_slave_skip_counter in fact is not a good match with GTID.
And it is not really needed either, as users can explicitly set
@@gtid_slave_pos to skip specific GTIDs, in a way that matches
well how GTID replication works.
So with this patch, we give an error on attempts to set
sql_slave_skip_counter when using GTID, with a suggestion to use
gtid_slave_pos instead, if needed.
- CREATE TABLE is by default executed on the slave as CREATE OR REPLACE
- DROP TABLE is by default executed on the slave as DROP TABLE IF EXISTS
This means that a slave will by default continue even if we try to create
a table that existed on the slave (the table will be deleted and re-created) or
if we try to drop a table that didn't exist on the slave.
This should be safe as instead of having the slave stop because of an inconsistency between
master and slave, it will fix the inconsistency.
Those that would prefer to get a stopped slave instead for the above cases can set slave_ddl_exec_mode to STRICT.
- Ensure that a CREATE OR REPLACE TABLE which dropped a table is replicated
- DROP TABLE that generated an error on master is handled as an identical DROP TABLE on the slave (IF EXISTS is not added in this case)
- Added slave_ddl_exec_mode variable to decide how DDL's are replicated
New logic for handling BEGIN GTID ... COMMIT from the binary log:
- When we find a BEGIN GTID, we start a transaction and set OPTION_GTID_BEGIN
- When we find COMMIT, we reset OPTION_GTID_BEGIN and execute the normal COMMIT code.
- While OPTION_GTID_BEGIN is set:
- We don't generate implicit commits before or after statements
- All tables are regarded as transactional tables in the binary log (to ensure things are executed exactly as on the master)
- We reset OPTION_GTID_BEGIN also on rollback
This helps ensure that we don't get any sporadic commits (and thus new GTIDs) on the slave, and helps keep the GTIDs on master and slave in sync.
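Schematically (a simplified sketch; OPTION_GTID_BEGIN is the real flag name, but the
bit value and helper functions below are illustrative only):

  #include <cstdint>

  static const uint64_t OPTION_GTID_BEGIN= 1ULL << 40;   // placeholder bit value

  // Set on BEGIN GTID, cleared on COMMIT and ROLLBACK (and, as a safety
  // measure, when a GTID is recorded).
  inline void gtid_begin(uint64_t &option_bits) { option_bits|=  OPTION_GTID_BEGIN; }
  inline void gtid_end(uint64_t &option_bits)   { option_bits&= ~OPTION_GTID_BEGIN; }

  // While the flag is set we suppress implicit commits and treat all tables
  // as transactional for binlogging purposes.
  inline bool implicit_commit_allowed(uint64_t option_bits)
  { return !(option_bits & OPTION_GTID_BEGIN); }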
mysql-test/extra/rpl_tests/rpl_log.test:
Added testing of mode slave_ddl_exec_mode=STRICT
mysql-test/r/mysqld--help.result:
New help messages
mysql-test/suite/rpl/r/create_or_replace_mix.result:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/r/create_or_replace_row.result:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/r/create_or_replace_statement.result:
Testing replication of create or replace
mysql-test/suite/rpl/r/rpl_gtid_startpos.result:
Test must be run with slave_ddl_exec_mode=STRICT as part of the test depends on DROP TABLE failing on the slave.
mysql-test/suite/rpl/r/rpl_row_log.result:
Updated result
mysql-test/suite/rpl/r/rpl_row_log_innodb.result:
Updated result
mysql-test/suite/rpl/r/rpl_row_show_relaylog_events.result:
Updated result
mysql-test/suite/rpl/r/rpl_stm_log.result:
Updated result
mysql-test/suite/rpl/r/rpl_stm_mix_show_relaylog_events.result:
Updated result
mysql-test/suite/rpl/r/rpl_temp_table_mix_row.result:
Updated result
mysql-test/suite/rpl/t/create_or_replace.inc:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_mix.cnf:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_mix.test:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_row.cnf:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_row.test:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_statement.cnf:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_statement.test:
Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/rpl_gtid_startpos.test:
Test must be run with slave_ddl_exec_mode=STRICT as part of the test depends on DROP TABLE failing on the slave.
mysql-test/suite/rpl/t/rpl_stm_log.test:
Removed some lines
mysql-test/suite/sys_vars/r/slave_ddl_exec_mode_basic.result:
Testing of slave_ddl_exec_mode
mysql-test/suite/sys_vars/t/slave_ddl_exec_mode_basic.test:
Testing of slave_ddl_exec_mode
sql/handler.cc:
Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
This is to ensure that statements are not committed too early if non-transactional tables are used.
sql/log.cc:
Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
Also treat 'direct' log events as transactional (to get them logged as they were on the master)
sql/log_event.cc:
Ensure that the new error from DROP TABLE when trying to drop a view is treated same as the old one.
Store error code that slave expects in THD.
Set OPTION_GTID_BEGIN if we find a BEGIN.
Reset OPTION_GTID_BEGIN if we find a COMMIT.
sql/mysqld.cc:
Added slave_ddl_exec_mode_options
sql/mysqld.h:
Added slave_ddl_exec_mode_options
sql/rpl_gtid.cc:
Reset OPTION_GTID_BEGIN if we record a gtid (safety)
sql/sql_class.cc:
Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
sql/sql_class.h:
Added to THD: log_current_statement and slave_expected_error
sql/sql_insert.cc:
Ensure that CREATE OR REPLACE is logged if table was deleted.
Don't do implicit commit for CREATE if we are under OPTION_GTID_BEGIN
sql/sql_parse.cc:
Change CREATE TABLE -> CREATE OR REPLACE TABLE for slaves
Change DROP TABLE -> DROP TABLE IF EXISTS for slaves
CREATE TABLE doesn't force implicit commit in case of OPTION_GTID_BEGIN
Don't do commits before or after any statement if OPTION_GTID_BEGIN was set.
sql/sql_priv.h:
Added OPTION_GTID_BEGIN
sql/sql_show.cc:
Enhanced store_create_info() to also be able to handle CREATE OR REPLACE
sql/sql_show.h:
Updated prototype
sql/sql_table.cc:
Ensure that CREATE OR REPLACE is logged if table was deleted.
sql/sys_vars.cc:
Added slave_ddl_exec_mode
sql/transaction.cc:
Added warning if we got a GTID under OPTION_GTID_BEGIN
Performance schema tables are local to a server and they should not
be allowed to be executed by the slave from the relay log.
From 5.6.10, P_S events are not written into the binary log.
But prior to that, from MySQL 5.5 onwards, P_S events are written
to the binary log by the master.
The following are problematic scenarios:
1. Master 5.5 -> Slave 5.5
========================
A) RBR: Slave crashes
B) SBR: P_S statements are replicated.
2. Master 5.5 -> Slave 5.6
========================
A) RBR: SQL thd generates error
B) SBR : P_S statements are replicated
3. 5.5 binlog executed on a server 5.5 using mysqlbinlog|mysql
=================================================================
A) RBR: Server crash (because of BINLOG'... statement)
B) SBR: P_S statements are executed
4. 5.5 binlog executed on server 5.6 using mysqlbinlog|mysql
================================================================
A) RBR: SQL error (because of BINLOG'... statement)
B) SBR: P_S statements are executed.
The generalized behaviour should be:
a) Slave SQL thread should certainly ignore P_S events read from
the relay log.
b) mysqlbinlog|mysql should replay the binlog successfully.
OF ROW DATA
Problem:
========
Inserting a row larger than 4GB when the server uses RBR leads
to a crash.
Analysis:
========
Row-based binary logging logs changes to individual table
rows. During the execution of DML statements in RBR, the
actual row data is stored in the "m_rows_buf" buffer,
and the buffer's contents are written to the binary log.
"m_rows_buf" is prepared in the function
"Rows_log_event::do_add_row_data".
When a huge row is specified, as in this bug scenario where
the row size is 4294971520 > UINT_MAX (4294967295), the
"m_rows_buf" is reallocated to accommodate the row data and
the row is then copied into the buffer. During this realloc
call, the length is type-cast to "uint", which overflows.
Because of the overflow the reallocated memory is smaller
than what was requested, and this results in a crash when
copying the row data into the buffer.
Hence rows of size > 4GB cannot be written to the binary log.
By default the event_length can only be stored within 4 bytes,
which in turn restricts how large an event can grow. Hence large
rows cannot be replicated using row-based replication.
Fix:
===
An error is generated if the row size exceeds 4GB.
sql/log_event.cc:
An error is generated if the row size exceeds 4GB.
Debug simulations are added to test the fix.
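The core of the problem and the added guard, shown as a standalone sketch
(illustrative names, not the exact server code):

  #include <climits>
  #include <cstdlib>

  // Before the fix, the needed length was cast to uint for the realloc, so a
  // >4GB row wrapped around and the buffer ended up far smaller than the row:
  //   new_buf= (char*) realloc(buf, (uint) needed_len);   /* silent truncation */
  // The fix refuses rows whose size cannot be represented.
  bool grow_row_buffer(char *&buf, size_t needed_len)
  {
    if (needed_len > UINT_MAX)      // row larger than 4GB: report an error instead
      return false;
    char *p= (char*) realloc(buf, needed_len);
    if (p == NULL)
      return false;
    buf= p;
    return true;
  }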
- Added MALLOC_LIBRARY variable to hold name of malloc library
- Backported valgrind-related fixes from jemalloc 3.4.1 to the included jemalloc 3.3.1
- Renamed bitmap_init() and bitmap_free() to my_bitmap_init() and my_bitmap_free() to avoid clash with jemalloc 3.4.1
- Use option --soname-synonyms=somalloc=NON to valgrind when using jemalloc
- Show version related variables in mysqld --help
-- Added SHOW_VALUE_IN_HELP marker
Increased back_log to 150 as the original value was a bit too small
CMakeLists.txt:
Added MALLOC_LIBRARY variable to hold name of malloc library
cmake/jemalloc.cmake:
Added MALLOC_LIBRARY variable to hold name of malloc library
config.h.cmake:
Added MALLOC_LIBRARY variable to hold name of malloc library
extra/jemalloc/ChangeLog:
Updates changelog
extra/jemalloc/include/jemalloc/internal/arena.h:
Backported valgrind fixes from jemalloc 3.4.1
extra/jemalloc/include/jemalloc/internal/jemalloc_internal.h.in:
Backported valgrind fixes from jemalloc 3.4.1
extra/jemalloc/include/jemalloc/internal/private_namespace.h:
Backported valgrind fixes from jemalloc 3.4.1
extra/jemalloc/include/jemalloc/internal/tcache.h:
Backported valgrind fixes from jemalloc 3.4.1
extra/jemalloc/src/arena.c:
Backported valgrind fixes from jemalloc 3.4.1
include/my_bitmap.h:
Renamed bitmap_init() and bitmap_free() to my_bitmap_init() and my_bitmap_free() to avoid clash with jemalloc 3.4.1
mysql-test/mysql-test-run.pl:
Use option --soname-synonyms=somalloc=NON to valgrind when using jemalloc
mysql-test/valgrind.supp:
Suppression of memory leak in OpenSuse 12.3
mysys/my_bitmap.c:
Renamed bitmap_init() and bitmap_free() to my_bitmap_init() and my_bitmap_free()
sql/ha_ndbcluster_binlog.cc:
Renames
sql/ha_ndbcluster_cond.h:
Renames
sql/ha_partition.cc:
Renames
sql/handler.cc:
Renames
sql/item_subselect.cc:
Renames
sql/log_event.cc:
Renames
sql/log_event_old.cc:
Renames
sql/mysqld.cc:
Renames
Show version related variables in mysqld --help
sql/opt_range.cc:
Renames
sql/opt_table_elimination.cc:
Renames
sql/partition_info.cc:
Renames
sql/rpl_injector.h:
Renames
sql/set_var.h:
Renames
sql/slave.cc:
Renames
sql/sql_bitmap.h:
Renames
sql/sql_insert.cc:
Renames
sql/sql_lex.h:
Renames
sql/sql_parse.cc:
Renames
sql/sql_partition.cc:
Renames
sql/sql_select.cc:
Renames
sql/sql_show.cc:
Renames
sql/sql_update.cc:
Renames
sql/sys_vars.cc:
Show version related variables in mysqld --help
sql/sys_vars.h:
Added SHOW_VALUE_IN_HELP marker for variables that should be shown in --help
sql/table.cc:
Renames
sql/table.h:
Removed not used bitmap_init_value
storage/connect/ha_connect.cc:
Removed compiler warning
storage/maria/ma_open.c:
Renames
unittest/mysys/bitmap-t.c:
Renames
AUTO_INC COLUMN ONLY ON SLAVE
In RBR, if the slave's table has one additional auto_inc column,
then it will insert the value 0 instead of generating the next
auto_inc number.
We fix this by checking whether an extra auto_inc column exists
compared to the column data of the row event; if so, we explicitly set
it to NULL and flag to the engine that a nulled auto_inc column will
be inserted.
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)
Problem: If the slave receives a corrupted row event,
the slave server crashes.
Analysis: When the slave is unpacking the row event, it does
not validate the data before applying the event. If the
data is corrupted, e.g. the length of a field is wrong,
it could end up reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog with the '-v' option. Due to the -v
option, the tool tries to print the values of all the
fields. A corrupted field length could lead to a crash.
Fix: Before unpacking a field, its length is verified.
Only if it falls within the event boundary is the field
unpacked. Otherwise, an "ER_SLAVE_CORRUPT_EVENT" error is thrown.
In the mysqlbinlog -v case, the field value is not
printed and processing of the file is stopped.
sql/field.h:
Removed a function which is not required anymore
sql/log_event.cc:
Adding a validation on the field length before
the tool tries to print the value.
sql/log_event.h:
Changing unpack_row call according to the new arguments
sql/log_event_old.h:
Changing unpack_row call according to the new arguments
sql/rpl_record.cc:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before doing 'unpack' field.
sql/rpl_record.h:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before doing 'unpack' field.
sql/rpl_utility.cc:
Now calc_field_size() is required for client too.
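The shape of the added validation, as a hedged standalone sketch (the helper below
is hypothetical; the actual patch threads a row_end pointer through unpack_row()
and the field unpacking code):

  #include <cstddef>
  #include <cstdint>

  // True if a field claiming 'field_len' bytes starting at 'ptr' lies
  // entirely within the row event data that ends at 'row_end'.
  static bool field_fits_in_event(const uint8_t *ptr, size_t field_len,
                                  const uint8_t *row_end)
  {
    return ptr <= row_end && field_len <= (size_t)(row_end - ptr);
  }

Only when such a check passes is the field unpacked (or printed by mysqlbinlog -v);
otherwise ER_SLAVE_CORRUPT_EVENT is reported.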
Add a test case for killing a waiting query in parallel replication.
Fix several bugs found:
- We should not wakeup_subsequent_commits() in ha_rollback_trans(), since we
do not know the right wakeup_error() to give.
- When a wait_for_prior_commit() is killed, we must unregister from the
waitee so we do not race and get an extra (non-kill) wakeup.
- We need to deal with error propagation correctly in queue_for_group_commit
when one thread is killed.
- Fix one locking issue in queue_for_group_commit(); we could unlock the
waitee lock too early, and this ends up processing wakeup() with insufficient
locking.
- Fix Xid_log_event::do_apply_event; if commit fails it must not update the
in-memory @@gtid_slave_pos state.
- Fix and cleanup some things in the rpl_parallel.cc error handling.
- Add a missing check for killed in the slave sql driver thread, to avoid a
race.
(and valgrind warnings)
* move thd userstat initialization to the same function
that was adding thd userstat to global counters.
* initialize thd->start_bytes_received in THD::init
(when thd->userstat_running is set)
"SHOW BINLOG EVENTS"
Problem:
========
The server crashed after executing "show binlog events in
'mysql-bin.000005' from 99"; the crash happened randomly.
Analysis:
========
During construction of a LOAD EVENT or NEW LOAD EVENT object,
if the starting offset provided is an incorrect value, then
all the object members that are retrieved from that offset
are also invalid. Sometimes this leads to out-of-bound
address offsets. In the bug scenario, the file name is
extracted from an invalid address and then fed to the
strlen(fname) function. Passing an invalid address to strlen
leads to a crash.
Fix:
===
Validate if the given offset falls within the event boundary
or not.
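The validation can be pictured with a small standalone sketch (illustrative names;
the real check lives in the Load event construction code in sql/log_event.cc):

  #include <cstring>

  // 'event_buf'..'event_end' delimit the raw event; 'fname' was computed from
  // offsets read out of the (possibly bogus) event data.
  static bool fname_is_valid(const char *event_buf, const char *event_end,
                             const char *fname)
  {
    if (fname < event_buf || fname >= event_end)
      return false;                                  // pointer outside the event
    return memchr(fname, '\0', event_end - fname) != nullptr;  // terminated inside it
  }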
sql/log_event.cc:
Added code to validate fname's address: "fname" should
be within the event boundary. Added code to find invalid
events.
MDEV-5217: Last_sql_error lost in parallel replication.
For some reason, the query execution code in log_event.cc calls
rli->clear_error() for each event (as part of clear_all_errors()).
This causes a problem in parallel replication, where the
execution in one worker thread could clear the error set by
another thread, causing the SQL thread to stop but leaving no
error visible in SHOW SLAVE STATUS.
There seems to be no reason to clear the global error code
in Relay_log_info for each event execution, from code review
and from running the test suite. So remove this clearing of
the error code to make things work also in the parallel case.
The merge is still missing a few hunks related to temporary tables and
InnoDB log file size. The associated code did not seem to exist in
10.0, so the merge of that needs more work. Until this is fixed, there
are a number of test failures as a result.
In parallel replication, there are two kinds of events which are
executed in different ways.
Normal events that are part of event groups/transactions are executed
asynchronously by being queued for a worker thread.
Other events like format description and rotate and such are executed
directly in the driver SQL thread.
If the direct execution of the other events were to update the old-style
position, then the position gets updated too far ahead, before the normal
events that have been queued for a worker thread have been executed. So
this patch adds some special cases to prevent such position updates ahead
of time, and instead queues dummy events for the worker threads, so that
they will at an appropriate time do the position updates instead.
(Also fix a race in a test case that happened to trigger while running
tests for this patch).
Fix some more parts of old-style position updates.
Now we save in rgi some coordinates for master log and relay log, so
that in do_update_pos() we can use the right set of coordinates with
the right events.
The Rotate_log_event::do_update_pos() is fixed in the parallel case
to not directly update relay-log.info (as Rotate event runs directly
in the driver SQL thread, ahead of actual event execution). Instead,
group_master_log_file is updated as part of do_update_pos() in each
event execution.
In the parallel case, position updates happen in parallel without
any ordering, but taking care that position is not updated backwards.
Since position update happens only after event execution this leads
to the right result.
Also fix an access-after-free introduced in an earlier commit.
Fix some part of update of old-style coordinates in parallel replication:
- Ignore XtraDB request for old-style coordinates, not meaningful for
parallel replication (must use GTID to get crash-safe parallel slave).
- Only update relay log coordinates forward, not backwards, to ensure
that parallel threads do not conflict with each other.
- Move future_event_relay_log_pos to rgi.
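The forward-only update can be sketched as follows (hypothetical names; the real
code updates the Relay_log_info coordinates under its mutex):

  #include <cstdint>
  #include <mutex>
  #include <string>

  struct Log_position { std::string file; uint64_t offset; };

  // Parallel workers finish events in arbitrary order, so a candidate position
  // is only accepted if it does not move the coordinates backwards.
  class Shared_position
  {
    std::mutex lock;
    Log_position pos;
  public:
    void update_if_ahead(const Log_position &candidate)
    {
      std::lock_guard<std::mutex> guard(lock);
      // Lexical file comparison is a simplification; it works for binlog file
      // names that differ only in an equal-width numeric suffix.
      if (candidate.file > pos.file ||
          (candidate.file == pos.file && candidate.offset > pos.offset))
        pos= candidate;            // move forward only, never backwards
    }
  };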
REPLICATION FILTERS ARE USED.
Problem:
When a filtered slave applies an Int_var_log_event and then
tries to write the event to its own binlog, the LAST_INSERT_ID
value is written wrongly.
Analysis:
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is a variable which is set when LAST_INSERT_ID() is used by
a statement. If it is set, first_successful_insert_id_in_
prev_stmt_for_binlog will be stored in the statement-based
binlog. This variable is CUMULATIVE along the execution of
a stored function or trigger: if one substatement sets it
to 1 it will stay 1 until the function/trigger ends,
thus making sure that first_successful_insert_id_in_
prev_stmt_for_binlog does not change anymore and is
propagated to the caller for binlogging. This is achieved
using the following code
  if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
  {
    /* It's the first time we read it */
    first_successful_insert_id_in_prev_stmt_for_binlog=
      first_successful_insert_id_in_prev_stmt;
    stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
  }
The slave server, after receiving an Int_var_log_event from the
master, sets
stmt_depends_on_first_successful_insert_id_in_prev_stmt
to true (*which is wrong*) without setting
first_successful_insert_id_in_prev_stmt_for_binlog. Because
of this, when the actual DML statement with
LAST_INSERT_ID() is parsed by the slave SQL thread,
first_successful_insert_id_in_prev_stmt_for_binlog is not
set. Hence the value zero (the default) is written to the
slave's binlog.
Why only a *filtered slave* is affected when the code is
in a common place:
-------------------------------------------------------
In Query_log_event::do_apply_event,
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is reset to zero at the end of the function. In the case of a
normal slave (no filters), this variable is therefore reset.
On a filtered slave, the slave SQL thread defers execution of
all IRU events until the IRU's Query_log_event is received. Once it
receives the Query_log_event it executes all pending IRU events
and then executes the Query_log_event itself. Hence the variable is
not reset to 0, causing this bug.
Fix: As described above, the root cause was setting
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
when an Int_var_log_event was executed by the SQL thread. Hence
the problematic line is removed from the code.
-row_stmt_start_timestamp
-last_event_start_time
-long_find_row_note
-trans_retries
Added slave_executed_entries_lock to protect rli->executed_entries
Added primitives for thread safe 64 bit increment
Update rli->executed_entries when event has executed, not when event has been sent to sql execution thread
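For illustration only (not the server's my_atomic code): slave_executed_entries_lock
plays the role of a lock guarding a 64-bit counter on platforms without native
64-bit atomics, roughly:

  #include <cstdint>
  #include <mutex>

  struct Executed_counter
  {
    std::mutex lock;       // stands in for slave_executed_entries_lock
    int64_t value= 0;

    void increment()
    {
      std::lock_guard<std::mutex> guard(lock);
      ++value;             // safe even where a plain 64-bit increment is not atomic
    }
  };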
sql/log_event.cc:
row_stmt_start and long_find_row_note is now in rpl_group_info
sql/mysqld.cc:
Added slave_executed_entries_lock to protect rli->executed_entries
sql/mysqld.h:
Added slave_executed_entries_lock to protect rli->executed_entries
Added primitives for thread safe 64 bit increment
sql/rpl_parallel.cc:
Update rli->executed_entries when event has executed, not when event has been sent to sql execution thread
sql/rpl_rli.cc:
Moved row_stmt_start_timestamp, last_event_start_time and long_find_row_note from Relay_log_info to rpl_group_info
sql/rpl_rli.h:
Moved trans_retries, row_stmt_start_timestamp, last_event_start_time and long_find_row_note from Relay_log_info to rpl_group_info
sql/slave.cc:
Use rgi for trans_retries and last_event_start_time
Update rli->executed_entries when event has executed, not when event has been sent to sql execution thread
Reset trans_retries when object is created
- Made slave temporary tables multi-thread-slave safe by adding a mutex around save_temporary_tables usage.
- rli->save_temporary_tables is the active list of all used temporary tables
- This is copied to THD->temporary_tables when temporary tables are opened and updated when temporary tables are closed
- Added THD->lock_temporary_tables() and THD->unlock_temporary_tables() to simplify this.
- Relay_log_info->sql_thd renamed to Relay_log_info->sql_driver_thd to avoid wrong usage for merged code.
- Added is_part_of_group() to mark functions that are part of the next function. This replaces setting IN_STMT when events are executed.
- Added is_begin(), is_commit() and is_rollback() functions to Query_log_event to simplify code.
- If slave_skip_counter is set run things in single threaded mode. This simplifies code for skipping events.
- Updating state of relay log (IN_STMT and IN_TRANSACTION) is moved to one single function: update_state_of_relay_log()
We can't use OPTION_BEGIN to check for the state anymore as the sql_driver and sql execution threads may be different.
Clear IN_STMT and IN_TRANSACTION in init_relay_log_pos() and Relay_log_info::cleanup_context() to ensure the flags don't survive slave restarts
is_in_group() is now independent of state of executed transaction.
- Reset thd->transaction.all.modified_non_trans_table if we did set it for single table row events.
This was mainly for keeping the flag as documented.
- Changed slave_open_temp_tables to uint32 to be able to use atomic operators on it.
- Relay_log_info::sleep_lock -> rpl_group_info::sleep_lock
- Relay_log_info::sleep_cond -> rpl_group_info::sleep_cond
- Changed some functions to take rpl_group_info instead of Relay_log_info to make them multi-slave safe and to simplify usage
- do_shall_skip()
- continue_group()
- sql_slave_killed()
- next_event()
- Simplified arguments to io_slave_killed(), check_io_slave_killed() and sql_slave_killed(); no reason to supply THD as this is part of the given structure.
- set_thd_in_use_temporary_tables() removed as in_use is set on usage
- Added information to thd_proc_info() which thread is waiting for slave mutex to exit.
- In open_table() reuse code from find_temporary_table()
Other things:
- More DBUG statements
- Fixed rpl_incident.test so that it can be run with --debug
- More comments
- Disabled not used function rpl_connect_master()
mysql-test/suite/perfschema/r/all_instances.result:
Moved sleep_lock and sleep_cond to rpl_group_info
mysql-test/suite/rpl/r/rpl_incident.result:
Updated result
mysql-test/suite/rpl/t/rpl_incident-master.opt:
Not needed anymore
mysql-test/suite/rpl/t/rpl_incident.test:
Fixed that test can be run with --debug
sql/handler.cc:
More DBUG_PRINT
sql/log.cc:
More comments
sql/log_event.cc:
Added DBUG statements
do_shall_skip(), continue_group() now takes rpl_group_info param
Use is_begin(), is_commit() and is_rollback() functions instead of inspecting query string
We don't have to set the slave's temporary tables' 'in_use' as this is now done when tables are opened.
Removed IN_STMT flag setting. This is now done in update_state_of_relay_log()
Use IN_TRANSACTION flag to test state of relay log.
In rows_event_stmt_cleanup() reset thd->transaction.all.modified_non_trans_table if we had set this before.
sql/log_event.h:
do_shall_skip(), continue_group() now takes rpl_group_info param
Added is_part_of_group() to mark events that are part of the next event. This replaces setting IN_STMT when events are executed.
Added is_begin(), is_commit() and is_rollback() functions to Query_log_event to simplify code.
sql/log_event_old.cc:
Removed IN_STMT flag setting. This is now done in update_state_of_relay_log()
do_shall_skip(), continue_group() now takes rpl_group_info param
sql/log_event_old.h:
Added is_part_of_group() to mark events that are part of the next event.
do_shall_skip(), continue_group() now takes rpl_group_info param
sql/mysqld.cc:
Changed slave_open_temp_tables to uint32 to be able to use atomic operators on it.
Relay_log_info::sleep_lock -> Rpl_group_info::sleep_lock
Relay_log_info::sleep_cond -> Rpl_group_info::sleep_cond
sql/mysqld.h:
Updated types and names
sql/rpl_gtid.cc:
More DBUG
sql/rpl_parallel.cc:
Updated TODO section
Set thd for the event that is executing
Use new is_begin(), is_commit() and is_rollback() functions.
More comments
sql/rpl_rli.cc:
sql_thd -> sql_driver_thd
Relay_log_info::sleep_lock -> rpl_group_info::sleep_lock
Relay_log_info::sleep_cond -> rpl_group_info::sleep_cond
Clear IN_STMT and IN_TRANSACTION in init_relay_log_pos() and Relay_log_info::cleanup_context() to ensure the flags don't survive slave restarts.
Reset table->in_use for temporary tables as the table may have been used by another THD.
Use IN_TRANSACTION instead of OPTION_BEGIN to check state of relay log.
Removed IN_STMT flag setting. This is now done in update_state_of_relay_log()
sql/rpl_rli.h:
Changed relay log state flags to bit masks instead of bit positions (most other code we have uses bit masks)
Added IN_TRANSACTION to mark if we are in a BEGIN ... COMMIT section.
save_temporary_tables is now thread safe
Relay_log_info::sleep_lock -> rpl_group_info::sleep_lock
Relay_log_info::sleep_cond -> rpl_group_info::sleep_cond
Relay_log_info->sql_thd renamed to Relay_log_info->sql_driver_thd to avoid wrong usage for merged code
is_in_group() is now independent of state of executed transaction.
sql/slave.cc:
Simplified arguments to io_slave_killed(), sql_slave_killed() and check_io_slave_killed(); no reason to supply THD as this is part of the given structure.
set_thd_in_use_temporary_tables() removed as in_use is set on usage in sql_base.cc
sql_thd -> sql_driver_thd
More DBUG
Added update_state_of_relay_log() which will calculate the IN_STMT and IN_TRANSACTION state of the relay log after the current element is executed.
If slave_skip_counter is set, run things in single-threaded mode.
Simplified arguments to io_slave_killed(), check_io_slave_killed() and sql_slave_killed(); no reason to supply THD as this is part of the given structure.
Added information to thd_proc_info() which thread is waiting for slave mutex to exit.
Disabled not used function rpl_connect_master()
Updated argument to next_event()
sql/sql_base.cc:
Added mutex around usage of slave's temporary tables. The active list is always kept up to date in sql->rgi_slave->save_temporary_tables.
Clear thd->temporary_tables after query (safety)
More DBUG
When using temporary table, set table->in_use to current thd as the THD may be different for slave threads.
Some code is ifdef:ed with REMOVE_AFTER_MERGE_WITH_10 as the given code in 10.0 is not yet in this tree.
In open_table() reuse code from find_temporary_table()
sql/sql_binlog.cc:
rli->sql_thd -> rli->sql_driver_thd
Remove duplicate setting of rgi->rli
sql/sql_class.cc:
Added helper functions rgi_lock_temporary_tables() and rgi_unlock_temporary_tables()
Would have been nicer to have these inline, but there was no easy way to do that
sql/sql_class.h:
Added functions to protect slaves temporary tables
sql/sql_parse.cc:
Added DBUG_PRINT
sql/transaction.cc:
Added comment
Currently several places use description_event->common_header_len instead of
LOG_EVENT_MINIMAL_HEADER_LEN when parsing events with "frozen" headers (such
as Start_event_v3 and its subclasses such as Format_description_log_event, as
well as Rotate_event). This causes events with extra headers (which would otherwise
be valid and those headers ignored) to be corrupted due to over-reading or skipping
into the data portion of the log events.
The patch of Jeremy Cole (see the MDEV) is rewritten in some details:
- The virtual function returns the length, to avoid IFs (and only one call of the virtual function is made)
- The printing function avoids printing strings
The dump thread may encounter an error when reading events from the active binlog
file. However, the errors may be temporary, so the dump thread will try to read
the event again. But the dump thread seeked to a wrong position, which caused some
events to be sent twice.
To fix the bug, prev_pos is defined outside the while loop and is set to the correct
position after every event that is read correctly.
This patch also makes binlog_can_be_corrupted more accurate: only binlogs that were
not closed normally are marked binlog_can_be_corrupted.
Finally, two warnings are added for when the dump thread encounters the temporary
errors.
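A compilable sketch of the corrected loop shape (generic names; the real loop is
the dump thread's event-sending loop in sql/sql_repl.cc):

  #include <cstdint>

  struct Event_reader
  {
    // 0 = event read OK, <0 = temporary error, >0 = permanent error.
    virtual int read_event_at(uint64_t pos) = 0;
    virtual uint64_t tell() const = 0;
    virtual ~Event_reader() {}
  };

  // prev_pos lives outside the loop and only advances after a successful read,
  // so a retry after a temporary error re-reads the same event instead of
  // seeking wrongly and sending some events twice.
  void dump_events(Event_reader &r, uint64_t start_pos, volatile bool &stop)
  {
    uint64_t prev_pos= start_pos;
    while (!stop)
    {
      int err= r.read_event_at(prev_pos);
      if (err < 0)
        continue;                 // temporary error: retry from prev_pos
      if (err > 0)
        break;                    // permanent error: give up
      prev_pos= r.tell();         // advance only after the event was read OK
      /* ... send the event to the slave here ... */
    }
  }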
mysql-test/suite/rpl/r/last_insert_id.result:
Test case for last_insert_id
mysql-test/suite/rpl/t/last_insert_id.cnf:
Test case for last_insert_id
mysql-test/suite/rpl/t/last_insert_id.test:
Test case for last_insert_id
sql/log_event.cc:
Added DBUG_PRINT
Set thd->first_successful_insert_id_in_prev_stmt_for_binlog when setting thd->first_successful_insert_id_in_prev_stmt.
This is required to get last_insert_id() replicated.
This is analog to how read_first_successful_insert_id_in_prev_stmt() works.
sql/rpl_utility.cc:
Added DBUG_PRINT
The ignored events are not written to the relay log, but instead a fake
Rotate event is generated to handle update of position.
Extend this for Gtid so we similarly generate a fake Gtid_list event
to update the GTID position.
Also fix an unrelated test issue that got triggered by the added test cases.
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
The problem in the first bug report was that although a deadlock involving
metadata locks was reported using the same error code and message as an InnoDB
deadlock, it didn't roll back the transaction like the latter. This caused
confusion to users, as in some cases after ER_LOCK_DEADLOCK the transaction
could have been restarted immediately and in some cases a rollback was
required.
The problem in the second bug report was that although InnoDB deadlock
caused transaction rollback in all storage engines it didn't cause release
of metadata locks. So concurrent DDL on the tables used in transaction was
blocked until implicit or explicit COMMIT or ROLLBACK was issued in the
connection which got InnoDB deadlock.
The former issue stems from the fact that when support for detection
and reporting of metadata lock deadlocks was added, we erroneously assumed
that InnoDB doesn't roll back the transaction on deadlock but only the last statement
(while this is actually what happens on InnoDB lock timeout) and so didn't
implement rollback of transactions on MDL deadlocks.
The latter issue was caused by the fact that rollback of transaction due
to deadlock is carried out by setting THD::transaction_rollback_request
flag at the point where deadlock is detected and performing rollback
inside of trans_rollback_stmt() call when this flag is set. And
trans_rollback_stmt() is not aware of MDL locks, so no MDL locks are
released.
This patch solves these two problems in the following way:
- In case an MDL deadlock is detected, transaction rollback is requested
by setting the THD::transaction_rollback_request flag.
- Code performing rollback of the transaction when THD::transaction_rollback_request
is set is moved out of trans_rollback_stmt(). Now we handle the rollback request
at the same level as we call trans_rollback_stmt() and release statement/
transaction MDL locks.
includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share, now LOCK_ha_data is strictly for engines
* rea_create_table() always creates .par file (even in "frm-only" mode)
* fix a 5.6 bug, temp file leak on dummy ALTER TABLE
Fix a bunch of issues found with locking, ordering, and non-thread-safe stuff
in Relay_log_info.
Now able to do a simple benchmark, showing 4.5 times speedup for applying a
binlog with 10000 REPLACE statements.
- Make query plan be re-saved after the first join execution
(saving it after JOIN::cleanup is too late because EXPLAIN output
is currently produced before that)
- Handle QPF allocation/deallocation for edge cases, like unsuccessful
BINLOG command.
- Work around the problem with UNION's direct subselects not being visible.
- Update test results ("Using temporary; Using filesort" are now always printed
last in the Extra column)
- This cset gets rid of memory leaks/crashes. Some result mismatches still remain.
Fixed OPTIMIZE with innodb
include/my_sys.h:
Removed ATTRIBUTE_FORMAT() as it gave warnings for %'s
sql/log_event.cc:
Optimization:
use my_b_write() and my_b_write_byte() instead of my_b_printf()
use strmake() instead of my_snprintf()
sql/sql_admin.cc:
Fixed bug in admin_recreate_table()
Fixed OPTIMIZE with innodb
sql/sql_table.cc:
Indentation fixes
strings/my_vsnprintf.c:
Changed fprintf() to fputs()
In record_gtid(), too many rows were deleted from the slave position
hash - we need to always keep the most recent committed row,
so we have a valid slave position at all times.
- temporary tables now work
- mysql-system_tables updated to not use temporary tables
- PASSWORD() function fixed
- Support for STATS_AUTO_RECALC, STATS_PERSISTENT and STATS_SAMPLE_PAGES table options
See also MySQL Bug #39750 and similar ones.
Fix my_delete() on Windows, to safely remove files on Windows, including files that are opened by other threads in the same process, antiviruses and backup applications. If the file to be deleted is also opened by another thread, the file is renamed to a unique name prior to deletion - this makes it possible to create a file with the same name right after deletion.
With this patch my_delete_allow_opened() becomes obsolete and is replaced with my_delete().
This patch is a rework of the patch http://lists.mysql.com/commits/59327 for MySQL bug#39750.
When @@GLOBAL.gtid_strict_mode=1, certain operations that would
otherwise result in out-of-order binlog files between servers
now result in an error.
GTID sequence numbers are now allocated independently per domain;
this results in fewer (or no) holes in GTID sequences, increasing the
likelihood that diverging binlogs will be caught by the slave when
GTID strict mode is enabled.
bzr merge lp:maria/5.5 -rtag:mariadb-5.5.31
Text conflict in cmake/cpack_rpm.cmake
Text conflict in debian/dist/Debian/control
Text conflict in debian/dist/Ubuntu/control
Text conflict in sql/CMakeLists.txt
Conflict adding file sql/db.opt. Moved existing file to sql/db.opt.moved.
Conflict adding file sql/db.opt.moved. Moved existing file to sql/db.opt.moved.moved.
Text conflict in sql/mysqld.cc
Text conflict in support-files/mysql.spec.sh
8 conflicts encountered.
The problem was the Gtid_list event, which is logged to the binlog in
10.0 and is not understood by the 5.5 server.
This event is supposed to be replaced with a dummy event for 5.5
servers. But the very first event logged in the very first binlog
has an empty list of GTIDs, which makes the event too short to be
replaceable with an empty event.
The fix is to pad the empty Gtid_list event to be big enough to
be replaceable by a dummy event.
Change of user interface to be more logical and more in line with expectations,
working similarly to old-style replication.
User can now explicitly choose in CHANGE MASTER whether binlog position is
taken into account (master_gtid_pos=current_pos) or not (master_gtid_pos=
slave_pos) when slave connects to master.
@@gtid_pos is replaced by three separate variables @@gtid_slave_pos (can
be set by user, replicated GTIDs only), @@gtid_binlog_pos (read only), and
@@gtid_current_pos (a combination of the two, most recent GTID within each
domain). mysql.rpl_slave_state is renamed to mysql.gtid_slave_pos to match.
This fixes MDEV-4474.
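The "most recent GTID within each domain" combination can be sketched like this
(standalone illustration; not the actual rpl_gtid.cc code):

  #include <cstdint>
  #include <map>

  struct Gtid { uint32_t domain_id; uint32_t server_id; uint64_t seq_no; };
  typedef std::map<uint32_t, Gtid> Gtid_pos;     // one GTID per replication domain

  // @@gtid_current_pos: for every domain take whichever of the replicated
  // (slave) position and the binlog position has the higher sequence number.
  Gtid_pos combine(const Gtid_pos &slave_pos, const Gtid_pos &binlog_pos)
  {
    Gtid_pos result= slave_pos;
    for (const auto &entry : binlog_pos)
    {
      auto it= result.find(entry.first);
      if (it == result.end() || entry.second.seq_no > it->second.seq_no)
        result[entry.first]= entry.second;
    }
    return result;
  }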
Implement START SLAVE UNTIL master_gtid_pos = "<GTID position>".
Add test cases, including a test showing how to use this to promote
a new master among a set of slaves.
MDEV-4489 "Replication of big5, cp932, gbk, sjis strings makes wrong values on slave"
has been fixed.
Problem:
String constants of some Asian charsets (big5,cp932,gbk,sjis)
can have backslash '\' (0x5C) in the second byte of multi-byte characters.
Replicating of such constants using the standard '\'-escaping is dangerous.
Therefore, constants of these charsets are replicated using hex notation:
INSERT INTO t1 (a) VALUES (0x815C);
However, 0xHHHH constants do not work well in some cases,
because they can behave as strings and as numbers, depending on context
(for example, depending on the data type of the column in an INSERT statement).
This SQL script was not replicated correctly with statement-based replication:
SET NAMES gbk;
PREPARE STMT FROM 'INSERT INTO t1 (a) VALUES (?)';
SET @a = '1';
EXECUTE STMT USING @a;
The INSERT statement was replicated as:
INSERT INTO t1 (a) VALUES (0x31);
'1' was correctly converted to the number 1 on master.
But the 0x31 constant was treated as number 49 on slave.
Fix:
1. Binary log now uses X'HHHH' instead of 0xHHHH constants.
2. The X'HHHH' constants now work always as strings, in all contexts.
This is the SQL standard compliant behaviour.
After the fix, the above statement is replicated as:
INSERT INTO t1 (a) VALUES (X'31');
X'31' is treated as string '1' on slave, and is correctly converted to 1.
modified:
@ mysql-test/r/ctype_cp932_binlog_stm.result
@ mysql-test/r/select.result
@ mysql-test/r/select_jcl6.result
@ mysql-test/r/select_pkeycache.result
@ mysql-test/r/user_var-binlog.result
@ mysql-test/r/varbinary.result
@ mysql-test/suite/binlog/r/binlog_stm_ctype_ucs.result
@ mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
@ mysql-test/suite/rpl/r/rpl_charset_sjis.result
@ mysql-test/suite/rpl/r/rpl_mdev382.result
@ mysql-test/suite/rpl/t/rpl_charset_sjis.test
@ mysql-test/t/ctype_cp932_binlog_stm.test
@ mysql-test/t/select.test
@ mysql-test/t/varbinary.test
Adding and updating tests
@ sql/item.cc
@ sql/item.h
@ sql/sql_yacc.yy
@ sql/sql_lex.cc
Splitting the implementations of X'HH' and 0xHH constants into two
separate classes. Fixing the parser to distinguish the two syntaxes.
@ sql/log_event.cc
Using X'HH' instead of 0xHH for binary logging for string constants
of the "dangerous" charsets.
@ sql/sql_string.h
Adding a helped method String::append_hex().
- Calls to cleanup_load_tmpdir() could delete temporary files for another master connection
- Concurrent LOAD DATA commands from two master connections could use the same file name
Other bug fixes:
- Enlarge buffer for connection names with 'special characters' one can't store in filenames
Optimization:
- Don't do 'lower case' of connection names. We can use cmp_connection_name, where we already have the connection name in lower case.
mysql-test/suite/multi_source/load_data.result:
Test case for MDEV-4352
mysql-test/suite/multi_source/load_data.test:
Test case for MDEV-4352
sql/log_event.cc:
Fixed: MDEV-4352
- Calls to cleanup_load_tmpdir() could delete temporary files for another master connection
- Concurrent LOAD DATA commands from two master connections could use the same file name
The fix was to add the connection name (if one exists) to all slave temporary files used by LOAD DATA
sql/rpl_mi.cc:
Enlarge buffer for connection names with 'special characters' one can't store in filenames
Use mi->cmp_connection_name for connection file names.
sql/rpl_rli.cc:
Use mi->cmp_connection_name for connection file names.
sql/slave.cc:
Removed not needed empty line
sql/sql_const.h:
Added MAX_FILENAME_MBWIDTH to be able to calculate buffer length for connection_names stored in file names
sql/sql_repl.cc:
Use mi->cmp_connection_name for connection file names.
When the slave connects, the master skips binlog event groups
until it reaches the position requested by the slave. To
identify event groups, it needs to detect COMMIT events. But
this detection did not correctly handle binlog checksums, so
could incorrectly skip extra groups due to not detecting the
end of an event group.
Server shutdown timeout of 10 seconds in test cases is too little for heavily
loaded test servers.
Fix innodb_bug12902967 to not fail with wrong error log output if we have
warnings about too few AIO handles for InnoDB.
Fix typo which could lead to unnecessarily replacing GTID event with dummy
event.
Users can set different replication filter rules for each replication connection, in my.cnf or on the command line.
But rules set online are not recorded in master.info, which means that if users restart MySQL, these rules are lost.
So if users don't want their replication filter rules to be lost, they should write the rules in my.cnf.
Users can set rules in 2 ways:
1. Online SET command, "SET connection_name.replication_filter_settings = rules;".
2. In my.cnf, "connection_name.replication_filter_settings = rules".
If there is no connection_name in my.cnf, the rule applies to ALL replication connections.
If there is no connection_name in the SET statement, the rule applies to the default_connection_name.
Merge of 10.0-mdev26 feature tree into 10.0-base.
Global transaction ID is prepended to each event group in the binlog.
Slave connect can request to start from GTID position instead of specifying
file name/offset of master binlog. This facilitates easy switch to a new
master.
Slave GTID state is stored in a table mysql.rpl_slave_state, which can be
InnoDB to get crash-safe slave state.
GTID includes a replication domain ID, allowing to keep track of distinct
positions for each of multiple masters.
* replace pointer acrobatics with a struct
* make sorting explicit: MY_DONT_SORT -> MY_WANT_SORT
(if you want something to be done - say it. fixes all places where
my_dir() was used without thinking)
* typo s/number_off_files/number_of_files/
* directory_file_name() doesn't need to be extern
* remove #ifdef __BORLANDC__
* ignore '.' and '..' entries
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect. It can result in error
and the error is ignored. Other callers of my_b_fill don't
check for error: my_b_copy_to_file, maybe my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During a my_b_fill() call, when the cache read fails,
info->error= -1 is set. Hence a check for "info->error"
is added for the above two callers upon their return.
mysys/mf_iocache2.c:
Added a check for "cache->error" and simulation of cache read failure
mysys/my_read.c:
Simulation of read failure
sql/log_event.cc:
Added debug simulation
sql/sql_repl.cc:
Added a check for cache error
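The caller pattern added by the fix, shown with a self-contained analogue
(cache_fill() below mimics my_b_fill()'s convention of returning 0 both at
end-of-cache and on a failed read that sets error= -1; it is not the mysys
function itself):

  #include <cstdio>

  struct Cache { FILE *file; char buf[4096]; size_t length; int error; };

  // Analogue of my_b_fill(): returns bytes now in the buffer; 0 on EOF *and*
  // on read error, with a read error additionally setting cache->error= -1.
  size_t cache_fill(Cache *c)
  {
    size_t n= fread(c->buf, 1, sizeof(c->buf), c->file);
    c->error= (n == 0 && ferror(c->file)) ? -1 : 0;
    return c->length= n;
  }

  // Fixed caller: a return value of 0 alone is not proof of end-of-cache.
  int copy_all(Cache *c, FILE *out)
  {
    size_t n;
    while ((n= cache_fill(c)) != 0)
      if (fwrite(c->buf, 1, n, out) != n)
        return -1;
    return c->error ? -1 : 0;       // distinguish a real read error from EOF
  }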
When logging the first Query event referring to a user variable, the slave failed to log the user variable.
It appears that at execution of a User_var event the slave applier
considered the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one that the thread
holds at User_var execution and the one that the thread sees when applying the Query.
While the two are naturally different in the regular execution branch (as the two computational
events are separated as individual events), in the deferred applying case the User_var execution
effectively belongs to its Query processing.
Fixed by storing the query id from User_var parsing time (where the decision to defer is taken)
and temporarily substituting it for the actual query id at User_var execution time
(along with its query).
Such manipulation mimics the behaviour of the regular applying branch.
sql/log_event.cc:
Store the User_var parsing-time query id in a new member of the event,
to temporarily substitute it
for the actual query id at User_var execution time.
sql/log_event.h:
Storage for keeping the query id in the User_var instance is added.
Adjust full test suite to work with GTID.
Huge patch, mainly due to having to update the .result files for all SHOW BINLOG
EVENTS and mysqlbinlog outputs, where the new GTID events pop up.
All .result file updates were painstakingly checked to be correct and valid.
The output of mysqlbinlog (with "-v --base64-output=DECODE-ROWS" flags) cannot always be read or parsed correctly
when string columns contain single-quote or backslash characters.
The fix for this bug is to escape single-quote and backslash characters on output, so that the result is both more
readable and more easily parseable.
Note that this is just for the comments, so it doesn't affect replication.
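A hedged illustration (the table name and the exact comment layout are illustrative only):
INSERT INTO t1 VALUES ('it''s a \\ backslash');
-- With this fix, the "-v" row comments printed by mysqlbinlog escape the quote and
-- the backslash in the value, e.g. something like @1='it\'s a \\ backslash',
-- instead of emitting them unescaped.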
sql/log_event.cc:
Escape \ and ' properly for mysqlbinlog user comments.
Improvements to record_gtid():
- Check for correct table definition of mysql.rpl_slave_state
- Use autocommit, to save one call to ha_commit_trans()
- Slightly more efficient way to set table->write_set
- Use ha_index_read_map() to locate rows to support any storage engine.
Fix that CHANGE MASTER ... MASTER_GTID_POS="" works to start from the very
beginning of the binary log (with test case).
Fix that not finding the requested GTID position in master binlog results in
fatal error, not endless connect retry.
Remove the two-component form of GTID with implicit domain_id=0, as it
is likely to cause more confusion than help.
Give a better error for CHANGE MASTER ... MASTER_GTID_POS='gtid,gtid,...'
when two of the specified GTIDs have conflicting domain_id.
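A hedged sketch of the behaviours described above, using the GTID syntax from these entries (the GTID values are hypothetical; the format is domain_id-server_id-seq_no):
CHANGE MASTER TO MASTER_GTID_POS="";                    -- start from the very beginning of the binlog
CHANGE MASTER TO MASTER_GTID_POS="1-1-100,2-1-500";     -- one starting GTID per replication domain
-- CHANGE MASTER TO MASTER_GTID_POS="1-1-100,1-2-500";  -- would now give a clear error,
--                                                         since both GTIDs are in domain_id 1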
- Fix skipping initial MyISAM DML when connecting using GTID.
- Fix RESET MASTER not clearing in-memory binlog state.
- Fix not reading standalone flag in Gtid_log_event::peek().
- Fix skipping DDL that the slave has already seen when using GTID position.
- Fix that binlog_gtid_pos() (and hence slave connect) does not work
correctly in the very first binlog file (due to not logging empty
Gtid_list_log_event).
- Remove one instance of the stupid domain_id-0-is-implicit.
- Rename the confusing Gtid_Pos_Auto in SHOW SLAVE STATUS to Using_Gtid.
- Fix memory leak.
- Fix that slave GTID state was updated from the wrong place in the code,
causing random crashing and other misery.
- Fix updates to mysql.rpl_slave_state to not go to binlog (this would cause
duplicate key errors on the slave and is generally the wrong thing to do).
In the method mysql_binlog_send, right after detecting an EOF in the
read event loop, and before deciding if we should change to a new
binlog file, there is an execution window where new events can be
written to the binlog and a rotation can happen. When reaching
the test, the function will then change to a new binlog file,
ignoring all the events written in this window. This results
in events not being replicated.
Only when the binlog is detected as deactivated in the event loop
of the dump thread can we really know that no more events
remain. For this reason, this test is now made under the log lock
at the beginning of the event loop when reading the events.
Slave now loads the GTID state from the master when connecting with
old-style filename/offset position.
This allows the user to use MASTER_GTID_POS=AUTO on next CHANGE MASTER
without any other action needed.
Implement binlog_gtid_pos() function. This will be used so that
the slave can obtain the gtid position automatically from first
connect with old-style position - then MASTER_GTID_POS=AUTO will
work the next time. Can also be used by mysqldump --master-data
to give the current gtid position directly.
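A hedged usage sketch of binlog_gtid_pos() (the binlog file name and offset are hypothetical):
SELECT binlog_gtid_pos('master-bin.000003', 600);
-- returns the GTID position corresponding to the old-style file/offset position,
-- which can then be used in CHANGE MASTER (or recorded by mysqldump --master-data)
-- so that MASTER_GTID_POS=AUTO works on the next connect.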
In a previous fix, user variables with a zero-length name were incorrectly
considered event corruption, despite being allowed by the server.
Fix this wrong assumption by again allowing user variables with zero-length
names in the binary log.
When starting the slave, check the binlog state in addition to mysql.rpl_slave_state.
This allows switching a previous master to be a slave directly
with MASTER_GTID_POS=AUTO.
RATHER THAN A TABLE
Problem: In RBR, if a table is converted into a view at the slave
(i.e., "drop table 'object1'" & "create view 'object1'"), then any
DML operations on the table at the master cause a crash at the slave.
Analysis: The slave prepares the list of tables to be opened for DML when it
receives Table_map_log_event(s), and the same list is sent to the
open_table function. The open_table logic assumes that if the list
contains a view object, it also contains the "select_lex" object of
that view. In the above special case, the table object does not
contain 'select_lex' as it is a base table at the master. Since it
is a view at the slave, the open_table logic goes to the 'mysql_make_view()'
function, which assumes that 'select_lex' exists for the object.
Fix: While preparing the 'tables to be opened' list, we should make
sure that the required table type is 'base table'. If the object is not
a base table when it is opened, mysql_make_view will throw an
error similar to 'object is not a base table'.
sql/log_event.cc:
Require that all Table_map_log_event objects are
base tables at the slave as well.
KILL now breaks locks inside InnoDB
Fixed possible deadlock when running INNODB STATUS
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
Added reset_killed() to ensure we don't reset killed state while awake() is getting called
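A hedged illustration of the user-visible effect (the connection id 42 is hypothetical):
-- Connection 42 is blocked waiting for an InnoDB row lock; the kill signal is now
-- passed down to the storage engine, so the wait is broken instead of lingering.
KILL QUERY 42;   -- aborts only the running statement
KILL 42;         -- aborts the whole connection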
include/mysql/plugin.h:
Added thd_mark_as_hard_kill()
include/mysql/plugin_audit.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_auth.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_ftparser.h.pp:
Added thd_mark_as_hard_kill()
sql/handler.cc:
Added ha_kill_query() to send kill signal to all storage engines
sql/handler.h:
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
sql/log_event.cc:
Use reset_killed()
sql/mdl.cc:
use thd->killed instead of thd_killed() to abort on soft kill
sql/sp_rcontext.cc:
Use reset_killed()
sql/sql_class.cc:
Fixed possible deadlock in INNODB STATUS by not getting thd->LOCK_thd_data if it's locked.
Use reset_killed()
Tell storage engines that KILL has been sent
sql/sql_class.h:
Added reset_killed() to ensure we don't reset killed state while awake() is getting called.
Added mark_as_hard_kill()
sql/sql_insert.cc:
Use reset_killed()
sql/sql_parse.cc:
Simplify detection of killed queries.
Use reset_killed()
sql/sql_select.cc:
Use reset_killed()
sql/sql_union.cc:
Use reset_killed()
storage/innobase/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
storage/xtradb/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
FORMAT_DESCRIPTION_LOG_EVENT::CALC_SERVER_VERSION_SPLIT
Problem: When reading a Format_description_log_event, the code assumes the MySQL
version is always valid, and a DBUG_ASSERT is used to check the version number.
However, a user may give a wrong binlog offset, or even a faked binlog event
which includes an invalid MySQL version. This will cause a server crash.
Fix: The assertions are removed and an error is reported if the MySQL
version in the Format_description_log_event is invalid.
Now the slave can connect to the master, sending the start position as a slave state
rather than an old-style binlog name/position.
This makes it possible to switch to a new master by changing just the connection
information; the replication slave GTID state ensures that the slave starts
at the correct point on the new master.
VARIABLES
Analysis:
-------------
After executing the query, new value of the user defined
variables are set in the function "select_dumpvar::send_data".
"select_dumpvar::send_data" first calls function
"Item_func_set_user_var::save_item_result()". This function
checks the nullness of the Item_field passed as parameter
to it and saves it. The nullness of item is stored with
arg[0]'s null_value flag. Then "select_dumpvar::send_data" calls
"Item_func_set_user_var::update()" which notices null
result that was saved and calls "Item_func_set_user_var::
update_hash". But here null_value is not set and args[0]
is different from that given to function "Item_func_set_user_var::
set_item_result()". This causes "Item_func_set_user_var::
update_hash" function to believe that its getting non-null value.
"user_var_entry::length" set to 0 and hence "user_var_entry::value"
is made to point to extra_area allocated in "user_var_entry".
And "Item_func_set_user_var::update_hash" tries to write
at memory beyond extra_area for result type DECIMAL. Because of
this invalid write issue is reported by Valgrind.
Before this bug was introduced, we avoided this problem by
creating "Item_func_set_user_var" object with the same
Item_field as arg[0] and as parameter to
Item_func_set_user_var::save_item_result(). But now
they refer to different args[0]. Because of this, the
null_value flag set on the parameter Item_field in
"Item_func_set_user_var::save_item_result()" is not
reflected in the "Item_func_set_user_var" object.
Fix:
------------
This issue is reported on version 5.5.24. The issue does not exist
in 5.5.23, 5.1, 5.6 and trunk.
The issue was introduced by
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
bug #12408412), which was pushed into 5.5 and later releases. That patch
was later reverted in 5.6 and trunk by
revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
bug #14664077). Backport that patch to 5.5 as well to fix this issue.
sql/item_func.cc:
Here the unsigned value is converted to a signed value.
sql/item_func.h:
last_insert_id() gives an auto-incremented value which can only be
positive, so defining it as an unsigned longlong sets the
unsigned_flag to 1.
MDEV-26: Global transaction id, partial commit
Change server_id to be a session variable.
A user with SUPER can set it to binlog with a different server_id.
Implement backward-compatible ::server_id mirror for plugins.
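A hedged sketch of what the session variable allows (the value 3 and the table are hypothetical):
SET SESSION server_id = 3;   -- requires SUPER
-- Statements executed afterwards are written to the binlog tagged with
-- server_id 3 instead of the server's own id.
INSERT INTO t1 VALUES (1);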
Generalized support for auto-updated and/or auto-initialized timestamp
and datetime columns. This patch is a reimplementation of MySQL's
"WL#5874: CURRENT_TIMESTAMP as DEFAULT for DATETIME columns". In order to
ease future merges, this implementation reuses a few function and variable
names from MySQL's patch; however, the implementation is quite different.
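A hedged example of the column defaults this enables (table and column names are hypothetical):
CREATE TABLE t1 (
  id INT PRIMARY KEY,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);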
TODO:
The only unresolved problem in this patch is the semantics of LOAD DATA for
TIMESTAMP and DATETIME columns in the cases when there are missing or NULL
columns. I couldn't fully comprehend the logic behind MySQL's behavior and
its relationship with their own documentation, so I left the results to be
more consistent with all other LOAD cases.
The problematic test cases can be seen by running the test file function_defaults,
and observing the test case differences. Those were left on purpose for discussion.
This bug had two problems:
P1) Reads out of bounds;
P2) Writes out of bounds.
PROBLEM P1
----------
User_var_log_event unmarshalling from binlog was not performing range
checks when using name_len and val_len variables to walk on event
buffer.
Added range checks to User_var_log_event unmarshalling to prevent
unmarshalling errors.
PROBLEM P2
----------
User_var_log_event value was allocated on the thread stack, which caused
stack frame errors when the User_var_log_event value was bigger than the
thread stack size.
The value is now allocated on the heap.
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with message
stating that "insufficient history for index".
This error message is propagated up to the SQL layer. But
the my_error() api is never called. The statement level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in the Protocol::end_statement()
fails.
case Diagnostics_area::DA_EMPTY:
default:
  DBUG_ASSERT(0);
  error= send_ok(thd->server_status, 0, 0, 0, NULL);
  break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
This allows us to avoid calculating variables (including those involving a mutex) that don't match the given
wildcard in SHOW STATUS LIKE '...'.
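For example (hedged; the wildcard is illustrative), a query such as the following now only computes the status variables whose names match the pattern:
SHOW STATUS LIKE 'Handler%';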
Removed all references to active_mi that could cause problems for multi-source replication.
Added START|STOP ALL SLAVES
Added SHOW ALL SLAVES STATUS
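A hedged sketch of the new statements named above:
START ALL SLAVES;
SHOW ALL SLAVES STATUS;
STOP ALL SLAVES;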
include/mysql/plugin.h:
Added SHOW_SIMPLE_FUNC
include/mysql/plugin_audit.h.pp:
Updated .pp file
include/mysql/plugin_auth.h.pp:
Updated .pp file
include/mysql/plugin_ftparser.h.pp:
Updated .pp file
mysql-test/suite/multi_source/info_logs.result:
New columns in SHOW ALL SLAVES STATUS
mysql-test/suite/multi_source/info_logs.test:
Test new syntax
mysql-test/suite/multi_source/simple.result:
New columns in SHOW ALL SLAVES STATUS
mysql-test/suite/multi_source/simple.test:
test new syntax
mysql-test/suite/multi_source/syntax.result:
Updated result
mysql-test/suite/multi_source/syntax.test:
Test new syntax
mysql-test/suite/rpl/r/rpl_skip_replication.result:
Updated result
plugin/semisync/semisync_master_plugin.cc:
SHOW_FUNC -> SHOW_SIMPLE_FUNC
sql/item_create.cc:
Simplify code
sql/lex.h:
Added SLAVES keyword
sql/log.cc:
Constant -> define
sql/log_event.cc:
Added comment
sql/mysqld.cc:
SHOW_FUNC -> SHOW_SIMPLE_FUNC
Made slave_retried_trans, slave_received_heartbeats and heartbeat_period multi-source safe
Clear variable denied_connections and slave_retried_transactions on startup
sql/mysqld.h:
Added slave_retried_transactions
sql/rpl_mi.cc:
create_signed_file_name -> create_logfile_name_with_suffix
Added start_all_slaves() and stop_all_slaves()
sql/rpl_mi.h:
Added prototypes
sql/rpl_rli.cc:
create_signed_file_name -> create_logfile_name_with_suffix
added executed_entries
sql/rpl_rli.h:
Added executed_entries
sql/share/errmsg-utf8.txt:
More and better error messages
sql/slave.cc:
Added more fields to SHOW ALL SLAVES STATUS
Added slave_retried_transactions
Changed constants -> defines
sql/sql_class.h:
Added comment
sql/sql_insert.cc:
active_mi.rli -> thd->rli_slave
sql/sql_lex.h:
Added SQLCOM_SLAVE_ALL_START & SQLCOM_SLAVE_ALL_STOP
sql/sql_load.cc:
active_mi.rli -> thd->rli_slave
sql/sql_parse.cc:
Added START|STOP ALL SLAVES
sql/sql_prepare.cc:
Added SQLCOM_SLAVE_ALL_START & SQLCOM_SLAVE_ALL_STOP
sql/sql_reload.cc:
Made REFRESH RELAY LOG multi-source safe
sql/sql_repl.cc:
create_signed_file_name -> create_logfile_name_with_suffix
Don't send my_ok() from start_slave() or stop_slave() so that we can call it for all master connections
sql/sql_show.cc:
Compare wild cards early for all variables
sql/sql_yacc.yy:
Added START|STOP ALL SLAVES
Added SHOW ALL SLAVES STATUS
sql/sys_vars.cc:
Made replicate_events_marked_for_skip, slave_net_timeout and rpl_filter multi-source safe.
sql/sys_vars.h:
Simplify Sys_var_rpl_filter
Changed names of multi-source log files so that original suffixes are kept.
include/my_sys.h:
Added fn_ext2(), which returns pointer to last '.' in file name
mysql-test/extra/rpl_tests/rpl_max_relay_size.test:
Updated test
mysql-test/suite/multi_source/info_logs-master.opt:
Test with strange file names
mysql-test/suite/multi_source/info_logs.result:
Updated results
mysql-test/suite/multi_source/info_logs.test:
Changed to test with complex names to be able to verify the filename generator code
mysql-test/suite/multi_source/relaylog_events.result:
Updated results
mysql-test/suite/multi_source/reset_slave.result:
Updated results
mysql-test/suite/multi_source/skip_counter.result:
Updated results
mysql-test/suite/multi_source/skip_counter.test:
Added testing of max_relay_log_size
mysql-test/suite/rpl/r/rpl_row_max_relay_size.result:
Updated results
mysql-test/suite/rpl/r/rpl_stm_max_relay_size.result:
Updated results
mysql-test/suite/sys_vars/r/max_relay_log_size_basic.result:
Updated results
mysql-test/suite/sys_vars/t/max_relay_log_size_basic.test:
Updated results
mysys/mf_fn_ext.c:
Added fn_ext2(), which returns pointer to last '.' in file name
sql/log.cc:
Removed some wrong casts
sql/log.h:
Updated comment to reflect new code
sql/log_event.cc:
Updated DBUG_PRINT
sql/mysqld.cc:
Added that max_relay_log_size copies its value from max_binlog_size
sql/mysqld.h:
Removed max_relay_log_size
sql/rpl_mi.cc:
Changed names of multi-source log files so that original suffixes are kept.
sql/rpl_mi.h:
Updated prototype
sql/rpl_rli.cc:
Updated comment to reflect new code
Made max_relay_log_size depend on the master connection.
sql/rpl_rli.h:
Made max_relay_log_size depend on the master connection.
sql/set_var.h:
Made option global so that one can check and change min & max values (sorry Sergei)
sql/sql_class.h:
Made max_relay_log_size depend on the master connection.
sql/sql_repl.cc:
Updated calls to create_signed_file_name()
sql/sys_vars.cc:
Made max_relay_log_size depend on the master connection.
Made old code more reusable
sql/sys_vars.h:
Changed Sys_var_multi_source_uint to ulong to be able to handle max_relay_log_size
Made old code more reusable
and small collateral changes
mysql-test/lib/My/Test.pm:
somehow with "print" we get truncated writes sometimes
mysql-test/suite/perfschema/r/digest_table_full.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/dml_handler.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/information_schema.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/nesting.result:
this differs, because we don't rewrite general log queries, and multi-statement
packets are logged as one entry. this result file is identical to what mysql-5.6.5
produces with the --log-raw option.
mysql-test/suite/perfschema/r/relaylog.result:
MariaDB modifies the binlog index file directly, while MySQL 5.6 has a feature "crash-safe binlog index" and modifies a special "crash-safe" shadow copy of the index file and then moves it over. That's why this test shows "NONE" index file writes in MySQL and "MANY" in MariaDB.
mysql-test/suite/perfschema/r/server_init.result:
MariaDB initializes the "manager" resources from the "manager" thread, and starts this thread only when --flush-time is not 0. MySQL 5.6 initializes "manager" resources unconditionally on server startup.
mysql-test/suite/perfschema/r/stage_mdl_global.result:
this differs, because MariaDB disables query cache when query_cache_size=0. MySQL does not
do that, and this causes useless mutex locks and waits.
mysql-test/suite/perfschema/r/statement_digest.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_consumers.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_long_query.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/rpl/r/rpl_mixed_drop_create_temp_table.result:
will be updated to match 5.6 when alfranio.correia@oracle.com-20110512172919-c1b5kmum4h52g0ni and anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y are merged
mysql-test/suite/rpl/r/rpl_non_direct_mixed_mixing_engines.result:
will be updated to match 5.6 when anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y is merged
QUOTING IN REPLICATION
Problem: Misquoting or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
create table t1 (a smallint primary key auto_increment);
insert into t1 values(32767);
insert into t1 values(NULL);
ERROR 1062 (23000): Duplicate entry '32767' for key 'PRIMARY'
Now one always gets the error HA_ERR_AUTOINC_ERANGE=167 "Out of range value for column", independent of
storage engine, SQL mode or number of inserted rows. This is a unique error that is easier to test for in replication.
Another bug fix is that we now get an error when trying to insert a too-big auto-generated value, even in non-strict mode.
Before, the maximum column value was inserted instead.
This patch also fixes some issues with inserting negative numbers into an auto-increment column.
Fixed so that ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are treated as the same error when comparing master and slave.
This ensures that replication of auto-increment overflow errors works from an old master to a new slave.
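A hedged sketch of the new behaviour for the example above (only the error name and number come from this entry; the exact client-side formatting may differ):
insert into t1 values(NULL);
-- now fails with HA_ERR_AUTOINC_ERANGE (167), "Out of range value for column 'a'",
-- regardless of storage engine, SQL mode or number of inserted rows,
-- instead of the duplicate-key error shown above.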
Added SQLSTATE errors for handler errors
Smaller bug fixes:
* Added warnings for duplicate key errors when using INSERT IGNORE
* Fixed bug when using --skip-log-bin followed by --log-bin, which did set log-bin to "0"
* Allow one to see how cmake is called by using --just-print --just-configure
BUILD/FINISH.sh:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
cmake/configure.pl:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
include/CMakeLists.txt:
Added handler_state.h
include/handler_state.h:
SQLSTATE for handler error messages.
Required for HA_ERR_AUTOINC_ERANGE, but solves also some other cases.
mysql-test/extra/binlog_tests/binlog.test:
Fixed old wrong behaviour
Added more tests
mysql-test/extra/binlog_tests/binlog_insert_delayed.test:
Reset binary log to only print what's necessary in show_binlog_events
mysql-test/extra/rpl_tests/rpl_auto_increment.test:
Update to new error codes
mysql-test/extra/rpl_tests/rpl_insert_delayed.test:
Ignore warnings as this depends on how the test is run
mysql-test/include/strict_autoinc.inc:
One now gets an error on overflow
mysql-test/r/auto_increment.result:
Update results after fixing error message
mysql-test/r/auto_increment_ranges_innodb.result:
Test new behaviour
mysql-test/r/auto_increment_ranges_myisam.result:
Test new behaviour
mysql-test/r/commit_1innodb.result:
Added warnings for duplicate key error
mysql-test/r/create.result:
Added warnings for duplicate key error
mysql-test/r/insert.result:
Added warnings for duplicate key error
mysql-test/r/insert_select.result:
Added warnings for duplicate key error
mysql-test/r/insert_update.result:
Added warnings for duplicate key error
mysql-test/r/mix2_myisam.result:
Added warnings for duplicate key error
mysql-test/r/myisam_mrr.result:
Added warnings for duplicate key error
mysql-test/r/null_key.result:
Added warnings for duplicate key error
mysql-test/r/replace.result:
Update to new error codes
mysql-test/r/strict_autoinc_1myisam.result:
Update to new error codes
mysql-test/r/strict_autoinc_2innodb.result:
Update to new error codes
mysql-test/r/strict_autoinc_3heap.result:
Update to new error codes
mysql-test/r/trigger.result:
Added warnings for duplicate key error
mysql-test/r/xtradb_mrr.result:
Added warnings for duplicate key error
mysql-test/suite/binlog/r/binlog_innodb_row.result:
Updated result
mysql-test/suite/binlog/r/binlog_row_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_statement_insert_delayed.result:
Updated result
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_unsafe.result:
Updated result
mysql-test/suite/innodb/r/innodb-autoinc.result:
Update to new error codes
mysql-test/suite/innodb/r/innodb-lock.result:
Updated results
mysql-test/suite/innodb/r/innodb.result:
Updated results
mysql-test/suite/innodb/r/innodb_bug56947.result:
Updated results
mysql-test/suite/innodb/r/innodb_mysql.result:
Updated results
mysql-test/suite/innodb/t/innodb-autoinc.test:
Update to new error codes
mysql-test/suite/maria/maria3.result:
Updated result
mysql-test/suite/maria/mrr.result:
Updated result
mysql-test/suite/optimizer_unfixed_bugs/r/bug43617.result:
Updated result
mysql-test/suite/rpl/r/rpl_auto_increment.result:
Update to new error codes
mysql-test/suite/rpl/r/rpl_insert_delayed,stmt.rdiff:
Updated results
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
Updated results
mysql-test/t/auto_increment.test:
Update to new error codes
mysql-test/t/auto_increment_ranges.inc:
Test new behaviour
mysql-test/t/auto_increment_ranges_innodb.test:
Test new behaviour
mysql-test/t/auto_increment_ranges_myisam.test:
Test new behaviour
mysql-test/t/replace.test:
Update to new error codes
mysys/my_getopt.c:
Fixed bug when using --skip-log-bin followed by --log-bin, which did set log-bin to "0"
sql/handler.cc:
Ignore negative values for signed auto-increment columns
Always give an error if we get an overflow for an auto-increment-column (instead of inserting the max value)
Ensure that the row number is correct for the out-of-range-value error message.
******
Fixed wrong printing of column name for "Out of range value" errors
Fixed that INSERT_ID is correctly replicated also for out-of-range autoincrement values
Fixed that print_keydup_error() can also be used to generate warnings
******
Return HA_ERR_AUTOINC_ERANGE (167) instead of ER_WARN_DATA_OUT_OF_RANGE for auto-increment overflow
sql/handler.h:
Allow INSERT IGNORE to continue also after out-of-range inserts.
Fixed that print_keydup_error() can also be used to generate warnings
sql/log_event.cc:
Added DBUG_PRINT
Fixed so that ER_AUTOINC_READ_FAILED, ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are treated as the same error when comparing master and slave.
This ensures that replication of auto-increment overflow errors works from an old master to a new slave.
sql/sql_insert.cc:
Add warnings for duplicate key errors when using INSERT IGNORE
sql/sql_state.c:
Added handler errors
sql/sql_table.cc:
Update call to print_keydup_error()
storage/innobase/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
storage/xtradb/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
bzr merge lp:maria/5.5
...
Text conflict in CMakeLists.txt
Text conflict in sql/mysqld.cc
Text conflict in sql/sql_class.h
Text conflict in sql/sql_truncate.cc
4 conflicts encountered.
Introduce a new storage engine API method commit_checkpoint_request().
This is used to replace the fsync() at the end of every storage engine
commit with a single fsync() when a binlog is rotated.
Binlog rotation is now done during group commit instead of being
delayed until unlog(), removing some server stall and avoiding an
expensive lock/unlock of LOCK_log inside unlog().
two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)