Two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert).
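A minimal C++ sketch of a records_in_range() implementation that tolerates
open range ends instead of asserting; the signature, names, and the
selectivity estimate here are all hypothetical simplifications, not the
actual InnoDB code:

  #include <cstdint>

  struct key_range { const unsigned char *key; unsigned length; };

  // Treat a NULL min_key/max_key as an unbounded range end instead of
  // hitting an assert.
  uint64_t records_in_range_sketch(uint64_t total_rows,
                                   const key_range *min_key,
                                   const key_range *max_key)
  {
    if (min_key == nullptr && max_key == nullptr)
      return total_rows;          // whole index qualifies; no assert
    // ... otherwise estimate from an index dive between the two keys ...
    return total_rows / 10;       // placeholder selectivity estimate
  }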
Keep track of how many pending XIDs (transactions that are prepared in
the storage engine and written into the binlog, but not yet durably
committed on disk in the engine) there are in each binlog.
When the count for one binlog drops to zero, write a new binlog
checkpoint event telling which is the oldest binlog with pending XIDs.
When doing XA recovery after a crash, check the last binlog checkpoint
event and scan all binlog files from that point onwards for XIDs that
must be committed if found in prepared state inside the engine.
Remove the code in binlog rotation that waits for all prepared XIDs to
be committed before writing a new binlog file (this is no longer necessary
when recovery can scan multiple binlog files).
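A schematic C++ sketch of the per-binlog pending-XID accounting described
above; the type and function names are illustrative, not actual server
symbols:

  #include <atomic>
  #include <string>

  struct binlog_xid_count {
    std::string binlog_name;            // binlog file the XIDs went into
    std::atomic<long> pending_xids{0};  // prepared in engine, in this
                                        // binlog, not yet durably committed
  };

  // Transaction prepared in the engine and written to this binlog.
  void mark_xid_pending(binlog_xid_count *b) { ++b->pending_xids; }

  // The engine made the commit durable. When the count of the oldest
  // binlog drops to zero, a binlog checkpoint event naming the oldest
  // binlog that still has pending XIDs can be written.
  bool mark_xid_done(binlog_xid_count *b)
  { return --b->pending_xids == 0; }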
Add a function to replace an arbitrary event with a dummy event.
Add code which uses this to fix the bug that enabling row_annotate
events on the master breaks slaves which do not request such events.
Make slaves set a variable @mariadb_slave_capability to inform the
master in a robust way about which events they can, and cannot, handle
(see the sketch below).
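An illustrative C++ sketch of the master-side decision; the enum value and
the helper are hypothetical, only the idea of comparing the announced
capability comes from the description above:

  // Slaves that never announced annotate support must not receive
  // annotate events; overwriting the event in-place with an equally
  // sized dummy event keeps binlog positions intact.
  enum slave_capability { CAP_NONE= 0, CAP_ANNOTATE= 1 /* illustrative */ };

  bool must_replace_with_dummy(int announced_capability,
                               bool is_annotate_event)
  {
    return is_annotate_event && announced_capability < CAP_ANNOTATE;
  }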
Add tests.
Problem
========
Replication breaks when the event length exceeds the size of the
master Dump thread's max_allowed_packet.
The failure occurs because the event length plus the max_event_header
length exceeds the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.
That can happen e.g. with row-based replication in an Update_rows
event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit for reading events is raised to the upper
limit, i.e. the Dump thread reads whatever gets logged in the binary
log.
2.) On the slave side, the max_allowed_packet for the slave's threads
(IO/SQL) is increased to 1GB.
This is done using the new server option slave_max_allowed_packet,
which lets the DBA regulate the max_allowed_packet of the slave
threads (IO/SQL) and facilitates the sending of large packets from the
master to the slave.
Large packets are thus received by the slave and applied successfully
(both steps are sketched after the per-file notes below).
sql/log_event.cc:
After the fix, the event length check uses the new option
slave_max_allowed_packet instead of max_allowed_packet.
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
The session max_allowed_packet of the slave's threads is increased to
a large value, i.e. without taking the global max_allowed_packet into
consideration.
sql/sql_repl.cc:
The dump thread's max_allowed_packet is set to the upper limit, which
makes it independent of the global setting; it now reads whatever gets
logged in the binary log.
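A hedged C++ sketch of the two steps; the constant matches the documented
1GB protocol maximum, but the functions are illustrative rather than the
real server code:

  static const unsigned long MAX_MAX_ALLOWED_PACKET=
      1024UL * 1024 * 1024;                       // 1GB protocol maximum

  // Step 1: the Dump thread reads events with the upper limit, so it
  // can forward whatever was logged in the binary log.
  unsigned long dump_thread_read_limit()
  { return MAX_MAX_ALLOWED_PACKET; }

  // Step 2: the slave IO/SQL threads take their session limit from
  // slave_max_allowed_packet instead of the global max_allowed_packet.
  void init_slave_thread_packet_limit(unsigned long *session_limit,
                                      unsigned long slave_max_allowed_packet)
  { *session_limit= slave_max_allowed_packet; }   // defaults to 1GB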
IN THE ERROR LOG
Problem:
Using mysqlbinlog with the --read-from-remote-server option as shown
below prints a message in the error log for each call. This happens in
version 5.5 and above:
mysqlbinlog -uroot -p --read-from-remote-server --host=localhost test
The message in the error log file is:
120312 10:27:57 [Note] Start binlog_dump to slave_server(0), pos(test, 4)
The problem is that this can fill up the error log if the command is
called very often.
Analysis:
The print call below is made from the mysql_binlog_send() function and
causes the "Start binlog_dump..." string to be printed in the error
log file:
sql_print_information("Start binlog_dump to master_thread_id(%lu) slave_server(%d)...")
Fix:
A condition has been added so that sql_print_information() is invoked
only when the log_warnings variable is set to a value greater than 1;
otherwise it is not called (as sketched below).
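A self-contained C++ sketch of the added guard; sql_print_information()
is stubbed here and the surrounding server context is simplified:

  #include <cstdarg>
  #include <cstdio>

  static unsigned long log_warnings= 1;            // server option (stub)

  static void sql_print_information(const char *fmt, ...)  // logger stub
  {
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
  }

  static void note_binlog_dump_start(unsigned long thread_id, int server_id)
  {
    if (log_warnings > 1)                          // the added condition
      sql_print_information("Start binlog_dump to master_thread_id(%lu) "
                            "slave_server(%d)", thread_id, server_id);
  }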
Problem
========
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
The failure occurs because the event length is more than the total of
max_allowed_packet plus the max_event_header length. Since the event
length exceeds this size, the master Dump thread is unable to send the
packet on to the slave.
That can happen e.g. with row-based replication in an Update_rows
event.
Fix
====
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option which is used to regulate the
max_allowed_packet of the slave threads (IO/SQL).
Large packets are thus received by the slave and applied successfully
(the option registration is sketched after the per-file notes below).
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
The session max_allowed_packet of the slave's threads is increased to
a large value, i.e. without taking the global max_allowed_packet into
consideration.
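A sketch of the option registration in the 5.5-style sys_vars table; the
macros follow sql/sys_vars.h conventions, but the help text and exact
parameters here are assumptions, and the snippet is not compilable
outside the server tree:

  static Sys_var_ulong Sys_slave_max_allowed_packet(
         "slave_max_allowed_packet",
         "The maximum packet length for the slave IO/SQL threads",
         GLOBAL_VAR(slave_max_allowed_packet), CMD_LINE(REQUIRED_ARG),
         VALID_RANGE(1024, MAX_MAX_ALLOWED_PACKET),
         DEFAULT(MAX_MAX_ALLOWED_PACKET),
         BLOCK_SIZE(1024));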
PROBLEM:
Threads end up in a deadlock due to locks acquired as described
below.
con1: Runs a query on a table.
It is important that this SELECT backs off while trying to open t1 and
enters wait_for_condition(). The SELECT is then blocked trying to lock
mysys_var->mutex, which is held by con3. The very significant fact
here is that mysys_var->current_mutex will still point to LOCK_open,
even though LOCK_open is no longer held by con1 at this point.
con2: Tries to drop the table used in con1, or queries some table.
It holds LOCK_open and is blocked trying to lock kernel_mutex, which
is held by con4.
con3: Tries to kill the query run by con1.
It holds THD::LOCK_thd_data belonging to con1 while trying to lock
mysys_var->current_mutex belonging to con1. But current_mutex points
to LOCK_open, which is held by con2.
con4: Gets the InnoDB engine status.
It holds kernel_mutex while trying to lock THD::LOCK_thd_data
belonging to con1, which is held by con3.
So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.
CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
kernel_mutex -> THD::LOCK_thd_data)
FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query and thd->query_length
2) the VIO
3) thd->mysys_var (used by the KILL statement and shutdown)
4) the THD during thread delete.
Among these, 1), 2), and (3,4) are three independent groups of
responsibility. If a different lock owns responsibility for (3,4), the
above-mentioned deadlock cycle cannot form. This fix introduces
LOCK_thd_kill to handle responsibility (3,4), which eliminates the
deadlock (see the sketch below).
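An illustrative C++ sketch of the lock split; the member layout is
schematic (std::mutex stands in for mysql_mutex_t) and is not the exact
THD definition:

  #include <mutex>

  struct st_my_thread_var;        // holds current_mutex/current_cond

  class THD_sketch {
  public:
    std::mutex LOCK_thd_data;     // still protects query text and the VIO
    std::mutex LOCK_thd_kill;     // now protects mysys_var and THD delete
    st_my_thread_var *mysys_var= nullptr;

    void awake_sketch()
    {
      // KILL takes LOCK_thd_kill rather than LOCK_thd_data, so waiting
      // on mysys_var->current_mutex no longer closes the
      // LOCK_thd_data -> LOCK_open -> kernel_mutex cycle.
      std::lock_guard<std::mutex> guard(LOCK_thd_kill);
      // ... signal mysys_var->current_cond under current_mutex ...
    }
  };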
Note: The problem is not found in 5.5. The introduction of the MDL
subsystem moved metadata locking responsibility from the TDC/TC to the
MDL subsystem, which reduced the responsibility of LOCK_open.
As the use of LOCK_open was removed from open_table() and
mysql_rm_table(), the above-mentioned CYCLE does not form.
Revision IDs for the changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
The function mysql_show_binlog_events() has a local stack variable
'LOG_INFO linfo;' which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleaned up.
The problem is solved by moving 'LOG_INFO linfo;' to function scope
(see the sketch below).
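A minimal C++ illustration of the lifetime bug and the fix; names follow
the description above, everything else is simplified:

  struct LOG_INFO {};
  struct THD_sketch { LOG_INFO *current_linfo= nullptr; };

  void mysql_show_binlog_events_sketch(THD_sketch *thd)
  {
    LOG_INFO linfo;               // fix: function scope, so it outlives
    thd->current_linfo= &linfo;   // every use of thd->current_linfo
    {
      // before the fix, linfo lived in an inner block like this one and
      // was destroyed while thd->current_linfo still pointed at it
    }
    thd->current_linfo= nullptr;  // cleared before linfo is destroyed
  }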
mysql-test/suite/innodb/t/group_commit_crash.test:
remove autoincrement to avoid rbr being used for insert ... select
mysql-test/suite/innodb/t/group_commit_crash_no_optimize_thread.test:
remove autoincrement to avoid rbr being used for insert ... select
mysys/my_addr_resolve.c:
a pointer to a buffer is returned to the caller -> the buffer cannot be on the stack
mysys/stacktrace.c:
my_vsnprintf() is ok here, in 5.5
nes prefixed with .\ or ./
- Add my_basename() to mysys (see the sketch below).
- Do not compile files that are not needed on Windows (my_addr_resolve,
and safemalloc-related stuff if it is not used).
This avoids linker warnings about compilation of essentially empty
files.
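A hedged C++ sketch of what such a my_basename() helper can look like;
the real mysys implementation may differ in details:

  // Return the part of the path after the last directory separator,
  // handling both '/' and '\\' so Windows paths work too.
  const char *my_basename_sketch(const char *filename)
  {
    const char *last= filename;
    for (const char *p= filename; *p; p++)
      if (*p == '/' || *p == '\\')
        last= p + 1;
    return last;
  }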
A follow-up patch corrects the max sizes of printed strings and changes
llstr() to %lld.
Credits go to Davi, who provided great feedback.
sql/share/errmsg-utf8.txt:
The max size for the whole message is 512, so any part, such as
'%-.512s', should be smaller; reducing it to 320 is safe and in most
cases won't cut off part of a rather long message in
Last_IO_Error = 'Got fatal error 1236 ...'
sql/sql_repl.cc:
llstr() is replaced by %lld.
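A small C++ illustration of both changes: a narrower string precision so
the 512-byte message buffer is not overrun, and %lld in place of llstr().
The buffer size and message text are schematic:

  #include <cstdio>

  void format_fatal_error(char *buf, long long pos, const char *detail)
  {
    // '%-.320s' caps the variable part well below the 512-byte total;
    // '%lld' prints the position directly instead of going via llstr().
    snprintf(buf, 512,
             "Got fatal error 1236 from master when reading data from "
             "binary log: '%-.320s', position %lld", detail, pos);
  }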
The server crashes when receiving a COM_BINLOG_DUMP command with a
position of 0 or larger than the file size.
Execution proceeds to an error block with the last-read replication
coordinates pointer being NULL, and dereferencing it crashed the
server.
Fixed by making the coordinates, previously used only for heartbeat,
always available (see the sketch after the per-file notes below).
mysql-test/extra/rpl_tests/rpl_start_stop_slave.test:
A regression test for bug#3593869-64035 is added.
mysql-test/suite/rpl/r/rpl_cant_read_event_incident.result:
results updated (the error message format changed).
mysql-test/suite/rpl/r/rpl_log_pos.result:
results updated (the error message format changed).
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result:
results updated (the error message format changed).
mysql-test/suite/rpl/r/rpl_packet.result:
results updated (the error message format changed).
mysql-test/suite/rpl/r/rpl_stm_start_stop_slave.result:
results updated (the error message format changed).
mysql-test/suite/rpl/t/rpl_stm_start_stop_slave.test:
The slave is stopped by the bug#3593869-64035 tests, so
--let $rpl_only_running_threads= 1 is set prior to rpl_end.
sql/share/errmsg-utf8.txt:
Increase the max length of the explanatory message to 512.
sql/sql_repl.cc:
Make `coord' carry the coordinates of the last event read from the
binlog, regardless of heartbeat.
Renaming, small cleanup, and simplification of the code once
`if (coord)' becomes unnecessary.
Also add a third pair of coordinates, the replication start position,
to the error text.
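A schematic C++ sketch of the sql_repl.cc change; types and control flow
are simplified, and only the always-valid `coord' idea comes from the
description above:

  #include <cstdio>

  struct binlog_coord { const char *file_name; unsigned long long pos; };

  // After the fix `coord' always tracks the last event read, so the
  // error path can print it without a NULL dereference; the defensive
  // branch is kept here for illustration only.
  const char *describe_error(const binlog_coord *coord, char *buf,
                             unsigned long len)
  {
    if (coord)
      snprintf(buf, len, "last event read from '%s' at %llu",
               coord->file_name, coord->pos);
    else
      snprintf(buf, len, "last-read coordinates unavailable");
    return buf;
  }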
There was a memory leak when running some tests on PB2.
The cause of the failure is an early return from change_master() that
skipped the deallocation of a dyn-array.
The same bug58915 was fixed in trunk by relocating the dyn-array
destruction into THD::cleanup_after_query(), which cannot be bypassed.
The current patch backports magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: a static buffer for the dyn-array to base
itself on, and initializing the array precisely when it is necessary
rather than on each CHANGE MASTER as before (see the sketch below).
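A schematic C++ sketch of the relocation; the class shapes are
illustrative, and only the idea of moving teardown into the
per-statement cleanup comes from the description above:

  struct dyn_array_sketch { void destroy() { /* free elements */ } };
  struct LEX_MI_sketch { dyn_array_sketch repl_ignore_server_ids; };

  struct THD_sketch {
    LEX_MI_sketch mi;
    void cleanup_after_query()
    {
      // runs after every statement, so unlike the tail of
      // change_master() it cannot be skipped by an early return
      mi.repl_ignore_server_ids.destroy();
    }
  };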
mysql-test/suite/rpl/t/rpl_empty_master_host.test:
the test is binlog-format insensitive so it will be run with MIXED mode only.
mysql-test/suite/rpl/t/rpl_server_id_ignore.test:
the test is binlog-format insensitive so it will be run with MIXED mode only.
sql/sql_class.cc:
relocating the dyn-array
destruction into THD::cleanup_after_query().
sql/sql_lex.cc:
LEX.mi zero initialization is done in LEX().
sql/sql_lex.h:
Optimize repl_ignore_server_ids to base itself on a static buffer
whose size is chosen to fit the most common use cases.
sql/sql_repl.cc:
dyn-array destruction is relocated to THD::cleanup_after_query().
sql/sql_yacc.yy:
Refine the logic of Lex->mi.repl_ignore_server_ids initialization.
The array is initialized once a corresponding option is found in the
CHANGE MASTER token sequence.
Fix a typo causing too low a timeout value for wait_for_slave_param.inc.
Fix binlog checksums following the 5.5 merge.
Make sure the rpl suite can run with --mysqld=--binlog-checksum=CRC32.
Fix a number of problems in the code when checksums are enabled.
There was a memory leak when running some tests on PB2.
The cause of the failure is an early return from change_master() that
skipped the deallocation of a dyn-array.
Fixed by relocating the dyn-array's destruction to ~LEX(), that is,
the end of the session, per Gleb's patch idea.
Two optimizations were done: a static buffer for the dyn-array to base
itself on, and initializing the array precisely when it is necessary
rather than on each CHANGE MASTER as before (see the sketch below).
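A schematic C++ sketch of this variant; the member names are
illustrative, and only the once-per-session flag and the ~LEX() teardown
come from the description above:

  struct server_id_array {
    bool inited= false;
    void init()    { inited= true;  /* allocate dyn-array */ }
    void destroy() { inited= false; /* free dyn-array */ }
  };

  struct LEX_sketch {
    server_id_array repl_ignore_server_ids;

    void on_ignore_server_ids_clause()
    {
      // initialized only when the CHANGE MASTER option actually appears
      if (!repl_ignore_server_ids.inited)
        repl_ignore_server_ids.init();
    }

    ~LEX_sketch()
    {
      // destroyed once, at the end of the session, regardless of early
      // returns from change_master()
      if (repl_ignore_server_ids.inited)
        repl_ignore_server_ids.destroy();
    }
  };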
mysql-test/suite/rpl/t/rpl_empty_master_host.test:
the test is binlog-format insensitive so it will be run with MIXED mode only.
sql/sql_lex.cc:
the new flag is initialized.
sql/sql_lex.h:
A new bool flag member is added to LEX.mi; it stays set after the
LEX.mi.repl_ignore_server_ids dyn-array initialization has been called
for the first time in the session. So it is set once, and its lifetime
is the session.
The array is destroyed at the end of the session.
sql/sql_repl.cc:
dyn-array destruction is relocated to ~LEX.
sql/sql_yacc.yy:
Refine the logic of Lex->mi.repl_ignore_server_ids initialization.
The array is initialized once a corresponding option is found in the
CHANGE MASTER token sequence.
The fact of initialization is recorded in the new flag.