consistently) on Replication Slave
lower_case_table_names 0 -> 1 replication works; it is safe as long as the
mapping of mixed-case names to the lower-case ones is one-to-one (e.g. a
master having both MyTable and MYTABLE would map both to mytable and break
that assumption).
This would happen especially in optimistic parallel replication, where there
is a good chance that a transaction will be rolled back (due to conflicts)
after it has executed record_gtid(). If the transaction did any deletions of
old rows as part of record_gtid(), those deletions will be undone as well.
And the code did not properly ensure that the deletions would be re-tried.
This patch makes record_gtid() remember the list of deletions done as part
of a transaction. Then in rpl_slave_state::update() when the changes have
been committed, we discard the list. However, in case of error and rollback,
in cleanup_context() we will instead put the list back into
rpl_global_gtid_slave_state so that the deletions will be re-tried later.
Probably fixes part of the cause of MDEV-12147 as well.
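As a rough C++ sketch of the remember/discard/restore pattern (the type
and member names here are illustrative, not the actual rpl_slave_state
code):

  #include <list>

  struct gtid_pos_row { unsigned int domain_id; unsigned long long sub_id; };

  struct slave_state_sketch
  {
    std::list<gtid_pos_row> pending_deletes;  // rows deleted by record_gtid()

    // On commit the old GTID-position rows are really gone, so the
    // remembered list can simply be discarded.
    void on_commit() { pending_deletes.clear(); }

    // On rollback (e.g. an optimistic parallel-replication conflict) the
    // deletions were undone too; hand the list back to the global state
    // so the deletions are re-tried later.
    void on_rollback(slave_state_sketch &global_state)
    {
      global_state.pending_deletes.splice(
          global_state.pending_deletes.end(), pending_deletes);
    }
  };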
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The observed and described difference in execution time between master and
slave for a partitioned engine was caused by excessive invocation of
base_engine::rnd_init, which was done even for partitions not involved in
the Rows-event operation.
The bug's slave slowdown therefore scales with the number of partitions.
Fixed by applying an upstream patch.
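The idea of the fix, as a minimal self-contained C++ sketch (partition_stub
and the vector-based read set are stand-ins for the server's ha_partition
internals):

  #include <vector>

  struct partition_stub
  {
    bool scan_inited= false;
    void rnd_init() { scan_inited= true; }  // stands in for the engine call
  };

  // Initialize scans only for the partitions marked in the event's read
  // set, instead of unconditionally touching every partition; the cost now
  // scales with the number of involved partitions, not the total count.
  void init_involved_partitions(std::vector<partition_stub> &parts,
                                const std::vector<bool> &read_set)
  {
    for (size_t i= 0; i < parts.size(); i++)
      if (read_set[i])
        parts[i].rnd_init();
  }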
References:
----------
https://bugs.mysql.com/bug.php?id=73648
Bug#25687813 REPLICATION REGRESSION WITH RBR AND PARTITIONED TABLES
replicate_events_marked_for_skip=FILTER_ON_MASTER
[Note this is a cherry-pick from 10.2 branch.]
When the events of a big transaction are binlogged at an offset of more
than 2GB from the beginning of the log, the semisync master's dump thread
lost such events.
The dump thread skipped the events because it computed their skipping
status erroneously.
The fix makes sure the skipping status is computed correctly.
The test verifies this by simulating the 2GB offset.
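A minimal C++ illustration of the underlying failure mode (this shows the
generic 32-bit overflow pattern past the 2GB mark, not the dump thread's
actual code): an offset kept in a signed 32-bit variable wraps negative,
so any skipping decision derived from it goes wrong, while an unsigned
64-bit offset (my_off_t in the server) stays correct.

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    uint64_t offset= 0x80000100ull;       // an event past the 2GB mark

    int32_t  bad_pos=  (int32_t) offset;  // wraps to a negative value
    uint64_t good_pos= offset;            // remains correct

    std::printf("32-bit pos: %d (negative => status computed wrongly)\n",
                (int) bad_pos);
    std::printf("64-bit pos: %llu\n", (unsigned long long) good_pos);
    return 0;
  }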
Problem
=======
When decoding corrupt binary log files, the server may misbehave without
detecting the corruption of the events.
This patch makes the MySQL server's binary log decoding more resilient to
corruption.
Fixes for events de-serialization and apply
===========================================
@sql/log_event.cc
Query_log_event::Query_log_event: added a check to ensure the query length
respects the event buffer limits.
Query_log_event::do_apply_event: extended a debug print, added a check
to character set to determine if it is "parseable" or not, verified if
database name is valid for system collation.
Start_log_event_v3::do_apply_event: report an error on applying a
non-supported binary log version.
Load_log_event::copy_log_event: added a check on table_name length.
User_var_log_event::User_var_log_event: added checks to avoid reading
out of buffer limits.
User_var_log_event::do_apply_event: reported a sanity check error
properly and added individual sanity checks for variable types that
expect a fixed (or minimum) number of bytes to be read.
Rows_log_event::Rows_log_event: added checks to avoid reading out of
buffer limits.
@sql/log_event_old.cc
Old_rows_log_event::Old_rows_log_event: added a sanity check to avoid
reading out of buffer limits.
@sql/sql_priv.h
Added a sanity check to available_buffer() function.
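The common pattern behind these checks, as a hedged sketch (read_fixed is
a hypothetical helper, not one of the server functions listed above):
never read from the event buffer without first verifying that the
requested length fits before the buffer end.

  #include <cstdint>
  #include <cstring>

  // Copy len bytes from the decode position, refusing to read past the
  // end of the event buffer; a false return means the event must be
  // treated as corrupt instead of being applied.
  static bool read_fixed(const char *&pos, const char *buf_end,
                         void *out, size_t len)
  {
    if (pos > buf_end || len > (size_t) (buf_end - pos))
      return false;                 // would read out of the buffer limits
    std::memcpy(out, pos, len);
    pos+= len;
    return true;
  }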
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
I am not able to reproduce a crash; however, there was no protection in
print_keydup_error() if the storage engine reported the wrong key number.
This patch adds such a protection and should stop any further crashes
in this case.
Other things:
- Added extra protection in Aria to not set errkey to more than the number
  of keys. (I don't think this is the cause of this crash, but better safe
  than sorry.)
- Extended test_if_equal_repl_errors() to handle different cases of
  ER_DUP_ENTRY. This is mainly a precaution for the future.
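A hedged sketch of the added guard (simplified; print_keydup_error()'s
real signature takes the table and key structures): if the engine reports
a key number outside the table's key count, emit a generic duplicate-key
message instead of indexing the key array out of bounds.

  #include <cstdio>

  static void print_generic_dup_error()
  { std::puts("Duplicate entry (key unknown)"); }

  static void print_keydup_for_key(unsigned int key_nr)
  { std::printf("Duplicate entry for key %u\n", key_nr); }

  void report_keydup(unsigned int errkey, unsigned int table_key_count)
  {
    if (errkey >= table_key_count)  // engine reported a bogus key number
    {
      print_generic_dup_error();    // generic message instead of a crash
      return;
    }
    print_keydup_for_key(errkey);   // safe: errkey indexes a real key
  }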
The problem was that the introduction of max-thread-mem-used could cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL signal
and changing max-thread-mem-used to use this new feature. This removes a
lot of problems with the original approach, where one could get errors
signaled silently at almost any time.
Additionally, clear_error() was moved from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that says whether
  we should call clear_error() or not. By default it is called, but no
  longer from dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of
  reset_diagnostics_area(). Before, clear_error() only called
  reset_diagnostics_area() if there was no error, so we normally
  called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
  when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against a kill during
  quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup).
- Set fatal_error if max_thread_mem_used is signaled.
  (Same logic we use for other places where we are out of resources.)
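A simplified C++ sketch of the two API tweaks described above (thd_stub
stands in for THD; the real methods carry much more state):

  struct thd_stub
  {
    bool is_error= false;

    void reset_diagnostics_area() { /* drop errors/warnings */ }

    // New optional 'force' parameter: reset the diagnostics area even
    // when an error is set, so callers no longer reset it twice.
    void clear_error(bool force= false)
    {
      if (!is_error || force)
        reset_diagnostics_area();
      is_error= false;
    }

    // New optional parameter: dispatch_command() now passes false, since
    // do_command() already cleared the error before any allocation.
    void reset_for_next_command(bool do_clear_error= true)
    {
      if (do_clear_error)
        clear_error(true);
      /* ... other per-statement resets ... */
    }
  };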
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl, and neither was the DROP
TEMPORARY TABLE statement that is implicitly binlogged when a client
connection is closed.
So this patch adds such ddl mark for the missing cases.
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
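A rough sketch of the marking itself (the flag value and helper name are
illustrative; MariaDB's binlogged GTID event carries an FL_DDL flag for
this purpose):

  // Stand-in for the flag carried in the binlogged GTID event header.
  enum { GTID_FLAG_DDL= 1 };

  // Called for stand-alone CREATE/DROP TEMPORARY, for such statements
  // inside BEGIN...END, and for the implicit DROP on disconnect, so the
  // optimistic parallel-replication scheduler never runs the transaction
  // speculatively.
  void mark_trans_as_ddl(unsigned int &gtid_flags)
  {
    gtid_flags|= GTID_FLAG_DDL;
  }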
The value of wsrep_affected_rows was not reset properly for the slave.
Now we also reset wsrep_affected_rows in Xid_log_event::do_apply_event(),
in addition to THD::cleanup_after_query().
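A minimal sketch of the extra reset point (thd_stub is a stand-in for THD;
the real handler also does the commit work):

  struct thd_stub { unsigned long long wsrep_affected_rows= 0; };

  int xid_do_apply_event(thd_stub *thd)
  {
    /* ... commit the replicated transaction ... */
    thd->wsrep_affected_rows= 0;  // previously left stale on the slave
    return 0;
  }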
Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>
Gtid_list_log_event::do_apply_event() did not free_root(thd->mem_root).
It can allocate on this mem_root in record_gtid(), and in some scenarios
there is nothing else that does free_root(), leading to a temporary memory
leak until the SQL thread is stopped. One scenario is circular replication
with only one master active. The active master receives only its own
events on the slave, all of which are ignored. But whenever the SQL thread
catches up with the IO thread, a Gtid_list_log_event is applied, leading
to the leak.
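The fix boils down to adding the server's usual MEM_ROOT-trimming call at
the end of the apply step; sketched here assuming the server's own
free_root()/MY_KEEP_PREALLOC (so this compiles only inside the server
tree):

  int do_apply_event_sketch(THD *thd)   // simplified stand-in
  {
    /* ... record_gtid() may allocate on thd->mem_root here ... */

    // Release the per-event allocations while keeping the MEM_ROOT's
    // preallocated first block for reuse by the next event.
    free_root(thd->mem_root, MYF(MY_KEEP_PREALLOC));
    return 0;
  }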
This has no functional changes, but it helps avoid merge problems from 10.0
to 10.1. In 10.0, code that checks for parallel replication uses
opt_slave_parallel_threads > 0, but this check needs to be
mi->using_parallel() in 10.1. By using the same check in 10.0 (with
unchanged semantics), merge problems to 10.1 are avoided.
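A sketch of why the two spellings can coexist (simplified; in the real
10.0 tree opt_slave_parallel_threads is a global, not a member):

  struct Master_info_stub
  {
    unsigned long opt_slave_parallel_threads;

    // In 10.0 this helper keeps the old semantics exactly, so call sites
    // written against it merge cleanly into 10.1, where the underlying
    // implementation differs.
    bool using_parallel() const
    { return opt_slave_parallel_threads > 0; }
  };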
Fix the replication failure caused by incorrect initialization of
THD::invoker_host and THD::invoker_user.
The breakdown of the failure is this:
Query_log_event::host and Query_log_event::user can have their
LEX_STRINGs set to length 0, but the actual str member points to
garbage. Code afterwards copies Query_log_event::host and user to
THD::invoker_host and THD::invoker_user.
Code calling these members expects both to be initialized,
e.g. the str member must be a NUL-terminated string and length must
have the appropriate size.
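The invariant the fix restores, as a self-contained sketch (LEX_STRING
shown in its classic two-member server form):

  struct lex_string_sketch { char *str; size_t length; };

  // A zero-length value must still carry a valid, NUL-terminated pointer;
  // length 0 with a garbage str is exactly the state that broke callers.
  void set_empty(lex_string_sketch &s)
  {
    s.str= const_cast<char*>("");
    s.length= 0;
  }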