The problem was that a static array was used to store thread mutex sync levels.
Fixed by using std::vector instead.
No test case is included, to avoid excessive memory/disk space usage
on buildbot VMs.
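For illustration only, a minimal self-contained sketch of the data structure
change (the sync_levels name and the depth used below are assumptions, not the
actual server code):

    #include <cstdio>
    #include <vector>

    int main()
    {
      // Before (sketch): a fixed-size array overflows once a thread holds
      // more nested mutexes than the compile-time limit, e.g.:
      //   static const int MAX_LEVELS= 32;
      //   int sync_levels[MAX_LEVELS];

      // After (sketch): a std::vector grows with the nesting depth.
      std::vector<int> sync_levels;
      for (int level= 0; level < 100; level++)   // arbitrary depth for the demo
        sync_levels.push_back(level);
      std::printf("tracked %zu sync levels\n", sync_levels.size());
      return 0;
    }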
Parallel replication (in 10.0 / "conservative" mode) relies on binlog group
commits to group transactions that can be safely run in parallel on the
slave. The --binlog-commit-wait-count and --binlog-commit-wait-usec options
exist to increase the number of commits per group. But in case of conflicts
between transactions, this can cause unnecessary delay and reduced throughput,
especially on a slave where commit order is fixed.
This patch adds a heuristic to reduce this problem. When transaction T1 goes
to commit, it will first wait for N transactions to queue up for a group
commit. However, if we detect that another transaction T2 is waiting for a row
lock held by T1, then we will skip the wait and let T1 commit immediately,
releasing its locks and letting T2 continue.
On a slave, this avoids the unfortunate situation where T1 is waiting for T2
to join the group commit, but T2 is waiting for T1 to release locks, causing
no work to be done for the duration of the --binlog-commit-wait-usec timeout.
(The heuristic seems reasonable on the master as well, so it is enabled for
all transactions, not just replication transactions).
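For illustration, a rough self-contained sketch of the heuristic described
above (not the actual server code; the waiter_detected flag and the polling
loop are simplified assumptions, and the real implementation hooks into the
lock manager and group commit queue):

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Sketch: T1 normally waits for more commits to queue up, but skips the
    // wait as soon as another transaction is reported to be blocked on a
    // row lock that T1 holds.
    struct TrxState
    {
      std::atomic<bool> waiter_detected{false};  // set by the lock manager (assumed)
    };

    static void wait_for_group_commit(TrxState &trx,
                                      std::chrono::microseconds commit_wait_usec)
    {
      auto deadline= std::chrono::steady_clock::now() + commit_wait_usec;
      while (std::chrono::steady_clock::now() < deadline)
      {
        if (trx.waiter_detected.load(std::memory_order_relaxed))
          return;                                // commit now, release row locks
        std::this_thread::sleep_for(std::chrono::microseconds(100));
      }
      // Timeout reached: commit anyway, as --binlog-commit-wait-usec does.
    }

    int main()
    {
      TrxState t1;
      std::thread t2([&] {                       // simulated conflicting transaction
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        t1.waiter_detected.store(true);
      });
      wait_for_group_commit(t1, std::chrono::microseconds(100000));
      std::printf("T1 committed without waiting out the full timeout\n");
      t2.join();
      return 0;
    }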
Delay spawning parallel replication worker threads until a slave SQL
thread is running, and de-spawn them when the last SQL thread stops.
This is especially useful to avoid needless threads on a master in a
setup where the same my.cnf is used on masters and slaves.
available space on disk
Add error handling for when a disk full situation happens, and
intentionally bring the server down with a stacktrace, because
in all such cases InnoDB cannot continue anyway.
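For illustration, a minimal sketch (assumed, not the InnoDB code) of treating
a write failure on a full disk as fatal and aborting with a diagnostic instead
of continuing:

    #include <cerrno>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Sketch: if a data file write fails with ENOSPC, there is no safe way to
    // continue, so log the error and abort (which yields a stacktrace when the
    // server installs an abort/signal handler).
    static void write_or_die(FILE *f, const void *buf, size_t len)
    {
      if (std::fwrite(buf, 1, len, f) != len)
      {
        std::fprintf(stderr, "fatal: write failed: %s\n", std::strerror(errno));
        if (errno == ENOSPC)
          std::fprintf(stderr, "fatal: disk is full, cannot continue\n");
        std::abort();
      }
    }

    int main()
    {
      FILE *f= std::fopen("demo.dat", "wb");
      if (!f)
        return 1;
      const char data[]= "page contents";
      write_or_die(f, data, sizeof(data));
      std::fclose(f);
      return 0;
    }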
Extend show_slave_status.inc to run SHOW ALL SLAVES STATUS and
SHOW SLAVE 'name' STATUS on demand, and make the test use
the include file instead of direct SHOW statements.
Parallel replication depends on locking (table locks, row locks, etc.) to
prevent two conflicting transactions from running and committing in parallel.
But temporary tables are designed to be visible only to one thread, and have
no such locking.
In the concrete issue, an intermediate master could commit a CREATE TEMPORARY
TABLE in the same group commit as an INSERT into that table. Thus, a
lower-level slave could attempt to run them in parallel and get an error.
More generally, we need protection from parallel replication trying to run
transactions in parallel that access a common temporary table.
This patch simply causes use of a temporary table from parallel replication
to wait for all previous transactions to commit, serialising the replication
at that point.
(More fine-grained locking could possibly be added later. However,
using temporary tables in statement-based replication is in any case
normally undesirable; for example a restart of the server will lose
temporary tables and can break replication).
Note that row-based replication is not affected, as it does not use any
temporary tables on the slave side.
This patch also cleans up the locking around protecting the list of
temporary tables in Relay_log_info. This used to take the
rli->data_lock at the end of every statement, which is very bad for
concurrency. With this patch, the lock is not taken unless temporary
tables (with statement-based binlogging) are in use on the slave.
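A simplified, self-contained sketch of the serialisation point described above
(the sequence numbers and the wait_for_prior_commit name below are
placeholders, not the server's actual data structures): before touching a
temporary table, a parallel replication worker waits until every earlier
transaction has committed.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>

    // Sketch: each replicated transaction has a sequence number; before using
    // a temporary table, wait until all earlier transactions have committed.
    struct GroupCommitState
    {
      std::mutex mtx;
      std::condition_variable cond;
      long last_committed= 0;                    // highest committed sequence number

      void wait_for_prior_commit(long my_seq_no) // placeholder name
      {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [&] { return last_committed >= my_seq_no - 1; });
      }

      void signal_commit(long seq_no)
      {
        std::lock_guard<std::mutex> lk(mtx);
        last_committed= seq_no;
        cond.notify_all();
      }
    };

    int main()
    {
      GroupCommitState gc;
      gc.signal_commit(1);          // transaction 1 commits
      gc.wait_for_prior_commit(2);  // transaction 2 may now use the temp table
      std::printf("temporary table access serialised behind prior commits\n");
      return 0;
    }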
when created FK
Analysis: The table name is in the filename charset but foreign key
identifiers are not. This led to an incorrect foreign key
identifier number being used.
Fix: Convert the foreign key identifier to the filename charset before
comparing it to the table name when the largest foreign key identifier
number is resolved.
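For illustration, a self-contained sketch of the comparison (the
to_filename_charset helper and the "<table>_ibfk_N" handling below are
simplified assumptions): convert the foreign key identifier to the filename
charset first, then compare its prefix against the table name and extract the
running number.

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // Placeholder for the conversion to the filename charset (real servers
    // encode non-ASCII characters specially); identity here for the demo.
    static std::string to_filename_charset(const std::string &ident)
    {
      return ident;  // assumption: real code performs an actual charset conversion
    }

    // Sketch: extract N from a generated constraint name "<table>_ibfk_N",
    // comparing in the same (filename) charset as the table name.
    static unsigned long ibfk_number(const std::string &table_name_fncs,
                                     const std::string &fk_ident)
    {
      std::string fk_fncs= to_filename_charset(fk_ident);
      std::string prefix= table_name_fncs + "_ibfk_";
      if (fk_fncs.compare(0, prefix.size(), prefix) != 0)
        return 0;                                  // not a generated identifier
      return std::strtoul(fk_fncs.c_str() + prefix.size(), nullptr, 10);
    }

    int main()
    {
      std::printf("%lu\n", ibfk_number("t1", "t1_ibfk_3"));  // prints 3
      return 0;
    }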
During slow execution, e.g. under valgrind, there was a chance
that an Aria checkpoint would happen while P_S tables were being
queried; this could cause different data in the joined P_S tables, and
thus combinations of results that the test did not expect.
Fixed by disabling Aria checkpoints for the test.
safe_process puts its children (mysqld, in this case) into a separate
process group, to be able to kill them all at once.
buildslave kills mtr's process group when it loses connection to
the master.
result: buildslave kills mtr and safe_process, but leaves stale
mysqld processes in their own process groups.
fix: put safe_process itself into a separate process group; then
buildslave won't kill it, and safe_process will kill mysqld
and itself when it notices that the parent mtr no longer exists.
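A minimal sketch of the idea in C++ (simplified: unlike the real safe_process,
the child stays in the supervisor's own group here): the supervisor makes
itself the leader of a new process group and later kills that whole group when
it notices its parent is gone.

    #include <cstdio>
    #include <signal.h>
    #include <unistd.h>

    int main()
    {
      // Become leader of a new process group, so a signal sent to the
      // parent's group (e.g. by buildslave) does not reach us or our children.
      if (setpgid(0, 0) != 0)
      {
        std::perror("setpgid");
        return 1;
      }

      pid_t parent= getppid();
      // ... fork()/exec() mysqld here; in this simplified sketch the child
      // stays in our process group ...

      for (;;)
      {
        sleep(1);
        if (getppid() != parent)    // parent (mtr) has exited
          kill(0, SIGKILL);         // kill the whole group, ourselves included
      }
    }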
Analysis: after a red-black tree lookup the node was used without
checking whether the lookup succeeded or not. This led to a situation
where a NULL pointer was used.
Fix: Add an additional check that the node found in the red-black tree
is valid.
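For illustration, a small sketch using std::map (typically implemented as a
red-black tree) as a stand-in for the internal tree: check the lookup result
before dereferencing it.

    #include <cstdio>
    #include <map>
    #include <string>

    int main()
    {
      std::map<int, std::string> tree{{1, "one"}, {2, "two"}};

      auto it= tree.find(42);
      // The fix illustrated: verify the lookup actually found a node before
      // using it, instead of dereferencing the result unconditionally.
      if (it == tree.end())
      {
        std::printf("key not found, handle gracefully\n");
        return 0;
      }
      std::printf("found: %s\n", it->second.c_str());
      return 0;
    }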
Re-applied a revision lost in the merge:
commit ed313e8a92
Author: Sergey Vojtovich <svoj@mariadb.org>
Date: Mon Dec 1 14:58:29 2014 +0400
MDEV-7148 - Recurring: InnoDB: Failing assertion: !lock->recursive
On PPC64 a highly loaded server may crash due to an assertion failure in the
InnoDB rwlocks code.
This happened because load order between "recursive" and "writer_thread"
wasn't properly enforced.
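A hedged sketch of the kind of ordering that has to be enforced (simplified;
the real rw-lock code differs): the writer publishes writer_thread before
setting recursive, and readers use acquire/release ordering so that on weakly
ordered CPUs like PPC64 they never see recursive set with a stale
writer_thread.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    static std::atomic<unsigned long> writer_thread{0};
    static std::atomic<bool> recursive{false};

    static void writer()
    {
      writer_thread.store(12345, std::memory_order_relaxed);
      recursive.store(true, std::memory_order_release);   // publish
    }

    static void reader()
    {
      if (recursive.load(std::memory_order_acquire))      // observe publication
      {
        // Guaranteed to see the writer_thread value stored before 'recursive'.
        std::printf("writer is %lu\n",
                    writer_thread.load(std::memory_order_relaxed));
      }
    }

    int main()
    {
      std::thread w(writer), r(reader);
      w.join();
      r.join();
      return 0;
    }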
Analysis: On the master, when executing (single/multi) row INSERTs/REPLACEs,
InnoDB falls back to old-style autoinc locks (table locks)
only if another transaction has already acquired the AUTOINC lock.
On the slave, however, as we are executing log events and sql_command
is not correctly set, InnoDB does not use new-style autoinc
locks when it could.
Fix: Use new-style autoinc locks also when
thd_sql_command(user_thd) == SQLCOM_END, i.e. when this is an RBR event.
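Schematically (a simplified stand-in, not InnoDB's actual lock-mode code), the
decision becomes: treat SQLCOM_END, i.e. a row-based replication event, like a
plain row INSERT/REPLACE, so new-style autoinc locking is used unless another
transaction already holds the AUTOINC table lock.

    #include <cstdio>

    // Simplified stand-ins for the server/InnoDB notions mentioned above.
    enum sql_command { SQLCOM_INSERT, SQLCOM_REPLACE, SQLCOM_END /* RBR event */ };

    static bool use_new_style_autoinc_lock(sql_command cmd,
                                           bool autoinc_lock_already_held)
    {
      // The fix: SQLCOM_END is treated like single/multi row INSERT/REPLACE.
      bool simple_insert=
        cmd == SQLCOM_INSERT || cmd == SQLCOM_REPLACE || cmd == SQLCOM_END;
      return simple_insert && !autoinc_lock_already_held;
    }

    int main()
    {
      std::printf("RBR event, no conflict: %s\n",
                  use_new_style_autoinc_lock(SQLCOM_END, false) ? "new style"
                                                                : "table lock");
      return 0;
    }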
The binlog contains specially marked format description events to mark
when a master restart happened (which could have caused temporary
tables to be silently dropped). Such events also cause the slave to close
its temporary tables.
However, there was a bug: if, after this, the slave re-connects to the
master in GTID mode, the master can send an old format description
event again. If temporary tables are closed when such an event is seen
for the second time, it might drop temporary tables created after that
event, and cause replication failure.
With this patch, the restart flag of the format description event is
cleared by the master when it is sent to the slave in a subsequent
connection, to avoid the erroneous temporary table close.
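Schematically (the struct and flag representation here are assumptions for
illustration, not the actual event layout), the master-side change amounts to
clearing the restart marker before re-sending an old format description event
on a subsequent connection:

    #include <cstdio>

    // Hypothetical, simplified event structure for illustration only.
    struct FormatDescriptionEvent
    {
      bool restart_flag;  // "master restarted here" marker (assumed representation)
    };

    // Sketch: when re-sent on a subsequent connection, clear the marker so
    // the slave does not close its temporary tables a second time.
    static void send_to_slave(FormatDescriptionEvent ev, bool subsequent_connection)
    {
      if (subsequent_connection)
        ev.restart_flag= false;
      std::printf("sending FD event, restart_flag=%d\n", ev.restart_flag);
    }

    int main()
    {
      FormatDescriptionEvent ev{true};
      send_to_slave(ev, /*subsequent_connection=*/true);  // marker cleared
      return 0;
    }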
The problem occurs in parallel replication in GTID mode, when we are using
multiple replication domains. In this case, if the SQL thread stops, the
slave GTID position may refer to a different point in the relay log for each
domain.
The bug was that when the SQL thread was stopped and restarted (but the IO
thread was kept running), the SQL thread would resume applying the relay log
from the point of the most advanced replication domain, silently skipping all
earlier events within other domains. This caused replication corruption.
This patch solves the problem by storing, when the SQL thread stops with
multiple parallel replication domains active, the current GTID
position. Additionally, the current position in the relay logs is moved back
to a point known to be earlier than the current position of any replication
domain. Then when the SQL thread restarts from the earlier position, GTIDs
encountered are compared against the stored GTID position. Any GTID that was
already applied before the stop is skipped to avoid duplicate apply.
This patch should have no effect if multi-domain GTID parallel replication is
not used. Similarly, if both SQL and IO thread are stopped and restarted, the
patch has no effect, as in this case the existing relay logs are removed and
re-fetched from the master at the current global @@gtid_slave_pos.
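A simplified sketch of the skip logic (using a hypothetical map from domain id
to last applied sequence number rather than the server's real data
structures): after rewinding to the earlier relay log position, any GTID at or
below the stored per-domain position is skipped.

    #include <cstdio>
    #include <map>

    struct Gtid { unsigned domain_id; unsigned long long seq_no; };

    // Stored when the SQL thread stops: last applied seq_no per domain (sketch).
    using GtidPosition= std::map<unsigned, unsigned long long>;

    static bool already_applied(const GtidPosition &pos, const Gtid &g)
    {
      auto it= pos.find(g.domain_id);
      return it != pos.end() && g.seq_no <= it->second;
    }

    int main()
    {
      GtidPosition stopped_at{{0, 100}, {1, 90}};  // domain 0 ahead of domain 1
      Gtid replayed{1, 85};                        // seen again after rewinding
      Gtid fresh{1, 91};
      std::printf("domain 1, seq 85: %s\n",
                  already_applied(stopped_at, replayed) ? "skip" : "apply");
      std::printf("domain 1, seq 91: %s\n",
                  already_applied(stopped_at, fresh) ? "skip" : "apply");
      return 0;
    }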
This bug manifested due to incorrect computation and evaluation of
keyinfo->key_length. The issues were:
* Using table->file->max_key_length() as an absolute value that must not be
reached for a key, while it represents the maximum number of bytes
possible for a table key.
* Incorrectly computing the keyinfo->key_length size during
KEY_PART_INFO creation. The metadata information regarding the key,
such as the field length (for strings), was added twice.
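A worked example of the two corrections (the numbers are illustrative only and
engine dependent): max_key_length() is the largest number of bytes a key may
occupy, so reaching it exactly is still valid, and the string field length
must be counted only once.

    #include <cstdio>

    // max_key_length() is an inclusive limit: a key of exactly that size fits.
    static bool key_fits(unsigned key_length, unsigned max_key_length)
    {
      return key_length <= max_key_length;   // not '<': the maximum is reachable
    }

    int main()
    {
      unsigned max_key_length= 3072;   // example limit
      unsigned field_length= 3000;     // e.g. a long string key part
      unsigned length_bytes= 2;        // length prefix stored for strings

      unsigned buggy= field_length + length_bytes + field_length; // length added twice
      unsigned fixed= field_length + length_bytes;                // length added once

      std::printf("buggy key_length %u fits: %d\n",
                  buggy, key_fits(buggy, max_key_length));
      std::printf("fixed key_length %u fits: %d\n",
                  fixed, key_fits(fixed, max_key_length));
      return 0;
    }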
When the server starts up, check if the master-bin.state file was lost.
If it was, recover its contents by scanning the last binlog file, thus
avoiding running with a corrupt binlog state.
Temporary table count fix. The number of temporary tables was increased
even when the table was not actually created (when do_not_open was passed
as TRUE to create_tmp_table).
Partially cherry-picked from mysql/5.6.
No test case (the mysql/5.6 test case is useless; the correct
test case uses too much memory).
commit e061985813db54948f99892d89f7e076242473a5
Author: <Dao-Gang.Qu@sun.com>
Date: Tue Jun 1 15:02:22 2010 +0800
Bug #49931 Incorrect type in read_log_event error
Bug #49932 mysqlbinlog max_allowed_packet hard coded to 1GB
If somehow the COMMIT or XID event in an event group was missing, the code in
parallel replication to handle this was not sufficient, leading to server
deadlock.
In parallel replication, don't rollback inside ha_commit_trans() in case of
error.
The rollback will be done later, but the parallel replication code needs to
run unmark_start_commit() before the rollback to properly control the
sequencing of transactions.
I did not manage to come up with a reliable automatic test case for this, but
I tested it manually.
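A hedged sketch of the required ordering (greatly simplified;
unmark_start_commit and the rollback path are only schematic here): the commit
call reports the error without rolling back, and the parallel replication
caller undoes the start-of-commit marking before performing the rollback.

    #include <cstdio>

    // Simplified stand-ins for the sequencing described above.
    static void unmark_start_commit()  { std::printf("unmark_start_commit()\n"); }
    static void rollback_transaction() { std::printf("rollback\n"); }

    static int commit_trans_sketch(bool error)
    {
      if (error)
        return 1;          // report the error; do NOT roll back here
      std::printf("commit\n");
      return 0;
    }

    int main()
    {
      if (commit_trans_sketch(/*error=*/true))
      {
        // Parallel replication caller: restore the group-commit sequencing
        // state first, then roll back.
        unmark_start_commit();
        rollback_transaction();
      }
      return 0;
    }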