Making changes to wsrep_mysqld.h causes large parts of server code to
be recompiled. The reason is that wsrep_mysqld.h is included by
sql_class.h, even though very little of wsrep_mysqld.h is needed in
sql_class.h. This commit introduces a new header file, wsrep_on.h,
which is meant to be included from sql_class.h, and contains only
macros and variable declarations used to determine whether wsrep is
enabled.
Also, header wsrep.h should only contain definitions that are also
used outside of sql/. Therefore, move WSREP_TO_ISOLATION* and
WSREP_SYNC_WAIT macros to wsrep_mysqld.h.
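For illustration, the split could look roughly like the sketch below; the
macro and variable names are placeholders used only to show the intent, not
a promise of what the tree contains:

  /* wsrep_on.h -- sketch: only what sql_class.h needs in order to test
     whether wsrep is enabled (names are illustrative). */
  #ifndef WSREP_ON_INCLUDED
  #define WSREP_ON_INCLUDED

  #ifdef WITH_WSREP

  extern bool WSREP_ON_;                 /* global switch, mirrors @@wsrep_on */

  /* Cheap checks usable from hot paths that include sql_class.h */
  #define WSREP_NNULL(thd) (WSREP_ON_ && (thd)->variables.wsrep_on)
  #define WSREP(thd)       ((thd) && WSREP_NNULL(thd))

  #else /* WITH_WSREP */

  #define WSREP(thd)       (false)
  #define WSREP_NNULL(thd) (false)

  #endif /* WITH_WSREP */
  #endif /* WSREP_ON_INCLUDED */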
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Log MDL state transitions. Trace-friendly message
format. DBUG_LOCK_FILE replaced by thread-local storage.
Logged states legend:
  Seized    lock was acquired without waiting
  Waiting   lock is waiting
  Acquired  lock was acquired after waiting
  Released  lock was released
  Deadlock  lock was aborted due to deadlock
  Timeout   lock was aborted due to timeout > 0
  Nowait    lock was aborted due to zero timeout
  Killed    lock was aborted due to kill message
  OOM       lock can not be acquired because of out of memory
Usage:
mtr --mysqld=--debug=d,mdl,query:i:o,/tmp/mdl.log
Cleanup from garbage messages:
sed -i -re \
'/(mysql|performance_schema|sys|mtr)\// d; /MDL_BACKUP_/ d' \
/tmp/mdl.log
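A minimal stand-alone sketch of the thread-local formatting idea mentioned
above (the function name and the output channel are illustrative; the real
code goes through the DBUG facility):

  #include <cstdio>

  /* Each thread formats a complete message into its own buffer and emits
     it with a single write, so no DBUG_LOCK_FILE-style global lock is
     needed to keep concurrent trace lines from interleaving. */
  static thread_local char mdl_log_buf[1024];

  static void mdl_trace(const char *state, const char *db,
                        const char *name, const char *lock_type)
  {
    std::snprintf(mdl_log_buf, sizeof(mdl_log_buf),
                  "MDL %s %s.%s (%s)\n", state, db, name, lock_type);
    std::fputs(mdl_log_buf, stderr);      /* one call per message */
  }

  int main()
  {
    mdl_trace("Acquired", "test", "t1", "MDL_SHARED_READ");
    return 0;
  }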
It was possible for a user to create an interlocked state which may go on
for a significant period of time. There is a tight loop in the FTWRL code
path that repeatedly tries to acquire a read lock. As the weight of the
FTWRL lock is the smallest of all, it is always selected by the deadlock
detector, but it can never be killed.
Imagine the following sequence:
  connection_0                    connection_1
  GET_LOCK("l1", 0);
                                  LOCK TABLES t WRITE;
  FLUSH TABLES WITH READ LOCK;
                                  GET_LOCK("l1", 1000);
The GET_LOCK statement in connection_1 triggers the deadlock detector,
which tries to select the lock in FTWRL as the victim, since its weight
is 0. However, since the loop in Global_read_lock::lock_global_read_lock()
always retries, it attempts to acquire the lock again, which invokes the
deadlock detector once more, and that cycle continues until GET_LOCK in
connection_1 times out.
This patch resolves the live-lock by introducing a dynamic bonus to the
deadlock weight associated with every lock. Each lock gets a bonus weight
each time it is selected by the deadlock detector. In a live-lock
situation, the locks that cannot be killed gain additional weight on each
iteration. Eventually their weight becomes so high that the deadlock
detector shifts its attention to another lock, until it finds one that
can be killed.
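The weighting idea can be illustrated with a toy model (this is not the
server code; the field and function names below are made up):

  #include <vector>
  #include <climits>

  struct LockWaiter
  {
    unsigned base_weight;  /* static weight; FTWRL-style waits have the lowest */
    unsigned bonus;        /* grows every time this waiter is picked as victim */
    bool     killable;     /* the FTWRL retry loop effectively ignores the kill */
  };

  /* Pick the waiter with the smallest effective weight.  Waiters that
     survive being selected accumulate a bonus, so after a few rounds a
     live-locked FTWRL waiter stops being the cheapest victim and the
     detector moves on to a waiter that can actually be killed. */
  static LockWaiter *pick_victim(std::vector<LockWaiter> &waiters)
  {
    LockWaiter *victim= nullptr;
    unsigned best= UINT_MAX;
    for (LockWaiter &w : waiters)
    {
      unsigned eff= w.base_weight + w.bonus;
      if (eff < best)
      {
        best= eff;
        victim= &w;
      }
    }
    if (victim && !victim->killable)
      victim->bonus++;              /* it will look heavier next round */
    return victim;
  }

  int main()
  {
    std::vector<LockWaiter> waiters= { {0, 0, false}, {5, 0, true} };
    LockWaiter *victim= nullptr;
    do                              /* emulate repeated detector runs */
      victim= pick_victim(waiters);
    while (victim && !victim->killable);
    return 0;
  }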
Some DML operations on tables having unique secondary keys cause scanning
in the secondary index, for instance to find potential unique key violations
in the secondary index. This scanning may involve GAP locking in the index.
As this locking happens also when applying replication events in high priority
applier threads, there is a probability of lock conflicts between two wsrep
high priority threads.
This PR avoids lock conflicts between high priority wsrep threads that do
secondary index scanning, e.g. for duplicate key detection.
The actual fix is the patch in sql_class.cc:thd_need_ordering_with(), where
we allow a relaxed GAP locking protocol between wsrep high priority threads.
wsrep high priority threads (replication appliers, replayers and TOI processors)
are ordered by the replication provider, and they do not need the
serializability support gained from secondary index GAP locks.
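In spirit, the relaxed check looks like the following simplified sketch
(the field used to mark a wsrep high priority thread is a placeholder, not
the real THD member):

  /* Sketch of the relaxed ordering decision.  When both the requesting
     thread and the other thread are wsrep high priority threads
     (appliers, replayers, TOI processors), their ordering is already
     enforced by the replication provider, so GAP-lock based ordering
     between them can be skipped. */
  struct THD_stub { bool wsrep_high_priority; };

  static bool thd_need_ordering_with_sketch(const THD_stub *thd,
                                             const THD_stub *other)
  {
    if (thd->wsrep_high_priority && other->wsrep_high_priority)
      return false;               /* provider already orders these two */
    return true;                  /* normal GAP locking semantics apply */
  }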
The PR also contains an mtr test, which exercises a scenario where two
replication applier threads have a false positive conflict in the GAP of a
unique secondary index. The conflicting local committing transaction has to
replay, and the test also verifies that the replay phase will not conflict
with the latter replication applier.
The commit also contains a new test scenario for galera.galera_UK_conflict.test,
where the replayer starts applying after a slave applier thread, with a later
seqno, has advanced to the commit phase. The applier and replayer have a false
positive GAP lock conflict on the secondary unique index, and the replayer
should ignore it. This test scenario caused a crash with an earlier version in
this PR, and to fix this, the secondary index uniqueness checking has been
relaxed even further.
The InnoDB trx_t structure now has a new member: bool wsrep_UK_scan, which
is set to true when a high priority thread is performing unique secondary
index scanning. The member trx_t::wsrep_UK_scan is defined inside the
WITH_WSREP directive, to make it possible to prepare a MariaDB build where
this additional trx_t member is not present and is not used in the code
base. trx->wsrep_UK_scan is set to true only for the duration of the call
to lock_rec_lock(). trx->wsrep_UK_scan is used only in the
lock_rec_has_to_wait() function, to relax the need to wait if wsrep_UK_scan
is set and the conflicting transaction is also high priority.
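Schematically, the extra condition in the wait decision looks like this
(a reduced sketch of the relevant branch only; names other than
wsrep_UK_scan are placeholders):

  /* Reduced model of the wait decision: trx requests a record lock that
     conflicts with a lock already held by lock_trx. */
  struct trx_stub
  {
    bool is_wsrep_high_priority;   /* BF applier / replayer / TOI */
  #ifdef WITH_WSREP
    bool wsrep_UK_scan;            /* true only while lock_rec_lock() is
                                      called for a unique secondary index
                                      scan by a high priority thread */
  #endif
  };

  static bool lock_rec_has_to_wait_sketch(const trx_stub *trx,
                                           const trx_stub *lock_trx,
                                           bool locks_conflict)
  {
    if (!locks_conflict)
      return false;
  #ifdef WITH_WSREP
    /* A high priority thread scanning a unique secondary index does not
       need to wait for a conflicting lock held by another high priority
       thread: their ordering is already decided by the provider. */
    if (trx->wsrep_UK_scan && lock_trx->is_wsrep_high_priority)
      return false;
  #endif
    return true;
  }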
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction is still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
thd->release_transactional_locks(). The thd function will only call
the mdl_context function if there are no active transactional locks.
In 10.6 we will apply a better fix, changing the return value of some
trans_xxx() functions to indicate whether the transaction was closed or
not. This will avoid the need for the indirect call.
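The thd-level wrapper behaves roughly as follows (a sketch; the member used
to test for an active transaction is a placeholder for the real
server-status check):

  /* Sketch of the indirection: release MDL locks only when no
     transaction is active any more; otherwise keep them until the real
     commit or rollback closes the transaction. */
  struct MDL_context_stub { void release_transactional_locks() {} };

  struct THD_stub
  {
    MDL_context_stub mdl_context;
    bool transaction_active;          /* placeholder for the real check */

    void release_transactional_locks()
    {
      if (!transaction_active)        /* no active transactional locks */
        mdl_context.release_transactional_locks();
    }
  };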
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of these
do additional work (like close_thread_tables) before calling
release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
select_create::send_eof()
- Fixed wrong indentation in injector::transaction::commit()
MDL_lock::Ticket_list::remove_ticket(): reduce algorithmic
complexity from O(N) to O(1)
MDL_lock::Ticket_list::clear_bit_if_not_in_list(): removed
MDL_lock::Ticket_list::m_type_counters: a map of ticket type
to count. Initialization is memset(0) which takes time.
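A toy illustration of the counting idea (types and sizes are invented; the
real list stores MDL_ticket objects):

  #include <array>
  #include <cassert>

  /* Keep a per-type counter next to the ticket list.  Removing a ticket
     then only needs to decrement one counter and clear the corresponding
     bit when it drops to zero, instead of walking the whole list (O(N))
     to check whether another ticket of the same type remains. */
  enum { MDL_TYPE_COUNT= 10 };                   /* illustrative size */

  struct Ticket_list_sketch
  {
    std::array<unsigned, MDL_TYPE_COUNT> m_type_counters{};   /* zeroed */
    unsigned m_bitmap= 0;

    void add(unsigned type)
    {
      assert(type < MDL_TYPE_COUNT);
      m_type_counters[type]++;
      m_bitmap|= 1u << type;
    }

    void remove(unsigned type)                   /* O(1) */
    {
      assert(m_type_counters[type] > 0);
      if (--m_type_counters[type] == 0)
        m_bitmap&= ~(1u << type);
    }
  };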
Reverted original patch (c2e0a0b).
For consistency with "LOCK TABLE <table_name> READ" and "FLUSH TABLES
WITH READ LOCK", which are forbidden under "BACKUP STAGE", forbid "FLUSH
TABLE <table_name> FOR EXPORT" and "FLUSH TABLE <table_name> WITH READ
LOCK" as well.
It'd allow consistent fixes for problems like MDEV-18643.
Fixes MDEV-18067, MDEV-18068 and MDEV-18069
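Conceptually the new restriction is an early rejection while a backup stage
is active, in the same spirit as the existing checks (a sketch with
invented names, not the actual error path):

  /* Sketch: reject FLUSH TABLE <t> FOR EXPORT and FLUSH TABLE <t> WITH
     READ LOCK while a BACKUP STAGE is in progress, mirroring what is
     already done for LOCK TABLE ... READ and FLUSH TABLES WITH READ LOCK. */
  struct THD_stub { bool backup_stage_active; };

  static bool flush_for_export_allowed(const THD_stub *thd)
  {
    if (thd->backup_stage_active)
      return false;               /* caller raises the appropriate error */
    return true;
  }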
The problem was that FLUSH TABLES table_name combined with UNLOCK TABLES
calls MDL_context::set_transaction_duration_for_all_locks(), which
changed backup_locks from MDL_EXPLICIT to MDL_TRANSACTION.
Fixed by ensuring that set_transaction_duration_for_all_locks() doesn't
touch BACKUP locks.
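The fix amounts to skipping BACKUP-namespace tickets when lock durations
are rewritten; roughly (a sketch, not the real MDL data structures):

  #include <vector>

  enum MDL_namespace_sketch { BACKUP_NS, TABLE_NS /* ... */ };
  enum MDL_duration_sketch  { MDL_EXPLICIT, MDL_TRANSACTION };

  struct Ticket_stub
  {
    MDL_namespace_sketch mdl_namespace;
    MDL_duration_sketch  duration;
  };

  /* Move explicit locks to transaction duration, but leave tickets in the
     BACKUP namespace untouched, so that FLUSH TABLES table_name combined
     with UNLOCK TABLES cannot silently turn backup locks transactional. */
  static void set_transaction_duration_for_all_locks_sketch(
      std::vector<Ticket_stub> &tickets)
  {
    for (Ticket_stub &t : tickets)
    {
      if (t.mdl_namespace == BACKUP_NS)
        continue;
      t.duration= MDL_TRANSACTION;
    }
  }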
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Changed check of Global_only_lock to also include BACKUP lock.
- We store the latest MDL_BACKUP_DDL lock in thd->mdl_backup_ticket to be
able to downgrade the lock during copy_data_between_tables()
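As an illustration only (the lock type the ticket is downgraded to is shown
as a placeholder, and the downgrade call stands in for the real MDL API):

  /* Sketch: remember the MDL_BACKUP_DDL ticket when it is granted so
     that copy_data_between_tables() can later downgrade it and let
     concurrent activity continue while rows are copied. */
  struct MDL_ticket_stub
  {
    int type;
    void downgrade(int new_type) { type= new_type; }
  };

  struct THD_stub { MDL_ticket_stub *mdl_backup_ticket= nullptr; };

  enum { MDL_BACKUP_DDL= 1, MDL_BACKUP_WEAKER= 2 };  /* illustrative values */

  static void copy_data_between_tables_sketch(THD_stub *thd)
  {
    if (thd->mdl_backup_ticket)
      thd->mdl_backup_ticket->downgrade(MDL_BACKUP_WEAKER);
    /* ... copy rows between the tables ... */
  }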
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Added new locks to MDL_BACKUP for all stages of backup locks and
a new MDL lock needed for backup stages.
- Renamed MDL_BACKUP_STMT to MDL_BACKUP_DDL
- flush_tables() takes a new parameter that decides what should be flushed.
- InnoDB, Aria (transactional tables with checksums), Blackhole, Federated
and FederatedX tables are marked as safe for online backup. We use
MDL_BACKUP_TRANS_DML instead of MDL_BACKUP_DML locks for these, which
allows any DML to proceed on these tables during the whole backup process
until BACKUP STAGE COMMIT, which blocks the final commit.
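The choice of backup lock type for DML can be pictured like this (a sketch;
the capability flag on the storage engine is a placeholder name):

  enum enum_backup_dml_lock { MDL_BACKUP_DML, MDL_BACKUP_TRANS_DML };

  struct handlerton_stub
  {
    bool online_backup_safe;    /* placeholder for the real capability flag */
  };

  /* Engines marked safe for online backup (InnoDB, transactional Aria,
     Blackhole, Federated, FederatedX) take MDL_BACKUP_TRANS_DML, so their
     DML keeps running until BACKUP STAGE COMMIT; other engines take
     MDL_BACKUP_DML and are blocked earlier in the backup. */
  static enum_backup_dml_lock backup_dml_lock_type(const handlerton_stub *hton)
  {
    return hton->online_backup_safe ? MDL_BACKUP_TRANS_DML : MDL_BACKUP_DML;
  }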
Part of MDEV-5336 Implement LOCK FOR BACKUP
FLUSH TABLE table_names has changed slightly as we are now opening
tables before taking the MDL lock. The difference is that FLUSH TABLE
table_name will now be blocked by a table that is waiting for FTWRL.
There should not be any new deadlocks as part of this change.
The end result is still better in most cases as FTWRL is now only
waiting for write statements to end, not for read only statements and
it's not flushing tables in use from the table cache.
The Share will be needed to be able to determine whether the table supports
online backup. The appropriate metadata lock type in the BACKUP namespace
will be acquired based on this information.
Also made the pending global read lock request the preferred victim of the MDL
deadlock detector. This allows us to hide some non-fatal deadlocks and
make FTWRL less likely to break concurrent queries.