mariadb/storage/innobase/include/lock0priv.inl
Vlad Lesin 3a6af458e6 MDEV-34877 Port "Bug #11745929 Change lock priority so that the transaction holding S-lock gets X-lock first" fix from MySQL to MariaDB
This commit implements
mysql/mysql-server@7037a0bdc8
functionality.

If some transaction 't' requests a not-gap X-lock 'Xt' on record 'r', and
the lock list of record 'r' contains a granted not-gap S-lock 'St' of
transaction 't', followed by not-gap waiting locks WB={Wb1,
Wb2, ..., Wbn} conflicting with 'Xt', and 'Xt' does not conflict with any
other lock located in the list after 'St', then grant 'Xt'. Note that
insert-intention locks are also gap locks.

If some transaction 't' holds a not-gap lock 'Lt' on record 'r', some
other transactions have a continuous sequence of not-gap waiting locks
L(B)={L(b1), L(b2), ..., L(bn)} following 'Lt' in
the list of locks for record 'r', and transaction 't' requests a not-gap
X-lock (which also means not an insert-intention lock, as ii-locks are
also gap locks) conflicting with any lock in L(B), then grant the
requested lock.
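
A minimal standalone sketch of this grant rule follows (plain C++, not
InnoDB code; the SimpleLock type and the conflicts()/can_bypass_waiters()
helpers are hypothetical and deliberately ignore most gap/not-gap
subtleties): the not-gap X request is granted immediately only when every
conflicting lock located after the requester's own granted not-gap S-lock
is a not-gap waiting lock.

#include <cstdio>
#include <vector>

enum class Mode { S, X };

struct SimpleLock {
  int trx;       // owning transaction id
  Mode mode;     // S or X
  bool gap;      // gap (or insert-intention) lock
  bool waiting;  // true if the lock has not been granted yet
};

// Simplified compatibility: only not-gap locks of different transactions
// conflict, and they conflict whenever at least one of them is exclusive.
static bool conflicts(const SimpleLock &a, const SimpleLock &b)
{
  if (a.trx == b.trx || a.gap || b.gap) return false;
  return a.mode == Mode::X || b.mode == Mode::X;
}

// Can a new not-gap X request of 'trx' be granted immediately, bypassing
// the waiting not-gap locks that follow trx's own granted S-lock?
static bool can_bypass_waiters(const std::vector<SimpleLock> &queue, int trx)
{
  const SimpleLock request{trx, Mode::X, /*gap=*/false, /*waiting=*/false};
  bool own_granted_s = false;
  for (const SimpleLock &l : queue) {
    if (l.trx == trx && !l.waiting && !l.gap && l.mode == Mode::S) {
      own_granted_s = true;            // this is 'St'
      continue;
    }
    if (!conflicts(request, l)) continue;
    // A conflicting lock blocks the request unless it is a waiting not-gap
    // lock located after our own granted S-lock; such a waiter is bypassed.
    if (!(own_granted_s && l.waiting && !l.gap)) return false;
  }
  return own_granted_s;
}

int main()
{
  // S1(granted) followed by X2(waiting, not-gap): trx 1 gets X immediately.
  std::vector<SimpleLock> q{{1, Mode::S, false, false}, {2, Mode::X, false, true}};
  std::printf("trx 1 bypasses: %d\n", can_bypass_waiters(q, 1));  // 1
  // A *granted* S2 between them conflicts with the request: trx 1 must wait.
  q.insert(q.begin() + 1, {2, Mode::S, false, false});
  std::printf("trx 1 bypasses: %d\n", can_bypass_waiters(q, 1));  // 0
}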

MySQL's commit contains the following explanation of why insert-intention
locks must not overtake waiting ordinary or gap locks:

"It is important that this decission rule doesn't allow
INSERT_INTENTION locks to overtake WAITING locks on gaps (`S`, `S|GAP`,
`X`, `X|GAP`), as inserting a record into a gap would split such WAITING
lock, violating the invariant that each transaction can have at most
single WAITING lock at any time."

I would add the following to the explanation. Suppose trx 1 holds an
ordinary X-lock on some record, and trx 2 executes "DELETE FROM t"
or "SELECT * FOR UPDATE" in RR (see lock_delete_updated.test and
MDEV-27992), i.e. it creates a waiting ordinary X-lock on the same record.
Then trx 1 wants to insert some record just before the locked record.
It requests an insert-intention lock, and if that lock overtook the trx 2
lock, trx 2 would see phantom records in RR. lock_delete_updated.test
shows how "DELETE" would allow records to be inserted into an already
scanned gap and would miss some records it should delete.

The current implementation differs from the MySQL implementation. There
are two key differences:

1. Lock queue ordering. In MySQL all waiting locks precede all granted
   locks. A new waiting lock is added to the head of the queue, a new
   granted lock is added to the end of the queue, and when a waiting lock
   is granted, it is moved to the end of the queue. In MariaDB any new
   lock is added to the end of the queue, and a waiting lock does not
   change its position in the queue when it is granted. The rule is that
   a blocking lock must be located before the blocked lock in the lock
   queue. We maintain this rule by inserting the bypassing lock just
   before the bypassed one (see the sketch after this list).

2. The MySQL implementation uses an object (locksys::Trx_locks_cache) which
   can be passed to consecutive calls to rec_lock_has_to_wait() for the
   same trx and heap_no to cache the result of checking whether trx has a
   granted lock which blocks the waiting lock (see
   locksys::Trx_locks_cache::has_granted_blocker()). The current
   implementation does not use such an object, because it looks for such a
   granted lock at the level of lock_rec_other_has_conflicting() and
   lock_rec_has_to_wait_in_queue(). I.e. there is no need for an additional
   lock queue iteration in
   locksys::Trx_locks_cache::has_granted_blocker(), as we already iterate
   the queue in lock_rec_other_has_conflicting() and
   lock_rec_has_to_wait_in_queue().
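
The queue-ordering rule from point 1 can be shown with a small standalone
sketch (plain C++; the QueueEntry type and the insert_bypassing() helper
are hypothetical, not the real lock_t list): the bypassing granted lock is
linked in directly before the first bypassed waiting lock instead of being
appended to the tail, so every blocking lock still precedes the lock it
blocks.

#include <iostream>
#include <iterator>
#include <list>
#include <string>
#include <utility>

struct QueueEntry { std::string name; bool waiting; };

using Queue = std::list<QueueEntry>;

// Insert the bypassing granted lock right before the bypassed waiting lock
// instead of appending it to the end of the queue.
static Queue::iterator insert_bypassing(Queue &q, Queue::iterator bypassed,
                                        QueueEntry granted)
{
  return q.insert(bypassed, std::move(granted));
}

int main()
{
  // S1(granted) X2(waiting): trx 1 acquires X and bypasses the waiter X2.
  Queue q{{"S1(granted)", false}, {"X2(waiting)", true}};
  insert_bypassing(q, std::next(q.begin()), {"X1(granted)", false});
  for (const QueueEntry &e : q)
    std::cout << e.name << ' ';
  std::cout << '\n';  // prints: S1(granted) X1(granted) X2(waiting)
}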

During testing the following case was found. Suppose we have a
delete-marked record and we are going to do an inplace insert into
that delete-marked record. Usually we don't create an explicit lock if
there are no locks conflicting with a not-gap X-lock (see
lock_clust_rec_modify_check_and_lock(), btr_cur_update_in_place()). The
implicit lock will be converted to an explicit one on demand.

That can happen during INSERT: a not-gap S-lock can
be acquired while searching for duplicates (see
row_ins_duplicate_error_in_clust()), and, if a delete-marked record is
found, the inplace insert (see btr_cur_upd_rec_in_place()) modifies the
record, which is treated as an implicit lock.

But there can be a case when some transaction trx1 holds a not-gap S-lock,
another transaction trx2 creates a waiting X-lock, and then trx1 tries to
do an inplace insert. Before the fix, the waiting X-lock of trx2 would be a
conflicting lock, and trx1 would try to create an explicit X-lock, which
would cause a deadlock, and one of the transactions would be rolled back.
But after the fix, the trx2 waiting X-lock is not treated as conflicting
with the trx1 X-lock anymore, as trx1 already holds the S-lock. If we don't
create an explicit lock, then some other transaction trx3 can create it
during implicit to explicit lock conversion and place it at the end of the
queue. So there can be the following locks order in the queue:

S1(granted) X2(waiting) X1(granted)

The above queue is not valid, because all granted trx1 locks must be
placed before the waiting trx2 lock. Besides, lock_rec_release_try() can
remove the S(granted, trx1) lock and grant the X-lock to trx2, and then
there can be two granted X-locks on the same record:

X2(granted) X1(granted)

Taking into account that lock_rec_release_try() can release the cell and
lock_sys latches leaving some locks unreleased, the queue validation
function can fail in any unexpected place.

This can be fixed in two ways:

1) Place the explicit X(granted, trx1) lock before the X(waiting, trx2)
   lock during implicit to explicit lock conversion. This option is
   implemented in MySQL, as a granted lock is always placed at the top of
   the locks queue and waiting locks are placed at the bottom of the queue.
   MariaDB does not do this, and implementing this variant would require a
   conflicting locks search before converting the implicit lock to an
   explicit one, which, in turn, would require cell and/or lock_sys latch
   acquisition.

2) Create and place the X(granted, trx1) lock before the X(waiting, trx2)
   lock during the inplace INSERT, i.e. when lock_rec_lock() is invoked
   from lock_clust_rec_modify_check_and_lock() or
   lock_sec_rec_modify_check_and_lock(), if X(waiting, trx2) is
   bypassed. This way we don't need an additional conflicting locks
   search, as they are searched anyway in lock_rec_low().

This fix implements the second variant (see the changes around
c_lock_info.insert_after in lock_rec_lock()). I.e. if some record was
delete-marked and we do an inplace insert into such a record, and some lock
to bypass was found, create an explicit lock to avoid a conflicting lock
search on each implicit to explicit lock conversion. We can remove this if
MDEV-35624 is implemented.
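
A condensed sketch of this decision follows (plain C++ with hypothetical
names such as LockInfo and inplace_insert_locking(); it is not the actual
lock_rec_lock() code): when the conflict scan reports no blocking lock but
does report a bypassed waiter, an explicit granted lock is materialized
right away, placed after insert_after, so a later implicit-to-explicit
conversion cannot append it behind the waiter.

#include <cstdio>

struct LockInfo {
  const void *conflicting;   // first lock the request must wait for, if any
  const void *insert_after;  // lock after which a bypassing lock may be placed
  const void *bypassed;      // first waiting lock being bypassed, if any
};

// Called while doing an inplace insert into a delete-marked record.
static void inplace_insert_locking(const LockInfo &info)
{
  if (info.conflicting) {
    std::puts("enqueue a waiting lock behind the conflicting one");
  } else if (info.bypassed) {
    // Without this, a later implicit-to-explicit conversion would append the
    // granted lock to the tail of the queue, i.e. *after* the bypassed waiter.
    std::puts("create an explicit granted lock just after insert_after");
  } else {
    std::puts("no explicit lock needed; rely on the implicit lock");
  }
}

int main()
{
  int own_s, waiter;  // stand-ins for S(granted, trx1) and X(waiting, trx2)
  inplace_insert_locking({nullptr, &own_s, &waiter});
}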

lock_rec_other_has_conflicting(), lock_rec_has_to_wait_in_queue():
search for locks to bypass along with the conflicting locks search in the
same loop. The result is returned in a conflicting_lock_info object.
There can be several locks to bypass, but only the first one is returned,
to limit lock_rec_find_similar_on_page() to the first bypassed lock and
preserve the "blocking before blocked" invariant. conflicting_lock_info
also contains a pointer to the lock after which we can insert the
bypassing lock. This lock precedes the bypassed one.
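
A simplified standalone sketch of this single-pass scan follows (plain
C++; QLock, ScanResult and scan_queue() are hypothetical names, not the
real conflicting_lock_info): one iteration over the queue reports both the
first genuinely blocking lock and the first waiter that may be bypassed,
together with the lock after which a bypassing lock could be inserted.

#include <cstdio>
#include <vector>

struct QLock { int trx; bool exclusive; bool waiting; };

struct ScanResult {
  const QLock *conflicting  = nullptr;  // lock the request must wait for
  const QLock *bypassed     = nullptr;  // first waiter that may be bypassed
  const QLock *insert_after = nullptr;  // lock preceding the first bypassed one
};

// Scan once for an X request of 'trx' that already holds a granted S-lock
// on the record; gap/II details are intentionally omitted.
static ScanResult scan_queue(const std::vector<QLock> &queue, int trx)
{
  ScanResult res;
  const QLock *prev = nullptr;
  for (const QLock &l : queue) {
    if (l.trx != trx) {
      if (l.waiting) {
        if (!res.bypassed) { res.bypassed = &l; res.insert_after = prev; }
      } else if (!res.conflicting) {
        res.conflicting = &l;           // a granted lock of another trx blocks us
      }
    }
    prev = &l;
  }
  return res;
}

int main()
{
  // S1(granted) followed by X2(waiting): nothing blocks trx 1, X2 is bypassed.
  std::vector<QLock> q{{1, false, false}, {2, true, true}};
  ScanResult r = scan_queue(q, 1);
  std::printf("blocked=%d bypass=%d insert_after_is_S1=%d\n",
              r.conflicting != nullptr, r.bypassed != nullptr,
              r.insert_after == &q[0]);  // blocked=0 bypass=1 insert_after_is_S1=1
}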

A bypassing lock can be a next-key lock, and the following cases are
possible:

1. S1(not-gap, granted) II2(granted) X3(waiting for S1),

   When new X1(ordinary) lock is acquired, there will be the following
   locks queue:

   S1(not-gap, granted) II2(granted) X1(ordinary, granted) X3(waiting for
   S1)

   If we had inserted new X1 lock just after S1, and S1 had been released
   on transaction commit or rollback, we would have the following
   sequence in the locks queue:

   X1(ordinary, granted) II2(granted) X3(waiting for X1)
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   This is not a real issue, as an II lock, once granted, can be
   ignored, but it could possibly hit some assert (taking into account
   that lock_release_try() can release the lock_sys latch, and other
   threads can acquire the latch and validate the lock queue) as it breaks
   our design constraint that any granted lock in the queue should not
   conflict with locks ahead in the queue. But lock_rec_queue_validate()
   does not check the above constraint. We place the new bypassing lock
   just before the bypassed one, but there still can be the case when a
   lock bitmap is used instead of creating a new lock object (see
   lock_rec_add_to_queue() and lock_rec_find_similar_on_page()), and the
   lock which owns the bitmap can precede II2(granted). We can either
   disable the lock_rec_find_similar_on_page() space optimization for
   bypassing locks or treat the "X1(ordinary, granted) II2(granted)"
   sequence as valid. As we don't currently have a function which would
   fail on the above sequence, let's treat it as valid for the case when
   lock_release() execution is in progress.

2. S1(ordinary, granted) II2(waiting for S1) X3(waiting for S1)

   When new X1(ordinary) lock is acquired, there will be the following
   locks queue:

   S1(ordinary, granted) II2(waiting for S1) X1(ordinary, granted)
   X3(waiting for S1).

   After S1 releasing there will be:

   II2(granted) X1(ordinary, granted) X3(waiting for X1)
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   The above queue is valid because an ordinary lock does not conflict
   with an II-lock (see lock_rec_has_to_wait()).

lock_rec_create_low(): insert the new lock at the position which
lock_rec_other_has_conflicting() or lock_rec_has_to_wait_in_queue()
returned if the lock is a bypassing one.

lock_rec_find_similar_on_page(): add the ability to limit the similar lock
search to a certain lock, to preserve the "blocking before blocked"
invariant for all bypassed locks.
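
The limited search can be sketched standalone as follows (plain C++;
PageLock and find_similar() are hypothetical names, not the real page lock
list): the scan for a reusable lock bitmap stops at the first bypassed
waiting lock, so a reused lock object can never end up located after a
lock it blocks.

#include <cstdint>
#include <cstdio>
#include <vector>

struct PageLock {
  int trx;
  unsigned mode;                 // simplified mode bits
  std::vector<uint8_t> bitmap;   // one bit per heap_no on the page
};

// Return a lock of 'trx' with the same mode whose bitmap may be reused,
// searching only up to (and excluding) 'limit', the first bypassed lock.
static PageLock *find_similar(std::vector<PageLock> &queue, int trx,
                              unsigned mode, const PageLock *limit)
{
  for (PageLock &l : queue) {
    if (&l == limit) break;      // never reuse a lock at or after the limit
    if (l.trx == trx && l.mode == mode) return &l;
  }
  return nullptr;
}

int main()
{
  std::vector<PageLock> q{{1, 0x2, {0}}, {2, 0x4, {0}}};
  // Pretend q[1] is the first bypassed waiting lock: the search stops before it.
  if (PageLock *l = find_similar(q, 1, 0x2, &q[1]))
    l->bitmap[0] |= uint8_t(1u << 3);  // set the bit for heap_no 3 in the reused lock
  std::printf("bitmap[0]=0x%x\n", q[0].bitmap[0]);  // prints bitmap[0]=0x8
}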

lock_rec_add_to_queue(): don't treat bypassed locks as waiting ones, to
allow lock bitmap reuse for bypassing locks.

lock_rec_lock(): fix the inplace insert case explained above.

lock_rec_dequeue_from_page(), lock_rec_rebuild_waiting_queue(): move the
bypassing lock to the correct place to preserve the "blocking before
blocked" invariant.

Reviewed by: Debarun Banerjee, Marko Mäkelä.
2025-01-23 17:38:32 +03:00


/*****************************************************************************
Copyright (c) 2007, 2014, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2018, 2022, MariaDB Corporation.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
*****************************************************************************/
/**************************************************//**
@file include/lock0priv.ic
Lock module internal inline methods.
Created July 16, 2007 Vasil Dimov
*******************************************************/
/* This file contains only methods which are used in
lock/lock0* files, other than lock/lock0lock.cc.
I.e. lock/lock0lock.cc contains more internal inline
methods but they are used only in that file. */
#ifndef LOCK_MODULE_IMPLEMENTATION
#error Do not include lock0priv.ic outside of the lock/ module
#endif
#include "row0row.h"
/*********************************************************************//**
Checks if some transaction has an implicit x-lock on a record in a clustered
index.
@return transaction id of the transaction which has the x-lock, or 0 */
UNIV_INLINE
trx_id_t
lock_clust_rec_some_has_impl(
/*=========================*/
	const rec_t*		rec,	/*!< in: user record */
	const dict_index_t*	index,	/*!< in: clustered index */
	const rec_offs*		offsets)/*!< in: rec_get_offsets(rec, index) */
{
	ut_ad(dict_index_is_clust(index));
	ut_ad(page_rec_is_user_rec(rec));

	return(row_get_rec_trx_id(rec, index, offsets));
}

/*********************************************************************//**
Gets the number of bits in a record lock bitmap.
@return number of bits */
UNIV_INLINE
ulint
lock_rec_get_n_bits(
/*================*/
	const lock_t*	lock)	/*!< in: record lock */
{
	return(lock->un_member.rec_lock.n_bits);
}

/**********************************************************************//**
Sets the nth bit of a record lock to TRUE. */
inline
void
lock_rec_set_nth_bit(
/*=================*/
	lock_t*	lock,	/*!< in: record lock */
	ulint	i)	/*!< in: index of the bit */
{
	ulint	byte_index;
	ulint	bit_index;

	ut_ad(!lock->is_table());
	ut_ad(i < lock->un_member.rec_lock.n_bits);

	byte_index = i / 8;
	bit_index = i % 8;

#if defined __GNUC__ && !defined __clang__ && __GNUC__ < 6
# pragma GCC diagnostic push
# pragma GCC diagnostic ignored "-Wconversion" /* GCC 4 and 5 need this here */
#endif
	((byte*) &lock[1])[byte_index] |= static_cast<byte>(1 << bit_index);
#if defined __GNUC__ && !defined __clang__ && __GNUC__ < 6
# pragma GCC diagnostic pop
#endif

#ifdef SUX_LOCK_GENERIC
	ut_ad(lock_sys.is_writer() || lock->trx->mutex_is_owner());
#else
	ut_ad(lock_sys.is_writer() || lock->trx->mutex_is_owner()
	      || (xtest() && !lock->trx->mutex_is_locked()));
#endif
	lock->trx->lock.n_rec_locks++;
	lock->trx->lock.set_nth_bit_calls++;
}

/** Gets the first or next record lock on a page.
@param lock a record lock
@return next lock, NULL if none exists */
UNIV_INLINE
lock_t *lock_rec_get_next_on_page(const lock_t *lock)
{
  return const_cast<lock_t*>(lock_rec_get_next_on_page_const(lock));
}

/*********************************************************************//**
Gets the next explicit lock request on a record.
@return next lock, NULL if none exists or if heap_no == ULINT_UNDEFINED */
UNIV_INLINE
lock_t*
lock_rec_get_next(
/*==============*/
	ulint	heap_no,/*!< in: heap number of the record */
	lock_t*	lock)	/*!< in: lock */
{
	do {
		lock = lock_rec_get_next_on_page(lock);
	} while (lock && !lock_rec_get_nth_bit(lock, heap_no));

	return(lock);
}

/*********************************************************************//**
Gets the next explicit lock request on a record.
@return next lock, NULL if none exists or if heap_no == ULINT_UNDEFINED */
UNIV_INLINE
const lock_t*
lock_rec_get_next_const(
/*====================*/
	ulint		heap_no,/*!< in: heap number of the record */
	const lock_t*	lock)	/*!< in: lock */
{
	return lock_rec_get_next(heap_no, const_cast<lock_t*>(lock));
}

/*********************************************************************//**
Gets the nth bit of a record lock.
@return TRUE if bit set also if i == ULINT_UNDEFINED return FALSE*/
UNIV_INLINE
ibool
lock_rec_get_nth_bit(
/*=================*/
	const lock_t*	lock,	/*!< in: record lock */
	ulint		i)	/*!< in: index of the bit */
{
	const byte*	b;

	ut_ad(!lock->is_table());

	if (i >= lock->un_member.rec_lock.n_bits) {
		return(FALSE);
	}

	b = ((const byte*) &lock[1]) + (i / 8);

	return(1 & *b >> (i % 8));
}

/** Gets the first or next record lock on a page.
@param lock a record lock
@return next lock, NULL if none exists */
UNIV_INLINE
const lock_t *lock_rec_get_next_on_page_const(const lock_t *lock)
{
  ut_ad(!lock->is_table());

  const page_id_t page_id{lock->un_member.rec_lock.page_id};

  while (!!(lock= static_cast<const lock_t*>(lock->hash)))
    if (lock->un_member.rec_lock.page_id == page_id)
      break;

  return lock;
}

/*********************************************************************//**
Calculates if lock mode 1 is compatible with lock mode 2.
@return nonzero if mode1 compatible with mode2 */
UNIV_INLINE
ulint
lock_mode_compatible(
/*=================*/
	enum lock_mode	mode1,	/*!< in: lock mode */
	enum lock_mode	mode2)	/*!< in: lock mode */
{
	ut_ad((ulint) mode1 < lock_types);
	ut_ad((ulint) mode2 < lock_types);

	return(lock_compatibility_matrix[mode1][mode2]);
}

/*********************************************************************//**
Calculates if lock mode 1 is stronger or equal to lock mode 2.
@return nonzero if mode1 stronger or equal to mode2 */
UNIV_INLINE
ulint
lock_mode_stronger_or_eq(
/*=====================*/
	enum lock_mode	mode1,	/*!< in: lock mode */
	enum lock_mode	mode2)	/*!< in: lock mode */
{
	ut_ad((ulint) mode1 < lock_types);
	ut_ad((ulint) mode2 < lock_types);

	return(lock_strength_matrix[mode1][mode2]);
}

/*********************************************************************//**
Checks if a transaction has the specified table lock, or stronger. This
function should only be called by the thread that owns the transaction.
@return lock or NULL */
UNIV_INLINE
const lock_t*
lock_table_has(
/*===========*/
	const trx_t*		trx,	/*!< in: transaction */
	const dict_table_t*	table,	/*!< in: table */
	lock_mode		in_mode)/*!< in: lock mode */
{
	/* Look for stronger locks the same trx already has on the table */

	for (lock_list::const_iterator it = trx->lock.table_locks.begin(),
	     end = trx->lock.table_locks.end(); it != end; ++it) {

		const lock_t*	lock = *it;

		if (lock == NULL) {
			continue;
		}

		ut_ad(trx == lock->trx);
		ut_ad(lock->is_table());
		ut_ad(lock->un_member.tab_lock.table);

		if (table == lock->un_member.tab_lock.table
		    && lock_mode_stronger_or_eq(lock->mode(), in_mode)) {

			ut_ad(!lock->is_waiting());

			return(lock);
		}
	}

	return(NULL);
}