mariadb/mysql-test/suite
Sergey Vojtovich b88a298f2d MDEV-19749 - MDL scalability regression after backup locks
Statements that intend to modify data have to acquire protection
against an ongoing backup. Prior to backup locks, protection against
FTWRL was acquired in the form of two shared metadata locks in the
GLOBAL (global read lock) and COMMIT namespaces. These two namespaces
were separate entities: they didn't share data structures or locking
primitives, and thus they were separate contention points.

With backup locks, introduced by 7a9dfdd, these namespaces were
combined into a single BACKUP namespace. It became a single
contention point, which doubled the load on the BACKUP namespace data
structures and locking primitives compared to the GLOBAL and COMMIT
namespaces. In other words, system throughput was halved.

MDL fast lanes solve this problem by allowing multiple contention
points for a single MDL_lock. A fast lane is a scalable,
multi-instance registry for lightweight locks. Internally it is just
a list of granted tickets, a close counter and a mutex.
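
A minimal sketch of such a lane, assuming invented names
(MDL_fast_lane, try_grant(), close_into()); the real MDL code lives
in sql/mdl.h and sql/mdl.cc and differs in detail:

  #include <list>
  #include <mutex>

  class MDL_ticket;                      // granted lock handle (server class)

  class MDL_fast_lane
  {
    std::mutex m_mutex;                  // protects the lane
    std::list<MDL_ticket*> m_granted;    // lightweight tickets granted here
    unsigned m_closed= 0;                // > 0 while a heavyweight lock is active

  public:
    /* Grant a lightweight ticket without touching MDL_lock, if possible. */
    bool try_grant(MDL_ticket *ticket)
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      if (m_closed)
        return false;                    // caller falls back to MDL_lock
      m_granted.push_back(ticket);
      return true;
    }

    /* Close the lane and hand its tickets over; sketched further below. */
    void close_into(std::list<MDL_ticket*> &conventional_granted);
  };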

The number of fast lanes (or contention points) is defined by the
metadata_locks_instances system variable. A value of 1 disables fast
lanes, and lock requests are served by the conventional MDL_lock data
structures.

Since fast lanes allow an arbitrary number of contention points, they
outperform the pre-backup-locks GLOBAL and COMMIT namespaces.

Fast lanes are enabled only for the BACKUP namespace. Support for
other namespaces is to be implemented separately.

Lock types are divided into two categories: lightweight and heavyweight.

Lightweight lock types represent DML: MDL_BACKUP_DML,
MDL_BACKUP_TRANS_DML, MDL_BACKUP_SYS_DML, MDL_BACKUP_DDL,
MDL_BACKUP_ALTER_COPY, MDL_BACKUP_COMMIT. They are fully compatible
with each other and are normally served by the corresponding fast
lane, which is determined by thread_id % metadata_locks_instances.
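
As an illustration of the lane selection (the helper name pick_lane()
and the raw lane array are assumptions of this sketch, not the actual
code):

  #include <cstddef>

  class MDL_fast_lane;                   // see the sketch above

  /* Pick the lane that serves a lightweight BACKUP lock request. */
  MDL_fast_lane *pick_lane(MDL_fast_lane *lanes, std::size_t n_instances,
                           unsigned long thread_id)
  {
    if (n_instances <= 1)                // metadata_locks_instances == 1:
      return nullptr;                    // fast lanes disabled, use MDL_lock
    return &lanes[thread_id % n_instances];
  }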

Heavyweight lock types represent an ongoing backup: MDL_BACKUP_START,
MDL_BACKUP_FLUSH, MDL_BACKUP_WAIT_FLUSH, MDL_BACKUP_WAIT_DDL,
MDL_BACKUP_WAIT_COMMIT, MDL_BACKUP_FTWRL1, MDL_BACKUP_FTWRL2,
MDL_BACKUP_BLOCK_DDL. These locks are always served by the
conventional MDL_lock data structures. Whenever such a lock is
requested, fast lanes are closed and all tickets registered in fast
lanes are moved to the conventional MDL_lock data structures. Until
such locks are released or aborted, lightweight lock requests are
also served by the conventional MDL_lock data structures.
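
The closing step could look roughly like this sketch, continuing the
MDL_fast_lane class above (the real code additionally has to
interlock with MDL_lock's own locking):

  /* Close the lane and move every granted ticket onto the conventional
     MDL_lock granted list. */
  void MDL_fast_lane::close_into(std::list<MDL_ticket*> &conventional_granted)
  {
    std::lock_guard<std::mutex> guard(m_mutex);
    m_closed++;                          // new lightweight requests bypass lane
    conventional_granted.splice(conventional_granted.end(), m_granted);
  }

In this sketch, releasing or aborting the heavyweight lock would
decrement the close counter, reopening the lane once the counter
drops back to zero.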

Strictly speaking, moving tickets from fast lanes to the conventional
MDL_lock data structures is not required. But it reduces complexity
and keeps methods like MDL_lock::visit_subgraph(),
MDL_lock::notify_conflicting_locks(), MDL_lock::reschedule_waiters()
and MDL_lock::can_grant_lock() intact.

It is not even required to register tickets in fast lanes. They
could be implemented on top of an atomic variable that holds two
counters: granted lightweight locks and granted/waiting heavyweight
locks. This is similar to the MySQL solution, which, roughly
speaking, has a "single atomic fast lane". However, it appears this
would not bring any better performance, while code complexity would
be much higher.
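
For comparison, that rejected alternative could be sketched as a
single 64-bit atomic packing both counters (purely illustrative, not
MySQL's actual implementation):

  #include <atomic>
  #include <cstdint>

  class Backup_lock_counters
  {
    /* low 32 bits: granted lightweight locks;
       high 32 bits: granted/waiting heavyweight locks */
    std::atomic<std::uint64_t> m_word{0};

  public:
    bool try_acquire_lightweight()
    {
      std::uint64_t old= m_word.load(std::memory_order_relaxed);
      do
      {
        if (old >> 32)                   // heavyweight lock present: give up
          return false;
      } while (!m_word.compare_exchange_weak(old, old + 1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed));
      return true;
    }

    void release_lightweight()
    {
      m_word.fetch_sub(1, std::memory_order_release);
    }
  };

Every DML thread would hit the same cache line here, which is exactly
the kind of single contention point that fast lanes are meant to
avoid.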
2025-10-01 14:05:17 +00:00
archive Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
atomic Fix the test: changing charset should be done when we can not skip the test. 2025-05-09 07:36:15 +02:00
binlog Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
binlog_encryption Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
client
compat Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
csv MDEV-36050 DATA/INDEX DIRECTORY handling is inconsistent 2025-04-18 09:41:23 +02:00
encryption Merge 11.4 into 11.8 2025-10-01 10:32:47 +03:00
engines Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
events
federated Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
funcs_1 Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
funcs_2 Merge 11.4 into 11.8 2025-04-02 14:07:01 +03:00
galera Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
galera_3nodes Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
galera_3nodes_sr Merge 11.4 into 11.8 2025-04-02 14:07:01 +03:00
galera_sr galera tests: synchronization between versions and editions 2025-08-14 17:04:40 +02:00
gcol Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
handler Merge 11.4 into 11.7 2025-01-09 09:41:38 +02:00
heap Merge branch '11.4' into 11.7 2025-02-06 16:46:36 +01:00
innodb Merge 11.4 into 11.8 2025-10-01 10:32:47 +03:00
innodb_fts MDEV-35163 InnoDB persistent statistics fail to update after ALTER TABLE...ALGORITHM=COPY 2025-09-22 17:39:47 +05:30
innodb_gis Merge branch '11.4' into 11.8 2025-06-18 07:43:24 +02:00
innodb_i_s
innodb_zip fix incorrect merge 15700f54c2 2025-04-18 09:41:24 +02:00
jp
json Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
large_tests
maria MDEV-24 Segmented key cache for Aria 2025-10-01 14:05:10 +00:00
mariabackup Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
merge Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
mtr/t
mtr2
multi_source Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
optimizer_unfixed_bugs
parts Merge branch '11.4' into 11.8 2025-07-28 21:29:29 +02:00
perfschema Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
perfschema_stress
period Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
plugins Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
roles MDEV-7761 Some MTR tests fail when run on a host named 'localhost' 2025-07-21 10:24:14 +02:00
rpl Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
s3 Merge branch '11.4' into 11.8 2025-06-18 07:43:24 +02:00
sql_sequence Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
storage_engine
stress
sys_vars MDEV-19749 - MDL scalability regression after backup locks 2025-10-01 14:05:17 +00:00
sysschema Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
unit
vcol Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
versioning Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
wsrep Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00