MDEV-24142: Replace InnoDB rw_lock_t with sux_lock

InnoDB buffer pool block and index tree latches depend on a
special kind of read-update-write lock that allows reentrant
(recursive) acquisition of the 'update' and 'write' locks
as well as an upgrade from 'update' lock to 'write' lock.
The 'update' lock allows any number of reader locks from
other threads, but no concurrent 'update' or 'write' lock.

If there were no requirement to support an upgrade from 'update'
to 'write', we could compose the lock out of two srw_lock
(implemented as any type of native rw-lock, such as SRWLOCK on
Microsoft Windows). Removing this requirement is very difficult,
so in commit f7e7f487d4b06695f91f6fbeb0396b9d87fc7bbf we
added an 'update' mode to our srw_lock.

Re-entrant or recursive locking is mostly needed when writing or
freeing BLOB pages, but also in crash recovery or when merging
buffered changes to an index page. The re-entrancy allows us to
attach a previously acquired page to a sub-mini-transaction that
will be committed before whatever else is holding the page latch.

The SUX lock supports Shared ('read'), Update, and eXclusive ('write')
locking modes. The S latches are not re-entrant, but a single S latch
may be acquired even if the thread already holds a U latch.

The idea of the U latch is to allow a write of something that concurrent
readers do not care about (such as the contents of BTR_SEG_LEAF,
BTR_SEG_TOP and other page allocation metadata structures, or
the MDEV-6076 PAGE_ROOT_AUTO_INC). (The PAGE_ROOT_AUTO_INC field
is only updated when a dict_table_t for the table exists, and only
read when a dict_table_t for the table is being added to dict_sys.)

block_lock::u_lock_try(bool for_io=true) is used in buf_flush_page()
to allow concurrent readers but no concurrent modifications while the
page is being written to the data file. That latch will be released
by buf_page_write_complete() in a different thread. Hence, we use
the special lock owner value FOR_IO.

The index_lock::u_lock() improves concurrency on operations that
involve non-leaf index pages.

The interface has been cleaned up a little. We will use
x_lock_recursive() instead of x_lock() when we know that a
lock is already held by the current thread. Similarly,
a lock upgrade from U to X is only allowed via u_x_upgrade()
or x_lock_upgraded() but not via x_lock().

We will disable the LatchDebug and sync_array interfaces to
InnoDB rw-locks.

The SEMAPHORES section of SHOW ENGINE INNODB STATUS output
will no longer include any information about InnoDB rw-locks,
only TTASEventMutex (cmake -DMUTEXTYPE=event) waits.
This will make a part of the 'innotop' script dead code.

The block_lock buf_block_t::lock will not be covered by any
PERFORMANCE_SCHEMA instrumentation.

SHOW ENGINE INNODB MUTEX and INFORMATION_SCHEMA.INNODB_MUTEXES
will no longer output source code file names or line numbers.
The dict_index_t::lock will be identified by index and table names,
which should be much more useful. PERFORMANCE_SCHEMA lumps
information about all dict_index_t::lock objects together as
event_name='wait/synch/sxlock/innodb/index_tree_rw_lock'.

buf_page_free(): Remove the file,line parameters. The sux_lock will
not store such diagnostic information.

buf_block_dbg_add_level(): Define as empty macro, to be removed
in a subsequent commit.

Unless the build was configured with cmake -DPLUGIN_PERFSCHEMA=NO,
the index_lock dict_index_t::lock will be instrumented via
PERFORMANCE_SCHEMA. Similarly to
commit 1669c8890c
we will distinguish lock waits by registering shared_lock and exclusive_lock
events instead of try_shared_lock and try_exclusive_lock.
Actual 'try' operations will not be instrumented at all.

rw_lock_list: Remove. After MDEV-24167, this only covered
buf_block_t::lock and dict_index_t::lock. We will output their
information by traversing buf_pool or dict_sys.
Author: Marko Mäkelä 2020-12-03 15:18:51 +02:00
commit 03ca6495df
74 changed files with 1289 additions and 4493 deletions


@@ -11,3 +11,4 @@
##############################################################################
create-index-debug : MDEV-13680 InnoDB may crash when btr_page_alloc() fails
innodb_wl6326_big : MDEV-24142 FIXME: no instrumentation


@@ -243,15 +243,6 @@ innodb_dict_lru_count_idle server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NU
innodb_dblwr_writes server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of doublewrite operations that have been performed (innodb_dblwr_writes)
innodb_dblwr_pages_written server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of pages that have been written for doublewrite operations (innodb_dblwr_pages_written)
innodb_page_size server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 value InnoDB page size in bytes (innodb_page_size)
innodb_rwlock_s_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to shared latch request
innodb_rwlock_x_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to exclusive latch request
innodb_rwlock_sx_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to sx latch request
innodb_rwlock_s_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to shared latch request
innodb_rwlock_x_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to exclusive latch request
innodb_rwlock_sx_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to sx latch request
innodb_rwlock_s_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to shared latch request
innodb_rwlock_x_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to exclusive latch request
innodb_rwlock_sx_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to sx latch request
dml_reads dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows read
dml_inserts dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows inserted
dml_deletes dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows deleted
@@ -373,7 +364,7 @@ SPACE NAME ENCRYPTION_SCHEME KEYSERVER_REQUESTS MIN_KEY_VERSION CURRENT_KEY_VERS
Warnings:
Warning 1012 InnoDB: SELECTing from INFORMATION_SCHEMA.innodb_tablespaces_encryption but the InnoDB storage engine is not installed
select * from information_schema.innodb_mutexes;
NAME CREATE_FILE CREATE_LINE OS_WAITS
NAME OS_WAITS
Warnings:
Warning 1012 InnoDB: SELECTing from INFORMATION_SCHEMA.innodb_mutexes but the InnoDB storage engine is not installed
select * from information_schema.innodb_sys_semaphore_waits;


@@ -209,15 +209,6 @@ innodb_dict_lru_count_idle disabled
innodb_dblwr_writes disabled
innodb_dblwr_pages_written disabled
innodb_page_size disabled
innodb_rwlock_s_spin_waits disabled
innodb_rwlock_x_spin_waits disabled
innodb_rwlock_sx_spin_waits disabled
innodb_rwlock_s_spin_rounds disabled
innodb_rwlock_x_spin_rounds disabled
innodb_rwlock_sx_spin_rounds disabled
innodb_rwlock_s_os_waits disabled
innodb_rwlock_x_os_waits disabled
innodb_rwlock_sx_os_waits disabled
dml_reads disabled
dml_inserts disabled
dml_deletes disabled
@@ -272,15 +263,6 @@ lock_row_lock_time disabled
lock_row_lock_time_max disabled
lock_row_lock_waits disabled
lock_row_lock_time_avg disabled
innodb_rwlock_s_spin_waits disabled
innodb_rwlock_x_spin_waits disabled
innodb_rwlock_sx_spin_waits disabled
innodb_rwlock_s_spin_rounds disabled
innodb_rwlock_x_spin_rounds disabled
innodb_rwlock_sx_spin_rounds disabled
innodb_rwlock_s_os_waits disabled
innodb_rwlock_x_os_waits disabled
innodb_rwlock_sx_os_waits disabled
set global innodb_monitor_enable = "%lock*";
ERROR 42000: Variable 'innodb_monitor_enable' can't be set to the value of '%lock*'
set global innodb_monitor_enable="%%%%%%%%%%%%%%%%%%%%%%%%%%%";


@@ -2,7 +2,5 @@ SHOW CREATE TABLE INFORMATION_SCHEMA.INNODB_MUTEXES;
Table Create Table
INNODB_MUTEXES CREATE TEMPORARY TABLE `INNODB_MUTEXES` (
`NAME` varchar(4000) NOT NULL DEFAULT '',
`CREATE_FILE` varchar(4000) NOT NULL DEFAULT '',
`CREATE_LINE` int(11) unsigned NOT NULL DEFAULT 0,
`OS_WAITS` bigint(21) unsigned NOT NULL DEFAULT 0
) ENGINE=MEMORY DEFAULT CHARSET=utf8


@@ -39,14 +39,14 @@ ORDER BY event_name;
event_name
wait/synch/rwlock/innodb/dict_operation_lock
wait/synch/rwlock/innodb/fil_space_latch
select operation from performance_schema.events_waits_history_long
where event_name like "wait/synch/sxlock/%"
and operation = "shared_lock" limit 1;
operation
shared_lock
select operation from performance_schema.events_waits_history_long
where event_name like "wait/synch/sxlock/%"
and operation = "exclusive_lock" limit 1;
operation
exclusive_lock
SELECT event_name FROM performance_schema.events_waits_history_long
WHERE event_name = 'wait/synch/sxlock/innodb/index_tree_rw_lock'
AND operation IN ('try_shared_lock','shared_lock') LIMIT 1;
event_name
wait/synch/sxlock/innodb/index_tree_rw_lock
SELECT event_name from performance_schema.events_waits_history_long
WHERE event_name = 'wait/synch/sxlock/innodb/index_tree_rw_lock'
AND operation IN ('try_exclusive_lock','exclusive_lock') LIMIT 1;
event_name
wait/synch/sxlock/innodb/index_tree_rw_lock
UPDATE performance_schema.setup_instruments SET enabled = 'YES', timed = 'YES';


@@ -52,26 +52,15 @@ ORDER BY event_name;
# Make sure some shared_lock operations have been executed
select operation from performance_schema.events_waits_history_long
where event_name like "wait/synch/sxlock/%"
and operation = "shared_lock" limit 1;
SELECT event_name FROM performance_schema.events_waits_history_long
WHERE event_name = 'wait/synch/sxlock/innodb/index_tree_rw_lock'
AND operation IN ('try_shared_lock','shared_lock') LIMIT 1;
# Make sure some exclusive_lock operations have been executed
select operation from performance_schema.events_waits_history_long
where event_name like "wait/synch/sxlock/%"
and operation = "exclusive_lock" limit 1;
# The following operations are not verified in this test:
# - shared_exclusive_lock
# - try_shared_lock
# - try_shared_exclusive_lock
# - try_exclusive_lock
# because to observe them:
# - there must be an actual code path using the operation
# (this affects try operations, which are not all used)
# - there must be a repeatable scenario to trigger the
# code path, to use as payload in the test script
SELECT event_name from performance_schema.events_waits_history_long
WHERE event_name = 'wait/synch/sxlock/innodb/index_tree_rw_lock'
AND operation IN ('try_exclusive_lock','exclusive_lock') LIMIT 1;
# Cleanup


@@ -234,6 +234,7 @@ SET(INNOBASE_SOURCES
include/row0upd.h
include/row0upd.ic
include/row0vers.h
include/rw_lock.h
include/srv0mon.h
include/srv0mon.ic
include/srv0srv.h
@@ -242,8 +243,7 @@ SET(INNOBASE_SOURCES
include/sync0arr.ic
include/sync0debug.h
include/sync0policy.h
include/sync0rw.h
include/sync0rw.ic
include/sux_lock.h
include/sync0sync.h
include/sync0types.h
include/trx0i_s.h
@@ -329,7 +329,6 @@ SET(INNOBASE_SOURCES
srv/srv0start.cc
sync/srw_lock.cc
sync/sync0arr.cc
sync/sync0rw.cc
sync/sync0debug.cc
sync/sync0sync.cc
trx/trx0i_s.cc


@@ -299,8 +299,6 @@ btr_height_get(
/* Release the S latch on the root page. */
mtr->memo_release(root_block, MTR_MEMO_PAGE_S_FIX);
ut_d(sync_check_unlock(&root_block->lock));
}
return(height);
@@ -728,7 +726,7 @@ void btr_page_free(dict_index_t* index, buf_block_t* block, mtr_t* mtr,
: PAGE_HEADER + PAGE_BTR_SEG_TOP];
fseg_free_page(seg_header,
index->table->space, id.page_no(), mtr, space_latched);
buf_page_free(id, mtr, __FILE__, __LINE__);
buf_page_free(id, mtr);
/* The page was marked free in the allocation bitmap, but it
should remain exclusively latched until mtr_t::commit() or until it
@@ -2782,8 +2780,7 @@ func_start:
ut_ad(!dict_index_is_online_ddl(cursor->index)
|| (flags & BTR_CREATE_FLAG)
|| dict_index_is_clust(cursor->index));
ut_ad(rw_lock_own_flagged(dict_index_get_lock(cursor->index),
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(cursor->index->lock.have_u_or_x());
block = btr_cur_get_block(cursor);
page = buf_block_get_frame(block);
@@ -2933,9 +2930,8 @@ insert_empty:
&& page_is_leaf(page)
&& !dict_index_is_online_ddl(cursor->index)) {
mtr->memo_release(
dict_index_get_lock(cursor->index),
MTR_MEMO_X_LOCK | MTR_MEMO_SX_LOCK);
mtr->memo_release(&cursor->index->lock,
MTR_MEMO_X_LOCK | MTR_MEMO_SX_LOCK);
/* NOTE: We cannot release root block latch here, because it
has segment header and already modified in most of cases.*/


@@ -834,7 +834,7 @@ PageBulk::release()
finish();
/* We fix the block because we will re-pin it soon. */
buf_block_buf_fix_inc(m_block, __FILE__, __LINE__);
buf_block_buf_fix_inc(m_block);
/* No other threads can modify this block. */
m_modify_clock = buf_block_get_modify_clock(m_block);
@@ -949,9 +949,7 @@ BtrBulk::pageCommit(
page_bulk->set_modified();
}
ut_ad(!rw_lock_own_flagged(&m_index->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX
| RW_LOCK_FLAG_S));
ut_ad(!m_index->lock.have_any());
/* Compress page if it's a compressed table. */
if (page_bulk->getPageZip() != NULL && !page_bulk->compress()) {


@@ -344,9 +344,9 @@ btr_cur_latch_leaves(
case BTR_MODIFY_PREV:
mode = latch_mode == BTR_SEARCH_PREV ? RW_S_LATCH : RW_X_LATCH;
/* latch also left sibling */
rw_lock_s_lock(&block->lock);
block->lock.s_lock();
left_page_no = btr_page_get_prev(block->frame);
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
if (left_page_no != FIL_NULL) {
latch_leaves.savepoints[0] = mtr_set_savepoint(mtr);
@@ -783,14 +783,14 @@ btr_cur_optimistic_latch_leaves(
modify_clock, file, line, mtr));
case BTR_SEARCH_PREV:
case BTR_MODIFY_PREV:
rw_lock_s_lock(&block->lock);
block->lock.s_lock();
if (block->modify_clock != modify_clock) {
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
return false;
}
const uint32_t curr_page_no = block->page.id().page_no();
const uint32_t left_page_no = btr_page_get_prev(block->frame);
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
const rw_lock_type_t mode = *latch_mode == BTR_SEARCH_PREV
? RW_S_LATCH : RW_X_LATCH;
@@ -1481,27 +1481,31 @@ x_latch_index:
upper_rw_latch = RW_X_LATCH;
break;
case BTR_CONT_MODIFY_TREE:
ut_ad(srv_read_only_mode
|| mtr->memo_contains_flagged(&index->lock,
MTR_MEMO_X_LOCK
| MTR_MEMO_SX_LOCK));
if (index->is_spatial()) {
/* If we are about to locate parent page for split
and/or merge operation for R-Tree index, X latch
the parent */
upper_rw_latch = RW_X_LATCH;
break;
}
/* fall through */
case BTR_CONT_SEARCH_TREE:
/* Do nothing */
ut_ad(srv_read_only_mode
|| mtr->memo_contains_flagged(&index->lock,
MTR_MEMO_X_LOCK
| MTR_MEMO_SX_LOCK));
if (dict_index_is_spatial(index)
&& latch_mode == BTR_CONT_MODIFY_TREE) {
/* If we are about to locating parent page for split
and/or merge operation for R-Tree index, X latch
the parent */
upper_rw_latch = RW_X_LATCH;
} else {
upper_rw_latch = RW_NO_LATCH;
}
upper_rw_latch = RW_NO_LATCH;
break;
default:
if (!srv_read_only_mode) {
if (s_latch_by_caller) {
ut_ad(rw_lock_own(dict_index_get_lock(index),
RW_LOCK_S));
ut_ad(mtr->memo_contains_flagged(
&index->lock, MTR_MEMO_S_LOCK));
} else if (!modify_external) {
/* BTR_SEARCH_TREE is intended to be used with
BTR_ALREADY_S_LATCHED */
@@ -1710,9 +1714,9 @@ retry_page_get:
rw_latch = upper_rw_latch;
rw_lock_s_lock(&block->lock);
block->lock.s_lock();
left_page_no = btr_page_get_prev(buf_block_get_frame(block));
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
if (left_page_no != FIL_NULL) {
ut_ad(prev_n_blocks < leftmost_from_level);
@@ -1856,7 +1860,7 @@ retry_page_get:
needs to keep tree sx-latch */
mtr_release_s_latch_at_savepoint(
mtr, savepoint,
dict_index_get_lock(index));
&index->lock);
}
/* release upper blocks */
@@ -2013,14 +2017,14 @@ retry_page_get:
lock_mutex_exit();
if (rw_latch == RW_NO_LATCH && height != 0) {
rw_lock_s_lock(&(block->lock));
block->lock.s_lock();
}
lock_prdt_lock(block, &prdt, index, LOCK_S,
LOCK_PREDICATE, cursor->thr);
if (rw_latch == RW_NO_LATCH && height != 0) {
rw_lock_s_unlock(&(block->lock));
block->lock.s_unlock();
}
}
@@ -2094,7 +2098,7 @@ need_opposite_intention:
ut_ad(mtr->memo_contains_flagged(
&index->lock, MTR_MEMO_X_LOCK
| MTR_MEMO_SX_LOCK));
rw_lock_s_lock(&block->lock);
block->lock.s_lock();
add_latch = true;
}
@@ -2126,7 +2130,7 @@ need_opposite_intention:
}
if (add_latch) {
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
}
ut_ad(!page_rec_is_supremum(node_ptr));
@@ -3042,7 +3046,7 @@ btr_cur_open_at_rnd_pos_func(
if (!srv_read_only_mode) {
mtr_release_s_latch_at_savepoint(
mtr, savepoint,
dict_index_get_lock(index));
&index->lock);
}
/* release upper blocks */
@@ -5147,7 +5151,7 @@ btr_cur_pessimistic_update(
&& page_is_leaf(block->frame)
&& !dict_index_is_online_ddl(index)) {
mtr_memo_release(mtr, dict_index_get_lock(index),
mtr_memo_release(mtr, &index->lock,
MTR_MEMO_X_LOCK | MTR_MEMO_SX_LOCK);
/* NOTE: We cannot release root block latch here, because it
@@ -5884,7 +5888,7 @@ return_after_reservations:
&& page_is_leaf(page)
&& !dict_index_is_online_ddl(index)) {
mtr_memo_release(mtr, dict_index_get_lock(index),
mtr_memo_release(mtr, &index->lock,
MTR_MEMO_X_LOCK | MTR_MEMO_SX_LOCK);
/* NOTE: We cannot release root block latch here, because it
@@ -7120,7 +7124,7 @@ struct btr_blob_log_check_t {
if (UNIV_UNLIKELY(m_op == BTR_STORE_INSERT_BULK)) {
offs = page_offset(*m_rec);
page_no = (*m_block)->page.id().page_no();
buf_block_buf_fix_inc(*m_block, __FILE__, __LINE__);
buf_block_buf_fix_inc(*m_block);
ut_ad(page_no != FIL_NULL);
} else {
btr_pcur_store_position(m_pcur, m_mtr);
@@ -7662,9 +7666,6 @@ btr_free_externally_stored_field(
ut_ad(rec || !block->page.zip.data);
for (;;) {
#ifdef UNIV_DEBUG
buf_block_t* rec_block;
#endif /* UNIV_DEBUG */
buf_block_t* ext_block;
mtr_start(&mtr);
@@ -7679,9 +7680,9 @@ btr_free_externally_stored_field(
const page_id_t page_id(page_get_space_id(p),
page_get_page_no(p));
#ifdef UNIV_DEBUG
rec_block =
#endif /* UNIV_DEBUG */
#if 0
buf_block_t* rec_block =
#endif
buf_page_get(page_id, rec_zip_size, RW_X_LATCH, &mtr);
buf_block_dbg_add_level(rec_block, SYNC_NO_ORDER_CHECK);


@@ -207,7 +207,7 @@ ATTRIBUTE_COLD static void btr_search_lazy_free(dict_index_t *index)
dict_table_t *table= index->table;
/* Perform the skipped steps of dict_index_remove_from_cache_low(). */
UT_LIST_REMOVE(table->freed_indexes, index);
rw_lock_free(&index->lock);
index->lock.free();
dict_mem_index_free(index);
if (!UT_LIST_GET_LEN(table->freed_indexes) &&
@@ -406,8 +406,7 @@ static
bool
btr_search_update_block_hash_info(btr_search_t* info, buf_block_t* block)
{
ut_ad(rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
ut_ad(block->lock.have_x() || block->lock.have_s());
info->last_hash_succ = FALSE;
ut_d(auto state= block->page.state());
@@ -695,8 +694,7 @@ btr_search_update_hash_ref(
{
ut_ad(cursor->flag == BTR_CUR_HASH_FAIL);
ut_ad(rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
ut_ad(block->lock.have_x() || block->lock.have_s());
ut_ad(page_align(btr_cur_get_rec(cursor)) == block->frame);
ut_ad(page_is_leaf(block->frame));
assert_block_ahi_valid(block);
@@ -1097,23 +1095,21 @@ fail:
ut_ad(block->page.state() == BUF_BLOCK_FILE_PAGE);
DBUG_ASSERT(fail || block->page.status != buf_page_t::FREED);
buf_block_buf_fix_inc(block, __FILE__, __LINE__);
buf_block_buf_fix_inc(block);
hash_lock->read_unlock();
block->page.set_accessed();
buf_page_make_young_if_needed(&block->page);
mtr_memo_type_t fix_type;
if (latch_mode == BTR_SEARCH_LEAF) {
if (!rw_lock_s_lock_nowait(&block->lock,
__FILE__, __LINE__)) {
if (!block->lock.s_lock_try()) {
got_no_latch:
buf_block_buf_fix_dec(block);
goto fail;
}
fix_type = MTR_MEMO_PAGE_S_FIX;
} else {
if (!rw_lock_x_lock_func_nowait_inline(
&block->lock, __FILE__, __LINE__)) {
if (!block->lock.x_lock_try()) {
goto got_no_latch;
}
fix_type = MTR_MEMO_PAGE_X_FIX;
@@ -1251,9 +1247,7 @@ retry:
ut_ad(!block->page.buf_fix_count()
|| block->page.state() == BUF_BLOCK_REMOVE_HASH
|| rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S
| RW_LOCK_FLAG_SX));
|| block->lock.have_any());
ut_ad(page_is_leaf(block->frame));
/* We must not dereference block->index here, because it could be freed
@@ -1423,7 +1417,7 @@ void btr_search_drop_page_hash_when_freed(const page_id_t page_id)
/* In all our callers, the table handle should
be open, or we should be in the process of
dropping the table (preventing eviction). */
ut_ad(index->table->get_ref_count() > 0
ut_ad(block->index->table->get_ref_count() > 0
|| mutex_own(&dict_sys.mutex));
btr_search_drop_page_hash_index(block);
}
@@ -1478,8 +1472,7 @@ btr_search_build_page_hash_index(
ut_ad(!dict_index_is_ibuf(index));
ut_ad(page_is_leaf(block->frame));
ut_ad(rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
ut_ad(block->lock.have_x() || block->lock.have_s());
ut_ad(block->page.id().page_no() >= 3);
ahi_latch->rd_lock(SRW_LOCK_CALL);
@@ -1701,8 +1694,8 @@ btr_search_move_or_delete_hash_entries(
buf_block_t* new_block,
buf_block_t* block)
{
ut_ad(rw_lock_own(&(block->lock), RW_LOCK_X));
ut_ad(rw_lock_own(&(new_block->lock), RW_LOCK_X));
ut_ad(block->lock.have_x());
ut_ad(new_block->lock.have_x());
if (!btr_search_enabled) {
return;
@@ -1781,7 +1774,7 @@ void btr_search_update_hash_on_delete(btr_cur_t* cursor)
block = btr_cur_get_block(cursor);
ut_ad(rw_lock_own(&(block->lock), RW_LOCK_X));
ut_ad(block->lock.have_x());
assert_block_ahi_valid(block);
index = block->index;
@@ -1850,7 +1843,7 @@ void btr_search_update_hash_node_on_insert(btr_cur_t *cursor,
block = btr_cur_get_block(cursor);
ut_ad(rw_lock_own(&(block->lock), RW_LOCK_X));
ut_ad(block->lock.have_x());
index = block->index;
@@ -1927,7 +1920,7 @@ void btr_search_update_hash_on_insert(btr_cur_t *cursor,
block = btr_cur_get_block(cursor);
ut_ad(rw_lock_own(&(block->lock), RW_LOCK_X));
ut_ad(block->lock.have_x());
assert_block_ahi_valid(block);
index = block->index;


@@ -50,7 +50,7 @@ void Block_hint::buffer_fix_block_if_still_valid()
page_hash_latch *hash_lock= buf_pool.page_hash.lock<false>(fold);
if (buf_pool.is_uncompressed(m_block) && m_page_id == m_block->page.id() &&
m_block->page.state() == BUF_BLOCK_FILE_PAGE)
buf_block_buf_fix_inc(m_block, __FILE__, __LINE__);
buf_block_buf_fix_inc(m_block);
else
clear();
hash_lock->read_unlock();


@@ -48,7 +48,6 @@ Created 11/5/1995 Heikki Tuuri
#include "buf0buddy.h"
#include "buf0dblwr.h"
#include "lock0lock.h"
#include "sync0rw.h"
#include "btr0sea.h"
#include "ibuf0ibuf.h"
#include "trx0undo.h"
@@ -1206,26 +1205,20 @@ buf_block_init(buf_block_t* block, byte* frame)
block->frame = frame;
block->modify_clock = 0;
MEM_MAKE_DEFINED(&block->modify_clock, sizeof block->modify_clock);
ut_ad(!block->modify_clock);
block->page.init(BUF_BLOCK_NOT_USED, page_id_t(~0ULL));
#ifdef BTR_CUR_HASH_ADAPT
block->index = NULL;
MEM_MAKE_DEFINED(&block->index, sizeof block->index);
ut_ad(!block->index);
#endif /* BTR_CUR_HASH_ADAPT */
ut_d(block->in_unzip_LRU_list = false);
ut_d(block->in_withdraw_list = false);
page_zip_des_init(&block->page.zip);
ut_d(block->debug_latch = (rw_lock_t *) ut_malloc_nokey(sizeof(rw_lock_t)));
rw_lock_create(PFS_NOT_INSTRUMENTED, &block->lock, SYNC_LEVEL_VARYING);
ut_d(rw_lock_create(PFS_NOT_INSTRUMENTED, block->debug_latch,
SYNC_LEVEL_VARYING));
block->lock.is_block_lock = 1;
ut_ad(rw_lock_validate(&(block->lock)));
MEM_MAKE_DEFINED(&block->lock, sizeof block->lock);
block->lock.init();
}
/** Allocate a chunk of buffer frames.
@@ -1361,9 +1354,7 @@ inline const buf_block_t *buf_pool_t::chunk_t::not_freed() const
@param[in,out] block buffer pool block descriptor */
static void buf_block_free_mutexes(buf_block_t* block)
{
rw_lock_free(&block->lock);
ut_d(rw_lock_free(block->debug_latch));
ut_d(ut_free(block->debug_latch));
block->lock.free();
}
/** Create the hash table.
@@ -2482,13 +2473,8 @@ as FREED. It avoids the concurrent flushing of freed page.
Currently, this function only marks the page as FREED if it is
in buffer pool.
@param[in] page_id page id
@param[in,out] mtr mini-transaction
@param[in] file file name
@param[in] line line where called */
void buf_page_free(const page_id_t page_id,
mtr_t *mtr,
const char *file,
unsigned line)
@param[in,out] mtr mini-transaction */
void buf_page_free(const page_id_t page_id, mtr_t *mtr)
{
ut_ad(mtr);
ut_ad(mtr->is_active());
@@ -2511,14 +2497,12 @@ void buf_page_free(const page_id_t page_id,
return;
}
block->fix();
ut_ad(block->page.buf_fix_count());
ut_ad(fsp_is_system_temporary(page_id.space()) ||
rw_lock_s_lock_nowait(block->debug_latch, file, line));
mtr_memo_type_t fix_type= MTR_MEMO_PAGE_X_FIX;
rw_lock_x_lock_inline(&block->lock, 0, file, line);
mtr_memo_push(mtr, block, fix_type);
buf_block_buf_fix_inc(block);
ut_ad(block->page.buf_fix_count());
mtr->memo_push(block, MTR_MEMO_PAGE_X_FIX);
block->lock.x_lock();
block->page.status= buf_page_t::FREED;
buf_block_dbg_add_level(block, SYNC_NO_ORDER_CHECK);
@@ -2580,9 +2564,6 @@ err_exit:
ut_ad(!buf_pool.watch_is_sentinel(*bpage));
switch (bpage->state()) {
case BUF_BLOCK_ZIP_PAGE:
bpage->fix();
goto got_block;
case BUF_BLOCK_FILE_PAGE:
/* Discard the uncompressed page frame if possible. */
if (!discard_attempted)
@@ -2595,9 +2576,9 @@ err_exit:
mysql_mutex_unlock(&buf_pool.mutex);
goto lookup;
}
buf_block_buf_fix_inc(reinterpret_cast<buf_block_t*>(bpage),
__FILE__, __LINE__);
/* fall through */
case BUF_BLOCK_ZIP_PAGE:
bpage->fix();
goto got_block;
default:
break;
@@ -2765,98 +2746,11 @@ buf_wait_for_read(
added to the page hashtable. */
while (block->page.io_fix() == BUF_IO_READ) {
rw_lock_s_lock(&block->lock);
rw_lock_s_unlock(&block->lock);
block->lock.s_lock();
block->lock.s_unlock();
}
}
#ifdef BTR_CUR_HASH_ADAPT
/** If a stale adaptive hash index exists on the block, drop it.
Multiple executions of btr_search_drop_page_hash_index() on the
same block must be prevented by exclusive page latch. */
ATTRIBUTE_COLD
static void buf_defer_drop_ahi(buf_block_t *block, mtr_memo_type_t fix_type)
{
switch (fix_type) {
case MTR_MEMO_BUF_FIX:
/* We do not drop the adaptive hash index, because safely doing
so would require acquiring block->lock, and that is not safe
to acquire in some RW_NO_LATCH access paths. Those code paths
should have no business accessing the adaptive hash index anyway. */
break;
case MTR_MEMO_PAGE_S_FIX:
/* Temporarily release our S-latch. */
rw_lock_s_unlock(&block->lock);
rw_lock_x_lock(&block->lock);
if (dict_index_t *index= block->index)
if (index->freed())
btr_search_drop_page_hash_index(block);
rw_lock_x_unlock(&block->lock);
rw_lock_s_lock(&block->lock);
break;
case MTR_MEMO_PAGE_SX_FIX:
rw_lock_sx_unlock(&block->lock);
rw_lock_x_lock(&block->lock);
if (dict_index_t *index= block->index)
if (index->freed())
btr_search_drop_page_hash_index(block);
rw_lock_x_unlock(&block->lock);
rw_lock_sx_lock(&block->lock);
break;
default:
ut_ad(fix_type == MTR_MEMO_PAGE_X_FIX);
btr_search_drop_page_hash_index(block);
}
}
#endif /* BTR_CUR_HASH_ADAPT */
/** Lock the page with the given latch type.
@param[in,out] block block to be locked
@param[in] rw_latch RW_S_LATCH, RW_X_LATCH, RW_NO_LATCH
@param[in] mtr mini-transaction
@param[in] file file name
@param[in] line line where called
@return pointer to locked block */
static buf_block_t* buf_page_mtr_lock(buf_block_t *block,
ulint rw_latch,
mtr_t* mtr,
const char *file,
unsigned line)
{
mtr_memo_type_t fix_type;
switch (rw_latch)
{
case RW_NO_LATCH:
fix_type= MTR_MEMO_BUF_FIX;
goto done;
case RW_S_LATCH:
rw_lock_s_lock_inline(&block->lock, 0, file, line);
fix_type= MTR_MEMO_PAGE_S_FIX;
break;
case RW_SX_LATCH:
rw_lock_sx_lock_inline(&block->lock, 0, file, line);
fix_type= MTR_MEMO_PAGE_SX_FIX;
break;
default:
ut_ad(rw_latch == RW_X_LATCH);
rw_lock_x_lock_inline(&block->lock, 0, file, line);
fix_type= MTR_MEMO_PAGE_X_FIX;
break;
}
#ifdef BTR_CUR_HASH_ADAPT
{
dict_index_t *index= block->index;
if (index && index->freed())
buf_defer_drop_ahi(block, fix_type);
}
#endif /* BTR_CUR_HASH_ADAPT */
done:
mtr_memo_push(mtr, block, fix_type);
return block;
}
/** Low level function used to get access to a database page.
@param[in] page_id page id
@param[in] zip_size ROW_FORMAT=COMPRESSED page size, or 0
@@ -3221,7 +3115,7 @@ evict_from_pool:
buf_unzip_LRU_add_block(block, FALSE);
block->page.set_io_fix(BUF_IO_READ);
rw_lock_x_lock_inline(&block->lock, 0, file, line);
block->lock.x_lock();
MEM_UNDEFINED(bpage, sizeof *bpage);
@@ -3242,7 +3136,7 @@ evict_from_pool:
buf_pool.mutex. */
if (!buf_zip_decompress(block, false)) {
rw_lock_x_unlock(&fix_block->lock);
fix_block->lock.x_unlock();
fix_block->page.io_unfix();
fix_block->unfix();
--buf_pool.n_pend_unzip;
@@ -3253,7 +3147,7 @@ evict_from_pool:
return NULL;
}
rw_lock_x_unlock(&block->lock);
block->lock.x_unlock();
fix_block->page.io_unfix();
--buf_pool.n_pend_unzip;
break;
@@ -3324,19 +3218,6 @@ re_evict:
ut_ad(fix_block->page.buf_fix_count());
#ifdef UNIV_DEBUG
/* We have already buffer fixed the page, and we are committed to
returning this page to the caller. Register for debugging.
Avoid debug latching if page/block belongs to system temporary
tablespace (Not much needed for table with single threaded access.). */
if (!fsp_is_system_temporary(page_id.space())) {
ibool ret;
ret = rw_lock_s_lock_nowait(
fix_block->debug_latch, file, line);
ut_a(ret);
}
#endif /* UNIV_DEBUG */
/* While tablespace is reinited the indexes are already freed but the
blocks related to it still resides in buffer pool. Trying to remove
such blocks from buffer pool would invoke removal of AHI entries
@@ -3364,13 +3245,7 @@ re_evict:
buf_wait_for_read(fix_block);
if (fix_block->page.id() != page_id) {
fix_block->unfix();
#ifdef UNIV_DEBUG
if (!fsp_is_system_temporary(page_id.space())) {
rw_lock_s_unlock(fix_block->debug_latch);
}
#endif /* UNIV_DEBUG */
buf_block_buf_fix_dec(fix_block);
if (err) {
*err = DB_PAGE_CORRUPTED;
@@ -3383,7 +3258,7 @@ re_evict:
&& allow_ibuf_merge
&& fil_page_get_type(fix_block->frame) == FIL_PAGE_INDEX
&& page_is_leaf(fix_block->frame)) {
rw_lock_x_lock_inline(&fix_block->lock, 0, file, line);
fix_block->lock.x_lock();
if (fix_block->page.ibuf_exist) {
fix_block->page.ibuf_exist = false;
@@ -3394,13 +3269,12 @@ re_evict:
if (rw_latch == RW_X_LATCH) {
mtr->memo_push(fix_block, MTR_MEMO_PAGE_X_FIX);
} else {
rw_lock_x_unlock(&fix_block->lock);
fix_block->lock.x_unlock();
goto get_latch;
}
} else {
get_latch:
fix_block = buf_page_mtr_lock(fix_block, rw_latch, mtr,
file, line);
mtr->page_lock(fix_block, rw_latch);
}
if (!not_first_access && mode != BUF_PEEK_IF_IN_POOL) {
@@ -3442,8 +3316,7 @@ buf_page_get_gen(
{
if (buf_block_t *block= recv_sys.recover(page_id))
{
block->fix();
ut_ad(rw_lock_s_lock_nowait(block->debug_latch, file, line));
buf_block_buf_fix_inc(block);
if (err)
*err= DB_SUCCESS;
const bool must_merge= allow_ibuf_merge &&
@@ -3453,7 +3326,7 @@ buf_page_get_gen(
else if (must_merge && fil_page_get_type(block->frame) == FIL_PAGE_INDEX &&
page_is_leaf(block->frame))
{
rw_lock_x_lock_inline(&block->lock, 0, file, line);
block->lock.x_lock();
block->page.ibuf_exist= false;
ibuf_merge_or_delete_for_page(block, page_id, block->zip_size());
@@ -3462,9 +3335,9 @@ buf_page_get_gen(
mtr->memo_push(block, MTR_MEMO_PAGE_X_FIX);
return block;
}
rw_lock_x_unlock(&block->lock);
block->lock.x_unlock();
}
block= buf_page_mtr_lock(block, rw_latch, mtr, file, line);
mtr->page_lock(block, rw_latch);
return block;
}
@@ -3510,7 +3383,7 @@ buf_page_optimistic_get(
return(FALSE);
}
buf_block_buf_fix_inc(block, file, line);
buf_block_buf_fix_inc(block);
hash_lock->read_unlock();
block->page.set_accessed();
@ -3523,11 +3396,17 @@ buf_page_optimistic_get(
if (rw_latch == RW_S_LATCH) {
fix_type = MTR_MEMO_PAGE_S_FIX;
success = rw_lock_s_lock_nowait(&block->lock, file, line);
success = block->lock.s_lock_try();
} else if (block->lock.have_u_not_x()) {
block->lock.u_x_upgrade();
mtr->page_lock_upgrade(*block);
ut_ad(id == block->page.id());
ut_ad(modify_clock == block->modify_clock);
buf_block_buf_fix_dec(block);
goto func_exit;
} else {
fix_type = MTR_MEMO_PAGE_X_FIX;
success = rw_lock_x_lock_func_nowait_inline(
&block->lock, file, line);
success = block->lock.x_lock_try();
}
ut_ad(id == block->page.id());
@ -3542,9 +3421,9 @@ buf_page_optimistic_get(
buf_block_dbg_add_level(block, SYNC_NO_ORDER_CHECK);
if (rw_latch == RW_S_LATCH) {
rw_lock_s_unlock(&block->lock);
block->lock.s_unlock();
} else {
rw_lock_x_unlock(&block->lock);
block->lock.x_unlock();
}
buf_block_buf_fix_dec(block);
@ -3552,7 +3431,7 @@ buf_page_optimistic_get(
}
mtr_memo_push(mtr, block, fix_type);
func_exit:
#ifdef UNIV_DEBUG
if (!(++buf_dbg_counter % 5771)) buf_pool.validate();
#endif /* UNIV_DEBUG */
@ -3595,23 +3474,19 @@ buf_page_try_get_func(
}
buf_block_t *block= reinterpret_cast<buf_block_t*>(bpage);
buf_block_buf_fix_inc(block, file, line);
buf_block_buf_fix_inc(block);
hash_lock->read_unlock();
mtr_memo_type_t fix_type= MTR_MEMO_PAGE_S_FIX;
if (!rw_lock_s_lock_nowait(&block->lock, file, line))
/* We will always try to acquire an U latch.
In lock_rec_print() we may already be holding an S latch on the page,
and recursive S latch acquisition is not allowed. */
if (!block->lock.u_lock_try(false))
{
/* Let us try to get an X-latch. If the current thread
is holding an X-latch on the page, we cannot get an S-latch. */
fix_type= MTR_MEMO_PAGE_X_FIX;
if (!rw_lock_x_lock_func_nowait_inline(&block->lock, file, line))
{
buf_block_buf_fix_dec(block);
return nullptr;
}
buf_block_buf_fix_dec(block);
return nullptr;
}
mtr_memo_push(mtr, block, fix_type);
mtr_memo_push(mtr, block, MTR_MEMO_PAGE_SX_FIX);
#ifdef UNIV_DEBUG
if (!(++buf_dbg_counter % 5771)) buf_pool.validate();
@ -3679,8 +3554,8 @@ loop:
case BUF_BLOCK_FILE_PAGE:
if (!mtr->have_x_latch(*block))
{
buf_block_buf_fix_inc(block, __FILE__, __LINE__);
while (!rw_lock_x_lock_nowait(&block->lock))
buf_block_buf_fix_inc(block);
while (!block->lock.x_lock_try())
{
/* Wait for buf_page_write_complete() to release block->lock.
We must not hold buf_pool.mutex while waiting. */
@ -3716,7 +3591,7 @@ loop:
goto loop;
}
rw_lock_x_lock(&free_block->lock);
free_block->lock.x_lock();
buf_relocate(&block->page, &free_block->page);
buf_flush_relocate_on_flush_list(&block->page, &free_block->page);
@ -3725,7 +3600,7 @@ loop:
hash_lock->write_unlock();
buf_page_free_descriptor(&block->page);
block= free_block;
buf_block_buf_fix_inc(block, __FILE__, __LINE__);
buf_block_buf_fix_inc(block);
mtr_memo_push(mtr, block, MTR_MEMO_PAGE_X_FIX);
break;
}
@ -3754,10 +3629,7 @@ loop:
block= free_block;
/* Duplicate buf_block_buf_fix_inc_func() */
ut_ad(block->page.buf_fix_count() == 1);
ut_ad(fsp_is_system_temporary(page_id.space()) ||
rw_lock_s_lock_nowait(block->debug_latch, __FILE__, __LINE__));
/* The block must be put to the LRU list */
buf_LRU_add_block(&block->page, false);
@ -3767,7 +3639,7 @@ loop:
ut_d(block->page.in_page_hash= true);
HASH_INSERT(buf_page_t, hash, &buf_pool.page_hash, fold, &block->page);
rw_lock_x_lock(&block->lock);
block->lock.x_lock();
if (UNIV_UNLIKELY(zip_size))
{
/* Prevent race conditions during buf_buddy_alloc(), which may
@ -3954,9 +3826,7 @@ ATTRIBUTE_COLD void buf_pool_t::corrupted_evict(buf_page_t *bpage)
bpage->set_corrupt_id();
if (bpage->state() == BUF_BLOCK_FILE_PAGE)
rw_lock_x_unlock_gen(&reinterpret_cast<buf_block_t*>(bpage)->lock,
BUF_IO_READ);
reinterpret_cast<buf_block_t*>(bpage)->lock.x_unlock(true);
bpage->io_unfix();
/* remove from LRU and page_hash */
@ -4227,7 +4097,10 @@ release_page:
did the locking, we use a pass value != 0 in unlock, which simply
removes the newest lock debug record, without checking the thread id. */
if (bpage->state() == BUF_BLOCK_FILE_PAGE)
rw_lock_x_unlock_gen(&((buf_block_t*) bpage)->lock, BUF_IO_READ);
{
buf_block_t *block= reinterpret_cast<buf_block_t*>(bpage);
block->lock.x_unlock(true);
}
bpage->io_unfix();
ut_d(auto n=) buf_pool.n_pend_reads--;
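The SUX semantics exercised throughout the hunks above (S compatible with U but not with X, U exclusive against other U or X holders, and an upgrade from U to X) can be sketched with a toy lock word. This is a hedged illustration of the rules stated in the commit message, not the actual sux_lock or block_lock implementation:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Toy illustration (an assumption, not the MariaDB code) of SUX compatibility:
// any number of S holders may coexist with one U holder; X excludes everything.
class toy_sux_lock
{
  // Bit 31: X held; bit 30: U held; low bits: number of S holders.
  std::atomic<uint32_t> word{0};
  static constexpr uint32_t X= 1U << 31, U= 1U << 30;
public:
  bool s_lock_try()
  {
    uint32_t w= word.load();
    while (!(w & X))                       // S is blocked only by X
      if (word.compare_exchange_weak(w, w + 1))
        return true;
    return false;
  }
  void s_unlock() { word.fetch_sub(1); }
  bool u_lock_try()
  {
    uint32_t w= word.load();
    while (!(w & (U | X)))                 // U is blocked by U and X, not S
      if (word.compare_exchange_weak(w, w | U))
        return true;
    return false;
  }
  void u_unlock() { word.fetch_and(~U); }
  bool x_lock_try()
  {
    uint32_t w= 0;                         // X requires no holders at all
    return word.compare_exchange_strong(w, X);
  }
  bool u_x_upgrade_try()
  {
    uint32_t w= U;                         // upgrade succeeds once S count is 0
    return word.compare_exchange_strong(w, X);
  }
  void x_unlock() { word.store(0); }
};
```

The real lock must also wait (not merely try) and support the re-entrancy and FOR_IO ownership described in the commit message; those are omitted here for brevity.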


@ -156,7 +156,7 @@ too_small:
tablespace, then the page has not been written to in
doublewrite. */
ut_ad(rw_lock_get_x_lock_count(&new_block->lock) == 1);
ut_ad(new_block->lock.not_recursive());
const page_id_t id= new_block->page.id();
/* We only do this in the debug build, to ensure that the check in
buf_flush_init_for_writing() will see a valid page type. The


@ -38,7 +38,6 @@ Created April 08, 2011 Vasil Dimov
#include "os0thread.h"
#include "srv0srv.h"
#include "srv0start.h"
#include "sync0rw.h"
#include "ut0byte.h"
#include <algorithm>


@ -369,11 +369,8 @@ void buf_page_write_complete(const IORequest &request)
buf_dblwr.write_completed();
}
/* Because this thread which does the unlocking might not be the same that
did the locking, we use a pass value != 0 in unlock, which simply
removes the newest lock debug record, without checking the thread id. */
if (bpage->state() == BUF_BLOCK_FILE_PAGE)
rw_lock_sx_unlock_gen(&((buf_block_t*) bpage)->lock, BUF_IO_WRITE);
reinterpret_cast<buf_block_t*>(bpage)->lock.u_unlock(true);
buf_pool.stat.n_pages_written++;
@ -792,8 +789,7 @@ static void buf_release_freed_page(buf_page_t *bpage)
mysql_mutex_unlock(&buf_pool.flush_list_mutex);
if (uncompressed)
rw_lock_sx_unlock_gen(&reinterpret_cast<buf_block_t*>(bpage)->lock,
BUF_IO_WRITE);
reinterpret_cast<buf_block_t*>(bpage)->lock.u_unlock(true);
buf_LRU_free_page(bpage, true);
mysql_mutex_unlock(&buf_pool.mutex);
@ -815,14 +811,14 @@ static bool buf_flush_page(buf_page_t *bpage, bool lru, fil_space_t *space)
space->atomic_write_supported);
ut_ad(space->referenced());
rw_lock_t *rw_lock;
block_lock *rw_lock;
if (bpage->state() != BUF_BLOCK_FILE_PAGE)
rw_lock= nullptr;
else
{
rw_lock= &reinterpret_cast<buf_block_t*>(bpage)->lock;
if (!rw_lock_sx_lock_nowait(rw_lock, BUF_IO_WRITE))
if (!rw_lock->u_lock_try(true))
return false;
}
@ -870,7 +866,7 @@ static bool buf_flush_page(buf_page_t *bpage, bool lru, fil_space_t *space)
if (UNIV_UNLIKELY(lsn > log_sys.get_flushed_lsn()))
{
if (rw_lock)
rw_lock_sx_unlock_gen(rw_lock, BUF_IO_WRITE);
rw_lock->u_unlock(true);
mysql_mutex_lock(&buf_pool.mutex);
bpage->set_io_fix(BUF_IO_NONE);
return false;
@ -1221,14 +1217,14 @@ static void buf_flush_discard_page(buf_page_t *bpage)
ut_ad(bpage->in_file());
ut_ad(bpage->oldest_modification());
rw_lock_t *rw_lock;
block_lock *rw_lock;
if (bpage->state() != BUF_BLOCK_FILE_PAGE)
rw_lock= nullptr;
else
{
rw_lock= &reinterpret_cast<buf_block_t*>(bpage)->lock;
if (!rw_lock_sx_lock_nowait(rw_lock, 0))
if (!rw_lock->u_lock_try(false))
return;
}
@ -1238,7 +1234,7 @@ static void buf_flush_discard_page(buf_page_t *bpage)
mysql_mutex_unlock(&buf_pool.flush_list_mutex);
if (rw_lock)
rw_lock_sx_unlock(rw_lock);
rw_lock->u_unlock();
buf_LRU_free_page(bpage, true);
}
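The u_lock_try(true) and u_unlock(true) pairs above rely on the FOR_IO owner convention: buf_flush_page() acquires the U latch on behalf of a pending write, and buf_page_write_complete() releases it from an I/O handler thread. A minimal sketch of that convention follows; toy_block_latch and its members are hypothetical stand-ins, not the real block_lock:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Sketch of the FOR_IO ownership convention: a latch taken with for_io=true
// records a sentinel owner, so a different thread (the I/O completion
// handler) may legitimately release a latch it never acquired.
class toy_block_latch
{
  enum owner_t : int { NONE, THREAD, FOR_IO };
  std::atomic<int> owner{NONE};
public:
  bool u_lock_try(bool for_io)
  {
    int expected= NONE;
    return owner.compare_exchange_strong(expected,
                                         for_io ? FOR_IO : THREAD);
  }
  // claimed=true releases a latch that was held on behalf of pending I/O.
  void u_unlock(bool claimed= false)
  {
    (void)claimed;
    owner.store(NONE);
  }
  bool is_for_io() const { return owner.load() == FOR_IO; }
};
```

Because no thread identity is recorded for a FOR_IO acquisition, no debug check can complain when the completion handler performs the unlock, which is exactly why the old code needed the pass != 0 workaround in rw_lock_sx_unlock_gen().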


@ -25,7 +25,6 @@ Created 11/5/1995 Heikki Tuuri
*******************************************************/
#include "buf0lru.h"
#include "sync0rw.h"
#include "fil0fil.h"
#include "btr0btr.h"
#include "buf0buddy.h"


@ -108,15 +108,9 @@ static buf_page_t* buf_page_init_for_read(ulint mode, const page_id_t page_id,
{
block= buf_LRU_get_free_block(false);
block->initialise(page_id, zip_size);
/* We set a pass-type x-lock on the frame because then
the same thread which called for the read operation
(and is running now at this point of code) can wait
for the read to complete by waiting for the x-lock on
the frame; if the x-lock were recursive, the same
thread would illegally get the x-lock before the page
read is completed. The x-lock will be released
/* x_unlock() will be invoked
in buf_page_read_complete() by the io-handler thread. */
rw_lock_x_lock_gen(&block->lock, BUF_IO_READ);
block->lock.x_lock(true);
}
const ulint fold= page_id.fold();
@ -135,7 +129,7 @@ static buf_page_t* buf_page_init_for_read(ulint mode, const page_id_t page_id,
hash_lock->write_unlock();
if (block)
{
rw_lock_x_unlock_gen(&block->lock, BUF_IO_READ);
block->lock.x_unlock(true);
buf_LRU_block_free_non_file_page(block);
}
goto func_exit;


@ -1305,10 +1305,10 @@ dict_index_t *dict_index_t::clone() const
sizeof *stat_n_non_null_key_vals);
mem_heap_t* heap= mem_heap_create(size);
dict_index_t *index= static_cast<dict_index_t*>(mem_heap_dup(heap, this,
sizeof *this));
dict_index_t *index= static_cast<dict_index_t*>
(mem_heap_alloc(heap, sizeof *this));
*index= *this;
rw_lock_create(index_tree_rw_lock_key, &index->lock, SYNC_INDEX_TREE);
index->lock.SRW_LOCK_INIT(index_tree_rw_lock_key);
index->heap= heap;
index->name= mem_heap_strdup(heap, name);
index->fields= static_cast<dict_field_t*>
@ -2157,8 +2157,7 @@ dict_index_add_to_cache(
#endif /* BTR_CUR_ADAPT */
new_index->page = unsigned(page_no);
rw_lock_create(index_tree_rw_lock_key, &new_index->lock,
SYNC_INDEX_TREE);
new_index->lock.SRW_LOCK_INIT(index_tree_rw_lock_key);
new_index->n_core_fields = new_index->n_fields;
@ -2228,7 +2227,7 @@ dict_index_remove_from_cache_low(
}
#endif /* BTR_CUR_HASH_ADAPT */
rw_lock_free(&index->lock);
index->lock.free();
dict_mem_index_free(index);
}
@ -4419,10 +4418,10 @@ dict_set_merge_threshold_list_debug(
for (dict_index_t* index = UT_LIST_GET_FIRST(table->indexes);
index != NULL;
index = UT_LIST_GET_NEXT(indexes, index)) {
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
index->merge_threshold = merge_threshold_all
& ((1U << 6) - 1);
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
}
}
}


@ -1358,7 +1358,7 @@ fsp_alloc_seg_inode_page(fil_space_t *space, buf_block_t *header, mtr_t *mtr)
return false;
buf_block_dbg_add_level(block, SYNC_FSP_PAGE);
ut_ad(rw_lock_get_x_lock_count(&block->lock) == 1);
ut_ad(block->lock.not_recursive());
mtr->write<2>(*block, block->frame + FIL_PAGE_TYPE, FIL_PAGE_INODE);
@ -1719,7 +1719,7 @@ fseg_create(fil_space_t *space, ulint byte_offset, mtr_t *mtr,
goto funct_exit;
}
ut_ad(rw_lock_get_x_lock_count(&block->lock) == 1);
ut_ad(block->lock.not_recursive());
ut_ad(!fil_page_get_type(block->frame));
mtr->write<1>(*block, FIL_PAGE_TYPE + 1 + block->frame,
FIL_PAGE_TYPE_SYS);
@ -2645,7 +2645,7 @@ fseg_free_extent(
if (!xdes_is_free(descr, i)) {
buf_page_free(
page_id_t(space->id, first_page_in_extent + 1),
mtr, __FILE__, __LINE__);
mtr);
}
}
}


@ -917,8 +917,7 @@ func_start:
ut_ad(!dict_index_is_online_ddl(cursor->index)
|| (flags & BTR_CREATE_FLAG)
|| dict_index_is_clust(cursor->index));
ut_ad(rw_lock_own_flagged(dict_index_get_lock(cursor->index),
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(cursor->index->lock.have_u_or_x());
block = btr_cur_get_block(cursor);
page = buf_block_get_frame(block);


@ -256,26 +256,19 @@ rtr_pcur_getnext_from_path(
/* set up savepoint to record any locks to be taken */
rtr_info->tree_savepoints[tree_idx] = mtr_set_savepoint(mtr);
#ifdef UNIV_RTR_DEBUG
ut_ad(!(rw_lock_own_flagged(&btr_cur->page_cur.block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S))
|| my_latch_mode == BTR_MODIFY_TREE
|| my_latch_mode == BTR_CONT_MODIFY_TREE
|| !page_is_leaf(buf_block_get_frame(
btr_cur->page_cur.block)));
#endif /* UNIV_RTR_DEBUG */
dberr_t err = DB_SUCCESS;
ut_ad(my_latch_mode == BTR_MODIFY_TREE
|| my_latch_mode == BTR_CONT_MODIFY_TREE
|| !page_is_leaf(btr_cur_get_page(btr_cur))
|| !btr_cur->page_cur.block->lock.have_any());
block = buf_page_get_gen(
page_id_t(index->table->space_id,
next_rec.page_no), zip_size,
rw_latch, NULL, BUF_GET, __FILE__, __LINE__, mtr, &err);
rw_latch, NULL, BUF_GET, __FILE__, __LINE__, mtr);
if (block == NULL) {
continue;
} else if (rw_latch != RW_NO_LATCH) {
ut_ad(!dict_index_is_ibuf(index));
buf_block_dbg_add_level(block, SYNC_TREE_NODE);
}
@ -402,14 +395,14 @@ rtr_pcur_getnext_from_path(
lock_mutex_exit();
if (rw_latch == RW_NO_LATCH) {
rw_lock_s_lock(&(block->lock));
block->lock.s_lock();
}
lock_prdt_lock(block, &prdt, index, LOCK_S,
LOCK_PREDICATE, btr_cur->rtr_info->thr);
if (rw_latch == RW_NO_LATCH) {
rw_lock_s_unlock(&(block->lock));
block->lock.s_unlock();
}
}
@ -461,8 +454,7 @@ rtr_pcur_getnext_from_path(
mtr_commit(mtr);
mtr_start(mtr);
} else if (!index_locked) {
mtr_memo_release(mtr, dict_index_get_lock(index),
MTR_MEMO_X_LOCK);
mtr_memo_release(mtr, &index->lock, MTR_MEMO_X_LOCK);
}
return(found);
@ -542,7 +534,6 @@ void
rtr_pcur_open_low(
/*==============*/
dict_index_t* index, /*!< in: index */
ulint level, /*!< in: level in the rtree */
const dtuple_t* tuple, /*!< in: tuple on which search done */
page_cur_mode_t mode, /*!< in: PAGE_CUR_RTREE_LOCATE, ... */
ulint latch_mode,/*!< in: BTR_SEARCH_LEAF, ... */
@ -555,11 +546,6 @@ rtr_pcur_open_low(
ulint n_fields;
ulint low_match;
rec_t* rec;
bool tree_latched = false;
bool for_delete = false;
bool for_undo_ins = false;
ut_ad(level == 0);
ut_ad(latch_mode & BTR_MODIFY_LEAF || latch_mode & BTR_MODIFY_TREE);
ut_ad(mode == PAGE_CUR_RTREE_LOCATE);
@ -568,9 +554,6 @@ rtr_pcur_open_low(
btr_pcur_init(cursor);
for_delete = latch_mode & BTR_RTREE_DELETE_MARK;
for_undo_ins = latch_mode & BTR_RTREE_UNDO_INS;
cursor->latch_mode = BTR_LATCH_MODE_WITHOUT_FLAGS(latch_mode);
cursor->search_mode = mode;
@ -587,7 +570,12 @@ rtr_pcur_open_low(
btr_cursor->rtr_info->thr = btr_cursor->thr;
}
btr_cur_search_to_nth_level(index, level, tuple, mode, latch_mode,
if ((latch_mode & BTR_MODIFY_TREE) && index->lock.have_u_not_x()) {
index->lock.u_x_upgrade(SRW_LOCK_ARGS(file, line));
mtr->lock_upgrade(index->lock);
}
btr_cur_search_to_nth_level(index, 0, tuple, mode, latch_mode,
btr_cursor, 0, file, line, mtr);
cursor->pos_state = BTR_PCUR_IS_POSITIONED;
@ -599,24 +587,13 @@ rtr_pcur_open_low(
n_fields = dtuple_get_n_fields(tuple);
if (latch_mode & BTR_ALREADY_S_LATCHED) {
ut_ad(mtr->memo_contains(index->lock, MTR_MEMO_S_LOCK));
tree_latched = true;
}
if (latch_mode & BTR_MODIFY_TREE) {
ut_ad(mtr->memo_contains_flagged(&index->lock,
MTR_MEMO_X_LOCK
| MTR_MEMO_SX_LOCK));
tree_latched = true;
}
const bool d= rec_get_deleted_flag(rec, index->table->not_redundant());
if (page_rec_is_infimum(rec) || low_match != n_fields
|| (rec_get_deleted_flag(rec, dict_table_is_comp(index->table))
&& (for_delete || for_undo_ins))) {
|| (d && latch_mode
& (BTR_RTREE_DELETE_MARK | BTR_RTREE_UNDO_INS))) {
if (rec_get_deleted_flag(rec, dict_table_is_comp(index->table))
&& for_delete) {
if (d && latch_mode & BTR_RTREE_DELETE_MARK) {
btr_cursor->rtr_info->fd_del = true;
btr_cursor->low_match = 0;
}
@ -626,8 +603,6 @@ rtr_pcur_open_low(
ulint tree_idx = btr_cursor->tree_height - 1;
rtr_info_t* rtr_info = btr_cursor->rtr_info;
ut_ad(level == 0);
if (rtr_info->tree_blocks[tree_idx]) {
mtr_release_block_at_savepoint(
mtr,
@ -638,8 +613,9 @@ rtr_pcur_open_low(
}
bool ret = rtr_pcur_getnext_from_path(
tuple, mode, btr_cursor, level, latch_mode,
tree_latched, mtr);
tuple, mode, btr_cursor, 0, latch_mode,
latch_mode & (BTR_MODIFY_TREE | BTR_ALREADY_S_LATCHED),
mtr);
if (ret) {
low_match = btr_pcur_get_low_match(cursor);
@ -953,9 +929,7 @@ rtr_create_rtr_info(
+ UNIV_PAGE_SIZE_MAX + 1);
mutex_create(LATCH_ID_RTR_MATCH_MUTEX,
&rtr_info->matches->rtr_match_mutex);
rw_lock_create(PFS_NOT_INSTRUMENTED,
&(rtr_info->matches->block.lock),
SYNC_LEVEL_VARYING);
rtr_info->matches->block.lock.init();
}
rtr_info->path = UT_NEW_NOKEY(rtr_node_path_t());
@ -1100,7 +1074,7 @@ rtr_clean_rtr_info(
UT_DELETE(rtr_info->matches->matched_recs);
}
rw_lock_free(&(rtr_info->matches->block.lock));
rtr_info->matches->block.lock.free();
mutex_destroy(&rtr_info->matches->rtr_match_mutex);
}
@ -1555,7 +1529,6 @@ rtr_copy_buf(
matches->block.curr_left_side = block->curr_left_side;
matches->block.index = block->index;
#endif /* BTR_CUR_HASH_ADAPT */
ut_d(matches->block.debug_latch = NULL);
}
/****************************************************************//**


@ -537,7 +537,6 @@ static PSI_mutex_info all_innodb_mutexes[] = {
# ifdef UNIV_DEBUG
PSI_KEY(rw_lock_debug_mutex),
# endif /* UNIV_DEBUG */
PSI_KEY(rw_lock_list_mutex),
PSI_KEY(srv_innodb_monitor_mutex),
PSI_KEY(srv_misc_tmpfile_mutex),
PSI_KEY(srv_monitor_file_mutex),
@ -11366,10 +11365,10 @@ innobase_parse_hint_from_comment(
/* x-lock index is needed to exclude concurrent
pessimistic tree operations */
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
index->merge_threshold = merge_threshold_table
& ((1U << 6) - 1);
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
continue;
}
@ -11386,11 +11385,11 @@ innobase_parse_hint_from_comment(
/* x-lock index is needed to exclude concurrent
pessimistic tree operations */
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
index->merge_threshold
= merge_threshold_index[i]
& ((1U << 6) - 1);
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
is_found[i] = true;
break;
@ -15842,103 +15841,37 @@ innodb_show_mutex_status(
DBUG_RETURN(0);
}
/** Implements the SHOW MUTEX STATUS command.
@param[in,out] hton the innodb handlerton
@param[in,out] thd the MySQL query thread of the caller
@param[in,out] stat_print function for printing statistics
/** Implement SHOW ENGINE INNODB MUTEX for rw-locks.
@param hton the innodb handlerton
@param thd connection
@param fn function for printing statistics
@return 0 on success. */
static
int
innodb_show_rwlock_status(
handlerton*
#ifdef DBUG_ASSERT_EXISTS
hton
#endif
,
THD* thd,
stat_print_fn* stat_print)
innodb_show_rwlock_status(handlerton* ut_d(hton), THD *thd, stat_print_fn *fn)
{
DBUG_ENTER("innodb_show_rwlock_status");
DBUG_ENTER("innodb_show_rwlock_status");
ut_ad(hton == innodb_hton_ptr);
const rw_lock_t* block_rwlock= nullptr;
ulint block_rwlock_oswait_count = 0;
uint hton_name_len = (uint) strlen(innobase_hton_name);
constexpr size_t prefix_len= sizeof "waits=" - 1;
char waits[prefix_len + 20 + 1];
snprintf(waits, sizeof waits, "waits=" UINT64PF, buf_pool.waited());
DBUG_ASSERT(hton == innodb_hton_ptr);
if (fn(thd, STRING_WITH_LEN(innobase_hton_name),
STRING_WITH_LEN("buf_block_t::lock"), waits, strlen(waits)))
DBUG_RETURN(1);
mutex_enter(&rw_lock_list_mutex);
for (const rw_lock_t& rw_lock : rw_lock_list) {
if (rw_lock.count_os_wait == 0) {
continue;
}
int buf1len;
char buf1[IO_SIZE];
if (rw_lock.is_block_lock) {
block_rwlock = &rw_lock;
block_rwlock_oswait_count += rw_lock.count_os_wait;
continue;
}
buf1len = snprintf(
buf1, sizeof buf1, "rwlock: %s:%u",
innobase_basename(rw_lock.cfile_name),
rw_lock.cline);
int buf2len;
char buf2[IO_SIZE];
buf2len = snprintf(
buf2, sizeof buf2, "waits=%u",
rw_lock.count_os_wait);
if (stat_print(thd, innobase_hton_name,
hton_name_len,
buf1, static_cast<uint>(buf1len),
buf2, static_cast<uint>(buf2len))) {
mutex_exit(&rw_lock_list_mutex);
DBUG_RETURN(1);
}
}
if (block_rwlock != NULL) {
int buf1len;
char buf1[IO_SIZE];
buf1len = snprintf(
buf1, sizeof buf1, "sum rwlock: %s:%u",
innobase_basename(block_rwlock->cfile_name),
block_rwlock->cline);
int buf2len;
char buf2[IO_SIZE];
buf2len = snprintf(
buf2, sizeof buf2, "waits=" ULINTPF,
block_rwlock_oswait_count);
if (stat_print(thd, innobase_hton_name,
hton_name_len,
buf1, static_cast<uint>(buf1len),
buf2, static_cast<uint>(buf2len))) {
mutex_exit(&rw_lock_list_mutex);
DBUG_RETURN(1);
}
}
mutex_exit(&rw_lock_list_mutex);
DBUG_RETURN(0);
DBUG_RETURN(!dict_sys.for_each_index([&](const dict_index_t &i)
{
uint32_t waited= i.lock.waited();
if (!waited)
return true;
snprintf(waits + prefix_len, sizeof waits - prefix_len, "%u", waited);
std::ostringstream s;
s << i.name << '(' << i.table->name << ')';
return !fn(thd, STRING_WITH_LEN(innobase_hton_name),
s.str().data(), s.str().size(), waits, strlen(waits));
}));
}
/** Implements the SHOW MUTEX STATUS command.
@ -17299,6 +17232,9 @@ innodb_monitor_set_option(
if (monitor_id == (MONITOR_LATCHES)) {
mutex_monitor.reset();
buf_pool.reset_waited();
dict_sys.for_each_index([](const dict_index_t &i)
{i.lock.reset_waited(); return true;});
}
break;
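Both innodb_show_rwlock_status() and the LATCHES monitor reset above traverse the index cache with dict_sys.for_each_index(), whose callback returns false to abort the walk; the caller then inverts the result to report an error upward. A self-contained sketch of that early-exit iteration pattern (toy_index and this for_each_index are hypothetical stand-ins, not the dict_sys API):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical miniature of the dict_sys.for_each_index() contract:
// invoke fn on every index; stop and report failure as soon as fn
// returns false.
struct toy_index { const char *name; unsigned waited; };

static bool for_each_index(const std::vector<toy_index> &v,
                           const std::function<bool(const toy_index&)> &fn)
{
  for (const toy_index &i : v)
    if (!fn(i))
      return false;   // the callback asked to abort the iteration
  return true;
}
```

In the diff, the callback skips indexes whose lock.waited() is zero, so only latches that actually blocked are reported.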


@ -1004,7 +1004,7 @@ struct ha_innobase_inplace_ctx : public inplace_alter_handler_ctx
while (dict_index_t* index
= UT_LIST_GET_LAST(instant_table->indexes)) {
UT_LIST_REMOVE(instant_table->indexes, index);
rw_lock_free(&index->lock);
index->lock.free();
dict_mem_index_free(index);
}
for (unsigned i = old_n_v_cols; i--; ) {
@ -6866,7 +6866,7 @@ error_handling_drop_uncached_1:
if (ctx->online) {
/* Allocate a log for online table rebuild. */
rw_lock_x_lock(&clust_index->lock);
clust_index->lock.x_lock(SRW_LOCK_CALL);
bool ok = row_log_allocate(
ctx->prebuilt->trx,
clust_index, ctx->new_table,
@ -6875,7 +6875,7 @@ error_handling_drop_uncached_1:
ctx->defaults, ctx->col_map, path,
old_table,
ctx->allow_not_null);
rw_lock_x_unlock(&clust_index->lock);
clust_index->lock.x_unlock();
if (!ok) {
error = DB_OUT_OF_MEMORY;
@ -6941,7 +6941,7 @@ error_handling_drop_uncached:
/* No need to allocate a modification log. */
DBUG_ASSERT(!index->online_log);
} else {
rw_lock_x_lock(&ctx->add_index[a]->lock);
index->lock.x_lock(SRW_LOCK_CALL);
bool ok = row_log_allocate(
ctx->prebuilt->trx,
@ -6950,7 +6950,7 @@ error_handling_drop_uncached:
path, old_table,
ctx->allow_not_null);
rw_lock_x_unlock(&index->lock);
index->lock.x_unlock();
DBUG_EXECUTE_IF(
"innodb_OOM_prepare_add_index",
@ -7127,7 +7127,7 @@ error_handled:
dict_index_t* clust_index = dict_table_get_first_index(
user_table);
rw_lock_x_lock(&clust_index->lock);
clust_index->lock.x_lock(SRW_LOCK_CALL);
if (clust_index->online_log) {
ut_ad(ctx->online);
@ -7136,7 +7136,7 @@ error_handled:
= ONLINE_INDEX_COMPLETE;
}
rw_lock_x_unlock(&clust_index->lock);
clust_index->lock.x_unlock();
}
trx_commit_for_mysql(ctx->trx);
@ -8525,7 +8525,7 @@ innobase_online_rebuild_log_free(
{
dict_index_t* clust_index = dict_table_get_first_index(table);
ut_d(dict_sys.assert_locked());
rw_lock_x_lock(&clust_index->lock);
clust_index->lock.x_lock(SRW_LOCK_CALL);
if (clust_index->online_log) {
ut_ad(dict_index_get_online_status(clust_index)
@ -8538,7 +8538,7 @@ innobase_online_rebuild_log_free(
DBUG_ASSERT(dict_index_get_online_status(clust_index)
== ONLINE_INDEX_COMPLETE);
rw_lock_x_unlock(&clust_index->lock);
clust_index->lock.x_unlock();
}
/** For each user column, which is part of an index which is not going to be
@ -10394,9 +10394,9 @@ commit_cache_norebuild(
/* Mark the index dropped
in the data dictionary cache. */
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.u_lock(SRW_LOCK_CALL);
index->page = FIL_NULL;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.u_unlock();
}
trx_start_for_ddl(trx, TRX_DICT_OP_INDEX);


@ -6885,26 +6885,15 @@ namespace Show {
/* Fields of the dynamic table INFORMATION_SCHEMA.INNODB_MUTEXES */
static ST_FIELD_INFO innodb_mutexes_fields_info[] =
{
#define MUTEXES_NAME 0
Column("NAME", Varchar(OS_FILE_MAX_PATH), NOT_NULL),
#define MUTEXES_CREATE_FILE 1
Column("CREATE_FILE", Varchar(OS_FILE_MAX_PATH), NOT_NULL),
#define MUTEXES_CREATE_LINE 2
Column("CREATE_LINE", ULong(), NOT_NULL),
#define MUTEXES_OS_WAITS 3
Column("OS_WAITS", ULonglong(), NOT_NULL),
CEnd()
};
} // namespace Show
/*******************************************************************//**
Function to populate INFORMATION_SCHEMA.INNODB_MUTEXES table.
Loop through each record in mutex and rw_lock lists, and extract the column
information and fill the INFORMATION_SCHEMA.INNODB_MUTEXES table.
@see innodb_show_rwlock_status
@return 0 on success */
static
int
@ -6914,76 +6903,34 @@ i_s_innodb_mutexes_fill_table(
TABLE_LIST* tables, /*!< in/out: tables to fill */
Item* ) /*!< in: condition (not used) */
{
ulint block_lock_oswait_count = 0;
const rw_lock_t* block_lock= nullptr;
Field** fields = tables->table->field;
DBUG_ENTER("i_s_innodb_mutexes_fill_table");
RETURN_IF_INNODB_NOT_STARTED(tables->schema_table_name.str);
DBUG_ENTER("i_s_innodb_mutexes_fill_table");
RETURN_IF_INNODB_NOT_STARTED(tables->schema_table_name.str);
if (check_global_access(thd, PROCESS_ACL))
DBUG_RETURN(0);
/* deny access to user without PROCESS_ACL privilege */
if (check_global_access(thd, PROCESS_ACL)) {
DBUG_RETURN(0);
} else {
struct Locking
{
Locking() { mutex_enter(&rw_lock_list_mutex); }
~Locking() { mutex_exit(&rw_lock_list_mutex); }
} locking;
Field **fields= tables->table->field;
OK(fields[0]->store(STRING_WITH_LEN("buf_block_t::lock"),
system_charset_info));
OK(fields[1]->store(buf_pool.waited(), true));
fields[0]->set_notnull();
fields[1]->set_notnull();
char lock_name[sizeof "buf0dump.cc:12345"];
OK(schema_table_store_record(thd, tables->table));
for (const rw_lock_t& lock : rw_lock_list) {
if (lock.count_os_wait == 0) {
continue;
}
if (buf_pool.is_block_lock(&lock)) {
block_lock = &lock;
block_lock_oswait_count += lock.count_os_wait;
continue;
}
const char* basename = innobase_basename(
lock.cfile_name);
snprintf(lock_name, sizeof lock_name, "%s:%u",
basename, lock.cline);
OK(field_store_string(fields[MUTEXES_NAME],
lock_name));
OK(field_store_string(fields[MUTEXES_CREATE_FILE],
basename));
OK(fields[MUTEXES_CREATE_LINE]->store(lock.cline,
true));
fields[MUTEXES_CREATE_LINE]->set_notnull();
OK(fields[MUTEXES_OS_WAITS]->store(lock.count_os_wait,
true));
fields[MUTEXES_OS_WAITS]->set_notnull();
OK(schema_table_store_record(thd, tables->table));
}
if (block_lock) {
char buf1[IO_SIZE];
snprintf(buf1, sizeof buf1, "combined %s",
innobase_basename(block_lock->cfile_name));
OK(field_store_string(fields[MUTEXES_NAME],
"buf_block_t::lock"));
OK(field_store_string(fields[MUTEXES_CREATE_FILE],
buf1));
OK(fields[MUTEXES_CREATE_LINE]->store(block_lock->cline,
true));
fields[MUTEXES_CREATE_LINE]->set_notnull();
OK(fields[MUTEXES_OS_WAITS]->store(
block_lock_oswait_count, true));
fields[MUTEXES_OS_WAITS]->set_notnull();
OK(schema_table_store_record(thd, tables->table));
}
}
DBUG_RETURN(0);
DBUG_RETURN(!dict_sys.for_each_index([&](const dict_index_t &i)
{
uint32_t waited= i.lock.waited();
if (!waited)
return true;
if (fields[1]->store(waited, true))
return false;
std::ostringstream s;
s << i.name << '(' << i.table->name << ')';
return !fields[0]->store(s.str().data(), s.str().size(),
system_charset_info) &&
!schema_table_store_record(thd, tables->table);
}));
}
/*******************************************************************//**


@ -375,7 +375,7 @@ ibuf_close(void)
mutex_free(&ibuf_bitmap_mutex);
dict_table_t* ibuf_table = ibuf.index->table;
rw_lock_free(&ibuf.index->lock);
ibuf.index->lock.free();
dict_mem_index_free(ibuf.index);
dict_mem_table_free(ibuf_table);
ibuf.index = NULL;
@ -477,8 +477,7 @@ ibuf_init_at_db_start(void)
DICT_CLUSTERED | DICT_IBUF, 1);
ibuf.index->id = DICT_IBUF_ID_MIN + IBUF_SPACE_ID;
ibuf.index->n_uniq = REC_MAX_N_FIELDS;
rw_lock_create(index_tree_rw_lock_key, &ibuf.index->lock,
SYNC_IBUF_INDEX_TREE);
ibuf.index->lock.SRW_LOCK_INIT(index_tree_rw_lock_key);
#ifdef BTR_CUR_ADAPT
ibuf.index->search_info = btr_search_info_create(ibuf.index->heap);
#endif /* BTR_CUR_ADAPT */
@ -1856,7 +1855,7 @@ static bool ibuf_add_free_page()
return false;
}
ut_ad(rw_lock_get_x_lock_count(&block->lock) == 1);
ut_ad(block->lock.not_recursive());
ibuf_enter(&mtr);
mutex_enter(&ibuf_mutex);
@ -1986,7 +1985,7 @@ ibuf_remove_free_page(void)
ibuf_bitmap_page_set_bits<IBUF_BITMAP_IBUF>(
bitmap_page, page_id, srv_page_size, false, &mtr);
buf_page_free(page_id, &mtr, __FILE__, __LINE__);
buf_page_free(page_id, &mtr);
ibuf_mtr_commit(&mtr);
}
@ -4243,7 +4242,7 @@ void ibuf_merge_or_delete_for_page(buf_block_t *block, const page_id_t page_id,
is needed for the insert operations to the index page to pass
the debug checks. */
rw_lock_x_lock_move_ownership(&(block->lock));
block->lock.claim_ownership();
if (!fil_page_index_page_check(block->frame)
|| !page_is_leaf(block->frame)) {
@ -4276,9 +4275,8 @@ loop:
&pcur, &mtr);
if (block) {
ut_ad(rw_lock_own(&block->lock, RW_LOCK_X));
buf_block_buf_fix_inc(block, __FILE__, __LINE__);
rw_lock_x_lock(&block->lock);
buf_block_buf_fix_inc(block);
block->lock.x_lock_recursive();
mtr.memo_push(block, MTR_MEMO_PAGE_X_FIX);
/* This is a user page (secondary index leaf page),
@ -4395,10 +4393,8 @@ loop:
ibuf_mtr_start(&mtr);
mtr.set_named_space(space);
ut_ad(rw_lock_own(&block->lock, RW_LOCK_X));
buf_block_buf_fix_inc(block,
__FILE__, __LINE__);
rw_lock_x_lock(&block->lock);
buf_block_buf_fix_inc(block);
block->lock.x_lock_recursive();
mtr.memo_push(block, MTR_MEMO_PAGE_X_FIX);
/* This is a user page (secondary
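The x_lock_recursive() calls above re-acquire a latch that the change buffer merge already holds in X mode, while ut_ad(block->lock.not_recursive()) elsewhere asserts the opposite. A toy sketch of such re-entrant X latching (hypothetical and single-threaded for illustration; the real lock also blocks on contention):

```cpp
#include <cassert>
#include <thread>

// Toy re-entrant exclusive latch: the X holder may stack further
// acquisitions via x_lock_recursive(), and each x_unlock() pops one level.
class toy_recursive_latch
{
  std::thread::id writer{};   // default id means "not held"
  unsigned recursions= 0;
public:
  void x_lock()
  {
    // A real latch would wait here; this sketch assumes no contention.
    assert(writer == std::thread::id());
    writer= std::this_thread::get_id();
    recursions= 1;
  }
  void x_lock_recursive()
  {
    // Only the current X holder may recurse.
    assert(writer == std::this_thread::get_id());
    ++recursions;
  }
  void x_unlock()
  {
    assert(writer == std::this_thread::get_id());
    if (!--recursions)
      writer= std::thread::id();
  }
  bool not_recursive() const { return recursions == 1; }
};
```

This is why fseg_create() and fsp_alloc_seg_inode_page() can assert not_recursive(): a freshly allocated page must be latched exactly once.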


@ -362,13 +362,8 @@ void buf_page_make_young(buf_page_t *bpage);
/** Mark the page status as FREED for the given tablespace id and
page number. If the page is not in buffer pool then ignore it.
@param[in] page_id page_id
@param[in,out] mtr mini-transaction
@param[in] file file name
@param[in] line line where called */
void buf_page_free(const page_id_t page_id,
mtr_t *mtr,
const char *file,
unsigned line);
@param[in,out] mtr mini-transaction */
void buf_page_free(const page_id_t page_id, mtr_t *mtr);
/********************************************************************//**
Reads the freed_page_clock of a buffer block.
@ -433,30 +428,11 @@ buf_block_get_modify_clock(
buf_block_t* block); /*!< in: block */
/*******************************************************************//**
Increments the bufferfix count. */
UNIV_INLINE
void
buf_block_buf_fix_inc_func(
/*=======================*/
# ifdef UNIV_DEBUG
const char* file, /*!< in: file name */
unsigned line, /*!< in: line */
# endif /* UNIV_DEBUG */
buf_block_t* block) /*!< in/out: block to bufferfix */
MY_ATTRIBUTE((nonnull));
# define buf_block_buf_fix_inc(block) (block)->fix()
# ifdef UNIV_DEBUG
/** Increments the bufferfix count.
@param[in,out] b block to bufferfix
@param[in] f file name where requested
@param[in] l line number where requested */
# define buf_block_buf_fix_inc(b,f,l) buf_block_buf_fix_inc_func(f,l,b)
# else /* UNIV_DEBUG */
/** Increments the bufferfix count.
@param[in,out] b block to bufferfix
@param[in] f file name where requested
@param[in] l line number where requested */
# define buf_block_buf_fix_inc(b,f,l) buf_block_buf_fix_inc_func(b)
# endif /* UNIV_DEBUG */
/*******************************************************************//**
Decrements the bufferfix count. */
# define buf_block_buf_fix_dec(block) (block)->unfix()
#endif /* !UNIV_INNOCHECKSUM */
/** Check if a buffer is all zeroes.
@ -631,21 +607,7 @@ void buf_pool_invalidate();
--------------------------- LOWER LEVEL ROUTINES -------------------------
=========================================================================*/
#ifdef UNIV_DEBUG
/*********************************************************************//**
Adds latch level info for the rw-lock protecting the buffer frame. This
should be called in the debug version after a successful latching of a
page if we know the latching order level of the acquired latch. */
UNIV_INLINE
void
buf_block_dbg_add_level(
/*====================*/
buf_block_t* block, /*!< in: buffer page
where we have acquired latch */
latch_level_t level); /*!< in: latching order level */
#else /* UNIV_DEBUG */
# define buf_block_dbg_add_level(block, level) /* nothing */
#endif /* UNIV_DEBUG */
#define buf_block_dbg_add_level(block, level) do {} while (0)
#ifdef UNIV_DEBUG
/*********************************************************************//**
@ -1036,8 +998,8 @@ struct buf_block_t{
is of size srv_page_size, and
aligned to an address divisible by
srv_page_size */
rw_lock_t lock; /*!< read-write lock of the buffer
frame */
/** read-write lock covering frame */
block_lock lock;
#ifdef UNIV_DEBUG
/** whether page.list is in buf_pool.withdraw
((state() == BUF_BLOCK_NOT_USED)) and the buffer pool is being shrunk;
@ -1161,22 +1123,12 @@ struct buf_block_t{
# define assert_block_ahi_empty_on_init(block) /* nothing */
# define assert_block_ahi_valid(block) /* nothing */
#endif /* BTR_CUR_HASH_ADAPT */
# ifdef UNIV_DEBUG
/** @name Debug fields */
/* @{ */
rw_lock_t* debug_latch; /*!< in the debug version, each thread
which bufferfixes the block acquires
an s-latch here; so we can use the
debug utilities in sync0rw */
/* @} */
# endif
void fix() { page.fix(); }
uint32_t unfix()
{
ut_ad(page.buf_fix_count() || page.io_fix() != BUF_IO_NONE ||
page.state() == BUF_BLOCK_ZIP_PAGE ||
!rw_lock_own_flagged(&lock, RW_LOCK_FLAG_X | RW_LOCK_FLAG_S |
RW_LOCK_FLAG_SX));
!lock.have_any());
return page.unfix();
}
@@ -1395,6 +1347,22 @@ class buf_pool_t
@return whether the allocation succeeded */
inline bool create(size_t bytes);
/** Compute the sum of buf_block_t::lock::waited()
@param total_waited sum of buf_block_t::lock::waited() */
void waited(uint64_t &total_waited) const
{
for (const buf_block_t *block= blocks, * const end= blocks + size;
block != end; block++)
total_waited+= block->lock.waited();
}
/** Invoke buf_block_t::lock::reset_waited() on all blocks */
void reset_waited()
{
for (buf_block_t *block= blocks, * const end= blocks + size;
block != end; block++)
block->lock.reset_waited();
}
#ifdef UNIV_DEBUG
/** Find a block that points to a ROW_FORMAT=COMPRESSED page
@param data pointer to the start of a ROW_FORMAT=COMPRESSED page frame
@@ -1472,6 +1440,30 @@ public:
return size;
}
/** @return sum of buf_block_t::lock::waited() */
uint64_t waited()
{
ut_ad(is_initialised());
uint64_t waited_count= 0;
page_hash.read_lock_all(); /* prevent any race with resize() */
for (const chunk_t *chunk= chunks, * const end= chunks + n_chunks;
chunk != end; chunk++)
chunk->waited(waited_count);
page_hash.read_unlock_all();
return waited_count;
}
/** Invoke buf_block_t::lock::reset_waited() on all blocks */
void reset_waited()
{
ut_ad(is_initialised());
page_hash.read_lock_all(); /* prevent any race with resize() */
for (const chunk_t *chunk= chunks, * const end= chunks + n_chunks;
chunk != end; chunk++)
chunks->reset_waited();
page_hash.read_unlock_all();
}
/** Determine whether a frame is intended to be withdrawn during resize().
@param ptr pointer within a buf_block_t::frame
@return whether the frame will be withdrawn */
@@ -1479,7 +1471,7 @@ public:
{
ut_ad(curr_size < old_size);
#ifdef SAFE_MUTEX
if (resizing.load(std::memory_order_relaxed))
if (resize_in_progress())
mysql_mutex_assert_owner(&mutex);
#endif /* SAFE_MUTEX */
@@ -1499,7 +1491,7 @@ public:
{
ut_ad(curr_size < old_size);
#ifdef SAFE_MUTEX
if (resizing.load(std::memory_order_relaxed))
if (resize_in_progress())
mysql_mutex_assert_owner(&mutex);
#endif /* SAFE_MUTEX */
@@ -1549,9 +1541,6 @@ public:
inline buf_block_t *block_from_ahi(const byte *ptr) const;
#endif /* BTR_CUR_HASH_ADAPT */
bool is_block_lock(const rw_lock_t *l) const
{ return is_block_field(static_cast<const void*>(l)); }
/**
@return the smallest oldest_modification lsn for any page
@retval empty_lsn if all modified persistent pages have been flushed */
@@ -1876,6 +1865,28 @@ public:
}
}
/** Acquire all latches in shared mode */
void read_lock_all()
{
for (auto n= pad(n_cells) & ~ELEMENTS_PER_LATCH;;
n-= ELEMENTS_PER_LATCH + 1)
{
reinterpret_cast<page_hash_latch&>(array[n]).read_lock();
if (!n)
break;
}
}
/** Release all latches in shared mode */
void read_unlock_all()
{
for (auto n= pad(n_cells) & ~ELEMENTS_PER_LATCH;;
n-= ELEMENTS_PER_LATCH + 1)
{
reinterpret_cast<page_hash_latch&>(array[n]).read_unlock();
if (!n)
break;
}
}
/** Exclusively acquire all latches */
inline void write_lock_all();

@@ -218,14 +218,12 @@ buf_block_modify_clock_inc(
ut_ad(fsp_is_system_temporary(block->page.id().space())
|| (mysql_mutex_is_owner(&buf_pool.mutex)
&& !block->page.buf_fix_count())
|| rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
|| block->lock.have_u_or_x());
#else /* SAFE_MUTEX */
/* No latch is acquired for the shared temporary tablespace. */
ut_ad(fsp_is_system_temporary(block->page.id().space())
|| !block->page.buf_fix_count()
|| rw_lock_own_flagged(&block->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
|| block->lock.have_u_or_x());
#endif /* SAFE_MUTEX */
assert_block_ahi_valid(block);
@@ -242,64 +240,12 @@ buf_block_get_modify_clock(
/*=======================*/
buf_block_t* block) /*!< in: block */
{
#ifdef UNIV_DEBUG
/* No latch is acquired for the shared temporary tablespace. */
if (!fsp_is_system_temporary(block->page.id().space())) {
ut_ad(rw_lock_own(&(block->lock), RW_LOCK_S)
|| rw_lock_own(&(block->lock), RW_LOCK_X)
|| rw_lock_own(&(block->lock), RW_LOCK_SX));
}
#endif /* UNIV_DEBUG */
ut_ad(fsp_is_system_temporary(block->page.id().space())
|| block->lock.have_any());
return(block->modify_clock);
}
/*******************************************************************//**
Increments the bufferfix count. */
UNIV_INLINE
void
buf_block_buf_fix_inc_func(
/*=======================*/
#ifdef UNIV_DEBUG
const char* file, /*!< in: file name */
unsigned line, /*!< in: line */
#endif /* UNIV_DEBUG */
buf_block_t* block) /*!< in/out: block to bufferfix */
{
#ifdef UNIV_DEBUG
/* No debug latch is acquired if block belongs to system temporary.
Debug latch is not of much help if access to block is single
threaded. */
if (!fsp_is_system_temporary(block->page.id().space())) {
ibool ret;
ret = rw_lock_s_lock_nowait(block->debug_latch, file, line);
ut_a(ret);
}
#endif /* UNIV_DEBUG */
block->fix();
}
/*******************************************************************//**
Decrements the bufferfix count. */
UNIV_INLINE
void
buf_block_buf_fix_dec(
/*==================*/
buf_block_t* block) /*!< in/out: block to bufferunfix */
{
#ifdef UNIV_DEBUG
/* No debug latch is acquired if block belongs to system temporary.
Debug latch is not of much help if access to block is single
threaded. */
if (!fsp_is_system_temporary(block->page.id().space())) {
rw_lock_s_unlock(block->debug_latch);
}
#endif /* UNIV_DEBUG */
block->unfix();
}
/********************************************************************//**
Releases a compressed-only page acquired with buf_page_get_zip(). */
UNIV_INLINE
@@ -309,24 +255,12 @@ buf_page_release_zip(
buf_page_t* bpage) /*!< in: buffer block */
{
ut_ad(bpage);
ut_a(bpage->buf_fix_count());
ut_ad(bpage->buf_fix_count());
switch (bpage->state()) {
case BUF_BLOCK_FILE_PAGE:
#ifdef UNIV_DEBUG
{
/* No debug latch is acquired if block belongs to system
temporary. Debug latch is not of much help if access to block
is single threaded. */
buf_block_t* block = reinterpret_cast<buf_block_t*>(bpage);
if (!fsp_is_system_temporary(block->page.id().space())) {
rw_lock_s_unlock(block->debug_latch);
}
}
#endif /* UNIV_DEBUG */
/* Fall through */
case BUF_BLOCK_ZIP_PAGE:
reinterpret_cast<buf_block_t*>(bpage)->unfix();
bpage->unfix();
return;
case BUF_BLOCK_NOT_USED:
@@ -348,41 +282,16 @@ buf_page_release_latch(
ulint rw_latch) /*!< in: RW_S_LATCH, RW_X_LATCH,
RW_NO_LATCH */
{
#ifdef UNIV_DEBUG
/* No debug latch is acquired if block belongs to system
temporary. Debug latch is not of much help if access to block
is single threaded. */
if (!fsp_is_system_temporary(block->page.id().space())) {
rw_lock_s_unlock(block->debug_latch);
}
#endif /* UNIV_DEBUG */
if (rw_latch == RW_S_LATCH) {
rw_lock_s_unlock(&block->lock);
} else if (rw_latch == RW_SX_LATCH) {
rw_lock_sx_unlock(&block->lock);
} else if (rw_latch == RW_X_LATCH) {
rw_lock_x_unlock(&block->lock);
}
switch (rw_latch) {
case RW_S_LATCH:
block->lock.s_unlock();
break;
case RW_SX_LATCH:
case RW_X_LATCH:
block->lock.u_or_x_unlock(rw_latch == RW_SX_LATCH);
}
}
#ifdef UNIV_DEBUG
/*********************************************************************//**
Adds latch level info for the rw-lock protecting the buffer frame. This
should be called in the debug version after a successful latching of a
page if we know the latching order level of the acquired latch. */
UNIV_INLINE
void
buf_block_dbg_add_level(
/*====================*/
buf_block_t* block, /*!< in: buffer page
where we have acquired latch */
latch_level_t level) /*!< in: latching order level */
{
sync_check_lock(&block->lock, level);
}
#endif /* UNIV_DEBUG */
/********************************************************************//**
Get buf frame. */
UNIV_INLINE

@@ -185,9 +185,16 @@ extern const byte field_ref_zero[UNIV_PAGE_SIZE_MAX];
#ifndef UNIV_INNOCHECKSUM
#include "ut0mutex.h"
#include "sync0rw.h"
#include "rw_lock.h"
/** Latch types */
enum rw_lock_type_t
{
RW_S_LATCH= 1 << 0,
RW_X_LATCH= 1 << 1,
RW_SX_LATCH= 1 << 2,
RW_NO_LATCH= 1 << 3
};
#include "sux_lock.h"
class page_hash_latch : public rw_lock
{

@@ -1261,15 +1261,6 @@ dict_index_get_page(
/*================*/
const dict_index_t* tree) /*!< in: index */
MY_ATTRIBUTE((nonnull, warn_unused_result));
/*********************************************************************//**
Gets the read-write lock of the index tree.
@return read-write lock */
UNIV_INLINE
rw_lock_t*
dict_index_get_lock(
/*================*/
const dict_index_t* index) /*!< in: index */
MY_ATTRIBUTE((nonnull, warn_unused_result));
/********************************************************************//**
Returns free space reserved for future updates of records. This is
relevant only in the case of many consecutive inserts, as updates
@@ -1540,6 +1531,19 @@ public:
return table->can_be_evicted ? find<true>(table) : find<false>(table);
}
#endif
private:
/** Invoke f on each index of a table, until it returns false
@param f function object
@param t table
@retval false if f returned false
@retval true if f never returned false */
template<typename F>
inline bool for_each_index(const F &f, const dict_table_t *t);
public:
/** Invoke f on each index of each persistent table, until it returns false
@retval false if f returned false
@retval true if f never returned false */
template<typename F> inline bool for_each_index(const F &f);
/** Move a table to the non-LRU list from the LRU list. */
void prevent_eviction(dict_table_t* table)
@@ -1607,6 +1611,39 @@ extern dict_sys_t dict_sys;
#define dict_sys_lock() dict_sys.lock(__FILE__, __LINE__)
#define dict_sys_unlock() dict_sys.unlock()
template<typename F>
inline bool dict_sys_t::for_each_index(const F &f, const dict_table_t *t)
{
const dict_index_t *i= UT_LIST_GET_FIRST(t->indexes);
do
{
if (!i->is_corrupted() && !f(*i))
return false;
i= UT_LIST_GET_NEXT(indexes, i);
}
while (i);
return true;
}
template<typename F>
inline bool dict_sys_t::for_each_index(const F &f)
{
struct Locking
{
Locking() { mutex_enter(&dict_sys.mutex); }
~Locking() { mutex_exit(&dict_sys.mutex); }
} locking;
for (const dict_table_t *t= UT_LIST_GET_FIRST(table_non_LRU);
t; t= UT_LIST_GET_NEXT(table_LRU, t))
if (!for_each_index(f, t))
return false;
for (const dict_table_t *t= UT_LIST_GET_FIRST(table_LRU);
t; t= UT_LIST_GET_NEXT(table_LRU, t))
if (!for_each_index(f, t))
return false;
return true;
}
/* Auxiliary structs for checking a table definition @{ */
/* This struct is used to specify the name and type that a column must

@@ -907,20 +907,6 @@ dict_index_get_page(
return(index->page);
}
/*********************************************************************//**
Gets the read-write lock of the index tree.
@return read-write lock */
UNIV_INLINE
rw_lock_t*
dict_index_get_lock(
/*================*/
const dict_index_t* index) /*!< in: index */
{
ut_ad(index->magic_n == DICT_INDEX_MAGIC_N);
return(&(index->lock));
}
/********************************************************************//**
Returns free space reserved for future updates of records. This is
relevant only in the case of many consecutive inserts, as updates
@@ -977,7 +963,7 @@ dict_index_set_online_status(
enum online_index_status status) /*!< in: status */
{
ut_ad(!(index->type & DICT_FTS));
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_x());
#ifdef UNIV_DEBUG
switch (dict_index_get_online_status(index)) {

@@ -35,7 +35,7 @@ Created 1/8/1996 Heikki Tuuri
#include "btr0types.h"
#include "lock0types.h"
#include "que0types.h"
#include "sync0rw.h"
#include "sux_lock.h"
#include "ut0mem.h"
#include "ut0rnd.h"
#include "ut0byte.h"
@@ -1116,8 +1116,8 @@ public:
when InnoDB was started up */
zip_pad_info_t zip_pad;/*!< Information about state of
compression failures and successes */
mutable rw_lock_t lock; /*!< read-write lock protecting the
upper levels of the index tree */
/** lock protecting the non-leaf index pages */
mutable index_lock lock;
/** Determine if the index has been committed to the
data dictionary.

@@ -306,7 +306,6 @@ void
rtr_pcur_open_low(
/*==============*/
dict_index_t* index, /*!< in: index */
ulint level, /*!< in: level in the btree */
const dtuple_t* tuple, /*!< in: tuple on which search done */
page_cur_mode_t mode, /*!< in: PAGE_CUR_L, ...;
NOTE that if the search is made using a unique
@@ -321,7 +320,7 @@ rtr_pcur_open_low(
mtr_t* mtr); /*!< in: mtr */
#define rtr_pcur_open(i,t,md,l,c,m) \
rtr_pcur_open_low(i,0,t,md,l,c,__FILE__,__LINE__,m)
rtr_pcur_open_low(i,t,md,l,c,__FILE__,__LINE__,m)
struct btr_cur_t;

@@ -457,7 +457,7 @@ struct TTASEventMutex {
sync_cell_t* cell;
sync_array_t *sync_arr = sync_array_get_and_reserve_cell(
this, SYNC_MUTEX,
this,
filename, line, &cell);
uint32_t oldval = MUTEX_STATE_LOCKED;

@@ -27,6 +27,7 @@ Created 12/9/1995 Heikki Tuuri
#include "mach0data.h"
#include "assume_aligned.h"
#include "ut0crc32.h"
#include "sync0debug.h"
extern ulong srv_log_buffer_size;

@@ -30,6 +30,7 @@ Created 9/20/1997 Heikki Tuuri
#include "buf0types.h"
#include "log0log.h"
#include "mtr0types.h"
#include "ut0mutex.h"
#include <deque>

@@ -28,6 +28,7 @@ Created 11/28/1995 Heikki Tuuri
#ifndef UNIV_INNOCHECKSUM
#include "mtr0types.h"
#include "ut0byte.h"
/*******************************************************//**
The following function is used to store data in one byte. */

@@ -65,9 +65,15 @@ savepoint. */
/** Push an object to an mtr memo stack. */
#define mtr_memo_push(m, o, t) (m)->memo_push(o, t)
#define mtr_s_lock_index(i, m) (m)->s_lock(&(i)->lock, __FILE__, __LINE__)
#define mtr_x_lock_index(i, m) (m)->x_lock(&(i)->lock, __FILE__, __LINE__)
#define mtr_sx_lock_index(i, m) (m)->sx_lock(&(i)->lock, __FILE__, __LINE__)
#ifdef UNIV_PFS_RWLOCK
# define mtr_s_lock_index(i,m) (m)->s_lock(__FILE__, __LINE__, &(i)->lock)
# define mtr_x_lock_index(i,m) (m)->x_lock(__FILE__, __LINE__, &(i)->lock)
# define mtr_sx_lock_index(i,m) (m)->u_lock(__FILE__, __LINE__, &(i)->lock)
#else
# define mtr_s_lock_index(i,m) (m)->s_lock(&(i)->lock)
# define mtr_x_lock_index(i,m) (m)->x_lock(&(i)->lock)
# define mtr_sx_lock_index(i,m) (m)->u_lock(&(i)->lock)
#endif
#define mtr_release_block_at_savepoint(m, s, b) \
(m)->release_block_at_savepoint((s), (b))
@@ -117,7 +123,7 @@ struct mtr_t {
@param lock latch to release */
inline void release_s_latch_at_savepoint(
ulint savepoint,
rw_lock_t* lock);
index_lock* lock);
/** Release the block in an mtr memo after a savepoint. */
inline void release_block_at_savepoint(
@@ -214,35 +220,38 @@ struct mtr_t {
@return the tablespace object (never NULL) */
fil_space_t* x_lock_space(ulint space_id);
/** Acquire a shared rw-latch.
@param[in] lock rw-latch
@param[in] file file name from where called
@param[in] line line number in file */
void s_lock(rw_lock_t* lock, const char* file, unsigned line)
{
rw_lock_s_lock_inline(lock, 0, file, line);
memo_push(lock, MTR_MEMO_S_LOCK);
}
/** Acquire a shared rw-latch. */
void s_lock(
#ifdef UNIV_PFS_RWLOCK
const char *file, unsigned line,
#endif
index_lock *lock)
{
lock->s_lock(SRW_LOCK_ARGS(file, line));
memo_push(lock, MTR_MEMO_S_LOCK);
}
/** Acquire an exclusive rw-latch.
@param[in] lock rw-latch
@param[in] file file name from where called
@param[in] line line number in file */
void x_lock(rw_lock_t* lock, const char* file, unsigned line)
{
rw_lock_x_lock_inline(lock, 0, file, line);
memo_push(lock, MTR_MEMO_X_LOCK);
}
/** Acquire an exclusive rw-latch. */
void x_lock(
#ifdef UNIV_PFS_RWLOCK
const char *file, unsigned line,
#endif
index_lock *lock)
{
lock->x_lock(SRW_LOCK_ARGS(file, line));
memo_push(lock, MTR_MEMO_X_LOCK);
}
/** Acquire an shared/exclusive rw-latch.
@param[in] lock rw-latch
@param[in] file file name from where called
@param[in] line line number in file */
void sx_lock(rw_lock_t* lock, const char* file, unsigned line)
{
rw_lock_sx_lock_inline(lock, 0, file, line);
memo_push(lock, MTR_MEMO_SX_LOCK);
}
/** Acquire an update latch. */
void u_lock(
#ifdef UNIV_PFS_RWLOCK
const char *file, unsigned line,
#endif
index_lock *lock)
{
lock->u_lock(SRW_LOCK_ARGS(file, line));
memo_push(lock, MTR_MEMO_SX_LOCK);
}
/** Acquire a tablespace S-latch.
@param[in] space tablespace */
@@ -315,6 +324,17 @@ public:
return false;
#endif
}
/** Latch a buffer pool block.
@param block block to be latched
@param rw_latch RW_S_LATCH, RW_SX_LATCH, RW_X_LATCH, RW_NO_LATCH */
void page_lock(buf_block_t *block, ulint rw_latch);
/** Upgrade U locks on a block to X */
void page_lock_upgrade(const buf_block_t &block);
/** Upgrade U lock to X */
void lock_upgrade(const index_lock &lock);
/** Check if we are holding tablespace latch
@param space tablespace to search for
@param shared whether to look for shared latch, instead of exclusive
@@ -326,7 +346,7 @@ public:
@param lock latch to search for
@param type held latch type
@return whether (lock,type) is contained */
bool memo_contains(const rw_lock_t &lock, mtr_memo_type_t type)
bool memo_contains(const index_lock &lock, mtr_memo_type_t type)
MY_ATTRIBUTE((warn_unused_result));
/** Check if memo contains the given item.

@@ -70,7 +70,7 @@ savepoint. */
void
mtr_t::release_s_latch_at_savepoint(
ulint savepoint,
rw_lock_t* lock)
index_lock* lock)
{
ut_ad(is_active());
ut_ad(m_memo.size() > savepoint);
@@ -80,7 +80,7 @@ mtr_t::release_s_latch_at_savepoint(
ut_ad(slot->object == lock);
ut_ad(slot->type == MTR_MEMO_S_LOCK);
rw_lock_s_unlock(lock);
lock->s_unlock();
slot->object = NULL;
}
@@ -109,7 +109,7 @@ mtr_t::sx_latch_at_savepoint(
/* == RW_NO_LATCH */
ut_a(slot->type == MTR_MEMO_BUF_FIX);
rw_lock_sx_lock(&block->lock);
block->lock.u_lock();
if (!m_made_dirty) {
m_made_dirty = is_block_dirtied(block);
@@ -142,7 +142,7 @@ mtr_t::x_latch_at_savepoint(
/* == RW_NO_LATCH */
ut_a(slot->type == MTR_MEMO_BUF_FIX);
rw_lock_x_lock(&block->lock);
block->lock.x_lock();
if (!m_made_dirty) {
m_made_dirty = is_block_dirtied(block);

@@ -28,7 +28,7 @@ Created 11/26/1995 Heikki Tuuri
#define mtr0types_h
#ifndef UNIV_INNOCHECKSUM
#include "sync0rw.h"
#include "buf0types.h"
#else
#include "univ.i"
#endif /* UNIV_INNOCHECKSUM */

@@ -34,8 +34,7 @@ row_log_abort_sec(
/*===============*/
dict_index_t* index) /*!< in/out: index (x-latched) */
{
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_u_or_x());
ut_ad(!dict_index_is_clust(index));
dict_index_set_online_status(index, ONLINE_INDEX_ABORTED);
row_log_free(index->online_log);
@@ -56,10 +55,7 @@ row_log_online_op_try(
trx_id_t trx_id) /*!< in: transaction ID for insert,
or 0 for delete */
{
ut_ad(rw_lock_own_flagged(
dict_index_get_lock(index),
RW_LOCK_FLAG_S | RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_any());
switch (dict_index_get_online_status(index)) {
case ONLINE_INDEX_COMPLETE:

@@ -392,15 +392,6 @@ enum monitor_id_t {
MONITOR_OVLD_SRV_DBLWR_WRITES,
MONITOR_OVLD_SRV_DBLWR_PAGES_WRITTEN,
MONITOR_OVLD_SRV_PAGE_SIZE,
MONITOR_OVLD_RWLOCK_S_SPIN_WAITS,
MONITOR_OVLD_RWLOCK_X_SPIN_WAITS,
MONITOR_OVLD_RWLOCK_SX_SPIN_WAITS,
MONITOR_OVLD_RWLOCK_S_SPIN_ROUNDS,
MONITOR_OVLD_RWLOCK_X_SPIN_ROUNDS,
MONITOR_OVLD_RWLOCK_SX_SPIN_ROUNDS,
MONITOR_OVLD_RWLOCK_S_OS_WAITS,
MONITOR_OVLD_RWLOCK_X_OS_WAITS,
MONITOR_OVLD_RWLOCK_SX_OS_WAITS,
/* Data DML related counters */
MONITOR_MODULE_DML_STATS,

@@ -85,16 +85,49 @@ public:
#endif
bool rd_lock_try() { uint32_t l; return read_trylock(l); }
bool wr_lock_try() { return write_trylock(); }
/** @return whether the lock was acquired without waiting
@tparam support_u_lock dummy parameter for UNIV_PFS_RWLOCK */
template<bool support_u_lock= false>
void rd_lock() { uint32_t l; if (!read_trylock(l)) read_lock(l); }
void u_lock() { uint32_t l; if (!update_trylock(l)) update_lock(l); }
bool rd_lock()
{
uint32_t l;
if (read_trylock(l))
return true;
read_lock(l);
return false;
}
/** @return whether the lock was acquired without waiting */
bool u_lock()
{
uint32_t l;
if (update_trylock(l))
return true;
update_lock(l);
return false;
}
bool u_lock_try() { uint32_t l; return update_trylock(l); }
void u_wr_upgrade() { if (!upgrade_trylock()) write_lock(true); }
/** @return whether the lock was upgraded without waiting */
bool u_wr_upgrade()
{
if (upgrade_trylock())
return true;
write_lock(true);
return false;
}
/** @return whether the lock was acquired without waiting */
template<bool support_u_lock= false>
void wr_lock() { if (!write_trylock()) write_lock(false); }
bool wr_lock()
{
if (write_trylock())
return true;
write_lock(false);
return false;
}
void rd_unlock();
void u_unlock();
void wr_unlock();
/** @return whether any writer is waiting */
bool is_waiting() const { return value() & WRITER_WAITING; }
};
#ifndef UNIV_PFS_RWLOCK
@@ -114,11 +147,11 @@ class srw_lock
PSI_rwlock *pfs_psi;
template<bool support_u_lock>
ATTRIBUTE_NOINLINE void psi_rd_lock(const char *file, unsigned line);
ATTRIBUTE_NOINLINE bool psi_rd_lock(const char *file, unsigned line);
template<bool support_u_lock>
ATTRIBUTE_NOINLINE void psi_wr_lock(const char *file, unsigned line);
ATTRIBUTE_NOINLINE void psi_u_lock(const char *file, unsigned line);
ATTRIBUTE_NOINLINE void psi_u_wr_upgrade(const char *file, unsigned line);
ATTRIBUTE_NOINLINE bool psi_wr_lock(const char *file, unsigned line);
ATTRIBUTE_NOINLINE bool psi_u_lock(const char *file, unsigned line);
ATTRIBUTE_NOINLINE bool psi_u_wr_upgrade(const char *file, unsigned line);
public:
void init(mysql_pfs_key_t key)
{
@@ -135,12 +168,12 @@ public:
lock.destroy();
}
template<bool support_u_lock= false>
void rd_lock(const char *file, unsigned line)
bool rd_lock(const char *file, unsigned line)
{
if (psi_likely(pfs_psi != nullptr))
psi_rd_lock<support_u_lock>(file, line);
return psi_rd_lock<support_u_lock>(file, line);
else
lock.rd_lock();
return lock.rd_lock();
}
void rd_unlock()
{
@@ -148,12 +181,12 @@ public:
PSI_RWLOCK_CALL(unlock_rwlock)(pfs_psi);
lock.rd_unlock();
}
void u_lock(const char *file, unsigned line)
bool u_lock(const char *file, unsigned line)
{
if (psi_likely(pfs_psi != nullptr))
psi_u_lock(file, line);
return psi_u_lock(file, line);
else
lock.u_lock();
return lock.u_lock();
}
void u_unlock()
{
@@ -162,12 +195,12 @@ public:
lock.u_unlock();
}
template<bool support_u_lock= false>
void wr_lock(const char *file, unsigned line)
bool wr_lock(const char *file, unsigned line)
{
if (psi_likely(pfs_psi != nullptr))
psi_wr_lock<support_u_lock>(file, line);
return psi_wr_lock<support_u_lock>(file, line);
else
lock.wr_lock();
return lock.wr_lock();
}
void wr_unlock()
{
@@ -175,15 +208,16 @@ public:
PSI_RWLOCK_CALL(unlock_rwlock)(pfs_psi);
lock.wr_unlock();
}
void u_wr_upgrade(const char *file, unsigned line)
bool u_wr_upgrade(const char *file, unsigned line)
{
if (psi_likely(pfs_psi != nullptr))
psi_u_wr_upgrade(file, line);
return psi_u_wr_upgrade(file, line);
else
lock.u_wr_upgrade();
return lock.u_wr_upgrade();
}
bool rd_lock_try() { return lock.rd_lock_try(); }
bool u_lock_try() { return lock.u_lock_try(); }
bool wr_lock_try() { return lock.wr_lock_try(); }
bool is_waiting() const { return lock.is_waiting(); }
};
#endif

@@ -0,0 +1,463 @@
/*****************************************************************************
Copyright (c) 2020, MariaDB Corporation.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
*****************************************************************************/
#pragma once
#include "srw_lock.h"
#include "my_atomic_wrapper.h"
#include "os0thread.h"
#ifdef UNIV_DEBUG
# include <set>
#endif
/** A "fat" rw-lock that supports
S (shared), U (update, or shared-exclusive), and X (exclusive) modes
as well as recursive U and X latch acquisition
@tparam srw srw_lock_low or srw_lock */
template<typename srw>
class sux_lock final
{
/** The underlying non-recursive lock */
srw lock;
/** The owner of the U or X lock (0 if none); protected by lock */
std::atomic<os_thread_id_t> writer;
/** Special writer!=0 value to indicate that the lock is non-recursive
and will be released by an I/O thread */
#if defined __linux__ || defined _WIN32
static constexpr os_thread_id_t FOR_IO= os_thread_id_t(~0UL);
#else
# define FOR_IO ((os_thread_id_t) ~0UL) /* it could be a pointer */
#endif
/** Numbers of U and X locks. Protected by lock. */
uint32_t recursive;
/** Number of blocking waits */
std::atomic<uint32_t> waits;
#ifdef UNIV_DEBUG
/** Protects readers */
mutable srw_mutex readers_lock;
/** Threads that hold the lock in shared mode */
std::atomic<std::set<os_thread_id_t>*> readers;
#endif
/** The multiplier in recursive for X locks */
static constexpr uint32_t RECURSIVE_X= 1U;
/** The multiplier in recursive for U locks */
static constexpr uint32_t RECURSIVE_U= 1U << 16;
/** The maximum allowed level of recursion */
static constexpr uint32_t RECURSIVE_MAX= RECURSIVE_U - 1;
public:
#ifdef UNIV_PFS_RWLOCK
inline void init();
#endif
void SRW_LOCK_INIT(mysql_pfs_key_t key)
{
lock.SRW_LOCK_INIT(key);
ut_ad(!writer.load(std::memory_order_relaxed));
ut_ad(!recursive);
ut_ad(!waits.load(std::memory_order_relaxed));
ut_d(readers_lock.init());
ut_ad(!readers.load(std::memory_order_relaxed));
}
/** Free the rw-lock after create() */
void free()
{
ut_ad(!writer.load(std::memory_order_relaxed));
ut_ad(!recursive);
#ifdef UNIV_DEBUG
readers_lock.destroy();
if (auto r= readers.load(std::memory_order_relaxed))
{
ut_ad(r->empty());
delete r;
readers.store(nullptr, std::memory_order_relaxed);
}
#endif
lock.destroy();
}
/** @return number of blocking waits */
uint32_t waited() const { return waits.load(std::memory_order_relaxed); }
/** Reset the number of blocking waits */
void reset_waited() { waits.store(0, std::memory_order_relaxed); }
/** needed for dict_index_t::clone() */
inline void operator=(const sux_lock&);
#ifdef UNIV_DEBUG
/** @return whether no recursive locks are being held */
bool not_recursive() const
{
ut_ad(recursive);
return recursive == RECURSIVE_X || recursive == RECURSIVE_U;
}
#endif
/** Acquire a recursive lock */
template<bool allow_readers> void writer_recurse()
{
ut_ad(writer == os_thread_get_curr_id());
ut_d(auto rec= (recursive / (allow_readers ? RECURSIVE_U : RECURSIVE_X)) &
RECURSIVE_MAX);
ut_ad(allow_readers ? recursive : rec);
ut_ad(rec < RECURSIVE_MAX);
recursive+= allow_readers ? RECURSIVE_U : RECURSIVE_X;
}
private:
/** Transfer the ownership of a write lock to another thread
@param id the new owner of the U or X lock */
void set_new_owner(os_thread_id_t id)
{
IF_DBUG(DBUG_ASSERT(writer.exchange(id, std::memory_order_relaxed)),
writer.store(id, std::memory_order_relaxed));
}
/** Assign the ownership of a write lock to a thread
@param id the owner of the U or X lock */
void set_first_owner(os_thread_id_t id)
{
IF_DBUG(DBUG_ASSERT(!writer.exchange(id, std::memory_order_relaxed)),
writer.store(id, std::memory_order_relaxed));
}
#ifdef UNIV_DEBUG
/** Register the current thread as a holder of a shared lock */
void s_lock_register()
{
readers_lock.wr_lock();
auto r= readers.load(std::memory_order_relaxed);
if (!r)
{
r= new std::set<os_thread_id_t>();
readers.store(r, std::memory_order_relaxed);
}
ut_ad(r->emplace(os_thread_get_curr_id()).second);
readers_lock.wr_unlock();
}
#endif
public:
/** In crash recovery or the change buffer, claim the ownership
of the exclusive block lock to the current thread */
void claim_ownership() { set_new_owner(os_thread_get_curr_id()); }
/** @return whether the current thread is holding X or U latch */
bool have_u_or_x() const
{
if (os_thread_get_curr_id() != writer.load(std::memory_order_relaxed))
return false;
ut_ad(recursive);
return true;
}
/** @return whether the current thread is holding U but not X latch */
bool have_u_not_x() const
{ return have_u_or_x() && !((recursive / RECURSIVE_X) & RECURSIVE_MAX); }
/** @return whether the current thread is holding X latch */
bool have_x() const
{ return have_u_or_x() && ((recursive / RECURSIVE_X) & RECURSIVE_MAX); }
#ifdef UNIV_DEBUG
/** @return whether the current thread is holding S latch */
bool have_s() const
{
if (auto r= readers.load(std::memory_order_relaxed))
{
readers_lock.wr_lock();
bool found= r->find(os_thread_get_curr_id()) != r->end();
readers_lock.wr_unlock();
return found;
}
return false;
}
/** @return whether the current thread is holding the latch */
bool have_any() const { return have_u_or_x() || have_s(); }
#endif
/** Acquire a shared lock */
inline void s_lock();
inline void s_lock(const char *file, unsigned line);
/** Acquire an update lock */
inline void u_lock();
inline void u_lock(const char *file, unsigned line);
/** Acquire an exclusive lock */
inline void x_lock(bool for_io= false);
inline void x_lock(const char *file, unsigned line);
/** Acquire a recursive exclusive lock */
void x_lock_recursive() { writer_recurse<false>(); }
/** Upgrade an update lock */
inline void u_x_upgrade();
inline void u_x_upgrade(const char *file, unsigned line);
/** Acquire an exclusive lock or upgrade an update lock
@return whether U locks were upgraded to X */
inline bool x_lock_upgraded();
/** @return whether a shared lock was acquired */
bool s_lock_try()
{
bool acquired= lock.rd_lock_try();
ut_d(if (acquired) s_lock_register());
return acquired;
}
/** Try to acquire an update lock
@param for_io whether the lock will be released by another thread
@return whether the update lock was acquired */
inline bool u_lock_try(bool for_io);
/** Try to acquire an exclusive lock
@return whether an exclusive lock was acquired */
inline bool x_lock_try();
/** Release a shared lock */
void s_unlock()
{
#ifdef UNIV_DEBUG
auto r= readers.load(std::memory_order_relaxed);
ut_ad(r);
readers_lock.wr_lock();
ut_ad(r->erase(os_thread_get_curr_id()) == 1);
readers_lock.wr_unlock();
#endif
lock.rd_unlock();
}
/** Release an update or exclusive lock
@param allow_readers whether we are releasing a U lock
@param claim_ownership whether the lock was acquired by another thread */
void u_or_x_unlock(bool allow_readers, bool claim_ownership= false)
{
ut_d(auto owner= writer.load(std::memory_order_relaxed));
ut_ad(owner == os_thread_get_curr_id() ||
(owner == FOR_IO && claim_ownership &&
recursive == (allow_readers ? RECURSIVE_U : RECURSIVE_X)));
ut_d(auto rec= (recursive / (allow_readers ? RECURSIVE_U : RECURSIVE_X)) &
RECURSIVE_MAX);
ut_ad(rec);
if (!(recursive-= allow_readers ? RECURSIVE_U : RECURSIVE_X))
{
set_new_owner(0);
if (allow_readers)
lock.u_unlock();
else
lock.wr_unlock();
}
}
/** Release an update lock */
void u_unlock(bool claim_ownership= false)
{ u_or_x_unlock(true, claim_ownership); }
/** Release an exclusive lock */
void x_unlock(bool claim_ownership= false)
{ u_or_x_unlock(false, claim_ownership); }
/** @return whether any writer is waiting */
bool is_waiting() const { return lock.is_waiting(); }
};
/** needed for dict_index_t::clone() */
template<> inline void sux_lock<srw_lock>::operator=(const sux_lock&)
{
memset((void*) this, 0, sizeof *this);
}
typedef sux_lock<srw_lock_low> block_lock;
#ifndef UNIV_PFS_RWLOCK
typedef block_lock index_lock;
#else
typedef sux_lock<srw_lock> index_lock;
template<> inline void sux_lock<srw_lock_low>::init()
{
  lock.init();
  ut_ad(!writer.load(std::memory_order_relaxed));
  ut_ad(!recursive);
  ut_ad(!waits.load(std::memory_order_relaxed));
  ut_d(readers_lock.init());
  ut_ad(!readers.load(std::memory_order_relaxed));
}
template<>
inline void sux_lock<srw_lock>::s_lock(const char *file, unsigned line)
{
  ut_ad(!have_x());
  ut_ad(!have_s());
  if (!lock.template rd_lock<true>(file, line))
    waits.fetch_add(1, std::memory_order_relaxed);
  ut_d(s_lock_register());
}

template<>
inline void sux_lock<srw_lock>::u_lock(const char *file, unsigned line)
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
    writer_recurse<true>();
  else
  {
    if (!lock.u_lock(file, line))
      waits.fetch_add(1, std::memory_order_relaxed);
    ut_ad(!recursive);
    recursive= RECURSIVE_U;
    set_first_owner(id);
  }
}

template<>
inline void sux_lock<srw_lock>::x_lock(const char *file, unsigned line)
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
    writer_recurse<false>();
  else
  {
    if (!lock.template wr_lock<true>(file, line))
      waits.fetch_add(1, std::memory_order_relaxed);
    ut_ad(!recursive);
    recursive= RECURSIVE_X;
    set_first_owner(id);
  }
}

template<>
inline void sux_lock<srw_lock>::u_x_upgrade(const char *file, unsigned line)
{
  ut_ad(have_u_not_x());
  if (!lock.u_wr_upgrade(file, line))
    waits.fetch_add(1, std::memory_order_relaxed);
  recursive/= RECURSIVE_U;
}
#endif
template<>
inline void sux_lock<srw_lock_low>::s_lock()
{
  ut_ad(!have_x());
  ut_ad(!have_s());
  if (!lock.template rd_lock<true>())
    waits.fetch_add(1, std::memory_order_relaxed);
  ut_d(s_lock_register());
}

template<>
inline void sux_lock<srw_lock_low>::u_lock()
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
    writer_recurse<true>();
  else
  {
    if (!lock.u_lock())
      waits.fetch_add(1, std::memory_order_relaxed);
    ut_ad(!recursive);
    recursive= RECURSIVE_U;
    set_first_owner(id);
  }
}

template<>
inline void sux_lock<srw_lock_low>::x_lock(bool for_io)
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
  {
    ut_ad(!for_io);
    writer_recurse<false>();
  }
  else
  {
    if (!lock.template wr_lock<true>())
      waits.fetch_add(1, std::memory_order_relaxed);
    ut_ad(!recursive);
    recursive= RECURSIVE_X;
    set_first_owner(for_io ? FOR_IO : id);
  }
}

template<>
inline void sux_lock<srw_lock_low>::u_x_upgrade()
{
  ut_ad(have_u_not_x());
  if (!lock.u_wr_upgrade())
    waits.fetch_add(1, std::memory_order_relaxed);
  recursive/= RECURSIVE_U;
}

template<> inline bool sux_lock<srw_lock_low>::x_lock_upgraded()
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
  {
    ut_ad(recursive);
    static_assert(RECURSIVE_X == 1, "compatibility");
    if (recursive & RECURSIVE_MAX)
    {
      writer_recurse<false>();
      return false;
    }
    /* Upgrade the lock. */
    lock.u_wr_upgrade();
    recursive/= RECURSIVE_U;
    return true;
  }
  else
  {
    lock.template wr_lock<true>();
    ut_ad(!recursive);
    recursive= RECURSIVE_X;
    set_first_owner(id);
    return false;
  }
}

template<>
inline bool sux_lock<srw_lock_low>::u_lock_try(bool for_io)
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
  {
    if (for_io)
      return false;
    writer_recurse<true>();
    return true;
  }
  if (lock.u_lock_try())
  {
    ut_ad(!recursive);
    recursive= RECURSIVE_U;
    set_first_owner(for_io ? FOR_IO : id);
    return true;
  }
  return false;
}

template<>
inline bool sux_lock<srw_lock_low>::x_lock_try()
{
  os_thread_id_t id= os_thread_get_curr_id();
  if (writer.load(std::memory_order_relaxed) == id)
  {
    writer_recurse<false>();
    return true;
  }
  if (lock.wr_lock_try())
  {
    ut_ad(!recursive);
    recursive= RECURSIVE_X;
    set_first_owner(id);
    return true;
  }
  return false;
}


@@ -46,7 +46,6 @@ UNIV_INLINE
 sync_array_t*
 sync_array_get_and_reserve_cell(
 	void*		object, /*!< in: pointer to the object to wait for */
-	ulint		type,	/*!< in: lock request type */
 	const char*	file,	/*!< in: file where requested */
 	unsigned	line,	/*!< in: line where requested */
 	sync_cell_t**	cell);	/*!< out: the cell reserved, never NULL */
@@ -56,8 +55,7 @@ The event of the cell is reset to nonsignalled state. */
 sync_cell_t*
 sync_array_reserve_cell(
 	sync_array_t*	arr,	/*!< in: wait array */
-	void*		object, /*!< in: pointer to the object to wait for */
-	ulint		type,	/*!< in: lock request type */
+	void*		object, /*!< in: pointer to the object to wait for */
 	const char*	file,	/*!< in: file where requested */
 	unsigned	line);	/*!< in: line where requested */


@@ -1,7 +1,7 @@
 /*****************************************************************************
 
 Copyright (c) 1995, 2015, Oracle and/or its affiliates. All Rights Reserved.
-Copyright (c) 2017, MariaDB Corporation.
+Copyright (c) 2017, 2020, MariaDB Corporation.
 
 This program is free software; you can redistribute it and/or modify it under
 the terms of the GNU General Public License as published by the Free Software
@@ -57,9 +57,7 @@ instance until we can reserve an empty cell of it.
 UNIV_INLINE
 sync_array_t*
 sync_array_get_and_reserve_cell(
-/*============================*/
-	void*		object, /*!< in: pointer to the object to wait for */
-	ulint		type,	/*!< in: lock request type */
+	void*		object, /*!< in: pointer to the object to wait for */
 	const char*	file,	/*!< in: file where requested */
 	unsigned	line,	/*!< in: line where requested */
 	sync_cell_t**	cell)	/*!< out: the cell reserved, never NULL */
@@ -72,8 +70,7 @@ sync_array_get_and_reserve_cell(
 		we still try at most sync_array_size times, in case any
 		of the sync_array we get is full */
 		sync_arr = sync_array_get();
-		*cell = sync_array_reserve_cell(sync_arr, object, type,
-						file, line);
+		*cell = sync_array_reserve_cell(sync_arr, object, file, line);
 	}
 
 	/* This won't be true every time, for the loop above may execute


@@ -1,838 +0,0 @@
/*****************************************************************************
Copyright (c) 1995, 2016, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2008, Google Inc.
Copyright (c) 2017, 2020, MariaDB Corporation.
Portions of this file contain modifications contributed and copyrighted by
Google, Inc. Those modifications are gratefully acknowledged and are described
briefly in the InnoDB documentation. The contributions by Google are
incorporated with their permission, and subject to the conditions contained in
the file COPYING.Google.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
*****************************************************************************/
/**************************************************//**
@file include/sync0rw.h
The read-write lock (for threads, not for database transactions)
Created 9/11/1995 Heikki Tuuri
*******************************************************/
#ifndef sync0rw_h
#define sync0rw_h
#include "os0event.h"
#include "ut0mutex.h"
#include "ilist.h"
/** Counters for RW locks. */
struct rw_lock_stats_t {
typedef ib_counter_t<int64_t, IB_N_SLOTS> int64_counter_t;
/** number of spin waits on rw-latches,
resulted during shared (read) locks */
int64_counter_t rw_s_spin_wait_count;
/** number of spin loop rounds on rw-latches,
resulted during shared (read) locks */
int64_counter_t rw_s_spin_round_count;
/** number of OS waits on rw-latches,
resulted during shared (read) locks */
int64_counter_t rw_s_os_wait_count;
/** number of spin waits on rw-latches,
resulted during exclusive (write) locks */
int64_counter_t rw_x_spin_wait_count;
/** number of spin loop rounds on rw-latches,
resulted during exclusive (write) locks */
int64_counter_t rw_x_spin_round_count;
/** number of OS waits on rw-latches,
resulted during exclusive (write) locks */
int64_counter_t rw_x_os_wait_count;
/** number of spin waits on rw-latches,
resulted during sx locks */
int64_counter_t rw_sx_spin_wait_count;
/** number of spin loop rounds on rw-latches,
resulted during sx locks */
int64_counter_t rw_sx_spin_round_count;
/** number of OS waits on rw-latches,
resulted during sx locks */
int64_counter_t rw_sx_os_wait_count;
};
/* Latch types; these are used also in btr0btr.h and mtr0mtr.h: keep the
numerical values smaller than 30 (smaller than BTR_MODIFY_TREE and
MTR_MEMO_MODIFY), preserve their relative order, and keep each value a
power of two so that they can also be used as an ORed combination of flags. */
enum rw_lock_type_t {
RW_S_LATCH = 1,
RW_X_LATCH = 2,
RW_SX_LATCH = 4,
RW_NO_LATCH = 8
};
/* We decrement lock_word by X_LOCK_DECR for each x_lock. It is also the
start value for the lock_word, meaning that it limits the maximum number
of concurrent read locks before the rw_lock breaks. */
/* We decrement lock_word by X_LOCK_HALF_DECR for sx_lock. */
#define X_LOCK_DECR 0x20000000
#define X_LOCK_HALF_DECR 0x10000000
#ifdef rw_lock_t
#undef rw_lock_t
#endif
struct rw_lock_t;
#ifdef UNIV_DEBUG
struct rw_lock_debug_t;
#endif /* UNIV_DEBUG */
extern ilist<rw_lock_t> rw_lock_list;
extern ib_mutex_t rw_lock_list_mutex;
/** Counters for RW locks. */
extern rw_lock_stats_t rw_lock_stats;
#ifndef UNIV_PFS_RWLOCK
/******************************************************************//**
Creates, or rather, initializes an rw-lock object in a specified memory
location (which must be appropriately aligned). The rw-lock is initialized
to the non-locked state. Explicit freeing of the rw-lock with rw_lock_free
is necessary only if the memory block containing it is freed.
If the MySQL performance schema is enabled and "UNIV_PFS_RWLOCK" is
defined, the rw-locks are instrumented with performance schema probes. */
# ifdef UNIV_DEBUG
# define rw_lock_create(K, L, level) \
rw_lock_create_func((L), (level), __FILE__, __LINE__)
# else /* UNIV_DEBUG */
# define rw_lock_create(K, L, level) \
rw_lock_create_func((L), __FILE__, __LINE__)
# endif /* UNIV_DEBUG */
/**************************************************************//**
NOTE! The following macros should be used in rw locking and
unlocking, not the corresponding function. */
# define rw_lock_s_lock(M) \
rw_lock_s_lock_func((M), 0, __FILE__, __LINE__)
# define rw_lock_s_lock_inline(M, P, F, L) \
rw_lock_s_lock_func((M), (P), (F), (L))
# define rw_lock_s_lock_gen(M, P) \
rw_lock_s_lock_func((M), (P), __FILE__, __LINE__)
# define rw_lock_s_lock_nowait(M, F, L) \
rw_lock_s_lock_low((M), 0, (F), (L))
# ifdef UNIV_DEBUG
# define rw_lock_s_unlock_gen(L, P) rw_lock_s_unlock_func(P, L)
# else
# define rw_lock_s_unlock_gen(L, P) rw_lock_s_unlock_func(L)
# endif /* UNIV_DEBUG */
#define rw_lock_sx_lock(L) \
rw_lock_sx_lock_func((L), 0, __FILE__, __LINE__)
#define rw_lock_sx_lock_inline(M, P, F, L) \
rw_lock_sx_lock_func((M), (P), (F), (L))
#define rw_lock_sx_lock_gen(M, P) \
rw_lock_sx_lock_func((M), (P), __FILE__, __LINE__)
#define rw_lock_sx_lock_nowait(M, P) \
rw_lock_sx_lock_low((M), (P), __FILE__, __LINE__)
# ifdef UNIV_DEBUG
# define rw_lock_sx_unlock(L) rw_lock_sx_unlock_func(0, L)
# define rw_lock_sx_unlock_gen(L, P) rw_lock_sx_unlock_func(P, L)
# else /* UNIV_DEBUG */
# define rw_lock_sx_unlock(L) rw_lock_sx_unlock_func(L)
# define rw_lock_sx_unlock_gen(L, P) rw_lock_sx_unlock_func(L)
# endif /* UNIV_DEBUG */
# define rw_lock_x_lock(M) \
rw_lock_x_lock_func((M), 0, __FILE__, __LINE__)
# define rw_lock_x_lock_inline(M, P, F, L) \
rw_lock_x_lock_func((M), (P), (F), (L))
# define rw_lock_x_lock_gen(M, P) \
rw_lock_x_lock_func((M), (P), __FILE__, __LINE__)
# define rw_lock_x_lock_nowait(M) \
rw_lock_x_lock_func_nowait((M), __FILE__, __LINE__)
# define rw_lock_x_lock_func_nowait_inline(M, F, L) \
rw_lock_x_lock_func_nowait((M), (F), (L))
# ifdef UNIV_DEBUG
# define rw_lock_x_unlock_gen(L, P) rw_lock_x_unlock_func(P, L)
# else
# define rw_lock_x_unlock_gen(L, P) rw_lock_x_unlock_func(L)
# endif
# define rw_lock_free(M) rw_lock_free_func(M)
#else /* !UNIV_PFS_RWLOCK */
/* Following macros point to Performance Schema instrumented functions. */
# ifdef UNIV_DEBUG
# define rw_lock_create(K, L, level) \
pfs_rw_lock_create_func((K), (L), (level), __FILE__, __LINE__)
# else /* UNIV_DEBUG */
# define rw_lock_create(K, L, level) \
pfs_rw_lock_create_func((K), (L), __FILE__, __LINE__)
# endif /* UNIV_DEBUG */
/******************************************************************
NOTE! The following macros should be used in rw locking and
unlocking, not the corresponding function. */
# define rw_lock_s_lock(M) \
pfs_rw_lock_s_lock_func((M), 0, __FILE__, __LINE__)
# define rw_lock_s_lock_inline(M, P, F, L) \
pfs_rw_lock_s_lock_func((M), (P), (F), (L))
# define rw_lock_s_lock_gen(M, P) \
pfs_rw_lock_s_lock_func((M), (P), __FILE__, __LINE__)
# define rw_lock_s_lock_nowait(M, F, L) \
pfs_rw_lock_s_lock_low((M), 0, (F), (L))
# ifdef UNIV_DEBUG
# define rw_lock_s_unlock_gen(L, P) pfs_rw_lock_s_unlock_func(P, L)
# else
# define rw_lock_s_unlock_gen(L, P) pfs_rw_lock_s_unlock_func(L)
# endif
# define rw_lock_sx_lock(M) \
pfs_rw_lock_sx_lock_func((M), 0, __FILE__, __LINE__)
# define rw_lock_sx_lock_inline(M, P, F, L) \
pfs_rw_lock_sx_lock_func((M), (P), (F), (L))
# define rw_lock_sx_lock_gen(M, P) \
pfs_rw_lock_sx_lock_func((M), (P), __FILE__, __LINE__)
#define rw_lock_sx_lock_nowait(M, P) \
pfs_rw_lock_sx_lock_low((M), (P), __FILE__, __LINE__)
# ifdef UNIV_DEBUG
# define rw_lock_sx_unlock(L) pfs_rw_lock_sx_unlock_func(0, L)
# define rw_lock_sx_unlock_gen(L, P) pfs_rw_lock_sx_unlock_func(P, L)
# else
# define rw_lock_sx_unlock(L) pfs_rw_lock_sx_unlock_func(L)
# define rw_lock_sx_unlock_gen(L, P) pfs_rw_lock_sx_unlock_func(L)
# endif
# define rw_lock_x_lock(M) \
pfs_rw_lock_x_lock_func((M), 0, __FILE__, __LINE__)
# define rw_lock_x_lock_inline(M, P, F, L) \
pfs_rw_lock_x_lock_func((M), (P), (F), (L))
# define rw_lock_x_lock_gen(M, P) \
pfs_rw_lock_x_lock_func((M), (P), __FILE__, __LINE__)
# define rw_lock_x_lock_nowait(M) \
pfs_rw_lock_x_lock_func_nowait((M), __FILE__, __LINE__)
# define rw_lock_x_lock_func_nowait_inline(M, F, L) \
pfs_rw_lock_x_lock_func_nowait((M), (F), (L))
# ifdef UNIV_DEBUG
# define rw_lock_x_unlock_gen(L, P) pfs_rw_lock_x_unlock_func(P, L)
# else
# define rw_lock_x_unlock_gen(L, P) pfs_rw_lock_x_unlock_func(L)
# endif
# define rw_lock_free(M) pfs_rw_lock_free_func(M)
#endif /* !UNIV_PFS_RWLOCK */
#define rw_lock_s_unlock(L) rw_lock_s_unlock_gen(L, 0)
#define rw_lock_x_unlock(L) rw_lock_x_unlock_gen(L, 0)
/******************************************************************//**
Creates, or rather, initializes an rw-lock object in a specified memory
location (which must be appropriately aligned). The rw-lock is initialized
to the non-locked state. Explicit freeing of the rw-lock with rw_lock_free
is necessary only if the memory block containing it is freed. */
void
rw_lock_create_func(
/*================*/
rw_lock_t* lock, /*!< in: pointer to memory */
#ifdef UNIV_DEBUG
latch_level_t level, /*!< in: level */
#endif /* UNIV_DEBUG */
const char* cfile_name, /*!< in: file name where created */
unsigned cline); /*!< in: file line where created */
/******************************************************************//**
Calling this function is obligatory only if the memory buffer containing
the rw-lock is freed. Removes an rw-lock object from the global list. The
rw-lock is checked to be in the non-locked state. */
void
rw_lock_free_func(
/*==============*/
rw_lock_t* lock); /*!< in/out: rw-lock */
#ifdef UNIV_DEBUG
/******************************************************************//**
Checks that the rw-lock has been initialized and that there are no
simultaneous shared and exclusive locks.
@return true */
bool
rw_lock_validate(
/*=============*/
const rw_lock_t* lock); /*!< in: rw-lock */
#endif /* UNIV_DEBUG */
/******************************************************************//**
Low-level function which tries to lock an rw-lock in s-mode.
@return TRUE if success */
UNIV_INLINE
ibool
rw_lock_s_lock_low(
/*===============*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass MY_ATTRIBUTE((unused)),
/*!< in: pass value; != 0, if the lock will be
passed to another thread to unlock */
const char* file_name, /*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function, except if
you supply the file name and line number. Lock an rw-lock in shared mode
for the current thread. If the rw-lock is locked in exclusive mode, or
there is an exclusive lock request waiting, the function spins a preset
time (controlled by srv_n_spin_wait_rounds), waiting for the lock, before
suspending the thread. */
UNIV_INLINE
void
rw_lock_s_lock_func(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function! Lock an
rw-lock in exclusive mode for the current thread if the lock can be
obtained immediately.
@return TRUE if success */
UNIV_INLINE
ibool
rw_lock_x_lock_func_nowait(
/*=======================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Releases a shared mode lock. */
UNIV_INLINE
void
rw_lock_s_unlock_func(
/*==================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function! Lock an
rw-lock in exclusive mode for the current thread. If the rw-lock is locked
in shared or exclusive mode, or there is an exclusive lock request waiting,
the function spins a preset time (controlled by srv_n_spin_wait_rounds), waiting
for the lock, before suspending the thread. If the same thread has an x-lock
on the rw-lock, locking succeeds, with the following exception: if pass != 0,
only a single x-lock may be taken on the lock. NOTE: If the same thread has
an s-lock, locking does not succeed! */
void
rw_lock_x_lock_func(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Low-level function for acquiring an sx lock.
@return FALSE if did not succeed, TRUE if success. */
ibool
rw_lock_sx_lock_low(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function! Lock an
rw-lock in SX mode for the current thread. If the rw-lock is locked
in exclusive mode, or there is an exclusive lock request waiting,
the function spins a preset time (controlled by SYNC_SPIN_ROUNDS), waiting
for the lock, before suspending the thread. If the same thread has an x-lock
on the rw-lock, locking succeeds, with the following exception: if pass != 0,
only a single sx-lock may be taken on the lock. NOTE: If the same thread has
an s-lock, locking does not succeed! */
void
rw_lock_sx_lock_func(
/*=================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Releases an exclusive mode lock. */
UNIV_INLINE
void
rw_lock_x_unlock_func(
/*==================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
Releases an sx mode lock. */
UNIV_INLINE
void
rw_lock_sx_unlock_func(
/*===================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
This function is used in the insert buffer to move the ownership of an
x-latch on a buffer frame to the current thread. The x-latch was set by
the buffer read operation and it protected the buffer frame while the
read was done. The ownership is moved because we want the current
thread to be able to acquire a second x-latch, which is stored in an mtr.
This, in turn, is needed to pass the debug checks of index page
operations. */
void
rw_lock_x_lock_move_ownership(
/*==========================*/
rw_lock_t* lock); /*!< in: lock which was x-locked in the
buffer read */
/******************************************************************//**
Returns the value of writer_count for the lock. Does not reserve the lock
mutex, so the caller must be sure it is not changed during the call.
@return value of writer_count */
UNIV_INLINE
ulint
rw_lock_get_x_lock_count(
/*=====================*/
const rw_lock_t* lock); /*!< in: rw-lock */
/******************************************************************//**
Returns the number of sx-locks held on the lock. Does not reserve the lock
mutex, so the caller must be sure it is not changed during the call.
@return number of sx-locks */
UNIV_INLINE
ulint
rw_lock_get_sx_lock_count(
/*======================*/
const rw_lock_t* lock); /*!< in: rw-lock */
/******************************************************************//**
Returns the write-status of the lock - this function made more sense
with the old rw_lock implementation.
@return RW_LOCK_NOT_LOCKED, RW_LOCK_X, RW_LOCK_X_WAIT, RW_LOCK_SX */
UNIV_INLINE
ulint
rw_lock_get_writer(
/*===============*/
const rw_lock_t* lock); /*!< in: rw-lock */
/******************************************************************//**
Returns the number of readers (s-locks).
@return number of readers */
UNIV_INLINE
ulint
rw_lock_get_reader_count(
/*=====================*/
const rw_lock_t* lock); /*!< in: rw-lock */
/******************************************************************//**
Decrements lock_word by the specified amount if it is greater than 0.
This is used by both s_lock and x_lock operations.
@return true if decr occurs */
UNIV_INLINE
bool
rw_lock_lock_word_decr(
/*===================*/
rw_lock_t* lock, /*!< in/out: rw-lock */
int32_t amount, /*!< in: amount to decrement */
int32_t threshold); /*!< in: threshold of judgement */
#ifdef UNIV_DEBUG
/******************************************************************//**
Checks if the thread has locked the rw-lock in the specified mode, with
the pass value == 0. */
bool
rw_lock_own(
/*========*/
const rw_lock_t*lock, /*!< in: rw-lock */
ulint lock_type) /*!< in: lock type: RW_LOCK_S,
RW_LOCK_X */
MY_ATTRIBUTE((warn_unused_result));
/******************************************************************//**
Checks if the thread has locked the rw-lock in the specified mode, with
the pass value == 0. */
bool
rw_lock_own_flagged(
/*================*/
const rw_lock_t* lock, /*!< in: rw-lock */
rw_lock_flags_t flags) /*!< in: specify lock types with
OR of the rw_lock_flag_t values */
MY_ATTRIBUTE((warn_unused_result));
#endif /* UNIV_DEBUG */
/******************************************************************//**
Checks if somebody has locked the rw-lock in the specified mode.
@return true if locked */
bool
rw_lock_is_locked(
/*==============*/
rw_lock_t* lock, /*!< in: rw-lock */
ulint lock_type); /*!< in: lock type: RW_LOCK_S,
RW_LOCK_X or RW_LOCK_SX */
#ifdef UNIV_DEBUG
/***************************************************************//**
Prints debug info of currently locked rw-locks. */
void
rw_lock_list_print_info(
/*====================*/
FILE* file); /*!< in: file where to print */
/*#####################################################################*/
/*********************************************************************//**
Prints info of a debug struct. */
void
rw_lock_debug_print(
/*================*/
FILE* f, /*!< in: output stream */
const rw_lock_debug_t* info); /*!< in: debug struct */
#endif /* UNIV_DEBUG */
/* NOTE! The structure appears here only for the compiler to know its size.
Do not use its fields directly! */
/** The structure used in the spin lock implementation of a read-write
lock. Several threads may have a shared lock simultaneously in this
lock, but only one writer may have an exclusive lock, in which case no
shared locks are allowed. To prevent starvation of a writer blocked by
readers, a writer may queue for x-lock by decrementing lock_word: no
new readers will be let in while the thread waits for readers to
exit. */
struct rw_lock_t :
#ifdef UNIV_DEBUG
public latch_t,
#endif /* UNIV_DEBUG */
public ilist_node<>
{
ut_d(bool created= false;)
/** Holds the state of the lock. */
Atomic_relaxed<int32_t> lock_word;
/** 0=no waiters, 1=waiters for X or SX lock exist */
Atomic_relaxed<uint32_t> waiters;
/** number of granted SX locks. */
volatile ulint sx_recursive;
/** The value is typically set to the thread id of a writer thread, making
normal rw-locks recursive. In case of asynchronous I/O, when a non-zero
value of 'pass' is passed, we keep the lock non-recursive.
writer_thread must be reset in the x_unlock functions before incrementing
the lock_word. */
volatile os_thread_id_t writer_thread;
/** Used by sync0arr.cc for thread queueing */
os_event_t event;
/** Event for next-writer to wait on. A thread must decrement
lock_word before waiting. */
os_event_t wait_ex_event;
/** File name where lock created */
const char* cfile_name;
/** File name where last x-locked */
const char* last_x_file_name;
/** Line where created */
unsigned cline:13;
/** If 1 then the rw-lock is a block lock */
unsigned is_block_lock:1;
/** Line number where last time x-locked */
unsigned last_x_line:14;
/** Count of os_waits. May not be accurate */
uint32_t count_os_wait;
#ifdef UNIV_PFS_RWLOCK
/** The instrumentation hook */
struct PSI_rwlock* pfs_psi;
#endif /* UNIV_PFS_RWLOCK */
#ifdef UNIV_DEBUG
std::string to_string() const override;
/** In the debug version: pointer to the debug info list of the lock */
UT_LIST_BASE_NODE_T(rw_lock_debug_t) debug_list;
/** Level in the global latching order. */
latch_level_t level;
#endif /* UNIV_DEBUG */
};
#ifdef UNIV_DEBUG
/** The structure for storing debug info of an rw-lock. All access to this
structure must be protected by rw_lock_debug_mutex_enter(). */
struct rw_lock_debug_t {
os_thread_id_t thread_id; /*!< The thread id of the thread which
locked the rw-lock */
ulint pass; /*!< Pass value given in the lock operation */
ulint lock_type; /*!< Type of the lock: RW_LOCK_X,
RW_LOCK_S, RW_LOCK_X_WAIT */
const char* file_name;/*!< File name where the lock was obtained */
unsigned line; /*!< Line where the rw-lock was locked */
UT_LIST_NODE_T(rw_lock_debug_t) list;
/*!< Debug structs are linked in a two-way
list */
};
#endif /* UNIV_DEBUG */
/* For performance schema instrumentation, a new set of rw-lock
wrap functions are created if "UNIV_PFS_RWLOCK" is defined.
The instrumentation is not planted directly into the original
functions, so that the underlying functions are kept as they
are. In case a user wants to exclude some rw-lock from
instrumentation even if the performance schema (UNIV_PFS_RWLOCK)
is defined, they can do so by reinstating APIs that link directly
to the original underlying functions.
The instrumented function names have the prefix "pfs_rw_lock_" vs.
the original prefix "rw_lock_". The following functions
have been instrumented:
rw_lock_create()
rw_lock_x_lock()
rw_lock_x_lock_gen()
rw_lock_x_lock_nowait()
rw_lock_x_unlock_gen()
rw_lock_s_lock()
rw_lock_s_lock_gen()
rw_lock_s_lock_nowait()
rw_lock_s_unlock_gen()
rw_lock_sx_lock()
rw_lock_sx_unlock_gen()
rw_lock_free()
*/
#ifdef UNIV_PFS_RWLOCK
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_create_func()
NOTE! Please use the corresponding macro rw_lock_create(), not
directly this function! */
UNIV_INLINE
void
pfs_rw_lock_create_func(
/*====================*/
PSI_rwlock_key key, /*!< in: key registered with
performance schema */
rw_lock_t* lock, /*!< in: rw lock */
#ifdef UNIV_DEBUG
latch_level_t level, /*!< in: level */
#endif /* UNIV_DEBUG */
const char* cfile_name, /*!< in: file name where created */
unsigned cline); /*!< in: file line where created */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_x_lock_func()
NOTE! Please use the corresponding macro rw_lock_x_lock(), not
directly this function! */
UNIV_INLINE
void
pfs_rw_lock_x_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for
rw_lock_x_lock_func_nowait()
NOTE! Please use the corresponding macro, not directly this function!
@return TRUE if success */
UNIV_INLINE
ibool
pfs_rw_lock_x_lock_func_nowait(
/*===========================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_lock_func()
NOTE! Please use the corresponding macro rw_lock_s_lock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_s_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_lock_func()
NOTE! Please use the corresponding macro rw_lock_s_lock(), not directly
this function!
@return TRUE if success */
UNIV_INLINE
ibool
pfs_rw_lock_s_lock_low(
/*===================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the
lock will be passed to another
thread to unlock */
const char* file_name, /*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_x_lock_func()
NOTE! Please use the corresponding macro rw_lock_x_lock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_x_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_unlock_func()
NOTE! Please use the corresponding macro rw_lock_s_unlock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_s_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_x_unlock_func()
NOTE! Please use the corresponding macro rw_lock_x_unlock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_x_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_lock_func()
NOTE! Please use the corresponding macro rw_lock_sx_lock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_sx_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_lock_nowait()
NOTE! Please use the corresponding macro, not directly
this function! */
UNIV_INLINE
ibool
pfs_rw_lock_sx_lock_low(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_unlock_func()
NOTE! Please use the corresponding macro rw_lock_sx_unlock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_sx_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock); /*!< in/out: rw-lock */
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_free_func()
NOTE! Please use the corresponding macro rw_lock_free(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_free_func(
/*==================*/
rw_lock_t* lock); /*!< in: rw-lock */
#endif /* UNIV_PFS_RWLOCK */
#include "sync0rw.ic"
#endif /* sync0rw.h */


@@ -1,842 +0,0 @@
/*****************************************************************************
Copyright (c) 1995, 2016, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2008, Google Inc.
Copyright (c) 2017, 2020, MariaDB Corporation.
Portions of this file contain modifications contributed and copyrighted by
Google, Inc. Those modifications are gratefully acknowledged and are described
briefly in the InnoDB documentation. The contributions by Google are
incorporated with their permission, and subject to the conditions contained in
the file COPYING.Google.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
*****************************************************************************/
/**************************************************//**
@file include/sync0rw.ic
The read-write lock (for threads)
Created 9/11/1995 Heikki Tuuri
*******************************************************/
#include "os0event.h"
/******************************************************************//**
Lock an rw-lock in shared mode for the current thread. If the rw-lock is
locked in exclusive mode, or there is an exclusive lock request waiting,
the function spins a preset time (controlled by srv_n_spin_wait_rounds),
waiting for the lock before suspending the thread. */
void
rw_lock_s_lock_spin(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line); /*!< in: line where requested */
#ifdef UNIV_DEBUG
/******************************************************************//**
Inserts the debug information for an rw-lock. */
void
rw_lock_add_debug_info(
/*===================*/
rw_lock_t* lock, /*!< in: rw-lock */
ulint pass, /*!< in: pass value */
ulint lock_type, /*!< in: lock type */
const char* file_name, /*!< in: file where requested */
unsigned line); /*!< in: line where requested */
/******************************************************************//**
Removes a debug information struct for an rw-lock. */
void
rw_lock_remove_debug_info(
/*======================*/
rw_lock_t* lock, /*!< in: rw-lock */
ulint pass, /*!< in: pass value */
ulint lock_type); /*!< in: lock type */
#endif /* UNIV_DEBUG */
/******************************************************************//**
Returns the write-status of the lock - this function made more sense
with the old rw_lock implementation.
@return RW_LOCK_NOT_LOCKED, RW_LOCK_X, RW_LOCK_X_WAIT, RW_LOCK_SX */
UNIV_INLINE
ulint
rw_lock_get_writer(
/*===============*/
const rw_lock_t* lock) /*!< in: rw-lock */
{
int32_t lock_word = lock->lock_word;
ut_ad(lock_word <= X_LOCK_DECR);
if (lock_word > X_LOCK_HALF_DECR) {
/* return NOT_LOCKED in s-lock state, like the writer
member of the old lock implementation. */
return(RW_LOCK_NOT_LOCKED);
} else if (lock_word > 0) {
/* sx-locked, no x-locks */
return(RW_LOCK_SX);
} else if (lock_word == 0
|| lock_word == -X_LOCK_HALF_DECR
|| lock_word <= -X_LOCK_DECR) {
/* x-lock with sx-lock is also treated as RW_LOCK_EX */
return(RW_LOCK_X);
} else {
/* x-waiter with sx-lock is also treated as RW_LOCK_WAIT_EX
e.g. -X_LOCK_HALF_DECR < lock_word < 0 : without sx
-X_LOCK_DECR < lock_word < -X_LOCK_HALF_DECR : with sx */
return(RW_LOCK_X_WAIT);
}
}
/******************************************************************//**
Returns the number of readers (s-locks).
@return number of readers */
UNIV_INLINE
ulint
rw_lock_get_reader_count(
/*=====================*/
const rw_lock_t* lock) /*!< in: rw-lock */
{
int32_t lock_word = lock->lock_word;
ut_ad(lock_word <= X_LOCK_DECR);
if (lock_word > X_LOCK_HALF_DECR) {
/* s-locked, no x-waiter */
return ulint(X_LOCK_DECR - lock_word);
} else if (lock_word > 0) {
/* s-locked, with sx-locks only */
return ulint(X_LOCK_HALF_DECR - lock_word);
} else if (lock_word == 0) {
/* x-locked */
return(0);
} else if (lock_word > -X_LOCK_HALF_DECR) {
/* s-locked, with x-waiter */
return((ulint)(-lock_word));
} else if (lock_word == -X_LOCK_HALF_DECR) {
/* x-locked with sx-locks */
return(0);
} else if (lock_word > -X_LOCK_DECR) {
/* s-locked, with x-waiter and sx-lock */
return((ulint)(-(lock_word + X_LOCK_HALF_DECR)));
}
/* no s-locks */
return(0);
}
/******************************************************************//**
Returns the value of writer_count for the lock. Does not reserve the lock
mutex, so the caller must be sure it is not changed during the call.
@return value of writer_count */
UNIV_INLINE
ulint
rw_lock_get_x_lock_count(
/*=====================*/
const rw_lock_t* lock) /*!< in: rw-lock */
{
int32_t lock_copy = lock->lock_word;
ut_ad(lock_copy <= X_LOCK_DECR);
if (lock_copy == 0 || lock_copy == -X_LOCK_HALF_DECR) {
/* "1 x-lock" or "1 x-lock + sx-locks" */
return(1);
} else if (lock_copy > -X_LOCK_DECR) {
/* s-locks, one or more sx-locks if > 0, or x-waiter if < 0 */
return(0);
} else if (lock_copy > -(X_LOCK_DECR + X_LOCK_HALF_DECR)) {
/* no s-lock, no sx-lock, 2 or more x-locks.
First 2 x-locks are set with -X_LOCK_DECR,
all other recursive x-locks are set with -1 */
return ulint(2 - X_LOCK_DECR - lock_copy);
} else {
/* no s-lock, 1 or more sx-lock, 2 or more x-locks.
First 2 x-locks are set with -(X_LOCK_DECR + X_LOCK_HALF_DECR),
all other recursive x-locks are set with -1 */
return ulint(2 - X_LOCK_DECR - X_LOCK_HALF_DECR - lock_copy);
}
}
/******************************************************************//**
Returns the number of sx-lock for the lock. Does not reserve the lock
mutex, so the caller must be sure it is not changed during the call.
@return value of sx-lock count */
UNIV_INLINE
ulint
rw_lock_get_sx_lock_count(
/*======================*/
const rw_lock_t* lock) /*!< in: rw-lock */
{
#ifdef UNIV_DEBUG
int32_t lock_copy = lock->lock_word;
ut_ad(lock_copy <= X_LOCK_DECR);
while (lock_copy < 0) {
lock_copy += X_LOCK_DECR;
}
if (lock_copy > 0 && lock_copy <= X_LOCK_HALF_DECR) {
return(lock->sx_recursive);
}
return(0);
#else /* UNIV_DEBUG */
return(lock->sx_recursive);
#endif /* UNIV_DEBUG */
}
/******************************************************************//**
Recursive x-locks are not supported: they should be handled by the caller and
need not be atomic since they are performed by the current lock holder.
Returns true if the decrement was made, false if not.
@return true if decr occurs */
UNIV_INLINE
bool
rw_lock_lock_word_decr(
/*===================*/
rw_lock_t* lock, /*!< in/out: rw-lock */
int32_t amount, /*!< in: amount to decrement */
int32_t threshold) /*!< in: threshold of judgement */
{
int32_t lock_copy = lock->lock_word;
while (lock_copy > threshold) {
if (lock->lock_word.compare_exchange_strong(
lock_copy,
lock_copy - amount,
std::memory_order_acquire,
std::memory_order_relaxed)) {
return(true);
}
/* Note that lock_copy was reloaded above. We will
keep trying if a spurious conflict occurred, typically
caused by concurrent executions of
rw_lock_s_lock(). */
/* Note: unlike this implementation, rw_lock::read_lock()
allows concurrent calls without a spin loop */
}
/* A real conflict was detected. */
return(false);
}
/******************************************************************//**
Low-level function which tries to lock an rw-lock in s-mode.
@return TRUE if success */
UNIV_INLINE
ibool
rw_lock_s_lock_low(
/*===============*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass MY_ATTRIBUTE((unused)),
/*!< in: pass value; != 0, if the lock will be
passed to another thread to unlock */
const char* file_name, /*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
if (!rw_lock_lock_word_decr(lock, 1, 0)) {
/* Locking did not succeed */
return(FALSE);
}
ut_d(rw_lock_add_debug_info(lock, pass, RW_LOCK_S, file_name, line));
return(TRUE); /* locking succeeded */
}
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function! Lock an
rw-lock in shared mode for the current thread. If the rw-lock is locked
in exclusive mode, or there is an exclusive lock request waiting, the
function spins a preset time (controlled by srv_n_spin_wait_rounds), waiting for
the lock, before suspending the thread. */
UNIV_INLINE
void
rw_lock_s_lock_func(
/*================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
/* NOTE: As we do not know the thread ids for threads which have
s-locked a latch, and s-lockers will be served only after waiting
x-lock requests have been fulfilled, then if this thread already
owns an s-lock here, it may end up in a deadlock with another thread
which requests an x-lock here. Therefore, we will forbid recursive
s-locking of a latch: the following assert will warn the programmer
of the possibility of this kind of a deadlock. If we want to implement
safe recursive s-locking, we should keep in a list the thread ids of
the threads which have s-locked a latch. This would use some CPU
time. */
ut_ad(!rw_lock_own_flagged(lock, RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
if (!rw_lock_s_lock_low(lock, pass, file_name, line)) {
/* Did not succeed, try spin wait */
rw_lock_s_lock_spin(lock, pass, file_name, line);
}
}
/******************************************************************//**
NOTE! Use the corresponding macro, not directly this function! Lock an
rw-lock in exclusive mode for the current thread if the lock can be
obtained immediately.
@return TRUE if success */
UNIV_INLINE
ibool
rw_lock_x_lock_func_nowait(
/*=======================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
int32_t oldval = X_LOCK_DECR;
if (lock->lock_word.compare_exchange_strong(oldval, 0,
std::memory_order_acquire,
std::memory_order_relaxed)) {
lock->writer_thread = os_thread_get_curr_id();
} else if (os_thread_eq(lock->writer_thread, os_thread_get_curr_id())) {
/* Relock: even though no other thread can modify (lock, unlock
or reserve) lock_word while there is an exclusive writer and
this is the writer thread, we still want concurrent threads to
observe consistent values. */
if (oldval == 0 || oldval == -X_LOCK_HALF_DECR) {
/* There is 1 x-lock */
lock->lock_word.fetch_sub(X_LOCK_DECR,
std::memory_order_relaxed);
} else if (oldval <= -X_LOCK_DECR) {
/* There are 2 or more x-locks */
lock->lock_word.fetch_sub(1,
std::memory_order_relaxed);
/* Watch for too many recursive locks */
ut_ad(oldval < 1);
} else {
/* Failure */
return(FALSE);
}
} else {
/* Failure */
return(FALSE);
}
ut_d(rw_lock_add_debug_info(lock, 0, RW_LOCK_X, file_name, line));
lock->last_x_file_name = file_name;
lock->last_x_line = line & ((1 << 14) - 1);
ut_ad(rw_lock_validate(lock));
return(TRUE);
}
/******************************************************************//**
Releases a shared mode lock. */
UNIV_INLINE
void
rw_lock_s_unlock_func(
/*==================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
ut_d(rw_lock_remove_debug_info(lock, pass, RW_LOCK_S));
/* Increment lock_word to indicate 1 less reader */
int32_t lock_word = lock->lock_word.fetch_add(
1, std::memory_order_release);
if (lock_word == -1 || lock_word == -X_LOCK_HALF_DECR - 1) {
/* wait_ex waiter exists. It may not be asleep, but we signal
anyway. We do not wake other waiters, because they can't
exist without wait_ex waiter and wait_ex waiter goes first.*/
os_event_set(lock->wait_ex_event);
sync_array_object_signalled();
} else {
ut_ad(lock_word > -X_LOCK_DECR);
ut_ad(lock_word < X_LOCK_DECR);
}
ut_ad(rw_lock_validate(lock));
}
/******************************************************************//**
Releases an exclusive mode lock. */
UNIV_INLINE
void
rw_lock_x_unlock_func(
/*==================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
int32_t lock_word = lock->lock_word;
if (lock_word == 0) {
/* Last caller in a possible recursive chain. */
lock->writer_thread = 0;
}
ut_d(rw_lock_remove_debug_info(lock, pass, RW_LOCK_X));
if (lock_word == 0 || lock_word == -X_LOCK_HALF_DECR) {
/* Last X-lock owned by this thread, it may still hold SX-locks.
ACQ_REL due to...
RELEASE: we release rw-lock
ACQUIRE: we want waiters to be loaded after lock_word is stored */
lock->lock_word.fetch_add(X_LOCK_DECR,
std::memory_order_acq_rel);
/* This no longer has an X-lock but it may still have
an SX-lock. So it is now free for S-locks by other threads.
We need to signal read/write waiters.
We do not need to signal wait_ex waiters, since they cannot
exist when there is a writer. */
if (lock->waiters) {
lock->waiters = 0;
os_event_set(lock->event);
sync_array_object_signalled();
}
} else if (lock_word == -X_LOCK_DECR
|| lock_word == -(X_LOCK_DECR + X_LOCK_HALF_DECR)) {
/* There are 2 x-locks */
lock->lock_word.fetch_add(X_LOCK_DECR);
} else {
/* There are more than 2 x-locks. */
ut_ad(lock_word < -X_LOCK_DECR);
lock->lock_word.fetch_add(1);
}
ut_ad(rw_lock_validate(lock));
}
/******************************************************************//**
Releases an sx mode lock. */
UNIV_INLINE
void
rw_lock_sx_unlock_func(
/*===================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the lock may have
been passed to another thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
ut_ad(rw_lock_get_sx_lock_count(lock));
ut_ad(lock->sx_recursive > 0);
--lock->sx_recursive;
ut_d(rw_lock_remove_debug_info(lock, pass, RW_LOCK_SX));
if (lock->sx_recursive == 0) {
int32_t lock_word = lock->lock_word;
/* Last caller in a possible recursive chain. */
if (lock_word > 0) {
lock->writer_thread = 0;
ut_ad(lock_word <= INT_MAX32 - X_LOCK_HALF_DECR);
/* Last SX-lock owned by this thread, doesn't own X-lock.
ACQ_REL due to...
RELEASE: we release rw-lock
ACQUIRE: we want waiters to be loaded after lock_word is stored */
lock->lock_word.fetch_add(X_LOCK_HALF_DECR,
std::memory_order_acq_rel);
/* Lock is now free. May have to signal read/write
waiters. We do not need to signal wait_ex waiters,
since they cannot exist when there is an sx-lock
holder. */
if (lock->waiters) {
lock->waiters = 0;
os_event_set(lock->event);
sync_array_object_signalled();
}
} else {
/* still has x-lock */
ut_ad(lock_word == -X_LOCK_HALF_DECR ||
lock_word <= -(X_LOCK_DECR + X_LOCK_HALF_DECR));
lock->lock_word.fetch_add(X_LOCK_HALF_DECR);
}
}
ut_ad(rw_lock_validate(lock));
}
#ifdef UNIV_PFS_RWLOCK
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_create_func().
NOTE! Please use the corresponding macro rw_lock_create(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_create_func(
/*====================*/
mysql_pfs_key_t key, /*!< in: key registered with
performance schema */
rw_lock_t* lock, /*!< in/out: pointer to memory */
# ifdef UNIV_DEBUG
latch_level_t level, /*!< in: level */
# endif /* UNIV_DEBUG */
const char* cfile_name, /*!< in: file name where created */
unsigned cline) /*!< in: file line where created */
{
ut_d(new(lock) rw_lock_t());
/* Initialize the rwlock for performance schema */
lock->pfs_psi = PSI_RWLOCK_CALL(init_rwlock)(key, lock);
/* The actual function to initialize an rwlock */
rw_lock_create_func(lock,
#ifdef UNIV_DEBUG
level,
#endif /* UNIV_DEBUG */
cfile_name,
cline);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_x_lock_func()
NOTE! Please use the corresponding macro rw_lock_x_lock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_x_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the lock will
be passed to another thread to unlock */
const char* file_name,/*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Record the acquisition of a read-write lock in exclusive
mode in performance schema */
locker = PSI_RWLOCK_CALL(start_rwlock_wrwait)(
&state, lock->pfs_psi, PSI_RWLOCK_EXCLUSIVELOCK,
file_name, static_cast<uint>(line));
rw_lock_x_lock_func(
lock, pass, file_name, static_cast<uint>(line));
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_wrwait)(locker, 0);
}
} else {
rw_lock_x_lock_func(lock, pass, file_name, line);
}
}
/******************************************************************//**
Performance schema instrumented wrap function for
rw_lock_x_lock_func_nowait()
NOTE! Please use the corresponding macro rw_lock_x_lock_nowait(),
not directly this function!
@return TRUE if success */
UNIV_INLINE
ibool
pfs_rw_lock_x_lock_func_nowait(
/*===========================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
const char* file_name,/*!< in: file name where lock
requested */
unsigned line) /*!< in: line where requested */
{
ibool ret;
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Record the acquisition of a read-write trylock in exclusive
mode in performance schema */
locker = PSI_RWLOCK_CALL(start_rwlock_wrwait)(
&state, lock->pfs_psi, PSI_RWLOCK_TRYEXCLUSIVELOCK,
file_name, static_cast<uint>(line));
ret = rw_lock_x_lock_func_nowait(lock, file_name, line);
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_wrwait)(
locker, static_cast<int>(ret));
}
} else {
ret = rw_lock_x_lock_func_nowait(lock, file_name, line);
}
return(ret);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_free_func()
NOTE! Please use the corresponding macro rw_lock_free(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_free_func(
/*==================*/
rw_lock_t* lock) /*!< in: pointer to rw-lock */
{
if (lock->pfs_psi != NULL) {
PSI_RWLOCK_CALL(destroy_rwlock)(lock->pfs_psi);
lock->pfs_psi = NULL;
}
rw_lock_free_func(lock);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_lock_func()
NOTE! Please use the corresponding macro rw_lock_s_lock(), not
directly this function! */
UNIV_INLINE
void
pfs_rw_lock_s_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the
lock will be passed to another
thread to unlock */
const char* file_name,/*!< in: file name where lock
requested */
unsigned line) /*!< in: line where requested */
{
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Instrumented to inform we are acquiring a shared rwlock */
locker = PSI_RWLOCK_CALL(start_rwlock_rdwait)(
&state, lock->pfs_psi, PSI_RWLOCK_SHAREDLOCK,
file_name, static_cast<uint>(line));
rw_lock_s_lock_func(lock, pass, file_name, line);
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_rdwait)(locker, 0);
}
} else {
rw_lock_s_lock_func(lock, pass, file_name, line);
}
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_lock_func()
NOTE! Please use the corresponding macro rw_lock_sx_lock(), not
directly this function! */
UNIV_INLINE
void
pfs_rw_lock_sx_lock_func(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the
lock will be passed to another
thread to unlock */
const char* file_name,/*!< in: file name where lock
requested */
unsigned line) /*!< in: line where requested */
{
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Instrumented to inform we are acquiring a shared exclusive rwlock */
locker = PSI_RWLOCK_CALL(start_rwlock_wrwait)(
&state, lock->pfs_psi, PSI_RWLOCK_SHAREDEXCLUSIVELOCK,
file_name, static_cast<uint>(line));
rw_lock_sx_lock_func(lock, pass, file_name, line);
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_wrwait)(locker, 0);
}
} else {
rw_lock_sx_lock_func(lock, pass, file_name, line);
}
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_lock_func()
NOTE! Please use the corresponding macro rw_lock_s_lock(), not
directly this function!
@return TRUE if success */
UNIV_INLINE
ibool
pfs_rw_lock_s_lock_low(
/*===================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the
lock will be passed to another
thread to unlock */
const char* file_name, /*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
ibool ret;
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Instrumented to inform we are acquiring a shared rwlock */
locker = PSI_RWLOCK_CALL(start_rwlock_rdwait)(
&state, lock->pfs_psi, PSI_RWLOCK_TRYSHAREDLOCK,
file_name, static_cast<uint>(line));
ret = rw_lock_s_lock_low(lock, pass, file_name, line);
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_rdwait)(
locker, static_cast<int>(ret));
}
} else {
ret = rw_lock_s_lock_low(lock, pass, file_name, line);
}
return(ret);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_lock_nowait()
NOTE! Please use the corresponding macro, not
directly this function!
@return TRUE if success */
UNIV_INLINE
ibool
pfs_rw_lock_sx_lock_low(
/*====================*/
rw_lock_t* lock, /*!< in: pointer to rw-lock */
ulint pass, /*!< in: pass value; != 0, if the
lock will be passed to another
thread to unlock */
const char* file_name, /*!< in: file name where lock requested */
unsigned line) /*!< in: line where requested */
{
ibool ret;
if (lock->pfs_psi != NULL) {
PSI_rwlock_locker* locker;
PSI_rwlock_locker_state state;
/* Instrumented to inform we are acquiring a shared
exclusive rwlock */
locker = PSI_RWLOCK_CALL(start_rwlock_rdwait)(
&state, lock->pfs_psi,
PSI_RWLOCK_TRYSHAREDEXCLUSIVELOCK,
file_name, static_cast<uint>(line));
ret = rw_lock_sx_lock_low(lock, pass, file_name, line);
if (locker != NULL) {
PSI_RWLOCK_CALL(end_rwlock_rdwait)(
locker, static_cast<int>(ret));
}
} else {
ret = rw_lock_sx_lock_low(lock, pass, file_name, line);
}
return(ret);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_x_unlock_func()
NOTE! Please use the corresponding macro rw_lock_x_unlock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_x_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
/* Inform performance schema we are unlocking the lock */
if (lock->pfs_psi != NULL) {
PSI_RWLOCK_CALL(unlock_rwlock)(lock->pfs_psi);
}
rw_lock_x_unlock_func(
#ifdef UNIV_DEBUG
pass,
#endif /* UNIV_DEBUG */
lock);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_sx_unlock_func()
NOTE! Please use the corresponding macro rw_lock_sx_unlock(), not directly
this function! */
UNIV_INLINE
void
pfs_rw_lock_sx_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
/* Inform performance schema we are unlocking the lock */
if (lock->pfs_psi != NULL) {
PSI_RWLOCK_CALL(unlock_rwlock)(lock->pfs_psi);
}
rw_lock_sx_unlock_func(
#ifdef UNIV_DEBUG
pass,
#endif /* UNIV_DEBUG */
lock);
}
/******************************************************************//**
Performance schema instrumented wrap function for rw_lock_s_unlock_func()
NOTE! Please use the corresponding macro rw_lock_s_unlock(), not
directly this function! */
UNIV_INLINE
void
pfs_rw_lock_s_unlock_func(
/*======================*/
#ifdef UNIV_DEBUG
ulint pass, /*!< in: pass value; != 0, if the
lock may have been passed to another
thread to unlock */
#endif /* UNIV_DEBUG */
rw_lock_t* lock) /*!< in/out: rw-lock */
{
/* Inform performance schema we are unlocking the lock */
if (lock->pfs_psi != NULL) {
PSI_RWLOCK_CALL(unlock_rwlock)(lock->pfs_psi);
}
rw_lock_s_unlock_func(
#ifdef UNIV_DEBUG
pass,
#endif /* UNIV_DEBUG */
lock);
}
#endif /* UNIV_PFS_RWLOCK */


@@ -67,7 +67,6 @@ extern mysql_pfs_key_t page_zip_stat_per_index_mutex_key;
# ifdef UNIV_DEBUG
extern mysql_pfs_key_t rw_lock_debug_mutex_key;
# endif /* UNIV_DEBUG */
extern mysql_pfs_key_t rw_lock_list_mutex_key;
extern mysql_pfs_key_t srv_innodb_monitor_mutex_key;
extern mysql_pfs_key_t srv_misc_tmpfile_mutex_key;
extern mysql_pfs_key_t srv_monitor_file_mutex_key;
@@ -98,9 +97,4 @@ extern mysql_pfs_key_t index_online_log_key;
extern mysql_pfs_key_t trx_sys_rw_lock_key;
#endif /* UNIV_PFS_RWLOCK */
/** Prints info of the sync system.
@param[in] file where to print */
void
sync_print(FILE* file);
#endif /* !sync0sync_h */


@@ -268,7 +268,6 @@ enum latch_id_t {
LATCH_ID_RTR_ACTIVE_MUTEX,
LATCH_ID_RTR_MATCH_MUTEX,
LATCH_ID_RTR_PATH_MUTEX,
LATCH_ID_RW_LOCK_LIST,
LATCH_ID_SRV_INNODB_MONITOR,
LATCH_ID_SRV_MISC_TMPFILE,
LATCH_ID_SRV_MONITOR_FILE,
@@ -926,7 +925,7 @@ struct latch_t {
/** Latch ID */
latch_id_t m_id;
/** true if it is a rw-lock. In debug mode, rw_lock_t derives from
/** true if it is a rw-lock. In debug mode, sux_lock derives from
this class and sets this variable. */
bool m_rw_lock;
};
@@ -1001,16 +1000,6 @@ private:
@return LATCH_ID_NONE. */
latch_id_t
sync_latch_get_id(const char* name);
typedef ulint rw_lock_flags_t;
/* Flags to specify lock types for rw_lock_own_flagged() */
enum rw_lock_flag_t {
RW_LOCK_FLAG_S = 1 << 0,
RW_LOCK_FLAG_X = 1 << 1,
RW_LOCK_FLAG_SX = 1 << 2
};
#endif /* UNIV_DEBUG */
#endif /* UNIV_INNOCHECKSUM */


@@ -111,8 +111,7 @@ private:
because ib_counter_t is only intended for usage with global
memory that is allocated from the .bss and thus guaranteed to
be zero-initialized by the run-time environment.
@see srv_stats
@see rw_lock_stats */
@see srv_stats */
struct ib_counter_element_t {
MY_ALIGNED(CACHE_LINE_SIZE) std::atomic<Type> value;
};


@@ -881,7 +881,6 @@ constexpr const char* const auto_event_names[] =
"srv0start",
"sync0arr",
"sync0debug",
"sync0rw",
"sync0start",
"sync0types",
"trx0i_s",


@@ -34,6 +34,7 @@ processing.
#define IB_WORK_QUEUE_H
#include "ut0list.h"
#include "ut0mutex.h"
#include "mem0mem.h"
// Forward declaration


@@ -2461,9 +2461,9 @@ void recv_recover_page(fil_space_t* space, buf_page_t* bpage)
this OS thread, so that we can acquire a second
x-latch on it. This is needed for the operations to
the page to pass the debug checks. */
rw_lock_x_lock_move_ownership(&block->lock);
buf_block_buf_fix_inc(block, __FILE__, __LINE__);
rw_lock_x_lock(&block->lock);
block->lock.claim_ownership();
block->lock.x_lock_recursive();
buf_block_buf_fix_inc(block);
mtr.memo_push(block, MTR_MEMO_PAGE_X_FIX);
mutex_enter(&recv_sys.mutex);


@ -32,6 +32,9 @@ Created 11/26/1995 Heikki Tuuri
#include "page0types.h"
#include "mtr0log.h"
#include "log0recv.h"
#ifdef BTR_CUR_HASH_ADAPT
# include "btr0sea.h"
#endif
/** Iterate over a memo block in reverse. */
template <typename Functor>
@ -170,12 +173,12 @@ struct FindPage
|| m_ptr >= block->frame + srv_page_size) {
return(true);
}
ut_ad(!(m_flags & (MTR_MEMO_PAGE_S_FIX
| MTR_MEMO_PAGE_SX_FIX
| MTR_MEMO_PAGE_X_FIX))
|| rw_lock_own_flagged(&block->lock, m_flags));
ut_ad(!(slot->type & MTR_MEMO_PAGE_S_FIX)
|| block->lock.have_s());
ut_ad(!(slot->type & MTR_MEMO_PAGE_SX_FIX)
|| block->lock.have_u_or_x());
ut_ad(!(slot->type & MTR_MEMO_PAGE_X_FIX)
|| block->lock.have_x());
m_slot = slot;
return(false);
}
@ -204,12 +207,14 @@ private:
@param slot memo slot */
static void memo_slot_release(mtr_memo_slot_t *slot)
{
switch (slot->type) {
switch (const auto type= slot->type) {
case MTR_MEMO_S_LOCK:
rw_lock_s_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
static_cast<index_lock*>(slot->object)->s_unlock();
break;
case MTR_MEMO_X_LOCK:
case MTR_MEMO_SX_LOCK:
rw_lock_sx_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
static_cast<index_lock*>(slot->object)->
u_or_x_unlock(type == MTR_MEMO_SX_LOCK);
break;
case MTR_MEMO_SPACE_X_LOCK:
static_cast<fil_space_t*>(slot->object)->set_committed_size();
@ -218,9 +223,6 @@ static void memo_slot_release(mtr_memo_slot_t *slot)
case MTR_MEMO_SPACE_S_LOCK:
static_cast<fil_space_t*>(slot->object)->s_unlock();
break;
case MTR_MEMO_X_LOCK:
rw_lock_x_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
break;
default:
#ifdef UNIV_DEBUG
switch (slot->type & ~MTR_MEMO_MODIFY) {
@ -234,7 +236,7 @@ static void memo_slot_release(mtr_memo_slot_t *slot)
break;
}
#endif /* UNIV_DEBUG */
buf_block_t *block= reinterpret_cast<buf_block_t*>(slot->object);
buf_block_t *block= static_cast<buf_block_t*>(slot->object);
buf_page_release_latch(block, slot->type & ~MTR_MEMO_MODIFY);
block->unfix();
break;
@ -249,9 +251,9 @@ struct ReleaseLatches {
{
if (!slot->object)
return true;
switch (slot->type) {
switch (const auto type= slot->type) {
case MTR_MEMO_S_LOCK:
rw_lock_s_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
static_cast<index_lock*>(slot->object)->s_unlock();
break;
case MTR_MEMO_SPACE_X_LOCK:
static_cast<fil_space_t*>(slot->object)->set_committed_size();
@ -261,10 +263,9 @@ struct ReleaseLatches {
static_cast<fil_space_t*>(slot->object)->s_unlock();
break;
case MTR_MEMO_X_LOCK:
rw_lock_x_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
break;
case MTR_MEMO_SX_LOCK:
rw_lock_sx_unlock(reinterpret_cast<rw_lock_t*>(slot->object));
static_cast<index_lock*>(slot->object)->
u_or_x_unlock(type == MTR_MEMO_SX_LOCK);
break;
default:
#ifdef UNIV_DEBUG
@ -279,7 +280,7 @@ struct ReleaseLatches {
break;
}
#endif /* UNIV_DEBUG */
buf_block_t *block= reinterpret_cast<buf_block_t*>(slot->object);
buf_block_t *block= static_cast<buf_block_t*>(slot->object);
buf_page_release_latch(block, slot->type & ~MTR_MEMO_MODIFY);
block->unfix();
break;
@ -944,7 +945,7 @@ bool mtr_t::have_x_latch(const buf_block_t &block) const
MTR_MEMO_BUF_FIX | MTR_MEMO_MODIFY));
return false;
}
ut_ad(rw_lock_own(&block.lock, RW_LOCK_X));
ut_ad(block.lock.have_x());
return true;
}
@ -963,12 +964,137 @@ bool mtr_t::memo_contains(const fil_space_t& space, bool shared)
return true;
}
#ifdef BTR_CUR_HASH_ADAPT
/** If a stale adaptive hash index exists on the block, drop it.
Multiple executions of btr_search_drop_page_hash_index() on the
same block must be prevented by exclusive page latch. */
ATTRIBUTE_COLD
static void mtr_defer_drop_ahi(buf_block_t *block, mtr_memo_type_t fix_type)
{
switch (fix_type) {
case MTR_MEMO_BUF_FIX:
/* We do not drop the adaptive hash index, because safely doing
so would require acquiring block->lock, and that is not safe
to acquire in some RW_NO_LATCH access paths. Those code paths
should have no business accessing the adaptive hash index anyway. */
break;
case MTR_MEMO_PAGE_S_FIX:
/* Temporarily release our S-latch. */
block->lock.s_unlock();
block->lock.x_lock();
if (dict_index_t *index= block->index)
if (index->freed())
btr_search_drop_page_hash_index(block);
block->lock.x_unlock();
block->lock.s_lock();
break;
case MTR_MEMO_PAGE_SX_FIX:
block->lock.u_unlock();
block->lock.x_lock();
if (dict_index_t *index= block->index)
if (index->freed())
btr_search_drop_page_hash_index(block);
block->lock.u_lock();
block->lock.x_unlock();
break;
default:
ut_ad(fix_type == MTR_MEMO_PAGE_X_FIX);
btr_search_drop_page_hash_index(block);
}
}
#endif /* BTR_CUR_HASH_ADAPT */
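The relatch pattern in mtr_defer_drop_ahi() above — give up the weaker latch, take X, re-validate, act, then restore the original latch — can be sketched as a standalone toy model. This is not InnoDB's block_lock (std::shared_mutex has no U mode and no recursion); the names `toy_block`, `stale`, and `drops` are stand-ins for block->lock, index->freed(), and btr_search_drop_page_hash_index().

```cpp
#include <cassert>
#include <shared_mutex>

// Toy stand-in for a buffer block whose adaptive hash index may be
// stale. The state must be re-checked after relatching, because
// another thread may have acted while we held no latch at all.
struct toy_block {
  std::shared_mutex lock;   // stands in for block_lock (no U mode here)
  bool stale = false;       // stands in for index->freed()
  int drops = 0;            // counts "drop hash index" actions
};

// Mimics the MTR_MEMO_PAGE_S_FIX branch of mtr_defer_drop_ahi():
// temporarily release S, take X, re-validate, act, restore S.
void drop_ahi_under_s(toy_block &b) {
  // caller holds b.lock in shared mode
  b.lock.unlock_shared();      // temporarily release our S-latch
  b.lock.lock();               // exclusive latch
  if (b.stale) {               // re-check: state may have changed
    ++b.drops;                 // the actual drop happens only here
    b.stale = false;
  }
  b.lock.unlock();
  b.lock.lock_shared();        // restore the caller's S-latch
}
```

The re-check after relatching is the essential part of the pattern: a concurrent thread may already have performed the drop during the unlatched window.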
/** Upgrade U-latched pages to X */
struct UpgradeX
{
const buf_block_t &block;
UpgradeX(const buf_block_t &block) : block(block) {}
bool operator()(mtr_memo_slot_t *slot) const
{
if (slot->object == &block && (MTR_MEMO_PAGE_SX_FIX & slot->type))
slot->type= static_cast<mtr_memo_type_t>
(slot->type ^ (MTR_MEMO_PAGE_SX_FIX | MTR_MEMO_PAGE_X_FIX));
return true;
}
};
/** Upgrade U locks on a block to X */
void mtr_t::page_lock_upgrade(const buf_block_t &block)
{
ut_ad(block.lock.have_x());
m_memo.for_each_block(CIterate<UpgradeX>((UpgradeX(block))));
#ifdef BTR_CUR_HASH_ADAPT
ut_ad(!block.index || !block.index->freed());
#endif /* BTR_CUR_HASH_ADAPT */
}
/** Upgrade U locks to X */
struct UpgradeLockX
{
const index_lock &lock;
UpgradeLockX(const index_lock &lock) : lock(lock) {}
bool operator()(mtr_memo_slot_t *slot) const
{
if (slot->object == &lock && (MTR_MEMO_SX_LOCK & slot->type))
slot->type= static_cast<mtr_memo_type_t>
(slot->type ^ (MTR_MEMO_SX_LOCK | MTR_MEMO_X_LOCK));
return true;
}
};
/** Upgrade U locks on a block to X */
void mtr_t::lock_upgrade(const index_lock &lock)
{
ut_ad(lock.have_x());
m_memo.for_each_block(CIterate<UpgradeLockX>((UpgradeLockX(lock))));
}
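The slot-type flip in UpgradeX and UpgradeLockX relies on an XOR trick: because exactly one of the two bits (the SX bit) is set in a matching slot, XOR-ing with both bits clears SX and sets X in one operation, without disturbing other flag bits. A minimal sketch, using hypothetical bit values (the real mtr_memo_type_t constants live in the InnoDB headers and may differ):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical bit values standing in for mtr_memo_type_t flags.
enum memo_type : uint32_t {
  MEMO_PAGE_SX_FIX = 1U << 2,
  MEMO_PAGE_X_FIX  = 1U << 3,
  MEMO_MODIFY      = 1U << 4,  // may be OR-ed into the slot type
};

// The XOR trick used by UpgradeX: given that the SX bit is set and
// the X bit is not, XOR with both clears SX and sets X, leaving any
// other flag bits (such as MODIFY) untouched.
uint32_t upgrade_sx_to_x(uint32_t type) {
  return type ^ (MEMO_PAGE_SX_FIX | MEMO_PAGE_X_FIX);
}
```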
/** Latch a buffer pool block.
@param block block to be latched
@param rw_latch RW_S_LATCH, RW_SX_LATCH, RW_X_LATCH, RW_NO_LATCH */
void mtr_t::page_lock(buf_block_t *block, ulint rw_latch)
{
mtr_memo_type_t fix_type;
switch (rw_latch)
{
case RW_NO_LATCH:
fix_type= MTR_MEMO_BUF_FIX;
goto done;
case RW_S_LATCH:
fix_type= MTR_MEMO_PAGE_S_FIX;
block->lock.s_lock();
break;
case RW_SX_LATCH:
fix_type= MTR_MEMO_PAGE_SX_FIX;
block->lock.u_lock();
break;
default:
ut_ad(rw_latch == RW_X_LATCH);
fix_type= MTR_MEMO_PAGE_X_FIX;
if (block->lock.x_lock_upgraded())
{
page_lock_upgrade(*block);
block->unfix();
return;
}
}
#ifdef BTR_CUR_HASH_ADAPT
if (dict_index_t *index= block->index)
if (index->freed())
mtr_defer_drop_ahi(block, fix_type);
#endif /* BTR_CUR_HASH_ADAPT */
done:
memo_push(block, fix_type);
}
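The mode-to-memo mapping applied by mtr_t::page_lock() above can be summarized in isolation. This is a sketch with hypothetical enum values (the real RW_* and MTR_MEMO_* constants are defined elsewhere in InnoDB), showing which latch call each request mode translates to:

```cpp
#include <cassert>

// Hypothetical constants mirroring the RW_* request modes and the
// memo fix types chosen by mtr_t::page_lock().
enum rw_latch_mode { NO_LATCH, S_LATCH, SX_LATCH, X_LATCH };
enum fix_type { BUF_FIX, PAGE_S_FIX, PAGE_SX_FIX, PAGE_X_FIX };

// Buffer-fix only, shared (s_lock), update (u_lock, recorded as the
// legacy SX fix type), or exclusive (x_lock, possibly an upgrade).
fix_type latch_to_fix(rw_latch_mode m) {
  switch (m) {
  case NO_LATCH: return BUF_FIX;     // no page latch, just a fix
  case S_LATCH:  return PAGE_S_FIX;  // block->lock.s_lock()
  case SX_LATCH: return PAGE_SX_FIX; // block->lock.u_lock()
  default:       return PAGE_X_FIX;  // block->lock.x_lock()
  }
}
```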
#ifdef UNIV_DEBUG
/** Check if we are holding an rw-latch in this mini-transaction
@param lock latch to search for
@param type held latch type
@return whether (lock,type) is contained */
bool mtr_t::memo_contains(const rw_lock_t &lock, mtr_memo_type_t type)
bool mtr_t::memo_contains(const index_lock &lock, mtr_memo_type_t type)
{
Iterate<Find> iteration(Find(&lock, type));
if (m_memo.for_each_block_in_reverse(iteration))
@ -976,13 +1102,13 @@ bool mtr_t::memo_contains(const rw_lock_t &lock, mtr_memo_type_t type)
switch (type) {
case MTR_MEMO_X_LOCK:
ut_ad(rw_lock_own(&lock, RW_LOCK_X));
ut_ad(lock.have_x());
break;
case MTR_MEMO_SX_LOCK:
ut_ad(rw_lock_own(&lock, RW_LOCK_SX));
ut_ad(lock.have_u_or_x());
break;
case MTR_MEMO_S_LOCK:
ut_ad(rw_lock_own(&lock, RW_LOCK_S));
ut_ad(lock.have_s());
break;
default:
break;
@ -1027,20 +1153,29 @@ struct FlaggedCheck {
@retval true if the iteration should continue */
bool operator()(const mtr_memo_slot_t* slot) const
{
if (m_ptr != slot->object || !(m_flags & slot->type)) {
if (m_ptr != slot->object) {
return(true);
}
if (ulint flags = m_flags & (MTR_MEMO_PAGE_S_FIX
| MTR_MEMO_PAGE_SX_FIX
| MTR_MEMO_PAGE_X_FIX)) {
rw_lock_t* lock = &static_cast<buf_block_t*>(
auto f = m_flags & slot->type;
if (!f) {
return true;
}
if (f & (MTR_MEMO_PAGE_S_FIX | MTR_MEMO_PAGE_SX_FIX
| MTR_MEMO_PAGE_X_FIX)) {
block_lock* lock = &static_cast<buf_block_t*>(
const_cast<void*>(m_ptr))->lock;
ut_ad(rw_lock_own_flagged(lock, flags));
ut_ad(!(f & MTR_MEMO_PAGE_S_FIX) || lock->have_s());
ut_ad(!(f & MTR_MEMO_PAGE_SX_FIX)
|| lock->have_u_or_x());
ut_ad(!(f & MTR_MEMO_PAGE_X_FIX) || lock->have_x());
} else {
rw_lock_t* lock = static_cast<rw_lock_t*>(
index_lock* lock = static_cast<index_lock*>(
const_cast<void*>(m_ptr));
ut_ad(rw_lock_own_flagged(lock, m_flags >> 5));
ut_ad(!(f & MTR_MEMO_S_LOCK) || lock->have_s());
ut_ad(!(f & MTR_MEMO_SX_LOCK) || lock->have_u_or_x());
ut_ad(!(f & MTR_MEMO_X_LOCK) || lock->have_x());
}
return(false);


@ -2079,8 +2079,7 @@ row_ins_scan_sec_index_for_duplicate(
rec_offs_init(offsets_);
ut_ad(s_latch == rw_lock_own_flagged(
&index->lock, RW_LOCK_FLAG_S | RW_LOCK_FLAG_SX));
ut_ad(s_latch == (index->lock.have_u_not_x() || index->lock.have_s()));
n_unique = dict_index_get_n_unique(index);


@ -341,8 +341,7 @@ row_log_online_op(
ut_ad(dtuple_validate(tuple));
ut_ad(dtuple_get_n_fields(tuple) == dict_index_get_n_fields(index));
ut_ad(rw_lock_own_flagged(&index->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
ut_ad(index->lock.have_x() || index->lock.have_s());
if (index->is_corrupted()) {
return;
@ -660,9 +659,7 @@ row_log_table_delete(
ut_ad(rec_offs_validate(rec, index, offsets));
ut_ad(rec_offs_n_fields(offsets) == dict_index_get_n_fields(index));
ut_ad(rec_offs_size(offsets) <= sizeof index->online_log->tail.buf);
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_S | RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_any());
if (index->online_status != ONLINE_INDEX_CREATION
|| (index->type & DICT_CORRUPT) || index->table->corrupted
@ -957,9 +954,8 @@ row_log_table_low(
ut_ad(rec_offs_validate(rec, index, offsets));
ut_ad(rec_offs_n_fields(offsets) == dict_index_get_n_fields(index));
ut_ad(rec_offs_size(offsets) <= sizeof log->tail.buf);
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_S | RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_any());
#ifdef UNIV_DEBUG
switch (fil_page_get_type(page_align(rec))) {
case FIL_PAGE_INDEX:
@ -1239,10 +1235,7 @@ row_log_table_get_pk(
ut_ad(dict_index_is_clust(index));
ut_ad(dict_index_is_online_ddl(index));
ut_ad(!offsets || rec_offs_validate(rec, index, offsets));
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_S | RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_any());
ut_ad(log);
ut_ad(log->table);
ut_ad(log->min_trx);
@ -1447,9 +1440,7 @@ row_log_table_blob_free(
{
ut_ad(dict_index_is_clust(index));
ut_ad(dict_index_is_online_ddl(index));
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_u_or_x());
ut_ad(page_no != FIL_NULL);
if (index->online_log->error != DB_SUCCESS) {
@ -1491,10 +1482,7 @@ row_log_table_blob_alloc(
{
ut_ad(dict_index_is_clust(index));
ut_ad(dict_index_is_online_ddl(index));
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_u_or_x());
ut_ad(page_no != FIL_NULL);
@ -1588,7 +1576,7 @@ row_log_table_apply_convert_mrec(
if (rec_offs_nth_extern(offsets, i)) {
ut_ad(rec_offs_any_extern(offsets));
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
if (const page_no_map* blobs = log->blobs) {
data = rec_get_nth_field(
@ -1618,7 +1606,7 @@ row_log_table_apply_convert_mrec(
ut_a(data);
dfield_set_data(dfield, data, len);
blob_done:
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
} else {
data = rec_get_nth_field(mrec, offsets, i, &len);
if (len == UNIV_SQL_DEFAULT) {
@ -2768,7 +2756,7 @@ row_log_table_apply_ops(
ut_ad(dict_index_is_clust(index));
ut_ad(dict_index_is_online_ddl(index));
ut_ad(trx->mysql_thd);
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_x());
ut_ad(!dict_index_is_online_ddl(new_index));
ut_ad(dict_col_get_clust_pos(
dict_table_get_sys_col(index->table, DATA_TRX_ID), index)
@ -2788,7 +2776,7 @@ row_log_table_apply_ops(
next_block:
ut_ad(has_index_lock);
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_u_or_x());
ut_ad(index->online_log->head.bytes == 0);
stage->inc(row_log_progress_inc_per_block());
@ -2861,7 +2849,7 @@ all_done:
ut_ad(has_index_lock);
has_index_lock = false;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
log_free_check();
@ -3052,7 +3040,7 @@ all_done:
mrec = NULL;
process_next_block:
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
has_index_lock = true;
index->online_log->head.bytes = 0;
@ -3084,7 +3072,7 @@ interrupted:
error = DB_INTERRUPTED;
func_exit:
if (!has_index_lock) {
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
}
mem_heap_free(offsets_heap);
@ -3126,7 +3114,7 @@ row_log_table_apply(
clust_index->online_log->n_rows = new_table->stat_n_rows;
}
rw_lock_x_lock(dict_index_get_lock(clust_index));
clust_index->lock.x_lock(SRW_LOCK_CALL);
if (!clust_index->online_log) {
ut_ad(dict_index_get_online_status(clust_index)
@ -3149,7 +3137,7 @@ row_log_table_apply(
== clust_index->online_log->tail.total);
}
rw_lock_x_unlock(dict_index_get_lock(clust_index));
clust_index->lock.x_unlock();
DBUG_EXECUTE_IF("innodb_trx_duplicates",
thr_get_trx(thr)->duplicates = 0;);
@ -3188,7 +3176,7 @@ row_log_allocate(
ut_ad(same_pk || table);
ut_ad(!table || col_map);
ut_ad(!defaults || col_map);
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_u_or_x());
ut_ad(trx_state_eq(trx, TRX_STATE_ACTIVE));
ut_ad(trx->id);
@ -3296,11 +3284,9 @@ row_log_get_max_trx(
dict_index_t* index) /*!< in: index, must be locked */
{
ut_ad(dict_index_get_online_status(index) == ONLINE_INDEX_CREATION);
ut_ad((rw_lock_own(dict_index_get_lock(index), RW_LOCK_S)
&& mutex_own(&index->online_log->mutex))
|| rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_x()
|| (index->lock.have_s()
&& mutex_own(&index->online_log->mutex)));
return(index->online_log->max_trx);
}
@ -3328,8 +3314,7 @@ row_log_apply_op_low(
ut_ad(!dict_index_is_clust(index));
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X)
== has_index_lock);
ut_ad(index->lock.have_x() == has_index_lock);
ut_ad(!index->is_corrupted());
ut_ad(trx_id != 0 || op == ROW_OP_DELETE);
@ -3571,8 +3556,7 @@ row_log_apply_op(
/* Online index creation is only used for secondary indexes. */
ut_ad(!dict_index_is_clust(index));
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X)
== has_index_lock);
ut_ad(index->lock.have_x() == has_index_lock);
if (index->is_corrupted()) {
*error = DB_INDEX_CORRUPT;
@ -3683,7 +3667,7 @@ row_log_apply_ops(
ut_ad(dict_index_is_online_ddl(index));
ut_ad(!index->is_committed());
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_x());
ut_ad(index->online_log);
MEM_UNDEFINED(&mrec_end, sizeof mrec_end);
@ -3698,7 +3682,7 @@ row_log_apply_ops(
next_block:
ut_ad(has_index_lock);
ut_ad(rw_lock_own(dict_index_get_lock(index), RW_LOCK_X));
ut_ad(index->lock.have_x());
ut_ad(index->online_log->head.bytes == 0);
stage->inc(row_log_progress_inc_per_block());
@ -3764,7 +3748,7 @@ all_done:
* srv_sort_buf_size;
ut_ad(has_index_lock);
has_index_lock = false;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
log_free_check();
@ -3924,7 +3908,7 @@ all_done:
mrec = NULL;
process_next_block:
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
has_index_lock = true;
index->online_log->head.bytes = 0;
@ -3956,7 +3940,7 @@ interrupted:
error = DB_INTERRUPTED;
func_exit:
if (!has_index_lock) {
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
}
switch (error) {
@ -4011,7 +3995,7 @@ row_log_apply(
log_free_check();
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
if (!dict_table_is_corrupted(index->table)) {
error = row_log_apply_ops(trx, index, &dup, stage);
@ -4035,7 +4019,7 @@ row_log_apply(
log = index->online_log;
index->online_log = NULL;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
row_log_free(log);


@ -1950,7 +1950,7 @@ row_merge_read_clustered_index(
goto scan_next;
}
if (clust_index->lock.waiters) {
if (clust_index->lock.is_waiting()) {
/* There are waiters on the clustered
index tree lock, likely the purge
thread. Store and restore the cursor
@ -2557,22 +2557,21 @@ write_buffers:
from accessing this index, to ensure
read consistency. */
trx_id_t max_trx_id;
ut_a(row == NULL);
rw_lock_x_lock(
dict_index_get_lock(buf->index));
ut_a(dict_index_get_online_status(buf->index)
dict_index_t* index = buf->index;
index->lock.x_lock(SRW_LOCK_CALL);
ut_a(dict_index_get_online_status(index)
== ONLINE_INDEX_CREATION);
max_trx_id = row_log_get_max_trx(buf->index);
trx_id_t max_trx_id = row_log_get_max_trx(
index);
if (max_trx_id > buf->index->trx_id) {
buf->index->trx_id = max_trx_id;
if (max_trx_id > index->trx_id) {
index->trx_id = max_trx_id;
}
rw_lock_x_unlock(
dict_index_get_lock(buf->index));
index->lock.x_unlock();
}
/* Secondary index and clustered index which is
@ -3867,8 +3866,7 @@ row_merge_drop_indexes(
table, index);
index = prev;
} else {
rw_lock_x_lock(
dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
dict_index_set_online_status(
index, ONLINE_INDEX_ABORTED);
index->type |= DICT_CORRUPT;
@ -3877,11 +3875,11 @@ row_merge_drop_indexes(
}
continue;
case ONLINE_INDEX_CREATION:
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
ut_ad(!index->is_committed());
row_log_abort_sec(index);
drop_aborted:
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
DEBUG_SYNC_C("merge_drop_index_after_abort");
/* covered by dict_sys.mutex */
@ -3893,10 +3891,10 @@ row_merge_drop_indexes(
the tablespace, but keep the object
in the data dictionary cache. */
row_merge_drop_index_dict(trx, index->id);
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
dict_index_set_online_status(
index, ONLINE_INDEX_ABORTED_DROPPED);
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
table->drop_aborted = TRUE;
continue;
}
@ -4768,12 +4766,10 @@ func_exit:
case ONLINE_INDEX_COMPLETE:
break;
case ONLINE_INDEX_CREATION:
rw_lock_x_lock(
dict_index_get_lock(indexes[i]));
indexes[i]->lock.x_lock(SRW_LOCK_CALL);
row_log_abort_sec(indexes[i]);
indexes[i]->type |= DICT_CORRUPT;
rw_lock_x_unlock(
dict_index_get_lock(indexes[i]));
indexes[i]->lock.x_unlock();
new_table->drop_aborted = TRUE;
/* fall through */
case ONLINE_INDEX_ABORTED_DROPPED:


@ -3527,13 +3527,13 @@ defer:
for (dict_index_t* index = dict_table_get_first_index(table);
index != NULL;
index = dict_table_get_next_index(index)) {
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
/* Save the page numbers so that we can restore them
if the operation fails. */
*page_no++ = index->page;
/* Mark the index unusable. */
index->page = FIL_NULL;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
}
/* Deleting a row from SYS_INDEXES table will invoke
@ -3713,10 +3713,10 @@ do_drop:
for (dict_index_t* index = dict_table_get_first_index(table);
index != NULL;
index = dict_table_get_next_index(index)) {
rw_lock_x_lock(dict_index_get_lock(index));
index->lock.x_lock(SRW_LOCK_CALL);
ut_a(index->page == FIL_NULL);
index->page = *page_no++;
rw_lock_x_unlock(dict_index_get_lock(index));
index->lock.x_unlock();
}
}


@ -442,21 +442,19 @@ row_purge_remove_sec_if_poss_leaf(
/* Set the purge node for the call to row_purge_poss_sec(). */
pcur.btr_cur.purge_node = node;
if (dict_index_is_spatial(index)) {
rw_lock_sx_lock(dict_index_get_lock(index));
if (index->is_spatial()) {
pcur.btr_cur.thr = NULL;
index->lock.u_lock(SRW_LOCK_CALL);
search_result = row_search_index_entry(
index, entry, mode, &pcur, &mtr);
index->lock.u_unlock();
} else {
/* Set the query thread, so that ibuf_insert_low() will be
able to invoke thd_get_trx(). */
pcur.btr_cur.thr = static_cast<que_thr_t*>(
que_node_get_parent(node));
}
search_result = row_search_index_entry(
index, entry, mode, &pcur, &mtr);
if (dict_index_is_spatial(index)) {
rw_lock_sx_unlock(dict_index_get_lock(index));
search_result = row_search_index_entry(
index, entry, mode, &pcur, &mtr);
}
switch (search_result) {


@ -677,10 +677,10 @@ row_quiesce_set_state(
for (dict_index_t* index = dict_table_get_next_index(clust_index);
index != NULL;
index = dict_table_get_next_index(index)) {
rw_lock_x_lock(&index->lock);
index->lock.x_lock(SRW_LOCK_CALL);
}
rw_lock_x_lock(&clust_index->lock);
clust_index->lock.x_lock(SRW_LOCK_CALL);
switch (state) {
case QUIESCE_START:
@ -700,7 +700,7 @@ row_quiesce_set_state(
for (dict_index_t* index = dict_table_get_first_index(table);
index != NULL;
index = dict_table_get_next_index(index)) {
rw_lock_x_unlock(&index->lock);
index->lock.x_unlock();
}
row_mysql_unlock_data_dictionary(trx);


@ -1077,11 +1077,10 @@ sel_set_rtr_rec_lock(
ut_ad(page_align(first_rec) == cur_block->frame);
ut_ad(match->valid);
rw_lock_x_lock(&(match->block.lock));
match->block.lock.x_lock();
retry:
cur_block = btr_pcur_get_block(pcur);
ut_ad(rw_lock_own_flagged(&match->block.lock,
RW_LOCK_FLAG_X | RW_LOCK_FLAG_S));
ut_ad(match->block.lock.have_x() || match->block.lock.have_s());
ut_ad(page_is_leaf(buf_block_get_frame(cur_block)));
err = lock_sec_rec_read_check_and_lock(
@ -1196,7 +1195,7 @@ re_scan:
match->locked = true;
func_end:
rw_lock_x_unlock(&(match->block.lock));
match->block.lock.x_unlock();
if (heap != NULL) {
mem_heap_free(heap);
}


@ -305,11 +305,7 @@ row_undo_mod_clust(
ut_ad(online || !dict_index_is_online_ddl(index));
if (err == DB_SUCCESS && online) {
ut_ad(rw_lock_own_flagged(
&index->lock,
RW_LOCK_FLAG_S | RW_LOCK_FLAG_X
| RW_LOCK_FLAG_SX));
ut_ad(index->lock.have_any());
switch (node->rec_type) {
case TRX_UNDO_DEL_MARK_REC:


@ -1166,60 +1166,6 @@ static monitor_info_t innodb_counter_info[] =
MONITOR_EXISTING | MONITOR_DEFAULT_ON | MONITOR_DISPLAY_CURRENT),
MONITOR_DEFAULT_START, MONITOR_OVLD_SRV_PAGE_SIZE},
{"innodb_rwlock_s_spin_waits", "server",
"Number of rwlock spin waits due to shared latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_S_SPIN_WAITS},
{"innodb_rwlock_x_spin_waits", "server",
"Number of rwlock spin waits due to exclusive latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_X_SPIN_WAITS},
{"innodb_rwlock_sx_spin_waits", "server",
"Number of rwlock spin waits due to sx latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_SX_SPIN_WAITS},
{"innodb_rwlock_s_spin_rounds", "server",
"Number of rwlock spin loop rounds due to shared latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_S_SPIN_ROUNDS},
{"innodb_rwlock_x_spin_rounds", "server",
"Number of rwlock spin loop rounds due to exclusive latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_X_SPIN_ROUNDS},
{"innodb_rwlock_sx_spin_rounds", "server",
"Number of rwlock spin loop rounds due to sx latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_SX_SPIN_ROUNDS},
{"innodb_rwlock_s_os_waits", "server",
"Number of OS waits due to shared latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_S_OS_WAITS},
{"innodb_rwlock_x_os_waits", "server",
"Number of OS waits due to exclusive latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_X_OS_WAITS},
{"innodb_rwlock_sx_os_waits", "server",
"Number of OS waits due to sx latch request",
static_cast<monitor_type_t>(
MONITOR_EXISTING | MONITOR_DEFAULT_ON),
MONITOR_DEFAULT_START, MONITOR_OVLD_RWLOCK_SX_OS_WAITS},
/* ========== Counters for DML operations ========== */
{"module_dml", "dml", "Statistics for DMLs",
MONITOR_MODULE,
@ -1733,42 +1679,6 @@ srv_mon_process_existing_counter(
value = srv_page_size;
break;
case MONITOR_OVLD_RWLOCK_S_SPIN_WAITS:
value = rw_lock_stats.rw_s_spin_wait_count;
break;
case MONITOR_OVLD_RWLOCK_X_SPIN_WAITS:
value = rw_lock_stats.rw_x_spin_wait_count;
break;
case MONITOR_OVLD_RWLOCK_SX_SPIN_WAITS:
value = rw_lock_stats.rw_sx_spin_wait_count;
break;
case MONITOR_OVLD_RWLOCK_S_SPIN_ROUNDS:
value = rw_lock_stats.rw_s_spin_round_count;
break;
case MONITOR_OVLD_RWLOCK_X_SPIN_ROUNDS:
value = rw_lock_stats.rw_x_spin_round_count;
break;
case MONITOR_OVLD_RWLOCK_SX_SPIN_ROUNDS:
value = rw_lock_stats.rw_sx_spin_round_count;
break;
case MONITOR_OVLD_RWLOCK_S_OS_WAITS:
value = rw_lock_stats.rw_s_os_wait_count;
break;
case MONITOR_OVLD_RWLOCK_X_OS_WAITS:
value = rw_lock_stats.rw_x_os_wait_count;
break;
case MONITOR_OVLD_RWLOCK_SX_OS_WAITS:
value = rw_lock_stats.rw_sx_os_wait_count;
break;
case MONITOR_OVLD_BUFFER_POOL_SIZE:
value = srv_buf_pool_size;
break;


@ -855,7 +855,7 @@ srv_printf_innodb_monitor(
"SEMAPHORES\n"
"----------\n", file);
sync_print(file);
sync_array_print(file);
/* Conceptually, srv_innodb_monitor_mutex has a very high latching
order level in sync0sync.h, while dict_foreign_err_mutex has a very


@ -230,7 +230,7 @@ void srw_lock_low::wr_unlock() { write_unlock(); readers_wake(); }
#ifdef UNIV_PFS_RWLOCK
template<bool support_u_lock>
void srw_lock::psi_rd_lock(const char *file, unsigned line)
bool srw_lock::psi_rd_lock(const char *file, unsigned line)
{
PSI_rwlock_locker_state state;
uint32_t l;
@ -247,26 +247,28 @@ void srw_lock::psi_rd_lock(const char *file, unsigned line)
}
else if (!nowait)
lock.read_lock(l);
return nowait;
}
template void srw_lock::psi_rd_lock<false>(const char *, unsigned);
template void srw_lock::psi_rd_lock<true>(const char *, unsigned);
template bool srw_lock::psi_rd_lock<false>(const char *, unsigned);
template bool srw_lock::psi_rd_lock<true>(const char *, unsigned);
void srw_lock::psi_u_lock(const char *file, unsigned line)
bool srw_lock::psi_u_lock(const char *file, unsigned line)
{
PSI_rwlock_locker_state state;
if (PSI_rwlock_locker *locker= PSI_RWLOCK_CALL(start_rwlock_wrwait)
(&state, pfs_psi, PSI_RWLOCK_SHAREDEXCLUSIVELOCK, file, line))
{
lock.u_lock();
const bool nowait= lock.u_lock();
PSI_RWLOCK_CALL(end_rwlock_rdwait)(locker, 0);
return nowait;
}
else
lock.u_lock();
return lock.u_lock();
}
template<bool support_u_lock>
void srw_lock::psi_wr_lock(const char *file, unsigned line)
bool srw_lock::psi_wr_lock(const char *file, unsigned line)
{
PSI_rwlock_locker_state state;
const bool nowait= lock.write_trylock();
@ -283,12 +285,13 @@ void srw_lock::psi_wr_lock(const char *file, unsigned line)
}
else if (!nowait)
lock.wr_lock();
return nowait;
}
template void srw_lock::psi_wr_lock<false>(const char *, unsigned);
template void srw_lock::psi_wr_lock<true>(const char *, unsigned);
template bool srw_lock::psi_wr_lock<false>(const char *, unsigned);
template bool srw_lock::psi_wr_lock<true>(const char *, unsigned);
void srw_lock::psi_u_wr_upgrade(const char *file, unsigned line)
bool srw_lock::psi_u_wr_upgrade(const char *file, unsigned line)
{
PSI_rwlock_locker_state state;
const bool nowait= lock.upgrade_trylock();
@ -303,5 +306,6 @@ void srw_lock::psi_u_wr_upgrade(const char *file, unsigned line)
}
else if (!nowait)
lock.write_lock(true);
return nowait;
}
#endif /* UNIV_PFS_RWLOCK */
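The psi_* wrappers above were changed to return `nowait`, i.e. whether the non-blocking fast path succeeded, so the caller (and PERFORMANCE_SCHEMA) can tell a contended acquisition from an uncontended one. The pattern, sketched as a toy over std::mutex rather than the real srw_lock:

```cpp
#include <cassert>
#include <mutex>

// Try the non-blocking path first; fall back to a blocking acquire.
// The return value reports whether the fast path won — in InnoDB's
// psi_* wrappers this feeds the PERFORMANCE_SCHEMA wait accounting.
bool lock_reporting_nowait(std::mutex &m) {
  const bool nowait = m.try_lock();  // fast path: no wait
  if (!nowait)
    m.lock();                        // slow path: block until acquired
  return nowait;                     // true => acquired without waiting
}
```

Either way the lock is held on return; only the wait statistics differ.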


@ -46,7 +46,6 @@ Created 9/5/1995 Heikki Tuuri
#include <innodb_priv.h>
#include "lock0lock.h"
#include "sync0rw.h"
/*
WAIT ARRAY
@ -78,26 +77,14 @@ any waiting threads who have missed the signal. */
typedef TTASEventMutex<GenericPolicy> WaitMutex;
/** The latch types that use the sync array. */
union sync_object_t {
/** RW lock instance */
rw_lock_t* lock;
/** Mutex instance */
WaitMutex* mutex;
};
/** A cell where an individual thread may wait suspended until a resource
is released. The suspending is implemented using an operating system
event semaphore. */
struct sync_cell_t {
sync_object_t latch; /*!< pointer to the object the
WaitMutex* mutex; /*!< pointer to the object the
thread is waiting for; if NULL
the cell is free for use */
ulint request_type; /*!< lock type requested on the
object */
const char* file; /*!< in debug version file where
requested */
ulint line; /*!< in debug version line where
@ -108,7 +95,7 @@ struct sync_cell_t {
called sync_array_event_wait
on this cell */
int64_t signal_count; /*!< We capture the signal_count
of the latch when we
of the mutex when we
reset the event. This value is
then passed on to os_event_wait
and we wait only if the event
@ -231,7 +218,7 @@ sync_array_validate(sync_array_t* arr)
cell = sync_array_get_nth_cell(arr, i);
if (cell->latch.mutex != NULL) {
if (cell->mutex) {
count++;
}
}
@ -282,34 +269,14 @@ sync_array_free(
UT_DELETE(arr);
}
/*******************************************************************//**
Returns the event that the thread owning the cell waits for. */
static
os_event_t
sync_cell_get_event(
/*================*/
sync_cell_t* cell) /*!< in: non-empty sync array cell */
{
switch(cell->request_type) {
case SYNC_MUTEX:
return(cell->latch.mutex->event());
case RW_LOCK_X_WAIT:
return(cell->latch.lock->wait_ex_event);
default:
return(cell->latch.lock->event);
}
}
/******************************************************************//**
Reserves a wait array cell for waiting for an object.
The event of the cell is reset to nonsignalled state.
@return sync cell to wait on */
sync_cell_t*
sync_array_reserve_cell(
/*====================*/
sync_array_t* arr, /*!< in: wait array */
void* object, /*!< in: pointer to the object to wait for */
ulint type, /*!< in: lock request type */
void* object, /*!< in: pointer to the object to wait for */
const char* file, /*!< in: file where requested */
unsigned line) /*!< in: line where requested */
{
@ -342,15 +309,9 @@ sync_array_reserve_cell(
++arr->n_reserved;
/* Reserve the cell. */
ut_ad(cell->latch.mutex == NULL);
ut_ad(!cell->mutex);
cell->request_type = type;
if (cell->request_type == SYNC_MUTEX) {
cell->latch.mutex = reinterpret_cast<WaitMutex*>(object);
} else {
cell->latch.lock = reinterpret_cast<rw_lock_t*>(object);
}
cell->mutex = static_cast<WaitMutex*>(object);
cell->waiting = false;
@ -365,8 +326,7 @@ sync_array_reserve_cell(
/* Make sure the event is reset and also store the value of
signal_count at which the event was reset. */
os_event_t event = sync_cell_get_event(cell);
cell->signal_count = os_event_reset(event);
cell->signal_count = os_event_reset(cell->mutex->event());
return(cell);
}
@ -382,11 +342,11 @@ sync_array_free_cell(
{
sync_array_enter(arr);
ut_a(cell->latch.mutex != NULL);
ut_a(cell->mutex);
cell->waiting = false;
cell->signal_count = 0;
cell->latch.mutex = NULL;
cell->mutex = NULL;
/* Setup the list of free slots in the array */
cell->line = arr->first_free_slot;
@ -402,7 +362,7 @@ sync_array_free_cell(
cell = sync_array_get_nth_cell(arr, i);
ut_ad(!cell->waiting);
ut_ad(cell->latch.mutex == 0);
ut_ad(!cell->mutex);
ut_ad(cell->signal_count == 0);
}
#endif /* UNIV_DEBUG */
@ -428,32 +388,24 @@ sync_array_wait_event(
sync_array_enter(arr);
ut_ad(!cell->waiting);
ut_ad(cell->latch.mutex);
ut_ad(cell->mutex);
ut_ad(os_thread_get_curr_id() == cell->thread_id);
cell->waiting = true;
#ifdef UNIV_DEBUG
/* We use simple enter to the mutex below, because if
we cannot acquire it at once, mutex_enter would call
recursively sync_array routines, leading to trouble.
rw_lock_debug_mutex freezes the debug lists. */
rw_lock_debug_mutex_enter();
if (sync_array_detect_deadlock(arr, cell, cell, 0)) {
ib::fatal() << "########################################"
" Deadlock Detected!";
}
rw_lock_debug_mutex_exit();
#endif /* UNIV_DEBUG */
sync_array_exit(arr);
tpool::tpool_wait_begin();
os_event_wait_low(sync_cell_get_event(cell), cell->signal_count);
os_event_wait_low(cell->mutex->event(), cell->signal_count);
tpool::tpool_wait_end();
sync_array_free_cell(arr, cell);
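With rw-locks gone from the sync array, every waiting cell refers to a WaitMutex and blocks on its event. The reset/wait protocol in the hunk above (store the signal count returned by the reset, pass it to the wait) avoids a lost wakeup if the mutex is released between the reset and the wait. A minimal sketch of such an event, with illustrative names rather than InnoDB's actual os_event implementation:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of an event with a signal counter, assuming the reset/wait
// protocol used above: reset() returns the current count, and
// wait_low(old_count) returns at once if a set() happened after that
// reset. Illustrative only, not InnoDB's os_event API.
class event_sketch {
  std::mutex m;
  std::condition_variable cv;
  bool is_set = false;
  uint64_t signal_count = 0;
public:
  uint64_t reset() {                   // cf. os_event_reset()
    std::lock_guard<std::mutex> g(m);
    is_set = false;
    return signal_count;
  }
  void set() {                         // cf. os_event_set()
    { std::lock_guard<std::mutex> g(m); is_set = true; ++signal_count; }
    cv.notify_all();
  }
  void wait_low(uint64_t old_count) {  // cf. os_event_wait_low()
    std::unique_lock<std::mutex> l(m);
    // Wake when the event is set, or when a set()/reset() cycle already
    // completed since old_count was sampled (the lost-wakeup guard).
    cv.wait(l, [&] { return is_set || signal_count != old_count; });
  }
};
```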
@@ -470,12 +422,6 @@ sync_array_cell_print(
FILE* file, /*!< in: file where to print */
sync_cell_t* cell) /*!< in: sync cell */
{
rw_lock_t* rwlock;
ulint type;
ulint writer;
type = cell->request_type;
fprintf(file,
"--Thread " ULINTPF " has waited at %s line " ULINTPF
" for %.2f seconds the semaphore:\n",
@@ -483,91 +429,29 @@ sync_array_cell_print(
innobase_basename(cell->file), cell->line,
difftime(time(NULL), cell->reservation_time));
switch (type) {
default:
ut_error;
case RW_LOCK_X:
case RW_LOCK_X_WAIT:
case RW_LOCK_SX:
case RW_LOCK_S:
fputs(type == RW_LOCK_X ? "X-lock on"
: type == RW_LOCK_X_WAIT ? "X-lock (wait_ex) on"
: type == RW_LOCK_SX ? "SX-lock on"
: "S-lock on", file);
rwlock = cell->latch.lock;
if (rwlock) {
fprintf(file,
" RW-latch at %p created in file %s line %u\n",
(void*) rwlock, innobase_basename(rwlock->cfile_name),
rwlock->cline);
writer = rw_lock_get_writer(rwlock);
if (writer != RW_LOCK_NOT_LOCKED) {
fprintf(file,
"a writer (thread id " ULINTPF ") has"
" reserved it in mode %s",
ulint(rwlock->writer_thread),
writer == RW_LOCK_X ? " exclusive\n"
: writer == RW_LOCK_SX ? " SX\n"
: " wait exclusive\n");
}
fprintf(file,
"number of readers " ULINTPF
", waiters flag %d, "
"lock_word: %x\n"
"Last time write locked in file %s line %u"
#if 0 /* JAN: TODO: FIX LATER */
"\nHolder thread " ULINTPF
" file %s line " ULINTPF
#endif
"\n",
rw_lock_get_reader_count(rwlock),
uint32_t{rwlock->waiters},
int32_t{rwlock->lock_word},
innobase_basename(rwlock->last_x_file_name),
rwlock->last_x_line
#if 0 /* JAN: TODO: FIX LATER */
, ulint(rwlock->thread_id),
innobase_basename(rwlock->file_name),
rwlock->line
#endif
);
}
break;
case SYNC_MUTEX:
WaitMutex* mutex = cell->latch.mutex;
const WaitMutex::MutexPolicy& policy = mutex->policy();
WaitMutex* mutex = cell->mutex;
const WaitMutex::MutexPolicy& policy = mutex->policy();
#ifdef UNIV_DEBUG
const char* name = policy.context.get_enter_filename();
if (name == NULL) {
/* The mutex might have been released. */
name = "NULL";
}
#endif /* UNIV_DEBUG */
if (mutex) {
fprintf(file,
"Mutex at %p, %s, lock var %x\n"
#ifdef UNIV_DEBUG
"Last time reserved in file %s line %u"
#endif /* UNIV_DEBUG */
"\n",
(void*) mutex,
policy.to_string().c_str(),
mutex->state()
#ifdef UNIV_DEBUG
,name,
policy.context.get_enter_line()
#endif /* UNIV_DEBUG */
);
}
break;
const char* name = policy.context.get_enter_filename();
if (name == NULL) {
/* The mutex might have been released. */
name = "NULL";
}
#endif /* UNIV_DEBUG */
fprintf(file,
"Mutex at %p, %s, lock var %x\n"
#ifdef UNIV_DEBUG
"Last time reserved in file %s line %u"
#endif /* UNIV_DEBUG */
"\n",
(void*) mutex,
policy.to_string().c_str(),
mutex->state()
#ifdef UNIV_DEBUG
,name, policy.context.get_enter_line()
#endif /* UNIV_DEBUG */
);
if (!cell->waiting) {
fputs("wait has ended\n", file);
@@ -592,7 +476,7 @@ sync_array_find_thread(
cell = sync_array_get_nth_cell(arr, i);
if (cell->latch.mutex != NULL
if (cell->mutex
&& os_thread_eq(cell->thread_id, thread)) {
return(cell); /* Found */
@@ -643,23 +527,6 @@ sync_array_deadlock_step(
return(FALSE);
}
/**
Report an error to stderr.
@param lock rw-lock instance
@param debug rw-lock debug information
@param cell thread context */
static
void
sync_array_report_error(
rw_lock_t* lock,
rw_lock_debug_t* debug,
sync_cell_t* cell)
{
fprintf(stderr, "rw-lock %p ", (void*) lock);
sync_array_cell_print(stderr, cell);
rw_lock_debug_print(stderr, debug);
}
/******************************************************************//**
This function is called only in the debug version. Detects a deadlock
of one or more threads because of waits of semaphores.
@@ -674,15 +541,13 @@ sync_array_detect_deadlock(
sync_cell_t* cell, /*!< in: cell to search */
ulint depth) /*!< in: recursion depth */
{
rw_lock_t* lock;
os_thread_id_t thread;
ibool ret;
rw_lock_debug_t*debug;
ut_a(arr);
ut_a(start);
ut_a(cell);
ut_ad(cell->latch.mutex != 0);
ut_ad(cell->mutex);
ut_ad(os_thread_get_curr_id() == start->thread_id);
ut_ad(depth < 100);
@@ -693,10 +558,7 @@ sync_array_detect_deadlock(
return(false);
}
switch (cell->request_type) {
case SYNC_MUTEX: {
WaitMutex* mutex = cell->latch.mutex;
WaitMutex* mutex = cell->mutex;
const WaitMutex::MutexPolicy& policy = mutex->policy();
if (mutex->state() != MUTEX_STATE_UNLOCKED) {
@@ -737,125 +599,6 @@ sync_array_detect_deadlock(
/* No deadlock */
return(false);
}
case RW_LOCK_X:
case RW_LOCK_X_WAIT:
lock = cell->latch.lock;
for (debug = UT_LIST_GET_FIRST(lock->debug_list);
debug != NULL;
debug = UT_LIST_GET_NEXT(list, debug)) {
thread = debug->thread_id;
switch (debug->lock_type) {
case RW_LOCK_X:
case RW_LOCK_SX:
case RW_LOCK_X_WAIT:
if (os_thread_eq(thread, cell->thread_id)) {
break;
}
/* fall through */
case RW_LOCK_S:
/* The (wait) x-lock request can block
infinitely only if someone (can be also cell
thread) is holding s-lock, or someone
(cannot be cell thread) (wait) x-lock or
sx-lock, and he is blocked by start thread */
ret = sync_array_deadlock_step(
arr, start, thread, debug->pass,
depth);
if (ret) {
sync_array_report_error(
lock, debug, cell);
rw_lock_debug_print(stderr, debug);
return(TRUE);
}
}
}
return(false);
case RW_LOCK_SX:
lock = cell->latch.lock;
for (debug = UT_LIST_GET_FIRST(lock->debug_list);
debug != 0;
debug = UT_LIST_GET_NEXT(list, debug)) {
thread = debug->thread_id;
switch (debug->lock_type) {
case RW_LOCK_X:
case RW_LOCK_SX:
case RW_LOCK_X_WAIT:
if (os_thread_eq(thread, cell->thread_id)) {
break;
}
/* The sx-lock request can block infinitely
only if someone (can be also cell thread) is
holding (wait) x-lock or sx-lock, and he is
blocked by start thread */
ret = sync_array_deadlock_step(
arr, start, thread, debug->pass,
depth);
if (ret) {
sync_array_report_error(
lock, debug, cell);
return(TRUE);
}
}
}
return(false);
case RW_LOCK_S:
lock = cell->latch.lock;
for (debug = UT_LIST_GET_FIRST(lock->debug_list);
debug != 0;
debug = UT_LIST_GET_NEXT(list, debug)) {
thread = debug->thread_id;
if (debug->lock_type == RW_LOCK_X
|| debug->lock_type == RW_LOCK_X_WAIT) {
/* The s-lock request can block infinitely
only if someone (can also be cell thread) is
holding (wait) x-lock, and he is blocked by
start thread */
ret = sync_array_deadlock_step(
arr, start, thread, debug->pass,
depth);
if (ret) {
sync_array_report_error(
lock, debug, cell);
return(TRUE);
}
}
}
return(false);
default:
ut_error;
}
return(true);
}
#endif /* UNIV_DEBUG */
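After this change, sync_array_detect_deadlock() only has to follow mutex waits: each waiting thread is blocked on exactly one WaitMutex, whose holder may itself be waiting. The recursion removed above for the S/SX/X rw-lock cases was the general form of a cycle search in that wait-for graph. A minimal sketch of the simplified, mutex-only model (types and names are illustrative, not InnoDB's):

```cpp
#include <map>
#include <set>

using thread_id = int;

// waits_for[t] = the thread holding the mutex that thread t waits for.
// Returns true if following holders from `current` leads back to `start`.
bool find_cycle(const std::map<thread_id, thread_id>& waits_for,
                thread_id start, thread_id current,
                std::set<thread_id>& visited)
{
  auto it = waits_for.find(current);
  if (it == waits_for.end())
    return false;                       // holder is not waiting: no deadlock
  if (it->second == start)
    return true;                        // cycle closes back at the start thread
  if (!visited.insert(it->second).second)
    return false;                       // already explored this holder
  return find_cycle(waits_for, start, it->second, visited);
}
```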
@@ -896,15 +639,9 @@ sync_array_print_long_waits_low(
const time_t now = time(NULL);
for (ulint i = 0; i < arr->n_cells; i++) {
sync_cell_t* cell = sync_array_get_nth_cell(arr, i);
sync_cell_t* cell;
void* latch;
cell = sync_array_get_nth_cell(arr, i);
latch = cell->latch.mutex;
if (latch == NULL || !cell->waiting) {
if (!cell->mutex || !cell->waiting) {
continue;
}
@@ -923,7 +660,7 @@ sync_array_print_long_waits_low(
if (diff > longest_diff) {
longest_diff = diff;
*sema = latch;
*sema = cell->mutex;
*waiter = cell->thread_id;
}
}
@@ -932,15 +669,9 @@ sync_array_print_long_waits_low(
waiting for a semaphore. */
if (*noticed) {
for (i = 0; i < arr->n_cells; i++) {
void* wait_object;
sync_cell_t* cell;
cell = sync_array_get_nth_cell(arr, i);
wait_object = cell->latch.mutex;
if (wait_object == NULL || !cell->waiting) {
sync_cell_t* cell = sync_array_get_nth_cell(arr, i);
if (!cell->mutex || !cell->waiting) {
continue;
}
@@ -1018,11 +749,9 @@ sync_array_print_info_low(
arr->res_count);
for (i = 0; count < arr->n_reserved; ++i) {
sync_cell_t* cell;
sync_cell_t* cell = sync_array_get_nth_cell(arr, i);
cell = sync_array_get_nth_cell(arr, i);
if (cell->latch.mutex != 0) {
if (cell->mutex) {
count++;
sync_array_cell_print(file, cell);
}
@@ -1104,15 +833,9 @@ sync_array_print_innodb(void)
fputs("InnoDB: Semaphore wait debug output started for InnoDB:\n", stderr);
for (i = 0; i < arr->n_cells; i++) {
void* wait_object;
sync_cell_t* cell;
cell = sync_array_get_nth_cell(arr, i);
wait_object = cell->latch.mutex;
if (wait_object == NULL || !cell->waiting) {
sync_cell_t* cell = sync_array_get_nth_cell(arr, i);
if (!cell->mutex || !cell->waiting) {
continue;
}
@@ -1157,7 +880,7 @@ sync_arr_get_item(
wait_cell = sync_array_get_nth_cell(sync_arr, i);
if (wait_cell) {
wait_object = wait_cell->latch.mutex;
wait_object = wait_cell->mutex;
if(wait_object != NULL && wait_cell->waiting) {
found = TRUE;
@@ -1194,13 +917,10 @@ sync_arr_fill_sys_semphore_waits_table(
fields = tables->table->field;
n_items = sync_arr_get_n_items();
ulint type;
for(ulint i=0; i < n_items;i++) {
sync_cell_t *cell=NULL;
if (sync_arr_get_item(i, &cell)) {
WaitMutex* mutex;
type = cell->request_type;
/* JAN: FIXME
OK(fields[SYS_SEMAPHORE_WAITS_THREAD_ID]->store(,
ulint(cell->thread), true));
@@ -1212,10 +932,7 @@ sync_arr_fill_sys_semphore_waits_table(
difftime(time(NULL),
cell->reservation_time)));
if (type == SYNC_MUTEX) {
mutex = static_cast<WaitMutex*>(cell->latch.mutex);
if (mutex) {
if (WaitMutex* mutex = cell->mutex) {
// JAN: FIXME
// OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_OBJECT_NAME], mutex->cmutex_name));
OK(fields[SYS_SEMAPHORE_WAITS_WAIT_OBJECT]->store((longlong)mutex, true));
@@ -1233,59 +950,6 @@ sync_arr_fill_sys_semphore_waits_table(
//OK(fields[SYS_SEMAPHORE_WAITS_LAST_WRITER_LINE]->store(mutex->line, true));
//fields[SYS_SEMAPHORE_WAITS_LAST_WRITER_LINE]->set_notnull();
//OK(fields[SYS_SEMAPHORE_WAITS_OS_WAIT_COUNT]->store(mutex->count_os_wait, true));
}
} else if (type == RW_LOCK_X_WAIT
|| type == RW_LOCK_X
|| type == RW_LOCK_SX
|| type == RW_LOCK_S) {
rw_lock_t* rwlock=NULL;
rwlock = static_cast<rw_lock_t *> (cell->latch.lock);
if (rwlock) {
ulint writer = rw_lock_get_writer(rwlock);
OK(fields[SYS_SEMAPHORE_WAITS_WAIT_OBJECT]->store((longlong)rwlock, true));
if (type == RW_LOCK_X) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_WAIT_TYPE], "RW_LOCK_X"));
} else if (type == RW_LOCK_X_WAIT) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_WAIT_TYPE], "RW_LOCK_X_WAIT"));
} else if (type == RW_LOCK_S) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_WAIT_TYPE], "RW_LOCK_S"));
} else if (type == RW_LOCK_SX) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_WAIT_TYPE], "RW_LOCK_SX"));
}
if (writer != RW_LOCK_NOT_LOCKED) {
// JAN: FIXME
// OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_OBJECT_NAME], rwlock->lock_name));
OK(fields[SYS_SEMAPHORE_WAITS_WRITER_THREAD]->store(ulint(rwlock->writer_thread), true));
if (writer == RW_LOCK_X) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_RESERVATION_MODE], "RW_LOCK_X"));
} else if (writer == RW_LOCK_X_WAIT) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_RESERVATION_MODE], "RW_LOCK_X_WAIT"));
} else if (type == RW_LOCK_SX) {
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_RESERVATION_MODE], "RW_LOCK_SX"));
}
//OK(fields[SYS_SEMAPHORE_WAITS_HOLDER_THREAD_ID]->store(rwlock->thread_id, true));
//OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_HOLDER_FILE], innobase_basename(rwlock->file_name)));
//OK(fields[SYS_SEMAPHORE_WAITS_HOLDER_LINE]->store(rwlock->line, true));
//fields[SYS_SEMAPHORE_WAITS_HOLDER_LINE]->set_notnull();
OK(fields[SYS_SEMAPHORE_WAITS_READERS]->store(rw_lock_get_reader_count(rwlock), true));
OK(fields[SYS_SEMAPHORE_WAITS_WAITERS_FLAG]->store(
rwlock->waiters,
true));
OK(fields[SYS_SEMAPHORE_WAITS_LOCK_WORD]->store(
rwlock->lock_word,
true));
OK(field_store_string(fields[SYS_SEMAPHORE_WAITS_LAST_WRITER_FILE], innobase_basename(rwlock->last_x_file_name)));
OK(fields[SYS_SEMAPHORE_WAITS_LAST_WRITER_LINE]->store(rwlock->last_x_line, true));
fields[SYS_SEMAPHORE_WAITS_LAST_WRITER_LINE]->set_notnull();
OK(fields[SYS_SEMAPHORE_WAITS_OS_WAIT_COUNT]->store(rwlock->count_os_wait, true));
}
}
}
OK(schema_table_store_record(thd, tables->table));


@@ -1235,9 +1235,6 @@ sync_latch_meta_init()
LATCH_ADD_MUTEX(RTR_PATH_MUTEX, SYNC_ANY_LATCH, rtr_path_mutex_key);
LATCH_ADD_MUTEX(RW_LOCK_LIST, SYNC_NO_ORDER_CHECK,
rw_lock_list_mutex_key);
LATCH_ADD_MUTEX(SRV_INNODB_MONITOR, SYNC_NO_ORDER_CHECK,
srv_innodb_monitor_mutex_key);
@@ -1354,10 +1351,6 @@ sync_check_init()
sync_latch_meta_init();
/* create the mutex to protect rw_lock list. */
mutex_create(LATCH_ID_RW_LOCK_LIST, &rw_lock_list_mutex);
ut_d(LatchDebug::init());
sync_array_init();
@@ -1371,10 +1364,7 @@ sync_check_close()
{
ut_d(LatchDebug::shutdown());
mutex_free(&rw_lock_list_mutex);
sync_array_close();
sync_latch_meta_destroy();
}

File diff suppressed because it is too large.


@@ -32,8 +32,8 @@ Mutex, the basic synchronization primitive
Created 9/5/1995 Heikki Tuuri
*******************************************************/
#include "sync0rw.h"
#include "sync0sync.h"
#include "ut0mutex.h"
#ifdef UNIV_PFS_MUTEX
mysql_pfs_key_t buf_pool_mutex_key;
@@ -64,7 +64,6 @@ mysql_pfs_key_t rw_lock_debug_mutex_key;
mysql_pfs_key_t rtr_active_mutex_key;
mysql_pfs_key_t rtr_match_mutex_key;
mysql_pfs_key_t rtr_path_mutex_key;
mysql_pfs_key_t rw_lock_list_mutex_key;
mysql_pfs_key_t srv_innodb_monitor_mutex_key;
mysql_pfs_key_t srv_misc_tmpfile_mutex_key;
mysql_pfs_key_t srv_monitor_file_mutex_key;
@@ -94,62 +93,6 @@ mysql_pfs_key_t trx_purge_latch_key;
/** For monitoring active mutexes */
MutexMonitor mutex_monitor;
/**
Prints wait info of the sync system.
@param file - where to print */
static
void
sync_print_wait_info(FILE* file)
{
fprintf(file,
"RW-shared spins " UINT64PF ", rounds " UINT64PF ","
" OS waits " UINT64PF "\n"
"RW-excl spins " UINT64PF ", rounds " UINT64PF ","
" OS waits " UINT64PF "\n"
"RW-sx spins " UINT64PF ", rounds " UINT64PF ","
" OS waits " UINT64PF "\n",
(ib_uint64_t) rw_lock_stats.rw_s_spin_wait_count,
(ib_uint64_t) rw_lock_stats.rw_s_spin_round_count,
(ib_uint64_t) rw_lock_stats.rw_s_os_wait_count,
(ib_uint64_t) rw_lock_stats.rw_x_spin_wait_count,
(ib_uint64_t) rw_lock_stats.rw_x_spin_round_count,
(ib_uint64_t) rw_lock_stats.rw_x_os_wait_count,
(ib_uint64_t) rw_lock_stats.rw_sx_spin_wait_count,
(ib_uint64_t) rw_lock_stats.rw_sx_spin_round_count,
(ib_uint64_t) rw_lock_stats.rw_sx_os_wait_count);
fprintf(file,
"Spin rounds per wait: %.2f RW-shared,"
" %.2f RW-excl, %.2f RW-sx\n",
rw_lock_stats.rw_s_spin_wait_count
? static_cast<double>(rw_lock_stats.rw_s_spin_round_count) /
static_cast<double>(rw_lock_stats.rw_s_spin_wait_count)
: static_cast<double>(rw_lock_stats.rw_s_spin_round_count),
rw_lock_stats.rw_x_spin_wait_count
? static_cast<double>(rw_lock_stats.rw_x_spin_round_count) /
static_cast<double>(rw_lock_stats.rw_x_spin_wait_count)
: static_cast<double>(rw_lock_stats.rw_x_spin_round_count),
rw_lock_stats.rw_sx_spin_wait_count
? static_cast<double>(rw_lock_stats.rw_sx_spin_round_count) /
static_cast<double>(rw_lock_stats.rw_sx_spin_wait_count)
: static_cast<double>(rw_lock_stats.rw_sx_spin_round_count));
}
/**
Prints info of the sync system.
@param file - where to print */
void
sync_print(FILE* file)
{
#ifdef UNIV_DEBUG
rw_lock_list_print_info(file);
#endif /* UNIV_DEBUG */
sync_array_print(file);
sync_print_wait_info(file);
}
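The deleted sync_print_wait_info() divided spin-round counts by wait counts, falling back to the raw round count when no waits had occurred. The guarded average it computed looks like this (illustrative helper, not part of the source):

```cpp
#include <cstdint>

// "Spin rounds per wait" as printed by the removed statistics code:
// guard against zero waits by reporting the raw round count instead
// of dividing by zero.
double rounds_per_wait(uint64_t rounds, uint64_t waits)
{
  return waits ? static_cast<double>(rounds) / static_cast<double>(waits)
               : static_cast<double>(rounds);
}
```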
/** Print the filename "basename" e.g., p = "/a/b/c/d/e.cc" -> p = "e.cc"
@param[in] filename Name from where to extract the basename
@return the basename */
@@ -226,20 +169,5 @@ MutexMonitor::reset()
/** Note: We don't add any latch meta-data after startup. Therefore
there is no need to use a mutex here. */
LatchMetaData::iterator end = latch_meta.end();
for (LatchMetaData::iterator it = latch_meta.begin(); it != end; ++it) {
if (*it != NULL) {
(*it)->get_counter()->reset();
}
}
mutex_enter(&rw_lock_list_mutex);
for (rw_lock_t& rw_lock : rw_lock_list) {
rw_lock.count_os_wait = 0;
}
mutex_exit(&rw_lock_list_mutex);
for (auto l : latch_meta) if (l) l->get_counter()->reset();
}
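The replacement loop in MutexMonitor::reset() is the C++11 range-for form of the removed iterator loop: walk the latch metadata, skip null entries, and reset each counter. A self-contained sketch of the same pattern (Counter and LatchMeta are illustrative stand-ins for latch_meta's types):

```cpp
#include <vector>

struct Counter {
  int value = 0;
  void reset() { value = 0; }
};

struct LatchMeta {
  Counter counter;
  Counter* get_counter() { return &counter; }
};

// Range-for over possibly-null metadata pointers, replacing the
// explicit begin()/end() iterator loop with the same null check.
void reset_all(std::vector<LatchMeta*>& latch_meta)
{
  for (auto l : latch_meta)
    if (l)
      l->get_counter()->reset();
}
```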


@@ -41,7 +41,6 @@ Created July 17, 2007 Vasil Dimov
#include "rem0rec.h"
#include "row0row.h"
#include "srv0srv.h"
#include "sync0rw.h"
#include "sync0sync.h"
#include "trx0sys.h"
#include "que0que.h"


@@ -578,7 +578,6 @@ buf_block_t* trx_undo_add_page(trx_undo_t* undo, mtr_t* mtr)
goto func_exit;
}
ut_ad(rw_lock_get_x_lock_count(&new_block->lock) == 1);
buf_block_dbg_add_level(new_block, SYNC_TRX_UNDO_PAGE);
undo->last_page_no = new_block->page.id().page_no();
@@ -629,7 +628,7 @@ trx_undo_free_page(
fseg_free_page(TRX_UNDO_SEG_HDR + TRX_UNDO_FSEG_HEADER
+ header_block->frame,
rseg->space, page_no, mtr);
buf_page_free(page_id_t(space, page_no), mtr, __FILE__, __LINE__);
buf_page_free(page_id_t(space, page_no), mtr);
const fil_addr_t last_addr = flst_get_last(
TRX_UNDO_SEG_HDR + TRX_UNDO_PAGE_LIST + header_block->frame);


@@ -8150,7 +8150,7 @@ int ha_rocksdb::read_row_from_secondary_key(uchar *const buf,
const Rdb_key_def &kd,
bool move_forward) {
int rc = 0;
uint pk_size;
uint pk_size= 0;
/* Get the key columns and primary key value */
const rocksdb::Slice &rkey = m_scan_it->key();


@@ -225,15 +225,6 @@ innodb_dict_lru_count_idle server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NU
innodb_dblwr_writes server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of doublewrite operations that have been performed (innodb_dblwr_writes)
innodb_dblwr_pages_written server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of pages that have been written for doublewrite operations (innodb_dblwr_pages_written)
innodb_page_size server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 value InnoDB page size in bytes (innodb_page_size)
innodb_rwlock_s_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to shared latch request
innodb_rwlock_x_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to exclusive latch request
innodb_rwlock_sx_spin_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin waits due to sx latch request
innodb_rwlock_s_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to shared latch request
innodb_rwlock_x_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to exclusive latch request
innodb_rwlock_sx_spin_rounds server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rwlock spin loop rounds due to sx latch request
innodb_rwlock_s_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to shared latch request
innodb_rwlock_x_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to exclusive latch request
innodb_rwlock_sx_os_waits server 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of OS waits due to sx latch request
dml_reads dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows read
dml_inserts dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows inserted
dml_deletes dml 0 NULL NULL NULL 0 NULL NULL NULL NULL NULL NULL NULL 0 status_counter Number of rows deleted