- An InnoDB information schema query can access the tablespace name after
it has been freed by a concurrent rename operation. To avoid this, InnoDB
should take an exclusive tablespace latch during the rename operation,
and the I_S query should take a shared tablespace latch before accessing
the name.
Every operation that is going to write to the redo log is supposed to
invoke log_free_check() before acquiring any latches. If there
is a risk of log buffer overrun, that call will trigger a log
checkpoint.
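A minimal sketch of that calling pattern, assuming a simplified
mini-transaction workflow (illustrative only, not actual InnoDB code):

    /* log_free_check() must run while no page latches are held,
       because it may wait for a log checkpoint. */
    void redo_writing_operation(mtr_t *mtr)
    {
      log_free_check();   /* may trigger a checkpoint */
      mtr->start();
      /* ... acquire page latches and modify pages; the redo log
         records are written when the mini-transaction commits ... */
      mtr->commit();
    }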
ibuf_merge_space(), ibuf_merge_in_background(),
ibuf_delete_for_discarded_space(): Invoke log_free_check()
when the current thread is not holding any page latches.
Unfortunately, in lower-level code called from ibuf_insert()
or ibuf_merge_or_delete_for_page(), some page latches may be
held and a call to log_free_check() could hang.
ibuf_set_bitmap_for_bulk_load(): Use the caller's mini-transaction.
The caller should have invoked log_free_check() while not holding
any page latches.
Something appears to be broken in the DBUG subsystem.
Let us remove frequent calls to it from the InnoDB internal SQL interpreter
that is used in the purge of transaction history.
The DBUG_PRINT in que_eval_sql() can remain for now, because those
operations are much less frequent.
Post-push fix. The transaction flag which indicates that it is necessary
to forbid gap lock inheritance after XA PREPARE could be inverted if
lock_release_on_prepare_try() is invoked several times. The fix is to
toggle it on lock_release_on_prepare() exit.
ha_innobase::check(): Do not enable READ UNCOMMITTED isolation level
for temporary tables, because it would report index count mismatch
for secondary indexes.
row_check_index(): Ignore EXTENDED for temporary tables, because
the tables are private to the current connection and there will be
no purge of committed transaction history.
The test innodb.innodb-wl5522-debug would occasionally hang
(especially when run with ./mtr --rr) due to a deadlock between
btr_store_big_rec_extern_fields() and dict_stats_analyze_index().
The two threads would acquire the clustered index root page latch and
the tablespace latch in the opposite order. The deadlock was possible
because dict_stats_analyze_index() was holding the index latch in
shared mode and an index root page latch, while waiting for the
tablespace latch. If a stronger dict_index_t::lock had been held
by dict_stats_analyze_index(), any operations that free or allocate
index pages would have been blocked.
In each caller of fseg_n_reserved_pages() except ibuf_init_at_db_start(),
which is a special case for ibuf.index at database startup, we must hold
an index latch that prevents concurrent allocation or freeing of index
pages.
Any operation that allocates or frees pages that belong to an index tree
must first acquire an index latch in Update or Exclusive mode, and while
holding that, acquire an index root page latch in Update or Exclusive
mode.
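A hedged model of that ordering, using std::shared_mutex objects as
stand-ins for dict_index_t::lock, the index root page latch and
fil_space_t::latch (illustrative only; InnoDB's latches also have an
Update mode that std::shared_mutex lacks):

    #include <shared_mutex>

    struct index_model
    {
      std::shared_mutex index_latch;  /* dict_index_t::lock */
      std::shared_mutex root_latch;   /* index root page latch */
      std::shared_mutex space_latch;  /* fil_space_t::latch */
    };

    void free_or_allocate_index_pages(index_model &i)
    {
      /* the index latch first ... */
      std::unique_lock il(i.index_latch);
      /* ... then the index root page latch ... */
      std::unique_lock rl(i.root_latch);
      /* ... and only then the tablespace latch */
      std::unique_lock sl(i.space_latch);
      /* allocate or free index pages here */
    }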
dict_index_t::clear(): Also acquire an index latch. Otherwise,
the test innodb.insert_into_empty could hang.
btr_get_size_and_reserved(): Assert that a strong enough index latch
is being held. Only acquire a shared fil_space_t::latch; we are only
reading, not modifying any data.
dict_stats_update_transient_for_index(),
dict_stats_analyze_index(): Acquire a strong enough index latch. Only
acquire a shared fil_space_t::latch.
These operations had followed the same order of acquiring latches in
every InnoDB version since the very beginning
(commit c533308a15).
The calls for acquiring the tablespace latch had previously been moved in
commit 87839258f8 and
commit 1e9c922fa7.
The hang was introduced in
commit 2e814d4702, which imported
mysql/mysql-server@ac74632293,
a change that failed to strengthen the locking requirements of the
function btr_get_size().
1. The merge aeccbbd926 has overwritten
lock0lock.cc, and the changes of MDEV-29622 and MDEV-29635 were
partially lost; this commit restores them.
2. innodb.deadlock_wait_thr_race test:
The following hang was found during testing.
There is a deadlock_report_before_lock_releasing sync point in
Deadlock::report(), which waits for the sel_cont signal while holding the
lock_sys_t lock. The signal must be issued after the "UPDATE t SET b = 100"
rollback, and that rollback is executing an undo record, which is blocked
on a dict_sys latch request. dict_sys is locked by the statistics update
thread (dict_stats_save()), and during that update the lock_sys lock is
requested, which can't be acquired because Deadlock::report() holds it.
We have to disable the statistics update to make the test stable.
But even if the statistics update is disabled, and a transaction with a
consistent snapshot is started at the very beginning of the test to
prevent purging, purge can still be invoked for system tables; it tries
to open a system table by id, which causes a dict_sys.freeze() call and
dict_sys latching. This, in combination with lock_sys::xx_lock(), causes
the same deadlock as described above. We need to disable purging globally
for the test as well.
All of the above also applies to the innodb.deadlock_wait_lock_race test.
Non-blocking log_write_upto (MDEV-24341) was only designed for
client connections. Fix it so that it is not triggered for any system THD.
Previously, an incomplete solution only excluded InnoDB purge THDs, but
not, for example, the slave.
The hang reported in the MDEV still remains somewhat of a mystery; it is
not immediately clear how exactly the condition variable can become
corrupted. But it is clear that it can be avoided.
To prevent ASAN heap-use-after-poison in the MDEV-16549 part of
./mtr --repeat=6 main.derived,
the initialization of Name_resolution_context was cleaned up.
- The background statistics thread should keep the table in the
statistics queue when the table is under a bulk insert operation.
dict_stats_analyze_index(): Set the maximum value for index_stats_t
if the table is in a bulk operation.
dict_stats_update(), dict_stats_update_transient_for_index(),
dict_stats_update_transient(): Return DB_SUCCESS_LOCKED_REC
if the table is under a bulk insert operation.
dict_stats_process_entry_from_recalc_pool(): Add the table back to the
recalc pool if the table is under a bulk insert operation, as sketched
below.
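A hedged sketch of that requeue logic; is_bulk_insert() and
add_back_to_recalc_pool() are hypothetical helpers standing in for
however InnoDB detects an ongoing bulk insert and re-queues the table
(illustrative only, not the actual dict_stats code):

    dberr_t dict_stats_update_sketch(dict_table_t *table)
    {
      if (is_bulk_insert(table))       /* hypothetical predicate */
        return DB_SUCCESS_LOCKED_REC;  /* statistics are skipped for now */
      /* ... compute and persist the statistics ... */
      return DB_SUCCESS;
    }

    void dict_stats_process_entry_sketch(dict_table_t *table)
    {
      if (dict_stats_update_sketch(table) == DB_SUCCESS_LOCKED_REC)
        add_back_to_recalc_pool(table); /* hypothetical helper; retry later */
    }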
The lock is created during page splitting, after moving records and
locks (lock_move_rec_list_(start|end)()) to the new page and inheriting
the locks to the supremum of the left page from the successor of the
infimum on the right page.
There is no need for such inheritance at the READ COMMITTED isolation
level with non-gap locks, so the fix is to add the corresponding
condition to the gap lock inheritance function.
One more fix is to forbid gap lock inheritance if XA was prepared. Use the
most significant bit of trx_t::n_ref to indicate that gap lock inheritance
is forbidden. This fix is based on
mysql/mysql-server@b063e52a83
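A hedged sketch of how the most significant bit of a reference counter
can double as such a flag (a standalone model, not the exact trx_t::n_ref
code):

    #include <atomic>
    #include <cstdint>

    /* The top bit marks "gap lock inheritance forbidden"; the remaining
       bits hold the reference count. */
    static constexpr uint32_t NO_INHERIT= 1U << 31;

    std::atomic<uint32_t> n_ref{0};

    void set_skip_gap_inheritance() { n_ref.fetch_or(NO_INHERIT); }
    bool skip_gap_inheritance() { return n_ref.load() & NO_INHERIT; }
    uint32_t ref_count() { return n_ref.load() & ~NO_INHERIT; }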
Suppose we have two transactions, trx 1 and trx 2.
trx 2 performs deadlock resolution from lock_wait(): it sets
victim->lock.was_chosen_as_deadlock_victim=true for trx 1, but has not
yet invoked lock_cancel_waiting_and_release().
trx 1 checks the flag in lock_trx_handle_wait(), and starts rollback
from row_mysql_handle_errors(). It can change trx->lock.wait_thr and
trx->state as it holds trx_t::mutex, but trx 2 has not yet requested it,
as lock_cancel_waiting_and_release() has not yet been called.
After that, trx 1 tries to release its locks in trx_t::rollback_low(),
invoking trx_t::rollback_finish(). lock_release() is blocked trying to
acquire lock_sys.rd_lock(SRW_LOCK_CALL) in lock_release_try(), as
lock_sys is held by trx 2, because deadlock resolution works under
lock_sys.wr_lock(SRW_LOCK_CALL); see Deadlock::report() for details.
trx 2 executes lock_cancel_waiting_and_release() for the deadlock victim,
i.e. for trx 1. lock_cancel_waiting_and_release() contains some
trx->lock.wait_thr and trx->state assertions, which will fail because
trx 1 has changed them during rollback execution.
So, according to the above scenario, it is legal to have
trx->lock.wait_thr==0 and trx->state!=TRX_STATE_ACTIVE in
lock_cancel_waiting_and_release() if it was invoked from
Deadlock::report(), and the fix is simply to change the assertion
conditions.
There is also a lock_wait() cleanup around trx->error_state.
If trx->error_state can be changed by a thread other than the owning one,
it must be protected with lock_sys.wait_mutex, as lock_wait() uses
trx->lock.cond along with that mutex.
Also, if trx->error_state was changed before the lock_sys.wait_mutex
acquisition, it could be reset by the following code, which is wrong.
We also need to check trx->error_state before entering the waiting
loop; otherwise it can happen that trx->error_state was set before the
lock_sys.wait_mutex acquisition, but the thread keeps waiting on
trx->lock.cond.
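A hedged sketch of the check-before-wait pattern implied here, using
generic standard-library stand-ins for lock_sys.wait_mutex,
trx->lock.cond and trx->error_state (not the actual lock_wait() code):

    #include <condition_variable>
    #include <mutex>

    enum dberr { DB_SUCCESS, DB_DEADLOCK, DB_LOCK_WAIT };

    std::mutex wait_mutex;             /* lock_sys.wait_mutex stand-in */
    std::condition_variable cond;      /* trx->lock.cond stand-in */
    dberr error_state= DB_LOCK_WAIT;   /* trx->error_state stand-in */

    dberr wait_sketch()
    {
      std::unique_lock<std::mutex> lk(wait_mutex);
      /* Check the state before waiting: it may already have been set by
         another thread before we acquired wait_mutex. Any writer must
         also hold wait_mutex and signal cond. */
      while (error_state == DB_LOCK_WAIT)
        cond.wait(lk);
      return error_state;
    }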
Returning DB_SUCCESS unconditionally if !trx->lock.wait_lock in
lock_trx_handle_wait() is wrong, because even if
trx->lock.was_chosen_as_deadlock_victim was not set before the first
check in lock_trx_handle_wait(), it can be set after the check, and
trx->lock.wait_lock can be reset by another thread from
lock_reset_lock_and_trx_wait() if the transaction was chosen as a
deadlock victim. In this case lock_trx_handle_wait() would return
DB_SUCCESS even though the transaction was marked as a deadlock victim,
and execution would continue instead of rolling back.
The fix is to check trx->lock.was_chosen_as_deadlock_victim once more if
trx->lock.wait_lock has been reset, as trx->lock.wait_lock can be reset
only after trx->lock.was_chosen_as_deadlock_victim was set if the
transaction was chosen as a deadlock victim.
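A hedged sketch of the corrected check (heavily simplified; the real
lock_trx_handle_wait() also handles locking and other wait states):

    dberr_t lock_trx_handle_wait_sketch(trx_t *trx)
    {
      if (trx->lock.was_chosen_as_deadlock_victim)
        return DB_DEADLOCK;
      if (!trx->lock.wait_lock)
        /* wait_lock may have been cleared by
           lock_reset_lock_and_trx_wait() after the check above, so
           re-check the victim flag instead of returning DB_SUCCESS
           unconditionally. */
        return trx->lock.was_chosen_as_deadlock_victim
               ? DB_DEADLOCK : DB_SUCCESS;
      /* ... otherwise resolve the wait as before ... */
      return DB_LOCK_WAIT;
    }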
Until now, the attribute EXTENDED of CHECK TABLE was ignored by InnoDB,
and InnoDB only counted the records in each index according
to the current read view. Unless the attribute QUICK was specified, the
function btr_validate_index() would be invoked to validate the B-tree
structure (the sibling and child links between index pages).
The EXTENDED check will not only count all index records according to the
current read view, but also ensure that any delete-marked records in the
clustered index are waiting for the purge of history, and that all
secondary index records point to a version of the clustered index record
that is waiting for the purge of history. In other words, no index may
contain orphan records. Normal MVCC reads and the non-EXTENDED version
of CHECK TABLE would ignore these orphans.
Unpurged records merely result in warnings (at most one per index),
not errors, and no indexes will be flagged as corrupted due to such
garbage. It will remain possible to SELECT data from such indexes or
tables (which will skip such records) or to rebuild the table to
reclaim some space.
We introduce purge_sys.end_view that will be (almost) a copy of
purge_sys.view at the end of a batch of purging committed transaction
history. It is not an exact copy, because if the size of a purge batch
is limited by innodb_purge_batch_size, some records that
purge_sys.view would allow to be purged will be left over for
subsequent batches.
The purge_sys.view is relevant in the purge of committed transaction
history, to determine if records are safe to remove. The new
purge_sys.end_view is relevant in MVCC operations and in
CHECK TABLE ... EXTENDED. It tells which undo log records are
safe to access (have not been discarded at the end of a purge batch).
purge_sys.clone_oldest_view<true>(): In trx_lists_init_at_db_start(),
clone the oldest read view similar to purge_sys_t::clone_end_view()
so that CHECK TABLE ... EXTENDED will not report bogus failures between
InnoDB restart and the completed purge of committed transaction history.
purge_sys_t::is_purgeable(): Replaces purge_sys_t::changes_visible()
in the case that purge_sys.latch will not be held by the caller.
Among other things, this guards access to BLOBs. It is not safe to
dereference any BLOBs of a delete-marked purgeable record, because
they may have already been freed.
purge_sys_t::view_guard::view(): Return a reference to purge_sys.view
that will be protected by purge_sys.latch, held by purge_sys_t::view_guard.
purge_sys_t::end_view_guard::view(): Return a reference to
purge_sys.end_view while it is protected by purge_sys.end_latch.
Whenever a thread needs to retrieve an older version of a clustered
index record, it will hold a page latch on the clustered index page
and potentially also on a secondary index page that points to the
clustered index page. If these pages contain purgeable records that
would be accessed by a currently running purge batch, the progress of
the purge batch would be blocked by the page latches. Hence, it is
safe to make a copy of purge_sys.end_view while holding an index page
latch, and consult the copy of the view to determine whether a record
should already have been purged.
btr_validate_index(): Remove a redundant check.
row_check_index_match(): Check if a secondary index record and a
version of a clustered index record match each other.
row_check_index(): Replaces row_scan_index_for_mysql().
Count the records in each index directly, duplicating the relevant
logic from row_search_mvcc(). Initialize check_table_extended_view
for CHECK ... EXTENDED while holding an index leaf page latch.
If we encounter an orphan record, the copy of purge_sys.end_view that
we make is safe for visibility checks, and trx_undo_get_undo_rec() will
check for the safety to access each undo log record. Should that check
fail, we should return DB_MISSING_HISTORY to report a corrupted index.
The EXTENDED check tries to match each secondary index record with
every available clustered index record version, by duplicating the logic
of row_vers_build_for_consistent_read() and invoking
trx_undo_prev_version_build() directly.
Before invoking row_check_index_match() on delete-marked clustered index
record versions, we will consult purge_sys.is_purgeable() in order to
avoid accessing freed BLOBs.
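In outline, a hedged sketch of that matching loop; build_prev_version(),
is_delete_marked(), is_purgeable_sketch() and versions_match() are
hypothetical wrappers around trx_undo_prev_version_build(),
rec_get_deleted_flag(), purge_sys.is_purgeable() and
row_check_index_match() (illustrative only):

    bool sec_rec_has_matching_version(const rec_t *sec_rec,
                                      const rec_t *clust_rec)
    {
      for (const rec_t *version= clust_rec; version;
           version= build_prev_version(version))
      {
        if (is_delete_marked(version) && is_purgeable_sketch(version))
          continue; /* its BLOBs may already have been freed */
        if (versions_match(sec_rec, version))
          return true;  /* the secondary index record is not an orphan */
      }
      return false;     /* possibly an orphan; warn, subject to end_view */
    }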
We will always check that the DB_TRX_ID or PAGE_MAX_TRX_ID does not
exceed the global maximum. Orphan secondary index records will be
flagged only if everything up to PAGE_MAX_TRX_ID has been purged.
We also warn about clustered index records whose nonzero DB_TRX_ID
should have been reset in purge or rollback.
trx_set_rw_mode(): Move an assertion from ReadView::set_creator_trx_id().
trx_undo_prev_version_build(): Remove two debug-only parameters,
and return an error code instead of a Boolean.
trx_undo_get_undo_rec(): Return a pointer to the undo log record,
or nullptr if one cannot be retrieved. Instead of consulting the
purge_sys.view, consult the purge_sys.end_view to determine which
records can be accessed.
trx_undo_get_rec_if_purgeable(): A variant of trx_undo_get_undo_rec()
that will consult purge_sys.view instead of purge_sys.end_view.
TRX_UNDO_CHECK_PURGEABILITY: A new parameter to
trx_undo_prev_version_build(), passed by row_vers_old_has_index_entry()
so that purge_sys.view instead of purge_sys.end_view will be consulted
to determine whether a secondary index record may be safely purged.
row_upd_changes_disowned_external(): Remove. This should be more
expensive than briefly latching purge_sys in trx_undo_prev_version_build()
(which may make use of transactional memory).
row_sel_reset_old_vers_heap(): New function, split from
row_sel_build_prev_vers_for_mysql().
row_sel_build_prev_vers_for_mysql(): Reorder some parameters
to simplify the call to row_sel_reset_old_vers_heap().
row_search_for_mysql(): Replaced with direct calls to row_search_mvcc().
sel_node_get_nth_plan(): Define inline in row0sel.h
open_step(): Define at the call site, in simplified form.
sel_node_reset_cursor(): Merged with the only caller open_step().
---
ReadViewBase::check_trx_id_sanity(): Remove.
Let us handle "future" DB_TRX_ID in a more meaningful way:
row_sel_clust_sees(): Return DB_SUCCESS if the record is visible,
DB_SUCCESS_LOCKED_REC if it is invisible, and DB_CORRUPTION if
the DB_TRX_ID is in the future.
row_undo_mod_must_purge(), row_undo_mod_clust(): Silently ignore
corrupted DB_TRX_ID. We are in ROLLBACK, and we should have noticed
that corruption when we were about to modify the record in the first
place (leading us to refuse the operation).
row_vers_build_for_consistent_read(): Return DB_CORRUPTION if
DB_TRX_ID is in the future.
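A hedged sketch of the return-code mapping described for
row_sel_clust_sees() (argument lists simplified; the real function reads
DB_TRX_ID from the record and takes more parameters):

    dberr_t row_sel_clust_sees_sketch(trx_id_t trx_id, const ReadView &view)
    {
      if (trx_id >= trx_sys.get_max_trx_id()) /* DB_TRX_ID in the future */
        return DB_CORRUPTION;
      return view.changes_visible(trx_id)
             ? DB_SUCCESS : DB_SUCCESS_LOCKED_REC;
    }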
Tested by: Matthias Leich
Reviewed by: Vladislav Lesin
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario, INSERT chose to check whether delete-unmarking is
possible for a just-deleted record. To build an update vector, it needed
to calculate the vcols as well. Since this INSERT was not IGNORE-flagged,
the recalculation failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
As of now, InnoDB does not store trx_id for each record in a secondary
index. The idea behind this is the following: store only a per-page
max_trx_id, and delete-mark the records when they are deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When the page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not
delete-marked, then this page is safe to read as is. Otherwise, access
to the clustered index may be needed. See the page_get_max_trx_id call
in row_search_mvcc, and the corresponding
switch (row_search_idx_cond_check(...)) below.
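A hedged sketch of that decision (simplified from row_search_mvcc();
ICP and locking reads are omitted):

    bool can_read_sec_rec_as_is(const page_t *page, const rec_t *rec,
                                ulint comp, trx_id_t up_limit_id)
    {
      /* up_limit_id corresponds to trx->read_view->m_up_limit_id */
      return page_get_max_trx_id(page) < up_limit_id
             && !rec_get_deleted_flag(rec, comp);
      /* if false, a clustered index lookup is needed */
    }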
Virtual columns are required to be updated in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
This was basically a description of why virtual column computation can
normally happen during a SELECT and, more generally, during a vcol index
access.
Sometimes stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment of
SELECT execution, forcing virtual column recomputation. If the result
was something that normally produces a warning, like division by zero,
then the warning could be emitted in a racy manner.
The solution is to suppress the warnings when a column is computed
for the described purpose.
The ignore_wrnings argument is added to innobase_get_computed_value().
Currently, it is only true for a call from
row_sel_sec_rec_is_for_clust_rec.
After 6b685ea7b0, one can no longer violate the locking protocol
by invoking thd_get_ha_data() on some other thread without
protecting that with a mutex.
row_purge_step(): Process all available purge_node_t::undo_recs.
row_purge_end(): Replaced with purge_node_t::end().
TODO: Do we need a "query graph node" at all for purge?
purge_sys_t::low_limit_no(): Adjust a comment. Actually, this
is protected after all.
TrxUndoRsegsIterator::set_next(): Reduce the critical section
of purge_sys.rseg->latch. Some purge_sys fields are accessed
only by the purge coordinator task.
ReadViewBase::snapshot(): In case m_low_limit_no==m_low_limit_id
and m_ids would include everything between that and m_up_limit_id,
set all fields to m_up_limit_id and clear m_ids, to speed up
changes_visible() and append().
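A hedged sketch of that shortcut (the exact condition used in
ReadViewBase::snapshot() may differ):

    /* If there is no gap between the "no" and "id" limits and m_ids
       covers every id in [m_up_limit_id, m_low_limit_id), the view
       degenerates to a single boundary. */
    if (m_low_limit_no == m_low_limit_id
        && m_low_limit_id == m_up_limit_id + m_ids.size())
    {
      m_low_limit_no= m_low_limit_id= m_up_limit_id;
      m_ids.clear();
    }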
rw_trx_hash_t::debug_iterator(): Add an assertion.
btr_page_reorganize_low(): Do not invoke lock_move_reorganize_page()
on a dummy index during change buffer merge. The ibuf.index page
latch that we are holding may block a DDL operation that is waiting
in ibuf_delete_for_discarded_space() while holding exclusive
lock_sys.latch. ibuf_insert_low() would refuse to buffer a change
if any locks exist for the index page.
btr_search_guess_on_hash() would acquire an index page latch only if it
is invoked with ahi_latch=NULL. If it is invoked from
row_sel_try_search_shortcut_for_mysql() with ahi_latch!=NULL, the page
will not be latched, and row_search_mvcc() will get a pointer to a
record that can be changed by some other transaction before the record
is stored in the result buffer by the row_sel_store_mysql_rec() call.
ahi_latch argument of btr_cur_search_to_nth_level_func() and
btr_pcur_open_with_no_init_func() is used only for
row_sel_try_search_shortcut_for_mysql().
btr_cur_search_to_nth_level_func(..., ahi_latch !=0, ...) is invoked
only from btr_pcur_open_with_no_init_func(..., ahi_latch !=0, ...),
which, in turn, is invoked only from
row_sel_try_search_shortcut_for_mysql().
I suppose that the separate case with ahi_latch!=0 was intentionally
implemented to protect the row_sel_store_mysql_rec() call in
row_search_mvcc() just after the row_sel_try_search_shortcut_for_mysql()
call. After the ahi_latch was moved from row_search_mvcc() to
row_sel_try_search_shortcut_for_mysql(), there is no need for it at all
if btr_search_guess_on_hash() latches the page unconditionally. And if
btr_search_guess_on_hash() latched the page, any access to the record in
row_sel_try_search_shortcut_for_mysql() after the
btr_pcur_open_with_no_init() call will be protected by the page latch.
The fix is to remove the ahi_latch argument from
btr_pcur_open_with_no_init_func(), btr_cur_search_to_nth_level_func()
and btr_search_guess_on_hash().
There will be no test, as to test this we would need to freeze some
SELECT execution at the point between the
row_sel_try_search_shortcut_for_mysql() and row_sel_store_mysql_rec()
calls in row_search_mvcc(), and to change the record in some other
transaction so that row_sel_store_mysql_rec() stores the changed record
in the result buffer. But we can't do this with the fix, as the page
will be latched by the btr_search_guess_on_hash() call.
dict_load_foreigns(): Remove the constant parameter uncommitted=false.
The parameter only had to be added to dict_load_foreign().
Spotted by Alexey Midenkov
row_purge_get_partial(): Replaces trx_undo_rec_get_partial_row().
Also copy the purge_node_t::ref to the purge_node_t::row.
In this way, the clustered index key fields will always be
available, even if thanks to
commit d384ead0f0 (MDEV-14799)
they would no longer be repeated in the remaining part of the
undo log record.
trx->mysql_thd can be zeroed out between the thd_get_thread_id() and
thd_query_safe() calls in fill_trx_row(). trx_disconnect_prepared()
zeroes out trx->mysql_thd, and this can cause a null pointer dereference
in fill_trx_row().
fill_trx_row() is invoked from fetch_data_into_cache() under
trx_sys.mutex.
The bug fix is to reset trx_t::mysql_thd in trx_disconnect_prepared()
under the trx_sys.mutex lock too.
An MTR test case can't be created for the fix, as we would need to wait
for the trx_t::mysql_thd reset in fill_trx_row() after trx_t::mysql_thd
was checked for null while trx_sys.mutex is held. But trx_t::mysql_thd
must be reset in trx_disconnect_prepared() under trx_sys.mutex, so there
would be a deadlock.
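A hedged, self-contained model of the fix: both the reader
(fill_trx_row()) and the writer (trx_disconnect_prepared()) touch
mysql_thd only under the same mutex, which plays the role of
trx_sys.mutex here (not the actual InnoDB code):

    #include <mutex>

    struct trx_model { void *mysql_thd= nullptr; };
    std::mutex trx_sys_mutex;             /* stands in for trx_sys.mutex */

    void disconnect(trx_model &trx)       /* trx_disconnect_prepared() role */
    {
      std::lock_guard<std::mutex> g(trx_sys_mutex);
      trx.mysql_thd= nullptr;             /* serialized with the reader */
    }

    bool thd_is_set(const trx_model &trx) /* fill_trx_row() role */
    {
      std::lock_guard<std::mutex> g(trx_sys_mutex);
      return trx.mysql_thd != nullptr;    /* cannot change mid-read */
    }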