This race condition was introduced by
commit ad6171b91c (MDEV-22456).
In the observed case, two threads were executing
btr_search_drop_page_hash_index() on the same block,
to free a stale entry that was attached to a dropped index.
Both threads were holding an S latch on the block.
We must prevent the double-free of block->index by holding
block->lock in exclusive mode.
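As a self-contained illustration of the fix (an analogy built on
standard C++ primitives, not the actual InnoDB code): block->index may
only be detached and freed while the block latch is held exclusively,
so two threads that both observed a stale pointer under the S latch
can no longer both free it.

  #include <mutex>
  #include <shared_mutex>

  struct Index {};

  struct Block {
    std::shared_mutex lock;   // stands in for block->lock
    Index* index = nullptr;   // stands in for block->index
  };

  void drop_page_hash_index(Block& block) {
    std::unique_lock<std::shared_mutex> x(block.lock);  // X latch, not S
    Index* victim = block.index;
    if (!victim) return;      // another thread already dropped it
    block.index = nullptr;    // detach before freeing
    delete victim;            // exactly one thread reaches this line
  }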
btr_search_guess_on_hash(): Do not invoke
btr_search_drop_page_hash_index(block) to get rid of
stale entries, because we are not necessarily holding
an exclusive block->lock here.
buf_defer_drop_ahi(): New function, to safely drop stale
entries in buf_page_mtr_lock(). We will skip the call to
btr_search_drop_page_hash_index(block) when only buffer-fixing
is requested (no page latch), because in that case we should
not be accessing the adaptive hash index, and we might get
a deadlock if we acquired the page latch.
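A minimal sketch of the deferred-drop decision, reusing the Block and
Index types from the sketch above (the latch modes and the try-lock
fallback are assumptions based on the description, not the exact
InnoDB logic):

  // When only buffer-fixing is requested (no page latch), we must not
  // touch the adaptive hash index, and taking the page latch here could
  // deadlock: skip, and let a later properly latched access do the drop.
  enum class Latch { NONE /* bufferfix only */, SHARED, EXCLUSIVE };

  void defer_drop_ahi(Block& block, Latch requested) {
    if (requested == Latch::NONE)
      return;                         // skip the drop entirely
    if (requested == Latch::EXCLUSIVE) {
      drop_page_hash_index(block);    // takes the X latch itself
      return;
    }
    // SHARED: an S latch cannot be upgraded; opportunistically try the
    // X latch, and simply defer again on contention.
    if (block.lock.try_lock()) {
      Index* victim = block.index;
      block.index = nullptr;
      delete victim;
      block.lock.unlock();
    }
  }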
We are supposed to commit and restart the mini-transaction
between records. There is no point in storing and restoring the
persistent cursor position otherwise.
If buf_page_optimistic_get() is patched to always fail, the
debug build would fail to start up due to trying to re-acquire
an already S-latched block.
This bug (which should not have any visible impact on users, because
the code is only executed during startup, while no other threads
are accessing B-trees or causing pages to be evicted from the
buffer pool) was caught as part of a debugging effort for
something else.
The debugging approach was: Make buf_page_optimistic_get()
always return FALSE, and add ut_a(block->lock.lock_word == X_LOCK_DECR)
to both buf_LRU_get_free_only() and buf_LRU_block_free_non_file_page().
This would catch misuse of the buffer pool. If it were not for
buf_page_optimistic_get(), no buf_block_t::lock of any BUF_BLOCK_NOT_USED
block would ever be acquired.
Add ATTRIBUTE_NOINLINE to the
ib::logger member functions, and add UNIV_UNLIKELY hints to the callers.
Also, remove some crash reporting output. If needed, the
information will be available using debugging tools.
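The pattern, as a self-contained sketch (ATTRIBUTE_NOINLINE and
UNIV_UNLIKELY are the InnoDB macro names; here they are expanded to
the GCC builtins that they wrap):

  #include <iostream>

  #define ATTRIBUTE_NOINLINE __attribute__((noinline))
  #define UNIV_UNLIKELY(x) __builtin_expect(!!(x), 0)

  struct logger {                      // stands in for ib::error etc.
    ATTRIBUTE_NOINLINE ~logger() { std::cerr << '\n'; }
    ATTRIBUTE_NOINLINE logger& operator<<(const char* s) {
      std::cerr << s;
      return *this;
    }
  };

  int read_page(bool corrupted) {
    if (UNIV_UNLIKELY(corrupted)) {    // cold path: logging stays out of line
      logger() << "page checksum mismatch";
      return -1;
    }
    return 0;                          // hot path remains compact
  }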
Furthermore, remove some fts_enable_diag_print output that included
indexed words in raw form. The code seemed to assume that words are
NUL-terminated byte strings. It is not clear whether a NUL terminator
is always guaranteed to be present. Also, UCS2 or UTF-16 strings would
typically contain many NUL bytes.
Problem:
========
During buffer pool resizing, InnoDB recreates the dictionary hash
tables. The dictionary hash tables reuse the heap of the AHI hash
tables, which leads to memory corruption.
Fix:
====
- While disabling AHI, free the heap and the AHI hash tables. Recreate the
AHI hash tables and assign a new heap when AHI is enabled.
- btr_blob_free() accesses an invalid page if the page was reallocated during
buffer pool resizing. So btr_blob_free() should get the page from
buf_pool instead of using the existing block.
- btr_search_enabled and block->index should be checked after
acquiring the btr_search_sys latch.
- Moved the buffer_pool_scan debug sync point earlier, before accessing the
btr_search_sys latches, to avoid the hang of the truncate_purge_debug
test case.
- srv_printf_innodb_monitor() should acquire the btr_search_sys latches
before accessing the AHI hash tables.
Problem:
========
While evicting an uncompressed page from the buffer pool, InnoDB writes
the checksum for the compressed page in buf_LRU_free_page().
So while flushing the compressed page, checksum validation fails
when the innodb_checksum_algorithm variable is changed to strict_none.
Solution:
=========
- Calculate the checksum only during the flushing of the page. Removed the
checksum write from buf_LRU_free_page().
namespace intrusive: removed.
The class was split into two: ilist<T> and sized_ilist<T>, which has a size field.
ilist<T> no longer NULLifies pointers, which brings slightly better performance.
As a consequence, the boolean members fil_space_t::is_in_unflushed_spaces and
fil_space_t::is_in_rotation_list are now needed.
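A self-contained sketch of the idea (the real classes live in
include/ilist.h; this only illustrates an intrusive node, the
non-NULLifying removal, and the size-tracking subclass):

  #include <cstddef>

  struct ilist_node {
    ilist_node* next;
    ilist_node* prev;
  };

  template <typename T>               // T must derive from ilist_node
  struct ilist {
    ilist_node sentinel{&sentinel, &sentinel};
    void push_back(T& e) {
      e.prev = sentinel.prev;
      e.next = &sentinel;
      sentinel.prev->next = &e;
      sentinel.prev = &e;
    }
    void remove(T& e) {
      e.prev->next = e.next;
      e.next->prev = e.prev;
      // e.next and e.prev are intentionally left dangling: skipping the
      // two NULLing stores is the small speedup, but membership can no
      // longer be tested from the node itself; hence the explicit
      // boolean flags in fil_space_t.
    }
  };

  template <typename T>
  struct sized_ilist : ilist<T> {     // additionally tracks the count
    std::size_t count = 0;
    void push_back(T& e) { ilist<T>::push_back(e); ++count; }
    void remove(T& e) { ilist<T>::remove(e); --count; }
    std::size_t size() const { return count; }
  };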
There was a race condition where the connection of the
victim of a KILL statement is disconnected while the
KILL statement is executing.
As a side effect of this fix, we will make XA PREPARE
transactions immune to KILL statements.
Starting with MariaDB 10.2, we have a pool of trx_t objects.
trx_free() would only free memory to the pool. We poison the
contents of freed objects in the pool in order to catch misuse.
trx_free(): Unpoison also trx->mysql_thd and trx->state.
This is to counter the poisoning of *trx in trx_pools->mem_free().
Unpoison only on AddressSanitizer or Valgrind, but not on MemorySanitizer.
Pool: Unpoison allocated objects only on AddressSanitizer or
Valgrind, but not on MemorySanitizer.
innobase_kill_query(): Properly protect trx, also acquiring
trx_sys_t::mutex and checking trx_t::mysql_thd and trx_t::state.
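A hedged sketch of the conditional unpoisoning (the MEM_UNPOISON macro
and the HAVE_VALGRIND guard are illustrative assumptions; InnoDB has
its own memory-validity wrappers):

  #if defined(__SANITIZE_ADDRESS__)     /* GCC-style ASan detection */
  # include <sanitizer/asan_interface.h>
  # define MEM_UNPOISON(p, n) __asan_unpoison_memory_region((p), (n))
  #elif defined(HAVE_VALGRIND)          /* assumed build-system macro */
  # include <valgrind/memcheck.h>
  # define MEM_UNPOISON(p, n) VALGRIND_MAKE_MEM_DEFINED((p), (n))
  #else
  # define MEM_UNPOISON(p, n) ((void) 0)  /* MSan and plain builds: no-op */
  #endif

  /* In trx_free(), after poisoning the whole object for the pool,
     re-expose only the fields that innobase_kill_query() may read:
       MEM_UNPOISON(&trx->mysql_thd, sizeof trx->mysql_thd);
       MEM_UNPOISON(&trx->state, sizeof trx->state);
     On MemorySanitizer this stays a no-op: marking the bytes as
     defined would hide genuine uninitialized-read bugs. */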
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
The major change is to replace all instances of
table->m_needs_reopen= true;
with
table->mark_table_for_reopen();
The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list, and that when we
reopen the tables, we first close all of them before reopening
and locking them.
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
ins_node_create_entry_list(): Create dummy empty tuples for
corrupted or incomplete indexes, to avoid dereferencing a NULL
dict_field_t::col pointer in row_build_index_entry_low().
This issue was found by a crash in the test gcol.innodb_virtual_basic
when merging the fix to 10.5.
prepare_inplace_alter_table_dict(): In the error handling, relax
a debug assertion for the case that we did not execute
dict_stats_wait_bg_to_stop_using_table() yet.
For no good reason, innodb_encryption_threads was limited to
4,294,967,295. Unsurprisingly, the server would crash if such an
insane value was specified. Let us limit the maximum to 255.
The encryption threads are not doing much useful work.
They are basically only dirtying pages by performing
dummy writes via the redo log. The encryption key rotation
or the in-place addition or removal of encryption
will take place in the page cleaner.
In a quick test on a 20-core CPU (40 threads in total),
the sweet spot on an otherwise idle server seemed to be
innodb_encryption_threads=16 for the test
encryption.encrypt_and_grep. The new limit 255 should be
more than enough for even bigger servers.
This is a regression due to MDEV-16376
commit 8dc70c862b.
To make dict_index_t::detach_columns() idempotent,
we cleared dict_index_t::n_fields. But this could
cause trouble with purge after a secondary index
creation failed (not even involving virtual columns).
A better way is to clear the dict_field_t::col pointers
that point to virtual columns that are being freed
due to aborting index creation on an index that depends
on a virtual column.
Note: the v_cols[] of an existing dict_table_t object will
never be modified. If any virtual columns are added or removed,
ha_innobase::commit_inplace_alter_table() would invoke
dict_table_remove_from_cache() and reload the table to dict_sys.
Index creation is a special case where the dict_index_t points
to virtual columns that do not yet exist in dict_table_t.
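A self-contained sketch of the approach (Column and Field are
simplified stand-ins for dict_col_t and dict_field_t):

  struct Column { bool is_virtual; bool being_freed; };
  struct Field  { Column* col; };

  // Clear only the pointers into virtual columns that are about to be
  // freed; n_fields stays intact, so purge can still process the
  // remaining fields.
  void detach_freed_vcols(Field* fields, unsigned n_fields) {
    for (unsigned i = 0; i < n_fields; i++) {
      Column* c = fields[i].col;
      if (c && c->is_virtual && c->being_freed)
        fields[i].col = nullptr;
    }
  }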
The fix is to set a flag in ib::error::~error() when an error
is logged, and to check that flag in mariabackup.
ib::error::error() is replaced with ib::warn::warn() in
AIO::linux_create_io_ctx() for two reasons:
1) if we leave it as is, then mariabackup MTR tests will fail with the --mem
option, because Linux AIO cannot be used on tmpfs;
2) when Linux AIO cannot be initialized, InnoDB falls back to simulated
AIO, so such a situation is not a fatal error and should be treated as a warning.
lock_table_locks_lookup(): Relax the assertion.
Locks must not exist while online secondary index creation is
in progress. However, if CREATE UNIQUE INDEX has not been committed
yet, but the index creation has been completed, concurrent DML
transactions may acquire record locks on the index. Furthermore,
such concurrent DML may cause duplicate key violation, causing
the DDL operation to be rolled back. After that, the online_status
may be ONLINE_INDEX_ABORTED or ONLINE_INDEX_ABORTED_DROPPED.
So, the debug assertion may only forbid the state ONLINE_INDEX_CREATION.
In commit ad6171b91c (MDEV-22456)
we removed the acquisition of the adaptive hash index latch
from the caller of btr_search_update_hash_ref().
The tests innodb.innodb_buffer_pool_resize_with_chunks
and innodb.innodb_buffer_pool_resize
would occasionally fail starting with 10.3,
due to MDEV-12288 causing more purge activity during the test.
btr_search_update_hash_ref(): After acquiring the adaptive hash index
latch, check that the adaptive hash index is still enabled on the page.
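This is the classic re-validation pattern, sketched here with standard
primitives (the names are illustrative):

  #include <mutex>

  struct Page { bool ahi_enabled; };
  std::mutex ahi_latch;                // stands in for the AHI latch

  void update_hash_ref(Page& page) {
    // the caller observed the AHI state before taking the latch...
    std::lock_guard<std::mutex> guard(ahi_latch);
    if (!page.ahi_enabled)
      return;  // ...so it must be re-checked now that the latch is held
    /* update the hash reference here */
  }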
The rw_lock_stats were incorrectly updated.
While global statistics have limited usefulness, we cannot
remove them from a GA version. This contribution slightly
improves performance in write workloads.
If the InnoDB buffer pool contains many pages for a table or index
that is being dropped or rebuilt, and if many of such pages are
pointed to by the adaptive hash index, dropping the adaptive hash index
may consume a lot of time.
The time-consuming operation of dropping the adaptive hash index entries
is being executed while the InnoDB data dictionary cache dict_sys is
exclusively locked.
It is not actually necessary to drop all adaptive hash index entries
at the time a table or index is being dropped or rebuilt. We can let
the LRU replacement policy of the buffer pool take care of this gradually.
For this to work, we must detach the dict_table_t and dict_index_t
objects from the main dict_sys cache, and once the last
adaptive hash index entry for the detached table is removed
(when the garbage page is evicted from the buffer pool) we can free
the dict_table_t and dict_index_t object.
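A minimal sketch of the reference-counted lazy free (the counter
loosely corresponds to dict_index_t::n_ahi_pages() described below;
the real code also maintains dict_table_t::freed_indexes):

  #include <atomic>

  struct LazyIndex {                   // stand-in for a detached index
    std::atomic<unsigned> n_ahi_pages{0};
  };

  // Called when a garbage page holding AHI entries for 'index' is
  // evicted from the buffer pool.
  void release_ahi_page(LazyIndex* index) {
    if (index->n_ahi_pages.fetch_sub(1, std::memory_order_acq_rel) == 1)
      delete index;                    // last reference: free lazily
  }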
Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE
skip both the buffer pool eviction and the drop of the adaptive hash index.
We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP TABLE.
We can remove the eviction from DROP TABLE. We must retain the eviction
in the ALTER TABLE...IMPORT TABLESPACE code path, so that in case the
discarded table is being re-imported with the same tablespace identifier,
the fresh data from the imported tablespace will replace any stale pages
in the buffer pool.
rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can
no longer be interrupted inside InnoDB.
fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(),
fseg_free_page_low(), fseg_free_extent(): Remove the parameter
that specifies whether the adaptive hash index should be dropped.
btr_search_lazy_free(): Lazily free an index when the last
reference to it is dropped from the adaptive hash index.
buf_pool_clear_hash_index(): Declare static, and move to the
same compilation unit with the bulk of the adaptive hash index
code.
dict_index_t::clone(), dict_index_t::clone_if_needed():
Clone an index that is being rebuilt while adaptive hash index
entries exist. The original index will be inserted into
dict_table_t::freed_indexes and dict_index_t::set_freed()
will be called.
dict_index_t::set_freed(), dict_index_t::freed(): Note, or check,
whether the index has been freed. We will use the
impossible page number 1 to denote this condition.
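A sketch of the sentinel (page 0 of a tablespace holds the header and
page 1 a change-buffer bitmap page, so no index root can ever have
page number 1):

  #include <cstdint>

  constexpr uint32_t FREED_PAGE_NO = 1;  // impossible as an index root

  struct IndexSketch {                   // stand-in for dict_index_t
    uint32_t page;                       // root page number
    void set_freed() { page = FREED_PAGE_NO; }
    bool freed() const { return page == FREED_PAGE_NO; }
  };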
dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().
dict_index_t::detach_columns(): Move the assignment n_fields=0
to ha_innobase_inplace_ctx::clear_added_indexes().
We must have access to the columns when freeing the
adaptive hash index. Note: dict_table_t::v_cols[] will remain
valid. If virtual columns are dropped or added, the table
definition will be reloaded in ha_innobase::commit_inplace_alter_table().
buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.
We will also reduce the number of btr_get_search_latch() calls
and enclose some more code inside #ifdef BTR_CUR_HASH_ADAPT
in order to benefit cmake -DWITH_INNODB_AHI=OFF.
On a checksum failure of a ROW_FORMAT=COMPRESSED page,
buf_LRU_free_one_page() would invoke buf_LRU_block_remove_hashed()
which will read the uncompressed page frame, although it would not
be initialized. With bad enough luck, fil_page_get_type(page)
could return an unrecognized value and cause the server to abort.
buf_page_io_complete(): On the corruption of a ROW_FORMAT=COMPRESSED
page, zerofill the uncompressed page frame.
This essentially reverts commit b393e2cb0c.
The leak might have been fixed, but because the
DEBUG_SYNC instrumentation for InnoDB purge threads was reverted
in 10.5 commit 5e62b6a5e0
as part of introducing a thread pool, it is easiest to revert
the entire change.
- There are multiple inconsistencies and incorrect ways in which rw-lock
stats are calculated (a corrected counting scheme is sketched after this list).
- shared rw-lock stats:
The "rounds" counter is incremented only once for the N rounds done
in a spin-cycle.
- all rw-lock stats:
If the spin-cycle is short-circuited, attempts are re-counted.
[If a spin-cycle is interrupted before it completes
srv_n_spin_wait_rounds (default 30) rounds, spin_count is incremented
to account for this. If the thread resumes the spin-cycle (because the
lock is still unavailable) and is again interrupted, or completes,
spin_count is again incremented by the total count, failing to adjust
for the increment made for the previous attempt.]
- s/x rw-lock stats:
The spin_loop counter is not incremented at all; instead, it is reported
as 0 (in the SHOW ENGINE output), and the division that calculates spin
rounds per spin-loop is adjusted accordingly.
As per the original semantics, the spin_loop counter should be incremented
once per spin-loop execution.
- sx rw-lock stats:
SX locks increment the spin_loop counter, but instead of incrementing it
once per spin-loop invocation, they do so multiple times, based on how many
times the spin-loop flow is repeated for the same instance after an OS wait.
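A self-contained sketch of the intended semantics: spin_loop is
incremented exactly once per invocation, and rounds accumulates only
the iterations actually executed, with no re-counting after an
interruption:

  #include <atomic>
  #include <cstdint>

  struct SpinStats {
    std::atomic<uint64_t> spin_loop{0};  // one per spin-cycle invocation
    std::atomic<uint64_t> rounds{0};     // total spin iterations
    std::atomic<uint64_t> os_waits{0};   // fallbacks to an OS wait
  };

  template <typename TryLock>
  void spin_acquire(SpinStats& stats, TryLock try_lock,
                    unsigned max_rounds /* srv_n_spin_wait_rounds */) {
    stats.spin_loop.fetch_add(1, std::memory_order_relaxed);
    for (;;) {
      for (unsigned r = 0; r < max_rounds; r++) {
        if (try_lock()) {
          // count only the rounds spun in this attempt
          stats.rounds.fetch_add(r + 1, std::memory_order_relaxed);
          return;
        }
      }
      stats.rounds.fetch_add(max_rounds, std::memory_order_relaxed);
      stats.os_waits.fetch_add(1, std::memory_order_relaxed);
      // a real implementation would block on an OS event here
    }
  }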
innobase_get_charset(), innobase_get_stmt_safe(): Remove.
It is more efficient and readable to invoke thd_charset()
and thd_query_safe() directly, without a non-inlined wrapper function.
As part of the SPATIAL INDEX implementation in InnoDB,
dict_index_t was expanded by an rtr_ssn_t field. There are only
3 operations on this field, all protected by rtr_ssn_t::mutex:
* btr_cur_search_to_nth_level() stores the least significant 32 bits
of the 64-bit value that is stored in the index root page.
(This would better be done when the table is opened for the
very first time.)
* rtr_get_new_ssn_id() increments the value by 1.
* rtr_get_current_ssn_id() reads the current value.
All these operations can be implemented equally safely by using
atomic memory access operations.
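A sketch of the lock-free replacement (the member functions map 1:1
onto the operations listed above; the 64-bit width and relaxed memory
ordering are simplifying assumptions):

  #include <atomic>
  #include <cstdint>

  struct rtr_ssn_sketch {
    std::atomic<uint64_t> seq_no{0};

    // btr_cur_search_to_nth_level(): publish the value read from the
    // index root page
    void init(uint64_t from_root) {
      seq_no.store(from_root, std::memory_order_relaxed);
    }
    // rtr_get_new_ssn_id(): increment by 1 and return the new value
    uint64_t get_new() {
      return seq_no.fetch_add(1, std::memory_order_relaxed) + 1;
    }
    // rtr_get_current_ssn_id(): read the current value
    uint64_t current() const {
      return seq_no.load(std::memory_order_relaxed);
    }
  };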
Let us limit the maximum value of the debug parameter
innodb_data_file_size to 256 MiB. It is only being used
in the test innodb.log_data_file_size, and the size
of the system tablespace should never exceed some 70 MiB
in ./mtr. Thus, 256 MiB should be a reasonable limit.
The fact that negative values that are passed to unsigned parameters
wrap around to the maximum value appears to be a regression due to
commit 18ef02b04d
and has been filed as bug MDEV-22219.
The test was incompatible with ./mtr --repeat=2 until
commit 2d6719d7ee
fixed that.
It turns out that the failing assertion that we disabled in
commit 3db94d2403
is bogus and can fail when the change buffer is emptied
during the last batch of crash recovery. The reason for this
is the condition around the page_create_empty() call in
page_cur_delete_rec(). The condition was removed in MariaDB
Server 10.5 as part of MDEV-12353, in
commit 7ae21b18a6 and
commit f8a9f90667.
The bug that the assertion aimed to catch is MDEV-22497, which
was fixed in commit 26aab96ecf.
The InnoDB insert buffer was upgraded in MySQL 5.5 into a change
buffer that also covers delete-mark and delete (purge) operations.
There is an important constraint for delete operations: a B-tree
leaf page must not become empty unless the entire tree becomes empty,
consisting of an empty root page. Because change buffer merges only
occur on a single leaf page at a time, delete operations must not be
buffered if it is possible that the last record of the page could be
deleted. (In that case, we would refuse to use the change buffer, and
if we really delete the last record, we would shrink the index tree.)
The function ibuf_get_volume_buffered_hash() is part of our insurance
that the page would not become empty. It is supposed to map each
buffered INSERT or DELETE_MARK record payload into a hash value.
We will only count each such record as a distinct key if there is no
hash collision. DELETE operations will always decrement the predicted
number of records in the page.
Due to a bug in the function, we would actually compute the hash value
not only on the record payload, but also on some following bytes,
in case the record contains NULL values. In MySQL Bug #61104, we had
some examples of this dating back to 2012. But back then, we failed to
reproduce the bug, and in commit d84c95579b
we simply demoted the hard assertion to a message printout and a debug
assertion failure.
ibuf_get_volume_buffered_hash(): Correctly compute the hash value
of the payload bytes only. Note: we will consider
('foo','bar'),(NULL,'foobar'),('foob','ar') to be equal, but this
is not a problem, because in case of a hash collision, we could
also consider ('boo','far') to be equal, and underestimate the number
of records in the page, leading to refusing to buffer a DELETE.
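A self-contained sketch of the corrected folding (the multiplier and
the representation are illustrative; NULL fields are passed as empty
views and contribute no bytes):

  #include <cstdint>
  #include <string_view>
  #include <vector>

  // Fold only the actual payload bytes of each field. The fields are
  // concatenated without delimiters, so ('foo','bar') and ('foob','ar')
  // collide; that is harmless, because a collision only lowers the
  // estimated record count, making the DELETE-buffering check more
  // conservative.
  uint32_t fold_record(const std::vector<std::string_view>& fields) {
    uint32_t fold = 0;
    for (std::string_view f : fields)  // a NULL field: empty view
      for (unsigned char c : f)
        fold = fold * 31 + c;
    return fold;
  }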
This is a partial backport of
commit 5e7e7153b4 from 10.4.
assert_trx_is_free(): Assert !is_wsrep().
trx_init(): Do not initialize trx->wsrep, because it must have been
initialized already.
trx_commit_in_memory(): Invoke wsrep_commit_ordered(). This call
was being skipped, because the transaction object had already been
freed to the pool.
trx_rollback_for_mysql(), innobase_commit_low(),
innobase_rollback_trx(): Always reset trx->wsrep.
FOREIGN_KEY_CHECKS is disabled:
- The referenced index can be NULL while renaming the referenced column.
In that case, rename the referenced column name in dict_foreign_t and
find the equivalent referenced index.
During the UPDATE of PRIMARY KEY columns, we may miscalculate the
size of the clustered index record.
row_upd_clust_rec_by_insert(): Pass the total number of off-page columns,
which may include such columns that were inherited from the record
and not created as part of the UPDATE operation.
This is based on
mysql/mysql-server@490c45e8c8
which is a follow-up to
mysql/mysql-server@1fa475b85d
which we filed and fixed as MDEV-21511.
No test case was provided by Oracle.
Several MYSQL_SYSVAR_STR parameters that employ a validate
function callback fail to copy the string when saving the
validated value. The affected variables include the following
(the underlying pointer bug is sketched after the list):
innodb_ft_aux_table
innodb_ft_server_stopword_table
innodb_ft_user_stopword_table
innodb_buffer_pool_filename
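As a self-contained illustration of the bug pattern (this is not the
MariaDB plugin API; the callback shape is simplified and the names are
hypothetical): the validate callback receives the candidate value in a
transient buffer, so saving the pointer instead of a copy leaves the
variable dangling.

  #include <cstddef>
  #include <cstring>

  // The caller owns 'transient' and reuses that buffer after the call.
  static int check_buggy(const char** save, const char* transient) {
    *save = transient;                  // BUG: dangles after return
    return 0;
  }

  static int check_fixed(const char** save, const char* transient) {
    std::size_t n = std::strlen(transient) + 1;
    char* copy = new char[n];           // storage that outlives the call
    std::memcpy(copy, transient, n);
    *save = copy;
    return 0;
  }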
The test case is an enhanced version of
mysql/mysql-server@0b0c30641f
and the code changes are inspired by their fixes.
We are also importing and adjusting the test innodb_fts.stopword
to get coverage for the variable innodb_ft_user_stopword_table.
buf_dump(), buf_load(): Protect srv_buf_dump_filename with
LOCK_global_system_variables.
fts_load_user_stopword(): Minor cleanup
fts_load_stopword(): Remove the parameter global_stopword_table.
innobase_fts_load_stopword(): Protect innodb_server_stopword_table
against concurrent SET GLOBAL.