In commit 1193a793c4 we
set innodb_use_native_aio=OFF when using io_uring
on a kernel where write requests could potentially be lost.
The last reproducible issue was fixed in Linux 5.16-rc1
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v5.16-rc1&id=d3e3c102d107bb84251455a298cf475f24bab995
and the fix was backported to 5.15.3.
Hence, using a 5.16 or later kernel should be fine.
The Debian kernel 5.15.0-1-amd64 (5.15.3-1) was tested.
On Debian, utsname::release (uname -r) does not reflect the exact
kernel version, while utsname::version (uname -v) does.
On Fedora, however, utsname::version is rather different:
$ uname -r
5.14.20-200.fc34.x86_64
$ uname -v
#1 SMP Thu Nov 18 22:03:20 UTC 2021
Therefore we use the version string, and fall back to the release
string if the version does not contain the start of a kernel version
number.
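For illustration, a minimal sketch (assumed helper names, not the actual
server code) of the detection described above: scan utsname::version for
the first x.y.z kernel version and fall back to utsname::release when no
such number is present.

  #include <cstdio>
  #include <sys/utsname.h>

  /* Find the first x.y.z version number in s, if any. */
  static bool parse_linux_version(const char *s, unsigned v[3])
  {
    for (; *s; s++)
      if (*s >= '0' && *s <= '9' &&
          sscanf(s, "%u.%u.%u", &v[0], &v[1], &v[2]) == 3)
        return true;
    return false;
  }

  /* Return whether the running kernel is at least major.minor.patch. */
  static bool linux_kernel_at_least(unsigned major, unsigned minor,
                                    unsigned patch)
  {
    struct utsname u;
    unsigned v[3];
    if (uname(&u) ||
        (!parse_linux_version(u.version, v) &&
         !parse_linux_version(u.release, v)))
      return false;
    return v[0] != major ? v[0] > major
         : v[1] != minor ? v[1] > minor
         : v[2] >= patch;
  }

On the Debian kernel above this would accept utsname::version (5.15.3),
while on the Fedora kernel it would fall back to utsname::release.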
Thanks to Daniel Black for reporting the Linux kernel bug and
Jens Axboe for actually fixing it.
Co-Authored-By: Daniel Black <daniel@mariadb.org>
Closes: #1953
The macro my_offsetof() performs pointer arithmetic that may be
undefined behavior. As reported in MDEV-26272, it may cause
clang -fsanitize=undefined to generate invalid memory references.
struct PFS_events_statements: Convert into a standard-layout type
(std::is_standard_layout) by encapsulating the standard-layout
struct PFS_events instead of deriving from it, so that the standard
macro offsetof() can be used.
PFS_events_statements::copy(): Renamed from copy_events_statements().
A cast to void* is now needed in memcpy() to avoid GCC -Wclass-memaccess
"writing to an object ... leaves 64 bytes unchanged".
buf_LRU_scan_and_free_block(): It turns out that even with
-fno-expensive-optimizations, GCC 4.8.5 may fail to split an instruction.
For the non-embedded server, -O1 would fail and -Og would seem to work,
while the embedded server build seems to require -O0.
buf_block_init(): Correct the MemorySanitizer instrumentation.
buf_page_get_low(): Do not read dirty data from read-fixed blocks.
These data races were identified by MemorySanitizer. If a read-fixed
block is being accessed, we must acquire and release a page latch,
so that the read-fix (and the exclusive page latch) will be released
and it will be safe to read the page frame contents if needed,
even before acquiring the final page latch. We do that in
buf_read_ahead_linear() and for the allow_ibuf_merge check.
mtr_t::page_lock(): Assert that the block is not read-fixed.
buf_page_get_low(): When we are creating an uncompressed page frame
for a ROW_FORMAT=COMPRESSED page, we must release the buf_pool.page_hash
latch and buf_pool.mutex while waiting for other threads to release their
fixes on the block.
This was caught by an occasional hang of the test innodb_zip.bug56680.
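The waiting pattern can be illustrated with a simplified, hypothetical
sketch (std::shared_mutex standing in for the page latch; none of these
names are the InnoDB ones): because a block is read-fixed and
exclusively latched before it becomes visible, briefly acquiring and
releasing a shared latch is enough to wait for the read to complete.

  #include <atomic>
  #include <cstdint>
  #include <cstring>
  #include <shared_mutex>

  struct block_sketch
  {
    std::shared_mutex latch;              // stands in for the page latch
    std::atomic<uint32_t> read_fixed{0};  // nonzero while a read is pending
    unsigned char frame[16384];           // page image, valid after the read
  };

  // I/O side: read-fix and X-latch are taken before the block is published
  // and released only once the page image is complete.
  void read_page(block_sketch &b, const unsigned char *src)
  {
    std::unique_lock<std::shared_mutex> x(b.latch);
    b.read_fixed.store(1, std::memory_order_relaxed);
    std::memcpy(b.frame, src, sizeof b.frame);          // the "I/O"
    b.read_fixed.store(0, std::memory_order_release);
  }                                                     // X-latch released

  // A thread that only needs the frame contents to be valid:
  void wait_for_read(block_sketch &b)
  {
    if (b.read_fixed.load(std::memory_order_acquire))
    {
      b.latch.lock_shared();   // blocks until read_page() releases the latch
      b.latch.unlock_shared();
    }
    // b.frame may now be read, even before taking the final page latch.
  }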
The following options were introduced in
commit 2e814d4702 (mariadb-10.2.2)
and have little use:
innodb_disable_resize_buffer_pool_debug had no effect even in
MariaDB 10.2.2 or MySQL 5.7.9. It was introduced in
mysql/mysql-server@5c4094cf49
to work around a problem that was fixed in
mysql/mysql-server@2957ae4f99
(but the parameter was not removed).
innodb_page_cleaner_disabled_debug and innodb_master_thread_disabled_debug
are only used by the test innodb.redo_log_during_checkpoint
that will be removed as part of this commit.
innodb_dict_stats_disabled_debug is only used by that test,
and it is redundant because one could simply use
innodb_stats_persistent=OFF or the STATS_PERSISTENT=0 attribute
of the table in the test to achieve the same effect.
buf_page_t::frame: Moved from buf_block_t::frame.
All 'thin' buf_page_t describing compressed-only ROW_FORMAT=COMPRESSED
pages will have frame=nullptr, while all 'fat' buf_block_t
will have a non-null frame pointing to aligned innodb_page_size bytes.
This eliminates the need for separate states for
BUF_BLOCK_FILE_PAGE and BUF_BLOCK_ZIP_PAGE.
buf_page_t::lock: Moved from buf_block_t::lock. That is, all block
descriptors will have a page latch. The IO_PIN state that was used
for discarding or creating the uncompressed page frame of a
ROW_FORMAT=COMPRESSED block is replaced by a combination of read-fix
and page X-latch.
page_zip_des_t::fix: Replaces state_, buf_fix_count_, io_fix_, status
of buf_page_t with a single std::atomic<uint32_t>. All modifications
will use store(), fetch_add(), fetch_sub(). This space was previously
wasted to alignment on 64-bit systems. We will use the following encoding
that combines a state (partly read-fix or write-fix) and a buffer-fix
count:
buf_page_t::NOT_USED=0 (previously BUF_BLOCK_NOT_USED)
buf_page_t::MEMORY=1 (previously BUF_BLOCK_MEMORY)
buf_page_t::REMOVE_HASH=2 (previously BUF_BLOCK_REMOVE_HASH)
buf_page_t::FREED=3 + fix: pages marked as freed in the file
buf_page_t::UNFIXED=1U<<29 + fix: normal pages
buf_page_t::IBUF_EXIST=2U<<29 + fix: normal pages; may need ibuf merge
buf_page_t::REINIT=3U<<29 + fix: reinitialized pages (skip doublewrite)
buf_page_t::READ_FIX=4U<<29 + fix: read-fixed pages (also X-latched)
buf_page_t::WRITE_FIX=5U<<29 + fix: write-fixed pages (also U-latched)
buf_page_t::WRITE_FIX_IBUF=6U<<29 + fix: write-fixed; may have ibuf
buf_page_t::WRITE_FIX_REINIT=7U<<29 + fix: write-fixed (no doublewrite)
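As an illustration only (assumed member and function names, not the
actual buf0buf.h code), the encoding above allows the fix count, the
state predicates and the state transitions to be expressed on a single
atomic word:

  #include <atomic>
  #include <cstdint>

  struct page_state_sketch
  {
    static constexpr uint32_t NOT_USED= 0, MEMORY= 1, REMOVE_HASH= 2,
      FREED= 3;
    static constexpr uint32_t UNFIXED= 1U << 29, IBUF_EXIST= 2U << 29,
      REINIT= 3U << 29, READ_FIX= 4U << 29, WRITE_FIX= 5U << 29,
      WRITE_FIX_IBUF= 6U << 29, WRITE_FIX_REINIT= 7U << 29;

    std::atomic<uint32_t> fix{UNFIXED};

    // Buffer-fixing only changes the low bits; the state class is kept.
    uint32_t buffer_fix() { return fix.fetch_add(1); }
    uint32_t buffer_unfix() { return fix.fetch_sub(1); }

    // Predicates compare against the state-class thresholds.
    bool is_read_fixed() const
    { const uint32_t s= fix.load(); return s >= READ_FIX && s < WRITE_FIX; }
    bool is_write_fixed() const { return fix.load() >= WRITE_FIX; }
    bool is_freed() const
    { const uint32_t s= fix.load(); return s >= FREED && s < UNFIXED; }

    // One way to express the read_complete() transition: subtracting the
    // distance between state classes turns READ_FIX into UNFIXED or
    // IBUF_EXIST while keeping the buffer-fix count intact.
    void read_complete(bool ibuf_merge_needed)
    {
      fix.fetch_sub(ibuf_merge_needed ? READ_FIX - IBUF_EXIST
                                      : READ_FIX - UNFIXED);
    }
  };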
buf_page_t::write_complete(): Change WRITE_FIX or WRITE_FIX_REINIT to
UNFIXED, and WRITE_FIX_IBUF to IBUF_EXIST, before releasing the U-latch.
buf_page_t::read_complete(): Renamed from buf_page_read_complete().
Change READ_FIX to UNFIXED or IBUF_EXIST, before releasing the X-latch.
buf_page_t::can_relocate(): If the page latch is being held or waited for,
or the block is buffer-fixed or io-fixed, return false. (The condition
on the page latch is new.)
Outside buf_page_get_gen(), buf_page_get_low() and buf_page_free(), we
will acquire the page latch before fix(), and unfix() before unlocking.
buf_page_t::flush(): Replaces buf_flush_page(). Optimize the
handling of FREED pages.
buf_pool_t::release_freed_page(): Assume that buf_pool.mutex is held
by the caller.
buf_page_t::is_read_fixed(), buf_page_t::is_write_fixed(): New predicates.
buf_page_get_low(): Ignore guesses that are read-fixed because they
may not yet be registered in buf_pool.page_hash and buf_pool.LRU.
buf_page_optimistic_get(): Acquire latch before buffer-fixing.
buf_page_make_young(): Leave read-fixed blocks alone, because they
might not be registered in buf_pool.LRU yet.
recv_sys_t::recover_deferred(), recv_sys_t::recover_low():
Possibly fix MDEV-26326, by holding a page X-latch instead of
only buffer-fixing the page.
MDEV-23855 and MDEV-23399 already moved some transient data fields
from buffer pool page descriptors to IORequest, but the write buffer
of PAGE_COMPRESSED or ENCRYPTED tables was missed. Since it is only
needed during asynchronous page write requests, it belongs to IORequest.
btr_cur_optimistic_latch_leaves(): Use transactional_shared_lock_guard.
btr_cur_latch_leaves(): Avoid acquiring some page latches, because
the changes are already blocked by index->lock.
btr_cur_search_to_nth_level_func(): Remove a redundant variable
retrying_for_search_prev=!!prev_tree_blocks, and avoid acquiring
some page latches.
dict_stats_recalc_pool_del(): Always reposition the iterators after
releasing and reacquiring the mutex. Another thread could have modified
recalc_pool, causing reallocation of the underlying memory while
we were waiting.
This fixes a regression that was caused by
commit 45a05fda27 (MDEV-25919).
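The repositioning requirement can be sketched generically (simplified
types, not the actual dict0stats code): any iterator into a std::vector
must be recomputed after the mutex has been released and reacquired,
because a concurrent insertion may have reallocated the storage.

  #include <algorithm>
  #include <cstdint>
  #include <mutex>
  #include <vector>

  static std::mutex recalc_mutex;            // stands in for the pool mutex
  static std::vector<uint64_t> recalc_pool;  // stands in for the recalc pool

  void recalc_pool_del(uint64_t table_id)
  {
    std::unique_lock<std::mutex> lock(recalc_mutex);

    for (auto i= recalc_pool.begin(); i != recalc_pool.end(); ++i)
    {
      if (*i != table_id)
        continue;

      // Suppose we must release the mutex here, for example to wait for a
      // concurrent user of the entry to finish.
      lock.unlock();
      /* ... wait ... */
      lock.lock();

      // While the mutex was released, another thread may have appended to
      // recalc_pool and reallocated its storage, invalidating 'i'.
      // Reposition by searching again instead of reusing the old iterator.
      i= std::find(recalc_pool.begin(), recalc_pool.end(), table_id);
      if (i != recalc_pool.end())
        recalc_pool.erase(i);
      return;
    }
  }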
fil_space_decrypt(): Change the signature to return the status via
dberr_t only. Also replace an impossible condition with an assertion
and prove it via test cases.
In commit 7ae21b18a6 (MDEV-12353)
the recovery of ROW_FORMAT=COMPRESSED tables was changed.
Changes would be logged in a physical format for the compressed
page image, so that the page need not be decompressed or compressed
during recovery.
page_zip_write_rec(): Log any update of the delete-mark flag in the
ROW_FORMAT=COMPRESSED page.
page_zip_dir_insert(): Copy the delete-mark flag. A delete-marked
record may be inserted by btr_cur_pessimistic_update() via
btr_cur_insert_if_possible(), page_cur_tuple_insert(),
page_cur_insert_rec_zip(). In the observed scenario, it was
a ROLLBACK. Presumably, the test case involved repeated DELETE
and INSERT of the same key, or updating a key back and forth.
This change alone might make the adjustment in page_zip_write_rec()
redundant, but we play it safe because we failed to create a
minimal test case for this scenario.
If the server is killed during any DDL operation that is about to
delete an .ibd file, recovery could crash when attempting to load
the table definition of the being-dropped table. By design of
commit 1bd681c8b3 (MDEV-25506 part 3),
a table whose name starts with #sql-ib in the data dictionary may
belong to an uncommitted transaction. So, we must ignore any missing
SYS_COLUMNS, SYS_FIELDS, and SYS_VIRTUAL records for such tables.
The "ID mismatch" error messages were misleading; they really mean
"record not found".
buf_flush_check_neighbors(): Relax a debug assertion that could fail
for the very last page(s) of ROW_FORMAT=COMPRESSED tables that use
a 1024-byte or 2048-byte page size.
This assertion started to fail after
commit d09426f9e6 (MDEV-26537)
modified the .ibd file extension to occur in steps of 4096 bytes.
- In ha_innobase::prepare_inplace_alter_table(), InnoDB should
check whether the table is empty. If the table is empty, the
server should avoid downgrading the MDL after the prepare phase.
This is more like an instant ALTER: it changes only the dictionary
and metadata.
- Changed a few debug test cases to make the DDL table non-empty.
Upon investigation, this was determined to be a compiler bug
(it happens with a new compiler, on code that has not changed for the
last 15 years).
Fixed by de-optimizing the single function remove_key() using an MSVC pragma.
In dict_index_t::clear(), InnoDB frees all pages except the root page.
The leaf segment of the root page is reset and reinitialized.
But in fseg_create(), we have the assumption that only a
FIL_PAGE_TYPE_TRX_SYS page should be re-created for the
non-full-crc32 format. This assumption is wrong in the case of the
rollback of a bulk insert operation.
The debug assertion that was added in
commit 9b967c4c31
tripped Valgrind and MemorySanitizer.
buf_block_init(): Assert that block->page.hash was zero-initialized.
In commit c091a0bc8d we removed
the use of the HASH_ macros for inserting into
buf_pool.page_hash, or accessing buf_page_t::hash.
However, the binary buddy allocator for block->page.zip.data would
still use the HASH_ macros. HASH_INSERT(), but not HASH_DELETE(), would reset
the next-block pointer to the null pointer. Our replacement of
HASH_DELETE() will reset the next-block pointer, and the replacement of
HASH_INSERT() assumes that the pointer is the null pointer.
buf_LRU_block_free_non_file_page(): Assert that the next-block pointer
is the null pointer.
buf_buddy_block_free(): Reset the pointer before invoking
buf_LRU_block_free_non_file_page(). Without this, the added
assertion would fail in the test encryption.innochecksum.
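The pointer discipline can be illustrated with a hypothetical sketch
(simplified names, not the actual buf0buddy or buf0lru code): the free
routine asserts that the next-block pointer is already null, so the
buddy allocator, which chains blocks through the same pointer, must
reset it first.

  #include <cassert>

  struct buddy_block
  {
    buddy_block *hash_next= nullptr;  // the "next-block pointer" above
  };

  // Stand-in for buf_LRU_block_free_non_file_page(): a block handed back
  // here must no longer be chained through its hash pointer.
  void block_free(buddy_block *b)
  {
    assert(!b->hash_next);            // the added assertion
    /* ... put b on the free list (elided) ... */
  }

  // Stand-in for buf_buddy_block_free(): reset the pointer before invoking
  // the free routine, as described above.
  void buddy_block_free(buddy_block *b)
  {
    b->hash_next= nullptr;
    block_free(b);
  }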