- InnoDB fails to recover a full_crc32 encrypted page from the
doublewrite buffer. The reason is that buf_dblwr_t::recover()
fails to identify the space id from the page, because the page has
been encrypted starting from the
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION bytes.
Fix:
===
buf_dblwr_t::recover(): preserve any pages whose space_id
does not match a known tablespace. These could be encrypted pages
of tablespaces that had been created with
innodb_checksum_algorithm=full_crc32.
buf_page_t::read_complete(): If the page looks corrupted and the
tablespace is encrypted and in full_crc32 format, try to
restore the page from doublewrite buffer.
recv_dblwr_t::recover_encrypted_page(): Find the page which
has the same page number and try to decrypt the page using
space->crypt_data. After decryption, compare the space id.
Write the recovered page back to the file.
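The overall flow of recv_dblwr_t::recover_encrypted_page() can be
sketched as follows (dblwr_copies_of(), decrypt_with() and
write_page_back() are illustrative helper names, not the actual
declarations):

  /* For each doublewrite copy whose FIL_PAGE_OFFSET matches page_no: */
  for (const byte *copy : dblwr_copies_of(page_no))
  {
    std::unique_ptr<byte[]> page(new byte[srv_page_size]);
    memcpy(page.get(), copy, srv_page_size);
    if (!decrypt_with(space->crypt_data, page.get())) /* key version, IV */
      continue;
    if (mach_read_from_4(page.get() + FIL_PAGE_SPACE_ID) != space->id)
      continue;           /* the decrypted page belongs to another space */
    write_page_back(space, page_no, page.get());    /* restore the file */
    break;
  }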
Heap tables are allocated blocks to store rows, sized according to
my_default_record_cache (mapped to the server global variable
read_buffer_size).
This causes performance issues when the record length is big
(> 1000 bytes) and my_default_record_cache is small.
Changed to instead set the default heap allocation to 1/16 of the
allowed space and to no longer use my_default_record_cache when
creating the heap. The allocation is also aligned to be just under a
power of 2 (see the sketch below).
In one test that I ran with a record length of 633 bytes, this change
doubled the speed of the query.
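A minimal sketch of the sizing rule (a hypothetical helper; the actual
logic lives in the heap engine setup code):

  /* Use 1/16 of the allowed table size per block, rounded down so the
     allocation lands just under a power of 2, leaving some room for
     malloc bookkeeping overhead. */
  static size_t heap_block_size(size_t max_table_size)
  {
    size_t target= max_table_size / 16;
    size_t pow2= 1;
    while (pow2 * 2 <= target)
      pow2*= 2;                       /* largest power of 2 <= target */
    return pow2 > 64 ? pow2 - 32 : 64;
  }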
Other things:
- Fixed calculation of max_records passed to hp_create() to take
into account padding between records.
- Updated calculation of memory needed by heap tables. Before, we
did not take into account the internal structures needed to access rows.
- Changed the block size for memory_table from 1 to 16384 to get less
fragmentation. This also avoids a problem where we need 1K
to manage index and row storage, which was not accounted for before.
- Moved heap memory usage to a separate test for 32 bit.
- Allocate all data blocks in heap in powers of 2. Change reported
memory usage for heap to reflect this.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Note: Changes to the test innodb.stats_persistent
in commit e5c4c0842d (MDEV-35443)
are not merged, because the test scenario is impossible
due to commit e66928ab28 (MDEV-33462).
opt_calc_index_goodness(): Correct an inaccurate condition.
We can very well use a clustered index of a table that is subject
to online rebuild. But we must not choose an index that has not been
committed (it is a secondary index that was not fully created)
or that is corrupted or not a normal B-tree index.
opt_search_plan_for_table(): Remove some redundant code, now that
opt_calc_index_goodness() checks against corrupted indexes.
The test case allows this code to be exercised. The main observation
when running
./mtr --rr innodb.stats_persistent
rr replay var/log/mysqld.1.rr/latest-trace
should be that when opt_search_plan_for_table() is being invoked by
dict_stats_update_persistent() on the being-altered statistics table
in the 2nd call after ha_innobase::inplace_alter_table(),
and the fix in opt_calc_index_goodness() is absent,
it would choose the code path if (n_fields == 0), that is, a full
table scan, instead of searching for the record. The GDB commands to
execute in "rr replay" would be as follows:
break ha_innobase::inplace_alter_table
continue
break opt_search_plan_for_table
continue
continue
next
next
…
Reviewed by: Vladislav Lesin
The assertion fails during the wsrep recovery step, in the function
innobase_rollback_by_xid(). The transaction's xid is normally
cleared as part of lookup by xid, unless the transaction has
a wsrep specific xid.
This is a regression from MDEV-24035 (commit ddd7d5d8e3),
which removed the code that clears the xid before rollback for
transactions with a wsrep specific xid.
In the function buf_page_create_low(), remove duplicate code that
incorrectly overwrote the ibuf_exist variable when only the compressed
page is loaded in the buffer pool. This helps remove any old change
buffer records immediately before the page is re-used.
The limit on the socket path length on Unix according to libc is 108
bytes (see sockaddr_un::sun_path), but in the table it is a string of
maximum length 64, which results in truncation of the socket path and
failure to connect by plugins using servers, such as spider.
This regression was introduced in 10.6 by the following commit.
commit 35d477dd1d
MDEV-34453 Trying to read 16384 bytes at 70368744161280
The page state could change after being buffer-fixed and needs to be
read again after locking the page.
trx_sys_t::find_same_or_older_in_purge(): Correct a mistake that
was made in commit 19acb0257e
(MDEV-35508) and make the caching logic correspond to the one in
trx_sys_t::find_same_or_older(). In the more common code path
for 64-bit systems, the condition !hot was inadvertently inverted,
making us wrongly skip calls to find_same_or_older_low() when the
transaction may still be active.
Furthermore, the call should have been to find_same_or_older_low()
and not the wrapper find_same_or_older().
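A simplified sketch of the corrected shape (the actual code in
trx0sys.h is more involved):

  bool trx_sys_t::find_same_or_older_in_purge(trx_t *caller, trx_id_t id)
  {
    if (id <= caller->max_inactive_id_atomic) /* relaxed 64-bit load */
      return false;          /* cache says: no such active transaction */
    if (find_same_or_older_low(caller, id))   /* not the wrapper */
      return true;
    caller->max_inactive_id_atomic= id;       /* remember the negative */
    return false;
  }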
Under unknown circumstances, the SQL layer may wrongly disregard an
invocation of thd_mark_transaction_to_rollback() when an InnoDB
transaction had been aborted (rolled back) due to one of the following errors:
* HA_ERR_LOCK_DEADLOCK
* HA_ERR_RECORD_CHANGED (if innodb_snapshot_isolation=ON)
* HA_ERR_LOCK_WAIT_TIMEOUT (if innodb_rollback_on_timeout=ON)
Such an error used to cause a crash of InnoDB during transaction commit.
These changes aim to catch and report the error earlier, so that not only
this crash can be avoided but also the original root cause be found and
fixed more easily later.
The idea of this fix is from Michael 'Monty' Widenius.
HA_ERR_ROLLBACK: A new error code that will be translated into
ER_ROLLBACK_ONLY, signalling that the current transaction
has been aborted and the only allowed action is ROLLBACK.
trx_t::state: Add TRX_STATE_ABORTED that is like
TRX_STATE_NOT_STARTED, but noting that the transaction had been
rolled back and aborted.
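In a simplified sketch, the intended fail-fast behaviour is:

  /* Any further statement on an aborted transaction fails immediately,
     instead of crashing later during commit: */
  if (trx->state == TRX_STATE_ABORTED)
    return HA_ERR_ROLLBACK; /* the SQL layer maps this to ER_ROLLBACK_ONLY */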
trx_t::is_started(): Replaces trx_is_started().
ha_innobase: Check the transaction state in various places.
Simplify the logic around SAVEPOINT.
ha_innobase::is_valid_trx(): Replaces ha_innobase::is_read_only().
The InnoDB logic around transaction savepoints, commit, and rollback
was unnecessarily complex and might have contributed to this
inconsistency. So, we are simplifying that logic as well.
trx_savept_t: Replace with const undo_no_t*. When we rollback to
a savepoint, all we need to know is the number of undo log records
that must survive.
trx_named_savept_t, DB_NO_SAVEPOINT: Remove. We can store undo_no_t
directly in the space allocated at innobase_hton->savepoint_offset.
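With this, savepoint bookkeeping reduces to roughly the following
(an illustrative sketch; rollback_to() is an illustrative name):

  /* SAVEPOINT: remember how many undo log records exist right now, in
     the space that the server allocated at
     innobase_hton->savepoint_offset */
  *static_cast<undo_no_t*>(savepoint_data)= trx->undo_no;

  /* ROLLBACK TO SAVEPOINT: undo everything logged after that point */
  rollback_to(trx, *static_cast<undo_no_t*>(savepoint_data));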
fts_trx_create(): Do not copy previous savepoints.
fts_savepoint_rollback(): If a savepoint was not found, roll back
everything after the default savepoint of fts_trx_create().
The test innodb_fts.savepoint is extended to cover this code.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
buf_dblwr_t::recover(): Correct a debug assertion failure that had
been added in commit bb47e575de (MDEV-34830).
The server may have been killed while a log write was in progress, and
therefore recv_sys.scanned_lsn may be up to RECV_PARSING_BUF_SIZE bytes
ahead of recv_sys.recovered_lsn.
Thanks to Matthias Leich for providing "rr replay" traces and
testing this.
fil_space_t::create(): Instead of invoking the default fil_space_t
constructor on a zero-filled buffer, allocate an uninitialized buffer
and invoke an explicitly defined constructor on it. Also, specify
initializer expressions for all constant data members, so that all of them
will be initialized in the constructor.
fil_space_t::being_imported: Replaces part of fil_space_t::purpose.
fil_space_t::is_being_imported(), fil_space_t::is_temporary():
Replaces fil_space_t::purpose.
fil_space_t::id: Changed the type from ulint to uint32_t to reduce
incompatibility with later branches that include
commit ca501ffb04 (MDEV-26195).
fil_space_t::try_to_close(): Do not attempt to close files that are
in an I/O bound phase of ALTER TABLE…IMPORT TABLESPACE.
log_file_op, first_page_init, recv_spaces_t:
Use uint32_t for the tablespace id.
Reviewed by: Debarun Banerjee
os_innodb_umask was of an incorrect type, resulting in warnings
with clang-19. The correct type is mode_t.
As os_innodb_umask was set during innodb_init from my_umask,
the type was corrected there along with its companion my_umask_dir.
Because of this, the default mask values in InnoDB never
had an effect.
The resulting change revealed sign differences in
my_create{,_nosymlink} and open_nosymlinks:
mysys/my_create.c:47:20: error: operand of ?: changes signedness from ‘int’ to ‘mode_t’ {aka ‘unsigned int’} due to unsignedness of other operand [-Werror=sign-compare]
47 | CreateFlags ? CreateFlags : my_umask);
Ref: clang-19 warnings:
[55/123] Building CXX object storage/innobase/CMakeFiles/innobase.dir/os/os0file.cc.o
storage/innobase/os/os0file.cc:1075:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1075 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1249:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1249 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1381:45: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1381 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
Threads can normally exit without an explicit pthread_exit call.
The calls seem to date back to old glibc bugs, many around glibc 2.2.5.
A semi-related bug was https://bugs.mysql.com/bug.php?id=82886.
To improve safety in the signal handlers, DBUG_* code was removed.
These changes were also needed to avoid some unresolved MSAN stack
issues.
This is effectively a backport of 2719cc4925.
The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-ALTER TABLE cases, where
the value is stale and contains whatever was set by any earlier ALTER
TABLE. This could cause the wrong error code to be generated, which in
some cases can cause replication to break with a "different errorcode"
error.
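The needed kind of guard, as a sketch (the flag name here is
hypothetical; the point is to consult alter_info only when the current
statement really is ALTER TABLE):

  bool adding_partition=
    thd->lex->sql_command == SQLCOM_ALTER_TABLE &&  /* value is fresh */
    (thd->lex->alter_info.partition_flags & ALTER_PARTITION_ADD);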
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This is a fixup of MDEV-26345 commit
77ed235d50.
In MDEV-26345 the spider group by handler was updated so that it uses
the item_ptr fields of Query::group_by and Query::order_by, instead of
item. This was and is because the call to
join->set_items_ref_array(join->items1) during the execution stage,
just before the execution, replaces the order-by / group-by item
arrays with Item_temptable_field.
Spider traverses the item tree during the group by handler (gbh)
creation at the end of the optimization stage, and decides whether a
gbh could handle the execution of the query. Basically, the spider gbh
can handle the execution if it can construct a well-formed query,
execute it on the data node, and store the results in the correct
places. If so, it will create one; otherwise it will return NULL and
the execution will use the usual handler (ha_spider instead of
spider_group_by_handler). To that end, the general principle is that
the items checked during creation should be the same items later used
for query construction. Since in MDEV-26345 we changed to use the
item_ptr field instead of the item field of order-by and group-by in
query construction, in this patch we do the same for the gbh creation.
The item_ptr field could be the uninitialised NULL value during the
gbh creation. This is because the optimizer may replace a DISTINCT
with a GROUP BY, which only happens if the original GROUP BY is empty.
It creates the artificial GROUP BY by calling create_distinct_group(),
which creates the corresponding ORDER object with the item field
pointing into ref_pointer_array, but leaves item_ptr as NULL.
When spider finds out that item_ptr is NULL, it knows there's some
optimizer skullduggery and it is passed a query different from the
original. Without a clear contract between the server layer and the
gbh, it is better to be safe than sorry and not create the gbh in this
case.
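As a simplified sketch, the creation-time check becomes:

  for (ORDER *order= query->group_by; order; order= order->next)
  {
    if (!order->item_ptr) /* optimizer-made GROUP BY, e.g. from DISTINCT */
      return NULL;        /* fall back to the usual ha_spider path */
    /* ...otherwise check order->item_ptr, the same item that will later
       be used for query construction, instead of *order->item... */
  }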
Also add a check and error reporting for the unlikely case of item_ptr
changing from non-NULL at gbh construction to NULL at execution to
prevent server crash.
Also, we remove a check added in MDEV-29480 of ORDER BY items being
aggregate functions. That check was added on the premise that spider
was including auxiliary SELECT items that are referenced by ORDER BY
items. This premise has no longer been true since MDEV-26345, and it
caused problems such as MDEV-29546, which was fixed by MDEV-26345.
btr_cur_t::search_leaf(): In the BTR_SEARCH_PREV and BTR_MODIFY_PREV
modes, reset the previous search status before invoking
page_cur_search_with_match(). Otherwise, the search could end up
in a totally wrong subtree.
This fixes a regression that was introduced in
commit de4030e4d4 (MDEV-30400).
buf_block_alloc(): Define as an alias in buf0lru.h, which defines
the underlying buf_LRU_get_free_block().
buf_block_free(): Define as an alias of the non-inline function
buf_pool.free_block(block).
Reviewed by: Vladislav Lesin
Instead of repurposing buf_page_t::access_time for state()==MEMORY
blocks that are part of recv_sys.pages, let us define an anonymous
union around buf_page_t::hash. In this way, we will be able to
declare access_time private.
Reviewed by: Vladislav Lesin
row_purge_remove_sec_if_poss_leaf(): If there is an active transaction
that is not newer than PAGE_MAX_TRX_ID, return the bogus value 1
so that row_purge_remove_sec_if_poss_tree() is guaranteed to recheck if
the record needs to be purged. It could be the case that an active
transaction would insert this record between the time this check
completed and row_purge_remove_sec_if_poss_tree() acquired a latch
on the secondary index leaf page again.
row_purge_del_mark_error(), row_purge_check(): Some unlikely code
refactored into separate non-inline functions.
trx_sys_t::find_same_or_older_low(): Move the unlikely and bulky
part of trx_sys_t::find_same_or_older() to a non-inline function.
trx_sys_t::find_same_or_older_in_purge(): A variant of
trx_sys_t::find_same_or_older() for use in the purge subsystem,
with potential concurrent access of the same trx_t object from
multiple threads.
trx_t::max_inactive_id_atomic: An Atomic_relaxed alias of the
regular data field trx_t::max_inactive_id, which we
use on systems that have native 64-bit loads or stores.
On any 64-bit system that seems to be supported by GCC, Clang or MSVC,
relaxed atomic loads and stores use the regular load and store
instructions. On -march=i686 the 64-bit atomic loads and stores
would use an XMM register.
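The aliasing can be pictured as follows (a reduced sketch of the
declaration):

  union
  {
    trx_id_t max_inactive_id;          /* accesses by the owning thread */
    Atomic_relaxed<trx_id_t> max_inactive_id_atomic; /* purge threads */
  };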
This fixes a regression that had been introduced in
commit b7b9f3ce82 (MDEV-34515).
There would be messages
[ERROR] InnoDB: tried to purge non-delete-marked record in index
in the server error log, and an assertion ut_ad(0) would cause a
crash of debug instrumented builds. This could also cause incorrect
results for MVCC reads and corrupted secondary indexes.
The debug instrumented test case was written by Debarun Banerjee.
Reviewed by: Debarun Banerjee
Ignore a snapshot isolation conflict during fragment removal, before
the streaming transaction commits. This happens when a streaming
transaction creates a read view that precedes the INSERTion of
fragments into the streaming_log table. Fragments are INSERTed
using a different transaction. These fragments are then removed
as part of the COMMIT of the streaming transaction. This fragment
removal operation could fail when the fragments were not part of
the transaction's read view, thus violating snapshot isolation.
We periodically observe assertion failures in the mtr tests,
specifically in the /storage/innobase/row/row0ins.cc file,
following a WSREP error. The error message is: 'WSREP: record
locking is disabled in this thread, but the table being modified
is not mysql/wsrep_streaming_log: mysql/innodb_table_stats.'
This issue seems to occur because, upon opening the table,
innodb_stats_auto_recalc may trigger, which Galera does not
anticipate. This commit should fix this bug.
The existing default value 1000 is too big and could result in
"hanging" when failing to connect to a remote server. Three tries in
total is a more sensible default.
ha_storage_put_memlim(): Initialize node->next in order to avoid a
crash on a subsequent invocation, due to dereferencing an uninitialized
pointer.
This fixes a regression that had been introduced in
commit ccb6cd8053 (MDEV-35189).
Reviewed by: Debarun Banerjee
It is `ulint` on 10.6 and `uint32_t` on 10.11+, but I included its
format specifier change in 10.6 (MDEV-35430, merged #3493) rather
than 10.11. This commit reverts that change so 10.11 can reapply it.
Create a MY_WARNING_FLAGS_NON_FATAL for testing warnings.
Add -Weffc++ as a no-error aspect, disabling it for rocksdb due
to excessive errors that will be corrected later.
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
Various additional fixes, each too small to put into
their own commit.
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
Change the type of my_hash_get_key to:
1) Return const
2) Change the context parameter to be const void*
Also fix casting in hash adjacent areas.
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
The functions queue_compare, qsort2_cmp, and qsort_cmp2
all had similar interfaces, and were used interchangeably
and unsafely cast to one another.
This patch consolidates the functions all into the
qsort_cmp2 interface.
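For reference, the surviving interface is assumed to have the
following shape (an assumption; see include/my_sys.h for the
authoritative declaration):

  /* assumed shape: a comparator that takes a caller-supplied context */
  typedef int (*qsort_cmp2)(void *arg, const void *a, const void *b);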
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
A race condition was observed between two buf_page_get_zip() for a page.
One of them had proceeded to buf_read_page(), allocating and x-latching
a buf_block_t that initially comprises only an uncompressed page frame.
While that thread was waiting inside buf_block_alloc(), another thread
would try to access the same page. Without acquiring a page latch, it
would wrongly conclude that there is corruption because no compressed
page frame exists for the block.
buf_page_get_zip(): Simplify the logic and correct the documentation.
Always acquire a shared latch to prevent any race condition with a
concurrent read operation. No longer increment a buffer-fix; the latch
is sufficient for preventing page relocation or eviction.
buf_read_page(): Add the parameter bool unzip=true. In buf_page_get_zip()
there is no need to allocate an uncompressed page frame for reading a
compressed BLOB page. We only need that for other ROW_FORMAT=COMPRESSED
pages, or for writing compressed BLOB pages.
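So a compressed BLOB page can now be read without an uncompressed
frame, sketched as:

  /* in buf_page_get_zip(): no uncompressed frame is needed here */
  buf_read_page(page_id, zip_size, /*unzip=*/false);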
btr_copy_zblob_prefix(): Remove the message "Cannot load compressed BLOB"
because buf_page_get_zip() will already have reported a more specific
error whenever it returns nullptr.
row_merge_buf_add(): Do not crash on BLOB corruption, but return an
error instead. (In debug builds, an assertion will fail if this
corruption is noticed.)
Reviewed by: Debarun Banerjee
When srv_page_size and innodb_page_size were introduced,
the functions page_align() and page_offset() got more expensive.
Let us try to replace such calls with simpler pointer arithmetic
with respect to the buffer page frame.
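The typical replacement pattern, as a sketch (assuming the 10.11
layout where the page frame pointer is block->page.frame):

  /* before: page_t *page= page_align(rec); ulint offs= page_offset(rec);
     after, when the buf_block_t is at hand: */
  const page_t *page= block->page.frame;  /* the frame is page-aligned */
  const ulint offs= ulint(rec - page);    /* no masking arithmetic needed */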
page_rec_get_next_non_del_marked(): Add a page frame as a parameter,
and template<bool comp>.
page_rec_next_get(): A more efficient variant of page_rec_get_next(),
with template<bool comp> and const page_t* parameters.
lock_get_heap_no(): Replaces page_rec_get_heap_no() outside debug checks.
fseg_free_step(), fseg_free_step_not_header(): Take the header block
as a parameter.
Reviewed by: Vladislav Lesin
ut_hash_ulint(): Remove. The exclusive OR before a modulus operation
does not serve any useful purpose; it is only obfuscating code and
wasting some CPU cycles.
Reviewed by: Debarun Banerjee
ut_fold_ull(): For SIZEOF_SIZE_T < 8, we simulate universal hashing
(Carter and Wegman, 1977) by pretending that SIZE_T_MAX + 1
is a prime. In other words, we implement a Rabin–Karp rolling
hash algorithm similar to java.lang.String.hashCode().
This is used for representing 64-bit dict_index_t::id or
dict_table_t::id in the native word size.
For SIZEOF_SIZE_T >= 8, we just use an identity mapping.
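On a 32-bit system the folding can be pictured like this (the
multiplier here is illustrative; see ut0rnd.h for the actual constant):

  static inline size_t fold_u64_sketch(uint64_t id)
  {
    /* a rolling hash over the two 32-bit "digits" of id, computed
       modulo SIZE_T_MAX + 1 while pretending that it is a prime */
    return size_t(id >> 32) * 31 + size_t(uint32_t(id));
  }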
Reviewed by: Debarun Banerjee
The HASH_ macros are unnecessarily obfuscating the logic,
so we had better replace them.
hash_cell_t::search(): Implement most of the HASH_DELETE logic,
for a subsequent insert or remove().
hash_cell_t::remove(): Remove an element.
hash_cell_t::find(): Implement the HASH_SEARCH logic.
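The underlying pattern is an intrusive singly linked chain per cell;
a reduced sketch (not the actual declarations in hash0hash.h):

  struct hash_cell_t
  {
    void *node;   /* head of the chain of elements in this cell */

    /* Return the location of the matching link, so that the caller can
       insert there or unlink the element without a second traversal. */
    template<typename T, typename Pred>
    T **search(T *T::*next, Pred pred)
    {
      T **prev= reinterpret_cast<T**>(&node);
      while (*prev && !pred(*prev))
        prev= &((*prev)->*next);
      return prev;
    }
  };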
xb_filter_hash_free(): Avoid any hash table lookup;
just traverse the hash bucket chains and free each element.
xb_register_filter_entry(): Search databases_hash only once.
rm_if_not_found(): Make use of find_filter_in_hashtable().
dict_sys_t::acquire_temporary_table(), dict_sys_t::find_table():
Define non-inline to avoid unnecessary code duplication.
dict_sys_t::add(dict_table_t *table), dict_table_rename_in_cache():
Look for duplicate while finding the insert position.
dict_table_change_id_in_cache(): Merged to the only caller
row_discard_tablespace().
hash_insert(): Helper function of dict_sys_t::resize().
fil_space_t::create(): Look for a duplicate (and crash if found)
when searching for the insert position.
lock_rec_discard(): Take the hash array cell as a parameter
to avoid a duplicated lookup.
lock_rec_free_all_from_discard_page(): Remove a parameter.
Reviewed by: Debarun Banerjee
buf_pool_t::LRU_warn(): Also clear the try_LRU_scan flag, to ensure
that need_LRU_eviction() will hold. This should ensure progress when
buf_LRU_get_free_block() is expecting buf_flush_page_cleaner() to
make some room, even when buf_pool.LRU.count is small.
This hang was observed in trx_lists_init_at_db_start() while the last
batch of crash recovery was in progress, but it could theoretically
be possible also when a large part of the buffer pool is occupied by
record locks or the adaptive hash index.
Reviewed by: Debarun Banerjee