page_cur_search_with_match(): Remove rec_get_offsets(), and instead
determine the start and end of each field while comparing.
page_dir_slot_get_rec(), page_dir_slot_get_rec_validate():
Add a parameter to avoid invoking page_align().
page_cur_dtuple_cmp(): Replaces cmp_dtuple_rec_leaf() for both
leaf and non-leaf pages. In SPATIAL INDEX, non-leaf records are
special, because the child page number may be part of the comparison.
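A minimal sketch of the idea, assuming a toy record layout (the real code walks the InnoDB ROW_FORMAT field lengths): the start and end of each field are determined lazily during the comparison, so no offsets array needs to be materialized first.

  #include <algorithm>
  #include <cstring>
  #include <string_view>
  #include <vector>

  struct field { std::string_view data; };  // one search-tuple field

  struct record {                 // toy row: concatenated field bytes
    const char* ptr;
    const unsigned char* lens;    // one length byte per field, for simplicity
  };

  int cmp_tuple_rec(const std::vector<field>& tuple, const record& rec)
  {
    const char* f = rec.ptr;                          // start of field 0
    for (size_t i = 0; i < tuple.size(); i++) {
      size_t len = rec.lens[i];                       // field end found lazily
      size_t common = std::min(tuple[i].data.size(), len);
      if (int c = std::memcmp(tuple[i].data.data(), f, common))
        return c < 0 ? -1 : 1;
      if (tuple[i].data.size() != len)
        return tuple[i].data.size() < len ? -1 : 1;   // shorter field sorts first
      f += len;                                       // next field starts here
    }
    return 0;                                         // all fields equal
  }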
Reviewed by: Vladislav Lesin
For some reason, page_cur_search_with_match_bytes(), which can speed
up append operations (PAGE_CUR_LE used by INSERT), was only enabled
if innodb_adaptive_hash_index=ON even though it has nothing to do with
the adaptive hash index.
Furthermore, mysql/mysql-server@c9bbc83d11 reduced a limit
from 3 to 2 but forgot to adjust the PAGE_N_DIRECTION limit accordingly.
We are adjusting that as well.
Reviewed by: Vladislav Lesin
During a workload, an adaptive hash index had been built on
UNIQUE INDEX(ID) on SYS_TABLES, and during a DROP TABLE
operation the adaptive hash index would be widened to cover
also the PRIMARY KEY(NAME) field that the index includes: (ID,NAME).
Such an adaptive hash index is unlikely to satisfy (m)any queries.
Let us limit the AHI prefix to the unique fields.
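A hedged sketch of the resulting clamping, using toy types (the real code operates on dict_index_t):

  #include <algorithm>
  #include <cstdint>

  struct toy_index { uint16_t n_uniq; uint16_t n_fields; };

  // Never hash more prefix fields than are needed for uniqueness, so that
  // e.g. on SYS_TABLES the included PRIMARY KEY(NAME) field is not hashed.
  uint16_t ahi_prefix_fields(const toy_index& index, uint16_t requested)
  {
    return std::min<uint16_t>(requested, index.n_uniq);
  }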
Reviewed by: Vladislav Lesin
btr_search_drop_page_hash_index(): Replace the Boolean parameter
with const dict_index_t *not_garbage. If buf_block_t::index points
to that, there is no need to acquire btr_sea::partition::latch.
The old parameter bool garbage_collect=false is equivalent to the
parameter not_garbage=nullptr. The parameter garbage_collect=true
will be replaced either with the actual index that is associated
with the buffer page, or with a bogus pointer not_garbage=-1 to
indicate that any entries for a freed index need to be lazily removed.
buf_page_get_low(), buf_page_get_gen(), mtr_t::page_lock(),
mtr_t::upgrade_buffer_fix(): Do not invoke
btr_search_drop_page_hash_index(). Our caller will have to do it
when appropriate.
buf_page_create_low(): Keep invoking btr_search_drop_page_hash_index().
This is the normal way of lazily dropping the adaptive hash index
after a DDL operation such as DROP INDEX.
btr_block_get(), btr_root_block_get(), btr_root_adjust_on_import(),
btr_read_autoinc_with_fallback(), btr_cur_instant_init_low(),
btr_cur_t::search_leaf(), btr_cur_t::pessimistic_search_leaf(),
btr_pcur_optimistic_latch_leaves(), dict_stats_analyze_index_below_cur():
Invoke btr_search_drop_page_hash_index(block, index) for pages that
may be leaf pages. No adaptive hash index may have been created on
anything other than a B-tree leaf page.
btr_cur_search_to_nth_level(): Do not invoke
btr_search_drop_page_hash_index(), because we are only accessing
non-leaf pages and the adaptive hash index may only have been created
on leaf pages.
btr_page_alloc_for_ibuf() and many other callers of buf_page_get_gen()
or similar functions do not invoke btr_search_drop_page_hash_index(),
because the adaptive hash index is never created on such pages.
If a page in the tablespace was freed as part of a DDL operation and
reused for something else, then buf_page_create_low() will take care
of dropping the adaptive hash index before the freed page will be
modified.
It is notable that while the flst_ functions may access pages that are
related to allocating B-tree index pages (the BTR_SEG_TOP and BTR_SEG_LEAF
segments linked from the index root page), those pages themselves can never be
stored in the adaptive hash index. Therefore, it is not necessary to
invoke btr_search_drop_page_hash_index() on them.
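An illustrative sketch of the new contract, with stub types; the latch acquisition itself is elided:

  struct dict_index_t {};
  struct buf_block_t { const dict_index_t* index; };

  void drop_page_hash_index(buf_block_t* block, const dict_index_t* not_garbage)
  {
    const dict_index_t* index = block->index;
    if (!index)
      return;                    // the page has no hash entries
    if (index == not_garbage)
      return;                    // known-live index: no partition latch needed
    // ... acquire btr_sea::partition::latch and remove the entries;
    // not_garbage == (dict_index_t*)-1 means "remove entries for any
    // freed index", and not_garbage == nullptr drops unconditionally ...
  }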
Reviewed by: Vladislav Lesin
btr_search_info_update_hash(): Do nothing if the record is positioned
on the page supremum or infimum pseudo-record. The adaptive hash index
can only include user records. This deficiency would cause the
adaptive hash index parameters to oscillate between hashing a prefix of
1 field and a prefix of 1 byte.
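A small self-contained sketch of the added guard, with a toy record type standing in for the page records:

  #include <cstdint>

  // Toy record tags: 0 = infimum, 1 = supremum, 2 = user record.
  struct rec_t { uint8_t type; };
  static bool page_rec_is_infimum(const rec_t* r) { return r->type == 0; }
  static bool page_rec_is_supremum(const rec_t* r) { return r->type == 1; }

  void search_info_update_hash(const rec_t* rec)
  {
    if (page_rec_is_infimum(rec) || page_rec_is_supremum(rec))
      return;  // only user records can be part of the adaptive hash index
    // ... adjust the recommended hash prefix (n_fields, n_bytes) ...
  }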
Reviewed by: Vladislav Lesin
btr_search_guess_on_hash(): Only set BTR_CUR_HASH_FAIL on actual mismatch.
If the page latch cannot be acquired, the hash search might very well
have succeeded. Do not count that as a failure, that is, do not
unnecessarily invoke btr_search_update_hash_ref() after a normal search.
Set cursor->flag=BTR_CUR_HASH_ABORT if the current parameters of the
adaptive hash index are not suitable for the search and a call to
btr_cur_t::search_info_update() might help.
btr_cur_t::search_leaf(): Do not invoke search_info_update()
if btr_search_guess_on_hash() failed due to contention.
btr_cur_t::pessimistic_search_leaf(): Do not invoke search_info_update()
on the change buffer tree. Previously, this condition was being checked
inside search_info_update().
btr_cur_t::search_leaf(): Do not attempt to use the adaptive
hash index for PAGE_CUR_G or PAGE_CUR_L, because those modes
expect an unequal result, and the adaptive hash index can only
deliver equal results.
btr_cur_t::check_mismatch(): Only handle PAGE_CUR_LE and PAGE_CUR_GE.
For PAGE_CUR_LE (bool ge=false), qualify a full match for the last
record of a page that is not at the end of the index. Previously,
an adaptive hash index lookup would fail when the record is at the end
of an index page but not at the end of the index. This would lead to
unnecessary rebuild of the adaptive hash index in read-only workloads.
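A simplified sketch of the corrected ge=false logic, with the page-position facts passed in as booleans (the real code inspects the cursor and the page):

  // Returns true on mismatch. For PAGE_CUR_LE (ge == false), an exact
  // full match is now accepted even on the last record of a page that is
  // not the last page of the index.
  bool check_mismatch_le(bool full_match, bool last_rec_on_page,
                         bool last_page_of_index)
  {
    if (full_match)
      return false;  // an equal record always satisfies PAGE_CUR_LE
    // Without a full match, the true predecessor could be on the next
    // page, so the guess cannot be trusted there:
    return last_rec_on_page && !last_page_of_index;
  }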
Reviewed by: Vladislav Lesin
Now that ut_fold_ulint_pair() and ut_fold_binary() are no longer needed
for anything other than compatibility with old InnoDB data files that may
use innodb_checksum_algorithm=innodb, let us move the code to a single
compilation unit.
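For context, a hedged sketch of the folding scheme these functions implement (the shift and mask constants here are illustrative, not InnoDB's):

  #include <cstddef>
  #include <cstdint>

  // Fold two words together; the real ut_fold_ulint_pair() mixes with
  // fixed pseudo-random masks in the same spirit.
  static uint64_t fold_pair(uint64_t n1, uint64_t n2)
  {
    return ((((n1 ^ n2 ^ 0x9e3779b9ULL) << 8) + n1) ^ 0x85ebca6bULL) + n2;
  }

  // Fold a byte string, as innodb_checksum_algorithm=innodb did via
  // ut_fold_binary().
  uint64_t fold_binary(const unsigned char* s, size_t len)
  {
    uint64_t fold = 0;
    for (size_t i = 0; i < len; i++)
      fold = fold_pair(fold, s[i]);
    return fold;
  }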
Reviewed by: Vladislav Lesin
Let us implement a simple fixed-size allocator for the adaptive hash
index, instead of complicating mem_heap_t or mem_block_info_t.
MEM_HEAP_BTR_SEARCH: Remove.
mem_block_info_t::free_block(), mem_heap_free_block_free(): Remove.
mem_heap_free_top(), mem_heap_get_top(): Remove.
btr_sea::partition::spare: Replaces mem_block_info_t::free_block.
This keeps one spare block per adaptive hash index partition, to
process an insert.
We must not wait for buf_pool.mutex while holding
any btr_sea::partition::latch. That is why we cache one block for
future allocations. This is protected by a new
btr_sea::partition::blocks_mutex in order to relieve pressure on
btr_sea::partition::latch.
btr_sea::partition::prepare_insert(): Replaces
btr_search_check_free_space_in_heap().
btr_sea::partition::erase(): Replaces ha_search_and_delete_if_found().
btr_sea::partition::cleanup_after_erase(): Replaces most of
ha_delete_hash_node(). Unlike the previous implementation, we will
retain a spare block for prepare_insert().
This should reduce some contention on buf_pool.mutex.
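A minimal sketch of the spare-block scheme, using std::mutex as a stand-in for the InnoDB mutex type; the real allocation comes from the buffer pool rather than operator new:

  #include <mutex>

  struct block { /* fixed-size storage for hash nodes */ };

  struct partition
  {
    std::mutex blocks_mutex;  // protects 'spare'; relieves 'latch' pressure
    block* spare = nullptr;   // one cached block for a future insert

    // Replaces btr_search_check_free_space_in_heap(): reserve memory
    // before the insert, while no partition latch is being held.
    void prepare_insert()
    {
      std::lock_guard<std::mutex> g(blocks_mutex);
      if (!spare)
        spare = new block;    // the real code takes a buffer pool block
    }

    block* take_spare()
    {
      std::lock_guard<std::mutex> g(blocks_mutex);
      block* b = spare;
      spare = nullptr;
      return b;
    }
  };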
btr_search.n_parts: Replaces btr_ahi_parts.
btr_search.enabled: Replaces btr_search_enabled. This must hold
whenever buf_block_t::index is set while a thread is holding a
btr_sea::partition::latch.
dict_index_t::search_info: Remove pointer indirection, and use
Atomic_relaxed or Atomic_counter for most fields.
btr_search_guess_on_hash(): Let the caller ensure that latch_mode is
BTR_MODIFY_LEAF or BTR_SEARCH_LEAF. Release btr_sea::partition::latch
before buffer-fixing the block. The page latch that we already acquired
is preventing buffer pool eviction. We must validate both
block->index and block->page.state while holding part.latch
in order to avoid race conditions with buffer page relocation
or buf_pool_t::resize().
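A self-contained sketch of that validation order (toy types; std::mutex stands in for part.latch):

  #include <atomic>
  #include <mutex>

  struct toy_block
  {
    std::atomic<const void*> index{nullptr};  // block->index
    std::atomic<int> state{1};                // block->page.state (>0 = valid)
    std::atomic<int> fix_count{0};            // buffer-fix count
  };

  std::mutex part_latch;  // btr_sea::partition::latch stand-in

  bool guess_on_hash(toy_block* block, const void* search_index)
  {
    {
      std::lock_guard<std::mutex> g(part_latch);
      // Validate both fields under the latch, to avoid races with page
      // relocation or buf_pool_t::resize().
      if (block->index.load() != search_index || block->state.load() <= 0)
        return false;
    }
    // The page latch that the caller already holds prevents eviction,
    // so the block may be buffer-fixed after releasing the latch.
    block->fix_count.fetch_add(1);
    return true;
  }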
btr_search_check_guess(): Remove the constant parameter
can_only_compare_to_cursor_rec=false.
ahi_node: Replaces ha_node_t.
This has been tested by running the regression test suite
with the adaptive hash index enabled:
./mtr --mysqld=--loose-innodb-adaptive-hash-index=ON
Reviewed by: Vladislav Lesin
log_t::persist(): Remove the parameter holding_latch, and assert
latch_holding_any(). We used to avoid acquiring a latch when log
resizing was not in progress. That allowed a race condition to occur
when log_t::write_checkpoint() had just completed log resizing.
In that case, we could wrongly invoke pmem_persist() on the old
log_sys.buf instead of the new one, which shortly before had been
known as log_sys.resize_buf.
log_write_persist(): A non-inline wrapper function that will
invoke log_sys.persist() while holding a shared log_sys.latch.
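A sketch of the wrapper's shape, with std::shared_mutex standing in for log_sys.latch:

  #include <shared_mutex>

  struct toy_log
  {
    std::shared_mutex latch;   // log_sys.latch stand-in
    void persist()             // asserts that the latch is held
    { /* write log_sys.buf to PMEM */ }
  };

  toy_log log_sys_toy;

  // In the spirit of log_write_persist(): hold the latch in shared mode
  // around persist(), so a concurrent resize cannot swap the buffer away.
  void log_write_persist_sketch()
  {
    std::shared_lock<std::shared_mutex> lk(log_sys_toy.latch);
    log_sys_toy.persist();
  }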
By default, CMAKE_BUILD_TYPE RelWithDebInfo or Release implies -DNDEBUG,
which disables the assert() macro. MariaDB is deviating from that.
Let us be explicit and use assert() only in debug builds.
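For illustration, one way to be explicit (assuming the conventional DBUG_OFF release-build macro; the macro name here is illustrative):

  /* Tie the assertion to the build's own debug flag instead of relying
     on CMake's default -DNDEBUG behaviour. */
  #ifdef DBUG_OFF
  # define dbg_assert(X) do {} while (0)
  #else
  # include <cassert>
  # define dbg_assert(X) assert(X)
  #endif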
This fixes up 1b8358d943
The macros ut_ad() and DBUG_ASSERT() can evaluate their argument twice.
That is wrong for any read-modify-write arguments.
Thanks to Nikita Malyavin for pointing this out.
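A self-contained demonstration of the pitfall:

  #include <cstdio>

  // A macro in this style evaluates its argument a second time when the
  // check fails, so a read-modify-write argument is applied twice:
  #define CHECK_TWICE(X) \
    do { if (!(X)) std::printf("failed: %d\n", (int)(X)); } while (0)

  int main()
  {
    int i = 0;
    CHECK_TWICE(++i == 0);  // fails: ++i runs in the test and again in printf
    std::printf("i=%d, not 1\n", i);  // prints i=2
    return 0;
  }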
recv_sys_t::parse(): When parsing an OPTION record, invoke
l.copy_if_needed() before checking if the payload is OPT_PAGE_CHECKSUM
followed by a 32-bit page checksum.
This fixes up the merge 57d4a242da of
commit 4179f93d28 (MDEV-18976).
The impact of this can be observed by running a debug instrumented
build on the test encryption.recovery_memory. There should be over
5,000 invocations of log_phys_t::page_checksum(). Without this fix,
there should be fewer than 100 of them (when the OPT_PAGE_CHECKSUM
byte happens to encrypt to itself).
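A self-contained sketch of the corrected ordering (all names and the toy "decryption" are illustrative):

  #include <cstdint>
  #include <cstring>

  static const uint8_t OPT_PAGE_CHECKSUM_TAG = 0;  // illustrative tag value

  // Stand-in for l.copy_if_needed(): materialize (here: toy-decrypt) the
  // payload bytes before anything inspects them.
  static void copy_if_needed(uint8_t* dst, const uint8_t* src, size_t len)
  {
    for (size_t i = 0; i < len; i++)
      dst[i] = src[i] ^ 0x5a;  // toy decryption
  }

  bool parse_option_record(const uint8_t* raw, size_t len)
  {
    if (len != 5)
      return false;            // subtype byte + 32-bit checksum expected
    uint8_t buf[5];
    copy_if_needed(buf, raw, len);          // copy first...
    if (buf[0] != OPT_PAGE_CHECKSUM_TAG)    // ...only then test the subtype
      return false;
    uint32_t crc;
    std::memcpy(&crc, buf + 1, 4);          // the 32-bit page checksum
    (void)crc;
    return true;
  }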
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
This fixes another regression that had been introduced in
commit b249a059da (MDEV-34850).
This should prevent failures of mariadb-backup --backup of
the following type:
mariabackup: Failed to read undo log tablespace space id …
and there is no undo tablespace truncation redo record.
This error has not been hit by our internal testing, and we
currently have no regression test to cover this.
recv_sys_t::parse<storing=NO>(): Do invoke
fil_space_set_recv_size_and_flags() and do parse enough of page 0
to facilitate that.
This fixes a regression that had been introduced in
commit b249a059da (MDEV-34850).
In a multi-batch crash recovery, we would fail to invoke
fil_space_set_recv_size_and_flags() while parsing the remaining log,
before starting the first recovery batch.
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
when testing MDEV-34539, create a table specifically for the test,
don't use a system table as a shortcut to save a couple of lines.
followup for 8d813f080b
use the same condition in
fill_schema_table_from_frm() when open_table_from_share() fails, as in
fill_schema_table_from_frm() when tdc_acquire_share() fails and as in
fill_schema_table_from_open() when open_table_from_share() fails
get_all_tables() skipped tables if the user has no privileges on
the schema itself and no granted privilege on any tables in the schema.
that is, it was skipping performance_schema tables (privileges
on them aren't explicitly granted, but internally hard-coded)
To fix:
* extend the ACL_internal_table_access::check() method with
  `bool any_combination_will_do` (see the sketch after this list)
* fix all perfschema privilege checks to take it into account.
* don't reuse table_acl_check object for all tables, initialize it
for every table otherwise GRANT_INTERNAL_INFO will leak
* remove incorrect privilege check from get_all_tables()
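A hedged sketch of the extended check (simplified types; the server uses privilege_t and its own result enum):

  struct ACL_internal_table_access
  {
    // any_combination_will_do == true: succeed if at least one requested
    // privilege bit is satisfied, instead of requiring all of them.
    virtual bool check(unsigned long want_access, unsigned long granted,
                       bool any_combination_will_do) const
    {
      return any_combination_will_do
               ? (want_access & granted) != 0
               : (want_access & granted) == want_access;
    }
    virtual ~ACL_internal_table_access() = default;
  };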
cannot have an assert in Warning_info::push_warning(),
because the SQL command SIGNAL can set an absolutely arbitrary
message, even an empty one or one ending with '\n'.
move the assert into push_warning() and my_message_sql().
followup for 9508a44c37
Most InnoDB functions do not throw any exceptions, not even indirectly
std::bad_alloc, which could be thrown by a C++ memory allocation function.
Let us annotate many functions with noexcept in order to reduce the code
footprint related to exception handling.
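A minimal illustration of the effect (not MariaDB code):

  #include <cstddef>

  // Reads through a pointer cannot throw; declaring that allows callers
  // (and the compiler) to omit unwinding paths around this call.
  std::size_t rec_first_byte(const unsigned char* rec) noexcept
  {
    return rec ? static_cast<std::size_t>(*rec) : 0;
  }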
Reviewed by: Thirunarayanan Balathandayuthapani
The issue is caused by a logic error in the Item_sum::get_tmp_table_item() method:
it resets arguments of the item to point to the result fields during
change_ref_to_tmp_fields() call. However, Item_sum arguments must not be modified.
It is enough for Item_sum objects to call the ancestor's implementation,
Item::get_tmp_table_item().
This fix is in accordance with MySQL commit 2e3dc09087c24798c90e05163ed3d931f6b93db3
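A toy sketch of the shape of the fix (not the server's class hierarchy):

  struct Item
  {
    virtual ~Item() = default;
    // Builds an Item that points to this item's result field in the
    // temporary table; the base behaviour is all an aggregate needs.
    virtual Item* get_tmp_table_item() { return this; /* simplified */ }
  };

  struct Item_sum : Item
  {
    Item* args[8];  // aggregate arguments: must never be repointed
    // After the fix: no override that rewrites args[] to result fields;
    // the inherited Item::get_tmp_table_item() is used instead.
  };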
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Add a simple test to verify the server behaves in a safe manner if configured
with ciphers that aren't compatible with the server certificate.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Add a simple test to verify that the server will fail to start up when no valid
cipher suites are passed to `ssl-cipher`.
As different TLS libraries and versions have differing cipher suite support, it
would be a good idea to ensure the server behaves in a safe manner if it is
configured with invalid cipher suites.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
The LOCK_global_system_variables must not be held when taking mutexes
such as LOCK_commit_ordered and LOCK_log, as this causes inconsistent
mutex locking order that can theoretically cause the server to
deadlock.
To avoid this, temporarily release LOCK_global_system_variables in two
system variable update functions, like it is done in many other
places.
Enforce the correct locking order at server startup, to more easily
catch (in debug builds) any remaining wrong orders that may be hidden
elsewhere in the code.
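A self-contained sketch of the unlock/relock pattern, with std::mutex stand-ins for the two server mutexes:

  #include <mutex>

  std::mutex lock_global_system_variables;  // held on the sysvar update path
  std::mutex lock_log;                      // by order rules, taken first

  void update_sysvar_needing_lock_log()
  {
    // The caller holds lock_global_system_variables. Release it so that
    // lock_log is never acquired underneath it, then take it back.
    lock_global_system_variables.unlock();
    {
      std::lock_guard<std::mutex> g(lock_log);
      // ... perform the part of the update that requires lock_log ...
    }
    lock_global_system_variables.lock();
  }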
Note that when this is merged to 11.4, similar unlock/lock of
LOCK_global_system_variables must be added in update_binlog_space_limit()
as is done in binlog_checksum_update() and fix_max_binlog_size(), as this
is a new function added in 11.4 that also needs the same fix. Tests will
fail with wrong mutex order until this is done.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
ha_innobase::delete_table(): Clear trx->dict_operation_lock_mode
after, not before invoking trx->rollback(), so that
row_undo_mod_parse_undo_rec() will be invoked with dict_locked=true
and dict_sys_t::freeze() will not be invoked for loading a table
definition. Inside dict_sys_t::freeze(), an assertion !have_any()
would fail when the current thread is already holding the latch.
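A toy sketch of the reordering (the assertion mimics what the undo path expects):

  #include <cassert>

  struct trx_t
  {
    bool dict_operation_lock_mode = true;
    void rollback()
    {
      // The undo path (row_undo_mod_parse_undo_rec()) must still see
      // dict_locked == true here, so dict_sys_t::freeze() is not entered.
      assert(dict_operation_lock_mode);
    }
  };

  void delete_table_sketch(trx_t* trx)
  {
    trx->rollback();                        // roll back first...
    trx->dict_operation_lock_mode = false;  // ...clear the mode afterwards
  }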
This fixes up commit c5fd9aa562 (MDEV-25919).
Reviewed by: Debarun Banerjee
during FLUSH PRIVILEGES, allow_all_hosts temporarily goes out of sync
with acl_check_hosts and acl_wild_hosts.
As it's tested in acl_check_host() without a mutex, let's re-test it
under a mutex to make sure the value is correct.
Note that it's just an optimization and it's ok to see outdated
allow_all_hosts value here.
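A self-contained sketch of the re-check, with std::mutex standing in for the ACL mutex:

  #include <atomic>
  #include <mutex>

  std::atomic<bool> allow_all_hosts{true};  // optimization flag
  std::mutex acl_mutex;                     // protects the ACL host caches

  bool host_is_allowed()
  {
    if (allow_all_hosts.load(std::memory_order_relaxed))
    {
      std::lock_guard<std::mutex> g(acl_mutex);
      if (allow_all_hosts.load(std::memory_order_relaxed))
        return true;                        // value confirmed under the mutex
      // else: FLUSH PRIVILEGES changed it concurrently; fall through
    }
    // ... check acl_check_hosts / acl_wild_hosts under the mutex ...
    return false;
  }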
* filter the message out of the test result
* remove the "feedback plugin:" prefix; it's a server message, not the plugin's
* downgrade it to a warning, because
  1) it's not a failure: no operation was aborted, the server still works
  2) it's something actionable, so not a [Note] either