In 69a4d6ae, an MTR test was added to verify that we handle multiple
invalid options. However, the test logic relied on a non-trivial regex
to filter out the noise in the logs.
Instead, we now simply search the logs for what we expect to find.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
If some fields of a unique key are nullable, a unique index can contain
several records with identical key fields as long as at least one of
those fields is NULL, because NULL != NULL.
When a transaction resumes after waiting on a record with at least one
key field equal to NULL, and the record stored in the persistent cursor
has been deleted, the persistent cursor can be restored to a record
whose key fields all equal the stored ones but where at least one field
is NULL. Such a record was wrongly treated as having the same unique
key as the record stored in the persistent cursor, which is incorrect
because NULL != NULL.
The fix is to check whether at least one unique key field is NULL in
the restored persistent cursor position and, if so, not to treat the
record as having the same unique key as the stored one.
dict_index_t::nulls_equal was removed: it was originally introduced for
"intrinsic tables", which never existed in MariaDB, and there is no
code that would set it to "true".
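A minimal self-contained sketch of the comparison rule (an illustrative
model only, not the InnoDB code):

  #include <iostream>
  #include <optional>
  #include <vector>

  // A stored unique-key value can only identify "the same record" if it
  // contains no NULLs: in SQL NULL != NULL, so a unique index may hold
  // several rows whose key fields match except that some of them are NULL.
  using key_value = std::vector<std::optional<int>>;

  static bool key_has_null(const key_value &k) {
    for (const auto &field : k)
      if (!field)
        return true;
    return false;
  }

  static bool same_unique_key(const key_value &stored,
                              const key_value &restored) {
    if (key_has_null(stored) || key_has_null(restored))
      return false;            // NULLs never compare equal
    return stored == restored; // all fields non-NULL and pairwise equal
  }

  int main() {
    key_value stored{1, std::nullopt};   // key saved in the persistent cursor
    key_value restored{1, std::nullopt}; // key of the record the cursor lands on
    // Field-wise the values look identical, but the NULL field means this
    // may be a different row, so it must not be treated as the same key.
    std::cout << std::boolalpha << same_unique_key(stored, restored) << "\n";
  }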
Reviewed by Marko Mäkelä.
The test failure in rpl.rpl_domain_id_filter_restart is caused by
MDEV-33887. That is, the test uses master_pos_wait() (called
indirectly by sync_slave_with_master) to wait for the replica to
catch up to the master. However, the waited-on transaction is
ignored by the configured
CHANGE MASTER TO IGNORE_DOMAIN_IDS=()
As MDEV-33887 reports, due to the IO thread updating the binlog
coordinates and the SQL thread updating the GTID state, if the
replica is stopped in-between these updates, the replica state will
be inconsistent. That is, the test expects that the GTID state will
be updated, so upon restart, the replica will be up-to-date.
However, if the replica is stopped before the SQL thread updates its
GTID state, then upon restart, the replica will fetch the previously
ignored event, which is no longer ignored upon restart, and execute
it. This leads to the sporadic extra row in t2.
This patch changes the test to use master_gtid_wait() instead of
master_pos_wait() to ensure the replica state is consistent with the
master state.
When navigating through the existing links in the README, it is not
immediately obvious where to go to find instructions on building
and testing the source code. Since the README is often the first
thing people see when looking at a repository, this information
should be front and centre so that newcomers to the project can
get set up as quickly as possible.
The condition checked the value of the leftmost byte before checking if
at least one byte is still available in the buffer.
Change the order of the checks in the condition: check that a byte is
available before checking the byte's value.
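A short sketch of the fixed ordering (illustrative buffer-parsing code,
not the original function):

  #include <cstdint>

  // With &&, the left operand is evaluated first, so the availability
  // test must come before the dereference; otherwise we read one byte
  // past the end of the buffer.
  static bool starts_with_marker(const uint8_t *pos, const uint8_t *end,
                                 uint8_t marker) {
    // Wrong: if (*pos == marker && pos < end)  -- reads *pos even at end
    // Fixed: check availability first, then the value.
    return pos < end && *pos == marker;
  }

  int main() {
    uint8_t buf[] = {0xFE};
    const uint8_t *end = buf + sizeof buf;
    return starts_with_marker(end, end, 0xFE); // safely false at the end
  }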
increase the MASTER_CONNECT_RETRY time under valgrind,
otherwise the slave gives up retrying before the master is ready
also, cosmetic cleanup of rpl_semi_sync_master_shutdown.test
The crash when running mysqlbinlog on a binlog file containing a
SEQUENCE was caused by the MDEV-29621 fixes, which introduced a block
without checking whether it is executed by the slave applier or by the
binlog (mysqlbinlog) applier.
The block is meaningful only for the parallel slave applier, so it is
safe to fix this bug by identifying the actual applier and skipping the
block when it is mysqlbinlog.
TIME-like string and numeric arguments to TIMEDIFF()
can get additional fractional seconds during the supported
TIME range adjustment in get_time().
For example, during TIMEDIFF('839:00:00','00:00:00') evaluation
in Item_func_timediff::get_date(), the call for args[0]->get_time()
returns MYSQL_TIME '838:59:59.999999'.
Item_func_timediff::get_date() did not handle these extra digits
and returned a MYSQL_TIME result with fractional digits outside
of Item_func_timediff::decimals. This mismatch could further be
caught by a DBUG_ASSERT() in various other pieces of the code,
leading to a crash.
Fix:
If get_time() returned MYSQL_TIMESTAMP_TIME,
truncate all extra digits using my_time_trunc(&l_time, decimals).
This guarantees that the rest of the code returns a MYSQL_TIME
whose second_part does not conflict with Item_func_timediff::decimals.
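A small sketch of the truncation, applied to a bare microsecond value
rather than a full MYSQL_TIME (my_time_trunc() in the server operates
on the whole structure):

  #include <cstdint>
  #include <iostream>

  // Drop fractional-second digits beyond the declared precision so that
  // second_part never carries more digits than `decimals`.
  static uint32_t trunc_second_part(uint32_t second_part, unsigned decimals) {
    static const uint32_t scale[7] = {1000000, 100000, 10000, 1000, 100, 10, 1};
    uint32_t s = scale[decimals > 6 ? 6 : decimals];
    return second_part / s * s;
  }

  int main() {
    // '838:59:59.999999' produced by range clipping, but decimals == 0:
    std::cout << trunc_second_part(999999, 0) << "\n"; // 0
    std::cout << trunc_second_part(999999, 3) << "\n"; // 999000
  }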
In commit d74d95961a (MDEV-18543)
there was an error that would cause the hidden metadata record
to be deleted, and therefore cause the table to appear corrupted
when it is reloaded into the data dictionary cache.
PageConverter::update_records(): Do not delete the metadata record,
but do validate it.
RecIterator::open(): Make the API more similar to 10.6, to simplify
merges.
When the system variable @@debug_dbug was assigned the value of
some expression, Sys_debug_dbug::do_check() did not properly
convert the value from the expression's character set to utf8.
So the value was erroneously reinterpreted as utf8 without
conversion. In case of a tricky expression character set
(e.g. utf16le), this led to unexpected results.
Fix:
Reuse Sys_var_charptr::do_string_check() in Sys_debug_dbug::do_check().
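A standalone illustration of why reinterpreting instead of converting
goes wrong (plain C++, not the server's charset machinery):

  #include <codecvt>
  #include <cstring>
  #include <iostream>
  #include <locale>
  #include <string>

  int main() {
    // UTF-16LE encoding of the option string "d,t": every ASCII character
    // is followed by a zero byte.
    const char raw[] = {'d', 0, ',', 0, 't', 0};
    std::string reinterpreted(raw, sizeof raw);

    // Re-interpreting the bytes as utf8 embeds NUL bytes, so C-string
    // handling sees only "d".
    std::cout << "no conversion: \"" << reinterpreted.c_str() << "\" ("
              << std::strlen(reinterpreted.c_str()) << " visible bytes)\n";

    // A real conversion from UTF-16 to UTF-8 keeps the whole value.
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
    std::cout << "converted:     \"" << conv.to_bytes(u"d,t") << "\"\n";
  }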
Same as MDEV-29579. For some reason, libodbc does not clean up
properly if unloaded too early by the dlclose() of spider. So we add
UNIQUE symbols to spider so that dlclose() does not actually unload
spider (and it is therefore not reloaded later).
This change, however, uncovers some hidden problems in the spider
codebase, to address which we move the initialisation of some spider
global variables into the initialisation of spider itself.
Spider has some global variables whose initialisation should be done
in the initialisation of spider itself; otherwise, if spider were
re-initialised without these symbols having been unloaded, the values
could be inconsistent and cause issues.
One such issue is caused by the variables
spider_mon_table_cache_version and spider_mon_table_cache_version_req.
They are used for resetting the spider monitoring table cache and have
initial values of 0 and 1 respectively. It always holds that
spider_mon_table_cache_version_req >= spider_mon_table_cache_version.
When the inequality is strict, the cache is reset,
spider_mon_table_cache_version is brought up to
spider_mon_table_cache_version_req, and the cache is then searched for
matching table_name, db_name and link_idx. When the two are equal,
no reset happens and the cache is searched directly.
When spider is re-initialised without resetting
spider_mon_table_cache_version and spider_mon_table_cache_version_req,
they keep the equal values set by the previous cache reset action,
while the cache itself was emptied in the previous spider deinit. The
cache is therefore never rebuilt, and lookups unexpectedly fail with
HA_ERR_KEY_NOT_FOUND.
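A minimal model of the version pair and the reset rule (illustrative
only; the real logic lives in the spider monitoring-table cache code):

  #include <iostream>

  // The cache is (re)built whenever the requested version is strictly
  // greater than the current one. Both counters must start from their
  // initial values (0 and 1) on every plugin init; otherwise a fresh,
  // empty cache is searched without ever being rebuilt and lookups fail.
  struct mon_table_cache {
    unsigned long version = 0;     // version of the currently built cache
    unsigned long version_req = 1; // requested version; bumped to force a reset

    bool lookup(const char *table_name) {
      if (version_req > version) {
        rebuild();                 // reset and repopulate the cache
        version = version_req;
      }
      return search(table_name);   // plain search when the versions are equal
    }

    void rebuild() { std::cout << "cache rebuilt\n"; }
    bool search(const char *) { return true; }
  };

  int main() {
    mon_table_cache cache;  // re-initialised on every spider init
    cache.lookup("t1");     // the first lookup triggers the initial rebuild
  }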
An alternative way to fix this issue would be to call the spider udf
spider_flush_mon_cache_table(), which increments
spider_mon_table_cache_version_req thus making sure the inequality is
strict. However, there's no reason for spider to initialise these
global variables on dlopen(), rather than on spider init, which is
cleaner and "purer".
To reproduce this issue, simply revert the changes involving the two
variables and then run:
mtr --no-reorder spider.ha{,_part}
The signal handler thread can use various runtime resources
(e.g. master-info structures) when processing a SIGHUP, due to calling
into reload_acl_and_cache(). Currently, the shutdown process waits for
the termination of the signal thread only after performing cleanup.
This could cause resources actively used by the signal handler to be
freed while reload_acl_and_cache() is still running.
The specific resource that caused MDEV-30260 is a race condition for
the hostname_cache, such that mysqld would delete it in
clean_up()::hostname_cache_free(), before the signal handler would
use it in reload_acl_and_cache()::hostname_cache_refresh().
Another similar resource is active_mi/master_info_index. There
was a race between its deletion by the main thread in end_slave()
and its usage by the signal handler as part of
Master_info_index::flush_all_relay_logs.read(active_mi) in
reload_acl_and_cache().
This patch fixes these race conditions by moving the point where
server shutdown waits for the signal handler to die to after
server-level threads have been killed (i.e., as a last step of
close_connections()). With respect to the hostname_cache, active_mi
and master_info_cache, this ensures that they cannot be destroyed
while the signal handler is still active and potentially using
them.
Additionally:
1) This requires that Events memory is still in place for SIGHUP
handling's mysql_print_status(). So event deinitialization is moved
into clean_up(), but the event scheduler still needs to be stopped
in close_connections() at the same spot.
2) The function kill_server_thread is no longer used, so it is
deleted.
3) The timeout to wait for the death of the signal thread was not
consistent with the comment. The comment mentioned up to 10 seconds,
whereas it was actually 0.01s. The code has been fixed to wait up to
10 seconds.
4) A warning has been added if the signal handler thread fails to
exit in time.
5) Added a pthread_join() at the end of wait_for_signal_thread_to_end(),
performed even if the thread has not ended within 10s (after the
warning). Note this also removes the pthread_detached attribute from
the signal_thread to allow for the pthread_join(). See the sketch
after this list.
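A hedged sketch of the wait-then-join pattern, with made-up names (the
real code ties this into close_connections() and the server's signal
thread):

  #include <atomic>
  #include <cstdio>
  #include <pthread.h>
  #include <unistd.h>

  // The signal thread is created joinable (no PTHREAD_CREATE_DETACHED);
  // shutdown polls for up to ~10 seconds, warns if the thread is still
  // alive, and finally joins it so its resources are reclaimed.
  static std::atomic<bool> signal_thread_done{false};

  static void *signal_handler_thread(void *) {
    // ... would process SIGHUP etc. here ...
    signal_thread_done = true;
    return nullptr;
  }

  static void wait_for_signal_thread_to_end(pthread_t thr) {
    for (int i = 0; i < 1000 && !signal_thread_done; i++)
      usleep(10000);                 // 1000 * 10ms = up to 10 seconds
    if (!signal_thread_done)
      fprintf(stderr, "Warning: signal handler did not exit in time\n");
    pthread_join(thr, nullptr);      // join even if it was late
  }

  int main() {
    pthread_t thr;
    pthread_create(&thr, nullptr, signal_handler_thread, nullptr);
    // ... close_connections() would kill server-level threads first ...
    wait_for_signal_thread_to_end(thr);
  }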
Reviewed By:
===========
Vladislav Vaintroub <wlad@mariadb.com>
Andrei Elkin <andrei.elkin@mariadb.com>
What's happening:
1. Query_cache::insert() locks the QC and verifies that it's enabled
2. A parallel thread tries to disable it. The trylock fails (QC is locked),
so the status becomes DISABLE_REQUEST
3. Query_cache::insert() calls Query_cache::write_result_data()
which allocates a new block and unlocks the QC.
4. Query_cache::unlock() notices there are no more QC users and a
pending DISABLE_REQUEST so it disables the QC and frees all the
memory, including the new block that was just allocated
5. Query_cache::write_result_data() proceeds to write into the freed block
Fix: change m_cache_status under a mutex.
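A minimal illustration of the fix's locking pattern (self-contained,
not the Query_cache class itself):

  #include <mutex>

  // Reads and writes of the status flag go through one mutex, so a
  // DISABLE_REQUEST can no longer slip in unobserved between checking
  // that the cache is enabled and using a freshly allocated block.
  class cache_status {
  public:
    enum state { ENABLED, DISABLE_REQUEST, DISABLED };

    void set(state s) {
      std::lock_guard<std::mutex> guard(lock_);
      status_ = s;
    }
    state get() {
      std::lock_guard<std::mutex> guard(lock_);
      return status_;
    }

  private:
    std::mutex lock_;
    state status_ = ENABLED;
  };

  int main() {
    cache_status m_cache_status;  // name borrowed from the commit message
    m_cache_status.set(cache_status::DISABLE_REQUEST);
    return m_cache_status.get() == cache_status::DISABLE_REQUEST ? 0 : 1;
  }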
Approved by Oleksandr Byelkin <sanja@mariadb.com>
These patches:
# commit 74891ed257
#
# MDEV-11514, MDEV-11497, MDEV-11554, MDEV-11555 - IN and CASE type aggregation problems
# commit 53499cd1ea
#
# MDEV-31303 Key not used when IN clause has both signed and usigned values
earlier fixed MDEV-18898.
Adding only an MTR case.
modified: mysql-test/main/func_in.result
modified: mysql-test/main/func_in.test
The problem was a too tight condition in ha_commit_trans that did not
allow non-transactional storage engines to participate in 2PC in the
Galera case. Allowing this is required because a transaction using
e.g. stored procedures might read the mysql.proc table inside the
transaction, and such tables currently use the Aria storage engine,
which does not support 2PC.
Fixed by allowing read-only participation in a 2PC transaction for
storage engines that do not support two-phase commit. These are
committed later, separately.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Before this patch, the maintenance timer would tick every 0.4 seconds.
After this patch, the timer ticks every 0.4 seconds only when necessary
(when there is delayed thread creation pending), switching off
completely after 20 seconds of being idle.
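A sketch of the described idle shutoff, with invented names:

  #include <chrono>

  // The timer keeps its 0.4s period only while there is work pending
  // and disarms itself after 20 seconds without any.
  struct maintenance_timer {
    std::chrono::steady_clock::time_point last_work =
        std::chrono::steady_clock::now();
    bool armed = true;

    void tick(bool work_pending) {
      auto now = std::chrono::steady_clock::now();
      if (work_pending)
        last_work = now;              // activity resets the idle clock
      else if (now - last_work > std::chrono::seconds(20))
        armed = false;                // switch the timer off completely
    }
  };

  int main() {
    maintenance_timer t;
    t.tick(false);                    // still within the 20s grace period
    return t.armed ? 0 : 1;
  }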
Only locked partitions will participate in the query in this case.
Chances are
that not-locked partitions were not opened, which is the cause of the
crash in the added test case spider/bugfix.mdev_33731 without this
patch.
Commit 6dce6aeceb breaks out of a loop
in ha_partition::info when some partitions aren't opened, in which
case the auto_increment_value assertion will fail. This commit patches
that hole.
The spider group by handler is created in
JOIN::make_aggr_tables_info(), by which time calls to
substitute_for_best_equal_field() should have already removed all the
multiple equalities (i.e. Item_equal, with MULT_EQUAL_FUNC func_type).
Therefore, if there are still such items, it is deemed an optimizer
bug and the creation of the group by handler is skipped.
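A self-contained model of the check (not the server classes; the commit
refers to Item_equal's MULT_EQUAL_FUNC func_type):

  #include <vector>

  // By the time the group by handler is created,
  // substitute_for_best_equal_field() should have removed every multiple
  // equality, so finding one left over is treated as an optimizer bug and
  // the handler is simply not created for that query.
  enum item_func_type { EQ_FUNC, MULT_EQUAL_FUNC /* Item_equal */ };

  static bool can_create_group_by_handler(
      const std::vector<item_func_type> &cond_items) {
    for (item_func_type t : cond_items)
      if (t == MULT_EQUAL_FUNC)
        return false;  // leftover multiple equality: skip the handler
    return true;
  }

  int main() {
    return can_create_group_by_handler({EQ_FUNC, MULT_EQUAL_FUNC}) ? 1 : 0;
  }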
Also removed ITEM_FUNC_TIMESTAMPDIFF_ARE_PUBLIC.
Similar to pr#2225, with the testcase adapted from that patch:
--8<---------------cut here---------------start------------->8---
From 884f7c6df1 Mon Sep 17 00:00:00 2001
From: "Norio Akagi (norakagi)" <norakagi@amazon.com>
Date: Wed, 3 Aug 2022 23:30:34 -0700
Subject: [PATCH] [MDEV-28992] Push down TIMESTAMP_DIFF in spider
This changes so that TIMESTAMP_DIFF function in a query is pushed down and works natively in Spider.
Instead of directly accessing item's member, now we can rely on a public accessor method to make it work.
Unit tests are added under spider.pushdown_timestamp_diff.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
--8<---------------cut here---------------end--------------->8---
the value of 200 isn't enough for some tests anymore; this causes
some random threads to not be instrumented, and any table operations
there are not reflected in the perfschema. If, say, a DROP TABLE
doesn't change the perfschema state, perfschema tables might show
ghost tables that no longer exist in the server
- InnoDB unnecessarily reserves free extents during blob
page allocation, even though btr_page_alloc() can handle
reserving an extent when the existing ones run out of pages.
It's important to fail early and only continue with the include after
the boost version check succeeds.
The build needs to succeed on earlier versions too, so we can't just
fail if the version is too new. As such, do a version check.
This commit fixes the following issues:
- Memory leak checking enabled for mysqltest. This covers all cases
except calls to die(), which only happen in case of internal failures
in mysqltest. die() is no longer called when the result files differ.
- One can now run mtr --embedded without failures (this crashed or hung
before)
- cleanup_and_exit() has a new parameter that indicates that it is called
from die(), in which case we should not do memory leak checks. We now
always call cleanup_and_exit() instead of exit() to be able to free up
memory and discover memory leaks.
- Lots of new asserts to catch error conditions.
- More DBUG statements.
- Fixed that all results are freed in mysqltest (Fixed a memory leak in
mysqltest when using prepared statements).
- Fixed race condition in do_stmt_close() that caused embedded server
to not free memory. (Memory leak in mysqltest with embedded server).
- Fixed two memory leaks in embedded server when using prepared statements.
These memory leaks caused timeout hangs in mtr when the server was
compiled with safemalloc. This issue was not noticed (except as
timeouts), as memory report checking was done but its output was
disabled.
it seems that the test can end up with the IO thread either running or
not; there's a comment about it. Thus stop_slave_io.inc is told
to ignore errors.
Make stop_slave_io.inc also disable warnings in this case,
in particular "1255 Slave already has been stopped"
on a busy system it might take time for buffer_page_written_index_leaf
to reach the correct value. Wait for it.
also, tag otherwise identical statements so that they differ in the
result file.