The reason for the ASAN report was that the MERGE and MyISAM files
had different key definitions, which is not allowed.
Fixed by ensuring that the MERGE code does not copy more key stats
than what is in the MyISAM file.
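A minimal sketch of the clamping idea (hypothetical names, not the
actual MyISAM/MERGE code, which operates on its own key statistics
structures):

  #include <algorithm>
  #include <cstring>

  /* Copy no more key statistics than the underlying MyISAM file
     provides, so a mismatching MERGE definition cannot cause an
     out-of-bounds read. */
  void copy_key_stats(double *dst, const double *src,
                      size_t merge_keyparts, size_t myisam_keyparts)
  {
    size_t n= std::min(merge_keyparts, myisam_keyparts);
    memcpy(dst, src, n * sizeof(double));
  }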
Other things:
- Give an error if different MyISAM files have different numbers of
key parts.
This ensures that no mtr test can change install.db after its initial
creation, as changing it while another thread is copying it will lead
to failures in at least InnoDB and Aria recovery.
Fixed spider/bugfix.mdev_30370, which wrongly used install.db.
When a query does implicit grouping and the join operation produces an
empty result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.
What happens is that end_send_group() is called with a const row but
without any rows matching the WHERE clause.
The latter is indicated by 'join->first_record' not being set.
This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.
The fix used here is to produce NULL-complemented records for constant
tables as well. Also, restore the constant tables' records afterwards in
case we're in a subquery which may get re-executed.
An alternative fix would have been to make item->no_rows_in_result()
also work with Item_field objects.
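A minimal stand-alone sketch of the chosen fix, using a hypothetical
ConstTable type in place of the server's TABLE objects:

  #include <cstddef>

  struct ConstTable { bool null_row= false; /* saved record, fields */ };

  /* When the join found no matching rows, present constant tables as
     NULL-complemented too, then restore them so that a re-executed
     subquery still sees the original constant rows. */
  void send_aggregated_row_for_empty_result(ConstTable *tables, size_t n)
  {
    for (size_t i= 0; i < n; i++)
      tables[i].null_row= true;    // fields of these tables now read as NULL
    /* ... evaluate items and send the single implicitly grouped row ... */
    for (size_t i= 0; i < n; i++)
      tables[i].null_row= false;   // undo, in case the subquery runs again
  }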
There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
maybe_null, which is required if the table rows should be regarded as
null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
didn't take into account all uses of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
function.
- join->clear() does not use a table_map argument to clear_tables(),
which causes it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
was before clear_tables().
Main bug fix was to always use a table_map argument to clear_tables() and
always use join->clear() and clear_tables() together with unclear_tables().
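A stand-alone sketch of that pairing, with hypothetical Tab/table_map
stand-ins for the server types: clear_tables() records which tables it
actually cleared, so unclear_tables() restores exactly those, constant
tables included.

  #include <cstddef>

  typedef unsigned long long table_map;   // bitmap of tables
  struct Tab { bool null_row= false; };

  void clear_tables(Tab *tabs, size_t n, table_map *cleared)
  {
    for (size_t i= 0; i < n; i++)
      if (!tabs[i].null_row)
      {
        tabs[i].null_row= true;            // mark row as NULL-complemented
        *cleared|= table_map(1) << i;      // remember that we changed it
      }
  }

  void unclear_tables(Tab *tabs, size_t n, table_map cleared)
  {
    for (size_t i= 0; i < n; i++)
      if (cleared & (table_map(1) << i))
        tabs[i].null_row= false;           // restore only what was cleared
  }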
Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
is called.
- Removed unused argument from setup_end_select_func().
- More code comments
- Ensure that end_send_group() modifies the same fields as are in the
result set.
- Changed return_zero_rows() to use pointers instead of references,
similar to the rest of the code.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The problem seems to be a deadlock between KILL command execution
and BF abort issued by an applier, where:
* KILL has locked the victim's LOCK_thd_kill and LOCK_thd_data.
* The applier holds the InnoDB-side global lock mutex and the victim's
  trx mutex.
* KILL is calling innobase_kill_query, and is blocked by the InnoDB
  global lock mutex.
* The applier is in wsrep_innobase_kill_one_trx and is blocked by the
  victim's LOCK_thd_kill.
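A minimal stand-alone illustration of this circular wait, with
std::mutex objects standing in for the server/InnoDB mutexes (the
LOCK_thd_data and trx mutex steps are omitted for brevity):

  #include <mutex>

  std::mutex lock_thd_kill;    // stands in for the victim's LOCK_thd_kill
  std::mutex lock_sys_mutex;   // stands in for the InnoDB global lock mutex

  void kill_command_path()     // KILL execution
  {
    std::lock_guard<std::mutex> thd(lock_thd_kill);   // taken first
    std::lock_guard<std::mutex> sys(lock_sys_mutex);  // blocks while the
                                                      // applier holds it
  }

  void applier_bf_abort_path() // applier BF abort
  {
    std::lock_guard<std::mutex> sys(lock_sys_mutex);  // taken first
    std::lock_guard<std::mutex> thd(lock_thd_kill);   // blocks -> deadlock
  }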
The fix in this commit removes the TOI replication of the KILL command
and makes KILL execution a less intrusive operation. Aborting the
victim now happens by using awake_no_mutex() and ha_abort_transaction().
If the KILL happens when the transaction is committing, the
KILL operation is postponed until the statement
has completed, in order to avoid the KILL interrupting commit
processing.
Notable changes in this commit:
* A wsrep client connection's error state may remain sticky after the
  client connection is closed. This error message would then pop
  up for the next client session issuing its first SQL statement.
  This problem was exposed by the test galera.galera_bf_kill.
  The fix is to reset the wsrep client error state before a THD is
  reused for the next connection.
* Release THD locks in wsrep_abort_transaction when locking
  InnoDB mutexes. This guarantees the same locking order as with
  applier BF aborting.
* BF abort from MDL was changed to do BF abort on server/wsrep-lib
side first, and only then do the BF abort on InnoDB side. This
removes the need to call back from InnoDB for BF aborts which originate
from MDL and simplifies the locking.
* Removed wsrep_thd_set_wsrep_aborter() from service_wsrep.h.
  The manipulation of the wsrep_aborter can be done solely on the
  server side. Moreover, it is now a debug-only variable and
  could be excluded from optimized builds.
* Remove LOCK_thd_kill from wsrep_thd_LOCK/UNLOCK to allow more
  fine-grained locking for SR BF abort, which may require locking
  the victim's LOCK_thd_kill. Added explicit calls to
  wsrep_thd_kill_LOCK/UNLOCK where appropriate.
* Wsrep-lib was updated to a version which allows external
  locking for BF abort calls.
Changes to MTR tests:
* Disable galera_bf_abort_group_commit. This test is going to
be removed (MDEV-30855).
* Record galera_gcache_recover_manytrx as the result file was incomplete.
  Trivial change.
* Make galera_create_table_as_select more deterministic:
  Wait until CTAS execution has reached the MDL wait for the multi-master
  conflict case. The expected error from a multi-master conflict is
  ER_QUERY_INTERRUPTED. This is because CTAS does not yet have an open
  wsrep transaction when it is waiting for MDL, so the query gets
  interrupted instead of BF aborted. This should be addressed in a
  separate task.
* A new test galera_kill_group_commit to verify correct behavior
when KILL is executed while the transaction is committing.
Co-authored-by: Seppo Jaakola <seppo.jaakola@iki.fi>
Co-authored-by: Jan Lindström <jan.lindstrom@galeracluster.com>
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add .gitlab-ci.yml file to earliest supported branch to enable
automated building and testing for all MariaDB major branches.
Note to mergers:
GitLab CI is available for branches >= 10.6. This commit includes a
GitLab CI file identical to that in branches >= 10.6, except for the
MARIADB_MAJOR_VERSION variable which should reflect the branch version.
A modified CI will be included in branches 10.4 with PR !2418.
Also changed is the `allow_failure: true` for the MSAN build,
which should be merged up to later branches.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
There is a window between the mutex_exit(&fil_system.mutex) and
mutex_enter(&fil_system.mutex) calls in fil_node_open_file(). During this
window another thread can open the node, and the ut_ad(!node->is_open())
assertion in fil_node_open_file_low() can fail.
The fix is not to open the node if it was already opened by another thread.
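A minimal stand-alone sketch of the resulting re-check pattern, with
std::mutex and a hypothetical Node type standing in for
fil_system.mutex and fil_node_t:

  #include <mutex>

  struct Node { bool is_open= false; };
  std::mutex fil_system_mutex;           // stand-in for fil_system.mutex

  void open_node(Node *node)
  {
    std::unique_lock<std::mutex> lock(fil_system_mutex);
    if (node->is_open)
      return;                            // nothing to do
    lock.unlock();                       // the window described above
    /* ... prepare the file handle without holding the mutex ... */
    lock.lock();
    if (node->is_open)                   // re-check: another thread won
      return;
    node->is_open= true;                 // safe: the mutex is held again
  }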
trx_purge_truncate_history(): Only call trx_purge_truncate_rseg_history()
if the rollback segment is safe to process. This will avoid leaking undo
log pages that are not yet ready to be processed. This fixes a regression
that was introduced in
commit 0de3be8cfd (MDEV-30671).
trx_sys_t::any_active_transactions(): Separately count XA PREPARE
transactions.
srv_purge_should_exit(): Terminate slow shutdown if the history size
does not change and XA PREPARE transactions exist in the system.
This will avoid a hang of the test innodb.recovery_shutdown.
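A sketch of only the new exit condition, with hypothetical counters for
what trx_sys_t::any_active_transactions() now reports separately (the
existing shutdown conditions are not shown):

  #include <cstddef>

  /* Allow a slow shutdown to finish when the purge history has stopped
     shrinking and the only remaining transactions are in XA PREPARE
     state. */
  bool purge_should_exit(size_t history_size, size_t prev_history_size,
                         size_t active_trx, size_t xa_prepared_trx)
  {
    if (active_trx > xa_prepared_trx)
      return false;                 // ordinary transactions still active
    return xa_prepared_trx > 0 && history_size == prev_history_size;
  }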
Tested by: Matthias Leich
The CODEOWNERS file was added almost 3 years ago but never saw any
adoption. Only one person (me) used it, to mark what files I maintain
and for which I wish to review commits. No other maintainers or code
paths were added, so remove it for clarity.
buf_read_page_low(): Remove an error message and a debug assertion
that can be triggered when using innodb_page_size=4k and
innodb_file_per_table=0. In that case, buf_read_ahead_linear()
may be invoked on page 255, which is one less than the first
page of the doublewrite buffer (256).
fil_space_t::flush_freed(): Renamed from buf_flush_freed_pages();
this is a backport of aa45850687 from 10.6.
Invoke log_write_up_to() on last_freed_lsn, instead of avoiding
the operation when the log has not yet been written.
A more costly alternative would be for log_checkpoint() to invoke
this function on every affected tablespace.
buf_read_ahead_linear(): Correct some calculations that were broken
in commit b1ab211dee (MDEV-15053).
Thanks to Daniel Black for providing a test case and initial debugging.
Tested by: Matthias Leich
The problem, introduced in the patch for MDEV-26301:
When check_join_cache_usage() decides not to use the join buffer, it must
adjust the access method accordingly. For BNL-H joins this means switching
from pseudo-"ref access" (with index=MAX_KEY) to some other access method.
Failing to do this will cause assertions down the line when code that is
not aware of BNL-H tries to initialize index use for ref access with
index=MAX_KEY.
The fix is to follow the regular code path to disable the join buffer for
the join_tab ("goto no_join_cache") instead of just returning from
check_join_cache_usage().
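A minimal illustration of the control-flow change, with hypothetical
names; the real code resets the join_tab's pseudo-ref setup in the
shared "no cache" cleanup path:

  enum { MAX_KEY_SKETCH= 64 };   // stand-in for MAX_KEY

  struct JoinTabSketch { unsigned index; bool use_join_cache; };

  void check_join_cache_usage_sketch(JoinTabSketch *tab, bool rejected)
  {
    if (rejected)
      goto no_join_cache;          // fall into the common cleanup path
    tab->use_join_cache= true;
    return;
  no_join_cache:
    tab->use_join_cache= false;
    if (tab->index == MAX_KEY_SKETCH)
      tab->index= 0;               // drop the BNL-H pseudo "ref access"
                                   // marker (illustrative only)
  }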
RocksDB (in a submodule) has to include <cstdint> to use uint64_t,
but it doesn't. Until the submodule is upgraded, let's replace the
problematic types with something that's available.
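For illustration only, the include the submodule is missing:

  #include <cstdint>   // uint64_t is only guaranteed to be declared
                       // after including <cstdint> (or <stdint.h>)

  uint64_t counter= 0; // may fail to compile if the header above is
                       // only pulled in transitively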
select_insert::store_values() must reset the has_value_set bitmap before
every row, just like mysql_insert() does, because ON DUPLICATE KEY UPDATE
and triggers modify it.
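A stand-alone sketch of the per-row reset, with std::bitset standing in
for the server's has_value_set bitmap:

  #include <bitset>

  struct RowBuffer { std::bitset<64> has_value_set; /* field values */ };

  void store_values_sketch(RowBuffer *row)
  {
    row->has_value_set.reset();  // clear before filling each row;
                                 // triggers and ON DUPLICATE KEY UPDATE
                                 // may have set bits for the previous row
    /* ... assign the row's fields, setting bits as values are stored ... */
  }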
This patch optimizes the number of refills for the lateral derived table
to which a materialized derived table subject to split optimization is
converted. This optimized number of refills is now taken as the
expected number of refills of the materialized derived table when
searching for the best possible splitting of the table.
1. log_event.cc stuff should go into log_event_server.cc
2. the test's wait condition is textually different in 10.5, fixed.
3. pre-exec 'optimistic' global var value is correct for 10.5 indeed.
- Update wsrep-lib, which contains a fix for the assertion
- Fix error handling for appending a fragment to the streaming log;
  make sure tables are closed after rollback.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Adding virtual methods to class Schema:
make_item_func_replace()
make_item_func_substr()
make_item_func_trim()
This is a non-functional preparatory change for MDEV-27744.
Variant #2.
When Histogram::point_selectivity() sees that the point value of interest
falls into one bucket, it tries to guess whether the bucket has many
different (unpopular) values or a few popular values. (The number of
rows is fixed, as it's a Height-balanced histogram).
The basis for this guess is the "width" of the value range the bucket
covers. Buckets covering wider value ranges are assumed to contain
values with proportionally lower frequencies.
This is just [brave] guesswork. For a very narrow bucket, it may
produce an estimate that's larger than the total number of rows in the
bucket or even in the whole table.
Remove the guesswork and replace it with basic logic: return
either the per-table average selectivity of col=const, or selectivity
of one bucket, whichever is lower.
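A sketch of the replacement logic with hypothetical inputs: avg_sel is
the table-wide selectivity of col=const (e.g. 1/NDV) and bucket_sel is
the fraction of rows covered by one bucket.

  #include <algorithm>

  double point_selectivity_sketch(double avg_sel, double bucket_sel)
  {
    // Never estimate above what a single bucket holds, and never above
    // the table-wide average for an equality on this column.
    return std::min(avg_sel, bucket_sel);
  }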