DBUG_ASSERT(0) was added by MDEV-17554 (auto-create history
partitions) as an experimental measure. Testing has shown that this
conditional branch of can_recover_from_failed_open() can be reached
due to an MDL deadlock.
The fix replaces the DBUG_ASSERT with a more specific one for
!OT_ADD_HISTORY_PARTITION.
The test case was synchronized to reproduce the deadlock reliably and
commented with the execution path of MDL locking for the Good (no
deadlock) and Bad (deadlock) cases. The logging was done with the help
of the preceding patch for debug MDL tracing.
Ever since commit 9608773f75
the InnoDB persistent statistics are enabled on all InnoDB tables
by default. We must filter out any output that indicates that the
statistics tables are being internally accessed by InnoDB.
The test was reported to fail sporadically with this diff:
--- mysql-test/main/information_schema_tables.result
+++ mysql-test/main/information_schema_tables.reject
@@ -21,6 +21,8 @@
disconnect con1;
connection default;
DROP VIEW IF EXISTS vv;
+Warnings:
+Note 4092 Unknown VIEW: 'test.vv'
in the "The originally reported non-deterministic test" part.
Disabling warnings around the DROP VIEW statement.
Now INSERT, UPDATE, ALTER statements involving incompatible data type pairs, e.g.:
UPDATE t1 SET col_inet6=col_int;
INSERT INTO t1 (col_inet6) SELECT col_int FROM t2;
ALTER TABLE t1 MODIFY col_inet6 INT;
consistently return an error at the statement preparation time:
ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
and abort the statement before starting to iterate rows.
This error is the same as the one raised for queries like:
SELECT col_inet6 FROM t1 UNION SELECT col_int FROM t2;
SELECT COALESCE(col_inet6, col_int) FROM t1;
Before this change the error was caught only at execution time,
when a Field_xxx::store_xxx() was called for the very first row.
The behavior was not consistent across statements and could do different things:
- abort the statement
- set a column to the data type default value (e.g. '::' for INET6)
- set a column to NULL
A typical old error was:
ERROR 22007: Incorrect inet6 value: '1' for column `test`.`t1`.`a` at row 1
EXCEPTION:
Note, there is an exception: a multi-row INSERT..VALUES, e.g.:
INSERT INTO t1 (col_a,col_b) VALUES (a1,b1),(a2,b2);
checks assignment compatibility at the preparation time for the very first row only:
(col_a,col_b) vs (a1,b1)
Other rows are still checked at execution time and return the old warnings
or errors in case of a failure. This is done because catching all rows at
preparation time would change the behavior significantly. So it still works
according to the STRICT_XXX_TABLES sql_mode flags and to whether the table
is transactional. It is too late to change this behavior in 10.7.
There is no firm decision yet on whether the multi-row INSERT..VALUES
behavior will change in later versions.
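A hedged illustration of the exception above (hypothetical table t1
with a column col_inet6 INET6):
INSERT INTO t1 (col_inet6) VALUES (1), ('::1');
-- fails at preparation time: the first row assigns an int to inet6
INSERT INTO t1 (col_inet6) VALUES ('::1'), (1);
-- prepares fine (the first row is compatible); the second row is
-- caught at execution time with the old warning/error behavior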
Take into account that when a simple key cache is being prepared for resizing, no disk blocks might be assigned to it.
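A hedged sketch of the guard (placement hypothetical; disk_blocks is the
block count kept in the key cache control block):

/* nothing to flush if no disk blocks were ever assigned to the cache */
if (keycache->disk_blocks > 0)
  flush_all_key_blocks(keycache);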
Reviewer: IgorBabaev <igor@mariadb.com>
Histogram_json_hb::range_selectivity() may return small negative
numbers due to rounding errors in the histogram.
Make sure the returned value is non-negative.
Add an assert to catch negative values that are not small.
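A minimal sketch of the idea (function and variable names hypothetical):

double sel= estimate_range_fraction(field, min_endp, max_endp);
/* rounding errors in the histogram may produce a small negative value */
DBUG_ASSERT(sel > -0.0001);   /* catch negative values that are not small */
if (sel < 0.0)
  sel= 0.0;                   /* make sure the returned value is non-negative */
return sel;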
(attempt #2)
The test starts an EXPLAIN of a multi-table join and tries to KILL it.
There are no sync points.
Depending on how fast the hardware is and on optimizer development,
it might kill the EXPLAIN at some random point in time (generally unrelated
to the Bug#28598 it was supposed to test) or the EXPLAIN might finish
before the KILL and the test will fail.
Work around MDEV-28718 for now, but also optimize the iteration
of information_schema.SYSTEM_VARIABLES.
Add a test case to show that loading tzinfo data during bootstrap is
desired functionality.
Bug report thanks to Dan Lenski of AWS.
The bug was that build_notnull_conds_for_range_scans() did not take into
account that join_tab is not yet sorted with constant tables first.
Fixed the bug by explicitly testing whether a table is a const table.
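A hypothetical sketch of the explicit test (not the exact patch):

for (JOIN_TAB *s= join->join_tab; s < join->join_tab + join->table_count; s++)
{
  if (s->table->map & join->const_table_map)
    continue;                  /* skip const tables explicitly */
  /* ... build the NOT NULL conditions for this table ... */
}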
This patch fixes the following issues in Aria error reporting in case
of read errors & crashed tables:
- Added the table name to most error messages, including in case of
read errors or when encrypting/decrypting a table. The format of
error messages was changed slightly to accommodate logging of errors
from lower-level routines.
- If we got a read error from storage (hard disk, SSD, S3 etc.) we only
reported 'table is crashed'. Now the error number from the storage
layer is reported too.
- Added checking of read failure from records_in_range()
- Calls to ma_set_fatal_error() did not inform the SQL level of
errors (to avoid spamming the user with multiple error messages).
Now the first error message and any fatal error messages are reported
to the user.
Part of:
MDEV-28073 Slow query performance in MariaDB when using many tables
s->key_dependent has a list of tables that are compared with key fields
in the current table. However it does not take into account if a key
field could be resolved by another table.
This is because MariaDB expands 'join_tab->keyuse' to include all generated
comparisons.
For example:
SELECT * from t1,t2,t3 where t1.key=t2.key and t2.key=t3.key
In this case keyuse for t1 includes t2.key and t3.key, and key_dependent
contains 't2.map | t3.map'.
If best_extension_by_limited_search() considers the prefix t2,t1, then t1's
key is fully defined, but we cannot prune any plans as
s->key_dependent indicates that t3 is still needed.
Fixed by calculating in best_access_path() the current key_dependent map
of tables that are needed to satisfy all keys. This allows us to prune
more bad plans earlier, as soon as all keys can be used.
We also set key_dependent to 0 if we found an EQ_REF key, as this is an
optimal key for the table and there is no reason to check more keys.
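A toy illustration of the pruning idea for the example above
(self-contained C++; table maps: t1=1, t2=2, t3=4):

#include <cstdint>
typedef uint64_t table_map;

/* t1's key can be provided by t2.key or t3.key. Once any provider is
   already in the join prefix (i.e. not in remaining_tables), t1's key
   is resolvable and no dependency on the other providers remains. */
table_map key_dependent_for_t1(table_map remaining_tables)
{
  const table_map providers= 2 | 4;     /* t2.map | t3.map */
  if (providers & ~remaining_tables)    /* some provider already joined */
    return 0;                           /* key satisfiable: prune harder */
  return providers;                     /* still depends on t2 and t3 */
}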
best_extension_by_limited_search() assumes that tables should be sorted
according to size to be able to quickly disregard bad plans. However the
current usage of swap_variables() changes the table order to an unsorted
one for the next recursive call. This breaks the assumption and
causes performance issues when using many tables (we have to examine
many more plans).
This patch fixes this by ensuring that the original table order is kept
for the not yet used tables when best_extension_by_limited_search() is
called.
This was done by always calling swap_variables() for each table and
restoring the original table order at exit.
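A minimal sketch of the recursion step (arguments simplified;
swap_variables is the existing helper macro):

/* Place the candidate table at position 'idx', recurse, then swap
   back so the not-yet-used tables keep their original sorted order
   for the next candidate. */
swap_variables(JOIN_TAB*, join->best_ref[idx], *pos);
best_extension_by_limited_search(join, remaining, idx + 1, current_cost);
swap_variables(JOIN_TAB*, join->best_ref[idx], *pos);   /* restore order */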
Some tests changed:
- In a majority of the tests the change was that two "identical tables"
were swapped and the optimizer is now using the first/smaller table.
- In a few tests the table order was changed. The new plan looks identical
or slightly better than the original.
(Try 2)
The code that updates semi-join optimization state for a join order prefix
had several bugs. The visible effect was bad optimization for FirstMatch or
LooseScan strategies: they either weren't considered when they should have
been, or considered when they shouldn't have been.
In order to hit the bug, the optimizer needs to consider several different
join prefixes in a certain order. Queries with "obvious" query plans which
prune all join orders except one are not affected.
Internally, the bugs in updates of semi-join state were:
1. restore_prev_sj_state() assumed that
"we assume remaining_tables doesnt contain @tab",
which wasn't true.
2. Another bug in this function: it did remove bits from
join->cur_sj_inner_tables but never added them.
3. greedy_search() adds tables into the join prefix but neglects to update
the semi-join optimization state. It does update the nested outer join
state, see this call:
check_interleaving_with_nj(best_table)
but there is no matching call to update the semi-join state.
(This wasn't visible because most of the state is in the POSITION
structure, which is updated. But there is also state in JOIN, too.)
See the sketch at the end for how the missing call could look.
The patch:
- Fixes all of the above
- Adds JOIN::dbug_verify_sj_inner_tables() which is used to verify the
state is correct at every step.
- Renames advance_sj_state() to optimize_semi_joins().
- Introduces update_sj_state(), which ideally should have been called
"advance_sj_state", but I didn't reuse the name to avoid creating confusion.