(Based on original patch by Oleksandr Byelkin)
Multi-table DELETE can execute in "buffered" mode: in phase #1 it collects
the rowids of the rows to be deleted, then in phase #2, in
multi_delete::do_deletes(), it calls handler->rnd_pos() to read each of
those rows and delete it.
The problem occurred when phase #1 used a Rowid Filter on the table that
phase #2 would be deleting from.
In InnoDB, h->rnd_init(scan=false) followed by h->rnd_pos() calls is an
index scan over the PK under the hood. So, in phase #2,
ha_innobase::rnd_init() would try to use the Rowid Filter and hit an
assertion inside it.
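For example, a query shape like this could hit the assertion
(hypothetical tables; assuming a secondary index on t1.key_col makes the
optimizer use a Rowid Filter on t1 in phase #1):

  DELETE t1, t2 FROM t1, t2
  WHERE t1.key_col BETWEEN 1 AND 100 AND t2.pk = t1.pk;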
Note that multi-table UPDATE works similarly but was not affected, because
the patch for MDEV-7487 added code to disable the rowid filter for phase #2
in multi_update::do_updates().
This patch changes the approach:
- It makes InnoDB not use the Rowid Filter in rnd_pos() scans: it is
disabled in ha_innobase::rnd_init() and re-enabled in ha_innobase::rnd_end().
- multi_update::do_updates() no longer disables the Rowid Filter for
phase #2, as that is no longer necessary.
The test failed almost always in release (but not in debug) builds, with:
--- galera_sst_mariabackup.result
+++ galera_sst_mariabackup.reject
@@ -5,7 +5,7 @@
connection node_1;
select @@innodb_undo_tablespaces;
@@innodb_undo_tablespaces
-0
+3
connection node_2;
select @@innodb_undo_tablespaces;
@@innodb_undo_tablespaces
and
[Warning] InnoDB: Cannot change innodb_undo_tablespaces=0 because previous shutdown was not with innodb_fast_shutdown=0
because mariadbd *before this test* wasn't shut down with
innodb_fast_shutdown=0.
Fix the bootstrap to use innodb_fast_shutdown=0 (the bootstrap creates
the starting point for any test that uses a .cnf file).
followup for cac0fc97cc
Also, remove the redundant include/have_innodb.inc.
Workaround patch: do not remove the GROUP BY clause when it contains
subqueries.
remove_redundant_subquery_clauses() removes a redundant GROUP BY clause
from queries of the form:
expr IN (SELECT no_aggregates GROUP BY ...)
expr {CMP} {ALL|ANY|SOME} (SELECT no_aggregates GROUP BY ...)
This hits problems when the GROUP BY clause itself contains a subquery
(or subqueries).
This patch is just a workaround: it disables the removal of the GROUP BY
clause if the clause contains one or more subqueries.
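A hypothetical query whose GROUP BY clause is now kept (illustrative
names):

  SELECT * FROM t0
  WHERE t0.a IN (SELECT t1.col1 FROM t1
                 GROUP BY (SELECT MAX(t2.col2) FROM t2));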
Tests:
- subselect_elimination.test has all known crashing cases.
- subselect4.result, insert_select.result are updated.
Note that in some cases results of SELECT are changed too (not just
EXPLAINs). These are caused by non-deterministic SQL: when running a
query like:
x > ANY( SELECT col1 FROM t1 GROUP BY constant_expression)
without removing the GROUP BY, the executor is free to pick the value
of t1.col1 from any row in the GROUP BY group (denote it $COL1_VAL).
Then, it computes x > ANY(SELECT $COL1_VAL).
When running the same query and removing the GROUP BY:
x > ANY( SELECT col1 FROM t1)
the executor will actually check all rows of t1.
The problem was twofold:
- REPAIR TABLE t1 USE_FRM did not work for transactional Aria tables
(the table was reported as repaired when it was not), which caused
issues in later usage of the table; see the sketch after this list.
- When swapping the tmp_data file to the data file, the sort_info files
were not updated. This caused problems if there were several unique keys
and there was a duplicate for the second key.
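A minimal sketch of the first scenario (hypothetical table):

  CREATE TABLE t1 (a INT) ENGINE=Aria TRANSACTIONAL=1;
  # ... the table gets damaged or its index file is lost ...
  REPAIR TABLE t1 USE_FRM;  # reported OK, but did not actually repair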
Due to this bug a wrong result might be returned by queries with an IN
subquery predicate in the WHERE clause and a derived table in the FROM
clause to which split optimization could be applied (see the
illustrative shape below).
The function JOIN::fix_all_splittings_in_plan() used the value of the
bitmap JOIN::sjm_lookup_tables as it had been left after the search for
the best plan for the select containing the splittable derived table.
That value was not guaranteed to be correct, so this bitmap has to be
recalculated to exclude the plans with key accesses from SJM lookup
tables.
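An illustrative query shape (hypothetical tables; assuming an index on
t2.a lets split optimization push the equality t1.a = dt.a into the
derived table, and the IN predicate is handled via SJ-Materialization):

  SELECT *
  FROM t1, (SELECT a, MAX(b) AS mx FROM t2 GROUP BY a) dt
  WHERE t1.a = dt.a
    AND t1.c IN (SELECT c FROM t3);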
Approved by Igor Babaev <igor@mariadb.com>
Reset connection_name to contain a null string if the pointer points
to the same space as that of the system variable
default_master_connection.
We do this because the system variable may be updated which could free
the pointer and create a new one, causing use-after-free for
re-execution of prepared statements and stored procedures where the
LEX may be reused.
This allows connection_name to be set again to the system variable
pointer in the next call of this function (see earlier in this
function), after any possible updates to the system variable.
In the absence of insight into the cause of the spider.spider_fixes_part
failure described in MDEV-30929, this is a workaround, which could help
narrow the possibility down to whether the slave SQL thread attempts to
read from a file that may not yet be on disk. It does not otherwise
affect the coverage of the test.
I have pushed this commit 4 times, but have yet to encounter the
failure as described in MDEV-30929, so it could also fix the test and
stop the CI pollution.
Also replaced START SLAVE; with --source include/start_slave.inc
inside the slave_test_init.inc files.
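I.e., a sketch of that change in the affected .inc files:

  -START SLAVE;
  +--source include/start_slave.inc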
The optimization code replacing DATETIME comparison to TIMESTAMP comparison
in conditions like:
- WHERE timestamp_col=const_expr
- WHERE const_expr IN (SELECT timestamp_column FROM t1)
worked as follows:
- Install an internal condition handler (suppressing and counting warnings).
- Convert const_expr from its data type to TIMESTAMP.
- Check the warning count collected by the internal condition handler:
* If any warnings happened during the constant conversion,
then continue with DATETIME comparison.
* Otherwise, go to the next stage of switching to TIMESTAMP comparison.
This scenario did not take into account that in some cases warnings
are escalated to errors. Errors were not caught by the internal handler,
so Type_handler_datetime_common::convert_item_for_comparison()
returned with an SQL error in the diagnostics area.
The calling code did not expect this.
Fix the code to suppress and count both errors and warnings, to make sure
Type_handler_datetime_common::convert_item_for_comparison() returns without
adding any errors to the diagnostics area if the conversion to TIMESTAMP
fails and it decides to go with the DATETIME comparison.
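For illustration (hypothetical table; timestamp_col is a TIMESTAMP
column): a constant below the TIMESTAMP range cannot be converted, so
the comparison must stay DATETIME:

  SELECT * FROM t1 WHERE timestamp_col = '1001-01-01 10:20:30';

With the fix, an error raised during this conversion attempt is
suppressed and counted just like a warning would be.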
Safety first - tell the mariadb client not to execute dangerous CLI
commands; they cannot be present in the dump anyway.
Wrapping the command in /*!999999 ..... */ guarantees that if a
non-mariadb-cli client loads the dump and sends it to the server,
the server will ignore the command it doesn't understand.
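Presumably the wrapped command is the sandbox command (\-) described
below, so a dump would start with a line like:

  /*!999999 \- */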
mysql --sandbox
- disables system (\!), tee (\T), pager with an argument (\P foo),
source (\.)
- does *not* disable edit (\e). Use EDITOR=/bin/false to disable it,
or, for example, EDITOR=rnano for something more useful
- does *not* disable pager (\P) without an argument. Use PAGER=cat
or, for example, PAGER=less LESSSECURE=1 for something more useful
- using a disabled command is an error, which can be ignored with
--force
Also, a "sandbox" command (\-) enables the sandbox mode until EOF
(end of the current file or, if interactive, of the session).
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does not know
that the long unique is logically unique, because on the engine level it
is not. So the engine disables it; see the hypothetical scenario below.
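A hypothetical scenario (illustrative table; a long unique on a blob is
implemented inside the engine as a non-unique hash index):

  CREATE TABLE t1 (a BLOB, UNIQUE KEY (a)) ENGINE=MyISAM;
  ALTER TABLE t1 DISABLE KEYS;  # must keep enforcing the long unique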
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
This commit fixes sporadic failures in the galera_3nodes_sr.GCF-336
test. The following changes have been made here:
1) A small addition to the test itself, which should make it more
deterministic by waiting for the non-primary state before COMMIT
(see the sketch after this list);
2) More careful handling of the wsrep_ready variable in the server
code (it should always be protected with a mutex).
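For (1), a sketch of the kind of wait meant (assuming the standard
wait_condition include; the exact condition in the test may differ):

  --let $wait_condition = SELECT VARIABLE_VALUE = 'non-Primary' FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'wsrep_cluster_status'
  --source include/wait_condition.inc
  COMMIT;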
No additional tests are required.
dgcov.pl was putting gcov execution counters on wrong lines in the
report (the *.dgcov files were correct), because it was incrementing the
new-file line number for diff lines starting with '-' (lines from the
old file, not present in the new one).
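For example, in this illustrative hunk the new-file line number must
advance only on context and '+' lines:

  @@ -10,3 +10,2 @@
   kept line       <- new line 10
  -removed line    <- no new-file line number
   kept line       <- new line 11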