Due to a premature cleanup of the unit that specified a recursive CTE
used in the second operand of UNION, the server fell into an infinite
loop in the reported test case. In other cases this premature cleanup
could cause other problems.
The bug is the result of a not quite correct fix for MDEV-17024. The
unit that specifies a recursive CTE must be cleaned up only after the
cleanup of the last external reference to this CTE. This means that any
cleanup of the unit that is not triggered by the cleanup of an external
reference to the CTE must be blocked, as sketched below.
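A loose sketch of that blocking rule, with hypothetical names standing in
for the server's unit cleanup machinery:

  // The unit of a recursive CTE may only be cleaned up together with the
  // last external reference to that CTE; all other triggers are blocked.
  struct RecursiveCteUnit {
    int external_refs;     // live external references to the CTE
    bool cleaned= false;

    explicit RecursiveCteUnit(int refs) : external_refs(refs) {}

    // Cleanup triggered by the cleanup of one external reference.
    void cleanup_from_external_ref() {
      if (--external_refs == 0)
        cleaned= true;     // the real resource release would happen here
    }

    // Cleanup triggered by anything else: intentionally a no-op.
    void cleanup() {}
  };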
The usage of local table chains in selects to get external references to
recursive CTEs was not correct either, because some of the selects may be
merged.
Also fixed a minor bug in st_select_lex::set_explain_type() that caused
'RECURSIVE UNION' to be printed instead of 'UNION' in EXPLAIN output for
external references to a recursive CTE.
Analysis:
========
Writes to 'rli->log_space_total' need to be synchronized, otherwise both
the SQL_THREAD and the IO_THREAD can try to modify the variable
simultaneously, resulting in an incorrect rli->log_space_total. In the
current test scenario the SQL_THREAD is trying to decrement
'rli->log_space_total' in 'purge_first_log' while the IO_THREAD is trying
to increment it in 'queue_event'. Hence the test occasionally fails with a
result mismatch.
Fix:
===
Convert the 'rli->log_space_total' variable to an atomic type.
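A minimal sketch of the idea, using std::atomic and simplified names (the
real member lives in Relay_log_info and its exact type differs):

  #include <atomic>
  #include <cstdint>

  // Shared between the IO and SQL threads, so a plain integer would race.
  struct RelayLogInfoSketch {
    std::atomic<uint64_t> log_space_total{0};
  };

  // IO thread, queue_event(): account for a newly queued event.
  void add_log_space(RelayLogInfoSketch *rli, uint64_t bytes) {
    rli->log_space_total.fetch_add(bytes, std::memory_order_relaxed);
  }

  // SQL thread, purge_first_log(): release the space of a purged relay log.
  void sub_log_space(RelayLogInfoSketch *rli, uint64_t bytes) {
    rli->log_space_total.fetch_sub(bytes, std::memory_order_relaxed);
  }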
A follow-up to the original fix for MDEV-21577, which did not handle
temporary tables well.
OPTIMIZE and REPAIR TABLE statements can take a list of tables as argument,
and some of the tables may be temporary. The proper handling of temporary
tables is to skip them and continue working on the real tables. The broken
version skipped all tables if a single temporary table was in the argument
list, and as a result FK parent tables were not scanned for the remaining
real tables. Some mtr tests using OPTIMIZE or REPAIR for temporary tables
caused regressions because of this, e.g. galera.galera_optimize_analyze_multi.
The fix in this PR opens temporary and real tables first, then skips
temporary tables in the table handling loop; FK parent scanning is done only
for real tables. A simplified sketch of that loop follows.
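The sketch uses hypothetical simplified types standing in for the server's
TABLE_LIST machinery:

  #include <string>
  #include <vector>

  struct TableRef {
    std::string name;
    bool is_temporary;                    // temporary tables are not replicated
    std::vector<std::string> fk_parents;  // FK parent table names
  };

  // Skip temporary tables but keep processing the real ones, instead of
  // giving up on the whole list as the broken version effectively did.
  std::vector<std::string>
  collect_fk_parent_keys(const std::vector<TableRef> &tables) {
    std::vector<std::string> keys;
    for (const TableRef &t : tables) {
      if (t.is_temporary)
        continue;                         // no FK parent scan for temp tables
      keys.insert(keys.end(), t.fk_parents.begin(), t.fk_parents.end());
    }
    return keys;
  }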
The test has a new scenario for OPTIMIZE and REPAIR issued for two tables, one of which is a temporary table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This PR fixes same issue as MDEV-21577 for TRUNCATE TABLE.
MDEV-21577 fixed TOI replication for OPTIMIZE, REPAIR and ALTER TABLE
operating on an FK child table. It was later found out that TRUNCATE
has a similar problem and needs a fix.
The actual fix is to do FK parent table lookup before TRUNCATE TOI
isolation and append found FK parent table names in certification key
list for the write set.
The PR also contains a new test scenario in the galera_ddl_fk_conflict test
where the FK child has two FK parent tables and there are two DML
transactions operating on both parent tables.
For development convenience, a new TO isolation macro was added:
WSREP_TO_ISOLATION_BEGIN_IF; and the WSREP_TO_ISOLATION_BEGIN_ALTER macro
was changed to skip the goto statement.
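A loose sketch of the two macro shapes (a hypothetical simplification; the
actual definitions live in sql/wsrep_mysqld.h and differ in detail):

  struct THD;
  bool wsrep_to_isolation_begin(THD *thd, const char *db,
                                const char *table, void *table_list);

  // Classic macro: jumps to an error label on failure.
  #define WSREP_TO_ISOLATION_BEGIN(db, table, table_list)      \
    if (wsrep_to_isolation_begin(thd, db, table, table_list))  \
      goto wsrep_error_label;

  // _IF variant: leaves error handling to the caller, which is what
  // "skip the goto statement" refers to.
  #define WSREP_TO_ISOLATION_BEGIN_IF(db, table, table_list)   \
    if (wsrep_to_isolation_begin(thd, db, table, table_list))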
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Let us introduce the parameter innodb_read_only_compressed
that is ON by default, making any ROW_FORMAT=COMPRESSED tables
read-only.
I developed the ROW_FORMAT=COMPRESSED format based on
Heikki Tuuri's rough design between 2005 and 2008. It might
have been a good idea back then, but no proper benchmarks were
ever run to validate the design or the implementation.
The format has been more or less obsolete for years.
It limits innodb_page_size to 16384 bytes (the default),
and instant ALTER TABLE is not supported.
This is the first step towards deprecating and removing
write support for ROW_FORMAT=COMPRESSED tables.
This bug could manifest itself for a query with a WHERE condition containing
a top-level OR formula such that each disjunct contained a single-range
condition supported by the same index. One of these range conditions must
be fully covered by another range condition that is used later in the OR
formula. Additionally, at least one of these conditions should be ANDed with
a sargable range condition supported by a different index.
There were several attempts to fix related problems for OR conditions after
the backport of the range optimizer code from MySQL (commit
0e19f3e36f). Unfortunately the first of these fixes contained a typo that
remained unnoticed until recently. This typo led to the rejection of valid
range accesses. This patch fixes the typo.
The fix revealed another two bugs: one in a constructor for SEL_ARG,
the other in the function tree_or(). Both are fixed in this patch.
Part#1: Revert the patch that caused it:
commit 291be49474
Author: Igor Babaev <igor@askmonty.org>
Date: Thu Sep 24 22:02:00 2020 -0700
MDEV-23811: With large number of indexes optimizer chooses an inefficient plan
- mysqlnd from PHP < 7.3
- mysql-connector-python any version
- mysql-connector-java any version
Relaxed the check for garbage at the end of the packet in the case of no
parameters.
Added a check for array binding.
Fixed the test according to the new paradigm (allow junk at the end of the
packet).
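A hedged sketch of the relaxed check, with hypothetical names (the real
check sits in the prepared-statement packet parsing code):

  #include <cstddef>

  // With no parameters, tolerate trailing junk sent by old connectors;
  // with parameters, the packet must still end exactly where expected.
  bool packet_tail_ok(size_t param_count, size_t bytes_left_in_packet) {
    if (param_count == 0)
      return true;                     // allow junk at the end of the packet
    return bytes_left_in_packet == 0;
  }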
The parser of CREATE USER accepts ACCOUNT LOCK before PASSWORD
EXPIRE but not the other way around.
This just changes SHOW CREATE USER to output SQL syntax that is valid.
Thanks to Robert Bindar for analysis.
During graceful shutdowns, client connections are closed, and eventually
THD::awake() acquires the LOCK_thd_data mutex, which is required later on
in wsrep_thd_is_aborting(). Make sure LOCK_thd_data is acquired even if the
global wsrep_on is disabled.
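A minimal sketch of the idea, using std::mutex as a stand-in for the
server's mysql_mutex_t (names simplified):

  #include <mutex>

  struct ThdSketch {
    std::mutex LOCK_thd_data;
    bool aborting= false;
  };

  // THD::awake() must take LOCK_thd_data unconditionally, because
  // wsrep_thd_is_aborting() relies on the mutex being held.
  void awake(ThdSketch &thd, bool global_wsrep_on) {
    (void) global_wsrep_on;            // lock even when wsrep_on is disabled
    std::lock_guard<std::mutex> guard(thd.LOCK_thd_data);
    // ... signal the connection, inspect thd.aborting, etc.
  }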
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Some DDL statements need to acquire MDL locks not only for the actual
affected table of the DDL statement but also for tables referenced by its
foreign key constraints. OPTIMIZE, REPAIR and ALTER TABLE belong to this
class of DDL statements.
Earlier MariaDB versions did not take this into consideration, and appended
only the affected table to the certification key list in the write set.
Because of the missing certification information, it could happen that e.g.
OPTIMIZE TABLE for an FK child table was allowed to apply in parallel
with DML operating on the foreign key parent table, and this could lead to
unhandled MDL lock conflicts between two high-priority appliers (BF).
The fix in this patch changes the TOI replication for OPTIMIZE, REPAIR and
ALTER TABLE statements so that before the execution of the respective DDL
statement there is a foreign key parent search round. This FK parent search
consists of the following steps (sketched in code after the list):
* open and lock the affected table (with permissive shared locks)
* iterate over the foreign key constraints and collect an array of FK parent
table names
* close all tables open for the THD and release MDL locks
* do the actual TOI replication with the affected table and FK parent
table names as key values
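A hedged sketch of the search round, with hypothetical simplified types in
place of the server's THD/TABLE machinery:

  #include <string>
  #include <vector>

  struct TableSketch {
    std::string name;
    std::vector<std::string> fk_parent_names;
  };

  // Returns the certification key values for the TOI write set.
  std::vector<std::string> fk_parent_search(const TableSketch &affected) {
    // Step 1: open and lock the affected table (shared locks) -- elided.
    // Step 2: iterate over FK constraints, collecting parent table names.
    std::vector<std::string> keys{affected.name};
    for (const std::string &parent : affected.fk_parent_names)
      keys.push_back(parent);
    // Step 3: close the tables and release the MDL locks -- elided.
    // Step 4: TOI replication then uses both the affected table and the
    // FK parents as key values.
    return keys;
  }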
The patch also contains a new mtr test for verifying that the above-mentioned
DDL statements replicate without problems when operating on an FK child table.
The mtr test contains scenario #1, which can be used to check whether some
other DDL (on top of OPTIMIZE, REPAIR and ALTER) could cause similar
excessive FK parent table locking.
Reviewed-by: Aleksey Midenkov <aleksey.midenkov@mariadb.com>
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Prepared statements which were run over the binary protocol crashed
the server if the statement did not have the CF_PS_ARRAY_BINDING_OPTIMIZED
flag, the statement was executed in bulk mode, and a BF abort occurred.
This was because the bulk execution resulted in several statements without
wsrep_after_statement() being called in between, which confused the wsrep
transaction state tracking.
As a fix, call wsrep_after_statement() in the bulk loop after each execution
if CF_PS_ARRAY_BINDING_OPTIMIZED is not set.
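A minimal sketch of that bulk loop, with hypothetical stubs standing in for
the server's prepared-statement execution path:

  #include <vector>

  struct ThdSketch {};
  enum { CF_PS_ARRAY_BINDING_OPTIMIZED= 1 };

  bool execute_one(ThdSketch&, int) { return false; }       // stub
  bool wsrep_after_statement(ThdSketch&) { return false; }  // stub

  // Without the optimized-array flag, each iteration is a separate
  // statement, so wsrep state must be closed after every execution.
  bool execute_bulk(ThdSketch &thd, const std::vector<int> &rows,
                    unsigned cflags) {
    for (int row : rows) {
      if (execute_one(thd, row))
        return true;
      if (!(cflags & CF_PS_ARRAY_BINDING_OPTIMIZED) &&
          wsrep_after_statement(thd))
        return true;
    }
    return false;
  }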
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Bugs fixed:
- prepare_for_repair() didn't close all open files if the table open failed
  because of out-of-memory
- If dd_recreate_table() failed, the data file was not properly restored
  from its temporary name
- The Aria repair initializing code didn't properly clear all used structs
  before calling error, which caused crashes in memory-free calls.
- maria_delete_table() didn't register if the table open failed. This could
  cause my_error() to be called without returning 1 to the caller, which
  caused failures in my_ok()
Note when merging to 10.5:
- Remove the #if MYSQL_VERSION from sql_admin.cc
This follows up commit 94a520ddbe and commit 7c5519c12d.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
Though this is an error message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is described as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, though key name generation was done close to
the latest stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before some checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then some other inappropriate errors are
issued, like `ER_DUP_FIELDNAME`.
To have the key name specified in the error message, name generation should
be done before the checks, which resulted in more changes.
The final design for key initialization in `mysql_prepare_create_table`
follows. The initialization is done in three phases (a sketch follows the
list):
1. Calculate a total number of keys created with respect to keys ignored.
Allocate KEY* buffer.
2. Generate unique names; calculate a total number of key parts.
Make early checks. Allocate KEY_PART_INFO* buffer.
3. Initialize key parts, make the rest of the checks.
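A hedged sketch of the three-phase scheme, with simplified hypothetical
types in place of KEY and KEY_PART_INFO:

  #include <cstddef>
  #include <string>
  #include <vector>

  struct KeySpec { std::string name; std::vector<std::string> parts; bool ignored; };
  struct Key     { std::string name; size_t part_count; };

  std::vector<Key> prepare_keys(const std::vector<KeySpec> &specs) {
    // Phase 1: count the keys to be created, respecting ignored ones,
    // and allocate the KEY buffer.
    size_t n_keys= 0;
    for (const KeySpec &s : specs)
      if (!s.ignored)
        n_keys++;
    std::vector<Key> keys;
    keys.reserve(n_keys);

    // Phase 2: generate names for unnamed keys, count key parts, make the
    // early checks, and allocate the KEY_PART_INFO buffer (n_parts here).
    size_t n_parts= 0, seq= 0;
    for (const KeySpec &s : specs) {
      if (s.ignored)
        continue;
      std::string name= s.name.empty() ? "key_" + std::to_string(++seq)
                                       : s.name;
      n_parts+= s.parts.size();
      keys.push_back({name, s.parts.size()});
    }
    (void) n_parts;

    // Phase 3: initialize the key parts and make the rest of the checks
    // (elided in this sketch).
    return keys;
  }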
After Sergei's cleanup this assertion is no longer relevant -- we can't
predict whether the handler was used for a lookup, especially in a
multi-update scenario.
`position(old_data)` is called earlier, in `ha_check_overlaps`, therefore it
is guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses ha_index_read,
which on failure overwrites table->status. Even though the handlers
are different, they share a common table, so the value is spoiled anyway.
This is bad, and table->status is badly designed and overloaded with
functionality, but nothing can be done about it, since the code related to
this logic is ancient and it is impossible to extract it with reasonable
effort.
So let's just save and restore the value in ha_update_row before and after
the checks, as sketched below.
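A minimal sketch of the save/restore, with hypothetical simplified types:

  struct TableSketch { int status= 0; };

  struct HandlerSketch {
    TableSketch *table;

    // Stand-in for ha_check_overlaps(): its internal index read may
    // clobber table->status on failure.
    int check_overlaps() { table->status= -1; return 0; }

    int update_row() {
      const int saved_status= table->status;  // remember the caller's value
      int err= check_overlaps();
      table->status= saved_status;            // undo any spoiling
      return err;
    }
  };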
Other operations like INSERT and simple UPDATE are not at risk, since they
don't use this table->status approach.
DELETE does not do any unique checks, so it's also safe.
1. Subtracting table->record[0] from record is UB (non-contiguous buffers).
2. It is very common to use move_field_offset, which changes Field::ptr
but leaves table->record[0] unchanged. This makes the ptr_in_record result
incorrect, since it relies on the table->record[0] value.
The added check ensures the result is within the queried record boundaries.
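A hedged sketch of such a boundary check (a hypothetical helper): pointer
subtraction between unrelated buffers is UB, so the addresses are compared
as integers instead:

  #include <cstddef>
  #include <cstdint>

  // True when `ptr` points inside [record, record + reclength).
  bool ptr_is_in_record(const unsigned char *ptr,
                        const unsigned char *record, size_t reclength) {
    const uintptr_t p= reinterpret_cast<uintptr_t>(ptr);
    const uintptr_t base= reinterpret_cast<uintptr_t>(record);
    return p >= base && p < base + reclength;
  }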
Per b9f3f06857, mysql_system_tables_data.sql creates
a mysql_native_password entry with a salted hash of "invalid" so that
`set password` will detect that a native password can be applied.
SHOW CREATE USER diligently uses this value in its output,
generating the SQL:
MariaDB [(none)]> show create user;
+---------------------------------------------------------------------------------------------------+
| CREATE USER for dan@localhost |
+---------------------------------------------------------------------------------------------------+
| CREATE USER `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket |
+---------------------------------------------------------------------------------------------------+
Attempting to execute this before this patch results in:
MariaDB [(none)]> CREATE USER `dan2`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket;
ERROR 1372 (HY000): Password hash should be a 41-digit hexadecimal number
As such, deep in the implementation of mysql_native_password we make
"invalid" valid (pun intended) such that the above CREATE USER will succeed.
We do this by storing "*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE" (credit:
Oracle MySQL), which is of an INCORRECT length for a scramble.
In native_password_authenticate we check the length of this cached value
and immediately fail if it is anything other than the scramble length.
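A minimal sketch of that fast-fail, assuming the usual 20-byte
native-password scramble (names simplified):

  #include <cstddef>

  enum { SCRAMBLE_LENGTH= 20 };   // length of a binary native-password salt

  // The stored "invalid" marker can never have the right length, so
  // authentication for such accounts is rejected immediately.
  bool salt_usable_for_native_auth(size_t cached_salt_len) {
    return cached_salt_len == SCRAMBLE_LENGTH;
  }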
native_password_get_salt is only called in the context of set_user_salt, so
all settings of native passwords to the hashed content of 'invalid' quite
literally create an invalid password.
So other forms of "invalid" are valid SQL for creating invalid passwords:
MariaDB [(none)]> set password = 'invalid';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> alter user dan@localhost IDENTIFIED BY PASSWORD 'invalid';
Query OK, 0 rows affected (0.000 sec)
closes #1628
Reviewer: serg@mariadb.com