The crash happens because of a double free in the case where CREATE TABLE fails
because there is a conflicting table on disk.
Fixed by ensuring that the double free cannot happen.
The problem was that opt_sum_query() was, as part of MIN/MAX optimization,
doing read operations on constant tables that were already closed.
Fixed by ensuring we don't try to read from tables that are closed.
Implement a different fix for
"MDEV-19232: Floating point precision / value comparison problem"
Instead of truncating decimal values after every division,
truncate them for comparison purposes.
This reverts commit 62d73df6b2 but keeps the test.
mariabackup deallocated the uninitialized
write_filt_ctxt.u.wf_incremental_ctxt in xtrabackup_copy_datafile() when
a table had to be skipped due to a parsed DDL redo log record.
A wsrep transaction was started for EXECUTE IMMEDIATE, which
caused an assertion failure when the executed statement was
CREATE TABLE, which should be executed in TOI mode.
As a fix, don't start a wsrep transaction for EXECUTE IMMEDIATE,
so that the wsrep state logic is handled from inside the stored
procedure code path.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Fix the assertion `thd->in_active_multi_stmt_transaction() ||
thd->m_transaction_psi == __null' failure in MTR test
galera_sr.GCF-1051.
Add a new MTR test MDEV-23623 that reproduces the issue
deterministically, and update the wsrep-lib submodule, which
contains the actual fix.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This patch removes dict_index_t::stats_latch. Table/index statistics are now
protected by dict_sys->mutex. That way statistics computation can happen in
parallel in several threads, and dict_sys->mutex will be locked only for a
short period of time.
This patch is joint work with Marko Mäkelä
dict_index_t::lock: make it mutable, which allows passing a const pointer
when only the lock is touched in an object
btr_height_get()
btr_get_size(): make index argument const for better type safety
btr_estimate_number_of_different_key_vals(): now returns computed values
instead of setting fields in dict_index_t directly
remove everything related to dict_index_t::stats_latch
dict_stats_index_set_n_diff(): now returns computed values instead
of setting fields in dict_index_t directly
dict_stats_analyze_index(): now returns computed values instead
of setting fields in dict_index_t directly
Reviewed by: Marko Mäkelä
Let us introduce a dummy variable innodb_max_purge_lag_wait
for waiting until the InnoDB history list length is below
the user-specified limit. Specifically,
SET GLOBAL innodb_max_purge_lag_wait=0;
should wait for all history to be purged. This could be useful
when upgrading from an older version to MariaDB 10.3 or later,
to avoid hitting MDEV-15912.
Note: the history cannot be purged if there exist transactions
that may see old versions.
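For example, before the upgrade one could drain and verify the history along
these lines (the status variable check is only an illustration):

SET GLOBAL innodb_max_purge_lag_wait=0;
SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length';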
Reviewed by: Vladislav Vaintroub
session_track_system_variables and max_relay_log_size.
Lock LOCK_global_system_variables around the get_one_variable() call
in Session_sysvars_tracker::store_variable().
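A hedged sketch of the interaction involved (the value below is arbitrary):
one session tracks the variable while another session changes the global
value, so the tracker's read must hold LOCK_global_system_variables:

-- session 1
SET SESSION session_track_system_variables='max_relay_log_size';
-- session 2, concurrently
SET GLOBAL max_relay_log_size=4194304;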
For a debug build of the MariaDB server, running the following test case will
hit the assert `thd->lex->sql_command == SQLCOM_UPDATE' in the function
check_fields() on attempting to execute the UPDATE statement.
CREATE TABLE t1 (a INT);
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
The stack trace leading to the fired assert statement
  DBUG_ASSERT(thd->lex->sql_command == SQLCOM_UPDATE)
is listed below:
mysql_execute_command() ->
  mysql_multi_update_prepare() ->
    Multiupdate_prelocking_strategy::handle_end() ->
      check_fields()
It is worth noting that this stack trace looks as if a multi-update statement
is being executed. The fired assert is checked inside the function
check_fields() in case table->has_period() returns true, which in turn happens
when a temporal period is specified in the UPDATE statement. The condition
specified in the DBUG_ASSERT statement evaluates to false since the data
member thd->lex->sql_command has the value SQLCOM_UPDATE_MULTI. So the main
question is why program control flow goes down the path prescribed for
handling a multi-update statement even though an ordinary UPDATE statement is
being executed.
The answer lies in the way the SQL grammar rules are written.
When the statement
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
is parsed, the action for the rule 'table_primary_ident' (part of this action
is listed below to simplify the description) is invoked to handle the table
name 't1' specified in the clause 'SELECT 1 FROM t1'.
table_primary_ident:
  table_ident opt_use_partition opt_for_system_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4,
This action calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
Later, an action for the following grammar rule
update_table_list:
  table_ident opt_use_partition for_portion_of_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4,
is invoked to handle the clause 't1 FOR PORTION OF APPTIME FROM ... TO 2'.
This action also calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
As a result, the table name 't1' is contained twice in this list.
The presence of duplicate names for the table 't1' in the list of tables used
by the statement leads to the function unique_table(), called from the
function mysql_update(), returning true, which forces mysql_update() to return
the value 2 as a signal to fall through the case boundary of the switch
statement placed in the function mysql_execute_command() and to start handling
the case for the sql_command SQLCOM_UPDATE_MULTI. The compound statement block
for the case SQLCOM_UPDATE_MULTI invokes the function
mysql_multi_update_prepare(), which executes the assignment
  thd->lex->sql_command= SQLCOM_UPDATE_MULTI;
and after that calls the method Multiupdate_prelocking_strategy::handle_end().
Finally, this method invokes the check_fields() function and the assert is
fired.
The above analysis shows that an update of a table that is simultaneously
specified both as the destination table of the UPDATE statement and as a table
taking part in a subquery is actually treated by the MariaDB server as a
multi-update statement. Taking into account that multi-update statements for
tables with a temporal period are not yet supported by MariaDB, the correct
way to fix the bug is to return the error ER_NOT_SUPPORTED_YET for this case.
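For illustration, a hedged sketch of the intended post-fix behaviour (the
period definition below is an assumption; it is not the original test table):

CREATE TABLE t2 (a INT, s DATE, e DATE, PERIOD FOR APPTIME(s, e));
UPDATE t2 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t2) TO 2 SET a = 1;
-- expected to fail with ER_NOT_SUPPORTED_YET instead of firing the assert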
A deadlock is possible between the applier thread and a local committing
thread when a FLUSH TABLE is active.
The applier thread should skip table share checks and locks when opening a
table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
innobase_rename_indexes_cache(): fix corruption of index cache. Index ids
help distinguish indexes when their names clash.
innobase_rename_indexes_cache(): fix corruption of index statistics table.
Use unique temporary names to avoid name clashes.
Reviewed by: Marko Mäkelä
MariaDB 10.2.2 inherited from MySQL 5.7 a perceived optimization
of ALTER TABLE, which skips the writing of redo log records.
In MDEV-16809 we introduced a parameter that allows the redo log to
be written, so that Mariabackup would not be impacted, but we kept
the MySQL 5.7 behaviour enabled by default (innodb_log_optimize_ddl=ON).
As noted in MDEV-19747 (Deprecate and ignore innodb_log_optimize_ddl,
implemented in MariaDB 10.5.1), omitting the redo log writes can
actually reduce performance, because we will have to wait for the data
pages to be written out. When the redo log file is configured to be
large enough, it actually can be much faster to write the redo log and
avoid the extra page flushing.
When the redo log is omitted (innodb_log_optimize_ddl=ON), Mariabackup
may also have to perform a lot of extra work and re-copy the
entire data file if it is possible that any log writes were omitted during
the backup.
Starting with MariaDB 10.5.1, the parameter innodb_log_optimize_ddl
is deprecated and ignored. We hereby deprecate (but will not ignore)
the parameter in earlier versions as well.
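A hedged usage note: since the parameter remains available (only deprecated),
the optimization can be turned off so that ALTER TABLE is fully redo-logged:

SET GLOBAL innodb_log_optimize_ddl=OFF;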
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption uses a larger buffer (it needs space for the
encrypted data, the decrypted data, an IO_CACHE_CRYPT struct to describe the
encryption parameters, etc).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read position,
which will confuse the other IO_CACHEs using it.
The fix for these issues:
- Allocate the buffer to also include the extra size needed for encryption.
- Perform the seek again after one IO_CACHE reads the file.
fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
don't use precedence for printing CASE/WHEN/THEN/ELSE/END
fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
REGEXP
use %nonassoc for unary operators
fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
remove parser_precedence test as superseded by the precedence test
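As a hedged illustration of what the printing fixes are about (the view and
expression are made up), the stored definition of a view must re-parse to the
same expression structure, so parentheses have to be printed exactly where
operator precedence requires them:

CREATE OR REPLACE VIEW v1 AS SELECT ((1 IN (0, 1)) BETWEEN 0 AND 1) AS f;
SHOW CREATE VIEW v1;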
The problem:
When an incremental backup is taken, delta files are created for InnoDB tables
which are marked as new tables during InnoDB DDL tracking. When such a
tablespace is to be opened during prepare in
xb_delta_open_matching_space(), it is "created", i.e.
xb_space_create_file() is invoked instead of being opened, even if
a tablespace with the same name exists in the base backup directory.
xb_space_create_file() writes the page 0 header of the tablespace.
This header does not contain crypt data, as mariabackup does not have
any information about crypt data in the delta file metadata for
tablespaces.
After the delta file is applied, the recovery process is started. As the
sequence of recovery for different pages is not defined, there can be
a situation where the crypt data redo log event is executed after some
other page is read for recovery. When a page is read for recovery, it is
decrypted using the crypt data stored in the tablespace header on page 0;
if there is no crypt data, the page is not decrypted and does not pass the
corruption test.
This causes an error during incremental backup --prepare for encrypted
tablespaces.
The error is not stable because the crypt data redo log event updates the
crypt data on page 0, and recovery for different pages can be executed in
undefined order.
The fix:
When a delta file is created, the corresponding write filter copies only
the pages whose LSN is greater than some incremental LSN. When a new file
is created during an incremental backup, the LSN of all its pages must be
greater than the incremental LSN, so there is no need to create a delta for
such a table; we can just copy it completely.
The fix is to copy the whole file which was tracked during the incremental
backup with the InnoDB DDL tracker, and to copy it to the base directory
during --prepare instead of applying a delta.
There is also a DBUG_EXECUTE_IF() in the InnoDB code to avoid writing the
redo log record for the crypt data update on page 0, to make the test case
stable.
Note:
The issue is not reproducible in 10.5, as optimized DDL is deprecated
in 10.5. But the fix is still useful because it allows decreasing the amount
of data copied during backup, as a delta file contains some extra info.
The test case should be removed for 10.5 as it will always pass.
- The original patch was contributed by Jani Tolonen <jani.k.tolonen@gmail.com>
https://github.com/an3l/server/commits/bb-10.3-anel-MDEV-21786-dump-sequence
which distinguishes the data structure (linked list) of sequences from
tables.
- Added standard SQL output to prevent future changes
of sequences, and disabled locks for sequences.
- Added a test case for `MDEV-20070: mysqldump won't work correct on
sequences` where a table column depends on a sequence value.
- Restore the sequence last value in the following way (see the sketch below):
  - Find `next_not_cached_value` and use it in `setval()`
  - We only need this for a logical restore, so don't execute `setval()`
  - `setval()` should also be shown in case of the `--no-data` option.
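A hedged sketch of the resulting dump fragment (names and values are made up,
and the exact formatting emitted by mysqldump may differ):

CREATE SEQUENCE `s1` start with 1 increment by 1 cache 1000 nocycle ENGINE=InnoDB;
-- restore the last value from the next_not_cached_value seen at dump time
SELECT SETVAL(`s1`, 1001, 0);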
Reviewed-by: daniel@mariadb.org
In commit 7eda556196 we removed a
valid debug assertion.
AddressSanitizer would trip when writing undo log for an INSERT
operation that follows the problematic ALTER TABLE because the
v_indexes would refer to an index object that was freed during
the execution of ALTER TABLE.
The operation to remove the NOT NULL attribute from a column should
lead to any affected indexes being dropped and re-created.
But, the SQL layer set the ha_alter_info->handler_flags to
HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE|ALTER_COLUMN_NULLABLE
(that is, InnoDB was asked to add an index and change the column,
but not to drop the index that depended on the column).
Let us restore the debug assertion that catches this problem
outside AddressSanitizer, and 'defuse' the offending ALTER TABLE
statement in the test until something has been fixed in the SQL layer.
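A hedged sketch of the shape of the offending pattern (hypothetical table, not
the actual test case): an index on a virtual column that depends on the column
losing its NOT NULL attribute:

CREATE TABLE t (a INT NOT NULL, b INT AS (a) VIRTUAL, KEY(b)) ENGINE=InnoDB;
ALTER TABLE t MODIFY a INT NULL;
-- the index on b should be dropped and re-created as part of this ALTER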
Problem:
========
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 69:
Assertion text: '@@global.max_connections = @start_max_connections'
Assertion result: '0'
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 86:
Assertion text: '@@global.max_connections = @start_max_connections + 1'
Assertion result: '0'
Analysis:
=========
A slave SQL thread sets its Running state to Yes very early in its
initialisation, before the majority of initialisation actions, including
executing the init_slave command, are done. Thus the testcase has a race
condition where the initial replication setup might finish executing later
than the testcase's SET GLOBAL init_slave, making the testcase see its effect
where it checks for its absence.
Fix:
===
Include 'sync_slave_sql_with_master.inc' at the beginning of the test to
ensure that the slave applier has completed the execution of the 'init_slave'
command and proceeded to event application. Replace the apparently needless
RESET MASTER / RESET SLAVE etc.
Patch is based on:
https://github.com/percona/percona-server/pull/1464/commits/b91e2e6f90611aa299c302929fb8b068e8ac0dee
Author: laurynas-biveinis
Change xarecover_handlerton so that transactions with WSREP-prefixed
xids are rolled back when Galera is disabled.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
fts_query_t::nested_sub_exp: Keep track of nested
fts_ast_visit_sub_exp() calls.
fts_ast_visit_sub_exp(): Return DB_OUT_OF_MEMORY if the
maximum recursion depth is exceeded.
This is motivated by a change in MySQL 5.6.50:
mysql/mysql-server@e2a46b4834
Bug #29929684 USING MANY NESTED ARGUMENTS WITH BOOLEAN FTS CAN LEAD
TO TERMINATE SERVER
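A hedged example of the kind of query bounded by the new depth check (the
table and search string are made up); with enough nesting the sub-expression
limit now makes the query fail with an error instead of terminating the
server:

CREATE TABLE articles (body TEXT, FULLTEXT KEY(body)) ENGINE=InnoDB;
SELECT * FROM articles
WHERE MATCH(body) AGAINST('((((((((word))))))))' IN BOOLEAN MODE);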
To fix this, it is necessary to add an option to exclude the
database with the name "lost+found" from processing (the database
name will be checked by the check_if_skip_database_by_path() or
by the check_if_skip_database() function, and as a result
"lost+found" will be skipped).
In addition, it is necessary to slightly modify the verification
logic in the check_if_skip_database() function.
Also added a new test galera_sst_mariabackup_lost_found.test