When "FLUSH TABLE ... FOR EXPORT" fails, the SQL layer should rollback
the statement. Otherwise we hit an assert when we try to close the
tables while having a non-empty list of statement transaction participants.
- compile_flags already includes the flags from the top-level CMakeLists.txt
- MY_CHECK_CXX_COMPILER_FLAG() accepts only one parameter
- output variable of MY_CHECK_CXX_COMPILER_FLAG() is have_CXX__Wa__nH
- same check for mariabackup
Based on a contribution by satanson (PR#466).
dict_foreign_qualify_index(): Avoid a redundant and harmful
computation of col_name of a virtual column. This fixes the
assertion failure.
dict_foreign_push_index_error(): Do not call dict_table_get_col_name()
on a virtual column. (It is unclear if this condition is actually
reachable.)
- recovered_lsn shouldn't be re-initialized during xtrabackup_copy_logfile().
If a partial redo log read happens at the end of xtrabackup_copy_logfile(),
then recovered_lsn will differ from scanned_lsn. Re-initializing
recovered_lsn could lead to the partial read happening again. This is a
regression of MDEV-14545
Item::derived_field_transformer_for_having
The crash occurred due to inappropriate handling of multiple equalities
when pushing conditions into materialized views/derived tables. If the equalities
extracted from a multiple equality can be pushed into a materialized
view/derived table, they should be plainly conjuncted with the other pushed
predicates rather than form a separate AND sub-formula.
When mariabackup and DDL run concurrently, there is a deadlock between
C1: mariabackup's connection that holds MDL locks
C2: an online ALTER TABLE that wants an exclusive MDL lock
    and tries to upgrade its MDL lock
C3: another mariabackup connection that does FLUSH TABLES (or FTWRL)
C3 waits for C2, which waits for C1, which waits for C3,
thus the deadlock.
MDL locks cannot be released until the FLUSH succeeds, because
otherwise it would allow ALTER to sneak in, causing the backup to abort and
breaking the promise of --lock-ddl-per-table.
The fix here works around the deadlock by killing connections in the
"Waiting for metadata lock" state (i.e. the ALTER), as sketched below.
The killing continues until FTWRL succeeds.
Killing connections is skipped if the --no-lock parameter
was passed to the backup, because then there will be no FLUSH.
For reference, in Percona's xtrabackup, --lock-ddl-per-connection
silently implies --no-lock, i.e. FLUSH is always skipped there.
A rather large part of the fix is introducing a DBUG capability to start
a query on a new connection at the right moment of the backup,
compensating somewhat for mariabackup's lack of send_query or DBUG_SYNC.
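For illustration only, a minimal sketch of the kill loop described above,
written against the MySQL C API; the helper name, the processlist query and
the connection parameters are placeholders, not mariabackup's actual code,
and in the real fix the killing is repeated until FTWRL has succeeded:

    #include <mysql.h>
    #include <string>

    // Kill every connection currently stuck waiting for a metadata lock.
    static void kill_mdl_waiters(MYSQL *con)
    {
      if (mysql_query(con,
          "SELECT id FROM information_schema.processlist"
          " WHERE state LIKE 'Waiting for%metadata lock'"))
        return;                                  // ignore transient errors
      MYSQL_RES *res= mysql_store_result(con);
      if (!res)
        return;
      while (MYSQL_ROW row= mysql_fetch_row(res))
      {
        std::string kill_stmt= std::string("KILL ") + row[0];
        mysql_query(con, kill_stmt.c_str());     // best effort: victim may be gone
      }
      mysql_free_result(res);
    }

    int main()
    {
      MYSQL *con= mysql_init(nullptr);
      // Placeholder credentials; mariabackup would reuse its own connection.
      if (!mysql_real_connect(con, "localhost", "root", "", nullptr, 0, nullptr, 0))
        return 1;
      kill_mdl_waiters(con);
      mysql_close(con);
      return 0;
    }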
It does not hurt to delete non-existent records from SYS_TABLESPACES
and SYS_DATAFILES. Because MariaDB does not support CREATE TABLESPACE,
only the system tablespace (space_id=0) can contain multiple tables.
But there are no entries for the system tablespace in these tables
(which actually are stored inside the system tablespace).
MariaDB differs from upstream for "DDL-like" commands. For these,
it sets binlog_format=STATEMENT for the duration of the statement.
This doesn't play well with MyRocks, which tries to prevent DML
commands with binlog_format!=ROW.
Also, if Locked_tables_list::reopen_tables() returned an error, then
close_cached_tables() should propagate the error condition and not silently
consume it (it is difficult to have test coverage for this because the
error condition is rare).
Skip the test mariabackup.unsupported_redo if a checkpoint occurred
before mariabackup --backup completed. Remove the slow shutdowns
and restarts which were attempting to prevent the checkpoints from
occurring.
The purpose of the InnoDB buffer pool dump is to allow InnoDB to be
restarted with the same persistent data pages in the buffer pool.
The InnoDB temporary tablespace that was introduced in MariaDB 10.2.2
is always reinitialized on restart. Therefore, it does not make sense
to attempt to dump or restore any pages of the temporary tablespace.
ha_innobase::check_if_supported_inplace_alter(): Only check for
high_level_read_only. Do not unnecessarily refuse
ALTER TABLE...ALGORITHM=INPLACE if innodb_force_recovery was
specified as 1, 2, or 3.
innobase_start_or_create_for_mysql(): Block all writes from SQL
if the system tablespace was initialized with 'newraw'.
- Disallow loading of MyRocks (or any auxiliary) plugins after MyRocks has been
unloaded.
- Do it carefully: the plugin's system variables may be accessed (e.g. a default
value is set) after the first rocksdb_done_func() call but before
the second rocksdb_init_func() call, as sketched below.
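A minimal stand-alone sketch of such a guard, with invented names rather than
the actual MyRocks code: remember that the plugin has been unloaded once and
refuse any later initialization attempt.

    // The "was unloaded" flag outlives the plugin itself, so a later
    // INSTALL attempt can be rejected even though system variables may
    // still be touched between done() and a second init().
    static bool plugin_was_unloaded= false;

    static int example_init_func(void *)
    {
      if (plugin_was_unloaded)
        return 1;                       // refuse re-initialization after unload
      /* ... normal initialization would go here ... */
      return 0;
    }

    static int example_done_func(void *)
    {
      /* ... normal cleanup would go here ... */
      plugin_was_unloaded= true;        // block any later init attempt
      return 0;
    }

    int main()
    {
      example_init_func(nullptr);       // first load succeeds
      example_done_func(nullptr);       // unload
      return example_init_func(nullptr); // second load is now refused (returns 1)
    }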
Problem:
=======
During the validation of a missing tablespace, the missing tablespace id is
compared with the hash table of redo log records (recv_sys->addr_hash). But if the
hash table ran out of memory, there is a possibility that it does not contain
the redo log records of all tablespaces. In that case, the server will load InnoDB
even though there is a missing tablespace.
Solution:
========
If the recv_sys->addr_hash hash table ran out of memory, then InnoDB needs
to scan the remaining redo log again to validate the missing tablespace.
When the plugin is unloaded, walk s_trx_list and delete the leftover
Rdb_transaction objects.
It is the responsibility of the SQL layer to make sure that the storage engine
has no open tables when the plugin is being unloaded.
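A generic sketch of that cleanup, with illustrative types standing in for the
MyRocks classes:

    #include <list>

    struct ExampleTrx { /* stands in for Rdb_transaction */ };

    static std::list<ExampleTrx*> s_trx_list_example;   // stand-in for s_trx_list

    // Called from the plugin's done function: nothing may reference the
    // transactions any more, so simply delete whatever is left over.
    static void delete_leftover_trx()
    {
      for (ExampleTrx *trx : s_trx_list_example)
        delete trx;
      s_trx_list_example.clear();
    }

    int main()
    {
      s_trx_list_example.push_back(new ExampleTrx);
      delete_leftover_trx();
      return 0;
    }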
Changed "local" datasink logic to detect page compressed Innodb tables.
Whenever such table is detected, holes in the copied files are created by
skipping over binary zeros at the end of each compressed page.
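A stand-alone illustration of the hole-creating write (not the actual datasink
code); it assumes POSIX I/O and a filesystem that supports sparse files:

    #include <unistd.h>
    #include <fcntl.h>
    #include <cstring>

    // Write one page, but skip the trailing run of binary zeros by seeking
    // over it, which leaves a hole in the destination file.
    static ssize_t write_page_sparse(int fd, const unsigned char *page,
                                     size_t page_size)
    {
      size_t data_len= page_size;
      while (data_len > 0 && page[data_len - 1] == 0)
        data_len--;                               // find the zero-filled tail
      if (data_len > 0 && write(fd, page, data_len) != (ssize_t) data_len)
        return -1;
      if (data_len < page_size &&
          lseek(fd, (off_t) (page_size - data_len), SEEK_CUR) < 0)
        return -1;                                // skipped bytes become a hole
      return (ssize_t) page_size;
    }

    int main()
    {
      unsigned char page[16384];                  // 16KiB page for the demo
      std::memset(page, 0, sizeof page);
      std::memcpy(page, "compressed payload", 18);
      int fd= open("sparse_demo.ibd", O_CREAT | O_WRONLY | O_TRUNC, 0644);
      if (fd < 0)
        return 1;
      write_page_sparse(fd, page, sizeof page);
      // If the file ends in a hole, extend it to the full logical size.
      ftruncate(fd, sizeof page);
      close(fd);
      return 0;
    }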
I_S queries could run for a long time and not let the server shut down:
queries from I_S in the "Filling schema table" state didn't check the killed
flag, and for large tables this phase may take a while to complete.
Fixed by adding a thd->killed flag check for each processed row.
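A generic, self-contained illustration of the per-row check (an atomic flag
stands in for thd->killed; this is not the server's I_S code):

    #include <atomic>
    #include <vector>

    static std::atomic<bool> killed{false};      // stands in for thd->killed

    // Returns false if the fill was aborted because the query was killed.
    static bool fill_schema_table(const std::vector<int> &rows,
                                  std::vector<int> &out)
    {
      for (int row : rows)
      {
        if (killed.load(std::memory_order_relaxed))
          return false;                          // react to KILL / shutdown
        out.push_back(row);                      // stand-in for per-row work
      }
      return true;
    }

    int main()
    {
      std::vector<int> in(1000, 42), out;
      return fill_schema_table(in, out) ? 0 : 1;
    }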
galera SST tests have a debug part, but we don't want to limit them
to the fulltest2 builder. So, add support for test files that
have a debug part:
* add maybe_debug.inc and maybe_debug.combinations
* 'debug' combination is run when debug is available
* 'release' combination is run otherwise
* test wraps debug parts in if($with_debug) { ... }
* and creates ,debug.rdiff for debug results
* make galera.galera_sst_xtrabackup* not big
* auto-select between socat and nc, whichever is available
* auto-skip xtrabackup tests if there is no xtrabackup, or neither socat nor nc
fix galera.galera_sst_mysqldump test to work:
* must connect to 127.0.0.1, where mysqld is listening
* disable wsrep_sync_wait in wsrep_sst_mysqldump, otherwise
sst can deadlock
* allow 127.0.0.1 for bind_address and wsrep_sst_receive_address.
(it's useful in tests, or when two nodes are on the same box,
or when nodes are on different boxes but the connection is
tunnelled, or whatever. Don't judge the user's setup). MDEV-14070
* don't wait for client connections to die when doing
mysqldump sst. They'll die in due time, and if needed mysql
will wait on locks until they do. MDEV-14069
Also don't mark it big, to make sure it's sufficiently tested.