The use of the xtrabackup and xtrabackup-v2 methods for SST
has been declared obsolete since version 10.2, and now they cannot
be used at all because of the changed redo log format. Accordingly,
we need to remove the xtrabackup-related scripts and dynamically
replace a call to xtrabackup[-v2] with mariabackup (logging a
corresponding warning) when the server performs SST.
https://jira.mariadb.org/browse/MDEV-17835
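For illustration only (a minimal sketch; the actual substitution is done
internally by the SST scripts), a node can also be pointed at the supported
method explicitly, since wsrep_sst_method is a dynamic global variable:

    -- Illustrative: switch the node to the supported SST method.
    SET GLOBAL wsrep_sst_method = 'mariabackup';
    -- Verify the active setting.
    SHOW GLOBAL VARIABLES LIKE 'wsrep_sst_method';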
- This is a regression of commit b26e603aeb. While dropping
the incompletely created table, InnoDB should not consider that operation a non-atomic one.
When the WITH clause of a query contains a recursive CTE that is not used,
processing of EXPLAIN for this query does not require optimization
of the unit specifying this CTE. In this case, if 'derived' is the
TABLE_LIST object created for this CTE, then derived->derived_result is NULL
and any assignment to derived->derived_result->table causes a crash.
After this problem was fixed in the code of st_select_lex_unit::prepare(),
EXPLAIN for such a query worked without crashes. Yet an execution
plan for the recursive CTE still appeared there. The cause of this problem was
an incorrect condition used in JOIN::save_explain_data_intern() to
determine whether the CTE was to be optimized or not. A similar condition was
used in select_describe(), and this patch corrects it as well.
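A minimal sketch of the failing scenario (hypothetical CTE and column names),
where the recursive CTE is declared but never referenced by the main query:

    EXPLAIN
    WITH RECURSIVE cte AS (
      SELECT 1 AS n
      UNION ALL
      SELECT n + 1 FROM cte WHERE n < 3
    )
    SELECT 1;  -- cte is never used, so no plan for it should be shown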
In 10.3, rec_is_metadata() takes a pointer, while in 10.4 it
takes a reference as a parameter. I ported this patch from
10.4 to 10.3, and then only ran a release build, not a debug build.
The special flag REC_INFO_MIN_REC_FLAG used to be only set on the
first record in the leftmost node pointer page of each level of the tree.
It was never set on leaf pages.
MDEV-11369 Instant ADD COLUMN in MariaDB Server 10.3 repurposed the flag
to identify a hidden metadata record, which is stored in the first record
on the leftmost leaf page.
If the adaptive hash index points to records in the leftmost leaf page
after an instant ALTER TABLE, so that the table contains such a metadata
record, an assertion could fail when trying to validate the index record.
In a release build, we might wrongly qualify the hidden metadata record
and thus return garbage results.
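For context, a hedged sketch of how such a metadata record comes into
existence (hypothetical table name t; ALGORITHM=INSTANT as used here is
accepted by recent 10.3 servers):

    CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t VALUES (1), (2), (3);
    -- Instant ADD COLUMN (MDEV-11369) stores a hidden metadata record,
    -- flagged with REC_INFO_MIN_REC_FLAG, in the leftmost leaf page.
    ALTER TABLE t ADD COLUMN b INT DEFAULT 0, ALGORITHM=INSTANT;
    -- Subsequent lookups may make the adaptive hash index point to
    -- records on that same leaf page.
    SELECT * FROM t WHERE a = 1;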
cmp_dtuple_rec_with_match_bytes(): If the REC_INFO_MIN_REC_FLAG is
set on the record, assert that this is the first record on the
leftmost page and that the record is a metadata record, and finally
return 1, because by definition, anything is greater than the
minimum record.
Also, related to MDEV-15522, MDEV-17304, MDEV-17835,
remove the Galera xtrabackup tests, because xtrabackup never worked
with MariaDB Server 10.3 due to InnoDB redo log format changes.
dict_create_add_foreigns_to_dictionary(): Do not commit the transaction.
The operation can still fail in dict_load_foreigns(), and we want
to be able to roll back the transaction.
create_table_info_t::create_table(): Never reset m_drop_before_rollback,
and never commit the transaction. We use a single point of rollback
in ha_innobase::create(). Merge the logic from
row_table_add_foreign_constraints().
This is a regression due to MDEV-17816.
When creating a table fails, we must roll back the dictionary
transaction. Because the rollback may rename tables, and because
InnoDB lacks proper undo logging for CREATE operations, we must
drop the incompletely created table before rolling back the
transaction, which could include a RENAME operation.
But we must not blindly drop the table by name; after all,
the operation could have failed because another table by the
same name already existed.
create_table_info_t::m_drop_before_rollback: A flag that is set
if the table needs to be dropped before transaction rollback.
create_table_info_t::create_table(): Remove some duplicated
error handling.
ha_innobase::create(): On error, only drop the table if it was
actually created.
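As an illustration of the failure path (hypothetical table names, not taken
from the bug report), a CREATE TABLE can fail in dict_load_foreigns() after
the table itself was already added to the dictionary:

    -- The referenced parent table does not exist, so loading the foreign
    -- key fails; InnoDB must drop the half-created table and then roll
    -- back the dictionary transaction.
    CREATE TABLE child (
      a INT,
      FOREIGN KEY (a) REFERENCES no_such_parent (a)
    ) ENGINE=InnoDB;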
fil_space_t::add(): Replaces fil_node_create(), fil_node_create_low().
Let the caller pass fil_node_t::handle, to avoid having to close and
re-open files.
fil_node_t::read_page0(): Refactored from fil_node_open_file().
Read the first page of a data file.
fil_node_open_file(): Open the file only once.
srv_undo_tablespace_open(): Set the file handle for the opened
undo tablespace. This should ensure that ut_ad(file->is_open())
no longer fails in recv_add_trim().
xtrabackup_backup_func(): Remove some dead code.
xb_fil_cur_open(): Open files only if needed. Undo tablespaces
should already have been opened.
When dropping a partially created table due to failure,
use SQLCOM_TRUNCATE instead of SQLCOM_DROP_DB, so that
no foreign key constraints will be touched. If any
constraints were added as part of the creation, they would
be reverted as part of the transaction rollback.
We need an explicit call to row_drop_table_for_mysql(),
because InnoDB does not do proper undo logging for CREATE TABLE,
but would only drop the table at the end of the rollback.
This would not work if the transaction combines both
RENAME and CREATE, like TRUNCATE now does.
If a table had a KEY_BLOCK_SIZE attribute, but no ROW_FORMAT,
it would be created as ROW_FORMAT=COMPRESSED in InnoDB.
However, TRUNCATE TABLE would lose the KEY_BLOCK_SIZE attribute
and create the table with the innodb_default_row_format (DYNAMIC).
This is a regression that was introduced by MDEV-13564.
update_create_info_from_table(): Copy also KEY_BLOCK_SIZE.
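A minimal reproduction sketch (hypothetical table name t; assumes defaults
that allow ROW_FORMAT=COMPRESSED, such as innodb_file_per_table=ON):

    CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB KEY_BLOCK_SIZE=4;
    SHOW CREATE TABLE t;  -- shows KEY_BLOCK_SIZE=4 (implies ROW_FORMAT=COMPRESSED)
    TRUNCATE TABLE t;
    SHOW CREATE TABLE t;  -- before the fix, KEY_BLOCK_SIZE was lost and the
                          -- table was recreated with innodb_default_row_format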
The error handling in the MDEV-13564 TRUNCATE TABLE was broken
when an error occurred during table creation.
row_create_index_for_mysql(): Do not drop the table on error.
fts_create_one_common_table(), fts_create_one_index_table():
Do drop the table on error.
create_index(), create_table_info_t::create_table():
Let the caller handle the index creation errors.
ha_innobase::create(): If create_table_info_t::create_table()
fails, drop the incomplete table, roll back the transaction,
and finally return an error to the caller.
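The FTS code paths above are exercised, for example, by TRUNCATE TABLE on an
InnoDB table with a FULLTEXT index (hypothetical names; a hedged sketch, not
a test case from the fix):

    CREATE TABLE articles (
      id INT PRIMARY KEY,
      body TEXT,
      FULLTEXT INDEX ft_body (body)
    ) ENGINE=InnoDB;
    -- TRUNCATE recreates the table together with its FTS_* auxiliary tables;
    -- if creating any of them fails, the incomplete table must be dropped,
    -- the transaction rolled back, and the error returned to the caller.
    TRUNCATE TABLE articles;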
lock_rec_queue_validate(): Assert page_rec_is_leaf(rec), except when
the record is a page infimum or supremum.
lock_rec_validate_page(): Relax the assertion that failed.
The assertion was reachable when the record lock bitmap was empty.
lock_rec_insert_check_and_lock(): Assert page_is_leaf().
The problem was that the controlling connection, i.e. the connection that
executed the query SET GLOBAL wsrep_reject_queries = ALL_KILL;,
was also killed, but the server would still try to send the result of that
query to the controlling connection, resulting in the assertion
mysqld: /home/jan/mysql/10.2-sst/include/mysql/psi/mysql_socket.h:738: inline_mysql_socket_send: Assertion `mysql_socket.fd != -1' failed.
because the socket had been closed when the controlling connection was closed.
wsrep_close_client_connections(): Do not close the controlling
connection; instead of wsrep_close_thread(), do a soft kill via
THD::awake().
wsrep_reject_queries_update(): Call wsrep_close_client_connections()
using the current THD.
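The scenario, issued from an ordinary client connection that thereby becomes
the controlling connection (a sketch, not the original test):

    SET GLOBAL wsrep_reject_queries = ALL_KILL;
    -- Before the fix, this connection was killed along with all other client
    -- connections, so sending the result of the SET itself hit the assertion
    -- shown above. Now the issuing connection survives.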
recv_addr_trim(): Do not try to detach the hash bucket, because
the code for doing that does not always work.
recv_apply_hashed_log_recs(): Do not attempt to read pages for which
there exist no redo log records.