recv_sys_t::parse(): Correctly handle the storing==BACKUP case,
and simplify some logic around storing==YES as well.
The added test mariabackup.undo_truncate is based on an idea of
Thirunarayanan Balathandayuthapani. It nondeterministically (not on
every run) covers this logic, including the function backup_undo_trunc(),
for both innodb_encrypt_log=ON and innodb_encrypt_log=OFF.
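As a rough sketch of the kind of workload that can make backup_undo_trunc()
fire (the actual test may use different statements and sizes), undo
truncation is typically provoked like this:
SET GLOBAL innodb_undo_log_truncate = ON;
SET GLOBAL innodb_max_undo_log_size = 10485760;
-- run update-heavy transactions so that an undo tablespace grows past the
-- limit and gets truncated while mariadb-backup --backup is copying the log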
Reviewed by: Debarun Banerjee
log_t::resize_start(): If the ib_logfile101 cannot be created,
be sure to reset log_sys.resize_lsn.
log_t::resize_abort(): In case SET GLOBAL innodb_log_file_size is
aborted, delete the ib_logfile101.
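For reference, the resize that creates ib_logfile101 is initiated with an
ordinary SET GLOBAL statement, for example:
SET GLOBAL innodb_log_file_size = 128*1048576;
If the resize does not complete (for example, because ib_logfile101 cannot
be created or the operation is aborted), the temporary file must be deleted
and log_sys.resize_lsn reset, as described above.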
innodb_log_file_mmap: Use a constant documentation string that
refers to persistent memory even when persistent memory support is not
available in the build.
HAVE_INNODB_MMAP: Remove, and unconditionally enable this code.
log_mmap(): On 32-bit systems, ensure that the size fits in 32 bits.
log_t::resize_start(), log_t::resize_abort(): Only handle memory-mapping
if HAVE_PMEM is defined. The generic memory-mapped interface is only for
reading the log in recovery. Writable memory mappings are only for
persistent memory, that is, Linux file systems with mount -o dax.
Reviewed by: Debarun Banerjee, Otto Kekäläinen
recv_sys_t::parse<storing=NO>(): Do invoke
fil_space_set_recv_size_and_flags() and do parse enough of page 0
to facilitate that.
This fixes a regression that had been introduced in
commit b249a059da (MDEV-34850).
In a multi-batch crash recovery, we would fail to invoke
fil_space_set_recv_size_and_flags() while parsing the remaining log,
before starting the first recovery batch.
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
get_all_tables() skipped tables if the user had no privileges on
the schema itself and no granted privilege on any table in the schema.
That is, it was skipping performance_schema tables (privileges
on them aren't explicitly granted, but internally hard-coded).
To fix:
* extend the ACL_internal_table_access::check() method with
`bool any_combination_will_do`
* fix all perfschema privilege checks to take it into account.
* don't reuse the table_acl_check object for all tables; initialize it
for every table, otherwise GRANT_INTERNAL_INFO will leak
* remove incorrect privilege check from get_all_tables()
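As an illustrative check of the fixed behaviour (user and query are only an
example), a user with no explicitly granted privileges should still see
performance_schema tables through the information schema:
CREATE USER u1@localhost;
-- connect as u1 and run:
SELECT COUNT(*) FROM information_schema.TABLES
  WHERE TABLE_SCHEMA = 'performance_schema';
-- before the fix this could return 0, because privileges on
-- performance_schema are hard-coded internally, never granted explicitly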
ha_innobase::delete_table(): Clear trx->dict_operation_lock_mode
after, not before, invoking trx->rollback(), so that
row_undo_mod_parse_undo_rec() will be invoked with dict_locked=true
and dict_sys_t::freeze() will not be invoked for loading a table
definition. Inside dict_sys_t::freeze(), an assertion !have_any()
would fail when the current thread is already holding the latch.
This fixes up commit c5fd9aa562 (MDEV-25919).
Reviewed by: Debarun Banerjee
- InnoDB fails to recover full_crc32 encrypted pages from the
doublewrite buffer. The reason is that buf_dblwr_t::recover()
fails to identify the space id from the page, because the page has
been encrypted starting from the FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION bytes.
Fix:
===
buf_dblwr_t::recover(): preserve any pages whose space_id
does not match a known tablespace. These could be encrypted pages
of tablespaces that had been created with
innodb_checksum_algorithm=full_crc32.
buf_page_t::read_complete(): If the page looks corrupted and the
tablespace is encrypted and in full_crc32 format, try to
restore the page from doublewrite buffer.
recv_dblwr_t::recover_encrypted_page(): Find the page which
has the same page number and try to decrypt the page using
space->crypt_data. After decryption, compare the space id.
Write the recovered page back to the file.
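The affected setup can be reproduced roughly as follows (a sketch only; it
assumes an encryption plugin such as file_key_management is loaded):
SET GLOBAL innodb_checksum_algorithm = full_crc32;
CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB ENCRYPTED=YES;
-- kill the server while t is being modified, so that recovery has to
-- restore an encrypted full_crc32 page from the doublewrite buffer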
Mariabackup (mariadb-backup) supports the --use-memory option that
sets the buffer pool size for InnoDB. However, the current SST scripts
do not use this option. This commit adds support for this option,
the value for which can be specified via the "use_memory" parameter
in the configuration file in the [sst], [mariabackup] or [xtrabackup]
sections (supported only for compatibility with old configurations).
In addition, if the innodb_buffer_pool_size option is specified in
the user configuration (in the main server configuration sections)
or passed to the SST scripts or the server via arguments, its value
is also passed to mariadb-backup as the value for the --use-memory
option.
A new section name [mariabackup] has also been added, which can
be used instead of the deprecated [xtrabackup] (the section name
"mariabackup" was specified in the documentation, but was not
actually supported by SST scripts before this commit).
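A configuration sketch (section and parameter names as described above;
the sizes are only examples):
[sst]
use_memory=1G
# or, equivalently, taken from the main server configuration:
[mariadbd]
innodb_buffer_pool_size=1G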
CURRENT_TEST: binlog_encryption.rpl_parallel_gco_wait_kill
mysqltest: In included file "./suite/rpl/t/rpl_parallel_gco_wait_kill.test":
included from /home/buildbot/amd64-ubuntu-2004-debug/build/mysql-test/suite/binlog_encryption/rpl_parallel_gco_wait_kill.test at line 2:
At line 334: Can't initialize replace from 'replace_result $thd_id THD_ID'
An SQL thread can reach the "Slave has read all relay log" state
and then start reading the relay log again. Let's use a more generic
pattern to retrieve the SQL thread ID even if it's not
in the "read all relay log" state.
MDEV-29533 Crash when MariaDB is replica of MySQL 8.0
MySQL 8.0 has added the following new events to the MySQL binary log:
PARTIAL_UPDATE_ROWS_EVENT
TRANSACTION_PAYLOAD_EVENT
HEARTBEAT_LOG_EVENT_V2
- PARTIAL_UPDATE_ROWS_EVENT is used by MySQL to generate update
statements using JSON_SET, JSON_REPLACE and JSON_REMOVE to make
updates of JSON columns more efficient. These events can be
disabled by setting 'binlog-row-value-options=""' (a runtime
equivalent is shown after this list).
- TRANSACTION_PAYLOAD_EVENT is used by MySQL to signal that a
row event is compressed. It can be disabled by setting
'binlog_transaction_compression=0'.
- HEARTBEAT_LOG_EVENT_V2 is written to the binary log many times
per second. It can be ignored by the server.
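On the MySQL 8.0 primary, the runtime equivalents of the configuration
options quoted above are (variable names as documented by MySQL):
SET GLOBAL binlog_row_value_options = '';
SET GLOBAL binlog_transaction_compression = OFF;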
What this patch does:
- If PARTIAL_UPDATE_ROWS_EVENT or TRANSACTION_PAYLOAD_EVENT is found,
the server will stop with an error message explaining how to configure
the MySQL server not to generate such events.
- HEARTBEAT_LOG_EVENT_V2 events are ignored.
- mariadb-binlog will write the name of the new events.
- mariadb-binlog will stop if PARTIAL_UPDATE_ROWS_EVENT or
TRANSACTION_PAYLOAD_EVENT is found, unless --force is given.
- Fixes a crash in mariadb-binlog if a character set unknown to
MariaDB is found. (MDEV-29533)
From Kristian Nielsen:
- Add a test case for MySQL 8.0 to MariaDB replication and fix
a small typo in the post_header_len initialization.
Reviewer: knielsen@mariadb.org
The test with memory restrictions randomly works or fails in buildbot
depending on server configurations. On my machine the original test
worked.
As the test was there just to check whether the server crashes when run with
small memory configurations, I disabled testing whether the query would fail
or not. The test still has its original purpose.
Discussed with: Sergei Golubchik <serg@mariadb.org>
This commit updates default memory allocations size used with MEM_ROOT
objects to minimize the number of calls to malloc().
Changes:
- Updated MEM_ROOT block sizes in sql_const.h
- Updated MALLOC_OVERHEAD to also take into account the extra memory
allocated by my_malloc()
- Updated init_alloc_root() to only take MALLOC_OVERHEAD into account as
buffer size, not MALLOC_OVERHEAD + sizeof(USED_MEM).
- Reset mem_root->first_block_usage if and only if first block was used.
- Increase the MEM_ROOT buffer sizes used by my_load_defaults, plugin_init,
Create_tmp_table, allocate_table_share, TABLE and TABLE_SHARE.
This decreases the number of malloc calls during queries.
- Use a small buffer for THD->main_mem_root in THD::THD. This avoids
multiple malloc() calls for new connections.
I tried the above changes on a complex select query with 12 tables.
The following shows the number of extra allocations that were used
to increase the size of the MEM_ROOT buffers.
Original code:
- Connection to MariaDB: 9 allocations
- First query run: 146 allocations
- Second query run: 24 allocations
Max memory allocated for thd when using a heap table: 61,262,408
Max memory allocated for thd when using an Aria tmp table: 419,464
After changes:
- Connection to MariaDB: 0 allocations
- First query run: 25 allocations
- Second query run: 7 allocations
Max memory allocated for thd when using a heap table: 61,347,424
Max memory allocated for thd when using an Aria tmp table: 529,168
The new code uses slightly more memory, but avoids memory fragmentation
and is slightly faster thanks to much fewer calls to malloc().
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Heap tables are allocated blocks to store rows according to
my_default_record_cache (mapped to the server global variable
read_buffer_size).
This causes performance issues when the record length is big
(> 1000 bytes) and my_default_record_cache is small.
Changed to instead split the default heap allocation to 1/16 of the
allowed space and not use my_default_record_cache anymore when creating
the heap. The allocation is also aligned to be just under a power of 2.
For a test that I was running, which used record length=633,
the speed of the query doubled thanks to this change.
Other things:
- Fixed calculation of max_records passed to hp_create() to take
into account padding between records.
- Updated calculation of memory needed by heap tables. Before we
did not take into account internal structures needed to access rows.
- Changed the block size for memory_table from 1 to 16384 to get less
fragmentation. This also avoids a problem where we need 1K
to manage index and row storage, which was not accounted for before.
- Moved heap memory usage to a separate test for 32 bit.
- Allocate all data blocks in heap in powers of 2. Change reported
memory usage for heap to reflect this.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Problem:
=======
- An insert..select statement on a partitioned table fails to use
bulk insert for the transaction.
Solution:
========
- Enable the bulk insert operation for the insert..select
statement on partitioned tables.
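A minimal example of the statement shape that now uses bulk insert (table
and data are illustrative; seq_1_to_1000 assumes the SEQUENCE engine):
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB
  PARTITION BY HASH(a) PARTITIONS 4;
INSERT INTO t1 SELECT seq FROM seq_1_to_1000;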
opt_calc_index_goodness(): Correct an inaccurate condition.
We can very well use a clustered index of a table that is subject
to online rebuild. But we must not choose an index that has not been
committed (it is a secondary index that was not fully created)
or that is corrupted or not a normal B-tree index.
opt_search_plan_for_table(): Remove some redundant code, now that
opt_calc_index_goodness() checks against corrupted indexes.
The test case allows this code to be exercised. The main observation
in the following:
./mtr --rr innodb.stats_persistent
rr replay var/log/mysqld.1.rr/latest-trace
should be that when opt_search_plan_for_table() is being invoked by
dict_stats_update_persistent() on the being-altered statistics table
in the 2nd call after ha_innobase::inplace_alter_table(),
and the fix in opt_calc_index_goodness() is absent,
it would choose the code path if (n_fields == 0), that is, a full
table scan, instead of searching for the record. The GDB commands to
execute in "rr replay" would be as follows:
break ha_innobase::inplace_alter_table
continue
break opt_search_plan_for_table
continue
continue
next
next
…
Reviewed by: Vladislav Lesin
Partition tests requiring lower_case_table_names = 2 (default on macOS)
fail on macOS because the product has changed over time but the tests were
not run regularly enough to observe their breakage.
The limit on socket path length on Unix according to libc is 108 bytes (see
sockaddr_un::sun_path), but in the mysql.servers table it is a string of
maximum length 64, which results in truncation of the socket path and a
failure to connect for plugins that use server definitions, such as Spider.
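For example (server name and path are illustrative; the failure needs a
socket path longer than 64 characters), the stored path would be silently
truncated:
CREATE SERVER srv FOREIGN DATA WRAPPER mysql
  OPTIONS (DATABASE 'db1', USER 'u1',
  SOCKET '/run/some/rather/deeply/nested/container/directory/mysqld/mysqld.sock');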
The test turns out to be sensitive to @@global.gtid_cleanup_batch_size.
With the rather small default value of the latter,
SELECTing from mysql.gtid_slave_pos may not be deterministic: tests
that run before it may add to a batch that is pending automatic deletion.
The test is refined to set its own value for the batch size, which
is virtually unreachable.
Thanks to Kristian Nielsen for the analysis.
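In the test this is achieved with an ordinary SET GLOBAL (the exact value
only needs to be too large to ever be reached; statements are a sketch):
SET @old_batch_size = @@GLOBAL.gtid_cleanup_batch_size;
SET GLOBAL gtid_cleanup_batch_size = 2000000;
-- run the test body, then restore:
SET GLOBAL gtid_cleanup_batch_size = @old_batch_size;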
zlib-ng results in a different compression length. The compressed
length isn't that important, as the test output examines the uncompressed
results.
fixes for zlib-ng
backport of 75488a57f2
Under unknown circumstances, the SQL layer may wrongly disregard an
invocation of thd_mark_transaction_to_rollback() when an InnoDB
transaction had been aborted (rolled back) due to one of the following errors:
* HA_ERR_LOCK_DEADLOCK
* HA_ERR_RECORD_CHANGED (if innodb_snapshot_isolation=ON)
* HA_ERR_LOCK_WAIT_TIMEOUT (if innodb_rollback_on_timeout=ON)
Such an error used to cause a crash of InnoDB during transaction commit.
These changes aim to catch and report the error earlier, so that not only
this crash can be avoided but also the original root cause be found and
fixed more easily later.
The idea of this fix is from Michael 'Monty' Widenius.
HA_ERR_ROLLBACK: A new error code that will be translated into
ER_ROLLBACK_ONLY, signalling that the current transaction
has been aborted and the only allowed action is ROLLBACK.
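From the SQL level the intended behaviour looks roughly like this (t is an
arbitrary InnoDB table; the exact error text may differ):
BEGIN;
UPDATE t SET a = a + 1 WHERE id = 1;
-- suppose this statement loses a deadlock and InnoDB aborts the whole
-- transaction; any further statement now fails with ER_ROLLBACK_ONLY
-- until the client issues:
ROLLBACK;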
trx_t::state: Add TRX_STATE_ABORTED that is like
TRX_STATE_NOT_STARTED, but noting that the transaction had been
rolled back and aborted.
trx_t::is_started(): Replaces trx_is_started().
ha_innobase: Check the transaction state in various places.
Simplify the logic around SAVEPOINT.
ha_innobase::is_valid_trx(): Replaces ha_innobase::is_read_only().
The InnoDB logic around transaction savepoints, commit, and rollback
was unnecessarily complex and might have contributed to this
inconsistency. So, we are simplifying that logic as well.
trx_savept_t: Replace with const undo_no_t*. When we rollback to
a savepoint, all we need to know is the number of undo log records
that must survive.
trx_named_savept_t, DB_NO_SAVEPOINT: Remove. We can store undo_no_t
directly in the space allocated at innobase_hton->savepoint_offset.
fts_trx_create(): Do not copy previous savepoints.
fts_savepoint_rollback(): If a savepoint was not found, roll back
everything after the default savepoint of fts_trx_create().
The test innodb_fts.savepoint is extended to cover this code.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
The segfault in wsrep_check_sequence is due to a
null pointer dereference on:
db_type= thd->lex->create_info.db_type->db_type;
where create_info.db_type is null. This occurred under
a used_engine==true condition, which is set in the calling
function based on create_info.used_fields==HA_CREATE_USED_ENGINE.
However, create_info.used_fields was left over
from the parsing of a previous failed CREATE TABLE, where,
because of its failure, db_type wasn't populated.
This is corrected by cleaning the create_info when we start
to parse ALTER SEQUENCE statements.
Other paths to wsrep_check_sequence are via CREATE SEQUENCE
and CREATE TABLE LIKE, which both initialize the create_info
correctly.
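A hedged sketch of the failing pattern under Galera (object names are
illustrative; the actual test case may use a different failing CREATE TABLE):
CREATE SEQUENCE s1;
CREATE TABLE t1 (a INT) ENGINE=no_such_engine;
-- fails, possibly leaving create_info.used_fields set without db_type
ALTER SEQUENCE s1 RESTART;
-- could previously dereference the null db_type in wsrep_check_sequence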
Problem:
=======
InnoDB wrongly stores the primary key field in an externally
stored off-page column during a bulk insert operation. This leads
to an assertion failure.
Solution:
========
row_merge_buf_blob(): Should store the primary key fields
inline. Store the variable length field data externally
based on the row format of the table.
row_merge_buf_write(): Check whether the record size exceeds
the maximum record size.
row_merge_copy_blob_from_file(): Construct the tuple based on
the variable-length field.
Disallow changing @@gtid_domain_id while a temporary table is open in
STATEMENT or MIXED binlog mode. Otherwise, a slave may try to replicate
events referring to the same temporary table in parallel, using domain-based
out-of-order parallel replication. This is not valid; temporary tables are
only available for use within a single thread at a time.
One concrete consequence seen from this bug was a ROLLBACK on an
InnoDB temporary table running in one domain in parallel with DROP
TEMPORARY TABLE in another domain, causing an assertion inside InnoDB:
InnoDB: Failing assertion: table->get_ref_count() == 0 in
dict_sys_t::remove.
Use an existing error code that's somewhat close to the real issue
(ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO), to not add a
new error code in a GA release. When this is merged to the next GA release,
we could optionally introduce a new and more precise error code for an
attempt to change the domain_id while temporary tables are open.
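A minimal illustration of the now-rejected pattern (statements are only an
example):
SET SESSION binlog_format = STATEMENT;
CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=InnoDB;
SET SESSION gtid_domain_id = 42;
-- now rejected with ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO
DROP TEMPORARY TABLE tmp1;
SET SESSION gtid_domain_id = 42;  -- allowed again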
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>