The problem:
When an incremental backup is taken, delta files are created for InnoDB
tables that are marked as new tables during InnoDB DDL tracking. When such
a tablespace is opened during prepare in
xb_delta_open_matching_space(), it is "created", i.e.
xb_space_create_file() is invoked instead of the file being opened, even if
a tablespace with the same name exists in the base backup directory.
xb_space_create_file() writes the page 0 header of the tablespace.
This header does not contain crypt data, as mariabackup does not have
any information about crypt data in delta file metadata for
tablespaces.
After the delta file is applied, the recovery process is started. As the
order in which pages are recovered is not defined, the redo log record for
the crypt data may be applied only after some other page has already been
read for recovery. When a page is read for recovery, it is decrypted using
the crypt data stored in the tablespace header on page 0; if there is no
crypt data, the page is not decrypted and fails the corruption check.
This causes an error during --prepare of an incremental backup for
encrypted tablespaces.
The error is not stable, because the redo log record updates the crypt
data on page 0, and the pages can be recovered in an undefined order.
The fix:
When a delta file is created, the corresponding write filter copies only
the pages whose LSN is greater than some incremental LSN. When a new file
is created during an incremental backup, the LSN of all its pages must be
greater than the incremental LSN, so there is no need to create a delta
for such a table; we can just copy it completely.
The fix is to copy in full any file that the InnoDB DDL tracker marked as
new during the incremental backup, and to copy it to the base directory
during --prepare instead of applying a delta.
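A minimal sketch of the resulting decision, with hypothetical names (the
real logic lives in the InnoDB DDL tracker and the delta write filter):

  enum class copy_mode { FULL_COPY, DELTA };

  // Every page of a table created after the incremental LSN carries a
  // newer LSN anyway, so a delta would contain the whole file plus
  // per-page metadata.
  copy_mode choose_copy_mode(bool created_during_backup)
  {
    if (created_during_backup)
      return copy_mode::FULL_COPY; // copy the .ibd verbatim; --prepare
                                   // then moves it into the base directory
    return copy_mode::DELTA;       // the write filter keeps pages whose
                                   // page LSN > incremental LSN
  }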
There is also a DBUG_EXECUTE_IF() in the InnoDB code to avoid writing the
redo log record for the crypt data update on page 0, to make the test case
stable.
Note:
The issue is not reproducible in 10.5, as optimized DDL is deprecated in
10.5. But the fix is still useful, because it reduces the amount of data
copied during backup, as a delta file contains some extra information.
The test case should be removed in 10.5, as it will always pass there.
fts_query_t::nested_sub_exp: Keep track of nested
fts_ast_visit_sub_exp() calls.
fts_ast_visit_sub_exp(): Return DB_OUT_OF_MEMORY if the
maximum recursion depth is exceeded.
This is motivated by a change in MySQL 5.6.50:
mysql/mysql-server@e2a46b4834
Bug #29929684 USING MANY NESTED ARGUMENTS WITH BOOLEAN FTS CAN LEAD
TO TERMINATE SERVER
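A hedged sketch of the depth guard, with simplified stand-in types (the
real code uses fts_query_t and dberr_t; the limit value is assumed here
for illustration):

  static const int MAX_NESTED_SUB_EXP = 31;  // assumed limit

  struct query_sketch { int nested_sub_exp = 0; };

  enum class err_sketch { SUCCESS, OUT_OF_MEMORY };

  err_sketch visit_sub_exp(query_sketch *query)
  {
    if (query->nested_sub_exp >= MAX_NESTED_SUB_EXP)
      return err_sketch::OUT_OF_MEMORY;  // refuse to recurse any deeper
    query->nested_sub_exp++;
    // ... visit the FTS abstract syntax sub-tree, possibly recursing ...
    query->nested_sub_exp--;
    return err_sketch::SUCCESS;
  }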
To fix this, it is necessary to add an option to exclude the
database with the name "lost+found" from processing (the database
name will be checked by the check_if_skip_database_by_path() or
by the check_if_skip_database() function, and as a result
"lost+found" will be skipped).
In addition, it is necessary to slightly modify the verification
logic in the check_if_skip_database() function.
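A minimal sketch of the exclusion check (hedged; the real
check_if_skip_database()/check_if_skip_database_by_path() logic also
honors other include/exclude options, and "excluded" stands in for the
value of the new option):

  #include <cstring>

  bool skip_database(const char *name, const char *excluded)
  {
    // "lost+found" is a filesystem artifact, not a database; once
    // excluded, it is simply never processed.
    return excluded != nullptr && strcmp(name, excluded) == 0;
  }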
Also added a new test galera_sst_mariabackup_lost_found.test
Problem:
=======
InnoDB allows a virtual index to be the referenced index in a foreign
key relation. While dropping a virtual column, inplace ALTER closes the
table and reopens it by table name in order to update
dict_table_t::v_cols. While loading the table, it does not allow any
error to be ignored, so InnoDB cannot find the referenced virtual index
and fails to load the table during inplace ALTER.
Solution:
=========
During inplace alter, ignore the foreign key error while loading
the table.
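A hedged sketch of the idea with stand-in types, in the spirit of
InnoDB's dict_err_ignore_t / DICT_ERR_IGNORE_FK_NOKEY:

  enum class ignore_err { NONE, FK_NOKEY };

  struct table_sketch { bool fk_unresolved = false; };

  // Stand-in loader: a referenced virtual index that cannot be found is
  // recorded instead of aborting the load when FK errors are ignored.
  table_sketch *load_table(const char * /*name*/, ignore_err mode)
  {
    table_sketch *t = new table_sketch();
    if (mode == ignore_err::FK_NOKEY)
      t->fk_unresolved = true;  // defer FK resolution; don't fail the load
    return t;
  }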
Reviewed-by: Marko Mäkelä
Instead of pointlessly waiting for a page flush to occur, take
the matter into our own hands and request an explicit flush.
Also, test with the minimum necessary amount of data (0 or 1 rows)
so that both page encryption and decryption will be exercised.
This patch fixes several flaws in the SST scripts that cause
failures while running tests that use IPv6 addresses
for cluster nodes.
First, if the netcat utility is used for streaming (but not socat),
then in accordance with its command line syntax, we need to remove
the square brackets around the IPv6 address. However, for socat,
the address must contain square brackets, as before.
Second, if an IPv6 address is used, then on the joiner side on
a number of systems (such as Debian) we need to specify
the "-6" option explicitly; otherwise a listening socket with an
IPv6 address may not be created.
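For illustration only (the actual fix is in the shell scripts), the
addressing rule can be summarized as:

  #include <string>

  // socat expects "[addr]:port" for IPv6; traditional netcat takes the
  // bare address and the port as separate arguments.
  std::string format_stream_addr(const std::string &host, int port,
                                 bool use_socat)
  {
    const bool ipv6 = host.find(':') != std::string::npos;
    if (use_socat)
      return (ipv6 ? "[" + host + "]" : host) + ":" + std::to_string(port);
    return host + " " + std::to_string(port);  // nc: no brackets; "-6"
                                               // may be needed to listen
  }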
This patch also contains code improvements in wsrep_sst_common.
The code that parses the connection address has been changed to fix
the shortcomings that sometimes led to incorrect parsing of parameters
when using shells other than the latest versions of bash.
Also, this patch removes the duplicated connection address parsing
code that was located in the wsrep_sst_mariabackup file, since all
the necessary actions are already done in wsrep_sst_common, and there
they are done in such a way that any shell is supported, not just bash.
The fix does not require separate tests, since all the
necessary tests are already present in the galera_3nodes suite.
On the contrary, after this fix, tests using IPv6 addresses can
be removed from the disabled list (this will be done in a separate
commit related to MDEV-23659).
Marking a row as deleted in the FTS index happens twice in a
self-referential foreign key relation. So while performing the
referential checks of a foreign key, InnoDB can avoid updating
the FTS index if the foreign key is self-referential.
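A minimal sketch of the check, with simplified fields:

  struct foreign_sketch {
    const void *foreign_table;
    const void *referenced_table;
  };

  // The FTS update is skipped when the FK points back at the same table,
  // because the original DELETE already delete-marked the row in the
  // FTS index.
  bool fk_needs_fts_update(const foreign_sketch &fk)
  {
    return fk.foreign_table != fk.referenced_table;
  }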
Reviewed-by: Marko Mäkelä
Test we can ALTER log tables directly when not being written
to.
This removes the constraint in the rpl_mysql_upgrade.test such
that we can run mysql_upgrade --write-binlog all the way
through to a replica. We test this in the replication scenario
where the mysql.{slow,general}_log tables aren't being
written to.
Reviewers: Vicențiu Ciorbaru, Anel Husakovic
The crash was caused by improper raising of a replication checksum
verification error at the time of server initialization. As there is no
THD object associated with the main initializing thread yet, the error
text should be assigned by calling a respective macro that is aware of
that possibility.
Fixed accordingly.
[When merging to 10.4, the new test result file needs
+# restart: --master_verify_checksum=ON --debug_dbug=+d,corrupt_read_log_event_char
which the mtr run will hint at.]
This regression for debug builds was introduced by
MDEV-23101 (commit 224c950462).
Due to MDEV-16664, the parameter
innodb_lock_schedule_algorithm=VATS
is not enabled by default.
The purpose of the added assertions was to enforce the invariant that
Galera replication cannot be enabled together with VATS due to MDEV-12837.
However, upon closer inspection, it is obvious that the variable 'lock'
may be a null pointer if no match is found in the
previous->hash list.
lock_grant_and_move_on_page(), lock_grant_and_move_on_rec():
Assert !lock->trx->is_wsrep() only after ensuring that lock
is not a null pointer.
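A sketch of the corrected pattern (assert() stands in for ut_ad()):

  #include <cassert>

  struct trx_sketch
  {
    bool wsrep = false;
    bool is_wsrep() const { return wsrep; }
  };
  struct lock_sketch { trx_sketch *trx; };

  void grant_and_move(lock_sketch *lock)
  {
    if (lock == nullptr)
      return;  // no match in the previous->hash list: nothing to grant
    assert(!lock->trx->is_wsrep());  // now safe: lock is non-null
    // ... grant the lock and move it within the queue ...
  }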
1. rr record -h randomizes the number of processors. Disable the THREAD_POOL_SIZE check.
2. Check kernel.perf_event_paranoid for a user-friendly error message.
Changes to be committed:
modified: mysql-test/suite/sys_vars/r/wsrep_cluster_address_basic.result
modified: mysql-test/suite/sys_vars/t/wsrep_cluster_address_basic.test
The setting innodb_lock_schedule_algorithm=VATS that was introduced
in MDEV-11039 (commit 021212b525)
causes conflicting exclusive locks to be incorrectly granted to
two transactions. Specifically, in lock_rec_insert_by_trx_age()
the predicate !lock_rec_has_to_wait_in_queue(in_lock) would hold even
though an active transaction is already holding an exclusive lock.
This was observed between two DELETE of the same clustered index record.
The HASH_DELETE invocation in lock_rec_enqueue_waiting() may be related.
Due to lack of progress in diagnosing the problem, we will deprecate the
option and issue a warning that using it may corrupt data. The unsafe
option was enabled between
commit 0c15d1a6ff (MariaDB 10.2.3)
and the parent of
commit 1cc1d0429d (MariaDB 10.2.17, 10.3.9).
Analysis:
========
"mysqlbinlog -v" option will reconstruct row events and display them as
commented SQL statements. If this option is given twice, the output includes
comments to indicate column data types and some metadata.
`log_event_print_value` is the function responsible for printing values and
their types. This function does not handle the GEOMETRY type; hence the
above error gets printed.
Fix:
===
Add support for GEOMETRY datatype.
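A hedged sketch of the kind of case that gets added to
log_event_print_value(): one plausible rendering of a GEOMETRY value is a
hex dump, similar to other BLOB-like column types:

  #include <cstdio>
  #include <cstddef>

  void print_geometry_value(FILE *out, const unsigned char *val,
                            size_t len)
  {
    fputs("'", out);
    for (size_t i = 0; i < len; i++)
      fprintf(out, "%02x", val[i]);   // raw WKB bytes as hex
    fputs("' /* GEOMETRY */", out);
  }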
The statement ALTER TABLE...DISCARD TABLESPACE is problematic,
because its designed purpose is to break the referential integrity
of the data dictionary and make a table point to nowhere.
ha_innobase::commit_inplace_alter_table(): Check whether the
table has been discarded. (This is a bit late to check it, right
before committing the change.) Previously, we performed this check
only in a specific branch of the function commit_set_autoinc().
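A minimal sketch of the late check, with a stand-in struct (in the real
code the state comes from flags such as DICT_TF2_DISCARDED):

  struct table_state_sketch { bool discarded; };

  bool may_commit_rebuild(const table_state_sketch &table)
  {
    // Rebuilding a discarded table would create a tablespace that
    // points to nowhere, so refuse right before committing the ALTER.
    return !table.discarded;
  }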
Note: We intentionally allow non-rebuilding ALTER TABLE even if
the tablespace has been discarded, to remain compatible with MySQL.
(See the various tests with "wl5522" in the name, such as
innodb.innodb-wl5522.)
The test case would crash starting with 10.3 only, but it does not hurt
to minimize the code and test difference between 10.2 and 10.3.
The test that was added in commit e05650e686
would break a subsequent run of the test encryption.innodb-bad-key-change
because some pages in the system tablespace would be encrypted with
a different key.
The failure was repeatable with the following invocation:
./mtr --no-reorder \
encryption.create_or_replace,cbc \
encryption.innodb-bad-key-change,cbc
Because the crash was unrelated to the code changes that we reverted
in commit eb38b1f703,
we can safely re-apply those fixes.
After DISCARD TABLESPACE, the tablespace of a table will no longer
exist, and dict_get_and_save_data_dir_path() would invoke
dict_get_first_path() to read an entry from SYS_DATAFILES.
For some reason, DISCARD TABLESPACE would not remove the entry
from there.
dict_get_and_save_data_dir_path(): If the tablespace has been
discarded, do not bother trying to read the name.
Side note: The tables SYS_TABLESPACES and SYS_DATAFILES are
redundant and subject to removal in MDEV-22343.
Let us shrink the test encryption.create_or_replace so that it can
run on the CI system, also on the embedded server.
encryption.create_or_replace_big: Renamed from the original test,
with the subset of encryption.create_or_replace omitted.
log_group_read_log_seg() returns an error when:
1) The calculated log block number does not correspond to the log block
number that was read. This can be caused by:
a) Garbage or an incompletely written log block. We can exclude this
case by checking the log block checksum, if it is enabled (see
innodb-log-checksums; an encrypted log block always contains a checksum).
b) The log block has been overwritten. In this case the checksum will be
correct, and the log block number that was read will be greater than the
requested one.
2) The log block length is wrong. In this case recv_sys->found_corrupt_log
is set.
3) The redo log block checksum is wrong. In this case the InnoDB code
writes messages to the error log with the following prefix: "Invalid log
block checksum."
The fix handles all of the cases above.
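For orientation, a hedged sketch that classifies these outcomes (the enum
and function are hypothetical; the real code reports them through
recv_sys and error-log messages):

  #include <cstdint>

  enum class log_read_result {
    OK,
    BLOCK_OVERWRITTEN,   // 1b: checksum valid, block number > requested
    GARBAGE_OR_PARTIAL,  // 1a/3: bad checksum or half-written block
    CORRUPT_LENGTH       // 2: recv_sys->found_corrupt_log is set
  };

  log_read_result classify(bool length_ok, bool checksum_ok,
                           uint64_t read_no, uint64_t requested_no)
  {
    if (!length_ok)   return log_read_result::CORRUPT_LENGTH;
    if (!checksum_ok) return log_read_result::GARBAGE_OR_PARTIAL;
    if (read_no > requested_no)
                      return log_read_result::BLOCK_OVERWRITTEN;
    return log_read_result::OK;
  }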
Problem:
=======
SHOW BINLOG EVENTS FROM <"random"-pos> caused a variety of failures, as
reported in MDEV-18046. They were fixed, but that approach is not
future-proof, and it is not optimal to add extra checks for the
parameters of each event being constructed.
Analysis:
=========
"show binlog events from <pos>" code considers the user given position as a
valid event start position. The code starts reading data from this event start
position onwards and tries to map it to a set of known events. Each event has
a specific event structure and asserts have been added to ensure that, read
event data, satisfies the event specific requirements. When a random position
is supplied to "show binlog events command" the event structure specific
checks will fail and they result in assert.
For example: https://jira.mariadb.org/browse/MDEV-18046
In the bug description, the user executes CREATE TABLE/INSERT and ALTER
SQL commands.
When an arbitrary offset such as in "SHOW BINLOG EVENTS FROM 365" is
provided, the code assumes offset 365 to be a valid event begin, proceeds
to EVENT_LEN_OFFSET, reads some random length, and comes up with a bogus
event that does not exist in the binary log. In this quoted example
scenario, the event read at offset 365 is taken to be an
"Update_rows_log_event", which is not present in the binary log.
Since this is a random event, its validation fails and the code ends in an
assert or segmentation fault, as shown below.
mysqld: /data/src/10.4/sql/log_event.cc:10863: Rows_log_event::Rows_log_event(
const char*, uint, const Format_description_log_event*):
Assertion `var_header_len >= 2' failed.
181220 15:27:02 [ERROR] mysqld got signal 6 ;
#7 0x00007fa0d96abee2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#8 0x000055e744ef82de in Rows_log_event::Rows_log_event (this=0x7fa05800d390,
buf=0x7fa05800d080 "", event_len=254, description_event=0x7fa058006d60) at
/data/src/10.4/sql/log_event.cc:10863
#9 0x000055e744f00cf8 in Update_rows_log_event::Update_rows_log_event
Since we are reading random data, repeating the same SHOW BINLOG EVENTS
FROM 365 command produces different types of crashes with different
events. MDEV-18046 reported 10 such crashes.
In order to avoid such scenarios, the user-provided starting offset needs
to be validated for correctness. The best way of doing this is to make use
of checksums, if they are available. The MDEV-18046 fix introduced
checksum-based validation.
The issue still remains in cases where binlog checksums are disabled; see
the following bug reports.
MDEV-22473: binlog.binlog_show_binlog_event_random_pos failed in buildbot,
server crashed in read_log_event
MDEV-22455: Server crashes in Table_map_log_event,
binlog.binlog_invalid_read_in_rotate failed in buildbot
Fix:
====
When the binlog checksum is disabled, perform a scan (reading event by
event) to validate the requested FROM <pos> offset. Starting from offset
4, read the event_length of the next event in the binary log, and use that
length to advance the current offset to the next event. Repeat this
process while the current offset is less than the requested offset. If the
current offset ends up higher than the requested offset, the requested
offset does not point to the beginning of an event, so report an
appropriate invalid offset error.
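A hedged sketch of this validation walk (the reader callback is a
hypothetical stand-in for the actual event-reading code):

  #include <cstdint>
  #include <functional>

  // Offset 4 is the first event after the binlog magic bytes.
  bool is_valid_event_offset(
      uint64_t requested,
      const std::function<uint32_t(uint64_t)> &event_len_at)
  {
    uint64_t offset = 4;
    while (offset < requested) {
      const uint32_t len = event_len_at(offset);  // event_length field
      if (len == 0)
        return false;                             // truncated/corrupt log
      offset += len;                              // advance to next event
    }
    return offset == requested;  // overshoot => <pos> is not an event start
  }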
Parse the SHOW SLAVE STATUS output for the "Using_Gtid" column. If the
value is "No", then the old-style log file name and position are backed
up; otherwise gtid_slave_pos is backed up.
virtual column in index
Problem:
row_ins_foreign_fill_virtual() unconditionally set virtual fields to
NULL, even when the field is not part of the foreign key
(but is part of an index).
Solution:
The new virtual value should be computed with regard to cascade updates.