for passing ones.
Changes to be committed:
new file: mysql-test/std_data/galera-cert.pem
new file: mysql-test/std_data/galera-key.pem
new file: mysql-test/std_data/galera-upgrade-ca-cert.pem
new file: mysql-test/std_data/galera-upgrade-server-cert.pem
new file: mysql-test/std_data/galera-upgrade-server-key.pem
modified: mysql-test/suite/galera/disabled.def
modified: mysql-test/suite/galera/r/MW-416.result
modified: mysql-test/suite/galera/r/MW-44.result
modified: mysql-test/suite/galera/r/galera_sst_mysqldump_with_key,debug.rdiff
modified: mysql-test/suite/galera/r/galera_sst_mysqldump_with_key.result
modified: mysql-test/suite/galera/t/MW-416.test
modified: mysql-test/suite/galera/t/galera_kill_applier.test
- Ported the MySQL Bug#20597981 test case to mariadb-10.2
- InnoDB never used fts_doc_id_in_read_set. Basically, it tells
InnoDB to read the fts_doc_id from the index record itself
(see the sketch below).
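For context, the flag matters for queries that read the hidden FTS_DOC_ID
column; a minimal illustration (table and column names are hypothetical):

  CREATE TABLE t1 (FTS_DOC_ID BIGINT UNSIGNED NOT NULL,
                   txt TEXT, FULLTEXT KEY ftx (txt)) ENGINE=InnoDB;
  -- fts_doc_id_in_read_set would indicate that FTS_DOC_ID can be
  -- taken from the index record during a lookup like this
  SELECT FTS_DOC_ID FROM t1 WHERE MATCH(txt) AGAINST ('word');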
Problem:
=======
Executing the test with the following options results in a test failure:
./mtr rpl.kill_race_condition{,,,,,,,,,,} --repeat=10 --par 12 --mem
Fix:
====
The test simulates an applier thread kill scenario while applying a row
event, but it does not wait for the applier to catch the error and stop.
Added: wait_for_slave_sql_error.inc to catch the error.
The test uses START SLAVE as a final step and does not wait for both
threads to start.
Added: start_slave.inc
The test also allowed non-deterministic execution because of the
Slave_connections status variable, which cannot be reset.
Fixed by expecting a correct value for Slaves_connected instead.
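A sketch of the resulting synchronization pattern in mtr (the expected
error number is hypothetical; both include files are standard parts of the
test framework):

  --connection slave
  --let $slave_sql_errno= 1032
  --source include/wait_for_slave_sql_error.inc
  # ... checks ...
  --source include/start_slave.inc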
- Introduce a new boolean variable, innodb_encrypt_temporary_tables, which
decides whether to encrypt the temporary tablespace.
- Encrypt the temporary tablespace based on the full checksum format.
- Introduce a new counter to track encrypted and decrypted temporary
tablespace pages.
- Issue warnings if a temporary table is created with an encryption
attribute that conflicts with innodb_encrypt_temporary_tables.
- Add a new test case which reads and writes pages from/to the temporary
tablespace (a minimal sketch follows below).
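A minimal sketch of such a test (the option is assumed to be enabled at
server startup, since it is not dynamic; the status variable prefix is an
assumption):

  CREATE TEMPORARY TABLE t1 (a INT, b TEXT) ENGINE=InnoDB;
  INSERT INTO t1 SELECT seq, REPEAT('x', 1024) FROM seq_1_to_1000;
  SELECT COUNT(*) FROM t1;
  SHOW STATUS LIKE 'Innodb_encryption_n_temp%';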
Added a condition in the innochecksum tool to check for a page id mismatch;
this can catch write corruption caused by InnoDB.
Added a debug injection inside fil_io() to check whether it writes
a page to the wrong offset.
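A usage sketch of the tool against a data file (the path is hypothetical):

  innochecksum data/test/t1.ibd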
Since the purpose of the event is just to check on the second node whether
it was created or not, and we are not going to execute the event, instead
of setting GLOBAL event_scheduler=ON and then turning it off we can just
disable the warning, as sketched below.
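A sketch of the pattern in an mtr test (the event name is hypothetical):

  --disable_warnings
  CREATE EVENT e1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR
  DO SELECT 1;
  --enable_warnings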
The problem was that the code in maria_extra assumed that there could be
only one table open when doing maria_extra(MA_FORCE_REOPEN).
However, in the case of triggers there can be multiple copies of
the table open.
Fixed by removing the assert.
Also, move part of the test back to innodb.innodb_mysql
and another part to a new test innodb.purge.
Last but not least, merge the tests innodb_zip.4k and innodb_zip.8k
into innodb_zip.page_size.
The test cases for the MDEV found several independent bugs
in MariaDB server and Aria:
- If a temporary table was marked as crashed, it could never
be deleted.
- Opening a crashed temporary table gave an error message,
but the error was never forwarded to the caller, which caused
an assert() in my_ok().
- init_read_record() did mmap of all temporary tables, which is
probably not a good idea, as this area can potentially be
very big. Changed the code to only mmap internal temporary tables.
- mmap-ed tables were not unmapped in case of repair/optimize,
which caused bad data in the table and crashes if the original
table files were replaced with new ones (as the old mmap
was still in place). Fixed by removing the mmap in case
of repair (see the repro sketch after this list).
- Cleaned up usage of the code that disabled mmap in Aria.
In collaboration with Sergey Vojtovich <svoj@mariadb.org>
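A repro sketch for the repair/optimize path described above (the table name
is hypothetical):

  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=Aria;
  INSERT INTO t1 VALUES (1),(2),(3);
  REPAIR TABLE t1;
  OPTIMIZE TABLE t1;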
The COMPRESSED clause is now a part of the data type and goes immediately
after the data type and length, but before the CHARACTER SET clause,
and before column attributes such as DEFAULT, COLLATE, ON UPDATE,
SYSTEM VERSIONING, and engine-specific column attributes.
In the old grammar reduction, the COMPRESSED clause was a column attribute.
New syntax:
<varchar or text data type> <length> <compression> <character set> <column attributes>
<varbinary or blob data type> <length> <compression> <column attributes>
New syntax examples:
VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT ''
BLOB COMPRESSED DEFAULT ''
Deprecated syntax examples:
VARCHAR(1000) CHARACTER SET latin1 COMPRESSED DEFAULT ''
TEXT CHARACTER SET latin1 DEFAULT '' COMPRESSED
VARBINARY(1000) DEFAULT '' COMPRESSED
As a side effect:
- COMPRESSED is not valid as an SP label name in SQL/PSM routines any more
(but it's still valid as an SP label name in sql_mode=ORACLE)
- COMPRESSED is now allowed in combination with GENERATED ALWAYS AS:
TEXT COMPRESSED GENERATED ALWAYS AS (REPEAT('a',1000))
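A minimal sketch putting the new ordering together in a full statement
(table and column names are hypothetical):

  CREATE TABLE t1
  (
    a VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT '',
    b BLOB COMPRESSED,
    c TEXT COMPRESSED GENERATED ALWAYS AS (REPEAT('a',1000))
  );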
There were two separate problems:
- The Aria pagecache didn't properly handle re-reading of blocks
that had given errors before (this triggered an assert).
- Temporary tables that were opened several times were
not properly closed in ALTER, REPAIR, or OPTIMIZE TABLE.
Other things:
- Added a couple of asserts that will make it easier to
find problems like this in the future.
Problem:
=========
One of the purge threads accesses the corrupted page and tries to remove it
from the LRU list. In the meantime, other purge threads are waiting for the
same page in buf_wait_for_read(). The assertion (buf_fix_count == 0) fails
for the purge thread which tries to remove the page from the LRU list.
Solution:
========
- Set the page id to FIL_NULL to indicate that the page is corrupted before
removing the block from the LRU list. Acquire the hash lock for the
particular page id and wait for the other threads to release their
buf_fix_count references to the block.
- Added an error check for btr_cur_open() in row_search_on_row_ref().
Before killing the server, ensure that the incomplete state of
the transaction will be made durable and will be applied and
rolled back on recovery, so that each time, roughly the same
amount of work will be done.
Remove DML statements after the recovery, and execute
CHECK TABLE instead.
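A sketch of the resulting test shape in mtr (the include files are standard
parts of the MariaDB test framework; the table name is hypothetical):

  --source include/kill_mysqld.inc
  --source include/start_mysqld.inc
  CHECK TABLE t1;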
Remove the test, because it easily fails with a result difference.
Analysis by Thirunarayanan Balathandayuthapani:
By default, innodb_encrypt_tables=0.
1) The test case creates 100 tables in innodb_encrypt_1.
2) Creates another 100 unencrypted tables (encryption=off) in innodb_encrypt_2.
3) Creates another 100 encrypted tables (encryption=on) in innodb_encrypt_3.
4) Enables innodb_encrypt_tables=1 and checks that only
100 encrypted tables exist (we already have 100 in the dictionary).
5) Opens all tables again (no idea why).
6) After that, sets innodb_encrypt_tables=0 and waits for 100 tables
to be decrypted (we already have 100 unencrypted tables).
7) Drops all databases.
The sporadic failure happens because after step 4 the encryption threads
could encrypt the normal tables too, because innodb_encryption_threads=4.
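A sketch of the kind of check steps 4) and 6) perform (the exact wait
condition in the test may differ):

  SET GLOBAL innodb_encrypt_tables = ON;
  SELECT COUNT(*) FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
  WHERE MIN_KEY_VERSION <> 0;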
This test was added in MDEV-9931, which was about InnoDB startup being
slow due to all .ibd files being opened. There have been a number of
later fixes to this problem. Currently the latest one is
commit cad56fbaba, in which some tests
(in particular the test innodb.alter_kill) could fail if all InnoDB
.ibd files are read during startup. That could make this test redundant.
Let us remove the test, because it is big, slow, unreliable, and
does not seem to reliably catch the problem that all files are being
read on InnoDB startup.
Problem:
=======
fil_iterate() writes the imported tablespace's page0 as-is to the discarded
tablespace; the space id is not even changed. While opening the tablespace,
it then fails with a space id mismatch error.
Fix:
====
fil_iterate() now copies page0 with the discarded space id to the imported
tablespace.
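For context, this code path is exercised by the transportable tablespace
flow; a minimal sketch (the table name is hypothetical):

  ALTER TABLE t1 DISCARD TABLESPACE;
  -- copy the .ibd (and .cfg) files from the source server into place
  ALTER TABLE t1 IMPORT TABLESPACE;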
fix MDEV-18750: mysqlbinlog flashback failure on large binlog
fix mysqlbinlog flashback failure caused by reading the io_cache without
the MY_FULL_IO flag
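A usage sketch of the feature that was failing (the binlog file name is
hypothetical):

  mysqlbinlog --flashback mysql-bin.000001 > flashback.sql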