Fix this patch (two csets before):
Disable rocksdb.shutdown test
It was introduced by this patch in fb/mysql-5.6:
Author: Yoshinori Matsunobu <yoshinori@fb.com>
Date: Mon Jun 10 14:09:28 2019 -0700
Extending SHUTDOWN query to support read_only/aborting
Summary:
This diff extends SHUTDOWN query to support the following
features.
- Aborting with any specified exit code (range is 0..255).
If nothing is specified or 0 is given, it does default clean
shutdown. If 1+ is given, exits with the given error code
immediately. This is helpful to shutting down instance
even if it is stuck somewhere.
MariaDB doesn't support the SHUTDOWN statement, nor does it have any
other way to exit the server process.
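A minimal standalone sketch of the semantics described above (0 means a
clean shutdown, 1..255 means exit immediately with that code);
handle_shutdown() and clean_shutdown() are made-up names, not the
fb/mysql-5.6 implementation:

  #include <cstdint>
  #include <cstdio>
  #include <cstdlib>

  // Stand-in for the server's orderly shutdown path.
  static void clean_shutdown()
  {
    std::puts("closing storage engines, flushing logs...");
  }

  static void handle_shutdown(std::uint8_t exit_code)
  {
    if (exit_code == 0)
    {
      clean_shutdown();      // no code (or 0) given: clean shutdown
      std::exit(0);
    }
    std::_Exit(exit_code);   // 1..255: exit immediately, skipping all
                             // cleanup, so the server can be brought
                             // down even when it is stuck somewhere
  }

  int main() { handle_shutdown(0); }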
Use RocksDB debug sync points to introduce a sync delay. This allows
commits to get grouped even when the datadir is on a ramdisk. For some
unclear reason the effect is visible with write_prepared but not with
write_committed, so run the test only with write_prepared.
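For reference, a hedged sketch of how RocksDB's debug sync-point
facility is driven from C++ (debug builds only; sync points are
compiled out under NDEBUG). The sync point name and the 10 ms delay
here are illustrative, not necessarily what the test uses:

  #include <chrono>
  #include <thread>
  #include "test_util/sync_point.h"  // "util/sync_point.h" in older trees

  static void install_commit_delay()
  {
    auto* sp = rocksdb::SyncPoint::GetInstance();
    // Sleep briefly whenever the write thread reaches this point, so
    // that concurrent transactions pile up and get group-committed
    // even when the datadir is on a ramdisk.
    sp->SetCallBack("WriteThread::JoinBatchGroup:Wait", [](void*) {
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    });
    sp->EnableProcessing();
  }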
Problem:
=======
Checksum fields can have a value of zero. In that case, InnoDB falsely
concludes that the page should be all zeroes. This leads to wrong
detection of page corruption.
Solution:
========
Remove the condition that assumes the page must be all zeroes whenever
its checksum fields are zero.
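A minimal sketch of the safe check: decide that a page is empty only by
scanning every byte, never by looking at the checksum fields alone
(this mirrors the idea of the fix, not InnoDB's actual
buf_page_is_zeroes()):

  #include <algorithm>
  #include <cstddef>

  // A valid page may legitimately carry zero checksum fields, so the
  // only reliable "all zeroes" test is a full scan of the page.
  static bool page_is_all_zeroes(const unsigned char* page,
                                 std::size_t page_size)
  {
    return std::all_of(page, page + page_size,
                       [](unsigned char b) { return b == 0; });
  }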
which point to the table being altered
Problem:
========
InnoDB failed to change the column name present in the foreign key
cache for an instant ADD COLUMN operation. This leads to a column
mismatch on a subsequent rename of the column.
Solution:
=========
Evict the foreign key information from the cache and reload it for the
instant operation.
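A hypothetical miniature of the evict-and-reload idea (the cache,
struct and loader below are stand-ins, not InnoDB's dict_foreign_t
machinery): instead of patching column names inside cached foreign key
objects, drop the entry and rebuild it from the dictionary:

  #include <string>
  #include <unordered_map>

  struct ForeignKeyInfo {
    // referenced table, foreign/referenced column names, ...
  };

  static std::unordered_map<std::string, ForeignKeyInfo> fk_cache;

  // Stand-in for reloading the definition from the data dictionary.
  static ForeignKeyInfo load_fk_from_dictionary(const std::string& /*table*/)
  {
    return ForeignKeyInfo{};
  }

  static void refresh_fk_after_rename(const std::string& table)
  {
    fk_cache.erase(table);                                    // evict
    fk_cache.emplace(table, load_fk_from_dictionary(table));  // reload
  }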
Problem:
=======
During online ALTER, the fts tokenization thread uses the new table's
page size to read externally stored pages from the old table. If the
ALTER changes the page size, this makes the ALTER TABLE fail.
Solution:
=========
The fts tokenization thread should use the old table's page size to
read externally stored pages from the old table.
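A hedged sketch of the rule the fix enforces, with made-up types and
helpers: the stored layout of a BLOB page is defined by the table that
wrote it, so reads must use the old table's page size:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct TableDef { std::size_t page_size; };

  // Stand-in for the storage layer; returns a zero-filled page here.
  static std::vector<std::uint8_t> read_extern_page(std::uint64_t /*page_no*/,
                                                    std::size_t page_size)
  {
    return std::vector<std::uint8_t>(page_size, 0);
  }

  static std::vector<std::uint8_t>
  fts_read_old_blob(const TableDef& old_table, const TableDef& /*new_table*/,
                    std::uint64_t page_no)
  {
    // Read with old_table.page_size, not new_table.page_size: the
    // on-disk layout is defined by the table that wrote the page.
    return read_extern_page(page_no, old_table.page_size);
  }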
Problem:
========
There is a possibility of more concurrent DMLs arriving while the
ALTER TABLE thread is waiting to upgrade to MDL_EXCLUSIVE before the
commit phase. In the commit phase, InnoDB acquires dict_operation_lock
while it already holds MDL_EXCLUSIVE on the table, and only then
applies the concurrent DML logs. This can block the following:
1) DML on the table being altered (due to MDL_EXCLUSIVE on the table)
2) InnoDB DDLs (due to dict_operation_lock)
3) The purge thread, stats thread and master thread (due to
   dict_operation_lock)
Fix:
====
Apply the concurrent DML logs in the commit phase, but before
acquiring dict_operation_lock. This ensures that (2) and (3) cannot be
blocked for a long time.
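An abstract sketch of the reordering with illustrative names
(std::shared_mutex stands in for InnoDB's rw-latch; none of this is
the real commit-phase code):

  #include <mutex>
  #include <shared_mutex>

  static std::shared_mutex dict_operation_lock;  // models the rw-latch

  static void apply_concurrent_dml_logs() { /* replay the row log */ }
  static void commit_dictionary_change()  { /* update dict tables  */ }

  static void commit_inplace_alter()
  {
    // MDL_EXCLUSIVE is already held here, so no new DML can extend the
    // log; replaying it before taking the latch keeps (2) and (3)
    // unblocked while the bulk of the work happens.
    apply_concurrent_dml_logs();

    std::unique_lock<std::shared_mutex> latch(dict_operation_lock);
    commit_dictionary_change();  // the critical section is now short
  }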
Basic idea of the patch: disallow creating tables whose rows would be
too big to insert. In other words, if a user was able to create a
table, they should never see an error like 'can not insert row as it
is too big for current page size'.
SET innodb_strict_mode=OFF; will still allow creating such overlong
tables, but only a warning will be issued.
dict_table_t::get_overflow_field_local_len(): this function returns
the maximum local field length for overflow fields for every file and
row format.
innobase_check_column_length(): renamed to too_big_key_part_length()
and reused in a different part of the code.
create_table_info_t::prepare_create_table(): add a check for the
maximum allowed key part length to keep ALGORITHM=COPY behavior
similar to ALGORITHM=INPLACE behavior. The affected test is
innodb.strict_mode.
Rename dict_index_too_big_for_tree() to
dict_index_t::rec_potentially_too_big() and copy the overflow-related
size computation from dtuple_convert_big_rec(). A lot of tests were
changed because of that. I wonder whether users will complain about
it?
The test innodb.max_record_size exercises
dict_index_t::rec_potentially_too_big() for different row formats and
page sizes.
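A back-of-envelope sketch of the "potentially too big" test: assume
every long column overflows and shrinks to its local length, then
require that the record still fits. The 768-byte local prefix and
20-byte field reference match the well-known InnoDB layout, but the
page_size / 2 bound is a simplification, not InnoDB's exact formula:

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  enum class RowFormat { REDUNDANT, COMPACT, DYNAMIC, COMPRESSED };

  // 768-byte local prefix plus a 20-byte field reference is kept for
  // overflow columns in REDUNDANT/COMPACT; only the 20-byte reference
  // in DYNAMIC/COMPRESSED.
  static std::size_t overflow_field_local_len(RowFormat f)
  {
    const std::size_t field_ref_size = 20;
    if (f == RowFormat::REDUNDANT || f == RowFormat::COMPACT)
      return 768 + field_ref_size;
    return field_ref_size;
  }

  static bool rec_potentially_too_big(const std::vector<std::size_t>& max_lens,
                                      RowFormat fmt, std::size_t page_size)
  {
    std::size_t rec_size = 0;
    for (std::size_t len : max_lens)
      rec_size += std::min(len, overflow_field_local_len(fmt));
    // At least two records must fit on a leaf page (simplified bound).
    return rec_size > page_size / 2;
  }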
In row_ins_foreign_check_on_constraint(), the clustered index record
is passed to wsrep_append_foreign_key() after the latch has been
released. If another thread modifies the record in the meantime, this
can lead to a crash when wsrep_rec_get_foreign_key() tries to access
the record.
row_ins_foreign_check_on_constraint(): use cascade->pcur->old_rec
instead of clust_rec.
row_ins_check_foreign_constraint(): add a missing error printout.
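A generic illustration of the rule behind the fix (a plain std::mutex
standing in for the page latch): copy everything you need from the
record while the latch is held, and only ever use the copy afterwards,
which is what switching to cascade->pcur->old_rec achieves:

  #include <cstddef>
  #include <mutex>
  #include <vector>

  static std::mutex page_latch;  // stand-in for the real page latch

  // Copy the key bytes while the latch is held; the raw record pointer
  // must not be dereferenced after release, since another thread may
  // have modified the page by then.
  static std::vector<unsigned char> read_key_safely(const unsigned char* rec,
                                                    std::size_t len)
  {
    std::lock_guard<std::mutex> guard(page_latch);
    return std::vector<unsigned char>(rec, rec + len);
  }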