The statement ALTER TABLE...DISCARD TABLESPACE is problematic,
because its designed purpose is to break the referential integrity
of the data dictionary and make a table point to nowhere.
ha_innobase::commit_inplace_alter_table(): Check whether the
table has been discarded. (This is a bit late to check it, right
before committing the change.) Previously, we performed this check
only in a specific branch of the function commit_set_autoinc().
Note: We intentionally allow non-rebuilding ALTER TABLE even if
the tablespace has been discarded, to remain compatible with MySQL.
(See the various tests with "wl5522" in the name, such as
innodb.innodb-wl5522.)
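For illustration, a minimal standalone sketch of the check described above; the types and the commit entry point are simplified stand-ins, not the actual ha_innobase code:

#include <stdexcept>
#include <string>

// Simplified model: a table whose tablespace was discarded must not
// accept a table-rebuilding ALTER TABLE, while non-rebuilding ALTER
// is still allowed (for MySQL compatibility, as noted above).
struct table_def {
  std::string name;
  bool tablespace_discarded;  // set by ALTER TABLE ... DISCARD TABLESPACE
  bool rebuild_requested;     // true if the ALTER would rebuild the table
};

void commit_inplace_alter(const table_def &t) {
  if (t.rebuild_requested && t.tablespace_discarded)
    throw std::runtime_error("tablespace has been discarded for " + t.name);
  // ... commit the new table definition ...
}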
The test case would only crash starting with 10.3, but it does not hurt
to minimize the code and test differences between 10.2 and 10.3.
The test that was added in commit e05650e686
would break a subsequent run of the test encryption.innodb-bad-key-change
because some pages in the system tablespace would be encrypted with
a different key.
The failure was repeatable with the following invocation:
./mtr --no-reorder \
encryption.create_or_replace,cbc \
encryption.innodb-bad-key-change,cbc
Because the crash was unrelated to the code changes that we reverted
in commit eb38b1f703,
we can safely re-apply those fixes.
After DISCARD TABLESPACE, the tablespace of a table will no longer
exist, and dict_get_and_save_data_dir_path() would invoke
dict_get_first_path() to read an entry from SYS_DATAFILES.
For some reason, DISCARD TABLESPACE does not remove the entry
from there.
dict_get_and_save_data_dir_path(): If the tablespace has been
discarded, do not bother trying to read the name.
Side note: The tables SYS_TABLESPACES and SYS_DATAFILES are
redundant and subject to removal in MDEV-22343.
Let us shrink the test encryption.create_or_replace so that it can
run on the CI system, also on the embedded server.
encryption.create_or_replace_big: Renamed from the original test,
with the subset of encryption.create_or_replace omitted.
log_group_read_log_seg() returns error when:
1) Calculated log block number does not correspond to read log block
number. This can be caused by:
a) Garbage or an incompletely written log block. We can exclude this
case by checking the log block checksum, if it is enabled (see
innodb-log-checksums; encrypted log blocks always contain a checksum).
b) The log block has been overwritten. In this case the checksum will be
correct and the read log block number will be greater than the requested one.
2) When log block length is wrong. In this case recv_sys->found_corrupt_log
is set.
3) When the redo log block checksum is wrong. In this case the InnoDB
code writes messages to the error log with the following prefix: "Invalid
log block checksum."
The fix processes all the cases above.
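A standalone sketch of how the cases above can be told apart when validating a freshly read block. The 512-byte block size, the header/trailer offsets and the flush-bit mask follow the classic InnoDB redo log block layout, but the helper names and structure are illustrative, not the actual InnoDB functions:

#include <cstddef>
#include <cstdint>

enum class block_status { OK, OVERWRITTEN, CORRUPT };

static const size_t LOG_BLOCK_SIZE = 512;

// Big-endian 32-bit read, as used in the redo log block header/trailer.
static uint32_t read_be32(const uint8_t *p) {
  return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
       | uint32_t(p[2]) << 8  | uint32_t(p[3]);
}

block_status validate_log_block(const uint8_t *block, uint32_t expected_no,
                                bool checksums_enabled,
                                uint32_t (*crc32)(const uint8_t *, size_t)) {
  // Case 1a / case 3: garbage or a wrong checksum (only detectable when
  // checksums are enabled; encrypted blocks always carry one).
  if (checksums_enabled &&
      read_be32(block + LOG_BLOCK_SIZE - 4) != crc32(block, LOG_BLOCK_SIZE - 4))
    return block_status::CORRUPT;

  // Case 1b: the block was overwritten by newer log; the stored block
  // number is then greater than the one we asked for.
  // (The top bit of the stored number is a flush flag in the on-disk format.)
  if ((read_be32(block) & 0x7fffffffU) > expected_no)
    return block_status::OVERWRITTEN;

  // Case 2 (wrong block length) is detected later by the parser, which
  // sets recv_sys->found_corrupt_log.
  return block_status::OK;
}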
The presumed reason for the error is that the file was opened
by a 3rd-party antivirus or backup program, causing ERROR_SHARING_VIOLATION
on rename.
The fix, actually a workaround, is to retry MoveFileEx a couple of times
before finally giving up. We expect 3rd-party programs not to hold the file
for an extended time.
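A sketch of the retry workaround using the Win32 API directly; the flag combination, retry count and delay below are illustrative choices, not the exact values of the fix:

#ifdef _WIN32
#include <windows.h>

static BOOL move_file_with_retry(const wchar_t *from, const wchar_t *to)
{
  for (int attempt = 0; attempt < 5; attempt++) {
    if (MoveFileExW(from, to, MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED))
      return TRUE;
    if (GetLastError() != ERROR_SHARING_VIOLATION)
      break;        // some other error: retrying will not help
    Sleep(200);     // give the antivirus/backup tool time to close the file
  }
  return FALSE;     // caller reports the final GetLastError()
}
#endif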
Parse the SHOW SLAVE STATUS output for the "Using_Gtid" column. If the value
is "No", then the old log file name and position are backed up; otherwise
gtid_slave_pos is backed up.
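A standalone sketch of that decision using the MySQL/MariaDB C API; error handling and the actual backup of the binlog coordinates or gtid_slave_pos are left out:

#include <mysql.h>
#include <cstring>

// Returns true when the slave uses GTID positioning, i.e. the
// "Using_Gtid" column of SHOW SLAVE STATUS is anything but "No".
static bool slave_uses_gtid(MYSQL *con)
{
  if (mysql_query(con, "SHOW SLAVE STATUS"))
    return false;                          // treat errors as "no GTID" here
  MYSQL_RES *res = mysql_store_result(con);
  if (!res)
    return false;

  bool uses_gtid = false;
  unsigned n = mysql_num_fields(res);
  MYSQL_FIELD *fields = mysql_fetch_fields(res);
  if (MYSQL_ROW row = mysql_fetch_row(res)) {
    for (unsigned i = 0; i < n; i++)
      if (!strcmp(fields[i].name, "Using_Gtid"))
        uses_gtid = row[i] && strcmp(row[i], "No") != 0;
  }
  mysql_free_result(res);
  // true  -> back up gtid_slave_pos
  // false -> back up the old log file name and position
  return uses_gtid;
}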
virtual column in index
Problem:
row_ins_foreign_fill_virtual unconditionally set virtual fields to NULL
even when the field is not part of the foreign key
(but is part of an index)
Solution:
The new virtual value should be computed with regard to cascade updates.
fts_drop_orphaned_tables() takes a long time to remove the orphaned
FTS tables. In order to reduce the time, do the following:
- Traverse fil_system.space_list and construct a set of
table_id,index_id of all FTS_*.ibd tablespaces.
- Traverse the SYS_INDEXES table and remove the matching entry
from the above collection if it exists.
- The elements remaining in the collection can be considered
orphaned FTS tables. Construct the table name from
(table_id,index_id) and invoke fts_drop_tables(); see the
sketch after this list.
- Removed DICT_TF2_FTS_AUX_HEX_NAME flag usage from upgrade.
- is_aux_table() in dict_table_t to check whether the given name
is an FTS auxiliary table
fts_space_set_t is a structure to store the set of
(parent table id, index id) pairs
- Remove unused FTS function in fts0fts.cc
- Remove the fulltext index in the row_format_redundant test case,
because that test deals with the condition that SYS_TABLES has a
corrupted entry while a valid entry exists in SYS_INDEXES.
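For illustration, a standalone sketch of the set-difference idea from the list above (the sketch referenced there), with the (table_id, index_id) pairs already extracted from the FTS_*.ibd file names and collected from SYS_INDEXES respectively; the parsing itself is omitted:

#include <algorithm>
#include <cstdint>
#include <iterator>
#include <set>
#include <utility>

// (parent table id, index id) as parsed from an FTS_*.ibd name or
// taken from a SYS_INDEXES record.
typedef std::pair<uint64_t, uint64_t> fts_id;

// Whatever is present on disk but unknown to SYS_INDEXES is orphaned
// and can be handed over to fts_drop_tables().
std::set<fts_id> orphaned_fts_ids(const std::set<fts_id> &from_ibd_files,
                                  const std::set<fts_id> &from_sys_indexes)
{
  std::set<fts_id> orphaned;
  std::set_difference(from_ibd_files.begin(), from_ibd_files.end(),
                      from_sys_indexes.begin(), from_sys_indexes.end(),
                      std::inserter(orphaned, orphaned.end()));
  return orphaned;
}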
dict_foreign_qualify_index(): Reject corrupted or garbage indexes.
For index stubs that are created on virtual columns, no
dict_field_t::col would be assigned. Instead, the entire table
definition would be reloaded on a successful operation.
During insertion into a clustered index, InnoDB checks the foreign key
constraints. The problem is that it uses the clustered index entry to search
the indexes of the referenced tables, which could lead to an unexpected
result when there is no foreign index.
Solution:
========
Rebuild the tuple based on the foreign column names before searching it
in the referenced index when there is no foreign index.
The InnoDB index fields store bytes, not characters.
Remove some unnecessary conversions from characters to bytes.
This also fixes MDEV-20422 and the wrong-result bug MDEV-12486.
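A tiny worked example of the bytes-versus-characters distinction: a 10-character index prefix on a utf8mb4 column (mbmaxlen = 4) occupies up to 40 bytes, and that byte count is what the index field stores; the structure below is a stand-in, not dict_field_t itself:

#include <cassert>
#include <cstdint>

struct index_field_stub {
  uint32_t prefix_len;  // stored in bytes, not characters
};

int main()
{
  const uint32_t prefix_chars = 10;  // as declared in SQL: col(10)
  const uint32_t mbmaxlen = 4;       // utf8mb4: up to 4 bytes per character
  index_field_stub f = { prefix_chars * mbmaxlen };
  assert(f.prefix_len == 40);        // no further char<->byte conversion needed
  return 0;
}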
Add proper error handling of innobase_get_computed_value results in
row_upd_store_row/row_upd_store_v_row.
Also add an assertion in row_vers_build_clust_v_col to fail during row
purge.
Add one more assertion in row_sel_sec_rec_is_for_clust_rec to catch
possible future issues.
The problem was improper error handling in
`row_upd_build_difference_binary`:
`innobase_free_row_for_vcol` wasn't called.
To eliminate this problem in all potential places, a refactoring has been
made:
* class ib_vcol_row is added. It owns VCOL_STORAGE and the heap and
maintains them in an RAII manner (see the sketch after this list)
* all innobase_allocate_row_for_vcol/innobase_free_row_for_vcol pairs are
replaced with ib_vcol_row usage
* row_merge_buf_add is the only place left untouched, because it doesn't
own the vheap passed as an argument
* innobase_allocate_row_for_vcol does not allocate VCOL_STORAGE anymore and
accepts it as an argument -- this reduces the number of memory allocations
* move rec_printer out of `#ifndef DBUG_OFF` and mark it cold
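As a rough illustration of the RAII idea behind ib_vcol_row (the sketch referenced in the list above), with plain stand-ins for VCOL_STORAGE and the heap rather than the real InnoDB types:

// The guard owns the storage used for materializing virtual-column
// values and releases it on every exit path, so no explicit
// innobase_free_row_for_vcol()-style call can be forgotten on an
// error branch.
struct vcol_storage { /* buffers for computed virtual column values */ };

class vcol_row_guard {
  vcol_storage *storage_;
public:
  vcol_row_guard() : storage_(new vcol_storage()) {}
  vcol_row_guard(const vcol_row_guard &) = delete;
  vcol_row_guard &operator=(const vcol_row_guard &) = delete;
  vcol_storage *get() const { return storage_; }
  ~vcol_row_guard() { delete storage_; }  // runs on success and on early returns
};

bool build_difference_binary(bool some_step_fails)
{
  vcol_row_guard row;          // allocation happens once, up front
  if (some_step_fails)
    return false;              // storage is still released here
  /* ... use row.get() ... */
  return true;
}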
(This commit is exclusively for 10.1 branch, do not merge it to upper ones)
In the case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of BINLOG statements malformed in this way could be found in
binlog_row_annotate.result, which gets corrected with the patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point (see the sketch
after the example).
The correctly produced output for the above case now looks as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
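For illustration only, a standalone sketch of this deferral (the sketch mentioned above), using a plain string stream where the real code uses an auxiliary IO_CACHE:

#include <cstddef>
#include <ostream>
#include <sstream>
#include <string>

struct rows_event {
  std::string base64;   // what goes inside BINLOG '...'
  std::string verbose;  // the "### ..." pseudo-SQL comments
  bool stmt_end;        // true for the last event of the statement group
};

void print_group(const rows_event *events, size_t n, std::ostream &out)
{
  std::ostringstream deferred;               // stands in for the auxiliary IO_CACHE
  out << "BINLOG '\n";
  for (size_t i = 0; i < n; i++) {
    out << events[i].base64 << "\n";         // base64 payload is printed immediately
    deferred << events[i].verbose;           // verbose comments are held back
    if (events[i].stmt_end) {
      out << "'/*!*/;\n";                    // close the BINLOG statement first
      out << deferred.str();                 // ...then flush the collected comments
      deferred.str("");
    }
  }
}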
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, to Venkatesh Duggirala, who produced a patch for the upstream whose
idea is exploited here, as well as to MDEV-23077 reporter LukeXwang, who
also contributed a piece of a patch aiming at this issue.
Extra: mysqlbinlog_row_minimal was refined to not write mutable numeric values into the result file.
(This commit is for 10.3 and upper branches)
In the case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of BINLOG statements malformed in this way could be found in
binlog_row_annotate.result, which gets corrected with the patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correctly produced output for the above case now looks as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, and to Venkatesh Duggirala, who produced a patch for the upstream
whose idea is exploited here, as well as to MDEV-23077 reporter LukeXwang, who
also contributed a piece of a patch aiming at this issue.
(This commit is exclusively for 10.2 branch. Do not merge it to 10.3)
In the case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of BINLOG statements malformed in this way could be found in
binlog_row_annotate.result, which gets corrected with the patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correctly produced output for the above case now looks as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, and to Venkatesh Duggirala, who produced a patch for the upstream
whose idea is exploited here, as well as to MDEV-23077 reporter LukeXwang, who
also contributed a piece of a patch aiming at this issue.
This commit contains a fix and an extended test case for an ASAN failure
reported during galera.fk mtr testing.
The reported heap buffer overflow happens in a test case where a cascading
foreign key constraint is defined for a column of varchar type, and
galera.fk.test has such a vulnerable test scenario.
Troubleshooting revealed that an earlier fix for MDEV-19660 made the
cascading delete handling append wsrep keys from pcur->old_rec
in row_ins_foreign_check_on_constraint(), and the ASAN failure comes from
the later scanning of this old_rec reference.
The fix in this commit moves the call to wsrep_append_foreign_key() to happen
somewhat earlier, inside the ongoing mtr, and to use clust_rec, which is set
earlier in the same mtr for both update and delete cascade operations.
For wsrep key population it does not matter when the keys are populated;
all keys just have to be appended before the wsrep transaction replicates.
Note that I also tried a similar fix with the earlier wsrep key append, but
using the old implementation with pcur->old_rec (instead of clust_rec), and the
same ASAN failure was reported. So it appears that pcur->old_rec is not properly
set to be used for wsrep key appending.
The galera.galera_fk_cascade_delete test has been extended with two new test scenarios:
* FK cascade on a varchar column.
This test case reproduces the same scenario as galera.fk, and it
will also trigger the ASAN failure with unfixed MariaDB versions.
* Multi-master conflict with FK cascading.
This scenario causes a conflict between a replicated FK cascading transaction
and a local transaction trying to modify the cascaded child table row.
The local transaction should be aborted and get a deadlock error.
This test scenario passes both with old MariaDB versions and with this
commit.
The issue here was that the query was using the ORDER BY LIMIT optimization,
where the access method was changed from EQ_REF access to an index scan (an
index that would resolve the ORDER BY clause).
But the parameter READ_RECORD::unlock_row was not reset to rr_unlock_row, which
is used when the access method is not EQ_REF access.
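A minimal sketch of the kind of reset described above; only the READ_RECORD::unlock_row / rr_unlock_row names mirror the server code, the surrounding structures are simplified stand-ins:

struct TABLE;                          // stand-in for the server's table handle

struct READ_RECORD_stub {
  void (*unlock_row)(TABLE *);         // called for rows that end up not matching
};

void rr_unlock_row(TABLE *) { /* generic "release the row lock" callback */ }

// When the ORDER BY ... LIMIT optimization switches the access method
// from EQ_REF to an index scan, the unlock callback has to be switched
// back to the generic one as well; previously it was left pointing at
// the EQ_REF-specific variant.
void switch_to_index_scan(READ_RECORD_stub *info)
{
  /* ... set up the index scan that resolves the ORDER BY ... */
  info->unlock_row = rr_unlock_row;
}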
When duplicates are removed from a table using a hash, a record that is a
duplicate is marked as deleted. The handler API checks whether the record is
deleted and returns the error flag HA_ERR_RECORD_DELETED.
When we scan over the table and the thread is not killed, we skip the
records marked with HA_ERR_RECORD_DELETED.
The issue here is that when a query is aborted by the user (this happens when
the LIMIT for ROWS EXAMINED is exceeded), the scan over the table does not skip
the records for which HA_ERR_RECORD_DELETED is returned.
It just returns the error flag HA_ERR_ABORTED_BY_USER.
This error flag is not checked at the upper level, and hence we hit the assert.
If the query is aborted by the user, we should just skip reading rows and
return control to the upper levels of execution.
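A standalone sketch of the intended read-loop behaviour; the error names mirror the handler API, while the loop structure and the way control is handed back are simplified assumptions:

enum handler_error {
  HA_OK = 0,
  HA_ERR_END_OF_FILE,
  HA_ERR_RECORD_DELETED,     // row was removed by the duplicate-elimination pass
  HA_ERR_ABORTED_BY_USER     // e.g. the LIMIT for ROWS EXAMINED was exceeded
};

struct scan_source {
  handler_error (*read_next)(scan_source *);
};

handler_error read_next_live_row(scan_source *s)
{
  for (;;) {
    handler_error err = s->read_next(s);
    if (err == HA_ERR_RECORD_DELETED)
      continue;                     // skip rows marked deleted by the hash pass
    if (err == HA_ERR_ABORTED_BY_USER)
      return HA_ERR_END_OF_FILE;    // stop reading and hand control back cleanly
    return err;                     // OK, end of file, or a real error
  }
}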