To diagnose a hang in slow shutdown (innodb_fast_shutdown=0),
let us introduce a Boolean startup option in debug builds
that will cause the contents of the InnoDB change buffer
to be dumped to the server error log at startup.
Test innodb_read_only startup (which will be refused after a crash),
and test also innodb_force_recovery=5, and extract some change buffer
merge statistics. Omit any statistics about delete (purge) buffering,
because purge could happen at any time.
Use the sequence storage engine for populating the table.
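For example, a test table can be populated compactly like this (a minimal
sketch; the table definition is illustrative, and seq_1_to_10000 is one of
the virtual tables provided by the SEQUENCE engine):

    CREATE TABLE t1 (a INT PRIMARY KEY, b CHAR(100)) ENGINE=InnoDB;
    INSERT INTO t1 SELECT seq, 'x' FROM seq_1_to_10000;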
Try to use more deterministic floating-point operations.
Apparently, with t1_f = 2.2, the comparison t1_f > 2.2 wrongly holds on many
platforms, but not on ppc64le with the compiler used on Red Hat Enterprise
Linux 8. The reason could be the infinite binary representation:
2.2 = 0b10.001100110011…
With t1_f = 2.5 = 0b10.1, t1_f > 2.5 would no longer hold on AMD64.
Let us replace the 2.2 with 2.5 and compare t1_f >= 2.5 in order to
get more consistent results across all platforms.
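A minimal sketch of the effect (the table name is illustrative; the result of
the first comparison is what varies between platforms):

    CREATE TABLE t1 (t1_f FLOAT) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (2.2), (2.5);
    # The stored FLOAT is only an approximation of 2.2, so comparing it with
    # the DOUBLE literal 2.2 can give different results on different platforms.
    SELECT * FROM t1 WHERE t1_f > 2.2;
    # 2.5 = 0b10.1 is exactly representable in both FLOAT and DOUBLE,
    # so this comparison is deterministic.
    SELECT * FROM t1 WHERE t1_f >= 2.5;
    DROP TABLE t1;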
We were missing a test that would exercise trx_free_prepared()
with innodb_fast_shutdown=0. Add a test.
Note: if shutdown hangs due to the XA PREPARE transactions,
in MariaDB 10.2 the test would unfortunately pass, but take
2*60 seconds longer, because of two shutdown_server statements
timing out after 60 seconds. Starting with MariaDB 10.3, the
hung server would be killed with SIGABRT, and the test could
fail thanks to a backtrace message.
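A rough sketch of such a test (the table name and the restart include file are
illustrative, not necessarily what the added test uses):

    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
    connect (con1,localhost,root,,);
    XA START 'x';
    INSERT INTO t1 VALUES (1);
    XA END 'x';
    XA PREPARE 'x';
    connection default;
    # Slow shutdown must free the still-prepared transaction
    # in trx_free_prepared() instead of hanging.
    SET GLOBAL innodb_fast_shutdown = 0;
    --source include/restart_mysqld.inc
    XA ROLLBACK 'x';
    DROP TABLE t1;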
This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- some InnoDB options
- etc.
Changes:
- Don't print out the value of system variables, as one can't depend on
them being constant.
- Don't set global variables to 'default', as the default may not
be the same as what the test was started with if there was an additional
option file. Instead, save the original value and restore it at the end
of the test (see the sketch after this list).
- Tests that depend on the latin1 character set should include
default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that require specific options that could
be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
a while.
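For example, instead of relying on SET GLOBAL ... = DEFAULT, a test can save
and restore the value explicitly (variable and value here are only
illustrative):

    # Save the current value, set what the test needs, restore at the end.
    SET @save_table_open_cache = @@GLOBAL.table_open_cache;
    SET GLOBAL table_open_cache = 10;
    # ... test body ...
    SET GLOBAL table_open_cache = @save_table_open_cache;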
ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints
(pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks
is enabled. Instead, we must report errors when enforcing the FOREIGN KEY
constraints. As a result of ignoring these errors, the tables will be
loaded with dict_foreign_t objects whose foreign_index or referenced_index
will be NULL.
Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE
to dict_table_open_on_id_low() in many other cases. Notably, on
CREATE TABLE and ALTER TABLE, we will keep validating the FOREIGN KEY
constraints as before.
dict_table_open_on_name(): If no other flags than
DICT_ERR_IGNORE_FK_NOKEY are set, refuse access to unreadable tables.
Some encryption tests rely on this code path.
For the DML code path, we used to have the problem that when
one of the indexes was missing in dict_foreign_t, we would ignore
the FOREIGN KEY constraint altogether. The following changes
address that.
row_ins_check_foreign_constraints(): Add the parameter pk.
For the primary key, consider also foreign key constraints for which
foreign->foreign_index=NULL (no underlying index is available).
row_ins_check_foreign_constraint(): Report errors also for !check_ref.
Remove a redundant check for srv_read_only_mode.
row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.
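A hedged sketch of the kind of situation this concerns (table names are
illustrative, and the exact error reported may differ):

    CREATE TABLE parent (a INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE child (a INT, FOREIGN KEY (a) REFERENCES parent (a))
      ENGINE=InnoDB;
    SET foreign_key_checks = 0;
    DROP TABLE parent;
    SET foreign_key_checks = 1;
    # child is still loaded, but its constraint now lacks a referenced index.
    # Enforcing the constraint must report an error instead of silently
    # skipping the check:
    INSERT INTO child VALUES (1);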
Let us invoke wait_all_purged.inc right before the workload.
Starting with MDEV-12288 in MariaDB Server 10.3, also INSERT
generates purge workload. If we do not ensure that purge has
run to completion, the results on 10.3 and later could be
nondeterministic.
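In mysql-test terms this is simply (a sketch; the table and workload are
illustrative):

    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
    # Let any pending purge work finish before the measured workload starts,
    # so that the results do not depend on purge progress.
    --source include/wait_all_purged.inc
    INSERT INTO t1 VALUES (1), (2), (3);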
Optimize the test by dropping the table early and by using only
one undo log thread, so that purge will be doing more useful work
and less busy work of suspending and resuming the worker threads.
The test used to cause shutdown timeout on 10.4 on buildbot, and
for me locally when using --mysqld=--innodb-sync-debug.
With these tweaks, it passes for me with --mysqld=--innodb-sync-debug.
If there are multiple row versions in InnoDB, reading one row from PK
may have O(N) complexity and reading from secondary keys may have
O(N^2) complexity.
The problem occurs when there are many pending versions of the same
row, meaning that the primary key is the same, but a secondary key is
different. The slowdown occurs when the secondary index is
traversed. This patch creates a helper class for the function
row_sel_get_clust_rec_for_mysql() which can remember and re-use
cached_clust_rec & cached_old_vers so that rec_get_offsets() does not
need to be called over and over for the clustered record.
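The problematic access pattern can be sketched as follows (table, column and
connection names are illustrative; the real case involves many more versions):

    CREATE TABLE t (id INT PRIMARY KEY, k INT, KEY(k)) ENGINE=InnoDB;
    INSERT INTO t VALUES (1, 0);
    connect (writer,localhost,root,,);
    BEGIN;
    # Each update adds another undo-logged version of the same row and
    # another delete-marked entry in the secondary index; in the slow case
    # this is repeated thousands of times.
    UPDATE t SET k = k + 1 WHERE id = 1;
    UPDATE t SET k = k + 1 WHERE id = 1;
    UPDATE t SET k = k + 1 WHERE id = 1;
    connection default;
    # A consistent read through the secondary index has to look up the
    # clustered record and walk its version chain for every entry it sees.
    SELECT * FROM t FORCE INDEX(k) WHERE k >= 0;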
Corrections by Kevin Lewis <kevin.lewis@oracle.com>
MDEV-20341 Unstable innodb.innodb_bug14704286
Removed a test that tested the ability to interrupt a long query which
is not long anymore.
Use DEBUG_SYNC to hang the execution at the interesting point,
and then kill and restart the server externally. This will work
also with Valgrind. DBUG_SUICIDE() causes Valgrind to hang,
and it could also cause uninteresting reports about memory leaks.
While we are at it, let us clean up innodb.innodb_bulk_create_index_debug
so that it will actually test the desired functionality also in future
versions (with instant ADD COLUMN and DROP COLUMN) and avoid
some unnecessary restarts.
We are adding two DEBUG_SYNC points for ALTER TABLE, because there were
none that would be executed right before ha_commit_trans().
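A hedged sketch of the pattern (the DEBUG_SYNC point name here is
hypothetical, and the include file is the conventional one, not necessarily
the one the test uses):

    CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
    connect (con1,localhost,root,,);
    # Hypothetical sync point name, only for illustration.
    SET DEBUG_SYNC = 'alter_table_before_commit SIGNAL halted WAIT_FOR ever';
    --send ALTER TABLE t1 ADD INDEX(b)
    connection default;
    SET DEBUG_SYNC = 'now WAIT_FOR halted';
    # Kill and restart the server while the ALTER TABLE is suspended;
    # unlike DBUG_SUICIDE(), this also works under Valgrind.
    --source include/kill_and_restart_mysqld.inc
    connection con1;
    --error 2006,2013
    reap;
    disconnect con1;
    connection default;
    DROP TABLE t1;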
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE as unsafe for statement-based
replication when there are multiple unique indexes. It correctly fixes a
problem whose attempted fix in MySQL 5.7
in mysql/mysql-server@c93b0d9a97
caused lock conflicts. That change was reverted in MySQL 5.7.26
in mysql/mysql-server@066b6fdd43
(with a substantial amount of other changes).
In MDEV-17073 we already disabled the unfortunate MySQL change when
statement-based replication was not being used. Now, thanks to MDEV-17614,
we can actually remove the change altogether.
This reverts commit 8a346f31b9 (MDEV-17073)
and mysql/mysql-server@c93b0d9a97 while
keeping the test cases.
Problem: Clients running with different values of auto_increment_increment
and doing concurrent inserts lead to a "Duplicate key" error in one of them.
Analysis:
When the auto_increment_increment value is reduced in a session,
InnoDB uses the last auto_increment_increment value
to recalculate the autoinc value.
If some other session has inserted a value
with a different auto_increment_increment, InnoDB recalculates
autoinc values based on the current session's previous auto_increment_increment
instead of considering the auto_increment_increment used for the last insert
across all sessions.
Fix:
revert 7acdf29cb4
a.k.a. 7c12a9e5c3
as it is causing the bug.
Reviewed By:
Bin <bin.x.su@oracle.com>
Kevin <kevin.lewis@oracle.com>
RB#21777
Note: In MariaDB Server, earlier changes in
ae5bc05988
for MDEV-533 require that the original test in
mysql/mysql-server@1ccd472d63
be adjusted for MariaDB.
Also, ef47b62551 (MDEV-8827)
had to be reverted after the upstream fix had been backported.
Problem:
=======
The autoincrement value yields duplicates because of the following reasons.
(1) In the InnoDB handler function, the current autoincrement value is not changed
based on a newly set auto_increment_increment or auto_increment_offset variable.
(2) The handler function does the rounding logic and changes the current
autoincrement value, and InnoDB is not aware of the change in the current
autoincrement value.
Solution:
========
To fix problem (1), InnoDB now always respects the auto_increment_increment
and auto_increment_offset values when computing the current autoincrement value.
To fix problem (2), the handler layer no longer changes the current
autoincrement value.
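For reference, the behaviour being relied on, using the documented semantics
of these variables (a sketch; the table is illustrative):

    CREATE TABLE t (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
    SET auto_increment_increment = 10, auto_increment_offset = 5;
    INSERT INTO t VALUES (), (), ();
    SELECT a FROM t;  # 5, 15, 25: offset + N * increment
    # A newly set increment/offset must be respected by the very next insert,
    # and the handler layer must not adjust the value behind InnoDB's back.
    SET auto_increment_increment = 3, auto_increment_offset = 2;
    INSERT INTO t VALUES ();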
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 13748
This is a regression due to MDEV-16515 that affects some versions in
the MariaDB 10.1 server series starting with 10.1.35, and possibly
all versions starting with 10.2.17, 10.3.8, and 10.4.0.
The idea of MDEV-16515 is to allow DROP TABLE to be interrupted,
in case it was stuck due to some concurrent activity. We already
made some cases of internal DROP TABLE immune to kill in MDEV-18237,
MDEV-16647, MDEV-17470. We must include the cleanup of
CREATE TABLE...SELECT in the list of such internal DROP TABLE.
ha_innobase::delete_table(): Pass create_failed=true if the current
SQL statement is CREATE, so that the table will be dropped.
row_drop_table_for_mysql(): If create_failed=true, do not allow
the operation to be interrupted.
Problem:
========
There is a possibility of more concurrent DMLs arriving while the ALTER TABLE
thread is waiting to upgrade to MDL_EXCLUSIVE before the commit phase.
In the commit phase, InnoDB acquires dict_operation_lock while it already holds
MDL_EXCLUSIVE on the table. After that, InnoDB applies the concurrent DML logs.
This could lead to blocking of the following things:
1) DML on the particular table (due to MDL_EXCLUSIVE on the table)
2) InnoDB DDLs (due to dict_operation_lock)
3) Purge thread, stats thread, the master thread (due to dict_operation_lock)
Fix:
====
Apply the concurrent DML logs in the commit phase, but before acquiring
dict_operation_lock. This makes sure that (2) and (3) cannot be
blocked for a long time.
Basic idea of the patch: disallow creating tables that would allow creating
rows which are too big to insert. In other words, if a user has created a table,
the user should never see an error like 'cannot insert row as it is too big for
the current page size'.
SET innodb_strict_mode=OFF; will still allow creating such overly long tables;
only a warning will be issued.
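A sketch of the intended behaviour, assuming the default 16KiB page size (the
column count is chosen only to exceed the maximum row size for this format):

    SET innodb_strict_mode = ON;
    # Expected to be refused with a 'Row size too large' error, because the
    # worst-case row cannot fit on the page:
    CREATE TABLE t_big (c1 VARCHAR(767), c2 VARCHAR(767), c3 VARCHAR(767),
                        c4 VARCHAR(767), c5 VARCHAR(767), c6 VARCHAR(767),
                        c7 VARCHAR(767), c8 VARCHAR(767), c9 VARCHAR(767),
                        c10 VARCHAR(767), c11 VARCHAR(767))
      ENGINE=InnoDB ROW_FORMAT=COMPACT CHARACTER SET latin1;
    SET innodb_strict_mode = OFF;
    # The same definition is accepted, with only a warning.
    CREATE TABLE t_big (c1 VARCHAR(767), c2 VARCHAR(767), c3 VARCHAR(767),
                        c4 VARCHAR(767), c5 VARCHAR(767), c6 VARCHAR(767),
                        c7 VARCHAR(767), c8 VARCHAR(767), c9 VARCHAR(767),
                        c10 VARCHAR(767), c11 VARCHAR(767))
      ENGINE=InnoDB ROW_FORMAT=COMPACT CHARACTER SET latin1;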
dict_table_t::get_overflow_field_local_len(): this function tells the maximum
local field length for overflow fields for every file format and row format.
innobase_check_column_length(): renamed to too_big_key_part_length()
and reused in a different part of the code.
create_table_info_t::prepare_create_table(): add check for maximum allowed
key part length to keep ALGORITHM=COPY behavior similar to ALGORITHM=INPLACE
behavior. Affected test is innodb.strict_mode
Rename dict_index_too_big_for_tree() to
dict_index_t::rec_potentially_too_big(): copy overflow-related size computation
from dtuple_convert_big_rec(). A lot of tests were changed because of that.
I wonder whether users will complain about it?
Test innodb.max_record_size tests dict_index_t::rec_potentially_too_big()
for different row formats and page sizes.
Added a condition to the innochecksum tool to check for page id mismatch.
This could catch write corruption caused by InnoDB.
Added a debug insert inside fil_io() to check whether it writes
the page to a wrong offset.
Also, move part of the test back to innodb.innodb_mysql
and another part to a new test innodb.purge.
Last but not least, merge the tests innodb_zip.4k and innodb_zip.8k
to innodb_zip.page_size.
Problem:
=========
One of the purge threads accesses the corrupted page and tries to remove it
from the LRU list. In the meantime, other purge threads are waiting for the
same page in buf_wait_for_read(). The assertion (buf_fix_count == 0) fails for
the purge thread which tries to remove the page from the LRU list.
Solution:
========
- Set the page id to FIL_NULL to indicate that the page is corrupted before
removing the block from the LRU list. Acquire the hash lock for the particular
page id and wait for the other threads to release buf_fix_count
for the block.
- Added the error check for btr_cur_open() in row_search_on_row_ref().
Before killing the server, ensure that the incomplete state of
the transaction will be made durable and will be applied and
rolled back on recovery, so that each time, roughly the same
amount of work will be done.
Remove DML statements after the recovery, and execute
CHECK TABLE instead.
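In mysql-test terms the intended pattern is roughly this (a sketch for a
table t1 created earlier in the test; the include file names are the
conventional ones and are assumptions here):

    # Kill the server while the transaction is still incomplete; its state
    # has already been made durable, so recovery will roll it back.
    --source include/kill_mysqld.inc
    --source include/start_mysqld.inc
    # Verify the table instead of running further DML after recovery.
    CHECK TABLE t1;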
row_search_mvcc(): Duplicate the logic of btr_pcur_move_to_next()
so that an infinite loop can be avoided when advancing to the next
page fails due to a corrupted page.