Improve the test that was imported and adapted for MariaDB in
commit fb217449dc.
row_undo_step(): Move the DEBUG_SYNC point from trx_rollback_for_mysql().
This DEBUG_SYNC point is executed after rolling back one row.
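For illustration, a minimal sketch of where such a sync point sits (not the
actual diff; the sync point name below is a placeholder, not necessarily the
one that innodb.xa_recovery_debug waits on):

  /* Sketch only: storage/innobase/row/row0undo.cc */
  que_thr_t* row_undo_step(que_thr_t* thr)
  {
          /* ... roll back one row of the transaction ... */

          /* placeholder name for the sync point the test waits on */
          DEBUG_SYNC_C("hypothetical_after_undo_one_row");

          return(thr);
  }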
trx_rollback_for_mysql(): Clarify the comments that describe the scenario,
and remove the DEBUG_SYNC point.
If the statement "if (trx->has_logged_persistent())" and its body are
removed from trx_rollback_for_mysql(), then the test
innodb.xa_recovery_debug will fail because the transaction would still
exist in the XA PREPARE state. If we allow the XA COMMIT statement
to succeed in the test, we would observe an incorrect state of the
XA transaction where the table would contain row (1,NULL). Depending
on whether the XA transaction was committed, the table should either
be empty or contain the record (1,1). The intermediate state of
(1,NULL) should never be observed after completed recovery.
Adapt the test that was added in
mysql/mysql-server@6b65d9032c
but omitted in commit 2e814d4702.
Instead of triggering a log checkpoint, we will only trigger
a redo log flush before killing the server.
Note: the mtr.commit() call in trx_rollback_for_mysql()
will not actually make the undo log header page state change durable.
A call to log_write_up_to(mtr.commit_lsn(), true) would do that.
It is unclear what the originally reported bug scenario was.
As long as innobase_rollback_by_xid() will not return without
ensuring that the redo log has been durably written, we should be safe.
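For reference, a minimal sketch of how the change could be made durable right
after the mini-transaction commits (simplified; only the log_write_up_to()
call is taken from the note above):

  mtr.commit();
  /* mtr.commit() only appends to the redo log buffer; flushing up to
  the commit LSN would make the undo log header page change durable: */
  log_write_up_to(mtr.commit_lsn(), true);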
When the binlog is disabled, WSREP does not behave correctly when
SAVEPOINT ROLLBACK is executed, because we do not register handlers for that case.
Fixed by registering the WSREP handlerton for SAVEPOINT-related commands.
FOREIGN_KEY_CHECKS is disabled
- dict_foreign_find_index() can return NULL if InnoDB already dropped
the foreign index when FOREIGN_KEY_CHECKS is disabled.
redo log during recovery
- InnoDB unnecessarily reads the page even though buffered redo log
  records exist that fully initialize it. Allow the page-initialization
  redo log records to be applied to the page in buf_page_get_gen()
  during recovery.
- Renamed buf_page_get_gen() to buf_page_get_low()
- The newly added buf_page_get_gen() checks for buffered redo log records
  for the particular page id during recovery
- Added a new function buf_page_mtr_lock(), which latches the page
  with the given latch type.
- recv_recovery_create_page() is an inline function that creates a page
  if page-initialization redo log records are buffered for it
  (see the sketch after this list).
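A condensed sketch of the resulting call flow (argument lists are abbreviated
and the exact signatures are assumptions, not the real source):

  buf_block_t* buf_page_get_gen(const page_id_t page_id, /* ... */ mtr_t* mtr)
  {
          if (recv_recovery_is_on()) {
                  /* If fully-initializing redo log records are buffered
                  for this page, build the page from them instead of
                  reading it from the data file. */
                  if (buf_block_t* block = recv_recovery_create_page(page_id)) {
                          /* rw_latch: latch mode requested by the caller */
                          buf_page_mtr_lock(block, rw_latch, mtr);
                          return block;
                  }
          }
          /* Otherwise fall back to the old code path. */
          return buf_page_get_low(page_id, /* ... */ mtr);
  }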
prepared statement
The method Item_func_in::build_clone() that builds a clone item for an
Item_func_in item first calls the generic method Item_func::build_item(),
which builds the clones for the arguments of the Item_func_in item
to be cloned, creates a copy of the Item_func_in object, and attaches the
clones for the arguments to this copy. Then the method Item_func_in::build_clone()
makes the copy fully independent of the copied object in order to
guarantee proper destruction of the clone. In fact, the copy of the
Item_func_in object is registered like any other item object and must be
destroyed like any other item object.
If the method Item_func::build_item() fails to build a clone of an argument,
it returns 0. In this case no copy of the Item_func_in object should
be created. Otherwise the finalizing actions for this copy would not be
performed, and the copy would remain in a state that prevents its
proper destruction.
Before this patch, the code of Item_func_in::build_clone() created the copy
of the Item_func_in object before cloning the argument items. If this
cloning failed, the server crashed when trying to destruct the copy item.
The code of Item_row::build_clone() was changed similarly to the code of
Item_func::build_clone(), though that code could not cause any problems.
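A schematic of the corrected ordering (names, signatures, and the scratch
allocation are simplified illustrations, not the actual MariaDB source):

  Item *Item_func_in::build_clone(THD *thd)
  {
    /* Step 1: clone every argument; bail out before any copy exists. */
    Item **arg_clones= (Item **) thd->alloc(sizeof(Item *) * arg_count);
    if (!arg_clones)
      return 0;
    for (uint i= 0; i < arg_count; i++)
      if (!(arg_clones[i]= args[i]->build_clone(thd)))
        return 0;              /* no half-initialized copy is left behind */

    /* Step 2: only now create the copy, attach arg_clones[], and make
       the copy independent of the source object (as described above). */
    return create_and_finalize_copy(thd, arg_clones); /* hypothetical helper */
  }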
Remove CREATE/DROP database.
Remove some unnecessary suppressions, replacements, and
SQL statements.
Populate tables via have_sequence.inc to avoid the creation of
explicit InnoDB record locks in INSERT...SELECT. This will remove
some gaps in AUTO_INCREMENT values.
The problem happened in these lines:
uval0= (ulonglong) (val0_negative ? -val0 : val0);
uval1= (ulonglong) (val1_negative ? -val1 : val1);
return check_integer_overflow(val0_negative ? -(longlong) res : res,
                              !val0_negative);
when unary minus was performed on -9223372036854775808.
This behavior is undefined in C/C++.
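For illustration, a self-contained snippet showing the undefined pattern and
one well-defined alternative (the variable names only mirror the excerpt
above; this is not the server code):

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    long long val0= INT64_MIN;          /* -9223372036854775808 */
    bool val0_negative= val0 < 0;

    /* Undefined behaviour: -val0 overflows long long before the cast.
    unsigned long long bad= (unsigned long long) (val0_negative ? -val0 : val0); */

    /* Well defined: negate in unsigned arithmetic, which wraps modulo 2^64. */
    unsigned long long uval0= (unsigned long long) val0;
    if (val0_negative)
      uval0= 0ULL - uval0;

    printf("%llu\n", uval0);            /* prints 9223372036854775808 */
    return 0;
  }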
fil_delete_tablespace(): Remove the unused parameter drop_ahi,
and add the parameter if_exists=false. We want to suppress
error messages if we know that the tablespace has been discarded.
dict_table_rename_in_cache(): Pass the new parameter to
fil_delete_tablespace(), that is, do not complain about
missing tablespace if the tablespace has been discarded.
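A hedged sketch of that call site (the parameter layout is an assumption, not
copied from the source):

  /* In dict_table_rename_in_cache(): the tablespace was discarded,
  so a missing .ibd file must not produce error messages. */
  dberr_t err = fil_delete_tablespace(table->space, /* if_exists= */ true);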
row_make_new_pathname(): Declare as static.
row_drop_table_for_mysql(): Tolerate !table->data_dir_path
when the tablespace has been discarded.
row_rename_table_for_mysql(): Skip part of the RENAME TABLE
when fil_space_get_first_path() returns NULL.
buf_pool_resize(): Simplify the fault injection
for innodb.buf_pool_resize_oom.
innodb.buf_pool_resize_oom: Use a small buffer pool.
innodb.innodb_buffer_pool_load_now: Make use of the sequence engine,
to avoid creating explicit InnoDB record locks. Clean up the
accesses to information_schema.innodb_buffer_page_lru.
Temporary tables are typically short-lived, and they are assumed to be
accessed only by the thread that is handling the owning connection.
Hence, they must not be subject to defragmenting.
ha_innobase::optimize(): Do not add temporary tables to
the defragment_table() queue.
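A minimal sketch of the check (the defragment_table() parameters and the
exact predicate are assumptions):

  /* ha_innobase::optimize(), sketch only: enqueue only persistent
  tables for defragmentation. */
  if (srv_defragment && !dict_table_is_temporary(m_prebuilt->table)) {
          defragment_table(m_prebuilt->table->name.m_name, NULL, false);
  }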
AUTO_INCREMENT values are nondeterministic after crash recovery.
While MDEV-6076 guarantees that the AUTO_INCREMENT values of committed
transactions will not roll back, it is possible that the AUTO_INCREMENT
values will be durably incremented for incomplete transactions. So,
change the test case to avoid displaying the AUTO_INCREMENT values.
Problem:
=======
When we upgrade from "mysql" to "mariadb", if the slave is using table-based
repositories, their data is completely ignored and no warning is issued in the error log.
Fix:
===
"mysql_upgrade" test should check for the presence of data in
"mysql.slave_master_info" and "mysql.slave_relay_log_info" tables. When tables
have some data the upgrade script should report a warning which hints users
that the data in repository tables will be ignored.
Re-enable main.mysql_client_test on all builders, because
at the moment we do not run any --big-test on buildbot
due to resource constraints.
A number of tests were declared big in
commit eeee1832d7
in an attempt to save resources on buildbot.
Stop masking the Data_free values, because innodb_file_per_table=1
is the default.
Also, do mask Update_time after updating tables, even though for
some reason it does not appear to matter.
status threads_connected can temporarily be bigger than max_connections+1
If SHOW STATUS LIKE "Threads_connected" comes after
ER_CON_COUNT_ERROR is sent to the client, but before the counter is
decremented, Threads_connected can differ from the expected value.
If an async replication slave thread conflicts with cluster replication,
then the async slave transaction should be BF aborted and, depending on the
state of async slave transaction execution, potentially also replayed.
There were problems in this BF abort implementation, and the replaying was
not started.
This pull request contains fixes which make sure that if an async slave thread
is marked for abort and replay, it will first carry out the rollback and
release all locks and resources before starting the replaying. After replaying,
the async slave transaction is treated as successful, so the slave thread will
continue as usual, handling the next replication event.
There is also a new mtr test: galera.galera_slave_replay, which stresses both a
certification failure for an async slave thread and a successful BF abort
followed by replaying.
* The `--defaults-file` option is shown in `--help --verbose` only if
it was applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
previously it was treated as a directory with `my.cnf` appended
* Remove tests that will not be supported in that release.
* Make sure that the correct tests are disabled and have MDEVs filed
* Sort test names
This should not be merged upwards.
- The n_ext value may be less than dtuple_get_n_ext(dtuple) when the PK is
  being updated and the new record inherits the externally stored fields
  from the delete-marked old record.