Starting with MariaDB 10.5, roughly after MDEV-23855 was fixed,
we are observing sporadic hangs during the execution of the
RESET MASTER statement. We are hoping to fix the hangs with these
changes, but because the hangs occur only rarely and we have been
unable to reproduce them reliably, we cannot be certain of this.
What we do know is that innodb_force_recovery=2 (or a larger setting)
will prevent srv_master_callback (the former srv_master_thread) from
running. In that mode, periodic log flushes would never occur and
RESET MASTER could hang indefinitely. This is demonstrated by the new
test case developed by Andrei Elkin. We fix this scenario by handling
it as a special case.
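A rough sketch of the scenario (hypothetical statements; the actual
test case is Andrei Elkin's):

  # server started with --innodb-force-recovery=2, binary log enabled
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1); # InnoDB redo must still become durable
  RESET MASTER;              # requests a persistent log write and waits;
                             # with srv_master_callback disabled, no
                             # periodic flush arrives and this may hang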
This also includes some code cleanup and renames of misleadingly
named code. The interface has nothing to do with log checkpoints in
the storage engine; it is only about requesting log writes to be
persistent.
handlerton::commit_checkpoint_request,
commit_checkpoint_notify_ha(): Remove the unused parameter hton.
log_requests.start: Replaces pending_checkpoint_list.
log_requests.end: Replaces pending_checkpoint_list_end.
log_requests.mutex: Replaces pending_checkpoint_mutex.
log_flush_notify_and_unlock(), log_flush_notify(): Replace
innobase_mysql_log_notify(). The new implementation should be
functionally equivalent to the old one.
innodb_log_flush_request(): Replaces innobase_checkpoint_request().
Implement a fast path for common cases, and reduce the mutex hold time.
POSSIBLE FIX OF THE HANG: We will invoke commit_checkpoint_notify_ha()
for the current request if it is already satisfied, as well as invoke
log_flush_notify_and_unlock() for any satisfied requests.
log_write(): Invoke log_flush_notify() when the write is already durable.
This was missing in the WITH_PMEM code path, where the log resides in
persistent memory.
Reviewed by: Vladislav Vaintroub
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED
that stores whether an index is ignored or not.
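For illustration, a sketch of the syntax as described in this commit
(table, column, and index names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY k1 (b) IGNORE);
  CREATE INDEX k2 ON t1 (a) NOT IGNORE;
  ALTER TABLE t1 ADD KEY k3 (a, b) IGNORE;
  SELECT INDEX_NAME, IGNORED FROM INFORMATION_SCHEMA.STATISTICS
  WHERE TABLE_NAME = 't1';

An ignored index is still maintained on data changes, but the optimizer
will not consider it for query plans.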
* be strict in CREATE TABLE, just like in ALTER TABLE, because
CREATE TABLE, just like ALTER TABLE, can be rolled back for any engine
* but don't auto-convert warnings into errors for engine warnings
(handler::create) - this matches ALTER TABLE behavior
* and not when creating the default record; those errors are handled
specially (and replaced with ER_INVALID_DEFAULT)
* always issue a Note when a non-unique key is truncated, because a
Note, unlike a Warning, cannot be escalated to an Error. Before this
commit it was a Note for blobs and a Warning for all other data types.
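For example (a sketch, assuming an engine whose maximum key length
forces truncation of the index):

  CREATE TABLE t1 (a VARCHAR(2000), KEY k1 (a)) ENGINE=MyISAM;
  # the non-unique key k1 is truncated to the maximum key length; after
  # this commit the statement succeeds with a Note, which strict mode
  # cannot escalate to an Error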
Server part:
kill_handlerton() was accessing thd->ha_data[] for some other thd,
while it could be concurrently modified by its owner thd.
Protect thd->ha_data[] modifications with a mutex, and require this
mutex when accessing thd->ha_data[] from kill_handlerton().
InnoDB part:
On close_connection, detach the trx from the thd before freeing the trx.
failed in Diagnostics_area::set_ok_status on INSERT
Analysis: No error is returned when strict mode is enabled and the value
is truncated because the double is outside the valid range.
Fix: Return HA_ERR_AUTOINC_ERANGE if the error was reported because the
double is outside the valid range.
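A hypothetical sketch of the failure mode (not the original test case):

  SET sql_mode = 'STRICT_ALL_TABLES';
  CREATE TABLE t1 (a DOUBLE NOT NULL AUTO_INCREMENT, KEY (a))
  ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1.7976931348623157e308); # the DOUBLE maximum
  INSERT INTO t1 VALUES (NULL); # the next auto-increment value is out
                                # of range; with this fix the INSERT
                                # fails with HA_ERR_AUTOINC_ERANGE
                                # instead of failing in set_ok_status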
..causes an error on the slave.
Cause: if the master doesn't have the frm file for the table,
DROP TABLE code will call ha_delete_table_force() to drop the table
in all available storage engines.
The issue was that this code path didn't check the
HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE flag for the storage engine,
and so did not add "... IF EXISTS" to the statement that's written
to the binary log. This can cause an error on the slave when it tries
to drop a table that's already gone.
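For illustration (hypothetical table name):

  # executed on the master, where t1 has no .frm file:
  DROP TABLE t1;
  # with this fix, for an engine with HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE,
  # the statement is written to the binary log as:
  DROP TABLE IF EXISTS t1;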
After Sergei's cleanup this assertion is no longer valid -- we cannot
predict whether the handler was used for a lookup, especially in a
multi-update scenario.
`position(old_data)` is now called earlier, in `ha_check_overlaps`, so
it is guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses ha_index_read,
which on failure overwrites table->status. Even though the handlers
are different, they share a common table, so the value gets spoiled
anyway. This is bad: table->status is poorly designed and overloaded
with functionality, but nothing can be done about it, since the code
around this logic is ancient and cannot be untangled with reasonable
effort. So let's just save and restore the value in ha_update_row
before and after the checks (see the sketch below).
Other operations like INSERT and simple UPDATE are not at risk, since
they don't use this table->status approach.
DELETE does not do any unique checks, so it's also safe.
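A sketch of an affected path (hypothetical table; a WITHOUT OVERLAPS
key makes UPDATE go through ha_check_overlaps):

  CREATE TABLE t1 (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p (s, e),
    UNIQUE (id, p WITHOUT OVERLAPS)
  ) ENGINE=InnoDB;
  # ha_update_row -> ha_check_overlaps -> ha_index_read could clobber
  # table->status on a miss; the value is now saved and restored around
  # the check:
  UPDATE t1 SET id = id + 1;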
Change xarecover_handlerton so that transactions with WSREP-prefixed
XIDs are rolled back when Galera is disabled.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This commit fixes the problems with S3 after the "DROP TABLE FORCE"
changes. It also fixes all failing replication S3 tests.
A slave is delayed if it is still executing replicated queries on a
table that the master has already converted to S3 later in the binlog.
Fixes for replication events on S3 tables for delayed slaves:
- INSERT and INSERT ... SELECT and CREATE TABLE are ignored but written
to the binary log. UPDATE & DELETE will be fixed in a future commit.
Other things:
- On slaves with --s3-slave-ignore-updates set, allow S3 tables to be
opened in read-write mode. This was done to be able to
ignore-but-replicate queries like INSERT. Without this change any
open of an S3 table failed with "Table is read only", which happens
too early to be able to replicate the original query.
- Errors are now printed if a handler::extra() call fails in
  wait_while_tables_are_used().
- The error message for row changes was changed from HA_ERR_WRONG_COMMAND
  to HA_ERR_TABLE_READONLY.
- Disable some maria_extra() calls for S3 tables, as these could cause
  S3 tables to fail in some cases.
- Added missing thr_lock_delete() to ma_open() in case of failure.
- Removed the unneeded argument 'table' from mysql_prepare_insert().
- Remove row_start/row_end from keys in fix_create_like(); see the
  sketch after this list;
- Disable manually adding implicit row_start/row_end to indexes on
  CREATE TABLE. INVISIBLE_SYSTEM fields cannot be operated on by the
  user;
- Fix a memory leak in the allocation of Key_part_spec.
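A sketch of the affected statement (hypothetical versioned table; with
implicit system versioning, unique keys implicitly include row_end):

  CREATE TABLE t1 (a INT, UNIQUE (a)) WITH SYSTEM VERSIONING;
  CREATE TABLE t2 LIKE t1; # fix_create_like() must not carry the
                           # implicit row_start/row_end key parts into
                           # the new table's key definitions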
- row_search_mvcc() should return DB_INTERRUPTED when it is killed.
- Add a syncpoint for the ICP check.
- Add test coverage for the killed-during-ICP-check scenario.
Backport of MDEV-22761 fixes for ICP from 10.4 commits:
* a6f956488c
* c03885cd9c
XtraDB was fixed in deb3b9a174
Reviewer: Daniel Black
Part #2:
- row_search_mvcc() should return DB_INTERRUPTED when it is killed.
- Move the sync point from InnoDB internals to
  handler_rowid_filter_check(), where other storage engines can use
  it too.
- Add a similar syncpoint for the ICP check.
- Add a bigger test and test coverage for Rowid Filter with MyISAM
- Add test coverage for the killed-during-ICP-check scenario.
MDEV-21953 deadlock between BACKUP STAGE BLOCK_COMMIT and parallel
replication
Fixed by partly reverting MDEV-21953 to put back MDL_BACKUP_COMMIT locking
before log_and_order.
The original problem for MDEV-21953 was that while a thread was waiting
for other threads to commit in 'log_and_order', it held the
MDL_BACKUP_COMMIT lock. The backup thread was waiting to get the
MDL_BACKUP_WAIT_COMMIT lock, which blocks all new MDL_BACKUP_COMMIT locks.
This causes a deadlock as the waited-for thread can never get past the
MDL_BACKUP_COMMIT lock in ha_commit_trans.
The main part of the bug fix is to release the MDL_BACKUP_COMMIT lock
while a thread is waiting for other 'previous' threads to commit. This
guarantees that no transactional thread keeps MDL_BACKUP_COMMIT while
waiting, which eliminates the deadlock.
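A simplified timeline of the original deadlock (a sketch, not a
runnable test; the connection roles are as described above):

  # connection A, a transactional thread: waits in log_and_order for a
  # previous transaction to commit while holding MDL_BACKUP_COMMIT
  # connection B, mariabackup:
  BACKUP STAGE START;
  BACKUP STAGE BLOCK_COMMIT; # waits for MDL_BACKUP_WAIT_COMMIT, which
                             # blocks all new MDL_BACKUP_COMMIT requests
  # the transaction that A waits for now blocks acquiring
  # MDL_BACKUP_COMMIT in ha_commit_trans and can never commit: deadlock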
failed or late ER_PERIOD_FIELD_WRONG_ATTRIBUTES upon attempt to create
existing table
Analysis: The error state is not stored when the field is checked in
Table_period_info::check_field().
Fix: Store the error state by setting res to true.
This happened when using XA transactions. I also added some extra
asserts to ensure that m_transactions are properly cleared.
Other things:
- Removed set_time() from THD::init_for_queries() as dispatch_command()
is already doing that.
- Removed duplicate init_for_queries() from prepare_new_connection_state().
The init_for_queries() function should only be called once per
connection.
The issue was:
T1, a parallel slave worker thread, is waiting for another worker thread
to commit. While waiting, it holds the MDL_BACKUP_COMMIT lock.
T2, working for mariabackup, is doing BACKUP STAGE BLOCK_COMMIT and
blocks all commits.
This causes a deadlock, as the thread that T1 is waiting for cannot
commit.
Fixed by moving the locking of MDL_BACKUP_COMMIT from ha_commit_trans()
to commit_one_phase_2().
Other things:
- Added a new argument to ha_commit_one_phase() to signal whether the
  transaction was a write transaction.
- Ensured that ha_maria::implicit_commit() is always called under
MDL_BACKUP_COMMIT. This code is not needed in 10.5.
- Ensure that MDL_Request values 'type' and 'ticket' are always
initialized. This makes it easier to check the state of the MDL_Request.
- Moved thd->store_globals() earlier in handle_rpl_parallel_thread(), as
  thd->init_for_queries() could use an MDL that could crash if
  store_globals() were not called.
- Don't call ha_enable_transactions() in THD::init_for_queries() as this
is both slow (uses MDL locks) and not needed.
first try engines that support discovery, then the rest.
otherwise every DROP TABLE non_existent; will do
lots of i/o trying to remove .MYI/.MYD/.MAI/.MAD/.CSV/etc files.
this matches the old behavior where DROP TABLE always tried to discover
the table before dropping.
don't do table discovery on DROP. DROP falls back to the "force"
approach when a table isn't found and will try to drop it in all
engines anyway. That is, trying to discover in all engines before
the drop is redundant and may be expensive.