When the transaction isolation level is SERIALIZABLE, or when
a locking read is performed at the REPEATABLE READ isolation level,
InnoDB must lock delete-marked records in order to prevent another
transaction from inserting a matching record.
However, at the READ UNCOMMITTED or READ COMMITTED isolation level, or
when the parameter innodb_locks_unsafe_for_binlog is set, the
repeatability of reads does not matter, and there is no need
to lock any records.
row_search_mvcc(): Skip locks on delete-marked committed records up
front, instead of invoking row_unlock_for_mysql() afterwards. The
unlocking never worked for secondary index records.
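A minimal sketch of the idea, as it might look inside
row_search_mvcc() (simplified, not the literal patch; rec and trx
come from the surrounding function, next_rec is an existing label
there, and the check that the delete-marking transaction has
committed is elided):

    /* At READ UNCOMMITTED/READ COMMITTED there is no gap to
    protect, so a committed delete-marked record can be skipped
    without acquiring a lock on it. */
    if (trx->isolation_level <= TRX_ISO_READ_COMMITTED
        && rec_get_deleted_flag(rec, page_rec_is_comp(rec))) {
            /* skip the record instead of locking it and
            calling row_unlock_for_mysql() later */
            goto next_rec;
    }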
dict_stats_rename_table(): After DB_LOCK_WAIT_TIMEOUT
or DB_DUPLICATE_KEY, reset the trx->error_state before retrying.
Also, properly treat DB_DEADLOCK as a hard error.
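A sketch of the retry loop, assuming the statement is executed
through dict_stats_exec_sql() (the parameters pinfo and rename_sql
here are illustrative):

    dberr_t err;
    do {
            err = dict_stats_exec_sql(pinfo, rename_sql, trx);

            if (err == DB_LOCK_WAIT_TIMEOUT
                || err == DB_DUPLICATE_KEY) {
                    /* transient failure: clear the state
                    and retry */
                    trx->error_state = DB_SUCCESS;
                    os_thread_sleep(100000); /* 0.1s back-off */
                    continue;
            }
            /* DB_SUCCESS, or a hard error such as DB_DEADLOCK */
            break;
    } while (true);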
Part 2: make MyRocks add its directory to @@ignore_db_dirs when
starting. This is necessary because apparently not everybody uses the
plugin's my.cnf; some load ha_rocksdb.{so,dll} manually and then hit
MDEV-12451, MDEV-14461 etc.
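A sketch of the registration during plugin initialization; the helper
name ignore_db_dirs_append() is an assumption, not a confirmed API:

    static int rocksdb_init_func(void* const p)
    {
      /* ... existing initialization ... */

      /* Hide the "#rocksdb" data directory from SHOW DATABASES
      etc., even when the plugin's my.cnf was not used.
      (hypothetical helper name) */
      if (ignore_db_dirs_append("#rocksdb")) {
        sql_print_error("RocksDB: could not add #rocksdb"
                        " to ignore_db_dirs");
        return HA_EXIT_FAILURE;
      }

      return HA_EXIT_SUCCESS;
    }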
The assertion failure was caused by
MDEV-14511 Use fewer transactions for updating InnoDB persistent statistics
We are reusing a transaction object after commit, and sometimes,
even after a successful operation, trx_t::error_state may be
something other than DB_SUCCESS. Reset the field when needed.
Starting with MySQL 5.7 (or MariaDB 10.2.2) InnoDB no longer contains
the "table monitor" or "tablespace monitor". The conditions on
srv_print_innodb_tablespace_monitor and srv_print_innodb_table_monitor
never hold, so the code was dead.
Also, remove a bogus reference to dict_print(), which used to implement
the InnoDB table monitor.
LOCK_thd_data was used both to protect THD data and to
ensure that the THD is not deleted while it was in use.
This patch moves the THD delete protection to LOCK_thd_kill,
which already protects the THD for kill.
The benefits are:
- It is more clearly defined what LOCK_thd_data protects
- LOCK_thd_data usage is now much simpler and easier to verify
- Less chance of deadlocks in SHOW PROCESSLIST, as there is less
  chance of interaction between mutexes
- Removes the unneeded LOCK_thread_count from
  thd_get_error_context_description()
- Fewer mutexes are taken for thd->awake()
Other things:
- Don't take the mysys_var mutex in SHOW PROCESSLIST to check if a
  thread is kill marked
- thd->awake() now takes the LOCK_thd_kill mutex automatically
  (simplifies code; see the sketch after this list)
- APC uses LOCK_thd_kill instead of LOCK_thd_data
This allows SHOW PROCESSLIST to continue, without blocking all new
connections, if some thread gets stuck while holding LOCK_thd_data or
mysys_var->mutex.
Connections whose mutex is 'stuck' are marked as 'Busy' in 'Command'.
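A sketch of the new locking discipline for thd->awake(); simplified,
the real function does more work:

    void THD::awake(killed_state state_to_set)
    {
      /* The mutex is now taken here, not by every caller. */
      mysql_mutex_lock(&LOCK_thd_kill);
      killed= state_to_set;
      /* ... signal any condition variable the target thread may
      be waiting on, call scheduler hooks, etc. ... */
      mysql_mutex_unlock(&LOCK_thd_kill);
    }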
Todo:
Make LF_BACKOFF do 'pause' instead of just (1)
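A sketch of what that could look like with GCC-style inline assembly
on x86 (other platforms keep the old no-op):

    static inline int lf_backoff(void)
    {
    #if defined(__x86_64__) || defined(__i386__)
      __asm__ __volatile__("pause");   /* spin-wait hint */
    #endif
      return 1;                        /* preserve the "(1)" value */
    }
    #define LF_BACKOFF lf_backoff()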
Allow DROP TABLE `#mysql50##sql-...._.` to drop tables that were
being rebuilt by ALGORITHM=INPLACE
NOTE: If the server is killed after the table-rebuilding ALGORITHM=INPLACE
commits inside InnoDB but before the .frm file has been replaced, then
recovery will involve something other than DROP TABLE.
NOTE: If the server is killed after a true in-place ALTER TABLE commits
inside InnoDB but before the .frm file has been replaced, then we
are really out of luck. To properly handle that situation, we would
need a transactional mysql.ddl_fixup table that directs recovery to
rename or remove files.
prepare_inplace_alter_table_dict(): Use the altered_table->s->table_name
for generating the new_table_name.
table_name_t::part_suffix: The start of the partition name suffix.
table_name_t::dbend(): Return the end of the schema name.
table_name_t::dblen(): Return the length of the schema name, in bytes.
table_name_t::basename(): Return the name without the schema name.
table_name_t::part(): Return the partition name, or NULL if none.
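A rough sketch of these accessors, assuming names of the form
"db/table#P#partition" (the real definitions live in InnoDB's
dict0mem.h):

    #include <cstring>

    struct table_name_t
    {
      /** Name in "db/table" format, possibly followed by a
      partition suffix that starts with part_suffix. */
      char* m_name;

      static const char part_suffix[];  /* "#P#" */

      const char* dbend() const { return strchr(m_name, '/'); }
      size_t dblen() const { return size_t(dbend() - m_name); }
      const char* basename() const { return dbend() + 1; }
      /* Returns NULL when there is no partition suffix. */
      const char* part() const
      { return strstr(basename(), part_suffix); }
    };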
row_drop_table_for_mysql(): Assert for #sql, not #sql-ib.
fseg_alloc_free_page_low(): Remove a bogus and redundant assertion about
fil_space_t::purpose. The debug function fsp_space_modify_check()
is asserting something similar, but more accurately.
This was a missing bug fix from MySQL wsrep, i.e. Galera.
The problem was that if a stored procedure declares a handler that
catches a deadlock error, then the error may have been
cleared in the method sp_rcontext::handle_sql_condition().
Use wsrep_conflict_state correctly to determine whether the
error has already been sent to the client.
Add a test case for both this bug and MDEV-12837 (WSREP: BF
lock wait long). The test requires both fixes to pass.
The problem was a merge error from MySQL wsrep, i.e. Galera.
wsrep_on_check
New check function. Galera cannot be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
In a Galera async kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, fall back to First-Come-First-Served (FCFS)
with a notice to the user (see the sketch below).
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
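A sketch of the fallback in innobase_init(), assuming the MariaDB
enum names for the lock scheduling algorithm:

    if (innodb_lock_schedule_algorithm
        == INNODB_LOCK_SCHEDULE_ALGORITHM_VATS
        && global_system_variables.wsrep_on) {
            ib::info() << "Galera does not support"
                    " innodb-lock-schedule-algorithm=VATS;"
                    " falling back to First-Come-First-Served"
                    " (FCFS).";
            innodb_lock_schedule_algorithm
                    = INNODB_LOCK_SCHEDULE_ALGORITHM_FCFS;
    }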
lock_reset_lock_and_trx_wait
Use ib::hex() to print out transaction ID.
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
RecLock::add_to_waitq,
lock_rec_lock_slow,
lock_table_other_has_incompatible,
lock_rec_insert_check_and_lock,
lock_prdt_other_has_conflicting
Change the pointer to the conflicting lock into a normal (non-const)
pointer, as the contents it points to may be changed later.
RecLock::create
The conflicting lock pointer is moved to the last parameter, with
a default value of NULL. The conflicting transaction could be
selected as a victim in Galera if the requesting transaction
is a BF (brute force) transaction; in that case the contents
of the conflicting lock pointer will be changed. Use ib::hex() to
print transaction IDs.
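A sketch of the reworked declaration (parameter names and order are
assumptions, apart from the conflicting lock moving last), and of the
hexadecimal transaction ID printing:

    lock_t* RecLock::create(
            trx_t*             trx,
            bool               owns_trx_mutex,
            bool               add_to_hash,
            const lock_prdt_t* prdt,
            lock_t*            c_lock = NULL);

    ib::info() << "conflicting trx id " << ib::hex(trx->id);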
Introduce the debug flag trx_t::persistent_stats to suppress the
assertion for the updates of persistent statistics during fast
shutdown.
dict_stats_exec_sql(): Do execute the statement even though shutdown
has been initiated.
dict_stats_exec_sql(): Expect the caller to always provide a transaction.
Remove some redundant assertions. The caller must hold dict_sys->mutex,
but holding dict_operation_lock is only necessary for accessing
data dictionary tables, which we are not accessing.
dict_stats_save_index_stat(): Acquire dict_sys->mutex
for invoking dict_stats_exec_sql().
dict_stats_save(), dict_stats_update_for_index(), dict_stats_update(),
dict_stats_drop_index(), dict_stats_delete_from_table_stats(),
dict_stats_delete_from_index_stats(), dict_stats_drop_table(),
dict_stats_rename_in_table_stats(), dict_stats_rename_in_index_stats(),
dict_stats_rename_table(): Use a single caller-provided
transaction that is started and committed or rolled back by the caller.
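A sketch of the caller-managed pattern, assuming the post-patch
signatures take the transaction as their last parameter:

    trx_t* trx = trx_allocate_for_background();
    trx_start_internal(trx);

    dberr_t err = dict_stats_save(table, NULL, trx);

    if (err == DB_SUCCESS) {
            trx_commit_for_mysql(trx);
    } else {
            trx_rollback_for_mysql(trx);
            trx->error_state = DB_SUCCESS; /* reset before reuse */
    }

    trx_free_for_background(trx);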
dict_stats_process_entry_from_recalc_pool(): Let the caller provide
a transaction object.
ha_innobase::open(): Pass a transaction to dict_stats_init().
ha_innobase::create(), ha_innobase::discard_or_import_tablespace():
Pass a transaction to dict_stats_update().
ha_innobase::rename_table(): Pass a transaction to
dict_stats_rename_table(). We do not use the same transaction
as the one that updated the data dictionary tables, because
we already released the dict_operation_lock. (FIXME: there is
a race condition; a lock wait on SYS_* tables could occur
in another DDL transaction until the data dictionary transaction
is committed.)
ha_innobase::info_low(): Pass a transaction to dict_stats_update()
when calculating persistent statistics.
alter_stats_norebuild(), alter_stats_rebuild(): Update the
persistent statistics as well. In this way, a single transaction
will be used for updating the statistics of a whole table, even
for partitioned tables.
ha_innobase::commit_inplace_alter_table(): Drop statistics for
all partitions when adding or dropping virtual columns, so that
the statistics will be recalculated on the next handler::open().
This is a refactored version of Oracle Bug#22469660 fix.
RecLock::add_to_waitq(), lock_table_enqueue_waiting():
Do not allow a lock wait to occur for updating statistics
in a data dictionary transaction, such as DROP TABLE. Instead,
return the previously unused error code DB_QUE_THR_SUSPENDED.
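A sketch of the rule, as it might look at the start of
RecLock::add_to_waitq() (simplified):

    if (trx_get_dict_operation(trx) != TRX_DICT_OP_NONE) {
            /* A data dictionary transaction such as DROP TABLE
            must never wait for a statistics update; report the
            previously unused error code instead of enqueueing
            a waiting lock. */
            return(DB_QUE_THR_SUSPENDED);
    }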
row_merge_lock_table(), row_mysql_lock_table(): Remove dead code
for handling DB_QUE_THR_SUSPENDED.
row_drop_table_for_mysql(), row_truncate_table_for_mysql():
Drop the statistics as part of the data dictionary transaction.
After TRUNCATE TABLE, the statistics will be recalculated on
subsequent ha_innobase::open(), similar to how the logic after
the above-mentioned Oracle Bug#22469660 fix in
ha_innobase::commit_inplace_alter_table() works.
btr_defragment_thread(): Use a single transaction object for
updating defragmentation statistics.
dict_stats_save_defrag_stats(),
dict_stats_process_entry_from_defrag_pool(),
dict_defrag_process_entries_from_defrag_pool(),
dict_stats_save_defrag_summary():
Add a parameter for the transaction.
dict_stats_empty_table(): Make public. This will be called by
row_truncate_table_for_mysql() after dropping persistent statistics,
to clear the memory-based statistics as well.
TABLE_SHARE::init_from_binary_frm_image() calls handler_file->index_flags()
before it has set TABLE_SHARE::primary_key (it is 0 when it should be
MAX_KEY in my example).
This causes MyRocks to report wrong index flags (it thinks an index is
the PK when it is not), which causes invalid query plans later on.
Do the only thing that seems feasible: adjust field->part_of_key to
have the correct value in ha_rocksdb::open().
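An illustrative sketch only (not the actual patch) of recomputing
part_of_key in ha_rocksdb::open(), once table->s->primary_key holds
its real value; the members used are from the generic handler API:

    for (uint i= 0; i < table->s->keys; i++) {
      KEY* key_info= &table->key_info[i];
      /* index_flags() now sees the correct primary_key value */
      bool covered= (index_flags(i, 0, true) & HA_KEYREAD_ONLY);

      for (uint j= 0; j < key_info->user_defined_key_parts; j++) {
        Field* field= key_info->key_part[j].field;
        if (covered)
          field->part_of_key.set_bit(i);
        else
          field->part_of_key.clear_bit(i);
      }
    }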