* Note: this is a breaking change; since this commit, a plugin that
worked so far might be rejected due to insufficient plugin maturity
* mariabackup is not affected (allows all plugins)
* VERSION file defines SERVER_MATURITY, which defines the
corresponding numeric value as SERVER_MATURITY_LEVEL in
include/mysql_version.h
* The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
* Logs a warning if a plugin has maturity lower than
SERVER_MATURITY_LEVEL (see the sketch after this list)
* Tests suppress the plugin maturity warning
* Tests use --plugin-maturity=unknown by default so as not to fail
due to the stricter plugin maturity handling
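For illustration, a minimal self-contained sketch of the resulting load-time
rule (the enum mirrors MariaDB's plugin maturity levels; the function names
and the GAMMA stand-in value are illustrative, not server source):

  enum maturity            /* mirrors the server's plugin maturity levels */
  {
    MATURITY_UNKNOWN, MATURITY_EXPERIMENTAL, MATURITY_ALPHA,
    MATURITY_BETA, MATURITY_GAMMA, MATURITY_STABLE
  };

  /* stand-in for the macro generated into include/mysql_version.h */
  static const maturity SERVER_MATURITY_LEVEL= MATURITY_GAMMA;

  /* plugin_maturity is the server option; it defaults to
     SERVER_MATURITY_LEVEL - 1 */
  static bool plugin_is_loadable(maturity plugin_m, maturity plugin_maturity)
  {
    return plugin_m >= plugin_maturity;      /* otherwise: rejected */
  }

  static bool plugin_gets_warning(maturity plugin_m)
  {
    return plugin_m < SERVER_MATURITY_LEVEL; /* loaded, but warned about */
  }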
Relax memory barrier for lock_word.
rw_lock_lock_word_decr() - used to acquire the rw-lock; thus we only need to
issue ACQUIRE when locking succeeds.
rw_lock_x_lock_func_nowait() - same as above, but used to attempt to acquire
X-lock.
rw_lock_s_unlock_func() - used to release S-lock, RELEASE is what we need here.
rw_lock_x_unlock_func() - used to release X-lock. Ideally we'd need only RELEASE
here, but due to the interplay with waiters (they must be loaded after lock_word
is stored) we have to issue both ACQUIRE and RELEASE.
rw_lock_sx_unlock_func() - same as above, but used to release SX-lock.
rw_lock_s_lock_spin(), rw_lock_x_lock_func(), rw_lock_sx_lock_func() - the
fetch-and-store to waiters has to issue only an ACQUIRE memory barrier, so that
waiters is stored before lock_word is loaded.
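For example, the acquire-on-success pattern looks as follows when expressed
with standard C++ atomics (a sketch only; the actual code uses the my_atomic
wrappers shown below):

  #include <atomic>
  #include <cstdint>

  static bool lock_word_decr(std::atomic<int32_t>& lock_word,
                             int32_t amount, int32_t threshold)
  {
    int32_t local= lock_word.load(std::memory_order_relaxed);
    while (local > threshold)
    {
      /* ACQUIRE only on success: accesses inside the critical section
         cannot float above the lock acquisition. On failure nothing was
         acquired, so RELAXED suffices; the failed CAS refreshes local. */
      if (lock_word.compare_exchange_strong(local, local - amount,
                                            std::memory_order_acquire,
                                            std::memory_order_relaxed))
        return true;
    }
    return false;
  }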
Note that there is a violation of the RELEASE-ACQUIRE protocol here, because
on lock we do:
my_atomic_fas32_explicit((int32*) &lock->waiters, 1, MY_MEMORY_ORDER_ACQUIRE);
my_atomic_load32_explicit(&lock->lock_word, MY_MEMORY_ORDER_RELAXED);
on unlock:
my_atomic_add32_explicit(&lock->lock_word, X_LOCK_DECR, MY_MEMORY_ORDER_ACQ_REL);
my_atomic_load32_explicit((int32*) &lock->waiters, MY_MEMORY_ORDER_RELAXED);
That is, we effectively synchronize an ACQUIRE on lock_word with an ACQUIRE on
waiters. This behaviour predates this patch. A simple fix may have a negative
performance impact; a proper fix requires refactoring of lock_word.
Relax memory barrier for waiters: these two stores must complete before
os_event_set() finishes. This is guaranteed by the RELEASE barrier issued by
the mutex.exit() inside os_event_set().
Remove the volatile modifier from waiters: volatile is not intended for
inter-thread communication; use appropriate atomic operations instead.
Changed waiters to int32_t, a my_atomic-friendly type.
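A sketch of the result in standard C++ atomics (illustrative; the server uses
the my_atomic wrappers):

  #include <atomic>
  #include <cstdint>

  static std::atomic<int32_t> waiters{0};   /* previously a volatile field */

  static void set_waiters_and_wake(/* os_event_t event */)
  {
    /* RELAXED is enough here: os_event_set() takes and releases a mutex,
       and the RELEASE barrier of that mutex.exit() orders this store
       before the woken thread can observe the event. */
    waiters.store(1, std::memory_order_relaxed);
    /* os_event_set(event); */
  }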
* again, as in 10.2, NOW is a keyword only if followed by parentheses
* use AS OF CURRENT_TIMESTAMP or AS OF NOW()
* AS OF CURRENT_TIMESTAMP and AS OF NOW() mean AS OF NOW(6),
not AS OF NOW(0) (the same behavior as in a DEFAULT clause)
- Remove unused thd_rpl_is_parallel()
- Remove unused mysql_notify_thread_having_shared_lock()
- Remove unneeded LOCK_thread_count from MYSQL_BIN_LOG::reset_logs()
- LOCK_thread_count does not protect against rollback, so this
  code and its comment are not needed
- Remove mutex locks in slave.cc that are not needed.
  Added THD::assert_not_linked() to ensure that it was safe to remove them
- Fixed the non-repeatable test load_data_stmt_view
- Updated binlog_killed to test removal of mutex
(thanks to Andrei Elkin for test)
- More code comments
When logging ROW_T_INSERT or ROW_T_UPDATE records, we did not normalize
the DB_TRX_ID of the current transaction to 0 if the current transaction
had started (modifying other tables) before the ALTER TABLE started.
MDEV-13654 introduced this normalization for ROW_T_DELETE
and for all operations with ADD PRIMARY KEY, in row_log_table_get_pk().
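In essence, the normalization now applied to all of these record types
amounts to the following (a hedged sketch; the identifier names are
illustrative):

  #include <cstdint>

  static uint64_t normalize_db_trx_id(uint64_t row_trx_id,
                                      uint64_t curr_trx_id)
  {
    /* A row version written by the currently logging transaction must
       not carry that transaction's id into the rebuilt table: log 0. */
    return row_trx_id == curr_trx_id ? 0 : row_trx_id;
  }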
Introduce the debug flag trx_t::persistent_stats to suppress the
assertion for the updates of persistent statistics during fast
shutdown.
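The flag is only a debug-build aid; conceptually (an illustrative sketch, not
the actual trx_t definition):

  struct trx_sketch                 /* stand-in for trx_t */
  {
  #ifdef UNIV_DEBUG
    bool persistent_stats= false;   /* true while updating persistent stats */
  #endif
  };

  /* The shutdown assertion then exempts such transactions, along the
     lines of: ut_ad(!fast_shutdown_started || trx->persistent_stats). */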
dict_stats_exec_sql(): Do execute the statement even though shutdown
has been initiated.
dict_stats_exec_sql(): Expect the caller to always provide a transaction.
Remove some redundant assertions. The caller must hold dict_sys->mutex,
but holding dict_operation_lock is only necessary for accessing
data dictionary tables, which we are not accessing.
dict_stats_save_index_stat(): Acquire dict_sys->mutex
for invoking dict_stats_exec_sql().
dict_stats_save(), dict_stats_update_for_index(), dict_stats_update(),
dict_stats_drop_index(), dict_stats_delete_from_table_stats(),
dict_stats_delete_from_index_stats(), dict_stats_drop_table(),
dict_stats_rename_in_table_stats(), dict_stats_rename_in_index_stats(),
dict_stats_rename_table(): Use a single caller-provided
transaction that is started and committed or rolled back by the caller.
dict_stats_process_entry_from_recalc_pool(): Let the caller provide
a transaction object.
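The resulting calling convention looks roughly like this (a self-contained
sketch with illustrative names, not the InnoDB API):

  struct Trx { bool failed= false; };    /* stand-in for trx_t */

  /* cf. dict_stats_save(..., trx), dict_stats_drop_index(..., trx) */
  static bool stats_save(Trx& trx)       { return !trx.failed; }
  static bool stats_drop_index(Trx& trx) { return !trx.failed; }
  static void trx_commit(Trx&)   { /* commit once, at the end */ }
  static void trx_rollback(Trx&) { /* or roll back everything  */ }

  static void update_stats_for_table()
  {
    Trx trx;                             /* the caller owns the lifecycle */
    bool ok= stats_save(trx) && stats_drop_index(trx);
    ok ? trx_commit(trx) : trx_rollback(trx);
  }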
ha_innobase::open(): Pass a transaction to dict_stats_init().
ha_innobase::create(), ha_innobase::discard_or_import_tablespace():
Pass a transaction to dict_stats_update().
ha_innobase::rename_table(): Pass a transaction to
dict_stats_rename_table(). We do not use the same transaction
as the one that updated the data dictionary tables, because
we already released the dict_operation_lock. (FIXME: there is
a race condition; a lock wait on SYS_* tables could occur
in another DDL transaction until the data dictionary transaction
is committed.)
ha_innobase::info_low(): Pass a transaction to dict_stats_update()
when calculating persistent statistics.
alter_stats_norebuild(), alter_stats_rebuild(): Update the
persistent statistics as well. In this way, a single transaction
will be used for updating the statistics of a whole table, even
for partitioned tables.
ha_innobase::commit_inplace_alter_table(): Drop statistics for
all partitions when adding or dropping virtual columns, so that
the statistics will be recalculated on the next handler::open().
This is a refactored version of the Oracle Bug#22469660 fix.
RecLock::add_to_waitq(), lock_table_enqueue_waiting():
Do not allow a lock wait to occur for updating statistics
in a data dictionary transaction, such as DROP TABLE. Instead,
return the previously unused error code DB_QUE_THR_SUSPENDED.
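The rule reduces to the following shape (an illustrative sketch, not the lock
module source):

  enum class db_err { SUCCESS, LOCK_WAIT, QUE_THR_SUSPENDED };

  static db_err enqueue_waiting(bool is_dict_transaction)
  {
    if (is_dict_transaction)             /* e.g. DROP TABLE updating stats */
      return db_err::QUE_THR_SUSPENDED;  /* report instead of blocking */
    return db_err::LOCK_WAIT;            /* normal path: suspend and wait */
  }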
row_merge_lock_table(), row_mysql_lock_table(): Remove dead code
for handling DB_QUE_THR_SUSPENDED.
row_drop_table_for_mysql(), row_truncate_table_for_mysql():
Drop the statistics as part of the data dictionary transaction.
After TRUNCATE TABLE, the statistics will be recalculated on
subsequent ha_innobase::open(), similar to how the logic after
the above-mentioned Oracle Bug#22469660 fix in
ha_innobase::commit_inplace_alter_table() works.
btr_defragment_thread(): Use a single transaction object for
updating defragmentation statistics.
dict_stats_save_defrag_stats(), dict_stats_save_defrag_summary(),
dict_stats_process_entry_from_defrag_pool(),
dict_defrag_process_entries_from_defrag_pool():
Add a parameter for the transaction.
dict_stats_empty_table(): Make public. This will be called by
row_truncate_table_for_mysql() after dropping persistent statistics,
to clear the memory-based statistics as well.
TABLE_SHARE::init_from_binary_frm_image() calls handler_file->index_flags()
before it has set TABLE_SHARE::primary_key (it is 0 while it should be
MAX_KEY in my example).
This causes MyRocks to report wrong index flags (it thinks the index is a PK
while it is not), which causes invalid query plans later on.
Do the only thing that seems feasible: adjust field->part_of_key to have the
correct value in ha_rocksdb::open().
Do not generate fake values when adding an auto-inc column to a versioned
table. This is not an auto-inc issue, but a more general case of adding
a non-nullable unique column to a table with history. We don't support
that yet, not even with a special auto-inc hack. As a workaround, one
can use a nullable unique column; that works.