Mirror of https://github.com/MariaDB/server.git
Commit 5fff906edd:
Fix for the bug "FLUSH TABLES WITH READ LOCK and FLUSH TABLES <list> WITH READ LOCK are incompatible".

The problem was that a FLUSH TABLES <list> WITH READ LOCK statement issued while another connection held the global read lock (acquired via FLUSH TABLES WITH READ LOCK) blocked and had to wait until the global read lock was released.

This issue stemmed from the fact that the FLUSH TABLES <list> WITH READ LOCK implementation acquired X metadata locks on the tables to be flushed. Since those locks required acquiring the global IX lock, the statement was incompatible with the global read lock.

This patch addresses the problem by using the SNW metadata lock type for the tables to be flushed by FLUSH TABLES <list> WITH READ LOCK. It is OK to acquire such locks without the global IX lock as long as we never try to upgrade them. Since SNW locks allow concurrent statements that use the same table, FLUSH TABLES <list> WITH READ LOCK now has to wait, after acquiring its metadata locks, until old versions of the tables to be flushed go away. Since such waiting can lead to deadlocks, the MDL deadlock detector was extended to take waits for flushes into account and to resolve such deadlocks.

As a bonus, the code in open_tables() that was responsible for waiting for old versions of tables to go away was refactored. Now, when we encounter an old version of a table in open_table(), we don't back off and wait for all old versions to go away; instead we wait for this particular table to be flushed. This approach, supported by deadlock detection, should reduce the number of scenarios in which FLUSH TABLES aborts concurrent multi-statement transactions.

Note that an active FLUSH TABLES <list> WITH READ LOCK still blocks a concurrent FLUSH TABLES WITH READ LOCK statement, as the former keeps the tables open and thus prevents the latter from flushing them.
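For illustration, a minimal sketch of the scenario the fix addresses, assuming two client sessions and tables t1 and t2 (the table names and session labels are hypothetical; the before/after behavior is as described in the message above):

-- Session 1: take the global read lock.
FLUSH TABLES WITH READ LOCK;

-- Session 2: before this fix, the statement below blocked until session 1
-- issued UNLOCK TABLES, because the X metadata locks it took required the
-- global IX lock. With SNW metadata locks it no longer conflicts with the
-- global read lock and can proceed.
FLUSH TABLES t1, t2 WITH READ LOCK;

-- The reverse order still blocks: while session 2 holds its read locks,
-- a FLUSH TABLES WITH READ LOCK issued by a third session waits, because
-- session 2 keeps t1 and t2 open, so they cannot be flushed.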
aggregate.result
bad_option_1.result
bad_option_2.result
binlog_mix.result
binlog_row.result
binlog_stmt.result
cnf_option.result
column_privilege.result
ddl_cond_instances.result
ddl_events_waits_current.result
ddl_events_waits_history.result
ddl_events_waits_history_long.result
ddl_ews_by_event_name.result
ddl_ews_by_instance.result
ddl_ews_by_thread_by_event_name.result
ddl_file_instances.result
ddl_fs_by_event_name.result
ddl_fs_by_instance.result
ddl_mutex_instances.result
ddl_performance_timers.result
ddl_processlist.result
ddl_rwlock_instances.result
ddl_setup_consumers.result
ddl_setup_instruments.result
ddl_setup_objects.result
ddl_setup_timers.result
dml_cond_instances.result
dml_events_waits_current.result
dml_events_waits_history.result
dml_events_waits_history_long.result
dml_ews_by_event_name.result
dml_ews_by_instance.result
dml_ews_by_thread_by_event_name.result
dml_file_instances.result
dml_file_summary_by_event_name.result
dml_file_summary_by_instance.result
dml_mutex_instances.result
dml_performance_timers.result
dml_processlist.result
dml_rwlock_instances.result
dml_setup_consumers.result
dml_setup_instruments.result
dml_setup_objects.result
dml_setup_timers.result
func_file_io.result
func_mutex.result
global_read_lock.result
information_schema.result
misc.result
myisam_file_io.result
no_threads.result
one_thread_per_con.result
pfs_upgrade.result
privilege.result
query_cache.result
read_only.result
schema.result
selects.result
server_init.result
start_server_no_cond_class.result
start_server_no_cond_inst.result
start_server_no_file_class.result
start_server_no_file_inst.result
start_server_no_mutex_class.result
start_server_no_mutex_inst.result
start_server_no_rwlock_class.result
start_server_no_rwlock_inst.result
start_server_no_thread_class.result
start_server_no_thread_inst.result
start_server_off.result
start_server_on.result
tampered_perfschema_table1.result
thread_cache.result