Remove threads that do nothing but wait
- the main thread now handles the connections
(if the threadpool is used, the threadpool threads also wait for connections)
- the dedicated threads for socket and pipe connections are removed
- the shutdown thread is removed; we wait for the shutdown
notification in the main thread as well
- kill_server() is also called from the main thread, after the connection
loop has finished (a minimal sketch of the loop follows below)
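A minimal sketch of the resulting pattern (plain POSIX, not the actual server code; listen_fd, shutdown_fd and the worker hand-off are assumptions of this sketch):

  #include <poll.h>
  #include <unistd.h>
  #include <sys/socket.h>

  // The main thread polls the listening socket together with a shutdown
  // notification fd, so no separate listener or shutdown thread is needed.
  void connection_loop(int listen_fd, int shutdown_fd)
  {
    for (;;)
    {
      pollfd fds[2]= { { listen_fd, POLLIN, 0 }, { shutdown_fd, POLLIN, 0 } };
      if (poll(fds, 2, -1) < 0)
        continue;                         // interrupted by a signal, retry
      if (fds[1].revents & POLLIN)
        break;                            // shutdown requested: leave the loop
      if (fds[0].revents & POLLIN)
      {
        int client= accept(listen_fd, nullptr, nullptr);
        if (client >= 0)
          close(client);                  // placeholder: hand the fd to a worker instead
      }
    }
    // kill_server()-style cleanup runs here, after the connection loop
  }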
The upper 1M limit for max_prepared_stmt_count was set over 10 years
ago. It does not suit current hardware: a sysbench oltp_read_write
test with 512 threads will hit this limit.
Main changes:
- Changing the Item_func_set_collation constructor to accept a CHARSET_INFO
pointer instead of an Item pointer (a sketch of the change follows below)
- Updating the bison grammar accordingly
Additional cleanups:
- Simplifying Item_func_set_collation::eq() by reusing Item_func::eq()
- Removing unused binary_keyword
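A hedged sketch of the shape of this change, using simplified stand-in types (CharsetInfo, Expr and SetCollationFunc are placeholders, not the server's CHARSET_INFO, Item and Item_func_set_collation):

  struct CharsetInfo { const char *coll_name; };   // stand-in for CHARSET_INFO
  struct Expr { };                                 // stand-in for a parsed Item

  class SetCollationFunc
  {
    Expr *arg;
    const CharsetInfo *collation;   // resolved collation, no longer an Expr*
  public:
    // Before (roughly): SetCollationFunc(Expr *a, Expr *collation_name_item)
    // - the grammar built an expression holding the collation name.
    // After: the grammar resolves the name and passes the charset descriptor.
    SetCollationFunc(Expr *a, const CharsetInfo *cl) : arg(a), collation(cl) {}
  };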
Handle string lengths as size_t, consistently (almost always :))
Change function prototypes to accept size_t where ulong or uint were used
in the past, and change local/member variables to size_t where appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example
(a sketch of the pattern follows after this list).
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because processlist db wasn't always
correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed the unused version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for printing)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
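A hedged sketch of the pattern (simplified stand-ins, not the actual server definitions): the length travels as size_t next to the pointer, and functions take a const pointer to the pair instead of separate char*/ulong arguments.

  #include <cstddef>
  #include <cstring>

  struct LexCString              // stand-in for LEX_CSTRING
  {
    const char *str;
    size_t length;
  };

  // Before (roughly): init_one_table(const char *db, ulong db_len, ...)
  // After: the name/length pairs travel together and are never truncated.
  void init_one_table(const LexCString *db, const LexCString *table_name,
                      const LexCString *alias)
  {
    (void) db; (void) table_name; (void) alias;   // use db->str / db->length, ...
  }

  int main()
  {
    LexCString db= { "test", std::strlen("test") };
    LexCString t1= { "t1", std::strlen("t1") };
    init_one_table(&db, &t1, &t1);
    return 0;
  }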
Solve a 3-way deadlock between plugin_initialize(), THD::init() and
mysql_sys_var_char().
The deadlock exists because of a lock order inversion between the
LOCK_global_system_variables mutex and the LOCK_system_variables_hash
read-write lock.
In this case it is enough to change LOCK_system_variables_hash to prefer
readers to fix the deadlock, i.e. change it to mysql_prlock_t.
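A hedged illustration of the inversion with standard C++ stand-ins (std::mutex and std::shared_mutex instead of mysql_mutex_t and mysql_prlock_t; the function bodies are placeholders):

  #include <mutex>
  #include <shared_mutex>

  std::mutex        LOCK_global_system_variables;
  std::shared_mutex LOCK_system_variables_hash;

  void thd_init()                 // THD::init(): mutex first, then read lock
  {
    std::lock_guard<std::mutex> g(LOCK_global_system_variables);
    std::shared_lock<std::shared_mutex> r(LOCK_system_variables_hash);
    // ... copy global variables into the new THD ...
  }

  void sys_var_char_update()      // mysql_sys_var_char(): read lock, then mutex
  {
    std::shared_lock<std::shared_mutex> r(LOCK_system_variables_hash);
    std::lock_guard<std::mutex> g(LOCK_global_system_variables);
    // ... access the variable under the mutex ...
  }

  void plugin_init()              // plugin_initialize(): takes the write lock
  {
    std::unique_lock<std::shared_mutex> w(LOCK_system_variables_hash);
    // ... register the plugin's system variables ...
  }

One way the three threads can deadlock: with a writer-preferring rwlock, thd_init()'s read lock queues behind plugin_init()'s pending write lock, while sys_var_char_update() already holds a read lock and waits for the mutex held by thd_init(), a 3-way cycle. A reader-preferring lock grants thd_init()'s read lock immediately, so the cycle cannot form.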
Remove the system_versioning_transaction_registry variable.
The user enables the transaction registry by specifying BIGINT for the
row_start/row_end columns.
Check the mysql.transaction_registry structure on the first open,
not on startup. Avoid warnings unless the transaction registry
is actually used.
This covers the semisync code, and specifically the ack-receiving functionality.
Semisync is now built statically into the server instead of being a plugin,
so its functions are invoked at the same points as RUN_HOOKS.
The RUN_HOOKS calls and the observer interface remain; they will be removed
by a later patch.
Todo:
React to the killed status in repl_semisync_master.wait_after_sync(). Currently
Repl_semi_sync_master::commit_trx() does not check the killed status.
A few bugfixes were found that are present in MySQL, and it is unclear
whether/how they are covered here. These include:
Bug#15985893: GTID SKIPPED EVENTS ON MASTER CAUSE SEMI SYNC TIME-OUTS
Bug#17932935 CALLING IS_SEMI_SYNC_SLAVE() IN EACH FUNCTION CALL
HAS BAD PERFORMANCE
Bug#20574628: SEMI-SYNC REPLICATION PERFORMANCE DEGRADES WITH A HIGH NUMBER OF THREADS
RUN_HOOK() is only called if semisync is enabled.
As the server can't disable the hooks while something is in progress, I added
a new variable, run_hooks_enabled, that is set the first time semisync is
used. This means that RUN_HOOK() has no overhead unless the semisync
master or slave has been enabled at least once.
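A minimal sketch of the gating idea (standard C++ stand-ins; run_hook() and enable_semisync() are placeholders for RUN_HOOK() and the real enable path):

  #include <atomic>

  std::atomic<bool> run_hooks_enabled{false};   // set once, never cleared

  void enable_semisync()              // first use of the semisync master/slave
  {
    run_hooks_enabled.store(true, std::memory_order_relaxed);
  }

  template <typename Hook>
  void run_hook(Hook &&hook)          // stand-in for RUN_HOOK()
  {
    if (!run_hooks_enabled.load(std::memory_order_relaxed))
      return;                         // semisync never enabled: no overhead
    hook();                           // otherwise invoke the observer
  }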
Some of the changes were just to get rid of warnings for the embedded server.
Part of MDEV-13073 AliSQL Optimize performance of semisync
The idea is to use a dedicated lock for detecting whether there is new data in
the master's binary log, instead of the overused LOCK_log.
Changes:
- Use dedicated COND variables for relay log and binary log signaling.
This was needed as the old 'update_cond' variable was used
with different mutexes, which could cause deadlocks.
- The relay log now uses COND_relay_log_updated and LOCK_log
- The binary log now uses COND_bin_log_updated and LOCK_binlog_end_pos
- Renamed signal_cnt to relay_signal_cnt (as we now have two signals)
- Added some missing error handling in MYSQL_BIN_LOG::new_file_impl()
- Reformatted some comments that used the old style
- Renamed m_key_LOCK_binlog_end_pos to key_LOCK_binlog_end_pos
- Changed signal_update() to update_binlog_end_pos(), which works for
both the relay log and the binary log (a sketch of the pattern follows below)
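A hedged sketch of the signaling pattern with standard C++ stand-ins (std::mutex and std::condition_variable instead of mysql_mutex_t and mysql_cond_t; member names simplified):

  #include <condition_variable>
  #include <mutex>
  #include <cstdint>

  struct BinlogEndPos
  {
    std::mutex LOCK_binlog_end_pos;             // dedicated lock, not LOCK_log
    std::condition_variable COND_bin_log_updated;
    uint64_t binlog_end_pos= 0;

    // Called by the writer after new events have been made visible.
    void update_binlog_end_pos(uint64_t new_pos)
    {
      {
        std::lock_guard<std::mutex> g(LOCK_binlog_end_pos);
        binlog_end_pos= new_pos;
      }
      COND_bin_log_updated.notify_all();        // wake threads waiting for data
    }

    // Called by a dump thread that has sent everything up to 'sent_pos'.
    void wait_for_update(uint64_t sent_pos)
    {
      std::unique_lock<std::mutex> g(LOCK_binlog_end_pos);
      COND_bin_log_updated.wait(g, [&] { return binlog_end_pos > sent_pos; });
    }
  };

The point of the dedicated mutex/COND pair is that waiters no longer have to contend on the overused LOCK_log.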
Instead of updating a global counter, calculate Threads_running on the fly.
All threads having command != COM_SLEEP are included.
Behaviour changes:
Previously SHOW STATUS and SHOW GLOBAL STATUS returned the same values
representing global status. Now SHOW STATUS always returns 1, indicating that
the current session has one thread running.
Previously only threads that were executing dispatch_command() or running events
were counted in Threads_running. Now it is a rough equivalent of
SELECT COUNT(*) FROM INFORMATION_SCHEMA.PROCESSLIST WHERE state != 'Sleep'
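A toy sketch of the on-the-fly computation (placeholder types, not the server's THD list or SHOW STATUS code):

  #include <vector>

  enum Command { COM_SLEEP, COM_QUERY /* ... */ };

  struct ThreadDesc { Command command; };

  // Counts every thread whose current command is not COM_SLEEP.
  size_t threads_running(const std::vector<ThreadDesc> &threads)
  {
    size_t running= 0;
    for (const ThreadDesc &t : threads)
      if (t.command != COM_SLEEP)
        running++;
    return running;
  }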
This is about adding more options to force slave retries.
Two new variables have been added:
slave_transaction_retry_errors
- Tells the slave thread to retry the transaction for replication when a
query event returns an error from the provided list. Deadlock and
elapsed lock wait timeout errors are automatically added to this list
slave_transaction_retry_interval
- Interval at which the slave SQL thread will retry a transaction
in case it failed with a deadlock, an elapsed lock wait
timeout, or an error listed in slave_transaction_retry_errors
(a sketch of the retry decision follows below)
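A toy sketch of the retry decision (placeholder names; 1205 and 1213 are the usual lock-wait-timeout and deadlock error codes that are always included):

  #include <set>
  #include <chrono>
  #include <thread>

  // Built from slave_transaction_retry_errors; deadlock (1213) and lock wait
  // timeout (1205) are always part of the set.
  std::set<int> retry_errors= { 1205, 1213 };

  bool should_retry(int error_code, unsigned retries, unsigned max_retries)
  {
    return retries < max_retries && retry_errors.count(error_code) != 0;
  }

  void wait_before_retry(unsigned retry_interval_seconds)   // slave_transaction_retry_interval
  {
    std::this_thread::sleep_for(std::chrono::seconds(retry_interval_seconds));
  }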
Other changes:
- Simplified code for slave_skip_errors (to be aligned with
slave_transaction_retry_errors)
- Renamed print_slave_skip_errors() to make_slave_skip_errors_printable()
- Removed error printing from init_slave_skip_errors(), as my_bitmap_init()
will do that if needed.
- Generalize has_temporary_error()
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
The following stages were added:
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
The following stages were changed:
- The "Unlocking tables" stage resets the stage to the previous stage at its end
- The "binlog write" stage resets the stage to the previous stage at its end
(a sketch of the save/restore idea follows after this list)
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
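A toy sketch of the save/restore idea behind "resets the stage to the previous stage" (an RAII guard with placeholder types, not the server's THD_STAGE_INFO or profiling code):

  #include <string>

  struct SessionState { std::string stage= "Reset for next command"; };

  class StageGuard
  {
    SessionState &session;
    std::string saved;                 // stage that was active before
  public:
    StageGuard(SessionState &s, const char *new_stage)
      : session(s), saved(s.stage) { session.stage= new_stage; }
    ~StageGuard() { session.stage= saved; }   // e.g. leave "Unlocking tables"
  };

  void unlock_tables(SessionState &session)
  {
    StageGuard guard(session, "Unlocking tables");
    // ... release table locks; timing is attributed to this stage ...
  }   // previous stage restored here, so it is not lost for profiling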
Other things:
- Renamed all stages to start with a capital letter (before there was no
consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema come from the renaming of
stages.
- Removed duplicate output of variables and initial state in a lot of
performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated the position of "stage_init_update" and "stage_updating" for
delete, insert and update to be just before the update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Removed the stage_end stage from creating views (this is not done for
create table either).
- Updated the default performance history size from 10 to 20 because of the
new stages.
- Ensure that ps_enabled is correct (to be used in a later patch)