Commit a923d6f49c disabled numeric setting
of character_set_* variables with non-default values:
MariaDB [(none)]> set character_set_client=224;
ERROR 1115 (42000): Unknown character set: '224'
However, the corresponding binlog functionality still writes numeric
values into log events, and this breaks binlog replay if the value is
not the default. Now make the server use the 'String' type for
'character_set_client' when generating binlog events.
Before:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=224,@@session.collation_connection=224,@@session.collation_server=33/*!*/;
After:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=utf8mb4,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
Note: prior to the previous commit, setting the variable to '224', '45', or
'utf8mb4' had the same effect, as they all set the parameter to
'utf8mb4'.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer, Amazon Web
Services, Inc.
In some cases, the errors would not be written to the log.
This was, however, not critical, as mysql_install_db
should normally not write anything to the log.
- InnoDB rolls back the whole transaction and discards the
savepoint when a failure happens during a bulk
insert operation. When the server requests the release of a savepoint,
InnoDB should return DB_SUCCESS while it is dealing with a bulk
insert operation.
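A sketch of the affected pattern, assuming t1 is an empty InnoDB table
(so the bulk insert code path is taken) and using the SEQUENCE engine
for sample data:

  BEGIN;
  SAVEPOINT sp1;
  INSERT INTO t1 SELECT seq FROM seq_1_to_1000;  -- bulk insert path
  RELEASE SAVEPOINT sp1;  -- must succeed during a bulk insert
  COMMIT;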
The failed assertion was about encountering the same rowid value in
two different partitions.
This wasn't possible with InnoDB previously: InnoDB used a global counter
to produce rowid values for hidden PK.
After the fix for MDEV-19506, it uses per-table counters so it's easily
possible to get the same hidden PK values in different tables.
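For illustration, a hypothetical table in which each partition
(internally a separate InnoDB table) keeps its own hidden-PK counter,
so equal rowid values can now occur in different partitions:

  CREATE TABLE t1 (a INT) ENGINE=InnoDB
  PARTITION BY HASH (a) PARTITIONS 2;  -- no PRIMARY KEY: hidden rowid PK
  INSERT INTO t1 VALUES (0),(1);       -- rows land in different partitions,
                                       -- each can get the same rowid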
The crash happened due to rows=2 vs rows=1 difference between how the
estimate of number of rows in a derived table is computed in
TABLE_LIST::fetch_number_of_rows() and JOIN::add_keyuses_for_splitting().
Made JOIN::add_keyuses_for_splitting() use the result of computations in
TABLE_LIST::fetch_number_of_rows().
For more convenient monitoring of something that could greatly affect
the volume of page writes, we add the status variable
Innodb_buffer_pool_pages_split that was previously only available
via information_schema.innodb_metrics as "innodb_page_splits".
This was suggested by Axel Schwenke.
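For example, the new counter can be read with:

  SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_split';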
buf_flush_page_count: Replaced with buf_pool.stat.n_pages_written.
We protect buf_pool.stat (except n_page_gets) with buf_pool.mutex
and remove unnecessary export_vars indirection.
buf_pool.flush_list_bytes: Moved from buf_pool.stat.flush_list_bytes.
Protected by buf_pool.flush_list_mutex.
buf_pool_t::page_cleaner_status: Replaces buf_pool_t::n_flush_LRU_,
buf_pool_t::n_flush_list_, and buf_pool_t::page_cleaner_is_idle.
Protected by buf_pool.flush_list_mutex. We will exclusively broadcast
buf_pool.done_flush_list by the buf_flush_page_cleaner thread,
and only wait for it when communicating with buf_flush_page_cleaner.
There is no need to keep a count of pending writes by the
buf_pool.flush_list processing. A single flag suffices for that.
Waits for page write completion can be performed by
simply waiting on block->page.lock, or by invoking
buf_dblwr.wait_for_page_writes().
buf_LRU_block_free_non_file_page(): Broadcast buf_pool.done_free and
set buf_pool.try_LRU_scan when freeing a page. This would be
executed also as part of buf_page_write_complete().
buf_page_write_complete(): Do not broadcast buf_pool.done_flush_list,
and do not acquire buf_pool.mutex unless buf_pool.LRU eviction is needed.
Let buf_dblwr count all writes to persistent pages and broadcast a
condition variable when no outstanding writes remain.
buf_flush_page_cleaner(): Prioritize LRU flushing and eviction right after
"furious flushing" (lsn_limit). Simplify the conditions and reduce the
hold time of buf_pool.flush_list_mutex. Refuse to shut down
or sleep if buf_pool.ran_out(), that is, LRU eviction is needed.
buf_pool_t::page_cleaner_wakeup(): Add the optional parameter for_LRU.
buf_LRU_get_free_block(): Protect buf_lru_free_blocks_error_printed
with buf_pool.mutex. Invoke buf_pool.page_cleaner_wakeup(true) to
ensure that buf_flush_page_cleaner() will process the LRU flush
request.
buf_do_LRU_batch(), buf_flush_list(), buf_flush_list_space():
Update buf_pool.stat.n_pages_written when submitting writes
(while holding buf_pool.mutex), not when completing them.
buf_page_t::flush(), buf_flush_discard_page(): Require that
the page U-latch be acquired upfront, and remove
buf_page_t::ready_for_flush().
buf_pool_t::delete_from_flush_list(): Remove the parameter "bool clear".
buf_flush_page(): Count pending page writes via buf_dblwr.
buf_flush_try_neighbors(): Take the block of page_id as a parameter.
If the tablespace is dropped before our page has been written out,
release the page U-latch.
buf_pool_invalidate(): Let the caller ensure that there are no
outstanding writes.
buf_flush_wait_batch_end(false),
buf_flush_wait_batch_end_acquiring_mutex(false):
Replaced with buf_dblwr.wait_for_page_writes().
buf_flush_wait_LRU_batch_end(): Replaces buf_flush_wait_batch_end(true).
buf_flush_list(): Remove some broadcast of buf_pool.done_flush_list.
buf_flush_buffer_pool(): Invoke also buf_dblwr.wait_for_page_writes().
buf_pool_t::io_pending(), buf_pool_t::n_flush_list(): Remove.
Outstanding writes are reflected by buf_dblwr.pending_writes().
buf_dblwr_t::init(): New function, to initialize the mutex and
the condition variables, but not the backing store.
buf_dblwr_t::is_created(): Replaces buf_dblwr_t::is_initialised().
buf_dblwr_t::pending_writes(), buf_dblwr_t::writes_pending:
Keep track of writes of persistent data pages.
buf_flush_LRU(): Allow calls while LRU flushing may be in progress
in another thread.
Tested by Matthias Leich (correctness) and Axel Schwenke (performance)
Adaptive flushing is enabled by setting innodb_max_dirty_pages_pct_lwm>0
(not default) and innodb_adaptive_flushing=ON (default).
There is also the parameter innodb_adaptive_flushing_lwm
(default: 10 per cent of the log capacity). It should enable some
adaptive flushing even when innodb_max_dirty_pages_pct_lwm=0.
That is not being changed here.
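For example, the following settings (the values are illustrative, not
recommendations) enable the adaptive flushing described above:

  SET GLOBAL innodb_max_dirty_pages_pct_lwm = 10;  -- non-default: >0
  SET GLOBAL innodb_adaptive_flushing = ON;        -- default
  SET GLOBAL innodb_adaptive_flushing_lwm = 10;    -- default: 10% of log capacity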
This idea was first presented by Inaam Rana several years ago,
and I discussed it with Jean-François Gagné at FOSDEM 2023.
buf_flush_page_cleaner(): When we are not near the log capacity limit
(neither buf_flush_async_lsn nor buf_flush_sync_lsn are set),
also try to move clean blocks from the buf_pool.LRU list to buf_pool.free
or initiate writes (but not the eviction) of dirty blocks, until
the remaining I/O capacity has been consumed.
buf_flush_LRU_list_batch(): Add the parameter bool evict, to specify
whether dirty least recently used pages (from buf_pool.LRU) should
be evicted immediately after they have been written out. Callers outside
buf_flush_page_cleaner() will pass evict=true, to retain the existing
behaviour.
buf_do_LRU_batch(): Add the parameter bool evict.
Return counts of evicted and flushed pages.
buf_flush_LRU(): Add the parameter bool evict.
Assume that the caller holds buf_pool.mutex and
will invoke buf_dblwr.flush_buffered_writes() afterwards.
buf_flush_list_holding_mutex(): A low-level variant of buf_flush_list()
whose caller must hold buf_pool.mutex and invoke
buf_dblwr.flush_buffered_writes() afterwards.
buf_flush_wait_batch_end_acquiring_mutex(): Remove. It is enough to have
buf_flush_wait_batch_end().
page_cleaner_flush_pages_recommendation(): Avoid some floating-point
arithmetic.
buf_flush_page(), buf_flush_check_neighbor(), buf_flush_check_neighbors(),
buf_flush_try_neighbors(): Rename the parameter "bool lru" to "bool evict".
buf_free_from_unzip_LRU_list_batch(): Remove the parameter.
Only actual page writes will contribute towards the limit.
buf_LRU_free_page(): Evict freed pages of temporary tables.
buf_pool.done_free: Broadcast whenever a block is freed
(and buf_pool.try_LRU_scan is set).
buf_pool_t::io_buf_t::reserve(): Retry indefinitely.
During the test encryption.innochecksum we easily run out of
these buffers for PAGE_COMPRESSED or ENCRYPTED pages.
Tested by Matthias Leich and Axel Schwenke
lock_sec_rec_some_has_impl(): Remove a harmful condition that caused the
performance regression and should not have been added in
commit b6e41e3872 in the first place.
Locking transactions that have not modified any persistent tables
can carry the transaction identifier 0.
trx_t::max_inactive_id: A cache for trx_sys_t::find_same_or_older().
The value is not reset on transaction commit so that previous results
can be reused for subsequent transactions. The smallest active
transaction ID can only increase over time, not decrease.
trx_sys_t::find_same_or_older(): Remember the maximum previous id for which
rw_trx_hash.iterate() returned false, to avoid redundant iterations.
lock_sec_rec_read_check_and_lock(): Add an early return in case we are
already holding a covering table lock.
lock_rec_convert_impl_to_expl(): Add a template parameter to avoid
a redundant run-time check on whether the index is secondary.
lock_rec_convert_impl_to_expl_for_trx(): Move some code from
lock_rec_convert_impl_to_expl(), to reduce code duplication due
to the added template parameter.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
This is a follow-up to
commit de4030e4d4 (MDEV-30400),
which fixed some hangs related to B-tree split or merge.
btr_root_block_get(): Use and update the root page guess. This is just
a minor performance optimization, not affecting correctness.
btr_validate_level(): Remove the parameter "lockout", and always
acquire an exclusive dict_index_t::lock in CHECK TABLE without QUICK.
This is needed in order to avoid latching order violation in
btr_page_get_father_node_ptr_for_validate().
btr_cur_need_opposite_intention(): Return true in case
btr_cur_compress_recommendation() would hold later during the
mini-transaction, or if a page underflow or overflow is possible.
If we return true, our caller will escalate to acquiring an exclusive
dict_index_t::lock, to prevent a latching order violation and deadlock
during btr_compress() or btr_page_split_and_insert().
btr_cur_t::search_leaf(), btr_cur_t::open_leaf():
Also invoke btr_cur_need_opposite_intention() on the leaf page.
btr_cur_t::open_leaf(): When escalating to exclusive index locking,
acquire exclusive latches on all pages as well.
innobase_instant_try(): Return an error code if the root page cannot
be retrieved.
In addition to the normal stress testing with Random Query Generator (RQG)
this has been tested with
./mtr --mysqld=--loose-innodb-limit-optimistic-insert-debug=2
but with the injection in btr_cur_optimistic_insert() for non-leaf pages
adjusted so that it would use the value 3. (Otherwise, infinite page
splits could occur in some mtr tests.)
Tested by: Matthias Leich
srv_start(): If we are going to close the log file in
mariadb-backup --prepare, call buf_flush_sync() before
calling recv_sys.debug_free() to ensure that the log file
will not be accessed.
This fixes a rather rare failure in the test
mariabackup.innodb_force_recovery where buf_flush_page_cleaner()
would invoke log_checkpoint_low() because !recv_recovery_is_on()
would hold due to the fact that recv_sys.debug_free() had
already been called. Then, the log write for the checkpoint
would fail because srv_start() had invoked log_sys.log.close_file().
Building of MariaDB server versions 11.0 and 11.1 fails
on macOS 12.x (Monterey). The build failure happened when generating
header files from the error messages contained in the file
errmsg-utf8.txt. This step is performed by the utility comp_err,
which crashes when it is run on macOS Monterey.
comp_err invokes my_init() at the very beginning of its startup, before
the thread environment has been initialized. While executing my_init(),
the function my_readlink() is called. my_readlink() is a wrapper around
the system call readlink() with extra error handling. In case the
system call readlink() returns an error, the following block of code is run:
  if (my_thread_var)
    my_errno= errno;
my_thread_var is a macro that expands to an invocation of
my_pthread_getspecific() with the supplied thread-specific key
THR_KEY_mysys. Unfortunately, the TSD key THR_KEY_mysys is initialized
only after the call of my_init(), so the return value of pthread_getspecific()
is platform dependent. On Linux, pthread_getspecific() returns NULL
if the key is not a valid TSD key. On macOS, the effect of calling
pthread_getspecific() with a key value not obtained from pthread_key_create()
is undefined. In practice, on macOS pthread_getspecific() returns some invalid
address, and the errno value is written to that address. This leads to a crash
later, when the library API function pthread_self() is called.
To fix the issue, initialization of the thread environment is moved to the
very beginning of the main() function, so that it runs as the first step
and the environment is fully initialized by the time my_init()
is invoked.
Created tests for "delete" based on update_use_source.test
For the update_use_source.test tests, data recovery in the table has been changed
from a rollback transaction to a complete delete and re-insert of the data with
optimize table. Cases are now being checked on three engines.
Added tests for update/delete with LooseScan and DuplicateWeedout optimization strategies
Added tests for engine MEMORY on delete and update
Added tests for multi-update with JSON_TABLE
Added tests for multi-update and multi-delete for engine Connect
This patch allows semi-join optimization to be used at the top level of
single-table update and delete statements.
The problem of supporting such an optimization became easy to resolve once
the processing of single-table update/delete statements started using a JOIN
structure. This allowed JOIN::prepare() to be used not only for multi-table
updates/deletes but for single-table ones as well. This was done in the
patch for mdev-28883:
Re-design the upper level of handling UPDATE and DELETE statements.
Note that JOIN::prepare() detects all subqueries that can be considered
as candidates for semi-join optimization. The code added by this patch
looks for such candidates at the top level, and if such candidates are found
in the processed single-table update/delete, the statement is handled in
the same way as a multi-table update/delete.
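For example, in the following hypothetical single-table statements the
IN subquery is now considered a semi-join candidate at the top level:

  UPDATE t1 SET c = c + 1 WHERE t1.a IN (SELECT t2.b FROM t2);
  DELETE FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2);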
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This bug caused a server crash on the second execution of a stored
function that used a DELETE or UPDATE statement if the first execution
of this function reported an error encountered after the prepare phase.
This happened because in such cases the executed DELETE/UPDATE statement
remained marked as prepared. As a result, the second execution of the SF
missed the prepare phase for the statement altogether and the statement
could not be executed properly.
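A sketch of the failure pattern (hypothetical objects; the specific error
does not matter as long as it is raised after the prepare phase):

  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    UPDATE t1 SET a = 1;  -- assume this fails at execution time, e.g.
                          -- with a duplicate-key error on a unique index
    RETURN 1;
  END;
  SELECT f1();  -- the first call reports the error
  SELECT f1();  -- the second call crashed the server before this fix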
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch fixes not only the assertion failure in the function
Field_iterator_table_ref::set_field_iterator() but also:
- fixes the problem of forced materialization of derived tables used
in subqueries contained in WHERE clauses of single-table and multi-table
UPDATE and DELETE statements
- fixes the problem of MDEV-17954 that prevented execution of multi-table
DELETE statements when their WHERE clauses contain references to
the tables that are updated.
The patch must be considered a complement to the patch for MDEV-28883.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch introduces a new way of handling UPDATE and DELETE commands at
the top level after the parsing phase. This new way of processing update
and delete statements can be seen in the implementation of the prepare()
and execute() methods of the new Sql_cmd_dml class. This class, derived
from the Sql_cmd class, can be considered an interface class for processing
such commands as SELECT, INSERT, UPDATE, DELETE and other commands
manipulating data in tables.
With this patch processing of update and delete statements after parsing
proceeds by the following schema:
- precheck of the access rights is performed for the used tables
- the used tables are opened
- context analysis phase is performed for the statement
- the used tables are locked
- the statement is optimized and executed
- clean-up is performed for the statement
The implementation of the method Sql_cmd_dml::execute() adheres to this schema.
The virtual functions of the class Sql_cmd_dml used for precheck of the
access rights, context analysis, optimization and execution allow adjusting
this schema for processing data manipulation statements of any type.
This schema of processing data manipulation statements is taken from the
current MySQL code. Moreover, the definition of the class Sql_cmd_dml
introduced in this patch is almost a full replica of the corresponding
class in existing MySQL code.
However the implementation of the derived classes for update and delete
statements is quite different. This implementation employs the JOIN class
for all kinds of update and delete statements. It allows the main
bulk of the context analysis actions to be performed by the function
JOIN::prepare(). This guarantees that the characteristics and properties of
the statement tree discovered during context analysis for the optimization
phase are the same for single-table and multi-table updates and deletes.
With this patch the following functions are gone:
mysql_prepare_update(), mysql_multi_update_prepare(),
mysql_update(), mysql_multi_update(),
mysql_prepare_delete(), mysql_multi_delete_prepare(), mysql_delete().
The code within these functions has been reused as much as possible, though.
The functions mysql_test_update() and mysql_test_delete() are also no
longer needed. The method Sql_cmd_dml::prepare() serves the processing of:
- an update/delete statement
- PREPARE stmt FROM "<update/delete statement>"
- EXECUTE stmt when stmt is prepared from an update/delete statement.
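Illustrated with a hypothetical table t1, the three forms are:

  UPDATE t1 SET a = a + 1 WHERE b > 0;
  PREPARE stmt FROM 'UPDATE t1 SET a = a + 1 WHERE b > 0';
  EXECUTE stmt;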
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Do not set any flags in the items for constant subformulas TRUE/FALSE when
checking pushability of a formula into a view. Occurrences of these
subformulas can be ignored when checking pushability of the formula.
At the same time the items used for these constants became immutable
starting from version 10.7.
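For illustration (hypothetical view and table), the constant subformula
1=1 below can simply be ignored when checking whether the WHERE condition
can be pushed into the view:

  CREATE VIEW v1 AS SELECT a, SUM(b) AS s FROM t1 GROUP BY a;
  SELECT * FROM v1 WHERE 1=1 AND s > 10;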
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Type_handler::partition_field_append_value() erroneously
passed the address of my_collation_contextually_typed_binary
to conversion functions copy_and_convert() and my_convert().
This happened because generate_partition_syntax_for_frm()
was called from mysql_create_frm_image() in the stage when
the fields in List<Create_field> can still contain unresolved
contextual collations, like "binary" in the reported crash scenario:
ALTER TABLE t CHANGE COLUMN a a CHAR BINARY;
Fix:
1. Splitting mysql_prepare_create_table() into two parts:
- mysql_prepare_create_table_stage1() iterates through
List<Create_field> and calls Create_field::prepare_stage1(),
which performs basic attribute initialization, including
context collation resolution.
- mysql_prepare_create_table_finalize() - the rest of the
old mysql_prepare_create_table() code.
2. Changing mysql_create_frm_image():
It now calls:
- mysql_prepare_create_table_stage1() in the very
beginning, before the partition related code.
- mysql_prepare_create_table_finalize() in the end,
instead of the old mysql_prepare_create_table() call
3. Adding mysql_prepare_create_table() as a wrapper
for two calls:
mysql_prepare_create_table_stage1() ||
mysql_prepare_create_table_finalize()
so the code stays unchanged in the other places
where mysql_prepare_create_table() was used.
4. Changing prototype for Type_handler::Column_definition_prepare_stage1()
Removing arguments:
- handler *file
- ulonglong table_flags
Adding a new argument instead:
- column_definition_type_t type
This makes it possible to call Column_definition_prepare_stage1(),
and therefore mysql_prepare_create_table_stage1(),
before the instantiation of a handler.
This simplifies the code because, in the case of a partitioned table,
mysql_create_frm_image() first creates a handler for the underlying
partition, then frees it and creates a ha_partition
instance instead.
Before the fix, mysql_prepare_create_table() was called with the final
(ha_partition) handler.
5. Moving parts of Column_definition_prepare_stage1() which
need a pointer to handler and table_flags to
Column_definition_prepare_stage2().
The deprecated parameters will be removed:
innodb_defragment
innodb_defragment_n_pages
innodb_defragment_stats_accuracy
innodb_defragment_fill_factor_n_recs
innodb_defragment_fill_factor
innodb_defragment_frequency
The mysql.innodb_index_stats.stat_name values 'n_page_split' and
'n_pages_freed' will lose their special meaning.
The related changes to OPTIMIZE TABLE in InnoDB will be removed as well.
The parameter innodb_optimize_fulltext_only will retain its special
meaning in OPTIMIZE TABLE.
Tested by: Matthias Leich
fil_node_open_file_low() tries to close files from the top of
fil_system.space_list if the limit on the number of opened files is exceeded.
It invokes fil_space_t::try_to_close(), which iterates the list searching
for the first opened space. Then it just closes the space, leaving it in
the same position in fil_system.space_list.
During heavy file opening, as during 'SHOW TABLE STATUS ...' execution,
if the limit on the number of opened files is reached,
fil_space_t::try_to_close() iterates over more and more closed spaces before
reaching any opened space on each fil_node_open_file_low() call, which
causes a performance regression if the number of spaces is big enough.
The fix is to keep opened spaces at the top of fil_system.space_list,
and move closed files at the end of the list.
For this purpose the fil_space_t::space_list_last_opened pointer is
introduced. It points to the last inserted opened space in
fil_space_t::space_list. When a space is opened, it is inserted into
fil_space_t::space_list just after the position this pointer points to,
to preserve the logic introduced in MDEV-23855. Any closed space is added
to the end of fil_space_t::space_list.
As opened spaces are located at the top of fil_space_t::space_list,
fil_space_t::try_to_close() finds opened space faster.
There can be a case where opened and closed spaces are mixed in
fil_space_t::space_list, if fil_system.freeze_space_list was set during
fil_node_open_file_low() execution. But this should not cause any error,
as fil_space_t::try_to_close() still iterates over the spaces in the list.
There is no need for any test case for this fix, as it does not change any
functionality but just fixes a performance regression.
Renaming the default MariaDB backup directory from
xtrabackup_backupfiles to mariadb_backup_files.
Renaming files:
- xtrabackup_binlog_info to mariadb_backup_binlog_info
- xtrabackup_checkpoints to mariadb_backup_checkpoints
- xtrabackup_galera_info to mariadb_backup_galera_info
- xtrabackup_info to mariadb_backup_info
- xtrabackup_slave_info to mariadb_backup_slave_info