This patch introduces a new way of handling UPDATE and DELETE commands at
the top level after the parsing phase. This new way of processing update
and delete statements can be seen in the implementation of the prepare()
and execute() methods of the new Sql_cmd_dml class. This class, derived
from the Sql_cmd class, can be considered an interface class for processing
such commands as SELECT, INSERT, UPDATE, DELETE and other commands
that manipulate data in tables.
With this patch, processing of update and delete statements after parsing
proceeds according to the following schema:
- precheck of the access rights is performed for the used tables
- the used tables are opened
- context analysis phase is performed for the statement
- the used tables are locked
- the statement is optimized and executed
- clean-up is performed for the statement
The implementation of the method Sql_cmd_dml::execute() adheres to this schema.
The virtual functions of the class Sql_cmd_dml used for the precheck of the
access rights, context analysis, optimization and execution allow adjusting
this schema for processing data manipulation statements of any type.
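Below is a minimal, self-contained sketch of that control flow. It is
illustrative only: apart from the step names above, the class, method and
type names are hypothetical stand-ins, and the real Sql_cmd_dml works on the
server's internal THD, TABLE_LIST and JOIN structures.

  struct THD {};   // stand-in for the server's session object

  // Hypothetical interface mirroring the processing schema described above;
  // this is an illustration, not the actual server class.
  class Dml_command_sketch
  {
  public:
    virtual ~Dml_command_sketch()= default;

    // Drives the whole statement: precheck, open, prepare, lock, run, clean up.
    bool execute(THD *thd)
    {
      if (precheck(thd))              // precheck of access rights on used tables
        return true;
      if (open_tables(thd))           // open the used tables
        return true;
      if (prepare_inner(thd))         // context analysis of the statement
        return true;
      if (lock_tables(thd))           // lock the used tables
        return true;
      bool res= execute_inner(thd);   // optimize and execute the statement
      cleanup(thd);                   // clean-up for the statement
      return res;
    }

  protected:
    // Statement-specific steps supplied by derived classes (UPDATE, DELETE, ...).
    virtual bool precheck(THD *thd)= 0;
    virtual bool prepare_inner(THD *thd)= 0;
    virtual bool execute_inner(THD *thd)= 0;

  private:
    bool open_tables(THD *) { return false; }
    bool lock_tables(THD *) { return false; }
    void cleanup(THD *) {}
  };

  // A trivial command built on the interface above, standing in for
  // an UPDATE or DELETE handler.
  class Update_command_sketch : public Dml_command_sketch
  {
  protected:
    bool precheck(THD *) override { return false; }
    bool prepare_inner(THD *) override { return false; }
    bool execute_inner(THD *) override { return false; }
  };

  int main()
  {
    THD thd;
    Update_command_sketch cmd;
    return cmd.execute(&thd) ? 1 : 0;
  }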
This schema of processing data manipulation statements is taken from the
current MySQL code. Moreover, the definition of the class Sql_cmd_dml
introduced in this patch is almost a full replica of the corresponding class
in the existing MySQL code. However, the implementation of the derived classes
for update and delete statements is quite different. This implementation
employs the JOIN class for all kinds of update and delete statements. It
allows the main bulk of context analysis actions to be performed by the
function JOIN::prepare(). This guarantees that the characteristics and
properties of the statement tree discovered for the optimization phase when
doing context analysis are the same for single-table and multi-table updates
and deletes.
With this patch the following functions are gone:
mysql_prepare_update(), mysql_multi_update_prepare(),
mysql_update(), mysql_multi_update(),
mysql_prepare_delete(), mysql_multi_delete_prepare(), mysql_delete().
The code from these functions has been reused as much as possible, though.
The functions mysql_test_update() and mysql_test_delete() are also no longer
needed. The method Sql_cmd_dml::prepare() serves the processing of
- update/delete statement
- PREPARE stmt FROM "<update/delete statement>"
- EXECUTE stmt when stmt is prepared from update/delete statement.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The deprecated parameters will be removed:
innodb_defragment
innodb_defragment_n_pages
innodb_defragment_stats_accuracy
innodb_defragment_fill_factor_n_recs
innodb_defragment_fill_factor
innodb_defragment_frequency
The mysql.innodb_index_stats.stat_name values 'n_page_split' and
'n_pages_freed' will lose their special meaning.
The related changes to OPTIMIZE TABLE in InnoDB will be removed as well.
The parameter innodb_optimize_fulltext_only will retain its special
meaning in OPTIMIZE TABLE.
Tested by: Matthias Leich
Renaming the default MariaDB backup directory from
xtrabackup_backupfiles to mariadb_backup_files.
Renaming files:
- xtrabackup_binlog_info to mariadb_backup_binlog_info
- xtrabackup_checkpoints to mariadb_backup_checkpoints
- xtrabackup_galera_info to mariadb_backup_galera_info
- xtrabackup_info to mariadb_backup_info
- xtrabackup_slave_info to mariadb_backup_slave_info
Firstmatch_picker::check_qep() has an optimization that allows firstmatch
to be used together with join buffer under some conditions. In this
case the cost was assumed to be the same as what best_access_path()
had calculated.
However if HASH+join_buffer was used, then
fix_semijoin_strategies_for_picked_join_order() would remove the
join_buffer (which would cause a full join to be used) and the cost
assumption by Firstmatch_picker::check_qep() would be wrong.
Later, check_join_cache_usage() sees that it is a full scan and decides
it can use join buffering (but not the hash join).
Fixed by also allowing HASH joins with firstmatch.
This removes the need to disable and re-enable the join buffer.
Test case changes:
- HASH join used with firstmatch (Using join buffer (flat, BNLH join))
- Filtered could change with firstmatch as the conversion with and without
join buffer lost the filtering information.
- The join buffer not being re-enabled is shown in main.optimizer_trace
Original code by Sergei, optimized by Monty.
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching
IN_PREDICATE_CONVERSION_THRESHOLD
If there is a read error from handler::ha_rnd_next() during a recursive
query, st_select_lex_unit::exec_recursive() will crash as it will try to
get the error code from a structure that was deleted by the callee.
The code was using the construct:
sl->join->exec();
saved_error=sl->join->error;
This does not work as sl->join was freed by the exec() and sl->join would
be set to 0.
Fixed by having JOIN::exec() return the error code.
The included test case simulates the error in ha_rnd_next(), which causes
a crash without the patch.
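A reduced sketch of the before/after calling pattern (JOIN and SELECT_LEX
are stubbed here; the real classes are far larger):

  #include <memory>

  // Minimal stand-in for the server's JOIN class.
  struct JOIN
  {
    int error= 0;
    // After the fix, exec() reports the error code directly to the caller,
    // so the caller no longer needs to read join->error afterwards.
    int exec() { /* read rows; set error on failure */ return error; }
  };

  struct SELECT_LEX
  {
    std::unique_ptr<JOIN> join;
  };

  int run_recursive_step(SELECT_LEX *sl)
  {
    // Old pattern (unsafe): exec() may free the JOIN and reset sl->join to 0,
    // so reading sl->join->error afterwards dereferences a stale pointer:
    //   sl->join->exec();
    //   saved_error= sl->join->error;

    // Fixed pattern: take the error code from the return value of exec().
    int saved_error= sl->join->exec();
    return saved_error;
  }

  int main()
  {
    SELECT_LEX sl{ std::make_unique<JOIN>() };
    return run_recursive_step(&sl);
  }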
The problem was that mysql_derived_prepare() did not correctly set
'distinct' when creating a temporary derived table.
Fixed by separating checking for distinct for queries with and without
UNION.
Other things:
- Fixed bug in generate_derived_keys_for_table() where we set the wrong
bit for join_tab->keys
- Cleaned up JOIN::drop_unused_derived_keys()
- Changed TABLE::use_index() to keep unique keys and update
share->key_parts
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
This patch also fixes some bugs that were detected by valgrind after
applying this patch:
- Not enough copy_func elements were allocated by Create_tmp_table(), which
caused a memory overwrite in Create_tmp_table::add_fields().
I added an ASSERT() to be able to detect this also without valgrind.
The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set
when calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob
fields in the table. Fixed the code to take this into account.
This cannot cause any issues as this is just a memory access
into other Aria memory and the content of the memory would not be used.
- Aria::last_key_buff was not allocated big enough. This may have caused
issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they
would use the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which
caused problems when copying rec_per_key from engine to sql level.
- Mark asan builds with 'asan' in the version string to detect these in
not_valgrind_build.inc.
This is needed to not have main.sp-no-valgrind fail with asan.
It is not safe to invoke trx_purge_free_segment() or execute
innodb_undo_log_truncate=ON before all undo log records in
the rollback segment have been processed.
A prominent failure that would occur due to premature freeing of
undo log pages is that trx_undo_get_undo_rec() would crash when
trying to copy an undo log record to fetch the previous version
of a record.
If trx_undo_get_undo_rec() was not invoked in the unlucky time frame,
then the symptom would be that some committed transaction history is
never removed. This would be detected by CHECK TABLE...EXTENDED that
was implemented in commit ab0190101b.
Such a garbage collection leak should be possible even when using
innodb_undo_log_truncate=OFF, just involving trx_purge_free_segment().
trx_rseg_t::needs_purge: Change the type from Boolean to a transaction
identifier, noting the most recent non-purged transaction, or 0 if
everything has been purged. On transaction start, we initialize this
to 1 more than the transaction start ID. On recovery, the field may be
adjusted to the transaction end ID (TRX_UNDO_TRX_NO) if it is larger.
The field TRX_UNDO_NEEDS_PURGE becomes write-only; it is only read by some
debug assertions that validate the value. The field reflects the old,
inaccurate Boolean field trx_rseg_t::needs_purge.
trx_undo_mem_create_at_db_start(), trx_undo_lists_init(),
trx_rseg_mem_restore(): Remove the parameter max_trx_id.
Instead, store the maximum in trx_rseg_t::needs_purge,
where trx_rseg_array_init() will find it.
trx_purge_free_segment(): Continuously hold a lock on
trx_rseg_t to prevent any concurrent allocation of undo log.
trx_purge_truncate_rseg_history(): Only invoke trx_purge_free_segment()
if the rollback segment is empty and there are no pending transactions
associated with it.
trx_purge_truncate_history(): Only proceed with innodb_undo_log_truncate=ON
if trx_rseg_t::needs_purge indicates that all history has been purged.
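A simplified model of the new field's lifecycle, for illustration only
(the real code lives in trx_t, trx_rseg_t and purge_sys; the helper names
and the may_free_segment() condition below are hypothetical):

  #include <algorithm>
  #include <cstdint>

  typedef uint64_t trx_id_t;

  // Simplified stand-in for a rollback segment.
  struct rseg_sketch
  {
    // Most recent non-purged transaction, or 0 if everything has been purged.
    trx_id_t needs_purge= 0;
  };

  // On transaction start: remember that history up to and including this
  // transaction may still need purging.
  void on_trx_start(rseg_sketch &rseg, trx_id_t trx_start_id)
  {
    rseg.needs_purge= trx_start_id + 1;
  }

  // On recovery: the field may be adjusted upwards to the transaction end
  // identifier (TRX_UNDO_TRX_NO) found in the undo log, if it is larger.
  void on_recovery(rseg_sketch &rseg, trx_id_t trx_undo_trx_no)
  {
    rseg.needs_purge= std::max(rseg.needs_purge, trx_undo_trx_no);
  }

  // Freeing the segment (or truncating the undo tablespace) is only safe
  // once purge has processed everything recorded in needs_purge.
  bool may_free_segment(const rseg_sketch &rseg, trx_id_t purged_up_to)
  {
    return rseg.needs_purge <= purged_up_to;
  }

  int main()
  {
    rseg_sketch rseg;
    on_trx_start(rseg, 100);
    on_recovery(rseg, 105);
    return may_free_segment(rseg, 200) ? 0 : 1;
  }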
Tested by: Matthias Leich
- rollback_inplace_alter_table() locks the FTS internal tables.
At the same time, an insert tries to fetch the doc id from the config table,
fails to lock the config table and returns the doc id as 0.
fts_cmp_set_sync_doc_id(): Retry fetching the doc id if it encounters
a DB_LOCK_WAIT_TIMEOUT error.
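A hedged sketch of the retry idea follows; everything except the
DB_LOCK_WAIT_TIMEOUT / DB_SUCCESS error codes is a made-up stand-in for the
real FTS code:

  #include <cstdint>

  // Simplified error codes mirroring the InnoDB dberr_t values involved here.
  enum dberr_sketch { DB_SUCCESS, DB_LOCK_WAIT_TIMEOUT };

  // Stand-in for fetching the next document id from the FTS config table.
  // During rollback of an in-place ALTER TABLE the FTS internal tables are
  // locked, so the first attempt may time out waiting for the lock.
  static dberr_sketch read_doc_id_from_config(uint64_t *doc_id)
  {
    static int attempts= 0;
    if (attempts++ == 0)
      return DB_LOCK_WAIT_TIMEOUT;   // simulate the lock wait timeout
    *doc_id= 42;                     // arbitrary value for the sketch
    return DB_SUCCESS;
  }

  // Fixed behaviour: retry the fetch instead of falling back to doc id 0.
  dberr_sketch get_next_doc_id(uint64_t *doc_id)
  {
    dberr_sketch err;
    do
      err= read_doc_id_from_config(doc_id);
    while (err == DB_LOCK_WAIT_TIMEOUT);
    return err;
  }

  int main()
  {
    uint64_t doc_id= 0;
    return get_next_doc_id(&doc_id) == DB_SUCCESS ? 0 : 1;
  }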
- Use log2() instead of log()
- Added missing '+' when calculating rowid setup cost
- Adjusted ROWID_FILTER_PER_ELEMENT_MODIFIER (from 3 to 1)
Other things:
- Adjusted cost for index_merge where rows_out < 1.0
The effects of the changes:
- rowid filter will have higher setup cost
- rowid filter will have slightly lower cost per row
This can be seen in mtr where some tests with small tables, or tests that
use rowid filters with many rows, will not use the rowid filter anymore.
There is a little-used option innodb_defragment that would make
OPTIMIZE TABLE not rebuild the table as usual for InnoDB, but
instead cause the index B-trees to be optimized in place.
This option uses excessive locking (exclusively locking index trees).
It never covered SPATIAL INDEX or FULLTEXT INDEX. Storage space
was never reclaimed.
Because this option is not particularly useful and causes a
maintenance burden (most recently in
commit de4030e4d4),
it is best to deprecate it, to prepare for its removal.
The initial issue was an assertion failure, which checked that the lock to
cancel equals trx->lock.wait_lock in lock_sys_t::cancel().
If we analyze the lock_sys_t::cancel() code from the perspective of
trx->lock.wait_lock racing, we won't find an error there, except for the
cases when we need to reload it after acquiring the corresponding
latches.
So the fix is just to remove the assertion and reload
trx->lock.wait_lock after acquiring necessary latches.
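Schematically (a sketch with std::mutex standing in for the lock_sys and
transaction latches; the type and function names are illustrative):

  #include <mutex>

  struct lock_sketch {};

  struct trx_sketch
  {
    std::mutex latch;                 // stand-in for the transaction latch
    lock_sketch *wait_lock= nullptr;  // may be changed concurrently
  };

  std::mutex lock_sys_latch;          // stand-in for the lock_sys latch

  // Cancel the wait for a lock. Because trx->wait_lock can change while we
  // are acquiring the latches, it must be re-read afterwards instead of
  // asserting that it still equals the lock we were initially asked to cancel.
  void cancel_wait(trx_sketch *trx)
  {
    std::scoped_lock guard(lock_sys_latch, trx->latch);
    lock_sketch *lock= trx->wait_lock;   // reload under the latches
    if (!lock)
      return;                            // the wait already ended; nothing to do
    // ... release the wait on `lock` ...
  }

  int main()
  {
    trx_sketch trx;
    cancel_wait(&trx);
  }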
Reviewed by: Marko Mäkelä <marko.makela@mariadb.com>
Post-fix to MDEV-30318 and MDEV-22570-related changes:
unified handling of wsrep_provider in the code so that "none"
is interpreted case-insensitively everywhere and an empty
string is supported everywhere.
Do not compile wsrep_provider plugin if WITH_WSREP is not enabled.
We should not enable the wsrep_provider plugin if WSREP_ON=OFF; in
that case we can only print information that Plugin
'wsrep-provider' is disabled.
Make sure tests require Galera library 26.4.14 if needed.
- Provider options are read from the provider during
startup, before plugins are initialized.
- New wsrep_provider plugin for which sysvars are generated
dynamically from options read from the provider.
- The plugin is enabled by option plugin-wsrep-provider=ON.
If enabled, wsrep_provider_options can no longer be used
(an error is raised on attempts to do so).
- Each option is either string, integer, double or bool
- Options can be dynamic / readonly
- Options can be deprecated
Limitations:
- We do not check that the value of a provider option falls
within a certain range. This type of validation is still
done on the Galera side.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
- During a non-last batch of multi-batch recovery, InnoDB holds
log_sys.mutex and preallocates a block, which may initiate a
page flush, which may initiate a log flush, which requires
acquiring log_sys.mutex again. This leads to an assertion failure.
So InnoDB recovery should release log_sys.mutex before
preallocating the block.
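Schematically, under the assumption that the flush path re-acquires the same
mutex (std::mutex stands in for log_sys.mutex; the function names are made up):

  #include <mutex>

  std::mutex log_sys_mutex;        // stand-in for log_sys.mutex

  struct block_sketch {};

  // The allocation may trigger a page flush and hence a log flush, which
  // takes log_sys.mutex; this deadlocks if the caller still holds it,
  // because the mutex is not recursive.
  block_sketch *allocate_block_may_flush()
  {
    std::lock_guard<std::mutex> guard(log_sys_mutex);
    return new block_sketch;
  }

  // Fixed pattern: drop log_sys.mutex around the allocation, then take it
  // back before continuing with recovery.
  block_sketch *recv_preallocate_block(std::unique_lock<std::mutex> &log_lock)
  {
    log_lock.unlock();                       // release log_sys.mutex first
    block_sketch *block= allocate_block_may_flush();
    log_lock.lock();                         // re-acquire before continuing
    return block;
  }

  int main()
  {
    std::unique_lock<std::mutex> log_lock(log_sys_mutex);
    delete recv_preallocate_block(log_lock);
  }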
Task:
=====
Update tests to reflect MDEV-20122, deprecation of master_use_gtid=current_pos.
Change Master (CM) statements were either removed or modified with
current_pos --> slave_pos based on the original intention of the test.
Reviewed by:
============
Brandon Nesterenko <brandon.nesterenko@mariadb.com>