This patch fixes not only the assertion failure in the function
Field_iterator_table_ref::set_field_iterator() but also:
- fixes the problem of forced materialization of derived tables used
in subqueries contained in WHERE clauses of single-table and multi-table
UPDATE and DELETE statements
- fixes the problem of MDEV-17954 that prevented execution of multi-table
DELETE statements if their WHERE clauses contained references to
the tables that are updated.
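A minimal sketch of the kind of statement affected, assuming hypothetical
tables t1 and t2:
```
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
-- Multi-table DELETE whose WHERE clause contains a subquery over a
-- derived table and references a table modified by the statement:
DELETE t1 FROM t1, t2
WHERE t1.a = t2.b
  AND t1.a IN (SELECT dt.a FROM (SELECT a FROM t1) AS dt WHERE dt.a > 0);
```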
The patch must be considered a complement to the patch for MDEV-28883.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch introduces a new way of handling UPDATE and DELETE commands at
the top level after the parsing phase. This new way of processing update
and delete statements can be seen in the implementation of the prepare()
and execute() methods from the new Sql_cmd_dml class. This class, derived
from the Sql_cmd class, can be considered an interface class for processing
such commands as SELECT, INSERT, UPDATE, DELETE and other commands
manipulating data in tables.
With this patch, processing of update and delete statements after parsing
proceeds according to the following schema:
- precheck of the access rights is performed for the used tables
- the used tables are opened
- context analysis phase is performed for the statement
- the used tables are locked
- the statement is optimized and executed
- clean-up is performed for the statement
The implementation of the method Sql_cmd_dml::execute() adheres to this schema.
The virtual functions of the class Sql_cmd_dml used for the precheck of the
access rights, context analysis, optimization and execution make it possible
to adjust this schema for processing data manipulation statements of any type.
This schema of processing data manipulation statements is taken from the
current MySQL code. Moreover, the definition of the class Sql_cmd_dml introduced
in this patch is almost a full replica of the corresponding class in the
existing MySQL code. However, the implementation of the derived classes for
update and delete statements is quite different. This implementation employs
the JOIN class for all kinds of update and delete statements. It allows the
main bulk of the context analysis actions to be performed by the function
JOIN::prepare(). This guarantees that the characteristics and properties of
the statement tree discovered during context analysis for the optimization
phase are the same for single-table and multi-table updates and deletes.
With this patch the following functions are gone:
mysql_prepare_update(), mysql_multi_update_prepare(),
mysql_update(), mysql_multi_update(),
mysql_prepare_delete(), mysql_multi_delete_prepare(), mysql_delete().
The code within these functions has been reused as much as possible, though.
The functions mysql_test_update() and mysql_test_delete() are also no longer
needed. The method Sql_cmd_dml::prepare() serves the processing of:
- an update/delete statement
- PREPARE stmt FROM "<update/delete statement>"
- EXECUTE stmt when stmt is prepared from an update/delete statement.
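For illustration, the three entry paths above, with a hypothetical table t1:
```
UPDATE t1 SET a = a + 1 WHERE a > 10;   -- plain update statement
PREPARE stmt FROM 'UPDATE t1 SET a = a + 1 WHERE a > 10';
EXECUTE stmt;   -- stmt was prepared from an update statement
```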
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The deprecated parameters will be removed:
innodb_defragment
innodb_defragment_n_pages
innodb_defragment_stats_accuracy
innodb_defragment_fill_factor_n_recs
innodb_defragment_fill_factor
innodb_defragment_frequency
The mysql.innodb_index_stats.stat_name values 'n_page_split' and
'n_pages_freed' will lose their special meaning.
The related changes to OPTIMIZE TABLE in InnoDB will be removed as well.
The parameter innodb_optimize_fulltext_only will retain its special
meaning in OPTIMIZE TABLE.
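For example, with a hypothetical InnoDB table t1 that has a FULLTEXT index,
the retained behaviour is:
```
SET GLOBAL innodb_optimize_fulltext_only = ON;
OPTIMIZE TABLE t1;  -- optimizes only the FULLTEXT index data of t1
```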
Tested by: Matthias Leich
Renaming the default MariaDB backup directory from
xtrabackup_backupfiles to mariadb_backup_files.
Renaming files:
- xtrabackup_binlog_info to mariadb_backup_binlog_info
- xtrabackup_checkpoints to mariadb_backup_checkpoints
- xtrabackup_galera_info to mariadb_backup_galera_info
- xtrabackup_info to mariadb_backup_info
- xtrabackup_slave_info to mariadb_backup_slave_info
Removed an old '* 2' from the HASH join cost. This was made obsolete by
a later patch that added a cost for copying the data out from the join buffer
to table->record.
I also added some 'echo' statements to some test cases to make it easier to
debug test case changes.
Test case changes:
- subselect3_jcl6 and subselect_sj2_jcl6 result changes as materialized
tables changed to hash join + first_match
Firstmatch_picker::check_qep() has an optimization that allows firstmatch
to be used together with the join buffer under some conditions. In this
case the cost was assumed to be the same as what best_access_path()
had calculated.
However, if HASH+join_buffer was used, then
fix_semijoin_strategies_for_picked_join_order() would remove the
join_buffer (which would cause a full join to be used) and the cost
assumption by Firstmatch_picker::check_qep() would be wrong.
Later, check_join_cache_usage() sees that it is a full scan and decides
it can use join buffering (but not the hash join).
Fixed by also allowing HASH joins with firstmatch.
This removes the need to disable and re-enable the join buffer.
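A sketch of the shape of semi-join query affected, with hypothetical tables;
with this fix, EXPLAIN may show FirstMatch combined with
'Using join buffer (flat, BNLH join)':
```
SELECT * FROM t1
WHERE t1.a IN (SELECT t2.b FROM t2, t3 WHERE t2.c = t3.c);
```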
Test case changes:
- HASH join used with firstmatch (Using join buffer (flat, BNLH join))
- Filtered could change with firstmatch as the conversion with and without
join buffering lost the filtering information.
- The absence of "re-enabling join buffer" is shown in main.optimizer_trace
Original code by Sergei, optimized by Monty.
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
The AWS KMS plugin saves all key files under the root folder of the data
directory. Increasing key IDs and key rotations generate a
lot of key files under the root folder, which looks messy and makes it
hard to maintain the folder permissions etc.
This patch introduces a new plugin parameter `aws_key_management_keyfile_dir` to
define the directory for saving the key files, for better maintenance.
Detailed parameter information is as follows:
```
VARIABLE_NAME: AWS_KEY_MANAGEMENT_KEYFILE_DIR
SESSION_VALUE: NULL
GLOBAL_VALUE: <Directory path>
GLOBAL_VALUE_ORIGIN: COMMAND-LINE
DEFAULT_VALUE:
VARIABLE_SCOPE: GLOBAL
VARIABLE_TYPE: VARCHAR
VARIABLE_COMMENT: Define the directory in which to save key files
for the AWS key management plugin. If not set,
the root datadir will be used
READ_ONLY: YES
COMMAND_LINE_ARGUMENT: REQUIRED
GLOBAL_VALUE_PATH: NULL
```
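The block above follows the layout of information_schema.SYSTEM_VARIABLES,
so after starting the server with the option set, the value can be checked
with, for example:
```
SELECT VARIABLE_NAME, GLOBAL_VALUE
FROM information_schema.SYSTEM_VARIABLES
WHERE VARIABLE_NAME = 'AWS_KEY_MANAGEMENT_KEYFILE_DIR';
```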
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
If you are running mariadb-install-db from a source tree instead of an
installation, it was executing `mysqld` instead of `mariadbd`, which showed
the deprecation warning. This patch fixes that, as well as fixing
messages and links to other things that have been renamed.
This stabilizes main.order_by_optimizer_innodb, where the result varies
depending on the rec_per_key status from the engine.
The logic to prefer a range over a const ref:
- If the range has only one part and it uses more key parts than the ref,
then use the range.
Example:
WHERE key_part1=1 and key_part2 > #
Here we will prefer a range over (key_part1,key_part2) instead of a ref
over key_part1.
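A minimal sketch of the preference, assuming a hypothetical table:
```
CREATE TABLE t1 (key_part1 INT, key_part2 INT,
                 KEY k1 (key_part1, key_part2)) ENGINE=InnoDB;
-- A range over (key_part1, key_part2) is now preferred to a const ref
-- over key_part1 alone:
EXPLAIN SELECT * FROM t1 WHERE key_part1 = 1 AND key_part2 > 10;
```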
This fixes a regression due to MDEV-19229. InnoDB would fail to maintain
the maximum transaction ID when it changes and reinitializes the number
of undo tablespaces. InnoDB should maintain the maximum transaction ID
in TRX_RSEG_MAX_TRX_ID of system rollback segment header.
srv_undo_tablespaces_reinit(): Preserve the system-wide maximum
transaction identifier in the TRX_RSEG_MAX_TRX_ID field of
the first rollback segment. If needed, upgrade the page to the
MariaDB 10.3 format first. All this must be done in the same
atomic mini-transaction that will reinitialize the TRX_SYS page.
Before MariaDB Server 10.3, InnoDB persisted the maximum transaction
identifier only in the TRX_SYS page. MariaDB 10.3 started to treat that
page as a read-only directory of rollback segments, and the maximum
transaction identifier will be recovered from TRX_RSEG_MAX_TRX_ID or
from undo logs. Since a change of innodb_undo_tablespaces is only
allowed when no undo log records exist, the only place to store the
persistent maximum transaction identifier is in TRX_RSEG_MAX_TRX_ID
of one of the rollback segment header pages.
The bug was observed when the database was upgraded directly from MySQL 5.7
or earlier, or from MariaDB Server 10.2 or earlier, to multiple
innodb_undo_tablespaces. On a restart of MariaDB after the upgrade,
the transaction identifier would be reported to be smaller than during
the upgrade:
2023-03-03 10:43:57 0 [Note] InnoDB: log sequence number 2762352; transaction id 1794
2023-03-03 10:44:17 0 [Note] InnoDB: log sequence number 2786076; transaction id 770
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching
IN_PREDICATE_CONVERSION_THRESHOLD
If there is a read error from handler::ha_rnd_next() during a recursive
query, st_select_lex_unit::exec_recursive() will crash as it will try to
get the error code from a structure that was deleted by the callee.
The code was using the construct:
sl->join->exec();
saved_error= sl->join->error;
This does not work as sl->join was freed by the exec() and sl->join would
be set to 0.
Fixed by having JOIN::exec() return the error code.
The included test case simulates the error in ha_rnd_next(), which causes
a crash without the patch.
Failing test: cte_recursive.test
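For reference, a minimal recursive query of the kind driven by
st_select_lex_unit::exec_recursive(); the test case injects a read error
into ha_rnd_next() while such a query runs:
```
WITH RECURSIVE seq(n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 10
)
SELECT * FROM seq;
```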
If one writes to a file, then truncates it and then calls mmap() over
file_size + 7 bytes, the file size changes to file_size + 7. (On Linux,
mmap() does not change the file size.)
This caused _ma_read_rnd_dynamic_record() to believe that there are more
records in the data file than there actually are, and the table will be
marked as corrupted.
Fixed by disabling mmap() in Aria on Windows.
The problem was that mysql_derived_prepare() did not correctly set
'distinct' when creating a temporary derived table.
Fixed by separating the check for distinct for queries with and without
UNION.
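A sketch of the two shapes whose 'distinct' handling is now checked
separately, with hypothetical tables t1 and t2:
```
-- Derived table without UNION:
SELECT * FROM (SELECT DISTINCT a FROM t1) AS dt;
-- Derived table with UNION (a UNION is implicitly distinct):
SELECT * FROM (SELECT a FROM t1 UNION SELECT a FROM t2) AS dt;
```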
Other things:
- Fixed bug in generate_derived_keys_for_table() where we set the wrong
bit for join_tab->keys
- Cleaned up JOIN::drop_unused_derived_keys()
- Changed TABLE::use_index() to keep unique keys and update
share->key_parts
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
- Remove DBUG calls from my_winfile.c where the call and its parameters
are already printed by mysys.
- Remove DBUG from my_get_osfhandle() and my_get_open_flags() to remove
DBUG noise.
- Updated convert-debug-for-diff to take Windows into account.
- Changed some DBUG_RETURN(function()) to tmp=function(); DBUG_RETURN(tmp);
This is needed as Visual C++, for DBUG binaries, prints a trace for
func_a()
{
  DBUG_ENTER("func_a");
  DBUG_RETURN(func_b());
}
as
>func_a
<func_a
>func_b
<func_b
instead of when using gcc:
>func_a
| >func_b
| <func_b
<func_a
- InnoDB fails to reset check_foreigns and check_unique_secondary
in trx_t::free() and trx_t::commit_cleanup(). This led to bulk insert
being used in an internal InnoDB FTS table operation.
Starting with commit 0de3be8cfd (MDEV-30671),
the field TRX_UNDO_NEEDS_PURGE lost its previous meaning.
The following scenario is possible:
(1) InnoDB is killed at a point of time corresponding to the durable
execution of some fseg_free_step_not_header() but not
trx_purge_remove_log_hdr().
(2) After restart, the affected pages are allocated for something else.
(3) Purge will attempt to access the newly reallocated pages when looking
for some old undo log records.
trx_purge_free_segment(): Invoke trx_purge_remove_log_hdr() as the first
thing, to be safe. If the server is killed, some pages will never be
freed. That is the lesser evil. Also, before each mtr.start(), invoke
log_free_check() to prevent ib_logfile0 overrun.
Because downgrades from 11.0 to older MariaDB server are not possible
due to the removal of the InnoDB change buffer, there is no need to
access the field TRX_UNDO_NEEDS_PURGE anymore.
This patch also fixes some bugs detected by valgrind after the
memory-protection patch described at the end of this message:
- Not enough copy_func elements were allocated by Create_tmp_table(), which
causes a memory overwrite in Create_tmp_table::add_fields().
I added an ASSERT() to be able to detect this also without valgrind.
The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set
when calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob
fields in the table. Fixed the code to take this into account.
This cannot cause any issues as this is just a memory access
into other Aria memory and the content of the memory would not be used.
- Aria::last_key_buff was not allocated big enough. This may have caused
issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they
would use the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which
caused problems when copying rec_per_key from engine to sql level.
- Mark asan builds with 'asan' in the version string to detect these in
not_valgrind_build.inc.
This is needed so that main.sp-no-valgrind does not fail with asan.
This was discovered as part of adding a protected memory area between
each area allocated by multi_alloc().
The patch that adds the protection will be pushed in 10.5.
This patch adds fixes that are unique to 10.10.
It is not safe to invoke trx_purge_free_segment() or execute
innodb_undo_log_truncate=ON before all undo log records in
the rollback segment have been processed.
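For context, the setting in question is the global option that asks purge
to shrink the undo tablespaces:
```
SET GLOBAL innodb_undo_log_truncate = ON;
```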
A prominent failure that would occur due to premature freeing of
undo log pages is that trx_undo_get_undo_rec() would crash when
trying to copy an undo log record to fetch the previous version
of a record.
If trx_undo_get_undo_rec() was not invoked in the unlucky time frame,
then the symptom would be that some committed transaction history is
never removed. This would be detected by CHECK TABLE...EXTENDED, which
was implemented in commit ab0190101b.
Such a garbage collection leak should be possible even when using
innodb_undo_log_truncate=OFF, just involving trx_purge_free_segment().
trx_rseg_t::needs_purge: Change the type from Boolean to a transaction
identifier, noting the most recent non-purged transaction, or 0 if
everything has been purged. On transaction start, we initialize this
to 1 more than the transaction start ID. On recovery, the field may be
adjusted to the transaction end ID (TRX_UNDO_TRX_NO) if it is larger.
The field TRX_UNDO_NEEDS_PURGE becomes write-only; it is only read by some
debug assertions that validate the value. The field reflects the old,
inaccurate Boolean field trx_rseg_t::needs_purge.
trx_undo_mem_create_at_db_start(), trx_undo_lists_init(),
trx_rseg_mem_restore(): Remove the parameter max_trx_id.
Instead, store the maximum in trx_rseg_t::needs_purge,
where trx_rseg_array_init() will find it.
trx_purge_free_segment(): Continuously hold a lock on the
trx_rseg_t to prevent any concurrent allocation of undo log.
trx_purge_truncate_rseg_history(): Only invoke trx_purge_free_segment()
if the rollback segment is empty and there are no pending transactions
associated with it.
trx_purge_truncate_history(): Only proceed with innodb_undo_log_truncate=ON
if trx_rseg_t::needs_purge indicates that all history has been purged.
Tested by: Matthias Leich
Currently, cross compilation on x86_64 to arch i686 fails
with the error:
> ctype-uca1400data.h
/bin/sh: 1: uca-dump: not found
This commit makes sure that uca-dump is treated correctly
when cross compiling MariaDB to another architecture.
- rollback_inplace_alter_table() locks the FTS internal tables.
At the same time, an insert tries to fetch the doc id from the config table,
fails to lock the config table, and returns doc id 0.
fts_cmp_set_sync_doc_id(): Retry fetching the doc id if it
encounters a DB_LOCK_WAIT_TIMEOUT error.
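A rough sketch of the affected scenario, with a hypothetical table t1:
```
CREATE TABLE t1 (a INT, b TEXT, FULLTEXT KEY (b)) ENGINE=InnoDB;
-- Connection 1: an inplace ALTER TABLE fails and its rollback locks
--               the FTS internal tables.
-- Connection 2, concurrently:
INSERT INTO t1 VALUES (1, 'hello');
-- The INSERT must fetch a doc id from the FTS config table; it is now
-- retried on DB_LOCK_WAIT_TIMEOUT instead of proceeding with doc id 0.
```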