This commit replaces the call of the function setup_tables() with
a call of the function setup_tables_and_check_access() in the method
Multiupdate_prelocking_strategy::handle_end().
There is no known bug that would require this change. However, the change
aligns this piece of code with the code that existed before the patch for
MDEV-24823.
Before this patch mergeable derived tables / views used in a multi-table
update / delete were merged before the preparation stage.
When the merge of a derived table / view is performed, the ON expression
attached to it is fixed and ANDed with the WHERE condition of the select S
containing this derived table / view. This happens after the specification of
the derived table / view has been merged into S. If the ON expression refers
to a non-existing field, an error is reported and some other mergeable derived
tables / views remain unmerged. That is not a problem if the multi-table
update / delete statement is standalone. Yet if it is used in a stored
procedure, the select with incompletely merged derived tables / views may
cause a problem on the second call of the procedure. This does not happen
for select queries using derived tables / views, because in this case their
specifications are merged after the preparation stage, at which point all ON
expressions are fixed.
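A hedged repro sketch of the scenario above (the table, view, and procedure
names are illustrative, not taken from the original report):

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE VIEW v1 AS SELECT a FROM t1;   -- mergeable view
CREATE VIEW v2 AS SELECT b FROM t2;   -- another mergeable view
CREATE PROCEDURE p1()
  UPDATE v2 JOIN v1 ON v1.no_such_field = v2.b SET v2.b = 1;
CALL p1();  -- fixing the ON expression fails with an unknown-column error,
            -- leaving other mergeable views unmerged
CALL p1();  -- before this patch the second call ran on that inconsistent state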
This patch makes sure that merging of the derived tables / views used in a
multi-table update / delete statement is performed after the preparation
stage.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
If a multi-update query was killed because of a timeout or a KILL statement,
there was a chance the query returned the error "An error occurred in
multi-table update" instead of the right error, like
"Query execution was interrupted".
Cause: no table->update_handler was cloned at the moment of
vers_insert_history_row(). update_handler is needed because there
can't be several inited indexes at once in the same handler. The first
index is inited by QUICK_RANGE_SELECT::reset(). Then, when the history
row is inserted, check_duplicate_long_entry_key() is done and it requires
another index.
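A sketch of the kind of statement involved (illustrative names; assumes a
system-versioned table with a long unique key that is updated through a
range scan):

CREATE TABLE t1 (
  id INT,
  b TEXT,
  UNIQUE KEY (b),   -- long unique: its duplicate check needs an inited index
  KEY (id)
) ENGINE=InnoDB WITH SYSTEM VERSIONING;
INSERT INTO t1 (id, b) VALUES (1, 'x'), (2, 'y');
-- The range scan over KEY(id) inits one index; inserting the history row
-- then runs check_duplicate_long_entry_key(), which needs another one,
-- hence the cloned update_handler.
UPDATE t1 SET b = CONCAT(b, '!') WHERE id BETWEEN 1 AND 2;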
through 2nd execution of SP
This bug caused a server crash on the second call of any stored procedure
that contained an UPDATE statement over a multi-table view reporting an
error message at the prepare stage.
On the first call of the stored procedure, after an error had been reported
at the preparation stage, the UPDATE statement finished without calling
the function SELECT_LEX::save_prep_leaf_tables() for the SELECT used as
the definition of the view. This left the SELECT_LEX structure used by
the UPDATE statement in an inconsistent state for the second call of the
stored procedure.
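A hedged repro sketch of that scenario (illustrative names, not the
original test case):

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE VIEW v1 AS SELECT a, b FROM t1, t2;  -- multi-table view
CREATE PROCEDURE p1()
  UPDATE v1 SET no_such_column = 1;         -- fails at the prepare stage
CALL p1();  -- reports the error
CALL p1();  -- crashed before this fix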
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Add the history row outside of the compare_record() check. For TRX_ID
versioning we have to fail can_compare_record to force the InnoDB update
which adds the history row; and there, in ha_innobase::update_row(), an
additional "row changed" check forces the history row anyway.
in queries like
create view v1 as select 2 like 1 escape (3 in (select 0 union select 1));
select 2 union select * from v1;
Item_func_like::escape was left uninitialized, because
Item_in_optimizer is const_during_execution()
but not actually const_item() during execution.
It is not const_item(), because const subquery evaluation was disabled
for derived tables. In practice it only needs to be disabled for a
multi-update, which runs fix_fields() before all tables are locked.
For a debug build of the MariaDB server, running the following test case
hits the assert `thd->lex->sql_command == SQLCOM_UPDATE' in the function
check_fields() on an attempt to execute the UPDATE statement.
CREATE TABLE t1 (a INT);
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
The stack trace leading to the fired assert statement
DBUG_ASSERT(thd->lex->sql_command == SQLCOM_UPDATE)
is listed below:
mysql_execute_command() ->
mysql_multi_update_prepare() -->
Multiupdate_prelocking_strategy::handle_end() -->
check_fields()
It is worth noting that this stack trace looks as if a multi-update
statement were being executed. The fired assert is checked inside the
function check_fields() in case table->has_period() returns true, which
in turn happens when a temporal period is specified in the UPDATE
statement. The condition specified in the DBUG_ASSERT statement evaluates
to false since the data member thd->lex->sql_command has the value
SQLCOM_UPDATE_MULTI. So the main question is why program control flow
goes down the path prescribed for handling a multi-update statement
despite the fact that an ordinary UPDATE statement is being executed.
The answer lies in the way the SQL grammar rules are written.
When the statement
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
is being parsed, an action for the rule 'table_primary_ident' (part of this
action is listed below to simplify the description) is invoked to handle the
table name 't1' specified in the clause 'SELECT 1 FROM t1'.
table_primary_ident:
table_ident opt_use_partition opt_for_system_time_clause
opt_table_alias_clause opt_key_definition
{
SELECT_LEX *sel= Select;
sel->table_join_options= 0;
if (!($$= Select->add_table_to_list(thd, $1, $4,
This action calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
Later, an action for the following grammar rule
update_table_list:
table_ident opt_use_partition for_portion_of_time_clause
opt_table_alias_clause opt_key_definition
{
SELECT_LEX *sel= Select;
sel->table_join_options= 0;
if (!($$= Select->add_table_to_list(thd, $1, $4,
is invoked to handle the clause 't1 FOR PORTION OF APPTIME FROM ... TO 2'.
This action also calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
As a result, the table name 't1' is contained twice in this list.
The presence of duplicate names for the table 't1' in the list of tables
used by the statement leads to the function unique_table(), called from
the function mysql_update(), returning true, which forces the function
mysql_update() to return the value 2 as a signal to fall through the case
boundary of the switch statement placed in the function
mysql_execute_command() and start handling the case for the sql_command
SQLCOM_UPDATE_MULTI. The compound statement block for the case
SQLCOM_UPDATE_MULTI invokes the function mysql_multi_update_prepare(),
which executes the statement
thd->lex->sql_command= SQLCOM_UPDATE_MULTI;
and after that calls the method
Multiupdate_prelocking_strategy::handle_end(). Finally, this method
invokes the check_fields() function and the assert is fired.
The above analysis shows that an update of a table that is simultaneously
specified both as the destination table of the UPDATE statement and as a
table taking part in a subquery is actually treated by the MariaDB server
as a multi-update statement. Taking into account that a multi-update
statement on a table with a temporal period is not supported yet by MariaDB,
the correct way to fix the bug is to return the error ER_NOT_SUPPORTED_YET
for this case.
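With the fix, the repro above is expected to fail cleanly instead of
asserting; a sketch of the intended behavior (only the error code is taken
from the description above, the exact client-visible wording is an
assumption):

UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
-- expected: ER_NOT_SUPPORTED_YET instead of the debug assertion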
is_bulk_op())' fails on UPDATE on a partitioned table with subquery
(MySQL:71630)
Analysis: the error is not checked, so the correct error state is not
returned.
Fix: check for the error and return the error state.
Fixed by:
- Make all quick_* variables allocated according to the real number of keys
  instead of MAX_KEY
- Store all the quick_* items in a separately allocated structure (OPT_RANGE)
- Ensure we don't access any quick_* variable without first checking
  opt_range_keys.is_set(). Thanks to this, we don't need any
  pre-initialization of quick_* variables anymore.
Some renames were done to use the new structure:
table->quick_keys -> table->opt_range_keys
table->quick_rows[X] -> table->opt_range[X].rows
table->quick_key_parts[X] -> table->opt_range[X].key_parts
table->quick_costs[X] -> table->opt_range[X].cost
table->quick_index_only_costs[X] -> table->opt_range[X].index_only_cost
table->quick_n_ranges[X] -> table->opt_range[X].ranges
table->quick_condition_rows -> table->opt_range_condition_rows
This patch should both decrease the memory needed for TABLE objects
(3528 -> 984 + keyinfo) and increase performance, thanks to fewer
initializations per query and more localized memory, thanks to the
opt_range structure.
MDEV-22531 Remove maria::implicit_commit()
MDEV-22607 Assertion `ha_info->ht() != binlog_hton' failed in
MYSQL_BIN_LOG::unlog_xa_prepare
From the handler point of view, Aria now looks like a transactional
engine. One effect of this is that we don't need to call
maria::implicit_commit() anymore.
This change also forces the server to call trans_commit_stmt() after doing
any reads or writes to system tables. This work will also make it easier
to later allow users to have system tables in engines other than Aria.
To handle the case that Aria doesn't support rollback, a new
handlerton flag, HTON_NO_ROLLBACK, was added for engines that have
transactions without rollback (for the moment only the binlog and Aria).
Other things:
- Moved the freeing of MARIA_SHARE to a separate function as the MARIA_SHARE
  can still be part of a transaction even if the table has been closed.
- Changed the Aria checkpoint to use the new MARIA_SHARE free function. This
  fixes a possible memory leak when using S3 tables.
- Changed testing of binlog_hton to instead test for HTON_NO_ROLLBACK.
- Removed the check of has_transaction_manager() in handler.cc as we can
  assume that, since the transaction was started by the engine, it does
  support transactions.
- Added a new class 'start_new_trans' that can be used to start independent
  sub-transactions, for example while reading mysql.proc, or using help or
  status tables.
- open_system_tables...() and open_proc_table_for_read() no longer take an
  Open_tables_backup list. This is now handled by 'start_new_trans'.
- Split thd::has_transactions() into thd::has_transactions() and
  thd::has_transactions_and_rollback().
- Added handlerton code to free cached transaction objects.
  Needed by InnoDB.
All changes (except one) are of the type
thd->transaction. -> thd->transaction->
thd->transaction points by default to 'thd->default_transaction'.
This allows us to 'easily' have multiple active transactions for a
THD object, for example when reading data from the mysql.proc table.
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
`inited == NONE` at initialization time does not always mean that it will
be `NONE` later, at execution time. Use more complex, caller-specific logic
to decide whether to create a cloned lookup handler.
Besides LOAD (as in the original bug report), make sure that all
prepare_for_insert() invocations are covered by tests. Add tests for
CREATE ... SELECT, multi-UPDATE, and multi-DELETE (sketched below).
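A hedged sketch of the kind of coverage meant here (illustrative names;
assumes a table with a long unique key, so prepare_for_insert() has a
lookup handler to clone):

CREATE TABLE t1 (a INT, b TEXT, UNIQUE KEY (b));
INSERT INTO t1 VALUES (1, 'x');
CREATE TABLE t2 (a INT, b TEXT, UNIQUE KEY (b)) SELECT * FROM t1;  -- CREATE ... SELECT
UPDATE t1, t2 SET t1.b = 'y' WHERE t1.a = t2.a;                    -- multi-UPDATE
DELETE t1 FROM t1, t2 WHERE t1.a = t2.a;                           -- multi-DELETE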
Don't enable write cache with long uniques.
Turn read cache off for periodic update.
As 498a96a4 says:
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for the
sequential read in the update loop. When a history row is inserted inside
this loop, the cache misses it and fails with an error.
This is applicable to any additional row inserts on UPDATE. In this case
it was initiated by UPDATE ... FOR PORTION.
Related to MDEV-20441.
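A sketch of the failing scenario (illustrative names; assumes an
application-time period on an Aria table with fixed row format):

CREATE TABLE t1 (
  id INT,
  s DATE,
  e DATE,
  PERIOD FOR app(s, e)
) ENGINE=Aria ROW_FORMAT=FIXED;
INSERT INTO t1 VALUES (1, '2020-01-01', '2020-12-31');
-- The portion update inserts the split-off row parts while the sequential
-- READ_CACHE over the table is still active.
UPDATE t1 FOR PORTION OF app FROM '2020-03-01' TO '2020-06-01' SET id = 2;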