Though this is an error-message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, though key name generation was done near the
last stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before some of the checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then other inappropriate errors are
issued, like `ER_DUP_FIELDNAME`.
To have the key name available in the error message, name generation has to
be done before the checks, which resulted in more changes.
The final design for key initialization in `mysql_prepare_create_table`
is as follows. The initialization is done in three phases:
1. Calculate the total number of keys to be created, taking ignored keys
into account. Allocate the KEY* buffer.
2. Generate unique names; calculate the total number of key parts.
Make the early checks. Allocate the KEY_PART_INFO* buffer.
3. Initialize the key parts and make the rest of the checks.
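A condensed sketch of this phase structure, with hypothetical helper names
for the per-phase work (in the real code all three phases are inlined in
mysql_prepare_create_table):

bool prepare_keys(THD *thd, Alter_info *alter_info)
{
  /* Phase 1: count the keys that will actually be created (ignored keys
     are skipped) and allocate the KEY buffer in one go. */
  uint key_count= count_keys(alter_info);                /* hypothetical */
  KEY *key_info= (KEY *) thd->calloc(sizeof(KEY) * key_count);

  /* Phase 2: generate unique key names, count key parts, run the early
     checks, and allocate the KEY_PART_INFO buffer. */
  uint parts_count= generate_names_count_parts(key_info, key_count); /* hypothetical */
  KEY_PART_INFO *key_parts=
    (KEY_PART_INFO *) thd->calloc(sizeof(KEY_PART_INFO) * parts_count);

  /* Phase 3: fill in the key parts and run the remaining checks, which can
     now report the already-generated key name in error messages. */
  return init_key_parts_and_check(key_info, key_count, key_parts); /* hypothetical */
}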
`ha_heap::clone` was creating a handler by the share's handlerton, which is
the partition handlerton.
The handler's own handlerton should be used instead.
Here in particular, the HEAP handlerton will be used, and it will create a
ha_heap handler.
The bug was fixed by the MDEV-22599 bugfix, which changed the `Field::cmp`
call to `Field::cmp_prefix` in `TABLE::check_period_overlaps`.
The trick is that `Field_bit::cmp` apparently calls `Field_bit::cmp_key`,
which considers its argument an actual pointer to data. That isn't correct
for `Field_bit`, since it stores data through `bit_ptr`, which points at the
beginning of the record, so using `ptr` directly is incorrect (we access it
through the `ptr_in_record` call).
After Sergei's cleanup this assertion is no longer valid -- we can't
predict whether the handler was used for a lookup, especially in a
multi-update scenario.
`position(old_data)` is called earlier in `ha_check_overlaps`, therefore it
is guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses ha_index_read,
which on failure overwrites table->status. Even though the handlers
are different, they share a common table, so the value gets spoiled anyway.
This is bad, and table->status is badly designed and overloaded with
functionality, but nothing can be done about it, since the code related to
this logic is ancient and it's impossible to extract it with reasonable
effort.
So let's just save and restore the value in ha_update_row before and after
the checks.
Other operations like INSERT and simple UPDATE are not at risk, since they
don't use this table->status approach.
DELETE does not do any unique checks, so it's also safe.
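A minimal sketch of the save/restore, with the placement inside
ha_update_row approximate (the real method does considerably more):

int handler::ha_update_row(const uchar *old_data, const uchar *new_data)
{
  /* ha_check_overlaps runs ha_index_read on a second handler instance that
     shares this TABLE, so a read miss there would clobber table->status. */
  auto saved_status= table->status;
  int error= ha_check_overlaps(old_data, new_data);
  table->status= saved_status;

  if (!error)
    error= update_row(old_data, new_data);
  return error;
}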
For a debug build of the MariaDB server, running the following test case
will hit the assert `thd->lex->sql_command == SQLCOM_UPDATE' in the function
check_fields() on attempting to execute the UPDATE statement:
CREATE TABLE t1 (a INT);
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
The stack trace to the fired assert statement
DBUG_ASSERT(thd->lex->sql_command == SQLCOM_UPDATE)
is listed below:
mysql_execute_command() ->
mysql_multi_update_prepare() ->
Multiupdate_prelocking_strategy::handle_end() ->
check_fields()
It's worth noting that this stack trace looks as if a multi-update
statement were being executed. The fired assert is checked inside the
function check_fields() in case table->has_period() returns true, which in
turn happens when a temporal period is specified in the UPDATE statement.
The condition in the DBUG_ASSERT statement evaluates to false since the
data member thd->lex->sql_command has the value SQLCOM_UPDATE_MULTI. So,
the main question is why control flow goes down the path prescribed for
handling a multi-update statement despite the fact that an ordinary UPDATE
statement is being executed.
The answer lies in the way the SQL grammar rules are written.
When the statement
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
is parsed, an action for the rule 'table_primary_ident' (part of this action
is listed below to simplify the description) is invoked to handle the table
name 't1' specified in the clause 'SELECT 1 FROM t1'.
table_primary_ident:
  table_ident opt_use_partition opt_for_system_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4,
This action calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables used by the statement.
Later, an action for the following grammar rule
update_table_list:
  table_ident opt_use_partition for_portion_of_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4,
is invoked to handle the clause 't1 FOR PORTION OF APPTIME FROM ... TO 2'.
This action also calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables used by the statement.
As a result, the table name 't1' is contained twice in this list.
The presence of duplicate names for the table 't1' in the list of tables
used by the statement leads to the fact that the function unique_table(),
called from the function mysql_update(), returns true, which forces
mysql_update() to return the value 2 as a signal to fall through the case
boundary of the switch statement placed in the function
mysql_execute_command() and start handling the case for the sql_command
SQLCOM_UPDATE_MULTI. The compound statement block for the case
SQLCOM_UPDATE_MULTI invokes the function mysql_multi_update_prepare(),
which executes the statement
thd->lex->sql_command= SQLCOM_UPDATE_MULTI;
and after that calls the method
Multiupdate_prelocking_strategy::handle_end(). Finally, this method
invokes the check_fields() function and the assert is fired.
The above analysis shows that an UPDATE of a table that is simultaneously
specified both as the destination table of the UPDATE statement and as a
table taking part in a subquery is actually treated by the MariaDB server
as a multi-update statement.
Taking into account that multi-update statements for temporal period
tables are not supported yet by MariaDB, the correct way to fix the bug is
to return the error ER_NOT_SUPPORTED_YET for this case.
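A sketch of such a check at the point where the statement is about to be
escalated to a multi-update (error text and exact placement approximate):

if (table_list->has_period())
{
  /* Multi-update does not support application-time periods yet. */
  my_error(ER_NOT_SUPPORTED_YET, MYF(0),
           "updating and querying the same temporal periods table");
  DBUG_RETURN(1);
}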
The DELETE...FOR PORTION OF... statement accepts a non-constant
FROM...TO clause. This contradicts the documentation and
is inconsistent with the behavior of the UPDATE statement.
Thus, we add a validation that checks whether a given deletion
period is specified by constants.
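A sketch of the validation, mirroring what the UPDATE path requires (item
names hypothetical):

/* Both period bounds must be constants known before execution. */
if (!period_start_item->const_item() || !period_end_item->const_item())
{
  my_error(ER_NOT_CONSTANT_EXPRESSION, MYF(0), "FOR PORTION OF");
  return true;
}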
Insert worked incorrectly as well. RocksDB used table->record[0] internally
to store some intermediate results for key conversion, during index
searching among other operations. So table->record[0] is spoiled during
ha_rnd_index_map in ha_check_overlaps, and in turn the broken record data
was inserted.
The fix is to store the RocksDB intermediate result in its own buffer
instead of table->record[0].
The `rocksdb` MTR suite was checked and runs fine.
No need for additional tests: the existing overlaps.test covers the case
completely. However, I am not going to add anything rocksdb-related to the
suite, to keep it free of additional dependencies.
To run the tests with the RocksDB engine, one can add the following to
engines.combinations:
[rocksdb]
plugin-load=$HA_ROCKSDB_SO
default-storage-engine=rocksdb
Even if we're *allowed to* convert DELETE .. FOR PORTION OF
into an update internally, it doesn't mean we'll *be able to*.
We always have to prepare for insert.
Turn the read cache off for periodic update.
As 498a96a4 says:
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted inside
this loop, the cache misses it and fails with an error.
This is applicable to any additional row inserts on UPDATE. In this case
it was initiated by UPDATE FOR PORTION.
Related to MDEV-20441.
- Fixed mysql_prepare_create_table() constraint duplicate checking;
- Refactored period constraint handling in mysql_prepare_alter_table():
* No need to allocate new objects;
* Keep old constraint name but exclude it from dup checking by automatic_name;
- Some minor memory leaks fixed;
- Some conceptual TODOs.
* The overlaps check is implemented at the handler level, per row command.
It creates a separate cursor (actually, another handler instance) and
caches it inside the original handler when ha_update_row or
ha_write_row is issued. The cursor is closed when the handler is unlocked.
* Having the same key in the index means a unique constraint violation
even in the usual sense. So we fetch the left and right neighbours and check
that they have the same key prefix, excluding from the key only the period
part. If it doesn't match, then there's no such neighbour, and the check
passes. Otherwise, we check whether this neighbour intersects with the
considered key (see the sketch after this list).
* The check does not introduce a new error and fails with the ER_DUPP_KEY
error. This might break the REPLACE workflow and should be fixed separately.
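A rough sketch of the neighbour check (helper names hypothetical; the real
code drives the cached second handler directly):

int check_overlaps_for_key(handler *cursor, KEY *key, const uchar *new_row)
{
  /* Look at the rows immediately before and after the new key in index
     order; only they can overlap with it. */
  for (int dir= -1; dir <= 1; dir+= 2)
  {
    const uchar *nb= fetch_neighbour(cursor, key, new_row, dir); /* hypothetical */
    if (!nb)
      continue;                       /* no neighbour on this side */
    /* A different prefix (period columns excluded) means a different
       constraint group, so this side passes. */
    if (!same_prefix_without_period(key, new_row, nb))           /* hypothetical */
      continue;
    /* Same group: an intersecting period is a duplicate key. */
    if (periods_intersect(key, new_row, nb))                     /* hypothetical */
      return HA_ERR_FOUND_DUPP_KEY;
  }
  return 0;
}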
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
Don't do skip_setup_conds() unless all errors are checked.
Fixes the following errors:
ER_PERIOD_NOT_FOUND
ER_VERS_QUERY_IN_PARTITION
ER_VERS_ENGINE_UNSUPPORTED
ER_VERS_NOT_VERSIONED
DROP DATABASE would internally execute DROP TABLE on every contained
table and finally remove the directory. In InnoDB, DROP TABLE is
sometimes executed in the background. The table would be renamed to
a name that starts with #sql. The existence of these files would
prevent DROP DATABASE from succeeding.
CREATE OR REPLACE DATABASE can internally execute DROP DATABASE if
the directory already exists. This could fail due to the InnoDB
background DROP TABLE, possibly due to some tables that were
leftovers from earlier tests.
rpl_write_set is initialized in TABLE::mark_columns_per_binlog_row_image.
Since we just call use_all_columns for the PORTION OF case, there is no
need for the column marking logic here. Instead, initialize
table->rpl_write_set in place.
The main problem was the lack of proper Query_arena handling in
`period_setup_conds`. Since mysql_prepare_update/mysql_prepare_delete
are called during a `PREPARE` statement, period conditions should be
allocated on the statement query arena (see the sketch after the list
below).
Another problem was incorrect statement state handling in
period_setup_conds, which led to unexpected mysql_update termination.
* mysql_update: move period_setup_conds() to mysql_prepare_update to
store conditions in the statement's mem_root
* mtr: add the period suite to the default list, since --ps-protocol is now
fixed
Fixes bugs:
MDEV-18853 Assertion `0' failed in Protocol::end_statement upon DELETE .. FOR PORTION via prepared statement
MDEV-18852 Server crashes in reinit_stmt_before_use upon UPDATE .. FOR PORTION via prepared statement
* don't suppress output unnecessarily
* only run system versioning tests with two innodb combinations
* show results of delete/update (add SELECTs as needed)
* inject portion of time updates into mysql_delete main loop
* triggered case emits delete+insert, no updates
* PORTION OF `SYSTEM_TIME` is forbidden
* `DELETE HISTORY .. FOR PORTION OF ...` is forbidden as well