Atomic CREATE OR REPLACE keeps the old table intact if the
command fails or the server crashes. This is done by creating
a table with a temporary name and filling it with the data
(for CREATE OR REPLACE .. SELECT), then renaming the original table
to another temporary (backup) name and renaming the replacement table
to the original name. The backup table is kept until the last chance
of failure; if that happens, the replacement table is dropped and the
backup is restored. When the command is complete and logged, the
backup table is deleted.
Atomic replace algorithm
Two DDL chains are used for CREATE OR REPLACE:
ddl_log_state_create (C) and ddl_log_state_rm (D).
1. (C) Log CREATE_TABLE_ACTION of TMP table (drops TMP table);
2. Create new table as TMP;
3. Do everything with TMP (like insert data);
finalize_atomic_replace():
4. Link chains: (D) is executed only if (C) is closed;
5. (D) Log DROP_ACTION of BACKUP;
6. (C) Log RENAME_TABLE_ACTION from ORIG to BACKUP (replays BACKUP -> ORIG);
7. Rename ORIG to BACKUP;
8. (C) Log CREATE_TABLE_ACTION of ORIG (drops ORIG);
9. Rename TMP to ORIG;
finalize_ddl() in case of success:
10. Close (C);
11. Replay (D): BACKUP is dropped.
finalize_ddl() in case of error:
10. Close (D);
11. Replay (C):
1) ORIG is dropped (only after finalize_atomic_replace());
2) BACKUP renamed to ORIG (only after finalize_atomic_replace());
3) drop TMP.
If a crash happens, (C) or (D) is replayed in reverse order. (C) is
replayed if the crash happens before it is closed; otherwise (D) is
replayed.
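For illustration, here is a condensed sketch of steps 1-9 above;
except ddl_log_link_chains(), which is quoted from this text, the
helper names are simplified stand-ins for the real ddl_log.cc calls:

  ddl_log_create_table(&create_state, TMP);          /* 1: replay drops TMP    */
  create_and_fill_table(TMP);                        /* 2-3                    */
  /* finalize_atomic_replace(): */
  ddl_log_link_chains(&rm_state, &create_state);     /* 4: (D) waits for (C)   */
  ddl_log_drop_table(&rm_state, BACKUP);             /* 5: replay drops BACKUP */
  ddl_log_rename_table(&create_state, ORIG, BACKUP); /* 6: replay renames back */
  rename_table(ORIG, BACKUP);                        /* 7                      */
  ddl_log_create_table(&create_state, ORIG);         /* 8: replay drops ORIG   */
  rename_table(TMP, ORIG);                           /* 9                      */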
Temporary table for CREATE OR REPLACE
Before dropping the "old" table, CREATE OR REPLACE creates a "tmp"
table. ddl_log_state_create holds the drop of the "tmp" table. When
everything is OK (data is inserted, "tmp" is ready) ddl_log_state_rm
is written to replace "old" with "tmp". Until ddl_log_state_create
is closed, ddl_log_state_rm is not executed.
After the binlogging is done, ddl_log_state_create is closed. At that
point ddl_log_state_rm is executed and "old" is replaced with
"tmp". That is: the final rename is done by the DDL log.
Given that important role of the DDL log in the CREATE OR REPLACE
operation, replay of ddl_log_state_rm must fail at the first error
encountered and print the error message if possible. For example, a
foreign key error is discovered at this phase: InnoDB refuses to drop
the "old" table and returns the corresponding foreign key error code.
Additional notes
- CREATE TABLE without REPLACE is not affected by this commit.
- Engines that have the HTON_EXPENSIVE_RENAME flag set are not
affected by this commit.
- CREATE TABLE .. SELECT XID usage is fixed: there is no longer a need
to log DROP TABLE via DDL_CREATE_TABLE_PHASE_LOG (see comments in
do_postlock()). The XID is now correctly updated so that it disables
DDL_LOG_DROP_TABLE_ACTION. Note that the binary log is flushed at the
final stage when the table is ready, so if we have the XID in the
binary log we don't need to drop the table.
- Three variations of CREATE OR REPLACE are handled:
1. CREATE OR REPLACE TABLE t1 (..);
2. CREATE OR REPLACE TABLE t1 LIKE t2;
3. CREATE OR REPLACE TABLE t1 SELECT ..;
- The test case uses 6 combinations for engines (aria, aria_notrans,
myisam, ib, lock_tables, expensive_rename) and 2 combinations for
binlog types (row, stmt). The combinations help to check differences
between the results. Error failures are tested for the above three
variations.
- expensive_rename tests CREATE OR REPLACE without atomic
replace. The effect should be the same as the old behaviour
before this commit.
- The trigger mechanism is unaffected by this change. This is tested
in create_replace.test.
- LOCK TABLES is affected: lock restoration must be done after the
"rm" chain is replayed.
- Moved ddl_log_complete() from send_eof() to finalize_ddl(). This
checkpoint was not executed before for normal CREATE TABLE but is
executed now.
- CREATE TABLE will now also roll back if writing to the binary
log failed. See rpl_gtid_strict.test
Rename and drop via DDL log
We replay ddl_log_state_rm to drop the old table and rename the
temporary table. In that case we must report the correct error
message if ddl_log_revert() fails (e.g. on an FK error).
If the table was deleted earlier, not via the DDL log, and a crash
happened, the create chain is not closed. The linked drop chain is
not executed and the new table is not installed, but the old table is
already deleted.
ddl_log.cc changes
Now we can place an action before DDL_LOG_DROP_INIT_ACTION and it will
be replayed after DDL_LOG_DROP_TABLE_ACTION.
The report_error parameter for ddl_log_revert() makes it fail at the
first error and print the error message if possible.
ddl_log_execute_action() can now print an error message.
Since we can now handle errors from ddl_log_execute_action() (in the
case of non-recovery execution), unconditionally setting "error= TRUE"
is wrong (it was wrong anyway, because it was overwritten at the end
of the function).
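A hedged sketch of what the report_error path enables at a call site
(the actual ddl_log.h signatures may differ):

  /* Replay of the "rm" chain during normal (non-recovery) execution:
     stop at the first error and let it reach the client. */
  if (ddl_log_revert(thd, &ddl_log_state_rm, /* report_error= */ true))
    return true;   /* e.g. InnoDB refused to drop "old" due to an FK */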
On XID usage
As with all other atomic DDL operations, the XID is used to avoid
inconsistency between master and slave in the case of a crash after
the binary log is written and before ddl_log_state_create is closed.
On recovery, XIDs are taken from the binary log and the corresponding
DDL log events get disabled. That is done by
ddl_log_close_binlogged_events().
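In outline, the recovery interplay looks roughly like this
(pseudo-code; collect_xids_from_binary_log() is a hypothetical
stand-in and the real call shapes in ddl_log.cc differ):

  xids= collect_xids_from_binary_log();   /* binlogged == committed       */
  ddl_log_close_binlogged_events(xids);   /* disable their DDL log chains */
  ddl_log_execute_recovery();             /* replay what is still open    */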
On linking two chains together
Chains are executed in ascending order of the entry_pos of their
execute entries. But the entry_pos assignment order is undefined: it
may assign a bigger number to the first chain and a smaller number to
the second chain. In that case the execution order is reversed: the
second chain is executed first.
To avoid that we link one chain to another. While the base chain
(ddl_log_state_create) is active, the secondary chain
(ddl_log_state_rm) is not executed. That is: of two linked chains,
only one can be executed.
The ddl_log_link_chains() interface was introduced in "MDEV-22166
ddl_log_write_execute_entry() extension".
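A minimal sketch, assuming the argument order (secondary chain, base
chain); see the MDEV-22166 interface for the real shape:

  /* Without linking, recovery orders chains by entry_pos alone: if the
     "rm" chain happened to get the smaller entry_pos, it would replay
     first and drop the backup prematurely. Linking prevents that. */
  ddl_log_link_chains(&ddl_log_state_rm, &ddl_log_state_create);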
More on CREATE OR REPLACE .. SELECT
We use create_and_open_tmp_table(), as in ALTER TABLE, to create the
temporary TABLE object (tmp_table is (NON_)TRANSACTIONAL_TMP_TABLE).
After we have created such a TABLE object we use
create_info->tmp_table() instead of table->s->tmp_table when we need
to check for a parser-requested tmp table.
External locking is required for the temporary table created by
create_and_open_tmp_table(). For example, it disables logging for Aria
transactional tables; without it (when no mysql_lock_tables() is done)
they cannot work correctly.
To make the external lock work, the patch requires the Aria table to
run in non-transactional mode. That is usually done by
ha_enable_transaction(false). But we cannot disable the transaction
completely, because: 1. binlog rollback removes pending row events
(binlog_remove_pending_rows_event()), and the row events are added
during the CREATE .. SELECT data insertion phase; 2. the replication
slave depends heavily on the transaction and cannot work without it.
So we put the temporary Aria table into non-transactional mode with
the "thd->transaction->on" hack. See the comment for the on_save
variable.
Note that Aria tables have an internal_table mode. But we cannot use
it because:
  if (!internal_table)
  {
    mysql_mutex_lock(&THR_LOCK_myisam);
    old_info= test_if_reopen(name_buff);
  }
For internal_table, test_if_reopen() is not called and we get a new
MARIA_SHARE for each file handler. In that case duplicate errors are
missed, because insert and lookup in CREATE .. SELECT are done via two
different handlers (see create_lookup_handler()).
For the temporary table, before dropping the TABLE_SHARE via
drop_temporary_table() we must do ha_reset(). ha_reset() releases the
storage share. Without that, the share is kept and the second CREATE
OR REPLACE .. SELECT fails with:
HA_ERR_TABLE_EXIST (156): MyISAM table '#sql-create-b5377-4-t2' is
in use (most likely by a MERGE table). Try FLUSH TABLES.
HA_EXTRA_PREPARE_FOR_DROP also removes the MYISAM_SHARE, but that is
not needed as ha_reset() does the job.
ha_reset() is usually done by
mark_tmp_table_as_free_for_reuse(), but we don't need that mechanism
for our temporary table.
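In outline (simplified; the real call sites pass more arguments):

  table->file->ha_reset();           /* release the storage share, so   */
  drop_temporary_table(thd, table);  /* the next CREATE OR REPLACE ..
                                        SELECT can reuse the #sql- name */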
Atomic_info in HA_CREATE_INFO
Many functions in CREATE TABLE pass the same parameters. These
parameters are part of the table creation info and should live in
HA_CREATE_INFO (or something similar). Passing parameters via a single
structure makes adding new data and refactoring much easier.
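An illustrative shape only; the member list is assumed, not quoted
from the actual patch:

  /* Carried inside HA_CREATE_INFO so every helper sees the same state. */
  struct Atomic_info
  {
    DDL_LOG_STATE *ddl_log_state_create;  /* chain (C)            */
    DDL_LOG_STATE *ddl_log_state_rm;      /* chain (D)            */
    LEX_CSTRING    backup_name;           /* ORIG is renamed here */
  };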
InnoDB changes (revised by Marko Mäkelä)
row_rename_table_for_mysql(): Specify the treatment of FOREIGN KEY
constraints in a 4-valued enum parameter. In cases where FOREIGN KEY
constraints cannot exist (partitioned tables, or internal tables of
FULLTEXT INDEX), we can use the mode RENAME_IGNORE_FK.
The mode RENAME_REBUILD is for any DDL operation that rebuilds the
table inside InnoDB, such as TRUNCATE and native ALTER TABLE
(or OPTIMIZE TABLE). The mode RENAME_ALTER_COPY is used solely
during non-native ALTER TABLE in ha_innobase::rename_table().
Normal ha_innobase::rename_table() will use the mode RENAME_FK.
CREATE OR REPLACE will rename the old table (if one exists), along
with its FOREIGN KEY constraints, to a temporary name. The replacement
table will initially be created with another temporary name.
Unlike in ALTER TABLE, all FOREIGN KEY constraints must be renamed
and not inherited as part of these operations, using the mode RENAME_FK.
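The four modes side by side (the enum name and declaration shape are
assumed; the values are the ones named above):

  enum rename_fk
  {
    RENAME_IGNORE_FK,  /* FKs cannot exist: partitions, FTS internal
                          tables                                      */
    RENAME_REBUILD,    /* rebuilds inside InnoDB: TRUNCATE, native
                          ALTER TABLE, OPTIMIZE TABLE                 */
    RENAME_ALTER_COPY, /* non-native ALTER TABLE copy in
                          ha_innobase::rename_table()                 */
    RENAME_FK          /* normal rename; also CREATE OR REPLACE,
                          which renames FKs rather than inheriting
                          them                                        */
  };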
dict_get_referenced_table(): Let the callers convert names when needed.
create_table_info_t::create_foreign_keys(): CREATE OR REPLACE creates
the replacement table with a temporary name, so for
self-references foreign->referenced_table will be a table with a
temporary name, and charset conversion must be skipped for it.
Reviewed by:
Michael Widenius <monty@mariadb.org>
1. For INSERT..SELECT statements: don't include the table/view the
data is inserted into in the list of leaf tables.
2. Remove duplicated and dead code related to table_count.
DBUG_ASSERT(0) was added by MDEV-17554 (auto-create history
partitions) as an experimental measure. Testing has shown that this
conditional branch of can_recover_from_failed_open() can be reached
due to an MDL deadlock.
The fix replaces the DBUG_ASSERT with a more specific one for
!OT_ADD_HISTORY_PARTITION.
The test case was synchronized to always reproduce the deadlock and
commented with the MDL locking execution path for the Good (no
deadlock) and Bad (deadlock) cases. The logging was done with the help
of the preceding patch for debug MDL tracing.
The issue was that flush_tables() didn't take an MDL lock on the
cached TABLE_SHARE before calling open_table() to do a HA_EXTRA_FLUSH
call. Most engines seem to have no issue with this, but apparently it
conflicts with InnoDB in 10.6 when using TRUNCATE.
Fixed by taking an MDL lock before trying to open the table in
flush_tables().
There is no test case, as it is hard to repeat the scheduling that
causes the error. I did run the test case in MDEV-28897 to verify
that the bug is fixed.
If UPDATE/DELETE does not change data it is skipped from
replication. We now force replication of such events when they trigger
partition auto-creation.
For ROLLBACK it is as simple as setting the OPTION_KEEP_LOG
flag; trans_cannot_safely_rollback() does the rest.
For UPDATE/DELETE .. LIMIT 0 we make additional binlog_query() calls
at the early return points.
As a safety measure we also convert row format to statement format
when needed. The condition is decided by binlog_need_stmt_format():
basically, if there are already some row events in the cache we don't
need the conversion, since the table open done for a row event will
trigger auto-creation anyway.
Multi-update/delete works via mysql_select(). There are no early
return points there, so binlogging is always checked by
send_eof()/abort_resultset(). But we must still comply with the above
statement-format conversion.
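A hedged sketch of that condition; is_current_stmt_binlog_format_row()
is the server's existing check, has_pending_row_events() is a
hypothetical stand-in for the cache test, and the real
binlog_need_stmt_format() may take other parameters:

  /* Convert to statement format only when no row events are cached:
     applying a cached row event opens the table, which triggers
     auto-creation on the slave anyway. */
  bool binlog_need_stmt_format(THD *thd)
  {
    return thd->is_current_stmt_binlog_format_row() &&
           !has_pending_row_events(thd);   /* hypothetical helper */
  }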
:: Syntax change ::
Keyword AUTO enables history partition auto-creation.
Examples:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
Or with explicit partitions:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
To disable or enable auto-creation one can use ALTER TABLE, adding
or removing AUTO from the partitioning specification:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Disables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;
# Enables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
If the rest of the partitioning specification is identical to that of
CREATE TABLE, no repartitioning will be done (for details see
MDEV-27328).
:: Description ::
Before executing a history-generating DML command (see the list of
commands below), add N history partitions, so that N is sufficient for
the potentially generated history. N > 1 may be required when history
partitions are switched by INTERVAL and current_timestamp is N
intervals past the interval boundary of the last history partition.
If the last history partition reaches or exceeds LIMIT records, a new
history partition is created and selected as the working partition.
According to MDEV-28411, partitions cannot be switched (or created)
while the command is running. Thus LIMIT is not a strict limit, and
the history partition size must be planned as the LIMIT value plus the
average amount of history one DML command can generate.
Auto-creation is implemented by a synchronous
fast_alter_partition_table() call from the thread of the executed DML
command, before the command itself is run (via a fallback-and-retry
mechanism similar to the Discovery feature; see Open_table_context).
The names of newly added partitions are generated like default
partition names, with the extension of MDEV-22155 (which avoids name
clashes by advancing the assignment counter to the next sufficiently
free gap).
These DML commands can trigger auto-creation:
DELETE (including multitable DELETE, excluding DELETE HISTORY)
UPDATE (including multitable UPDATE)
REPLACE (including REPLACE .. SELECT)
INSERT .. ON DUPLICATE KEY UPDATE (including INSERT .. SELECT .. ODKU)
LOAD DATA .. REPLACE
:: Bug fixes ::
MDEV-23642 Locking timeout caused by auto-creation affects original DML
The reasons for this are:
- Do not disrupt the main business process (history is an auxiliary
service);
- The consequences are non-fatal (history is not lost, but goes into
the wrong partition; this is fixed by rebuilding the partitioning);
- The application has more freedom to decide whether to fail in this
case or not: it may read the warning info and find the corresponding
error number.
- While a non-failing command is easy for an application to handle
(and to fail on, if desired), the opposite is hard to handle: there is
no automatic way to fix a failed command and retry it, DBA
intervention is required, and until then the application is
non-functional.
MDEV-23639 Auto-create does not work under LOCK TABLES or inside triggers
Don't do tdc_remove_table() for OT_ADD_HISTORY_PARTITION, because it
is not possible in locked tables mode.
LTM_LOCK_TABLES mode (and LTM_PRELOCKED_UNDER_LOCK_TABLES) works out
of the box, as fast_alter_partition_table() can reopen tables via
locked_tables_list.
In LTM_PRELOCKED we reopen and relock the table manually.
:: More fixes ::
* some_table_marked_for_reopen flag fix
some_table_marked_for_reopen affects only the reopen of
m_locked_tables, i.e. Locked_tables_list::reopen_tables() reopens only
tables from m_locked_tables.
* Unused can_recover_from_failed_open() condition
Can recover_from_failed_open() really be used after
open_and_process_routine()?
:: Reviewed by ::
Sergei Golubchik <serg@mariadb.org>
Moved the LIMIT warning from vers_set_hist_part() to the new call
vers_check_limit() at the table unlock phase. At that point the
read_partitions bitmap has already been pruned by the DML code (see
prune_partitions(), find_used_partitions()), so we have to set the
corresponding bits for the working history partition.
Also, we don't do my_error(ME_WARNING|ME_ERROR_LOG), because at that
point it doesn't update the warning count, so the command would report
0 warnings (though the warning list is still updated). Instead we do
push_warning_printf() and sql_print_warning() separately.
Under LOCK TABLES, external_lock(F_UNLCK) is not executed. There is
start_stmt(), but no corresponding "stop_stmt()". So for that mode we
call vers_check_limit() directly from close_thread_tables().
Test results have been changed according to the new LIMIT and warning
printing algorithm. For convenience, all LIMIT warnings are marked
with "You see warning above ^".
The TODO from MDEV-20345 is fixed: vers_history_generating() now
contains a fine-grained list of DML commands that can generate history
(and the TODO mechanism worked well).
- Describe the lifetime of EXPLAIN data structures in
sql_explain.h:ExplainDataStructureLifetime.
- Make Item_field::set_field() call set_refers_to_temp_table()
when it refers to a temp. table.
- Introduce the QT_DONT_ACCESS_TMP_TABLES flag for Item::print.
It directs Item_field::print not to try to access its
temp table.
- Introduce Explain_query::notify_tables_are_closed()
and call it right before the query closes its tables.
- Make the Explain data structures' print_explain_json() methods
accept a "no_tmp_tbl" parameter, which means: pass
QT_DONT_ACCESS_TMP_TABLES when printing items.
- Make Show_explain_request::call_in_target_thread() not call
set_current_thd(). This wasn't needed, as the code inside
lex->print_explain() uses output->thd anyway; output->thd
refers to the SHOW command's THD object.
EXPLAIN FOR CONNECTION is a MySQL-compatible syntax for SHOW EXPLAIN.
This commit also adds support for FORMAT=JSON to SHOW EXPLAIN,
so the possible options to get JSON-formatted output are:
- SHOW EXPLAIN FORMAT=JSON FOR $con
- EXPLAIN FORMAT=JSON FOR CONNECTION $con
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains
must be allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
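A hedged sketch of that arena discipline using the server's standard
arena helpers; vcol_expr is a hypothetical Item* and the real
Vcol_expr_context does more (it also narrows the name resolution
context and cleans sql_mode, see below):

  Query_arena backup;
  /* Make any items created during re-fixing go to the arena that owns
     vcol_info->expr, not to the per-execution arena. */
  Query_arena *arena= thd->activate_stmt_arena_if_needed(&backup);
  bool error= vcol_expr->fix_fields(thd, &vcol_expr);  /* may create items */
  if (arena)
    thd->restore_active_arena(arena, &backup);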
When a table is reopened from the table cache, vcol_info contains a
stale expression. We refresh the expression via
TABLE::vcol_fix_exprs(), but first we must prepare a proper context
(Vcol_expr_context) which meets some requirements:
1. As noted above, the expr update must be done on expr_arena, as new
items may be created. This was a bug in fix_session_expr_for_read()
that was simply never reproduced because there was no second refix.
Now refix is done in more cases, so it does reproduce. Tests affected:
vcol.binlog
2. Also name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean so that it cannot make the expr update
fail. sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES,
MODE_NO_ZERO_IN_DATE, etc. must not affect the vcol expression update:
if the table was created successfully, any further evaluation must not
fail. Tests affected: main.func_like
Reviewed by: Sergei Golubchik <serg@mariadb.org>
1. Moved the fix_vcol_exprs() call to open_table():
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit
from fix_vcol_exprs() there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
Re-execution of a query containing a subquery in the FROM clause
results in an assert failure when the query is run as part of a stored
routine or as a prepared statement AND derived table merge
optimization is off.
As an example, the following test case
CREATE TABLE t1 (a INT) ;
CREATE PROCEDURE sp() SELECT * FROM (SELECT a FROM t1) tb;
CALL sp();
SET optimizer_switch='derived_merge=off';
CALL sp();
results in an assert failure on the second invocation of the 'sp'
stored routine.
The reason for the assertion failure is that the expression
derived->is_excluded()
returns true where false is expected.
The method is_excluded() returns true for a derived table that has
been merged into a parent select. Such a transformation happens as
part of the Derived Table Merge Optimization that is performed on the
first invocation of a stored routine or a prepared statement
containing a query with a subquery in the FROM clause of the main
SELECT.
When the same routine or prepared statement is run a second time and
Derived Table Merge Optimization is OFF, the MariaDB server tries to
materialize the derived table specified by the subquery, which fails
since this subquery has already been merged into the top-most SELECT.
This transformation is permanent and can't be reverted. That is why
the assert
DBUG_ASSERT(!derived->is_excluded());
fails inside the function TABLE_LIST::set_check_materialized().
Similar behaviour can be observed when a stored routine or prepared
statement containing a SELECT statement with a subquery in the FROM
clause is first run with the optimizer_switch option set to
derived_merge=off and re-run after this option has been switched to
derived_merge=on. In this case a derived table for the subquery is
materialized on the first execution and marked as a merged derived
table on the second execution, which results in an error with a
misleading message:
MariaDB [test]> CALL sp1();
ERROR 1030 (HY000): Got error 1 "Operation not permitted" from storage engine MEMORY
To fix the issue, a derived table that has already been optimized
shouldn't be re-marked for one more round of optimization.
One significant consequence of the suggested change is that the data
member TABLE_LIST::derived_type is no longer updated once the table
optimization has been done. This fact should be taken into account
when a prepared statement is handled: once a table listed in a query
has been optimized during execution of the PREPARE statement, it won't
be touched again when handling the EXECUTE statement.
One side effect caused by this change could be observed for the following
test case:
CREATE TABLE t1 (s1 INT);
CREATE VIEW v1 AS
SELECT s1,s2 FROM (SELECT s1 as s2 FROM t1 WHERE s1 <100) x, t1 WHERE t1.s1=x.s2;
INSERT INTO v1 (s1) VALUES (-300);
PREPARE stmt FROM "INSERT INTO v1 (s1) VALUES (-300)";
EXECUTE stmt;
Execution of the above EXECUTE statement results in the error
ER_COLUMNACCESS_DENIED_ERROR, since table_ref->is_merged_derived() is
false and check_column_grant_in_table_ref() is called for a temporary
table when it shouldn't be. To fix this issue, the function
find_field_in_tables() has been modified so that
check_column_grant_in_table_ref() is not called for a temporary table.
This patch reverts the fixes for the bugs MDEV-24454 and MDEV-25631
from the commit 3690c549c6.
It leaves intact the changes in plugin/feedback/feedback.cc and the
corresponding test files introduced in that commit.
Proper fixes for the bugs MDEV-24454 and MDEV-25631 will follow
immediately.
Diagnostics_area::set_error_status (interrupted ALTER TABLE under LOCK)
Analysis: KILL_QUERY is not ignored when the local memory used
exceeds the maximum session memory. Hence the query proceeds, OK is
sent, and we end up reopening tables that are marked for reopen.
During this, the kill status is eventually checked, and an assertion
failure happens while trying to send the error message, because OK has
already been sent.
Fix: OK has already been sent, so the statement has already executed;
it is too late to report an error, so ignore the kill.
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions(), because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- The DBUG_EVALUATE_IF() macro was replaced by the shorter form
DBUG_IF();
- Fixed a typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info is passed via DDL_LOG_STATE;
- set_part_info_exec_log_entry() was removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release build
In release builds DBUG_SUICIDE() was defined as a statement. That was
wrong, as DBUG_SUICIDE() is used in expression context.
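A minimal illustration of the statement-vs-expression problem (these
are not the real dbug.h definitions):

  #include <stdlib.h>

  #define SUICIDE_STMT()  do { abort(); } while (0)  /* statement only  */
  #define SUICIDE_EXPR()  (abort(), 0)               /* expression form */

  int f(int crash)
  {
    /* return crash ? SUICIDE_STMT() : 0;   -- does not compile */
    return crash ? SUICIDE_EXPR() : 0;      /* compiles and works */
  }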