- Added missing information about the database of the corresponding table for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
Use temporary constraint names for temporary tables. The constraints
are not added to the cache (skipped in dict_table_rename_in_cache()).
The scheme for temporary constraint names is as follows:
for old table: db_name/\xFFconstraint_name
for new table: db_name/\xFF\xFFconstraint_name
normalize_table_name_c_low(): wrong comparison "less than FN_REFLEN -
1". In some places an array of FN_REFLEN bytes includes the trailing 0,
in others an array of FN_REFLEN + 1 bytes includes the trailing 0, but
nowhere must an array of FN_REFLEN - 1 bytes include the trailing 0.
If UPDATE/DELETE does not change data it is skipped from
replication. We now force replication of such events when they trigger
partition auto-creation.
For ROLLBACK it is as simple as setting the OPTION_KEEP_LOG
flag. trans_cannot_safely_rollback() does the rest.
For UPDATE/DELETE .. LIMIT 0 we make additional binlog_query() calls
at the early points of return.
As a safety measure we also convert row format into statement format if
needed. The condition is decided by
binlog_need_stmt_format(). Basically, if there are already some row events
in the cache we don't need that: the table open of a row event will trigger
auto-creation anyway.
Multi-update/delete works via mysql_select(). There are no early points
of return, so binlogging is always checked by
send_eof()/abort_resultset(). But we must comply with the above
measure of converting into statement format.
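For illustration, a hedged sketch (table and data are hypothetical) of a
no-op DML that must still be replicated once AUTO partitioning is involved:
CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Changes no rows, yet may create a due history partition,
# so the statement is forced into the binlog:
UPDATE t1 SET x = x + 1 LIMIT 0;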
:: Syntax change ::
Keyword AUTO enables history partition auto-creation.
Examples:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
Or with explicit partitions:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
To disable or enable auto-creation one can use ALTER TABLE by adding
or removing AUTO from partitioning specification:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Disables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;
# Enables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
If the rest of partitioning specification is identical to CREATE TABLE
no repartitioning will be done (for details see MDEV-27328).
:: Description ::
Before executing a history-generating DML command (see the list of commands
below) add N history partitions, so that N is sufficient for the potentially
generated history. N > 1 may be required when history partitions are switched
by INTERVAL and current_timestamp is N intervals past the interval
boundary of the last history partition.
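A hedged example of the INTERVAL case (names and timing are illustrative):
CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# If the boundary of the last history partition lies 3 hours behind
# current_timestamp, the next history-generating DML adds N = 3
# history partitions before it runs.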
If the last history partition equals or exceeds LIMIT records then a new
history partition is created and selected as the working partition. According
to MDEV-28411 partitions cannot be switched (or created) while the command is
running. Thus LIMIT is not a strict limitation, and the history partition
size must be planned as the LIMIT value plus the average number of history
rows one DML command can generate.
Auto-creation is implemented by synchronous fast_alter_partition_table() call
from the thread of the executed DML command before the command itself is run
(by the fallback and retry mechanism similar to Discovery feature,
see Open_table_context).
The names for newly added partitions are generated like default partition
names, with the extension of MDEV-22155 (which avoids name clashes by
advancing the assignment counter to the next free-enough gap).
These DML commands can trigger auto-creation:
DELETE (including multitable DELETE, excluding DELETE HISTORY)
UPDATE (including multitable UPDATE)
REPLACE (including REPLACE .. SELECT)
INSERT .. ON DUPLICATE KEY UPDATE (including INSERT .. SELECT .. ODKU)
LOAD DATA .. REPLACE
:: Bug fixes ::
MDEV-23642 Locking timeout caused by auto-creation affects original DML
The reasons for this are:
- Do not disrupt the main business process (history is an auxiliary service);
- Consequences are non-fatal (history is not lost, but goes into the wrong
partition; fixed by a partitioning rebuild);
- The application has more freedom to fail in this case or not: it may
read the warning info and find the corresponding error number.
- While a non-failing command is easy for an application to handle and fail,
the opposite is hard to handle: there are no automatic actions to fix a
failed command and retry, DBA intervention is required, and until then the
application is non-functioning.
MDEV-23639 Auto-create does not work under LOCK TABLES or inside triggers
Don't do tdc_remove_table() for OT_ADD_HISTORY_PARTITION because it is
not possible in locked tables mode.
LTM_LOCK_TABLES mode (and LTM_PRELOCKED_UNDER_LOCK_TABLES) works out
of the box as fast_alter_partition_table() can reopen tables via
locked_tables_list.
In LTM_PRELOCKED we reopen and relock table manually.
:: More fixes ::
* some_table_marked_for_reopen flag fix
some_table_marked_for_reopen affects only reopen of
m_locked_tables. I.e. Locked_tables_list::reopen_tables() reopens only
tables from m_locked_tables.
* Unused can_recover_from_failed_open() condition
Can recover_from_failed_open() really be used after
open_and_process_routine()?
:: Reviewed by ::
Sergei Golubchik <serg@mariadb.org>
When we need to add/remove or change LIMIT, INTERVAL or AUTO we have to
recreate the partitioning from scratch (via data copy). Such operations
should instead be done fast. To remove options like LIMIT or INTERVAL one
should write:
alter table t1 partition by system_time;
The command checks whether it is a new or an existing SYSTEM_TIME
partitioning. In the case of a new one it behaves as CREATE would:
adds the default number of partitions (2). If SYSTEM_TIME partitioning
already existed it just changes its options: removes unspecified ones
and adds/changes those specified explicitly. In the case when a partition
list was supplied it behaves as usual: does full repartitioning.
Examples:
create or replace table t1 (x int) with system versioning
partition by system_time limit 100 partitions 4;
# Change LIMIT
alter table t1 partition by system_time limit 33;
# Remove LIMIT
alter table t1 partition by system_time;
# This does full repartitioning
alter table t1 partition by system_time limit 33 partitions 4;
# This does data copy as pruning will require records in correct partitions
alter table t1 partition by system_time interval 1 hour
starts '2000-01-01 00:00:00';
# But this works fast, LIMIT will apply to DML commands
alter table t1 partition by system_time limit 33;
To sum up, ALTER for SYSTEM_TIME partitioning does full repartitioning
when:
- INTERVAL was added or changed;
- partition list or partition number was specified;
Otherwise it does fast alter table.
Cleaned up dead condition in set_up_default_partitions().
Reviewed by:
Oleksandr Byelkin <sanja@mariadb.com>
Nikita Malyavin <nikitamalyavin@gmail.com>
Moved the LIMIT warning from vers_set_hist_part() to the new call
vers_check_limit() at the table unlock phase. At that point the
read_partitions bitmap is already pruned by DML code (see
prune_partitions(), find_used_partitions()), so we have to set the
corresponding bits for the working history partition.
Also we don't do my_error(ME_WARNING|ME_ERROR_LOG), because at that
point it doesn't update the warning count, so the command reports 0 warnings
(but the warning list is still updated). Instead we do
push_warning_printf() and sql_print_warning() separately.
Under LOCK TABLES external_lock(F_UNLCK) is not executed. There is
start_stmt(), but no corresponding "stop_stmt()". So for that mode we
call vers_check_limit() directly from close_thread_tables().
Test results have been changed according to the new LIMIT and warning
printing algorithm. For convenience all LIMIT warnings are marked with
"You see warning above ^".
TODO MDEV-20345 fixed. Now vers_history_generating() contains a
fine-grained list of DML commands that can generate history (and the TODO
mechanism worked well).
Like in MDEV-27217, vers_set_hist_part() for LIMIT depends on all
partitions selected in read_partitions. That bugfix just disabled
partition selection for DELETE with this check:
if (table->pos_in_table_list &&
    table->pos_in_table_list->partition_names)
{
  return HA_ERR_PARTITION_LIST;
}
ALTER TABLE TRUNCATE PARTITION is a different story. First, it doesn't
update pos_in_table_list->partition_names, but
thd->lex->alter_info.partition_names. We cannot depend on that,
since alter_info will be stale for DML. Second, we should not disable
TRUNCATE PARTITION, to be consistent with TRUNCATE TABLE behavior.
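A hedged illustration (the partition name is hypothetical):
# Stays allowed on versioned tables, consistent with TRUNCATE TABLE:
ALTER TABLE t1 TRUNCATE PARTITION p0;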
Now we don't do vers_set_hist_part() for ALTER TABLE as this command
is not DML, so it does not produce history.
An implicitly system-versioned table does not show system fields in SHOW
CREATE. Therefore after recovery from a mysqldump image such a table has
the system fields in last place in the frm image. The original table
meanwhile does not guarantee these system fields in last place, because
adding new fields via ALTER TABLE places them last. Thus the order of
fields may differ between master and slave, so row-based replication
may fail.
To fix this, on ALTER TABLE we now always place system-invisible fields
last in the frm image. If the table was created by an old revision and has
an incorrect order of fields, it can be fixed by any copying operation of
ALTER TABLE, f.ex.:
ALTER TABLE t1 FORCE;
To check the order of fields in frm file one can use hexdump:
hexdump -C t1.frm
Note, the replication fails only when all 3 conditions are met:
1. row-based or mixed mode replication;
2. table has new fields added via ALTER TABLE;
3. table was rebuilt on some, but not all nodes via mysqldump image.
Otherwise it will operate properly even with incorrect order of
fields.
vers_info->hist_part retained a stale value after ROLLBACK. The
algorithm in vers_set_hist_part() continued iteration from that value.
The simplest solution is to process partitions from the start each time
for LIMIT in vers_set_hist_part().
records_are_comparable() requires this condition:
bitmap_is_subset(table->write_set, table->read_set)
On the first iteration vers_update_fields() changes write_set and
read_set. On the second iteration the above condition fails.
Added the missing read bit for ROW_START. Also reorganized
bitmap_set_bit() so it is called only when needed.
Throw ER_NOT_FORM_FILE if the FRM data is wrong (the warning with
ER_VERS_FIELD_WRONG_TYPE is still printed for deeper insight into what
happened).
Keep ER_VERS_FIELD_WRONG_TYPE for creating partitioned table with
trx-versioning. Tested by MDEV-15951 in trx_id.test
A few regression tests invoke heavy flushing of the buffer pool
and may trigger warnings that tablespaces could not be deleted
because of pending writes. Those warnings are to be expected
during the execution of such tests.
The warnings are also frequently seen with Valgrind or MemorySanitizer.
For those, the global suppression in have_innodb.inc does the trick.
MDEV-27025 allows inserting records before the record on which DELETE is
locked; as a result the DELETE misses those records, which causes a serious
ACID violation.
Revert MDEV-27025, MDEV-27550. A test which shows the scenario of the ACID
violation is added.
The code was backported from 10.6 commit bd03c0e516. See that commit
message for details.
Apart from the above commit trx_lock_t::wait_trx was also backported from
MDEV-24738. trx_lock_t::wait_trx is protected with lock_sys.wait_mutex
in 10.6, but that mutex was implemented only in MDEV-24789. As there is no
need to backport MDEV-24789 for MDEV-27025,
trx_lock_t::wait_trx is protected with the same mutexes as
trx_lock_t::wait_lock.
This fix should not break innodb-lock-schedule-algorithm=VATS. This
algorithm uses an Eldest-Transaction-First (ETF) heuristic, which prefers
older transactions over new ones. In this fix we just insert the granted
lock just before the last granted lock of the same transaction, which does
not change the transaction execution order.
The changes in lock_rec_create_low() should not break Galera Cluster,
there is a big "if" branch for WSREP. This branch is necessary to provide
the correct transactions execution order, and should not be changed for
the current bug fix.
When lock is checked for conflict, ignore other locks on the record if
they wait for the requesting transaction.
lock_rec_has_to_wait_in_queue() iterates not all locks for
the page, but only the locks located before the waiting lock in the
queue. So there is an invariant: any lock in the queue can wait only for
a lock located before it in the queue.
In the case when the conflicting lock waits for the transaction of the
requesting lock, we need to place the requesting lock before that waiting
lock in the queue to preserve the invariant. That is why we look for the
first lock waiting for the requesting transaction, and place the new lock
just after the last granted lock of the requesting transaction that
precedes the first lock waiting for the requesting transaction.
Example:
trx1 waiting lock, trx1 granted lock, ..., trx2 lock - waiting for trx1
place new lock here -----------------^
There are also implicit locks which are lazily converted to explicit
ones, and we need to place the newly created explicit lock in the correct
place in the queue. All explicit locks converted from implicit ones are
placed just after the last non-waiting lock of the same transaction and
before the first lock waiting for that transaction.
Code review and cleanup was made by Marko Mäkelä.
First, we do not add VERS_UPDATE_UNVERSIONED_FLAG for system fields, and
that fixes the SHOW CREATE result.
Second, we have to call check_sys_fields() for any CREATE TABLE, and
there the correct type of the system fields is checked.
Third, we update the system_time structure, like the as_row structures,
for ALTER TABLE, and that makes check_sys_fields() happy for ALTER TABLE
when we make system fields hidden.
The update was skipped (need_update was false) because compare_record()
used the HA_PARTIAL_COLUMN_READ branch and it skipped the row_start check,
since has_explicit_value() was false. When we set the bit for row_start in
has_value_set, the row is updated with the new row_start value.
The bug was caused by combination of MDEV-23446 and 3789692d17. The
latter one says:
... But generated columns that are written to the table are always
deterministic and cannot change unless normal non-generated columns
were changed. ...
Since MDEV-23446 generated row_start can change while non-generated
columns are not changed.
Explicit value flag came from HAS_EXPLICIT_DEFAULT which was used to
distinguish default-generated value from user-supplied one.
LIMIT history switching requires a number of history partitions to
be marked for read: from the first to the last non-empty one, plus one
empty. The least we can do is fail with an error message if the needed
partition was not marked for read. As this is the handler interface, we
require a new handler error code to display a user-friendly error message.
Switching by INTERVAL works out-of-the-box with
ER_ROW_DOES_NOT_MATCH_GIVEN_PARTITION_SET error.
MDEV-25803 excluded some cases from key sort upon ALTER TABLE. That
particularly depends on the ALTER_ADD_INDEX flag. Creating a column of the
SERIAL data type missed that flag, though the equivalent operation
alter table t1 add x bigint unsigned not null auto_increment unique;
has the ALTER_ADD_INDEX flag.
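For reference, a hedged example of the affected statement (the table name is
illustrative); SERIAL is shorthand for the BIGINT UNSIGNED NOT NULL
AUTO_INCREMENT UNIQUE column above and must set ALTER_ADD_INDEX the same way:
alter table t1 add x serial;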
The versioning_fields flag indicates that some columns were specified WITH
SYSTEM VERSIONING. In that case we imply WITH SYSTEM VERSIONING for
the whole table and WITHOUT SYSTEM VERSIONING for the other columns.
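A hedged illustration of the implied semantics (names are hypothetical):
CREATE TABLE t1 (
x INT WITH SYSTEM VERSIONING,
y INT
);
# Implies WITH SYSTEM VERSIONING for the whole table
# and WITHOUT SYSTEM VERSIONING for column y.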
Replaced the HA_ADMIN_NOT_IMPLEMENTED error code with HA_ADMIN_OK. Now CHECK
TABLE does not fail due to the unsupported check_misplaced_rows(). The admin
message is not needed either.
The test case is the same as for MDEV-21011 (a7cf0db3d8); the result has
been changed.
Syntax for CONVERT keyword
ALTER TABLE tbl_name
[alter_option [, alter_option] ...] |
[partition_options]
partition_option: {
...
| CONVERT PARTITION partition_name TO TABLE tbl_name
}
Examples:
ALTER TABLE t1 CONVERT PARTITION p2 TO TABLE tp2;
The new ALTER_PARTITION_CONVERT_OUT command for
fast_alter_partition_table() is done in the alter_partition_convert_out()
function, which basically does ha_rename_table().
The partition to extract is marked with the same flag as a dropped
partition: PART_TO_BE_DROPPED. Note that we cannot have multiple
partitioning commands in one ALTER.
For DDL logging the principle is basically the same as for other
fast_alter_partition_table() commands. The only difference is that it
integrates the late Atomic DDL functions and introduces an additional phase,
WFRM_BACKUP_ORIGINAL. That is required for binlog consistency,
because otherwise we could not revert after WFRM_INSTALL_SHADOW
is done: if we crash or fail before the DDL log is complete, the
altered table would already be new but the binlog would miss that ALTER
command. Note that this differs from all other atomic DDL in that
it rolls back until ddl_log_complete() is done, even if everything
was fully done before the crash.
Test cases added to:
parts.alter_table \
parts.partition_debug \
versioning.partition \
atomic.alter_partition
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions() because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- DBUG_EVALUATE_IF() macro replaced by shorter form DBUG_IF();
- Typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info via DDL_LOG_STATE;
- set_part_info_exec_log_entry() removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release builds
On release builds DBUG_SUICIDE() was a statement. That was wrong, as
DBUG_SUICIDE() is used in expression context.
Per review by Serg, include the start row with a new line.
We hope we haven't annoyed people who preferred the old way.
Adding an option for new lines seems like over-engineering in advance.
So if there are complaints, let them be known (JIRA), and we'll add
this under an option.
Test cases updated.
This essentially reverts commit 4e89ec6692
and only disables InnoDB persistent statistics for tests where it is
desirable. By design, InnoDB persistent statistics will not be updated
except by ANALYZE TABLE or by STATS_AUTO_RECALC.
The internal transactions that update persistent InnoDB statistics
in background tasks (with innodb_stats_auto_recalc=ON) may cause
nondeterministic query plans or interfere with some tests that deal
with other InnoDB internals, such as the purge of transaction history.
Based on Aleksey Midenkov's patch
mysqldump changes:
* --as-of option specifies historical point;
* query forging protection for --as-of parameter.
System-versioned tables are detected by querying I_S.TABLES:
* it transfers much less data when the full table definition is not needed;
* it does not give false positives on x TEXT DEFAULT 'WITH SYSTEM VERSIONING'.
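For reference, a hedged sketch of the kind of historical query the --as-of
option corresponds to (the timestamp is illustrative):
SELECT * FROM t1 FOR SYSTEM_TIME AS OF TIMESTAMP'2021-01-01 00:00:00';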
A delete-marked record is in the secondary index and the clustered index
has already purged the corresponding record. We cannot detect whether such
a record is historical, and we should not: the algorithm of
row_ins_check_foreign_constraint() skips such records anyway.
MDEV-16026: Forbid global system_versioning_asof in non-default time zone
* store `system_versioning_asof` in unix time;
* both session and global vars are processed in session timezone;
* setting `default` does not copy global variable anymore. Instead, it sets
system_time to SYSTEM_TIME_UNSPECIFIED, which means that no 'AS OF' time
is applied and `now()` can be assumed
As a regression, we cannot assign values below 1970 (UTC) anymore
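A hedged usage sketch (the timestamp is illustrative):
# Stored as unix time; interpreted in the session time zone:
SET GLOBAL system_versioning_asof = '2021-01-01 00:00:00';
# Back to SYSTEM_TIME_UNSPECIFIED, i.e. no 'AS OF' time is applied:
SET GLOBAL system_versioning_asof = DEFAULT;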
MDEV-16481: set global system_versioning_asof=sf() crashes in specific case
* sys_vars.h: add `MYSQL_TIME` field to `set_var::save_result`
* sys_vars.ic: get rid of calling `var->value->get_date()` from
`Sys_var_vers_asof::update()`
* versioning.sysvars: add test; remove double warning
refactor Sys_var_vers_asof
* inherit from sys_var rather than Sys_var_enum
* remove the junk "DEFAULT" keyword. There is DEFAULT in the SQL grammar for it.
* make all conversions in check() to avoid possible errors
* avoid double var->value evaluation, which could
result in undefined behavior
A historical query with an AS OF point after the last history partition
must include the last history partition, because it can be overflowed
(contain history rows beyond its right endpoint).
Move the left point back to hist_part->id in that case.
The practical maximum value of the parameter innodb_lock_wait_timeout
is 100,000,000. Any value larger than that specifies an infinite timeout.
Therefore, we should make 100,000,000 the maximum value of the parameter.
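A hedged illustration (the value is the documented practical maximum):
SET GLOBAL innodb_lock_wait_timeout = 100000000;  # effectively infinite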
Item_func_history (is_history()) is a bool function that checks if the
row is a history row by checking row_end->is_max(). The argument to
this function must be the row_end system field.
Added the above function in conjunction with the SYSTEM_TIME_BEFORE
versioning condition.
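A hedged example of a statement that exercises the SYSTEM_TIME_BEFORE
versioning condition (the table name and timestamp are illustrative):
DELETE HISTORY FROM t1 BEFORE SYSTEM_TIME '2020-01-01 00:00:00';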
Cause: no table->update_handler is cloned at the moment of
vers_insert_history_row(). update_handler is needed because there
can't be several inited indexes at once in the same handler. The first
index is inited by QUICK_RANGE_SELECT::reset(). Then, when the history row
is inserted, check_duplicate_long_entry_key() is done and it requires
another index.
We implement an idea that was suggested by Michael 'Monty' Widenius
in October 2017: When InnoDB is inserting into an empty table or partition,
we can write a single undo log record TRX_UNDO_EMPTY, which will cause
ROLLBACK to clear the table.
For this to work, the insert into an empty table or partition must be
covered by an exclusive table lock that will be held until the transaction
has been committed or rolled back, or the INSERT operation has been
rolled back (and the table is empty again), in lock_table_x_unlock().
Clustered index records that are covered by the TRX_UNDO_EMPTY record
will carry DB_TRX_ID=0 and DB_ROLL_PTR=1<<55, and thus they cannot
be distinguished from what MDEV-12288 leaves behind after purging the
history of row-logged operations.
Concurrent non-locking reads must be adjusted: If the read view was
created before the INSERT into an empty table, then we must continue
to imagine that the table is empty, and not try to read any records.
If the read view was created after the INSERT was committed, then
all records must be visible normally. To implement this, we introduce
the field dict_table_t::bulk_trx_id.
This special handling only applies to the very first INSERT statement
of a transaction for the empty table or partition. If a subsequent
statement in the transaction is modifying the initially empty table again,
we must enable row-level undo logging, so that we will be able to
roll back to the start of the statement in case of an error (such as
duplicate key).
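A hedged sketch of the statement-level behavior described above (table and
values are illustrative):
BEGIN;
# First INSERT into the empty table: covered by an exclusive table
# lock and a single TRX_UNDO_EMPTY undo record.
INSERT INTO t1 VALUES (1),(2);
# A subsequent statement modifying the same table re-enables
# row-level undo logging, so it can roll back on its own.
INSERT INTO t1 VALUES (3);
ROLLBACK;  # clears the table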
INSERT IGNORE will continue to use row-level logging and locking, because
implementing it would require the ability to roll back the latest row.
Since the undo log that we write only allows us to roll back the entire
statement, we cannot support INSERT IGNORE. We will introduce a
handler::extra() parameter HA_EXTRA_IGNORE_INSERT to indicate to storage
engines that INSERT IGNORE is being executed.
In many test cases, we add an extra record to the table, so that during
the 'interesting' part of the test, row-level locking and logging will
be used.
Replicas will continue to use row-level logging and locking until
MDEV-24622 has been addressed. Likewise, this optimization will be
disabled in Galera cluster until MDEV-24623 enables it.
dict_table_t::bulk_trx_id: The latest active or committed transaction
that initiated an insert into an empty table or partition.
Protected by exclusive table lock and a clustered index leaf page latch.
ins_node_t::bulk_insert: Whether bulk insert was initiated.
trx_t::mod_tables: Use C++11 style accessors (emplace instead of insert).
Unlike earlier, this collection will cover also temporary tables.
trx_mod_table_time_t: Add start_bulk_insert(), end_bulk_insert(),
is_bulk_insert(), was_bulk_insert().
trx_undo_report_row_operation(): Before accessing any undo log pages,
invoke trx->mod_tables.emplace() in order to determine whether undo
logging was disabled, or whether this is the first INSERT and we are
supposed to write a TRX_UNDO_EMPTY record.
row_ins_clust_index_entry_low(): If we are inserting into an empty
clustered index leaf page, set the ins_node_t::bulk_insert flag for
the subsequent trx_undo_report_row_operation() call.
lock_rec_insert_check_and_lock(), lock_prdt_insert_check_and_lock():
Remove the redundant parameter 'flags' that can be checked in the caller.
btr_cur_ins_lock_and_undo(): Simplify the logic. Correctly write
DB_TRX_ID,DB_ROLL_PTR after invoking trx_undo_report_row_operation().
trx_mark_sql_stat_end(), ha_innobase::extra(HA_EXTRA_IGNORE_INSERT),
ha_innobase::external_lock(): Invoke trx_t::end_bulk_insert() so that
the next statement will not be covered by table-level undo logging.
ReadView::changes_visible(trx_id_t) const: New accessor for the case
where the trx_id_t is not read from a potentially corrupted index page
but directly from the memory. In this case, we can skip a sanity check.
row_sel(), row_sel_try_search_shortcut(), row_search_mvcc():
row_sel_try_search_shortcut_for_mysql(),
row_merge_read_clustered_index(): Check dict_table_t::bulk_trx_id.
row_sel_clust_sees(): Replaces lock_clust_rec_cons_read_sees().
lock_sec_rec_cons_read_sees(): Replaced with lower-level code.
btr_root_page_init(): Refactored from btr_create().
dict_index_t::clear(), dict_table_t::clear(): Empty an index or table,
for the ROLLBACK of an INSERT operation.
ROW_T_EMPTY, ROW_OP_EMPTY: Note a concurrent ROLLBACK of an INSERT
into an empty table.
This is joint work with Thirunarayanan Balathandayuthapani,
who created a working prototype.
Thanks to Matthias Leich for extensive testing.
Problem: Assertion `transactional_table || !changed ||
thd->transaction.stmt.modified_non_trans_table' failed due to REPLACE into a
versioned table.
It is not specific to system versioning/partitioning/heap, but this
combination makes it much easier to reproduce.
The trick is to make the first ha_update_row call succeed so that
info->deleted != 0, and then make REPLACE fail for any reason.
In this scenario we overflow the versioned partition, so the next
ha_update_row succeeds, but the corresponding ha_write_row fails to insert
the history record.
Fix: modified_non_trans_table is now set in one place where it was missed.
Add the history row outside of the compare_record() check. For TRX_ID
versioning we have to fail can_compare_record to force the InnoDB update
which adds the history row; and there, in ha_innobase::update_row(), an
additional "row changed" check forces the history row anyway.
The first part of the fix (row0mysql.cc) addresses external columns when
adding a history row on referential action. The full data must be retrieved
before the row is inserted.
The second part of the fix (the rest) avoids a duplicate primary key error
between the history row generated on referential action and the history row
generated by the SQL command. Both the command and the referential action
can happen on the same table, since a foreign key can be a self-reference
(parent and child tables are the same). Moreover, the self-reference can
refer to multiple rows when the key is non-unique. In such a case history is
generated by the referential action that occurred on the first row, while
all rows matched by the key are processed. The second round is when the next
row is processed by the command but history already exists. In that case we
check the TRX_ID of the existing history row, and if it is the same we
assume the above situation and skip adding one more history row or failing
the command.
The first part (row0mysql.cc) fixes the ins_node_set_new_row() usage
workflow, as it is designed to operate on an empty row (see
row_get_prebuilt_insert_row() for example).
The second part (row0ins.cc) fixes a duplicate key error in FTS_DOC_ID_INDEX,
since history rows must not generate entries in that index. We detect
FTS_DOC_ID_INDEX by a number of attributes and skip it if the row is
historical.
Misc fixes:
row_build_index_entry_low() does not accept a non-NULL tuple
for an FTS index (the subject assertion fails); the assertion (index->type !=
DICT_FTS) makes the code easier to understand.
Now that historical_row is copied in row_update_vers_insert() there is
no need to copy the row twice: ROW_COPY_POINTERS is used to build
historical_row initially.
Added dbug_print_rec() debug functions.
- Remove row_start/row_end from keys in fix_create_like();
- Disable manual adding of implicit row_start/row_end to indexes on
CREATE TABLE. INVISIBLE_SYSTEM fields cannot be operated on by the user;
- Fix a memory leak in the allocation of Key_part_spec.
The PARTITION clause in SELECT means the query is non-versioned (see
WITH_PARTITION_STORAGE_ENGINE in vers_setup_conds()).
vers_setup_conds() expands such a query to SYSTEM_TIME_ALL, which is then
added to the VIEW specification. When the VIEW is queried, both clauses
PARTITION and FOR SYSTEM_TIME ALL lead to ER_VERS_QUERY_IN_PARTITION
(same place, WITH_PARTITION_STORAGE_ENGINE).
The fix removes FOR SYSTEM_TIME ALL from the VIEW by accessing the original
SYSTEM_TIME clause: the one specified in the parser. As a side effect
EXPLAIN SELECT displays the SYSTEM_TIME specified in the SELECT, which is
user-friendly.
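A hedged reproduction sketch (names are hypothetical):
CREATE OR REPLACE VIEW v1 AS SELECT * FROM t1 PARTITION (p0);
# Before the fix the VIEW specification gained FOR SYSTEM_TIME ALL,
# so querying it failed with ER_VERS_QUERY_IN_PARTITION:
SELECT * FROM v1;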
For the join to work correctly the versioning condition must be added to
the table's on_expr. Without that, JOIN_CACHE gets expression (1)
trigcond(xtitle.row_end = TIMESTAMP'2038-01-19 06:14:07.999999') and
trigcond(xtitle.elementId = x.`id` and xtitle.pkey = 'title')
instead of (2)
trigcond(xtitle.elementId = x.`id` and xtitle.pkey = 'title')
for join_null_complements(). It is the NULL-row of xtitle that complements
the join, and the above comparisons are of course FALSE, but
trigcond (Item_func_trig_cond) makes them TRUE via its trig_var
property, which is bound to some boolean properties of JOIN_TAB.
Expression (2) evaluates to TRUE because its trig_var is bound to
first_inner_tab->not_null_compl. Expression (1) does not evaluate
correctly because the row_end comparison's trig_var was bound to
first_inner->found earlier. As a result JOIN_CACHE::check_match()
skipped the row for join_null_complements().
When we add the versioning condition to the table's on_expr the optimizer
in make_join_select() distributes conditions differently. tmp_cond
inherits the on_expr value, and in the Good case it is the full expression
xgender.elementId = x.`id` and xgender.pkey = 'gender' and
xgender.row_end = TIMESTAMP'2038-01-19 06:14:07.999999'
while in the Bad case it is only
xgender.elementId = x.`id` and xgender.pkey = 'gender'.
Later, in the Good case, the row_end condition is optimized out and we get
one trigcond in the form of (2).
The code in vers_select_conds_t::init_from_sysvar() assumed that
the value of the system_versioning_asof is DATETIME.
But it also could be DATE after a query like this:
SET system_versioning_asof = DATE(NOW());
Fixing Sys_var_vers_asof::update() to convert the value
of the assignment source to DATETIME if it returned DATE.
Now vers_select_conds_t::init_from_sysvar() always gets a DATETIME value.
Fix stale virtual field values in 4 cases: when the virtual field depends
on row_start/row_end in a timestamp/trx_id versioned table. The row_start
dependency is recalculated in vers_update_fields() (SQL and InnoDB
layers). The row_end dependency is recalculated on history row insert.
make_versioned_helper() appended a new update field unconditionally,
while it should check whether this field already exists in the update
vector.
Misc renames to conform to the versioning prefix. The vers_update_fields()
name conforms with the SQL-layer TABLE::vers_update_fields().
Incorrect syntax for a SYSTEM_TIME partition: work_part_info is detected
as a HASH partition. We can neither add a partition of a different type
nor reorganize SYSTEM_TIME into/from a different partitioning type.
The side fix for versions up to 10.5 corrects the message:
"For LIST partitions each partition must be defined"
Respect system fields in NO_ZERO_DATE mode.
This is the subject for refactoring in MDEV-19597
Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0
UPDATE gets access to history records because versioning conditions
are not set for the VIEW. This leads to an endless loop of inserting
history records when the clustered index is rebuilt and ha_rnd_next()
returns the newly inserted history record.
Restore the original behavior of failing on a write-locked table in a
historical query.
35b679b9 assumed that SELECT_LEX::lock_type influences anything, but
actually at this point the table is already locked. The original bug report
was tempesta-tech/mariadb#102.
System versioning assertion fix. Since DROP SYSTEM VERSIONING does not
change the list of dropped keys, we should handle this special case.
Caused by MDEV-19751. This fix deprecates MDEV-17091.
This is a continuation of the MDEV-22153 bug, where the contiguity of
history partitions is broken. ha_partition::open_read_partitions() cannot
handle a non-contiguous list of default partitions.
Fix: when a default partition is dropped, convert the list of partitions
to non-default.
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
ADD default history partitions generated a wrong partition name,
f.ex. p2 instead of p1. A gap in the sequence of partition names leads to
ha_partition::open_read_partitions() failing on a nonexistent name.
Manually fixing such a broken table requires:
1. create empty table by any name (t_empty) with correct number
of partitions;
2. stop the server;
3. rename data files (.myd, .myi or .ibd) of broken table to t_empty
fixing the partition sequence (#p2 to #p1, #p3 to #p2);
4. start the server;
5. drop the broken table;
6. rename t_empty to correct table name.
Cause
The join tmp table inserts a null row because of OUTER JOIN; that's
expected. Since `multi_update::prepare2()` converted
`Item_temptable_rowid` into `Item_field` (28dbdf3),
`multi_update::send_data()` accesses the join tmp record directly and
treats it as a normal row, ignoring the null status of the ref field. A
NULL ref field is then treated as normal in `multi_update::do_updates()`,
which tries to position the updated table by reference 0.
Note that reference 0 may be a valid reference, and the first row of the
table can be wrongly updated (see multi_update.test).
Fix
Do not add a row into the multi-update tmp table in case of a null ref
field. The join tmp table does not have null_row status at this time (nor
`STATUS_NULL_ROW`) and cannot be skipped by these properties (see the first
comment in multi_update::send_data()). But it has all fields null
(including the ref field).
SQL_SELECT::check_quick() returns error status only when
test_quick_select() returns -1. Fix error handling when lower frames
throw an error but it is ignored by test_quick_select(). Fix the return
status for out-of-memory errors, which obviously must be processed
as errors in upper frames.
Assertion `old_part_id == m_last_part' failed in ha_partition::update_row,
or `part_id == m_last_part' in ha_partition::delete_row, upon UPDATE/DELETE
after dropping versioning.
A PRIMARY KEY change hadn't been treated as partition reorganization in the
case of partitioning by KEY() (without parameters).
* set `*partition_changed= true` in the described case.
* since add/drop system versioning does not affect alter_info->key_list,
it required separate attention
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0. In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed bug when using test_if_cheaper_ordering where we didn't use
keyread if index was changed
- Fixed a bug where we didn't use index only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries.
- Fixed that matching_candidates_in_table() uses same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_costs() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
MDEV-21606 Improve update handler (long unique keys on blobs)
MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
MDEV-21606 Bug fix for previous version of this code
MDEV-21819 2 Assertion `inited == NONE || update_handler != this'
- Move update_handler from TABLE to handler
- Move out initialization of update handler from ha_write_row() to
prepare_for_insert()
- Fixed that INSERT DELAYED works with update handler
- Give an error if using long unique with an autoincrement column
- Added handler function to check if table has long unique hash indexes
- Disable write cache in MyISAM and Aria when using update_handler as
if cache is used, the row will not be inserted until end of statement
and update_handler would not find conflicting rows.
- Removed unused handler argument from
check_duplicate_long_entries_update()
- Syntax cleanups
- Indentation fixes
- Don't use single-character identifiers for arguments
When there are E empty partitions left, auto-create N new empty
partitions for SYSTEM_TIME partitioning rotated by INTERVAL/LIMIT and
marked by the AUTO_INCREMENT keyword. Syntax change: the AUTO_INCREMENT
keyword (or the shorter AUTO may be used instead) after the LIMIT/INTERVAL
clause.
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 100000 AUTO_INCREMENT;
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK AUTO_INCREMENT;
The current revision implements hard-coded values of 1 for E and N, as
well as the auto-creation thresholds MinInterval = 1 hour, MinLimit = 1000.
The name for a newly added partition is first chosen as "pX", where
X is the partition number and "p" is the hard-coded name prefix. If this
name is already occupied, X is incremented until the resulting name is
free to use.
ALTER TABLE ADD PARTITION is now always fast. If some history partition
overflow occurs, a manual ALTER TABLE REBUILD PARTITION is needed.
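A hedged sketch of these operations (partition names are hypothetical):
# Always fast now:
ALTER TABLE t ADD PARTITION (PARTITION p5 HISTORY);
# Needed after a history partition overflow:
ALTER TABLE t REBUILD PARTITION p0;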
Race condition when innodb_lock_wait_timeout (default 50 seconds) is
exceeded for 'send update', but information_schema.innodb_lock_waits
still sees this wait, or it may exit by timeout. May occur on an
overloaded host.
Cause:
* row_start != 0 was treated as if the row exists. Probably, possible row
permutations had not been taken into account.
Solution:
* Checking both row_start and row_end is correct, so the versioned()
function is used.
Cause:
* when autocommit=0 (or the transaction is issued by the user),
`ha_commit_trans` is called twice on ALTER TABLE, causing a duplicated
insert into `transaction_registry` (ER_DUP_ENTRY).
Solution:
* ALTER TABLE makes an implicit commit on the second call. We actually
need to make an insert only when it is a real commit, so the is_real
variable is additionally checked.
Incorrect assertion of EXTRA_CACHE for
HA_EXTRA_PREPARE_FOR_UPDATE. The latter is related to the read cache, but
must operate without it as a noop.
Related to Bug#55458 and MDEV-20441.
Unit prepare prematurely fixed a field which must be fixed via
setup_conds() to correctly update table->covering_keys.
Call vers_setup_conds() directly instead, because everything else is
actually not needed.
Wrong assertion condition. SYSTEM_TIME_ALL indicates that
vers_setup_conds() is done. In case FOR SYSTEM_TIME ALL is specified
in the command, the assertion passes but checks nothing.