When binlog_get_pending_rows_event() was refactored, one usage in
binlog_need_stmt_format() was not taken into account.
Since binlog_get_pending_rows_event() now requires an existing cache_mngr,
that check is now made first.
Delayed_insert has its own THD (initialized at mysql_insert()) and
hence its own LEX. Delayed_insert initializes only a few parameters of
its LEX, and 'duplicates' is not among them. Now we copy this missing
parameter from the parser LEX (as well as sql_command).
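A minimal sketch of a statement where the copied 'duplicates' setting matters
(table and column names are hypothetical; whether this exact statement hits
the original problem is an assumption):

  CREATE TABLE t1 (id INT PRIMARY KEY, cnt INT) ENGINE=MyISAM;
  INSERT DELAYED INTO t1 (id, cnt) VALUES (1, 1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;

Here the delayed-insert thread's LEX must see the same duplicate-handling
mode (DUP_UPDATE) as the parser LEX of the issuing statement.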
When INSERT does auto-create for t1, all of its handler instances are
closed by alter_close_table(). At this point, further down the stack,
maria_close() clears share->state_history. Later, when we unlock the
tables, the Aria transaction manager accesses the old share instance (the
one from before t1 was closed) and tries to reset its state_history.
The problem is that maria_close() did not remove the table from the
transaction's list (used_tables). The fix calls
_ma_remove_table_from_trnman(), which is triggered by
HA_EXTRA_PREPARE_FOR_RENAME.
Two new information_schema views are added:
* PERIOD table -- columns TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME,
PERIOD_NAME, START_COLUMN_NAME, END_COLUMN_NAME.
* KEY_PERIOD_USAGE -- works similarly to KEY_COLUMN_USAGE, but for periods.
Columns: CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME,
TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, PERIOD_NAME.
Two new columns are added to the COLUMNS view:
IS_SYSTEM_TIME_PERIOD_START and IS_SYSTEM_TIME_PERIOD_END, which contain YES/NO.
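A minimal usage sketch of the new metadata, using the view and column names
listed above (the exact view names may differ slightly; schema and table
names are hypothetical):

  SELECT period_name, start_column_name, end_column_name
    FROM information_schema.PERIOD
   WHERE table_schema = 'test' AND table_name = 't1';

  SELECT constraint_name, period_name
    FROM information_schema.KEY_PERIOD_USAGE
   WHERE table_schema = 'test' AND table_name = 't1';

  SELECT column_name, is_system_time_period_start, is_system_time_period_end
    FROM information_schema.COLUMNS
   WHERE table_schema = 'test' AND table_name = 't1';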
ha_innobase::check_if_supported_inplace_alter(): Require ALGORITHM=COPY
when creating a FULLTEXT INDEX on a versioned table.
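A sketch of the affected DDL, assuming a hypothetical versioned table t1;
with this change only the ALGORITHM=COPY form is expected to be accepted:

  CREATE TABLE t1 (id INT PRIMARY KEY, txt TEXT)
    ENGINE=InnoDB WITH SYSTEM VERSIONING;
  -- Accepted: full table copy.
  ALTER TABLE t1 ADD FULLTEXT INDEX ft_txt (txt), ALGORITHM=COPY;
  -- Refused after this change:
  -- ALTER TABLE t1 ADD FULLTEXT INDEX ft_txt (txt), ALGORITHM=INPLACE;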
row_merge_buf_add(), row_merge_read_clustered_index(): Remove the parameter
or local variable history_fts that had been added in an attempt to fix
MDEV-25004.
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
Change how the following conditions are evaluated:
WHERE timestamp_column=datetime_const_expr
(for all comparison operators: =, <=>, <, >, <=, >=, <> and for NULLIF)
Before the change the comparison was always performed as DATETIME.
That was not efficient, as it involved a per-row TIMESTAMP->DATETIME
conversion of timestamp_column. For example, in case of the SYSTEM time zone
it involved a localtime_r() call, which is known to be slow.
After the change the comparison is performed as TIMESTAMP in many cases.
This avoids the per-row conversion, as it works the other way around:
datetime_const_expr is converted to TIMESTAMP once before the execution stage.
Note that datetime_const_expr must lie inside a monotonic, continuous period
of the current time zone, i.e. not near these anomalies:
- DST changes (spring forward, fall back)
- leap seconds
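A sketch of a query shape that benefits (table and column names hypothetical):

  CREATE TABLE logs (ts TIMESTAMP NOT NULL, msg TEXT) ENGINE=InnoDB;
  -- Before: ts was converted to DATETIME for every row.
  -- After: the constant is converted to TIMESTAMP once before execution,
  -- provided it does not fall near a DST change or a leap second.
  SELECT COUNT(*) FROM logs WHERE ts >= '2023-05-01 10:00:00';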
In any test that uses wait_all_purged.inc, ensure that InnoDB tables
will be created without persistent statistics.
This is a follow-up to commit cd04673a17
after a similar failure was observed in the innodb_zip.blob test.
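A sketch of how a test can avoid persistent statistics, either per table or
for the whole test (the exact change applied to the tests is an assumption):

  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB STATS_PERSISTENT=0;
  -- or, for the duration of a test:
  SET GLOBAL innodb_stats_persistent = OFF;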
The attempt to resolve a FOR SYSTEM_TIME expression as a field of a derived
table is made before the derived table is fully prepared, so we hit an
assertion failure because table_list->table is not yet assigned.
Actually Vers_history_point::resolve_unit() is called from within
mysql_derived_prepare() itself (sql_derived.cc:824), while the table is
assigned only later at line 867.
The fix disables unit resolution for the field type in a FOR SYSTEM_TIME
expression, as it makes little sense in any case: making historical
queries based on variable field values would produce a result combining
multiple time points.
fix_fields_if_needed() in resolve_units() was introduced by 46be31982a.
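A sketch of the kind of query involved (names hypothetical; whether this
exact statement reproduces the assertion is an assumption). With the fix,
using a field value as the history point is rejected during resolution
instead of asserting:

  CREATE TABLE t1 (
    a INT,
    rs TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
    re TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (rs, re)
  ) WITH SYSTEM VERSIONING;
  SELECT * FROM (SELECT a, re AS pit FROM t1) AS d,
                t1 FOR SYSTEM_TIME AS OF d.pit;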
Index values for row_start/row_end were wrongly calculated during inplace
ALTER for some layouts of virtual fields (a hypothetical example is
sketched below).
Possible impact:
1. A history row is not detected when building the clustered index for
   inplace ALTER, which may lead to duplicate key errors on
   auto-increment and FTS index add.
2. A foreign key constraint may falsely fail.
3. After inplace ALTER, before server restart, trx-based system
   versioning can cause a server crash or incorrect data being written
   upon UPDATE.
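A purely illustrative sketch of a schema and ALTER of the affected kind
(the exact virtual-field layout that triggers the miscalculation is not
spelled out above, so this is an assumption):

  CREATE TABLE t1 (
    a INT PRIMARY KEY,
    v INT AS (a + 1) VIRTUAL,
    b INT
  ) ENGINE=InnoDB WITH SYSTEM VERSIONING;
  INSERT INTO t1 (a, b) VALUES (1, 1);
  UPDATE t1 SET b = 2;                          -- produces a history row
  ALTER TABLE t1 ADD INDEX (b), ALGORITHM=INPLACE;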
The motivation of introducing the parameter
innodb_purge_rseg_truncate_frequency in
mysql/mysql-server@28bbd66ea5 and
mysql/mysql-server@8fc2120fed
seems to have been to avoid stalls due to freeing undo log pages
or truncating undo log tablespaces. In MariaDB Server,
innodb_undo_log_truncate=ON should be a much lighter operation
than in MySQL, because it will not involve any log checkpoint.
Another source of performance stalls should be
trx_purge_truncate_rseg_history(), which is shrinking the history list
by freeing the undo log pages whose undo records have been purged.
To alleviate that, we will introduce a purge_truncation_task that will
offload this from the purge_coordinator_task. In that way, the next
innodb_purge_batch_size pages may be parsed and purged while the pages
from the previous batch are being freed and the history list being shrunk.
The processing of innodb_undo_log_truncate=ON will still remain the
responsibility of the purge_coordinator_task.
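A sketch of the user-visible settings involved (values are illustrative only):

  -- Shrinking the history list is offloaded to the new task, while
  -- undo tablespace truncation remains driven by the coordinator:
  SET GLOBAL innodb_undo_log_truncate = ON;
  -- The size of each purge batch is still controlled by:
  SET GLOBAL innodb_purge_batch_size = 300;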
purge_coordinator_state::count: Remove. We will ignore
innodb_purge_rseg_truncate_frequency, and act as if it had been
set to 1 (the maximum shrinking frequency).
purge_coordinator_state::do_purge(): Invoke an asynchronous task
purge_truncation_callback() to free the undo log pages.
purge_sys_t::iterator::free_history(): Free those undo log pages
that have been processed. This used to be a part of
trx_purge_truncate_history().
purge_sys_t::clone_end_view(): Take a new value of purge_sys.head
as a parameter, so that it will be updated while holding exclusive
purge_sys.latch. This is needed for race-free access to the field
in purge_truncation_callback().
Reviewed by: Vladislav Lesin
Problem:
=========
During commit, the server calls prepare_commit_versioned() to
determine whether the transaction modified system-versioned data.
Due to the binlog_do_db option, the binlog is disabled for the
statement. But prepare_commit_versioned() was being called only
when the binlog was enabled for the statement.
Fix:
===
prepare_commit_versioned() should be called irrespective of the
binlog state. So if the server has performed any read-write
operation, we should call prepare_commit_versioned().
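A reproduction sketch, assuming the server is started with --log-bin and
--binlog-do-db=db1 (database and table names are hypothetical):

  CREATE DATABASE db2;
  USE db2;
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT)
    ENGINE=InnoDB WITH SYSTEM VERSIONING;
  INSERT INTO t1 VALUES (1, 1);
  -- The statement is filtered out of the binlog, but the versioning
  -- check must still run for this read-write operation:
  UPDATE t1 SET b = 2;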