When a condition containing NULLIF is pushed into a materialized
view/derived table, the clone of the Item_func_nullif item must
be processed in a special way to guarantee that its first argument
points to the same item as its third argument.
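A toy sketch of the invariant (simplified stand-in types, not the
server's Item classes): the function is stored with three child
pointers where args[0] may alias args[2], and a correct clone must
re-establish that aliasing instead of deep-copying the shared child
twice.

    #include <cstddef>

    // Toy stand-in for the expression node (not the real Item classes).
    struct Node {
      Node *args[3];

      Node() { args[0] = args[1] = args[2] = NULL; }

      Node *clone() const {
        Node *c = new Node;
        // Clone the potentially shared child first, then alias it again.
        c->args[2] = args[2] ? args[2]->clone() : NULL;
        c->args[1] = args[1] ? args[1]->clone() : NULL;
        c->args[0] = (args[0] == args[2])
                       ? c->args[2]   // preserve the aliasing in the clone
                       : (args[0] ? args[0]->clone() : NULL);
        return c;
      }
    };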
... with wsrep_replicate_myisam enabled
Internal updates to system statistical tables could wrongly
trigger an additional total-order replication if
wsrep_replicate_myisam is enabled.
Fixed by adding a check to skip total-order replication for
stat tables.
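The shape of such a check, as a hedged sketch (the helper name and
the table list are illustrative, not the verbatim patch): the
statistics tables live in the mysql schema and are updated
internally, so they should not go through total-order isolation.

    #include <cstring>

    // Assumed helper (sketch): true for the mysql.*_stats tables that
    // are updated internally rather than by user transactions.
    static bool is_stat_table(const char *db, const char *table) {
      static const char *const stat_tables[] =
          {"table_stats", "column_stats", "index_stats"};
      if (std::strcmp(db, "mysql") != 0)
        return false;
      for (unsigned i = 0; i < 3; i++)
        if (std::strcmp(table, stat_tables[i]) == 0)
          return true;   // skip total-order replication for this table
      return false;
    }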
Essentially revert MDEV-6759, which addressed a double free of memory
by removing the freeing altogether, thereby introducing memory leaks.
No double free was observed when running the test suite with -DWITH_ASAN.
Replace some mem_heap_free(foreign->heap) with dict_foreign_free(foreign)
so that the calls can be located and instrumented more easily when needed.
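To illustrate the design choice, a simplified model of the wrapper
(toy types, not the InnoDB headers): a named function gives one
choke point at which to breakpoint or instrument all foreign-key
heap releases.

    // Toy stand-ins; in InnoDB, mem_heap_t is the arena allocator type.
    struct mem_heap_t {};
    static void mem_heap_free(mem_heap_t *heap) { delete heap; }

    struct dict_foreign_t { mem_heap_t *heap; };

    // Every release funnels through here, so a single breakpoint or
    // trace statement covers all of them.
    static inline void dict_foreign_free(dict_foreign_t *foreign) {
      mem_heap_free(foreign->heap);
    }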
MySQL 5.7 supports only one shared temporary tablespace.
MariaDB 10.2 does not support any other shared InnoDB tablespaces than
the two predefined tablespaces: the persistent InnoDB system tablespace
(default file name ibdata1) and the temporary tablespace
(default file name ibtmp1).
InnoDB is unnecessarily allocating a tablespace ID for the predefined
temporary tablespace on every startup, and it is in several places
testing whether a tablespace ID matches this dynamically generated ID.
We should use a compile-time constant to reduce code size and to avoid
unnecessary updates to the DICT_HDR page at every startup.
Using a hard-coded tablespace ID should make it easier to remove the
TEMPORARY flag from FSP_SPACE_FLAGS in MDEV-11202.
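A sketch of what this amounts to (the constant's name and value here
are assumptions for illustration): fix the ID at compile time and
compare against it directly, instead of allocating a fresh ID and
rewriting the DICT_HDR page on every startup.

    #include <cstdint>

    // Assumed reserved value near the top of the ID range (illustrative).
    static const uint32_t SRV_TMP_SPACE_ID = 0xFFFFFFFEU;

    // Tests like this become a branch on a constant instead of a load
    // of a dynamically allocated ID.
    inline bool is_predefined_tmp_space(uint32_t space_id) {
      return space_id == SRV_TMP_SPACE_ID;
    }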
This test and result had been modified in WL#6204, but I missed it,
because the test had been renamed from
innodb_fts.innodb_fts_plugin to innodb_fts.plugin.
check_contains() fixed. When an item of an array is a complex
structure, it can be left half-read after the recursive
check_contains() call returns. So we just manually skip to its end.
Thanks to Zhangyuan from Alibaba for pointing out this bug.
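A generic illustration of "manually skipping to its end" (simplified:
it ignores brackets inside string literals, which a real JSON reader
must handle): move forward, balancing brackets, until the whole
nested value has been consumed.

    // Skip past one complete nested JSON object/array starting at *p.
    // Returns a pointer one past its closing bracket, or end on error.
    const char *skip_nested(const char *p, const char *end) {
      int depth = 0;
      for (; p < end; p++) {
        if (*p == '{' || *p == '[') {
          depth++;
        } else if (*p == '}' || *p == ']') {
          if (--depth == 0)
            return p + 1;   // fully consumed the half-read value
        }
      }
      return end;           // unterminated: let the caller report it
    }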
btr_page_empty(): When a clustered index root page is emptied,
preserve PAGE_ROOT_AUTO_INC. This would occur during a page split.
page_create_empty(): Preserve PAGE_ROOT_AUTO_INC when a clustered
index root page becomes empty. Use a faster method for writing
the field.
page_zip_copy_recs(): Reset PAGE_MAX_TRX_ID when copying
clustered index pages. We must clear the field when the root page
was a leaf page and it is being split, so that PAGE_MAX_TRX_ID
will continue to be 0 in clustered index non-root pages.
page_create_zip(): Add debug assertions for validating
PAGE_MAX_TRX_ID and PAGE_ROOT_AUTO_INC.
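A byte-level sketch of the preserved field, using the offsets from
InnoDB's page layout (the index page header starts at byte 38;
PAGE_MAX_TRX_ID, aliased as PAGE_ROOT_AUTO_INC, is the 8-byte
big-endian field at offset 18 within it). The pattern the patch
follows is read-before-empty, write-back-after.

    #include <cstdint>
    #include <cstddef>

    static const size_t PAGE_HEADER = 38;      // start of index page header
    static const size_t PAGE_MAX_TRX_ID = 18;  // alias: PAGE_ROOT_AUTO_INC

    uint64_t page_get_autoinc(const unsigned char *page) {
      uint64_t v = 0;
      for (int i = 0; i < 8; i++)
        v = (v << 8) | page[PAGE_HEADER + PAGE_MAX_TRX_ID + i];
      return v;
    }

    void page_set_autoinc(unsigned char *page, uint64_t v) {
      for (int i = 7; i >= 0; i--) {
        page[PAGE_HEADER + PAGE_MAX_TRX_ID + i] = (unsigned char)(v & 0xff);
        v >>= 8;
      }
    }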
Remove unnecessary restarts by testing multiple tables across a restart.
This change almost halves the execution time.
Some further restarts could be removed with additional effort.
This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with
the notable difference that the file format changes are limited to
repurposing a previously unused data field in B-tree pages.
For persistent InnoDB tables, write the last used AUTO_INCREMENT
value to the root page of the clustered index, in the previously
unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC.
Unlike some other previously unused InnoDB data fields, this one was
actually always zero-initialized, at least since MySQL 3.23.49.
The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the
root page. The SX latch will allow concurrent read access to the root
page. (The field PAGE_ROOT_AUTO_INC will only be read on the
first-time call to ha_innobase::open() from the SQL layer. The
PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so
read/write races are not possible.)
During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level
function btr_cur_search_to_nth_level(), adding no extra page
access. [Adaptive hash index lookup will be disabled during INSERT.]
If some rare UPDATE modifies an AUTO_INCREMENT column, the
PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in
ha_innobase::update_row().
When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC
field.
During ALTER TABLE, the initial AUTO_INCREMENT value will be copied
from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will
update PAGE_ROOT_AUTO_INC in real time.
innodb_col_no(): Determine the dict_table_t::cols[] element index
corresponding to a Field of a non-virtual column.
(The MySQL 5.7 implementation of virtual columns breaks the 1:1
relationship between Field::field_index and dict_table_t::cols[].
Virtual columns are omitted from dict_table_t::cols[]. Therefore,
we must translate the field_index of AUTO_INCREMENT columns into
an index of dict_table_t::cols[].)
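A toy model of the translation (simplified stand-ins for Field and
TABLE, not the real function): only stored columns advance the
dict_table_t::cols[] index, so we count the stored columns that
precede the field.

    #include <vector>

    struct FieldInfo { bool stored_in_db; };  // toy stand-in for Field

    // Map a SQL-layer field index to the storage-engine column index
    // by skipping virtual columns, which have no cols[] entry.
    unsigned innodb_col_no(const std::vector<FieldInfo> &fields,
                           unsigned field_index) {
      unsigned col_no = 0;
      for (unsigned i = 0; i < field_index; i++)
        if (fields[i].stored_in_db)
          col_no++;
      return col_no;
    }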
Upgrade from old data files:
By default, the AUTO_INCREMENT sequence in old data files would appear
to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain
the value 0 in each clustered index page. In new data files,
PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain
any AUTO_INCREMENT column.
For backward compatibility, we use the old method of
SELECT MAX(auto_increment_column) for initializing the sequence.
btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format
data file.
btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc()
that will resort to reading MAX(auto_increment_column) for data files
that did not use AUTO_INCREMENT yet. It was manually tested that during
the execution of innodb.autoinc_persist the compatibility logic is
not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty
clustered index root pages).
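The decision logic as a hedged sketch (parameter names are
illustrative; the real function reads the root page and, for the
fallback, scans the index): a zero field in a nonempty table can
only mean an old-format file, which triggers the MAX() fallback.

    #include <cstdint>

    uint64_t autoinc_with_fallback(uint64_t page_root_auto_inc,
                                   bool table_is_empty,
                                   uint64_t (*select_max_autoinc)()) {
      if (page_root_auto_inc != 0)
        return page_root_auto_inc;  // new-format file: field is reliable
      if (table_is_empty)
        return 0;                   // no rows: sequence starts fresh
      return select_max_autoinc();  // old file: MAX(auto_increment_column)
    }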
initialize_auto_increment(): Replaces
ha_innobase::innobase_initialize_autoinc(). This initializes
the AUTO_INCREMENT metadata. Only called from ha_innobase::open().
ha_innobase::info_low(): Do not try to lazily initialize
dict_table_t::autoinc. It must already have been initialized by
ha_innobase::open() or ha_innobase::create().
Note: The adjustments to class ha_innopart were not tested, because
the source code (native InnoDB partitioning) is not being compiled.
In slow shutdown mode, purge threads really must exit only when there is
nothing to purge. Restore the trx_commit_disallowed check and
don't stop purge threads until all connection thread transactions
are gone.
The patch for bug mdev-10882 tried to fix it by providing an
implementation of the virtual method build_clone for the class
Item_cache. It turned out that it is not easy to provide a valid
implementation of Item_cache::build_clone(). At the same time,
if a condition that can be pushed into a materialized view
contains a cached item, this item can be substituted with a basic
constant of the same value. In this way we can avoid building
proper clones of Item_cache objects when constructing pushdown
conditions.
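A toy model of the substitution (stand-in classes, not the server's
Item hierarchy): rather than cloning the cache wrapper, its current
value is materialized as a plain constant, which is trivially safe
to copy into the pushed-down condition.

    struct Expr {
      virtual ~Expr() {}
      virtual Expr *build_clone() const = 0;
    };

    struct IntConst : Expr {          // toy Item_int: a basic constant
      long long v;
      explicit IntConst(long long v) : v(v) {}
      Expr *build_clone() const { return new IntConst(v); }
    };

    struct CachedExpr : Expr {        // toy Item_cache
      long long cached_value;
      explicit CachedExpr(long long v) : cached_value(v) {}
      // Cloning the cache correctly is hard, so emit a constant of
      // the same value instead:
      Expr *build_clone() const { return new IntConst(cached_value); }
    };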
In file sql/opt_range.cc, when calculate_cond_selectivity_for_table()
is called with optimizer_use_condition_selectivity=4:
- thd->no_errors is set to 1
- the original value of thd->no_errors is not restored
- this causes an assertion failure in subsequent queries
Fixed by restoring the original value of thd->no_errors.
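The shape of the fix, as a minimal sketch with a simplified THD:
save the flag, set it for the duration of the computation, and
always restore it.

    struct THD { bool no_errors; };   // toy stand-in for class THD

    void calculate_cond_selectivity_for_table(THD *thd) {
      const bool saved_no_errors = thd->no_errors;
      thd->no_errors = true;          // suppress errors while reading stats
      /* ... compute selectivity ... */
      thd->no_errors = saved_no_errors;  // restore, so later queries and
                                         // their assertions see the
                                         // original value
    }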
Tasks:
Changes in the wsrep_dirty_reads variable:
1.) Global + session scope (currently session-only)
2.) Can be set from the command line
3.) Allow all commands that do not change data (besides SELECT)
4.) Allow prepared statements that do not change data
5.) Works with wsrep_sync_wait enabled
Reduce the number of calls to encryption_get_key_get_latest_version
when doing key rotation, with two changes:
(1) We need to fetch key information when a tablespace does not yet
have encryption information; invalid keys are now handled
differently (see below). There was an extra call to detect
whether key_id is not found during key rotation.
(2) If key_id is not found in the encryption plugin, do not
try fetching a new key_version for it, as it will fail anyway.
We store the return value from the encryption_get_key_get_latest_version
call, and if it returns ENCRYPTION_KEY_VERSION_INVALID there
is no need to call it again.
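A sketch of change (2) (toy types; ENCRYPTION_KEY_VERSION_INVALID is
modeled on the MariaDB encryption service's sentinel, the rest are
stand-ins): cache the failed lookup so the plugin is never asked
again about a key_id it does not have.

    #include <cstdint>

    static const uint32_t ENCRYPTION_KEY_VERSION_INVALID = ~0U;

    struct crypt_state {       // toy stand-in for tablespace crypt data
      uint32_t key_id;
      uint32_t key_version;
      bool key_found;          // false once a lookup has failed
    };

    uint32_t latest_key_version(crypt_state *c,
                                uint32_t (*lookup)(uint32_t)) {
      if (!c->key_found)
        return ENCRYPTION_KEY_VERSION_INVALID;  // known missing: skip call
      uint32_t v = lookup(c->key_id);
      if (v == ENCRYPTION_KEY_VERSION_INVALID)
        c->key_found = false;                   // remember the failure
      else
        c->key_version = v;
      return v;
    }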
Strangely enough, the ?: variant does not link with some older gcc versions:
sql/sql_table.cc:6409: undefined reference to `Alter_inplace_info::ALTER_STORED_GCOL_EXPR'
sql/sql_table.cc:6409: undefined reference to `Alter_inplace_info::ALTER_VIRTUAL_GCOL_EXPR'
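For context, the underlying C++ quirk on a minimal example
(illustrative, not the MariaDB source): the operands of ?: here are
lvalues, which odr-uses the static const members and requires
out-of-class definitions on pre-C++17 compilers; the if/else form
uses only their values.

    struct Alter_flags {       // minimal illustration only
      static const unsigned long STORED_GCOL_EXPR  = 1UL << 0;
      static const unsigned long VIRTUAL_GCOL_EXPR = 1UL << 1;
    };

    unsigned long pick(bool stored) {
      // return stored ? Alter_flags::STORED_GCOL_EXPR
      //               : Alter_flags::VIRTUAL_GCOL_EXPR;
      // ^ may fail to link on older compilers: the ?: operands are
      //   lvalues, odr-using the members without a definition.
      if (stored)
        return Alter_flags::STORED_GCOL_EXPR;   // value only, no odr-use
      return Alter_flags::VIRTUAL_GCOL_EXPR;
    }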