- The flag ALTER_STORED_COLUMN_TYPE was set while doing a varchar extension
for a partitioned table. If all partitions support
can_be_converted_by_engine(), the flag should be set to
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE instead.
Part#2: cleanup:
In part 1 of the fix, the DS-MRR implementation would peek into
the JOIN_TAB to get the rowid filter from
table->reginfo.join_tab->rowid_filter.
This is poor from a code isolation standpoint (why should a
storage engine assume it is used through a JOIN_TAB?).
Fixed this by storing the 'un-pushed' rowid_filter in the DsMrr_impl
structure. The filter survives across multi_range_read_init() calls.
It is discarded when somebody calls index_end() or rnd_end() and cleans
up the DsMrr_impl.
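A minimal, self-contained sketch of that ownership change follows; the type
and member names are simplified stand-ins, not the real handler/DsMrr_impl
classes:

    // Stand-in types only: the MRR layer owns the un-pushed rowid filter
    // itself instead of reaching into table->reginfo.join_tab.
    #include <memory>
    #include <utility>

    struct RowidFilter { /* rowid container elided */ };

    struct DsMrrImplSketch {
      std::unique_ptr<RowidFilter> unpushed_filter;  // kept across init calls

      void set_unpushed_filter(std::unique_ptr<RowidFilter> f) {
        unpushed_filter = std::move(f);
      }

      void multi_range_read_init_sketch() {
        // the filter, if any, is still available here for every init call
      }

      void cleanup() {             // the index_end()/rnd_end() path ends here
        unpushed_filter.reset();
      }
    };

    int main() {
      DsMrrImplSketch mrr;
      mrr.set_unpushed_filter(std::make_unique<RowidFilter>());
      mrr.multi_range_read_init_sketch();   // filter survives the init call
      mrr.cleanup();                        // discarded when the scan closes
    }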
This patch fixes the following defects/bugs:
1. If the BKA[H] algorithm was used to join a table for which the optimizer
had decided to employ a rowid filter, the filter was not actually built.
2. The patch for bug MDEV-21356, which added code canceling the pushing of
a rowid filter into the engine for a table joined with BKA[H] and MRR, was
not quite correct for the InnoDB engine, because the cancellation was done
after InnoDB had already bound the pushed filter to its internal structures.
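A rough ordering sketch of the second defect, with invented names
(push_filter, bind_filter, uses_bka_mrr) rather than the real server code:
if the push has to be cancelled, it must happen before the engine binds the
filter to its internal structures.

    #include <cassert>

    struct Engine {
      const void *bound_filter = nullptr;
      void bind_filter(const void *f) { bound_filter = f; }
    };

    struct JoinTab {
      const void *rowid_filter = nullptr;
      bool uses_bka_mrr = false;
    };

    void push_filter(JoinTab &jt, Engine &e, const void *filter) {
      jt.rowid_filter = filter;
      if (jt.uses_bka_mrr) {       // cancel the push *before* binding it
        jt.rowid_filter = nullptr;
        return;
      }
      e.bind_filter(jt.rowid_filter);
    }

    int main() {
      Engine e;
      JoinTab jt;
      jt.uses_bka_mrr = true;
      int dummy;
      push_filter(jt, e, &dummy);
      assert(e.bound_filter == nullptr);  // nothing stays bound in the engine
    }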
Do not rebuild an index when its key part is converted from utf8mb3 to
utf8mb4 but the key part stays the same.
dict_index_add_to_cache(): assert that prefix_len is divisible by mbmaxlen.
ha_innobase::compare_key_parts(): compare key part length in characters
instead of bytes.
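A tiny illustration of why the comparison must be in characters, using a
hypothetical helper rather than the real ha_innobase::compare_key_parts():
a 10-character prefix is 30 bytes in utf8mb3 but 40 bytes in utf8mb4, so
comparing byte lengths would force an unnecessary rebuild.

    #include <cassert>

    struct KeyPartLen {
      unsigned prefix_len_bytes;   // stored prefix length in bytes
      unsigned mbmaxlen;           // maximum bytes per character of charset
      unsigned chars() const { return prefix_len_bytes / mbmaxlen; }
    };

    bool same_prefix_in_chars(const KeyPartLen &a, const KeyPartLen &b) {
      return a.chars() == b.chars();
    }

    int main() {
      KeyPartLen old_part{30, 3};  // utf8mb3: 10-character prefix = 30 bytes
      KeyPartLen new_part{40, 4};  // utf8mb4: 10-character prefix = 40 bytes
      assert(same_prefix_in_chars(old_part, new_part));  // no rebuild needed
    }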
* Remove those tests that will not be supported on that release.
* Make sure that correct tests are disabled and have MDEVs
* Sort test names
This should not be merged upwards.
The partitioning storage engine now supports MRR but does not support Index
Condition Pushdown (aka ICP). This causes counter-intuitive query plans for
queries that use BKA and conditions that depend on index fields:
- If the condition refers to other tables, BKA's variant of ICP is used
to handle it.
- If the condition depends on this table only, the optimizer will try to
use regular ICP for it, which will fail because the storage engine
doesn't support ICP.
Make the optimizer smarter in the second case: if regular ICP could not be
used, fall back to BKA's variant of ICP.
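A minimal sketch of that fallback, assuming simplified stand-in names
(supports_icp, pick_condition_handler) rather than the real optimizer
interfaces:

    #include <iostream>

    struct Engine {
      bool supports_mrr;
      bool supports_icp;
    };

    enum class CondHandler { REGULAR_ICP, BKA_ICP, POST_FILTER };

    // Decide who evaluates an index-dependent condition on this table.
    CondHandler pick_condition_handler(const Engine &e, bool uses_bka) {
      if (e.supports_icp)
        return CondHandler::REGULAR_ICP;  // push the condition into the engine
      if (uses_bka)
        return CondHandler::BKA_ICP;      // let BKA's variant handle it
      return CondHandler::POST_FILTER;    // evaluate after reading the row
    }

    int main() {
      Engine partition_engine{true /* MRR */, false /* no ICP */};
      CondHandler h = pick_condition_handler(partition_engine, true /* BKA */);
      std::cout << (h == CondHandler::BKA_ICP ? "BKA ICP\n" : "other\n");
    }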
compare_keys_but_name(): do not use KEY_PART_INFO::field for
Field::is_equal(). Following the logic of that code, we need to
compare the fields of the table. But KEY_PART_INFO::field is sometimes
(when the key part is shorter than the table field) a different field.
In that case Field::is_equal() returns an incorrect result and
problems occur.
KEY_PART_INFO::field may become some strange field in
open_table_from_share(). I think this is incorrect logic, some
technical debt. I'm not fixing it right now, because I don't have
time. But I'm making Field::field_length a const class member. Then
the only fishy code which changed that field now requires a
const_cast<>, which brings attention to that code. This change should
not affect the logic of the program in any way.
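The following standalone sketch (simplified Field/KeyPart stand-ins, not the
server's definitions) shows the pitfall: a prefix key part can carry its own
truncated field object, so equality has to be checked against the table's
field.

    #include <cassert>
    #include <cstdint>

    struct Field {
      uint32_t field_length;          // in the real code this is now const
      bool is_equal(const Field &o) const {
        return field_length == o.field_length;
      }
    };

    struct KeyPart {
      const Field *table_field;       // the column as defined in the table
      Field prefix_field;             // truncated copy used for a prefix key
    };

    int main() {
      Field col{100};                 // a VARCHAR(100)-like column
      KeyPart kp{&col, Field{10}};    // KEY(col(10)): 10-character prefix

      // Wrong: comparing the truncated prefix field reports "not equal".
      assert(!kp.prefix_field.is_equal(col));

      // Right: compare the table's own field.
      assert(kp.table_field->is_equal(col));
    }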
Reverted original patch (c2e0a0b).
For consistency with "LOCK TABLE <table_name> READ" and "FLUSH TABLES
WITH READ LOCK", which are forbidden under "BACKUP STAGE", forbid "FLUSH
TABLE <table_name> FOR EXPORT" and "FLUSH TABLE <table_name> WITH READ
LOCK" as well.
This would allow consistent fixes for problems like MDEV-18643.
The context involves semicolon batching, and the error started in 10.2.
No reproducible example has been made yet, but a TCP trace suggests
multiple packets "squeezed" together (e.g. an overlong OK packet with a
trailer that belongs to another packet).
Remove thd->get_stmt_da()->set_skip_flush() when processing a batch.
skip_flush stems from the COM_MULTI code, which was checked in during
10.2 (and is never used).
The fix was confirmed to work by one of the bug reporters. We never
reproduced the problem locally, despite multiple tries, so the root
cause analysis is still missing.
* Remove those tests that will not be supported on that release.
* Make sure that correct tests are disabled and have MDEVs
* Sort test names
This should not be merged upwards.
Do not materialize a semi-join nest if it contains a materialized derived
table/view that can potentially be subject to the split optimization.
Splitting the materialization of such a nest would help, but currently
there is no code to support this technique.
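A sketch of the guard under assumed names (allow_sj_materialization,
is_split_candidate); the real optimizer structures differ:

    #include <cassert>
    #include <vector>

    struct TableRef {
      bool is_materialized_derived;
      bool is_split_candidate;   // "splittable" GROUP BY derived table/view
    };

    bool allow_sj_materialization(const std::vector<TableRef> &nest) {
      for (const TableRef &t : nest)
        if (t.is_materialized_derived && t.is_split_candidate)
          return false;          // keep the nest un-materialized
      return true;
    }

    int main() {
      std::vector<TableRef> nest{{true, true}, {false, false}};
      assert(!allow_sj_materialization(nest));   // nest is not materialized
    }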
- The n_ext value may be less than dtuple_get_n_ext(dtuple) when the PK is
being updated and the new record inherits the externally stored fields
from the delete-marked old record.
Problem:
=======
After discarding the table, fts_optimize_thread aborts during shutdown.
InnoDB fails to remove the table from fts_optimize_wq, which leads
fts_optimize_thread to look up the auxiliary table and fail.
Fix:
====
While discarding the fts table, remove the table from fts_optimize_wq.
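A minimal sketch of the idea, using a hypothetical queue type rather than
InnoDB's fts_optimize_wq: when the table is discarded, it is taken off the
FTS optimize work queue so the background thread never looks up its
auxiliary tables afterwards.

    #include <algorithm>
    #include <mutex>
    #include <vector>

    struct FtsTable { int id; bool has_fts; };

    struct FtsOptimizeQueue {
      std::mutex m;
      std::vector<FtsTable *> work;

      void remove(FtsTable *t) {
        std::lock_guard<std::mutex> g(m);
        work.erase(std::remove(work.begin(), work.end(), t), work.end());
      }
    };

    // Called from the DISCARD path in this sketch, before teardown.
    void discard_table(FtsTable *t, FtsOptimizeQueue &q) {
      if (t->has_fts)
        q.remove(t);
      // ... the actual discard work would follow here ...
    }

    int main() {
      FtsOptimizeQueue q;
      FtsTable t{1, true};
      q.work.push_back(&t);
      discard_table(&t, q);
      return q.work.empty() ? 0 : 1;   // the queue no longer references t
    }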
row_log_table_get_pk_old_col(): For replacing a NULL value for a
column of the being-added primary key, look up the correct
default value, even if columns had been instantly reordered or
dropped earlier. This ought to have been broken ever since
commit 0e5a4ac253 (MDEV-15562).
ha_innobase::commit_inplace_alter_table(): After
ALTER_STORED_COLUMN_ORDER, ensure that the virtual column metadata is
reloaded even when the table is not being rebuilt.
We need to release the global system variables mutex before calling
wsrep_init() to avoid a race with the next SHOW STATUS, and we need to
save the wsrep_on value, as it is changed by wsrep_init().
Added a test case.
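A rough, standalone sketch of that ordering, with stand-in names for
LOCK_global_system_variables, wsrep_on and wsrep_init(); reusing the saved
value after the call is an assumption of this sketch, not taken from the
commit:

    #include <mutex>

    std::mutex sysvar_mutex;   // stand-in for LOCK_global_system_variables
    bool wsrep_on = true;      // stand-in for the real global

    void wsrep_init_stub() { wsrep_on = false; }  // pretend init flips it

    void reinit_wsrep() {
      std::unique_lock<std::mutex> lk(sysvar_mutex);
      bool saved_wsrep_on = wsrep_on;   // save: wsrep_init may change it
      lk.unlock();                      // avoid racing with SHOW STATUS
      wsrep_init_stub();
      lk.lock();
      wsrep_on = saved_wsrep_on;        // reuse the saved value afterwards
    }

    int main() {
      reinit_wsrep();
      return wsrep_on ? 0 : 1;          // the saved value survived the init
    }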