The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from a table with an index on a virtual column.
This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map; the latter is a raw
bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP
* The remaining MY_BITMAPs are stored in TABLE and TABLE_SHARE
* Pointers to the stored MY_BITMAPs, like orig_read_set etc, and
sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.
The last three bullets, when used together (which is almost always the
case), make the program flow cumbersome and impossible to follow, not to
mention the errors they cause, like this MDEV-17556, where the tmp_set
pointer was assigned to read_set, write_set and vcol_set, then its bitmap
was substituted with all_set.bitmap by a dbug_tmp_use_all_columns() call,
and then bitmap_clear_all(&tmp_set) was applied on top of all this.
To untangle this knot, one rule should be applied:
* Never substitute bitmaps! This patch is about exactly that.
The orig_* and all_set bitmaps are already never substituted.
This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
to accept MY_BITMAP** and to return MY_BITMAP* instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
MY_BITMAP* instead of my_bitmap_map*
These functions will now substitute read_set/write_set/vcol_set directly
and won't touch the underlying bitmaps.
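As a minimal sketch of the new contract (the bodies below are
illustrative, not the exact patch):

  static inline MY_BITMAP *tmp_use_all_columns(TABLE *table,
                                               MY_BITMAP **bitmap)
  {
    MY_BITMAP *old= *bitmap;
    *bitmap= &table->s->all_set;  /* substitute the pointer only */
    return old;                   /* caller keeps this for restore */
  }

  static inline void tmp_restore_column_map(MY_BITMAP **bitmap,
                                            MY_BITMAP *old)
  {
    *bitmap= old;                 /* underlying bitmap never touched */
  }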
Problem:
=======
Upon deleting or updating a row in a parent table (with a primary key), if
the child table has a virtual column and an associated key with ON UPDATE
CASCADE/ON DELETE CASCADE, the slave crashes.
Analysis:
========
Tables related through a foreign key require prelocking similar to
triggers, i.e. if a table has triggers/foreign keys, we should add all
tables and routines used by them to the prelocking set. This prelocking
happens during the 'open_and_lock_tables' call. Each table being opened is
checked for foreign key references. If a foreign key reference exists, the
child table is opened and linked to the table_list. Upon any modification
to the parent table, its corresponding child tables are retrieved from the
table_list and updated accordingly. This prelocking works fine on the
master.
On the slave, prelocking works for the following cases:
- Statement/mixed-based replication
- Row-based replication when trigger execution is enabled through
'slave_run_triggers_for_rbr=YES/LOGGING/ENFORCE'
Otherwise it results in an assert/crash, as the parent table will not find
the corresponding child table, which will be NULL. Dereferencing the NULL
pointer makes the slave server exit.
Fix:
===
Introduce a new 'slave_fk_event_map' flag similar to 'trg_event_map'. This
flag ensures that, in row-based replication with foreign keys enabled, all
parent and child tables are prelocked, so that the parent is able to
locate the child table.
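A hedged illustration of how such an event map could be consulted while
prelocking on the slave (the bit constants and the helper are
hypothetical; only slave_fk_event_map and trg_event_map come from this
fix):

  /* hypothetical bits, modeled on trg_event_map-style maps */
  static const uint8 FK_EVENT_UPDATE= 1 << 0;
  static const uint8 FK_EVENT_DELETE= 1 << 1;

  static void prelock_fk_children(THD *thd, TABLE_LIST *tl)
  {
    /* if the applied row event may cascade, add the FK child tables
       to the prelocking set (the helper below is hypothetical) */
    if (tl->slave_fk_event_map & (FK_EVENT_UPDATE | FK_EVENT_DELETE))
      add_fk_children_to_prelocking_set(thd, tl);
  }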
Note: This issue is specific to the slave, hence only the slave needs to
be upgraded.
This follows up commits 94a520ddbe and 7c5519c12d.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.
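A hedged sketch of the column rewrite (the helper below is hypothetical;
read_set and the CHAR(0) trick are from this commit):

  static void shrink_unused_is_columns(TABLE *table)
  {
    /* any column not marked in read_set becomes a zero-length CHAR(0),
       shrinking each MEMORY row before the table is instantiated */
    for (uint i= 0; i < table->s->fields; i++)
      if (!bitmap_is_set(table->read_set, i))
        replace_column_with_char0(table, i);  /* hypothetical helper */
  }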
Note 1: when transforming table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for a deleted mark, which create_tmp_table
didn't provide.
Note 2: the optimizer was calling handler::info() (to learn the number of
rows) before the temp table was populated. That didn't make much sense.
Now it happens before the table is even instantiated. Preserve the old
behavior and report 0 rows.
This reverts e2664ee836 and a8458a2345
In case of NATURAL JOIN / USING, mark all fields (one table cannot be
opened in any case, so the optimisation is not worth it).
IMHO the table should be checked for used fields and filled after prepare,
when we will have the whole info about used fields, but that is too big a
change for a bugfix; it will be made later by Serg's patch.
* Allocate items on thd->mem_root while refixing vcol exprs
* Register vcol tree changes and roll them back after the statement is executed.
Explanation:
Due to collation implementation specifics, an Item tree could change while
being fixed.
The tricky thing here is to do it on a proper arena.
It's usually not a problem when a field is deterministic; however, it
becomes painful in the opposite case, because of where the allocations
happen.
A non-deterministic field has to be refixed on each statement, since it
depends on the environment state.
Such a tree change must be temporary, and therefore it should be reverted
after the statement execution.
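A hedged sketch of the rollback mechanism (THD::change_item_tree is the
registration API; the wrapper itself is illustrative):

  /* replace a node in the vcol tree only for this statement; the old
     pointer is registered so rollback_item_tree_changes() restores it */
  static void refix_vcol_node(THD *thd, Item **place, Item *new_item)
  {
    thd->change_item_tree(place, new_item);  /* temporary, rolled back */
  }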
Starting from 10.3, the optimizer is able to detect that entire outer join
nests are constant (because of "Impossible ON") and remove them (see
mark_join_nest_as_const).
However, this was not properly accounted for in the NESTED_JOIN structure
and in the way check_interleaving_with_nj() uses its n_tables member to
check whether the join prefix order is allowed.
(The result was that the optimizer could conclude that no join prefix is
allowed and fail an assertion.)
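A hedged illustration of the accounting problem (not the actual patch):
when a nest is removed as constant, the counter consulted by
check_interleaving_with_nj() must reflect it, e.g.

  if (nest_is_constant(nested_join))         /* hypothetical predicate */
    nested_join->n_tables-= removed_tables;  /* keep the check consistent */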
Previously, multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have appeared sooner or later.
To avoid a scalability bottleneck, histogram loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever histograms were already loaded by a preceding statement (the hot
path), only a scalable load-acquire check is performed.
Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed.
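A self-contained sketch of this tri-state protocol (the names and the
yield-loop are illustrative, not the server implementation):

  #include <atomic>
  #include <thread>

  enum { H_EMPTY, H_LOADING, H_READY };
  static std::atomic<int> hist_state{H_EMPTY};
  void load_histograms();  /* hypothetical: reads the stats tables */

  void ensure_histograms_loaded()
  {
    if (hist_state.load(std::memory_order_acquire) == H_READY)
      return;                               /* hot path: already loaded */
    int expected= H_EMPTY;
    if (hist_state.compare_exchange_strong(expected, H_LOADING))
    {
      load_histograms();                    /* single loader */
      hist_state.store(H_READY, std::memory_order_release);
    }
    else                                    /* concurrent load in progress */
      while (hist_state.load(std::memory_order_acquire) != H_READY)
        std::this_thread::yield();
  }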
- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state
atomic variable.
- Simplified away alloc_histograms_for_table_share().
Note: there is still likely a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and is out of the scope of this effort. One way
of fixing it could be to revive TABLE::histograms_are_read and add
appropriate checks wherever needed.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Previously, multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have appeared sooner or later.
To avoid a scalability bottleneck, statistics loading is protected by a
per-TABLE_SHARE atomic variable (the same protocol as for histograms
above).
Whenever statistics were already loaded by a preceding statement (the hot
path), only a scalable load-acquire check is performed.
Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.
TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic
variable.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
The major change is to replace all instances of
table->m_needs_reopen= true;
with
table->mark_table_for_reopen();
The main fix for the problem is to ensure that we mark all instances of
the table in the locked_table_list and that, when we reopen the tables, we
first close all of them before reopening and locking them.
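A hedged sketch of the marking step (mark_table_for_reopen() is from this
commit; the delegation body is an assumption):

  void TABLE::mark_table_for_reopen()
  {
    /* mark every locked instance sharing this TABLE_SHARE, not just
       this handle, so reopen_tables() closes them all first */
    in_use->locked_tables_list.mark_table_for_reopen(this);  /* assumed */
  }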
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
Only MDL-prelock, but do not open, FK child tables for read-only
(RESTRICT) FK actions.
Tables still need to be opened for CASCADE actions, see 9180e8666b
- Print the rowid filters that are available for use with each table.
- Make print_best_access_for_table() print which filter it has picked.
- Make best_access_path() print the filter for considered ref accesses.
Wrong assertion condition. SYSTEM_TIME_ALL indicates that
vers_setup_conds() is done. When FOR SYSTEM_TIME ALL is specified in the
command, the assertion passes but does not check anything.
Don't do skip_setup_conds() unless all errors are checked.
Fixes the following errors:
ER_PERIOD_NOT_FOUND
ER_VERS_QUERY_IN_PARTITION
ER_VERS_ENGINE_UNSUPPORTED
ER_VERS_NOT_VERSIONED
When a view is merged by DT_MERGE_FOR_INSERT, it is then skipped from
processing and doesn't update the WHERE clause with
vers_setup_conds(). Note that the view itself cannot work in
vers_setup_conds() because it doesn't have the row_start, row_end
fields. Thus it is required to descend down to the material TABLE_LIST
through calls of mysql_derived_prepare() and run vers_setup_conds()
from there. Luckily, all views (views of views, views of views of
views, etc.) are linked in one list through the next_global pointer, so
we can skip all views of views and get straight to the non-view
TABLE_LIST by checking its merge_underlying_list property for a zero
value (it is assigned by DT_MERGE_FOR_INSERT for merged derived tables).
We have to do that only for UPDATE and DELETE; other DML commands
don't use the WHERE clause.
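A hedged sketch of that descent (the loop is illustrative; next_global
and merge_underlying_list are the fields named above):

  static void vers_descend_and_setup(THD *thd, TABLE_LIST *view)
  {
    for (TABLE_LIST *tl= view; tl; tl= tl->next_global)
    {
      if (tl->merge_underlying_list)
        continue;                     /* still a merged view, descend */
      vers_setup_conds_for(thd, tl);  /* hypothetical wrapper over
                                         vers_setup_conds() */
      break;                          /* reached the material TABLE_LIST */
    }
  }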
MDEV-21146 Assertion `m_lock_type == 2' in handler::ha_drop_table upon LOAD DATA
LOAD DATA does not use WHERE, so the above call of vers_setup_conds()
is not needed. unit->prepare() led to a wrongly locked temporary table.
Turn the read cache off for UPDATE and multi-UPDATE on versioned
tables. no_cache is re-initialized on each TABLE open because it is
applicable to specific algorithms only.
As a side fix, vers_insert_history_row() honors the vers_write setting.
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted
inside this loop, the cache misses it and fails with an error.
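A hedged sketch of the flag's use (no_cache is named above; the placement
and the versioned() check are assumptions):

  /* before entering the update loop: keep the handler off the
     READ_CACHE so freshly inserted history rows are not missed */
  if (table->versioned())
    table->no_cache= true;   /* re-initialized on each TABLE open */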
TODO:
Currently maria_extra() does not support SEQ_READ_APPEND. It might be
possible to use this type of cache.
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().
Additional DML cases in view.test
The issue here is a wrong estimate of the cardinality of a partial join:
the cardinality is too high because the function table_cond_selectivity()
returns the absurd number 100, while selectivity cannot be greater than 1.
When accessing table t by outer reference t1.a via index, we do not
perform any range analysis for t. Yet we see that
TABLE::quick_key_parts[key] and TABLE::quick_rows[key] contain a non-zero
value, though these should have remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init does not clean the
arrays TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should do so because the TABLE structure created for any
instance of a table can be reused for many queries.
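A hedged sketch of the corresponding cleanup in TABLE::init() (bzero is
the server idiom; the exact placement is an assumption):

  bzero((char*) quick_key_parts, sizeof(quick_key_parts));
  bzero((char*) quick_rows, sizeof(quick_rows));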
- Any temporary tables created under read-only mode will never be logged
  to the binary log. Any usage of these tables to update normal tables,
  even after read-only has been disabled, will use row-based logging (as
  the temporary table will not be on the slave).
- ANALYZE, CHECK and REPAIR TABLE will not be logged in read-only mode.
Other things:
- Removed unused variables in
  MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
  is used to update table_share->table_creation_was_logged; see the sketch
  after this list.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will switch the
  binlog format to row logging.
- Removed the not needed/wrong setting of
  table->s->table_creation_was_logged in create_table_from_items()
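A hedged sketch of the new return-value contract of THD::binlog_query()
(the wrapper and its arguments are hypothetical):

  int res= write_to_binlog(thd, thd->query());  /* hypothetical wrapper
                                                   around THD::binlog_query() */
  if (res != -1)                     /* -1 now means "was not logged" */
    table->s->table_creation_was_logged= 1;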