This assertion fails in a thread that removes all table instances of a
particular table from the table cache (e.g. "DROP TABLE") while another
thread concurrently evicts an instance of the same table from the table
cache.
After "MDEV-10296 - Multi-instance table cache" there is a gap in
eviction code of tc_add_table() between removing table from free_tables
and all_tables not protected by any mutexes.
This is now valid table cache state, however assertion wasn't amended
along with original patch. Moved assertion down, after waiting for such
table instances to get closed.
This problem manifested itself when a join query used two or more
materialized CTEs such that each of them employed the same recursive CTE.
The bug caused a crash. The crash happened because the cleanup()
function was performed prematurely for the recursive CTE. This cleanup was
induced by the cleanup of the first CTE that referenced the recursive CTE.
It destroyed the structures needed to read from the temporary table
containing the rows of the recursive CTE, so an attempt to read these rows
for the second CTE referencing the recursive CTE triggered a crash.
The cleanup of a recursive CTE R should be performed only after the cleanup
of the last materialized CTE that uses R.
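A minimal sketch of the kind of query involved (hypothetical tables and
data, not the original test case); c1 and c2 are materialized because of
GROUP BY and both read the rows of the recursive CTE r:
CREATE TABLE edges (f INT, t INT);
INSERT INTO edges VALUES (1,2),(2,3),(3,4);
WITH RECURSIVE r AS (
    SELECT f, t FROM edges WHERE f = 1
    UNION ALL
    SELECT e.f, e.t FROM edges e JOIN r ON e.f = r.t
),
c1 AS (SELECT t, COUNT(*) AS cnt FROM r GROUP BY t),
c2 AS (SELECT f, COUNT(*) AS cnt FROM r GROUP BY f)
SELECT * FROM c1 JOIN c2 ON c1.t = c2.f;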
Because innodb_file_per_table can be enabled at runtime after it
was disabled at startup, it is better to always register the same
innobase_hton->tablefile_extensions. Besides,
innodb_file_per_table=OFF does not prevent loading tables that may
have been created earlier with the .ibd file extension.
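For illustration only (a sketch, not part of the original patch):
innodb_file_per_table is a dynamic variable, so within one server lifetime
tables with and without their own .ibd file can coexist, and the .ibd
extension has to be registered either way:
SET GLOBAL innodb_file_per_table = OFF;
CREATE TABLE t_sys (a INT) ENGINE=InnoDB; -- goes to the system tablespace
SET GLOBAL innodb_file_per_table = ON;
CREATE TABLE t_ibd (a INT) ENGINE=InnoDB; -- gets its own t_ibd.ibd file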
Remove the plugin-load option from mariabackup. It does not need to be an
option: we only need to store the plugin-load value during the backup phase
and reuse the same value during --prepare.
The fix is to read plugin-load from backup-my.cnf during prepare.
This regression was introduced in MDEV-16515.
We would fail to drop a temporary table on client disconnect,
because trx_is_interrupted() would hold. To add insult to
injury, in MariaDB 10.1, InnoDB temporary tables are actually
persistent, so the garbage temporary tables will never be dropped.
row_drop_table_for_mysql(): If several iterations of
buf_LRU_drop_page_hash_for_tablespace() are needed,
do not interrupt the drop of a temporary table even after
the transaction has been marked as killed.
Server shutdown will still terminate the loop, and DROP TABLE
of persistent tables will keep checking whether the execution was aborted.
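An illustration of the user-visible scenario (hypothetical table, sketched
for clarity): the implicit drop of the temporary table at disconnect must
run to completion even though the transaction is already flagged as
interrupted:
CREATE TEMPORARY TABLE tmp (a INT, KEY(a)) ENGINE=InnoDB;
INSERT INTO tmp VALUES (1),(2),(3);
-- the client now disconnects (or the connection is killed);
-- the implicit DROP of tmp must not be abandoned part-way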
Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepared statements missed
the DT_INIT flag in the last parameter of their open_normal_and_derived_tables()
calls. This made context analysis of derived tables dependent on the order in
which the derived tables were processed by mysql_handle_derived(). This
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different, and the lack
of the DT_INIT flag in some open_normal_and_derived_tables() calls became
critical, as some derived tables were not identified as such.
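A sketch of the statement shape affected (hypothetical tables t1 and t2):
the derived tables must be fully initialized during the context analysis
done at PREPARE time, regardless of the order in which
mysql_handle_derived() visits them:
PREPARE stmt FROM
  'SELECT *
     FROM (SELECT a FROM t1) AS dt1
     JOIN (SELECT b FROM t2) AS dt2 ON dt1.a = dt2.b';
EXECUTE stmt;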
Move up to this revision in the upstream:
commit de1e8c7bfe7c875ea284b55040e8f3cd3a56fcc2
Author: Abhinav Sharma <abhinavsharma@fb.com>
Date: Thu Aug 23 14:34:39 2018 -0700
Log updates to semi-sync whitelist in the error log
Summary:
Plugin variable changes are not logged in the error log even when
log_global_var_changes is enabled. Logging updates to whitelist will help in
debugging.
Reviewed By: guokeno0
Differential Revision: D9483807
fbshipit-source-id: e111cda773d
The problem was that a parallel open of a table overwrote info->state that
was in use by repair.
Fixed by changing _ma_tmp_disable_logging_for_table() to use
a new state buffer state.no_logging to store the temporary state.
Other things:
- Use the original number of rows when retrying repair to get rid of a
potential warning "Number of rows changed from X to Y"
- Changed maria_commit() to make it easier to merge with 10.4
- If the table is not locked (as with SHOW commands), use the global
number of rows, as the local number may not be up to date.
(Minor, not critical fix)
- Added some missing DBUG_RETURN calls
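A sketch of the kind of race involved (hypothetical Aria table, not the
original test case):
-- connection 1: repairs t1 and keeps working with its state
REPAIR TABLE t1;
-- connection 2, concurrently: opens t1 without locking it
SHOW TABLE STATUS LIKE 't1';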
The bug appears because of the Item_func_in::build_clone() method.
The 'array' field of an Item_func_in item that can be pushed into
a materialized view/derived table was built in the wrong way.
It becomes invalid after the pushdown of the condition into the first
SELECT that defines that view/derived table. The server then crashes in
the pushdown into the next SELECT while trying to use the already invalid
'array' field.
To fix this, Item_func_in::build_clone() was changed.
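A sketch of the kind of query affected (hypothetical table, assuming
condition pushdown into derived tables is enabled): the IN predicate is
cloned and pushed first into one SELECT of the union that defines the
materialized derived table and then into the next:
CREATE TABLE t1 (a INT, b INT);
SELECT * FROM
  (SELECT a, b FROM t1 WHERE b > 0
   UNION
   SELECT a, b FROM t1 WHERE b < 0) AS dt
WHERE dt.a IN (1, 2, 3);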
Currently the RocksDB engine doesn't update AUTO_INCREMENT in UPDATE statements.
For example,
CREATE TABLE t1 (pk INT AUTO_INCREMENT, a INT, PRIMARY KEY(pk)) ENGINE=RocksDB;
INSERT INTO t1 (a) VALUES (1);
UPDATE t1 SET pk = 3; ==> AUTO_INCREMENT should be updated to 4.
Without this fix, it hits "Assertion `dd_val >= last_val' failed" in
myrocks::ha_rocksdb::load_auto_incr_value_from_index.
(cherry picked from commit f7154242b8)