Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepared statements missed
the DT_INIT flag in the last parameter of their open_normal_and_derived_tables()
calls. This made the context analysis of derived tables dependent on the order
in which the derived tables were processed by mysql_handle_derived(). That
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different, and the lack
of the DT_INIT flag in some open_normal_and_derived_tables() calls became
critical, as some derived tables were not identified as such.
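A minimal sketch of the affected class of statement, a derived table inside a
prepared statement (hypothetical table; the reported test case is not
reproduced here):
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'SELECT * FROM (SELECT a FROM t1) AS dt';
EXECUTE stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;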
The SELECT with the INNER JOIN is executed with one of the two tables being
optimized as a constant table, which is pre-read. Spider nevertheless attempts
to push down the join to the data node. The crash occurs because the constant
table is excluded from the optimized query that Spider attempts to push down.
In order for Spider to be able to push down a join, the following conditions
need to be met:
- All of the tables involved in the join need to be included in the optimized
query that Spider pushes down. When any of the tables involved in the join
is a constant table, it is excluded from the optimized query that Spider
attempts to push down.
- All fields involved in the query need to be members of tables included in the
optimized query.
I fixed the problem by preventing Spider from pushing down queries that include
a field that is not a member of a table included in the optimized query. This
solution fixes the reported problem and also fixes other potential problems.
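A hypothetical shape of such a query, assuming Spider tables ta and tb where
tb has a unique key on id, so that the optimizer can treat tb as a constant
table:
SELECT ta.a, tb.b
FROM ta INNER JOIN tb ON ta.id = tb.id
WHERE tb.id = 1;  -- tb is pre-read as a constant table and excluded
                  -- from the query that Spider tries to push down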
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 4885baf on branch bb-10.3-MDEV-16889
During a server restart that follows a server crash, there may be error
messages that indicate that certain system tables are marked as crashed and
may be corrupted. Upon checking the system tables that are marked as crashed,
it may be found that there is no corruption that needs repair. However, the
error messages that are issued imply that the user needs to nevertheless
manually repair the system tables that were marked as crashed. These issues
have been moved to a separate bug MDEV-17068.
MDEV-16250 addresses the work to make the Spider system tables crash safe.
This work involves the following changes in 10.4:
- Changes to the install_spider.sql script to change the storage engine for
the Spider system tables to Aria. This is implemented in such a way as to
allow the script to be run repeatedly without any harm or errors.
- Changes to the init_spider.inc script that is run during initialization of
every test in the Spider test suites. This script now uses the Aria
storage engine for the Spider system tables in MariaDB Server 10.4 and
later releases.
- Added a test to the Spider test suite to display the storage engine of each
Spider system table, to verify that the correct storage engine is used.
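The exact statements in the new test may differ; this is one way such a
storage-engine check could look:
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'mysql' AND table_name LIKE 'spider\_%'
ORDER BY table_name;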
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Commit 73fac2a from branch bb-10.4-MDEV-16250
Move up to this revision in the upstream:
commit de1e8c7bfe7c875ea284b55040e8f3cd3a56fcc2
Author: Abhinav Sharma <abhinavsharma@fb.com>
Date: Thu Aug 23 14:34:39 2018 -0700
Log updates to semi-sync whitelist in the error log
Summary:
Plugin variable changes are not logged in the error log even when
log_global_var_changes is enabled. Logging updates to the whitelist will help
in debugging.
Reviewed By: guokeno0
Differential Revision: D9483807
fbshipit-source-id: e111cda773d
- Changed ERROR to WARNING for MyISAM/Aria messages
that are warnings in the check utilities.
This affects, for example, "client is using or
hasn't closed the table properly".
- Print "Table is fixed" if the check succeeded in
fixing the table.
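Illustrative of the change (output sketched, not copied from a real run):
CHECK TABLE t1;
-- Msg_type for "Client is using or hasn't closed the table properly"
-- changes from 'error' to 'warning', and "Table is fixed" is printed
-- when the check managed to repair the table.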
The problem was that a parallel open of a table overwrote info->state,
which was in use by repair.
Fixed by changing _ma_tmp_disable_logging_for_table() to use
a new state buffer, state.no_logging, to store the temporary state.
Other things:
- Use the original number of rows when retrying a repair, to get rid of a
  potential warning "Number of rows changed from X to Y"
- Changed maria_commit() to make it easier to merge with 10.4
- If the table is not locked (as with SHOW commands), use the global
  number of rows, as the local number may not be up to date.
  (Minor, not critical, fix)
- Added some missing DBUG_RETURN calls
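A sketch of the racy workload (two concurrent sessions; timing-dependent, so
not a deterministic repro):
-- session 1:
REPAIR TABLE t1;
-- session 2, while the repair is still running:
SELECT COUNT(*) FROM t1;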
A table value constructor shows a wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of a subselect, when that subselect was an ALL/ANY
subquery. In this case the result of the derived table was sent to a sink
of the class select_subselect rather than of the class select_unit. Thus
the previous fix could cause memory overwrites when running EXPLAIN for
queries with table value constructors in ALL/ANY subselects.
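A hypothetical query of the affected shape, with a table value constructor as
the specification of an ANY subselect (assuming the server accepts a TVC in
this position):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1),(4);
EXPLAIN SELECT * FROM t1 WHERE a = ANY (VALUES (1),(2),(3));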
The bug appears because of the Item_func_in::build_clone() method.
The 'array' field of an Item_func_in item that can be pushed into
a materialized view/derived table was built in the wrong way:
it became invalid after the pushdown of the condition into the first
SELECT that defines that view/derived table. The server then crashed in
the pushdown into the next SELECT while trying to use the already invalid
'array' field.
To fix this, Item_func_in::build_clone() was changed.
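A hypothetical shape of a query that exercises this path: the IN condition
(whose 'array' is built for the constant list) is pushed into both SELECTs of
the view:
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1 UNION SELECT a + 1 FROM t1;
SELECT * FROM v1 WHERE a IN (1, 2, 3);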
MDEV-17058: Test failure on wsrep.variables
MDEV-17060: Test failure on galera.galera_var_slave_threads
Fix the incorrect calculation of the increased number of applier (slave)
threads. Note that an increase takes effect "immediately", but we should
use a proper wait condition to wait for it. Reducing the number of
slave threads is not immediate, as a thread will only exit after a
replication event.
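A sketch of the kind of wait the tests need after increasing the setting;
wsrep_thread_count is the status variable commonly used for such waits in
Galera builds, though the exact wait logic in the tests may differ:
SET GLOBAL wsrep_slave_threads = 4;
SELECT VARIABLE_VALUE FROM information_schema.global_status
WHERE VARIABLE_NAME = 'wsrep_thread_count';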
Currently the RocksDB engine doesn't update AUTO_INCREMENT in UPDATE statements.
For example,
CREATE TABLE t1 (pk INT AUTO_INCREMENT, a INT, PRIMARY KEY(pk)) ENGINE=RocksDB;
INSERT INTO t1 (a) VALUES (1);
UPDATE t1 SET pk = 3; ==> AUTO_INCREMENT should be updated to 4.
Without this fix, it hits the Assertion `dd_val >= last_val' failed in
myrocks::ha_rocksdb::load_auto_incr_value_from_index.
(cherry picked from commit f7154242b8)
Implement according to the SQL:2008 standard specification.
The check_constraints table is used for fetching metadata about
the constraints defined for tables in all databases.
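For example (column set as implemented in MariaDB's INFORMATION_SCHEMA, which
may differ slightly from the standard):
CREATE TABLE t1 (a INT, CONSTRAINT a_positive CHECK (a > 0));
SELECT constraint_schema, table_name, constraint_name, check_clause
FROM information_schema.check_constraints
WHERE table_name = 't1';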
Moved the argument validation checks of Item_name_const from the constructor
to Create_func_name_const::create_2_arg.
Also reverted the fix bf1c53e9be
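Illustrative of the validation being moved: NAME_CONST() requires constant
arguments and should reject anything else when the item is created:
SELECT NAME_CONST('myname', 42);      -- accepted: both arguments are constants
SELECT NAME_CONST('myname', RAND());  -- must be rejected: value is not a constant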
An INSERT into a temporary table would fail to set the
index page as modified. If there were no other write operations
(such as UPDATE or DELETE) to the page, and the page was evicted,
we would read back the old contents of the page, causing
corruption or loss of data.
page_cur_insert_rec_write_log(): Call mtr_t::set_modified()
for temporary tables. Normally this is part of the mlog_open()
call, but the mlog_open() call was only present in debug builds.
This regression was caused by commit 48192f963a, which was a preparation
for MDEV-11369 and was supposed to affect debug builds only.
Thanks to Thirunarayanan Balathandayuthapani for debugging.
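A sketch of the affected scenario (whether the page is actually evicted
depends on buffer pool pressure, so this is not a deterministic repro):
CREATE TEMPORARY TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1);
-- if the index page is evicted with no further writes to it, reading it
-- back could return the stale pre-INSERT contents without the fix
SELECT * FROM t1;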
When a table is renamed to an internal #sql2 or #sql-ib name during
a table-rebuilding DDL operation such as OPTIMIZE TABLE or ALTER TABLE,
and shortly after that a purge operation in an index on virtual columns
is attempted, the purge operation could fail, and it would then also fail
to release the table reference.
innodb_acquire_mdl(): Release the reference if the table name is not
valid for acquiring a meta-data lock (MDL).
innodb_find_table_for_vc(): Add a debug assertion if the table name
is not valid. This code path is for DML execution. The table
should have a valid name for executing DML, and furthermore an MDL
will prevent the table from being renamed.
row_vers_build_clust_v_col(): Add a debug assertion that both indexes
must belong to the same table.
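A hypothetical workload of the affected shape: a table with an indexed
virtual column that is rebuilt while purge still has pending work:
CREATE TABLE t1 (a INT, b INT AS (a + 1) VIRTUAL, KEY (b)) ENGINE=InnoDB;
INSERT INTO t1 (a) VALUES (1), (2);
DELETE FROM t1;      -- leaves undo records for purge to process
OPTIMIZE TABLE t1;   -- rebuilds the table under an internal #sql name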
After iterating over all fields and setting PART_INDIRECT_KEY_FLAG as
necessary, TABLE::mark_columns_used_by_virtual_fields() remembers
in TABLE_SHARE that this operation was done and need not be repeated.
But since the flag is set in TABLE_SHARE, PART_INDIRECT_KEY_FLAG must
be set in TABLE_SHARE::field[], not only in TABLE::field[].
Otherwise new TABLEs opened from this TABLE_SHARE will
never have it.
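For context, a sketch of where the flag matters (hypothetical table): the base
column a below is part of an "indirect" key because the indexed virtual column
v depends on it, so a must carry PART_INDIRECT_KEY_FLAG in every TABLE opened
from the share:
CREATE TABLE t1 (a INT, v INT AS (a * 2) VIRTUAL, KEY (v));
UPDATE t1 SET a = a + 1;  -- must maintain the index on v, so 'a' needs the flag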
Cleanup:
- change_fields_versioning_try(): use innodb_col_no() instead of a manual loop
- change_fields_versioning_cache(): use innodb_col_no() instead of a manual loop