- Made output aligned in aria_chk -d
- Aria engine error texts are now written instead of "Undefined error"
- When running with --check --force, tables that have wrong TRNs but are otherwise
correct are now zerofilled
- Fixed several bugs in check and recovery related to fulltext
- When doing recovery, store highest found TRID in aria_control_file
Before this, the
Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepare statements missed
the DT_INIT flag in the last parameter of the open_normal_and_derived_tables()
calls. This made context analysis of derived tables dependent on the order in
which the derived tables were processed by mysql_handle_derived(). This
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different, and the lack
of the DT_INIT flag in some open_normal_and_derived_tables() calls became
critical, as some derived tables were not identified as such.
This also fixes MDEV-16313 Assertion `next_free_value % real_increment == offset' fails upon CREATE SEQUENCE in galera cluster
Fixed by adding llabs() to the assertion.
Also adjusted auto_increment_offset to be taken modulo auto_increment_increment.
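For illustration, a minimal sketch of the situation the assertion guards against
(the values are made up; in a Galera cluster auto_increment_increment and
auto_increment_offset are normally set by wsrep_auto_increment_control, and a
sequence created with INCREMENT BY 0 follows these two variables):
SET SESSION auto_increment_increment = 3;
SET SESSION auto_increment_offset = 5;   -- offset larger than the increment
CREATE SEQUENCE s1 INCREMENT BY 0;       -- INCREMENT BY 0: follow the two variables above
SELECT NEXT VALUE FOR s1;                -- with the fix the offset is used as 5 MOD 3 = 2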
The SELECT with the INNER JOIN is executed with one of the two tables being
optimized as a constant table, which is pre-read. Spider nevertheless attempts
to push down the join to the data node. The crash occurs because the constant
table is excluded from the optimized query that Spider attempts to push down.
In order for Spider to be able to push down a join, the following conditions
need to be met:
- All of the tables involved in the join need to be included in the optimized
query that Spider pushes down. When any of the tables involved in the join
is a constant table, it is excluded from the optimized query that Spider
attempts to push down.
- All fields involved in the query need to be members of tables included in the
optimized query.
I fixed the problem by preventing Spider from pushing down queries that include
a field that is not a member of a table included in the optimized query. This
solution fixes the reported problem and also fixes other potential problems.
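For illustration, a sketch of the query shape that triggered the crash (table and
column names are made up; both tables would be Spider tables on the same data node):
SELECT a.v, b.v
FROM ta a INNER JOIN tb b ON a.id = b.id
WHERE b.id = 1;   -- b is read as a constant table and excluded from the query Spider pushes down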
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged: Commit 4885baf on branch bb-10.3-MDEV-16889
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect of an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
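For illustration, the shape of a statement that exercises this code path (table
and column names are made up):
EXPLAIN
SELECT * FROM t1
WHERE t1.a = ANY (VALUES (1), (2), (3));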
The bug appears because of the Item_func_in::build_clone() method.
The 'array' field for the Item_func_in item that can be pushed into
the materialized view/derived table was built in the wrong way.
It becomes invalid after the pushdown of the condition into the first
SELECT that defines that view/derived table. The server crashes in
the pushdown into the next SELECT while trying to use the already invalid
'array' field.
To fix this, Item_func_in::build_clone() was changed.
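For illustration, a sketch of the scenario (the view definition is hypothetical;
the point is that the same cloned IN condition gets pushed into more than one
SELECT defining the materialized view):
CREATE VIEW v1 AS
  SELECT a, MAX(b) AS mx FROM t1 GROUP BY a
  UNION
  SELECT a, MAX(b) FROM t2 GROUP BY a;
SELECT * FROM v1 WHERE a IN (1, 2, 3);   -- the IN condition is cloned and pushed into both SELECTs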
MDEV-17058: Test failure on wsrep.variables
MDEV-17060: Test failure on galera.galera_var_slave_threads
Fix incorrect calculation of increased applier (slave) threads.
Note that the increase takes effect "immediately", but we should
use a proper wait condition to wait for it. Reducing the number of
slave threads is not immediate, as a thread will only exit after a
replication event.
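For illustration, the kind of test-side handling this implies (the processlist
predicate is an assumption about how Galera applier threads are reported, not a
quote from the actual tests):
SET GLOBAL wsrep_slave_threads = 4;
-- the new applier threads start asynchronously, so wait for them instead of checking immediately:
SELECT COUNT(*) FROM information_schema.PROCESSLIST WHERE USER = 'system user';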
Currently the RocksDB engine doesn't update AUTO_INCREMENT in the UPDATE statement.
For example,
CREATE TABLE t1 (pk INT AUTO_INCREMENT, a INT, PRIMARY KEY(pk)) ENGINE=RocksDB;
INSERT INTO t1 (a) VALUES (1);
UPDATE t1 SET pk = 3; ==> AUTO_INCREMENT should be updated to 4.
Without this fix, it hits the Assertion `dd_val >= last_val' failed in
myrocks::ha_rocksdb::load_auto_incr_value_from_index.
(cherry picked from commit f7154242b8)
Implement according to the SQL:2008 standard specification.
The check_constraints table is used for fetching metadata about
the constraints defined for tables in all databases.
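A minimal usage sketch (table, column and constraint names are made up; the
filter assumes the standard CONSTRAINT_SCHEMA column):
CREATE TABLE t1 (a INT, CONSTRAINT a_positive CHECK (a > 0));
SELECT * FROM information_schema.CHECK_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = 'test';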
An INSERT into a temporary table would fail to set the
index page as modified. If there were no other write operations
(such as UPDATE or DELETE) to the page, and the page was evicted,
we would read back the old contents of the page, causing
corruption or loss of data.
page_cur_insert_rec_write_log(): Call mtr_t::set_modified()
for temporary tables. Normally this is part of the mlog_open()
call, but the mlog_open() call was only present in debug builds.
This regression was caused by
commit 48192f963a
which was preparation for MDEV-11369 and was supposed to affect
debug builds only.
Thanks to Thirunarayanan Balathandayuthapani for debugging.
When a table is renamed to an internal #sql2 or #sql-ib name during
a table-rebuilding DDL operation such as OPTIMIZE TABLE or ALTER TABLE,
and shortly after that a purge operation in an index on virtual columns
is attempted, the operation could fail, but purge would fail to release
the table reference.
innodb_acquire_mdl(): Release the reference if the table name is not
valid for acquiring a meta-data lock (MDL).
innodb_find_table_for_vc(): Add a debug assertion if the table name
is not valid. This code path is for DML execution. The table
should have a valid name for executing DML, and furthermore an MDL
will prevent the table from being renamed.
row_vers_build_clust_v_col(): Add a debug assertion that both indexes
must belong to the same table.
After iterating all fields and setting PART_INDIRECT_KEY_FLAG as
necessary, TABLE::mark_columns_used_by_virtual_fields() remembers
in TABLE_SHARE that this operation was done and need not be repeated.
But as the flag is set in TABLE_SHARE, PART_INDIRECT_KEY_FLAG must
be set in TABLE_SHARE::field[], not only in TABLE::field[].
Otherwise all new TABLEs opened from this TABLE_SHARE will
never have it.
This patch always provides columns of the temporary table used for
materialization of a table value constructor with some names.
Before this patch these names were always borrowed from the items
of the first row of the table value constructor. When this row
contained unnamed expressions, this could cause
different kinds of problems. In particular, if the TVC is used as the
specification of a derived table, this could cause a crash.
The names given to the expressions used in a TVC are the same as those
given to the columns of the result set from the corresponding SELECT.
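For example (a sketch; the derived table alias is arbitrary):
SELECT * FROM (VALUES (1 + 1, 'a'), (2 + 2, 'b')) AS dt;
Before the patch the columns of dt were named after the unnamed expressions of
the first row; now the temporary table used for the materialization always gets
proper column names.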
value constructor shows wrong number of rows
If the specification of a derived table contained a table value constructor
then the optimizer incorrectly estimated the number of rows in the derived
table. This happened because the optimizer did not take into account the
number of rows in the constructor. The wrong estimate could lead to choosing
inefficient execution plans.
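For illustration (a sketch):
EXPLAIN
SELECT * FROM (VALUES (1), (2), (3), (4)) AS dt;
With the fix the rows estimate reported for the derived table should reflect the
four rows of the constructor.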
My conflict resolution for the script did not work out after all,
and apparently I was testing a wrong version. Revert MDEV-15511
from MariaDB 10.2 for now.
trx_purge_add_update_undo_to_history(): Relax the too strict assertion
by removing the condition on srv_fast_shutdown (innodb_fast_shutdown).
Rollback is allowed during any form of shutdown.
buf_dump(): Only generate the output when shutdown is in progress.
log_write_up_to(): Only generate the output before actually writing
to the redo log files.
srv_purge_should_exit(): Rate-limit the output, and instead of
displaying the work done, indicate the work that remains to be done
until the completion of the slow shutdown.
The bug appears because of incorrect work of the pushdown of conditions into
the WHERE clause of a materialized derived table/view. The
excl_dep_on_grouping_fields() method that checks whether a condition can be
pushed into the WHERE clause was missing the case when Item_cond is used.
For Item_cond elements this method always returns a positive result (that the
condition can be pushed). So the condition is pushed even if it shouldn't be.
To fix this, a new Item_cond::excl_dep_on_grouping_fields() method is added.
using INSERT INTO
This patch allows condition pushdown into a materialized derived table / view when
this table is used in INSERT ... SELECT, multi-table UPDATE and multi-table DELETE.
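For illustration, a sketch of a statement that can now benefit from the pushdown
(names are made up):
INSERT INTO t3
SELECT * FROM (SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a) AS dt
WHERE dt.a > 10;
The dt.a > 10 condition can now be pushed into the materialized derived table
instead of being checked only after materialization.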
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide a
good estimate, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be employed instead. This
gives less accurate estimates, but it is cheap. So if the number of equality
dives required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.
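For illustration (a sketch; the limit value and names are arbitrary):
SET SESSION eq_range_index_dive_limit = 10;
-- a statement with more equality ranges than the limit: row estimates for this
-- index come from index statistics instead of per-range index dives
SELECT * FROM t1 WHERE key_col IN (1,2,3,4,5,6,7,8,9,10,11,12);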
The patch partially uses the MySQL code for WL 5957
'Statistics-based Range optimization for many ranges'.
The MySQL 5.7 TRUNCATE TABLE is inherently incompatible
with hot backup, because it is creating and deleting a separate
log file, and it is not writing redo log for all changes of the
InnoDB data dictionary tables. Refuse to create a corrupted backup
if the unsafe form of TRUNCATE was executed.
Note: Undo log tablespace truncation cannot be detected easily.
Also it is incompatible with backup, for similar reasons.
xtrabackup_backup_func(): "Subscribe to" the log events before
the first invocation of xtrabackup_copy_logfile().
recv_parse_or_apply_log_rec_body(): If the function pointer
log_truncate is set, invoke it to report MLOG_TRUNCATE.
Remove the logic for skipping the test if a log checkpoint occurred,
and the logic for tolerating failures. Thanks to MDEV-16791,
MLOG_INDEX_LOAD is supposed to always work.
Amend commit b853b4fd88
that was reverted in commit 29150e2391.
recv_parse_log_recs(): Do check for corrupted redo log or file
system before checking for len==0, but only read *ptr if
it is not past the end of the buffer (end_ptr).
recv_parse_log_rec(): Report incorrect redo log type
in a consistent way with recv_parse_or_apply_log_rec_body().
This is a follow-up to commit f30c5af42e.