1. Add an explicit indication that the output is produced by the
SHOW EXPLAIN/ANALYZE FORMAT=JSON command.
2. Remove the useless "r_total_time_ms" field from SHOW ANALYZE FORMAT=JSON
output when no timed statistics have been gathered.
3. Add "r_query_time_in_progress_ms" to the output of SHOW ANALYZE FORMAT=JSON.
EXPLAIN FOR CONNECTION is a MySQL-compatible syntax for SHOW EXPLAIN.
This commit also adds support for FORMAT=JSON to SHOW EXPLAIN,
so the possible options to get JSON-formatted output are:
- SHOW EXPLAIN FORMAT=JSON FOR $con
- EXPLAIN FORMAT=JSON FOR CONNECTION $con
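A minimal usage sketch (the connection id 123 is a placeholder; on a real
server it would be taken from SHOW PROCESSLIST):

  SHOW EXPLAIN FORMAT=JSON FOR 123;
  EXPLAIN FORMAT=JSON FOR CONNECTION 123;

Both statements return the JSON-formatted plan of the statement currently
running in connection 123.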
Problem:
========
The test logic checked the wrong condition to validate that the
slave had caught up with the master. Specifically, it required the
IO and SQL threads to be in the “Waiting for master to send event”
and “Slave has read all relay log; waiting for more updates” stages,
respectively. The problem exposed by this MDEV is that these are also
the initial slave states before reading data from the primary (whereas
the intended state was having already read all available events from
the primary and now waiting for new events). This made the MTR test
validate data that it had not yet received, and thereby fail.
Solution:
========
Instead of using the IO/SQL thread states, use the existing helper
functions save_master_gtid.inc and sync_with_master_gtid.inc. Note
that the test result file also needed to be updated to reflect
this fix.
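The resulting synchronization pattern in the test is the standard one
(a sketch; connection names are illustrative):

  --connection master
  --source include/save_master_gtid.inc
  --connection slave
  --source include/sync_with_master_gtid.inc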
Special thanks to Angelique Sklavounos for pointing out that
--stop-position was not specified in any buildbot failures, as this
led to an IF block in the MTR test that was the source of the test
failure.
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>
Expression_cache_tmptable object uses an Expression_cache_tracker object
to report the statistics.
In the common scenario, Expression_cache_tmptable destructor sets
tracker->cache=NULL. The tracker object survives after the expression
cache is deleted and one may call cache_tracker->fetch_current_stats()
for it with no harm.
However, a degenerate cache with no parameters does not set
tracker->cache=NULL in the Expression_cache_tmptable destructor, which
results in an attempt to use freed data in the
cache_tracker->fetch_current_stats() call.
Fixed by setting tracker->cache to NULL and wrapping the assignment into
a function.
As agreed with Serg, renaming the class Yesno to Yes_or_empty,
to better reflect its behavior.
This helper class is used to define INFORMATION_SCHEMA columns
that return either "Yes" or an empty string.
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved, then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references; rather, it pointed to the first element in
the select_stack.
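A hypothetical statement of the shape that triggered the crash (all names
are made up; bad_col is not resolvable in any enclosing context, so with
the fix the server reports an "Unknown column" error instead of crashing):

  INSERT INTO t1
  SELECT t2.a
  FROM t2 JOIN t3
    ON (SELECT t4.b FROM t4 WHERE t4.c = bad_col) IS NOT NULL;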
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
btr_insert_into_right_sibling(): Inherit any gap lock from the
left sibling to the right sibling before inserting the record
to the right sibling and updating the node pointer(s).
lock_update_node_pointer(): Update locks in case a node pointer
will move.
Based on mysql/mysql-server@c7d93c274f
- The problem:
==============
- Commit f7216fa63d added a check of the default temporary storage engine:
if the SE doesn't support temporary tables, the error
`ER_ILLEGAL_HA_CREATE_OPTION` is raised.
Before that commit, in such cases temporary tables were created by silently
substituting MyISAM for the default SE (RocksDB, Connect, PerfSchema).
- The test `pr_diagnostics.test` was modified in that commit to expect the
error, without checking the root cause of the test failure itself.
- The solution:
===============
- This commit fixes the root cause: the procedure `ps_setup_save()` creates
temporary tables from Performance Schema table definitions using `LIKE`,
which is not supported. The suggested fix is to use InnoDB tables by using
`AS SELECT`, as sketched after this list.
- Note that the test `pr_diagnostics` raises this error for the
`medium`/`full` values of the third argument, but not for the `current`
value.
- Additionally, this patch updates the test case of commit f7216fa by adding
a missing test of temporary tables with Performance Schema to the
`perfschema.misc` test.
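A sketch of the change (the temporary table name follows ps_setup_save(),
but the exact statements are illustrative):

  -- before: LIKE copies the PERFORMANCE_SCHEMA engine into the temporary
  -- table, which now raises ER_ILLEGAL_HA_CREATE_OPTION:
  -- CREATE TEMPORARY TABLE tmp_setup_actors LIKE performance_schema.setup_actors;
  -- after: copy the data into an InnoDB temporary table instead:
  CREATE TEMPORARY TABLE tmp_setup_actors ENGINE=InnoDB
    AS SELECT * FROM performance_schema.setup_actors;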
- Reviewed by: <wlad@mariadb.com>
buf_flush_page(): Never wait for a page latch, even in checkpoint
flushing (flush_type == BUF_FLUSH_LIST), to prevent a hang of the
page cleaner threads when a large number of pages is latched.
In mysql/mysql-server@9542f3015b
it was claimed that such a hang only affects CREATE FULLTEXT INDEX.
Their fix was to retain buffer-fix but release exclusive latch
on non-leaf pages, and subsequently write to those pages while
they are not associated with the mini-transaction, which would
trip a debug assertion in the MariaDB version of
mtr_t::memo_modify_page() and cause potential corruption
when using the default MariaDB setting innodb_log_optimize_ddl=OFF.
This change essentially backports a small part of
commit 7cffb5f6e8 (MDEV-23399)
from MariaDB Server 10.5.7.
This commit fixes a crash reported as MDEV-28377 and a number
of other crashes in automated tests with mtr that are related
to broken .cnf files in galera and galera_3nodes suites, which
happened when automatically migrating MDEV-26171 from 10.3 to
subsequent higher versions.
Two bugs here:
1. CHECKSUM TABLE asserted that all fields in the table are arranged
sequentially in the record, but virtual columns are always at the
end, violating this assertion
2. virtual columns were not calculated for CHECKSUM, so CHECKSUM
was using, essentially, garbage left over from the previous statement.
(That's why the test must use INSERT IGNORE to have this "previous
statement" mark vcols not null.)
Fix: don't include virtual columns in the table CHECKSUM. Indeed,
they cannot be included, as the engine does not see virtual columns,
so the in-engine checksum cannot include them, meaning the in-server
checksum should not either.
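A minimal illustration (a hypothetical table; the point is that both code
paths must checksum only the base column a, so their results can agree):

  CREATE TABLE t1 (a INT, v INT AS (a + 1) VIRTUAL) ENGINE=MyISAM CHECKSUM=1;
  INSERT INTO t1 (a) VALUES (1), (2);
  CHECKSUM TABLE t1 QUICK;    -- live in-engine checksum: never sees vcols
  CHECKSUM TABLE t1 EXTENDED; -- in-server checksum: with the fix, skips vcols too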
This follows up the previous fix in
commit c3c53926c4 (MDEV-26554).
ha_innobase::delete_table(): Work around the insufficient
metadata locking (MDL) during DML operations by acquiring exclusive
InnoDB table locks on all child tables. Previously, this was only
done on TRUNCATE and ALTER.
ibuf_delete_rec(), btr_cur_optimistic_delete(): Do not invoke
lock_update_delete() during change buffer operations.
The revised trx_t::commit(std::vector<pfs_os_file_t>&) will
hold exclusive lock_sys.latch while invoking fil_delete_tablespace(),
which in turn may invoke ibuf_delete_rec().
dict_index_t::has_locking(): A new predicate, replacing the dummy
!dict_table_is_locking_disabled(index->table). Used for skipping lock
operations during ibuf_delete_rec().
trx_t::commit(std::vector<pfs_os_file_t>&): Release the locks
and remove the table from the cache while holding exclusive
lock_sys.latch.
trx_t::commit_in_memory(): Skip release_locks() if dict_operation holds.
trx_t::commit(): Reset dict_operation before invoking commit_in_memory()
via commit_persist().
lock_release_on_drop(): Release locks while lock_sys.latch is
exclusively locked.
lock_table(): Add a parameter for a pointer to the table.
We must not dereference the table before a lock_sys.latch has
been acquired. If the pointer to the table does not match the table
at that point, the table is invalid and DB_DEADLOCK will be returned.
row_ins_foreign_check_on_constraint(): Improve the checks.
Remove a bogus DB_LOCK_WAIT_TIMEOUT return that was needed
before commit c5fd9aa562 (MDEV-25919).
row_upd_check_references_constraints(),
wsrep_row_upd_check_foreign_constraints(): Simplify checks.
Precision should be kept below DECIMAL_MAX_SCALE for computations.
It can be bigger in Item_decimal. I'd fix this too, but it changes the
existing behaviour, so it is problematic to fix.
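For context, a sketch of the user-visible counterpart of this limit
(assuming a recent MariaDB version, where the maximum declarable scale of
a DECIMAL is 38, which is what DECIMAL_MAX_SCALE reflects):

  SELECT CAST(0.5 AS DECIMAL(38,38));  -- ok: scale at the limit
  SELECT CAST(0.5 AS DECIMAL(39,39));  -- error: scale too big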
The cause of crash:
remove_redundant_subquery_clauses() removes redundant item expressions.
The primary goal of this is to remove the subquery items.
The removal process unlinks the subquery from the SELECT_LEX tree, but does
not remove it from SELECT_LEX::ref_pointer_array or from JOIN::all_fields.
Then, setup_subquery_caches() tries to wrap the subquery item in an
expression cache, which fails, the first reason for failure being that
the item doesn't have a query plan.
Solution: do not wrap eliminated items with expression cache.
(also added an assert to check that we do not attempt to execute them).
This may look like an incomplete fix: why don't we remove any mention
of eliminated item everywhere? The difficulties here are:
* items can be "un-removed" (see set_fake_select_as_master_processor)
* it's difficult to remove an element from ref_pointer_array: Item_ref
objects refer to elements of that array, so one can't shift elements in
it. Replacing eliminated subselect with a dummy Item doesn't look like a
good idea, either.
The macro MY_GNUC_PREREQ() was used for testing for some minor
GCC 4 versions before GCC 4.8.5, which is the oldest version
that supports C++11, on which we have depended ever since
commit d9613b750c.
- InnoDB should avoid the bulk insert operation when the table has active
DDL, because bulk insert writes only one undo log record (TRX_UNDO_EMPTY),
and the logging of concurrent DML, which happens at commit time, uses the
undo log records to parse and get the values and operations.
- Removed ROW_T_EMPTY, ROW_OP_EMPTY and their associated functions,
and also the test case which tries to log ROW_OP_EMPTY
when the table has active DDL.
Analysis: When trying to find a path and handling the match for it,
the value at the current index in array_counters is not set to 0. This
causes a wrong current step value, which eventually causes a wrong
cur_step->type value.
Fix: Set the value at the current index in array_counters to 0.
constructor
Analysis: the counter does not increment while sending rows for a table
value constructor, and so row_number assumes the default value (0 in this
case).
Fix: Increment the counter to avoid it using the default value.
Analysis: There were two kinds of failing tests on buildbot with UBSAN.
1) runtime error: signed integer overflow, and
2) runtime error: load of value is not valid value for type
The signed integer overflow was occurring because the addition of two
integers (size of the json array + item number in the array) overflowed in
json_path_parts_compare. This overflow happens because a->n_item_end
wasn't set.
The second error was occurring because c_path->p.types_used is not
initialized, but the value is used later on to check for a negative path
index.
Fix: For the signed integer overflow, use a->n_item_end only in the case
of a range, so that it is set.
upon HANDLER READ
Analysis: The error state is not stored while checking the condition and
key name.
Fix: Return true while checking the condition and key name if an error is
reported, because a geometry object can't be created from the data in the
index value for HANDLER READ.
Analysis: When trying to compare json paths, the array_sizes variable is
NULL at the beginning. But computing an address by adding an offset to this
NULL pointer while recursively calling json_path_parts_compare() to handle
a double wildcard is undefined behaviour, and the array_sizes variable
eventually becomes non-NULL (points to some address).
This eventually results in a crash.
Fix: If the array_sizes variable is NULL, then pass NULL recursively as
well.