Synopsis: if a SELECT's result is returned from the query cache, the statement is not really executed.
The reason the assertion
DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0);
fires is that, when the query cache is on and the same query is run by
different stored routines, the following scenario can take place:
First, let's say that the bodies of the routines used by the test case are
identical and contain only the query 'SELECT * FROM t1';
call p1() -- a result set is stored in the query cache for further use.
call p2() -- the same query is run against the table t1, which results in
             not running the actual query but using its cached result.
             On finishing execution of this routine, its memory root is
             marked read only, since every SP instruction that this
             routine contains has been executed.
INSERT INTO t1 VALUES (1); -- forces invalidation of the query cache
call p2() -- querying the table t1 results in an assertion failure, since
             executing the query would require allocation on the memory
             root that has already been marked read only.
The root cause of the assertion firing is that the memory root of the stored
routine 'p2' was marked read only although the query contained inside it had
never actually been executed.
To fix the issue, mark an SP instruction as not yet run in case its execution
doesn't result in real query processing and the result set is taken from the
query cache instead.
Note that this issue affects only a server built in debug mode AND with the
protect-statement-memory-root feature turned on. It doesn't affect a server
built in release mode.
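The logic of the fix can be illustrated with a minimal self-contained sketch
(all types and names below are illustrative stand-ins, not the actual server
code): a routine's memory root may only be sealed read only once every
instruction in it has really been executed, and an instruction whose result
came from the query cache does not count as executed.

  #include <cassert>
  #include <cstddef>
  #include <vector>

  struct MemRoot {
    bool read_only = false;
    void alloc(std::size_t) {
      assert(!read_only);  // models the DBUG_ASSERT on ROOT_FLAG_READ_ONLY
    }
  };

  struct Instruction {
    bool executed = false;
    void run(MemRoot &root, bool cached_result_available) {
      if (cached_result_available)
        return;            // no real query processing happened
      root.alloc(64);      // real execution allocates on the memory root
      executed = true;     // only a real run counts as executed
    }
  };

  struct Routine {
    MemRoot root;
    std::vector<Instruction> instrs{Instruction{}};
    void call(bool cache_hit) {
      bool all_executed = true;
      for (auto &i : instrs) {
        i.run(root, cache_hit);
        all_executed = all_executed && i.executed;
      }
      if (all_executed)    // the fix: seal only a fully executed routine
        root.read_only = true;
    }
  };

  int main() {
    Routine p2;
    p2.call(/*cache_hit=*/true);   // result served from the query cache
    // Cache invalidated (e.g. by the INSERT); the query must really run:
    p2.call(/*cache_hit=*/false);  // would assert without the fix
  }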
It was updated for 10.6+ in MDEV-7317. Because a lower-version spider
node may connect to a higher-version data node, we need to make this change
for 10.4 and 10.5 as well.
If some unique key fields are nullable, there can be several records with
the same key fields in a unique index, as long as at least one key field is
equal to NULL, because NULL != NULL.
When a transaction is resumed after waiting on a record with at least one
key field equal to NULL, and the record stored in the persistent cursor has
been deleted, the persistent cursor can be restored to a record whose key
fields all equal the stored ones but with at least one field equal to NULL.
Such a record is wrongly treated as having the same unique key as the record
stored in the persistent cursor, which is incorrect because NULL != NULL.
The fix is to check whether at least one unique field is NULL in the
restored persistent cursor position and, if so, not to treat the record as
one with the same unique key as in the stored record.
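The essence of the check in a self-contained sketch (std::optional stands in
for a nullable key field; this is not the actual InnoDB code):

  #include <cassert>
  #include <cstddef>
  #include <optional>
  #include <vector>

  using KeyField = std::optional<int>;   // nullopt models SQL NULL
  using KeyTuple = std::vector<KeyField>;

  // A restored cursor position only counts as "same unique key" if all
  // key fields match AND none of them is NULL, since NULL != NULL.
  bool same_unique_key(const KeyTuple &stored, const KeyTuple &restored) {
    if (stored.size() != restored.size())
      return false;
    for (std::size_t i = 0; i < stored.size(); i++) {
      if (!restored[i].has_value())
        return false;                    // NULL: never the same key
      if (stored[i] != restored[i])
        return false;
    }
    return true;
  }

  int main() {
    // Two records may carry identical key fields (1, NULL) in a unique
    // index, because NULL != NULL. Field-by-field equality is therefore
    // not enough to conclude the cursor was restored to the same key.
    assert(!same_unique_key({1, std::nullopt}, {1, std::nullopt}));
    assert(same_unique_key({1, 2}, {1, 2}));
  }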
dict_index_t::nulls_equal was removed, as it was initially developed for
"intrinsic tables", which never existed in MariaDB, and there is no code
that would set it to "true".
Reviewed by Marko Mäkelä.
In commit d74d95961a (MDEV-18543)
there was an error that would cause the hidden metadata record
to be deleted, and therefore cause the table to appear corrupted
when it is reloaded into the data dictionary cache.
PageConverter::update_records(): Do not delete the metadata record,
but do validate it.
RecIterator::open(): Make the API more similar to 10.6, to simplify
merges.
Same as MDEV-29579. For some reason, libodbc does not clean up properly if
unloaded too early with the dlclose() of spider. So we add UNIQUE symbols to
spider so that it does not reload on dlclose().
This change, however, uncovers some hidden problems in the spider
codebase, for which we move the initialisation of some spider global
variables into the initialisation of spider itself.
Spider has some global variables. Their initialisation should be done in
the initialisation of spider itself; otherwise, if spider were re-initialised
without these symbols being unloaded, the values could be inconsistent and
cause issues.
One such issue is caused by the variables
spider_mon_table_cache_version and spider_mon_table_cache_version_req.
They are used for resetting the spider monitoring table cache and have
initial values of 0 and 1 respectively. The invariant
spider_mon_table_cache_version_req >= spider_mon_table_cache_version always
holds. When the inequality is strict, the cache is reset:
spider_mon_table_cache_version is set equal to
spider_mon_table_cache_version_req, and the cache is then searched for a
matching table_name, db_name and link_idx. When the two are equal, no reset
happens and the cache is searched directly.
When spider is re-initialised without resetting the values of
spider_mon_table_cache_version and spider_mon_table_cache_version_req,
which were set equal to each other by the previous cache reset, no new reset
is triggered even though the cache was emptied during the previous spider
deinit, which unexpectedly results in HA_ERR_KEY_NOT_FOUND.
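A self-contained sketch of this counter pattern, and of why re-initialising
the counters in spider's own init fixes it (the names mirror the description
above, but the cache logic is reduced to a toy):

  #include <cassert>
  #include <map>
  #include <string>

  static long cache_version;      // spider_mon_table_cache_version
  static long cache_version_req;  // spider_mon_table_cache_version_req
  static std::map<std::string, int> mon_table_cache;

  // The fix: (re)set the counters in the plugin's init, not on dlopen().
  void spider_init() {
    cache_version = 0;
    cache_version_req = 1;
    mon_table_cache.clear();
  }

  void spider_deinit() { mon_table_cache.clear(); }

  bool cache_lookup(const std::string &key) {
    if (cache_version_req > cache_version) {  // strict: reset and refill
      mon_table_cache.clear();
      mon_table_cache[key] = 1;               // refill elided to one entry
      cache_version = cache_version_req;
    }
    return mon_table_cache.count(key) != 0;   // else search directly
  }

  int main() {
    spider_init();
    assert(cache_lookup("db.t1#0"));  // first lookup resets, then hits
    spider_deinit();                  // cache emptied, counters left equal
    // Without re-running spider_init() here, the next lookup would search
    // an empty cache without resetting it: the HA_ERR_KEY_NOT_FOUND case.
    spider_init();                    // fix: counters reinitialised on init
    assert(cache_lookup("db.t1#0"));
  }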
An alternative way to fix this issue would be to call the spider UDF
spider_flush_mon_cache_table(), which increments
spider_mon_table_cache_version_req, thus making sure the inequality is
strict. However, there's no reason for spider to initialise these
global variables on dlopen() rather than on spider init, which is
cleaner and "purer".
To reproduce this issue, simply revert the changes involving the two
variables and then run:
mtr --no-reorder spider.ha{,_part}
Commit 6dce6aeceb breaks out of a loop in ha_partition::info when some
partitions aren't opened, in which case the auto_increment_value assertion
will fail. This commit patches that hole.
The spider group by handler is created in JOIN::make_aggr_tables_info(), by
which time calls to substitute_for_best_equal_field() should have already
removed all the multiple equalities (i.e. Item_equal, with MULT_EQUAL_FUNC
func_type). Therefore, if there are still such items, it is deemed an
optimizer bug and they should be skipped.
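The defensive check can be sketched as follows (a toy, not the spider
source): if any multiple equality is still present among the condition
items, the handler is simply not created and pushdown is skipped.

  #include <memory>
  #include <vector>

  enum FuncType { EQ_FUNC, MULT_EQUAL_FUNC /*, ... */ };
  struct Item { FuncType func_type; };
  struct GroupByHandler {};

  // Returns nullptr (no pushdown) when an Item_equal is still present,
  // treating it as an optimizer bug rather than trying to push it down.
  std::unique_ptr<GroupByHandler>
  create_group_by_handler(const std::vector<Item> &cond_items) {
    for (const Item &item : cond_items)
      if (item.func_type == MULT_EQUAL_FUNC)
        return nullptr;               // skip: leave execution to the server
    return std::make_unique<GroupByHandler>();
  }

  int main() {
    std::vector<Item> items{{EQ_FUNC}, {MULT_EQUAL_FUNC}};
    return create_group_by_handler(items) == nullptr ? 0 : 1;
  }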
Also removed ITEM_FUNC_TIMESTAMPDIFF_ARE_PUBLIC.
Similar to pr#2225, with the testcase adapted from that patch:
--8<---------------cut here---------------start------------->8---
From 884f7c6df1 Mon Sep 17 00:00:00 2001
From: "Norio Akagi (norakagi)" <norakagi@amazon.com>
Date: Wed, 3 Aug 2022 23:30:34 -0700
Subject: [PATCH] [MDEV-28992] Push down TIMESTAMP_DIFF in spider
This changes so that TIMESTAMP_DIFF function in a query is pushed down and works natively in Spider.
Instead of directly accessing item's member, now we can rely on a public accessor method to make it work.
Unit tests are added under spider.pushdown_timestamp_diff.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
--8<---------------cut here---------------end--------------->8---
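The accessor change can be sketched like this (the class and method names
below are hypothetical stand-ins, not the exact server API): the pushdown
code reads the interval type through a public getter instead of reaching
into a private member, which is what made the
ITEM_FUNC_TIMESTAMPDIFF_ARE_PUBLIC hack unnecessary.

  class Item_func_timestamp_diff_sketch {
  public:
    enum interval_type { INTERVAL_SECOND, INTERVAL_DAY /*, ... */ };
    explicit Item_func_timestamp_diff_sketch(interval_type t)
        : int_type(t) {}
    // Public accessor: what the pushdown code now relies on.
    interval_type get_int_type() const { return int_type; }
  private:
    interval_type int_type;  // previously only reachable by direct access
  };

  int main() {
    Item_func_timestamp_diff_sketch d(
        Item_func_timestamp_diff_sketch::INTERVAL_SECOND);
    return d.get_int_type() ==
                   Item_func_timestamp_diff_sketch::INTERVAL_SECOND
               ? 0
               : 1;
  }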
- InnoDB unnecessarily reserves free extents during blob page allocation,
  even though btr_page_alloc() can handle reserving an extent when the
  existing ones run out of pages to be used.
JOIN_CACHE has a light-weight initialization mode that's targeted at
EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.
Light-weight mode was used whenever the statement was an EXPLAIN. However,
an EXPLAIN can execute subqueries, provided they enumerate fewer than
@@expensive_subquery_limit rows.
Make sure we use light-weight initialization mode only when the select is
more expensive than @@expensive_subquery_limit.
Also add an assert into JOIN_CACHE::put_record() which prevents its use
if it was initialized for EXPLAIN only.
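A self-contained sketch of the rule and the new assert (simplified names,
not the server's JOIN_CACHE; the cost measure is reduced to a row count):

  #include <cassert>

  struct JoinCacheSketch {
    bool for_explain_only = false;

    void init(bool is_explain, unsigned long long rows_to_enumerate,
              unsigned long long expensive_subquery_limit) {
      // Light-weight init only when the select is too expensive to be
      // executed under EXPLAIN; otherwise it may run, so fully initialize.
      for_explain_only =
          is_explain && rows_to_enumerate > expensive_subquery_limit;
    }

    void put_record() {
      assert(!for_explain_only);  // an EXPLAIN-only cache must never execute
      // ... write the joined record into the cache buffer ...
    }
  };

  int main() {
    JoinCacheSketch cache;
    // A cheap subquery can be executed even under EXPLAIN, so the cache
    // must be fully initialized and remain writable.
    cache.init(/*is_explain=*/true, /*rows_to_enumerate=*/10,
               /*expensive_subquery_limit=*/100);
    cache.put_record();
  }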
Fix a case of stack-use-after-return reported by ASAN in
Wsrep_schema_impl::open_table(). This function had a stack-allocated
TABLE_LIST object and returned TABLE_LIST::table to the caller.
Changed the function to take a TABLE_LIST pointer as an argument.
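The bug pattern and the fix in a self-contained sketch (simplified types,
not the wsrep code): instead of returning a pointer into a stack-allocated
TABLE_LIST that dies when open_table() returns, the caller owns the
TABLE_LIST and passes it in, so the returned table pointer stays valid.

  struct TABLE {};
  struct TABLE_LIST { TABLE table; };

  // Before (buggy shape): returned &tl.table of a local, so the caller
  // dereferenced freed stack memory -- the use-after-return ASAN reported.
  // TABLE *open_table() { TABLE_LIST tl; return &tl.table; }

  // After: the TABLE_LIST lives in the caller's frame.
  TABLE *open_table(TABLE_LIST *tl) {
    // ... fill in *tl ...
    return &tl->table;   // valid as long as the caller keeps tl alive
  }

  int main() {
    TABLE_LIST tl;               // caller-owned storage
    TABLE *t = open_table(&tl);  // pointer remains valid here
    (void)t;
  }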
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Partial documentation due to time constraints. Will improve over time.
Also removed a redundant parameter link_idx from
spider_get_sys_tables_connect_info().
And deleted some commented-out code.
- Fix to avoid the mysqltest client getting killed abruptly during
  mysql_shutdown(). When Galera replication is shut down, wait for
  THDs with `thd->stmt_da()->is_eof()` to disconnect (these are about
  to disconnect anyway).
- Extract duplicate code from `wsrep_stop_replication()` and
`wsrep_shutdown_replication()` in a new function.
- No need to use a custom `shutdown_mysqld.inc` in galera
suite. Delete it, so that the one in `mysql-test/include/` is used.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Background encryption threads wait for the stop flag in order to exit
  the tablespace early. The alter operation fails to set the stop flag
  before waiting for the encryption thread to stop using the tablespace.
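The ordering requirement in a minimal sketch (a toy with std::atomic, not
the InnoDB code): the stop flag must be set before waiting, otherwise the
background thread has no reason to leave its loop early.

  #include <atomic>
  #include <thread>

  std::atomic<bool> stop_requested{false};
  std::atomic<bool> thread_done{false};

  void encryption_thread() {
    while (!stop_requested.load())  // exits early once the flag is set
      std::this_thread::yield();    // (a real thread would encrypt pages)
    thread_done.store(true);
  }

  int main() {
    std::thread bg(encryption_thread);
    stop_requested.store(true);     // the fix: set the flag first...
    while (!thread_done.load())     // ...then wait for the thread to stop
      std::this_thread::yield();
    bg.join();
  }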
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Fix function `remove_fragment()` in wsrep_schema so that no error is
raised if the fragment to be removed is not found in the
wsrep_streaming_log table. This is necessary to handle the case where a
streaming transaction in idle state is BF aborted. That may result in
the rollbacker thread successfully removing the transaction's fragments,
followed by the applier's attempt to remove the same fragments, causing
the node to leave the cluster after reporting a "Failed to apply write
set" error.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
MTR buildbot output suggests that buildbot can lose some stdout information
by overwriting it with stderr, which is captured separately.
This is bad, since stdout contains information about the failing test.
So this is an attempt to minimize the damage by excluding the most frequent
stderr messages, those about restarts.
MDEV-33558 Fatal error InnoDB: Clustered record field for column x not found
This issue is about row ID filtering used with an index on virtual
column(s). We hit a debug assert and crash while building the record
template in InnoDB. The primary reason is that we try to force the code
path to use the ICP path. With ICP, we don't support an index with virtual
columns, and we validate that when the index condition is pushed.
Simplify the code for building the template to handle both ICP and row ID
filtering by skipping virtual columns.
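A sketch of the simplified template-building loop (illustrative structures,
not the InnoDB source): a single loop serves both ICP and row ID filtering
by skipping virtual columns, which neither path evaluates from the template.

  #include <vector>

  struct Column { bool is_virtual; int field_no; };
  struct TemplateEntry { int field_no; };

  std::vector<TemplateEntry>
  build_record_template(const std::vector<Column> &cols) {
    std::vector<TemplateEntry> templ;
    for (const Column &col : cols) {
      if (col.is_virtual)
        continue;          // skip: avoids the debug assert on virtual cols
      templ.push_back({col.field_no});
    }
    return templ;
  }

  int main() {
    std::vector<Column> cols{{false, 0}, {true, 1}, {false, 2}};
    return build_record_template(cols).size() == 2 ? 0 : 1;
  }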
Problem:
======
- InnoDB fails to do an instant operation while adding a variable
length column. The problem is that InnoDB wrongly assumes that
a variable-length character field can never be part of an externally
stored page.
Solution:
========
instant_alter_column_possible(): A variable-length character
field can be stored on an externally stored page.
There were several races in the main.kill_processlist-6619 testcase:
- Lingering connections from a previous test case could be visible in SHOW
PROCESSLIST and cause .result diff.
- A sync point "dispatch_command_end" was ineffective, as it was consumed at
the end of the SET DEBUG command itself.
- The signal from sync point "before_execute_sql_command" could override an
earlier signal, causing DEBUG_SYNC timeout and test failure.
- The final SHOW PROCESSLIST could occasionally see a connection in state
"Busy" instead of the expected "Sleep".
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
In case there is a view that is queried from a stored routine or
a prepared statement, and the temporary table this view is based on is
dropped between executions of the SP/PS, it leads to hitting an assertion
in SELECT_LEX::fix_prepare_information. The fired assertion
was added by the commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means memory is leaked on execution of the SP/PS.
Moreover, if the added assert is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.
Both hitting the assertion and the differing result sets have the same root
cause: usage of the temporary table's metadata after the table itself has
been dropped. To fix the issue, reload the cache of stored routines. To do
so, the cache of stored routines is reset at the end of execution of the
function dispatch_command(). The next time any stored routine is called, it
will be loaded from the table mysql.proc. This happens inside the method
Sp_handler::sp_cache_routine, where loading of a stored routine is performed
in case it is missing from the cache. Loading is now performed
unconditionally, while previously it was controlled by the parameter
lookup_only. For that reason, the signature of the method
Sroutine_hash_entry::sp_cache_routine was changed by removing the unused
parameter lookup_only.
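The reload scheme in a self-contained sketch (a toy cache with names
simplified from the description above, not the server code): the routine
cache is cleared when a command finishes, so the next call reloads the
routine from mysql.proc and cannot see stale metadata.

  #include <map>
  #include <string>

  static std::map<std::string, std::string> sp_cache;  // name -> body

  std::string load_from_mysql_proc(const std::string &name) {
    (void)name;                 // stand-in for the real mysql.proc lookup
    return "BEGIN SELECT * FROM t1; END";
  }

  // Mirrors the described Sp_handler::sp_cache_routine behaviour: load
  // unconditionally on a miss (no lookup_only mode any more).
  const std::string &sp_cache_routine(const std::string &name) {
    auto it = sp_cache.find(name);
    if (it == sp_cache.end())
      it = sp_cache.emplace(name, load_from_mysql_proc(name)).first;
    return it->second;
  }

  void dispatch_command_end() {
    sp_cache.clear();  // reset at the end of dispatch_command()
  }

  int main() {
    sp_cache_routine("p1");   // loaded from mysql.proc
    dispatch_command_end();   // cache reset: stale metadata cannot survive
    sp_cache_routine("p1");   // reloaded fresh on the next call
  }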
Clearing of the SP caches affects the test main.lock_sync, since it forces
opening and locking the table mysql.proc, but the test assumes that each
statement locks its tables only once during its execution. To keep this
invariant, the debug sync points named "before_lock_tables_takes_lock" and
"after_lock_tables_takes_lock" are not activated when handling the table
mysql.proc.
When rolling back and retrying a transaction in parallel replication, don't
release the domain ownership (for --gtid-ignore-duplicates) as part of the
rollback. Otherwise another master connection could grab the ownership and
double-apply the transaction in parallel with the retry.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
An UPDATE statement that is run in PS mode and uses a positional parameter
handles columns declared with the clause DEFAULT NULL incorrectly in case
the DEFAULT clause is passed as the actual value for the positional
parameter of the prepared statement. A similar issue happens in case an
expression is specified in the DEFAULT clause of a table column's definition.
The reason for the incorrect processing of columns declared as DEFAULT NULL
is that setting of the null flag for the field being updated was missed in
the implementation of the method Item_param::assign_default().
The reason for the incorrect handling of an expression in the DEFAULT clause
is that saving of the field inside the implementation of the method
Item_param::assign_default() was missed as well.
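Both missed steps in a minimal sketch (simplified stand-in types, not the
server's Field/Item_param classes): assign_default() must both remember
which field it applies to and propagate the field's NULL default.

  #include <cassert>

  struct Field {
    bool has_null_default = true;  // column declared DEFAULT NULL
    bool is_null = false;
    void set_default() { is_null = has_null_default; }
  };

  struct ItemParamSketch {
    Field *field = nullptr;        // the missed "saving of the field"
    void assign_default(Field *f) {
      field = f;                   // fix 1: remember the target field
      field->set_default();        // fix 2: apply default, incl. null flag
    }
  };

  int main() {
    Field col;                     // column ... DEFAULT NULL
    ItemParamSketch param;
    param.assign_default(&col);    // UPDATE t SET col = ? with DEFAULT
    assert(col.is_null);           // null flag is now set as expected
  }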
signal_hand(): Remove the cmake -DWITH_DBUG_TRACE=ON instrumentation.
It can cause a crash on shutdown when the only other thread is
waiting in wait_for_signal_thread_to_end().