The problem was that Item_default_value::associate_with_target_field
assigned the field passed as an argument to its own argument, which
changed that argument in the case of a default() call with an explicit
field (i.e. default(field)).
There is no way to get a wrong field in the constructor, so we will not
reassign the parameter.
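A minimal illustration (hypothetical schema) of the affected construct:

  CREATE TABLE t1 (a INT DEFAULT 10, b INT);
  INSERT INTO t1 (b) VALUES (DEFAULT(a)); -- default() call with an explicit field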
Problem:
========
mariabackup --prepare fails to write the pages in encrypted format.
This issue happens only for tables with default encryption when the
innodb_encrypt_tables variable is enabled.
Fix:
====
The backup process should write the value of the innodb_encrypt_tables
variable to the configuration file. The prepare step should enable the
variable based on the configuration file.
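For illustration, the recorded setting might look like this in the
backup's configuration file (the exact file name and section are
assumptions here):

  [mysqld]
  innodb_encrypt_tables=ON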
The test was waiting for the INSERT statement to roll back, but the
wait_condition was too tight. The thread state could be either
"Freeing items" or "Rollback". Fixed the wait_condition
to expect either of them.
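A sketch of the relaxed condition (the surrounding test details are
assumptions):

  let $wait_condition=
    SELECT COUNT(*) = 1 FROM information_schema.processlist
    WHERE state IN ('Freeing items', 'Rollback');
  --source include/wait_condition.inc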
Synopsis: if a SELECT returns its answer from the query cache, it is not
really executed.
The reason for firing of the assertion
DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0);
is that, in case the query cache is on and the same query is run by
different stored routines, the following use case can take place:
First, let's say that the bodies of the routines used by the test case
are the same and contain the single query 'SELECT * FROM t1';
call p1() -- a result set is stored in the query cache for further use.
call p2() -- the same query is run against the table t1, which results in
            not running the actual query but using its cached result.
            On finishing execution of this routine, its memory root is
            marked as read only since every SP instruction that this
            routine contains has been executed.
INSERT INTO t1 VALUES (1); -- forces subsequent invalidation of the query cache
call p2() -- querying the table t1 results in the assertion failure,
            since real execution would require allocation on the memory
            root that has already been marked as read only
The root cause of firing the assertion is that the memory root of the
stored routine 'p2' was marked as read only although the actual execution
of the query contained inside it hadn't been performed.
To fix the issue, mark a SP instruction as not yet run in case its
execution doesn't result in real query processing and the result set is
taken from the query cache instead.
Note that this issue relates to a server built in debug mode AND with the
protected statement memory root feature turned on. It doesn't affect a
server built in release mode.
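A compact repro sketch of the scenario above (assumes the query cache is
enabled, e.g. query_cache_type=ON with a non-zero query_cache_size, and a
debug build as noted):

  CREATE TABLE t1 (a INT);
  CREATE PROCEDURE p1() SELECT * FROM t1;
  CREATE PROCEDURE p2() SELECT * FROM t1;
  CALL p1();                 -- result set is stored in the query cache
  CALL p2();                 -- served from the cache; p2's memory root is
                             -- marked read only without real execution
  INSERT INTO t1 VALUES (1); -- invalidates the cached result
  CALL p2();                 -- real execution now requires allocation on
                             -- the read-only memory root => assertion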
In the case where some unique key fields are nullable, there can be
several records with the same key field values in a unique index, as long
as at least one key field is NULL, because NULL != NULL.
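For illustration (hypothetical table):

  CREATE TABLE t (a INT, b INT, UNIQUE KEY (a, b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, NULL), (1, NULL); -- both rows are accepted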
When a transaction is resumed after waiting on a record with at least one
key field equal to NULL, and the record stored in the persistent cursor
has been deleted, the persistent cursor can be restored to a record with
all key fields equal to the stored ones, but with at least one field equal
to NULL. Such a record is wrongly treated as having the same unique key as
the record stored in the persistent cursor, which is incorrect as
NULL != NULL.
The fix is to check if at least one unique field is NULL in the restored
persistent cursor position, and, if so, not to treat the record as one
with the same unique key as in the stored record.
dict_index_t::nulls_equal was removed, as it was initially developed for
"intrinsic tables", which never existed in MariaDB, and there is no code
that would set it to "true".
Reviewed by Marko Mäkelä.
In commit d74d95961a (MDEV-18543)
there was an error that would cause the hidden metadata record
to be deleted, and therefore cause the table to appear corrupted
when it is reloaded into the data dictionary cache.
PageConverter::update_records(): Do not delete the metadata record,
but do validate it.
RecIterator::open(): Make the API more similar to 10.6, to simplify
merges.
Commit 6dce6aeceb breaks out of a loop
in ha_partition::info when some partitions aren't opened, in which
case the auto_increment_value assertion will fail. This commit patches
that hole.
- InnoDB unnecessarily reserves free extents during blob
page allocation even though btr_page_alloc() can handle
reserving an extent when the existing ones run out of pages to be used.
JOIN_CACHE has a light-weight initialization mode that's targeted at
EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.
The light-weight mode was used whenever the statement was an EXPLAIN.
However, EXPLAIN can execute subqueries, provided they enumerate fewer
than @@expensive_subquery_limit rows.
Make sure we use the light-weight initialization mode only when the
select is more expensive than @@expensive_subquery_limit (and so will not
be executed during EXPLAIN).
Also add an assert into JOIN_CACHE::put_record() which prevents its use
if it was initialized for EXPLAIN only.
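For illustration, an EXPLAIN like the following can actually execute the
subquery when its estimated number of rows is below the limit
(hypothetical tables):

  SET @@expensive_subquery_limit= 100;
  EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT b FROM t2);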
- Fix to avoid the mysqltest client getting killed abruptly during
mysql_shutdown(). When Galera replication is shut down, wait for
THDs with `thd->stmt_da()->is_eof()` to disconnect (these are about
to disconnect anyway).
- Extract duplicate code from `wsrep_stop_replication()` and
`wsrep_shutdown_replication()` into a new function.
- No need to use a custom `shutdown_mysqld.inc` in galera
suite. Delete it, so that the one in `mysql-test/include/` is used.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Fix function `remove_fragment()` in wsrep_schema so that no error is
raised if the fragment to be removed is not found in the
wsrep_streaming_log table. This is necessary to handle the case where
a streaming transaction in idle state is BF aborted. This may result in
the rollbacker thread successfully removing the
transaction's fragments, followed by the applier's attempt to remove
the same fragments, causing the node to leave the cluster after
reporting a "Failed to apply write set" error.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
MTR buildbot output suggests that buildbot can lose some stdout
information by overwriting it with stderr, which is captured separately.
This is bad, since stdout contains information about the failing test.
So, this is an attempt to minimize the damage by excluding the most
frequent stderr messages - those about restarts.
MDEV-33558 Fatal error InnoDB: Clustered record field for column x not found
This issue is about row ID filtering used with an index on virtual
column(s). We hit a debug assert and crash while building the record
template in InnoDB. The primary reason is that we try to force the code
path to use the ICP path. With ICP, we don't support an index with
virtual columns, and we validate that while the index condition is pushed.
Simplify the code for building the template to handle both ICP and row ID
filtering by skipping virtual columns.
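For illustration, the affected shape is an index on a virtual column
(hypothetical table):

  CREATE TABLE t (a INT, v INT AS (a + 1) VIRTUAL, KEY (v)) ENGINE=InnoDB;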
Problem:
========
- InnoDB fails to do an instant operation while adding a variable
length column. The problem is that InnoDB wrongly assumes that a
variable-length character field can never be part of an externally
stored page.
Solution:
========
instant_alter_column_possible(): A variable-length
character field can be stored in an externally stored page.
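For illustration, an instant ADD COLUMN of a long variable-length
character field (hypothetical table) should now be possible:

  ALTER TABLE t ADD COLUMN c VARCHAR(5000) DEFAULT '', ALGORITHM=INSTANT;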
There were several races in the main.kill_processlist-6619 testcase:
- Lingering connections from a previous test case could be visible in SHOW
PROCESSLIST and cause a .result diff.
- A sync point "dispatch_command_end" was ineffective, as it was consumed
at the end of the SET DEBUG command itself (see the sketch after this
list).
- The signal from sync point "before_execute_sql_command" could override an
earlier signal, causing DEBUG_SYNC timeout and test failure.
- The final SHOW PROCESSLIST could occasionally see a connection in state
"Busy" instead of the expected "Sleep".
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
In case a view based on a temporary table is queried from a stored
routine or a prepared statement, and this temporary table is dropped
between executions of the SP/PS, it leads to hitting an assertion
at SELECT_LEX::fix_prepare_information. The fired assertion
was added by the commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means a memory leak on execution of the SP/PS.
Moreover, if the added assert is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.
Both hitting the assertion and the different result sets have the same
root cause: usage of the temporary table's metadata after the table
itself has been dropped. To fix the issue, reload the cache of stored
routines. To do it, the cache of stored routines is reset at the end of
execution of the function dispatch_command(). The next time any stored
routine is called, it will be loaded from the table mysql.proc. This
happens inside the method Sp_handler::sp_cache_routine, where loading of
a stored routine is performed in case it is missing from the cache.
Loading is now performed unconditionally, while previously it was
controlled by the parameter lookup_only. For that reason, the signature
of the method Sroutine_hash_entry::sp_cache_routine was changed by
removing the unused parameter lookup_only.
Clearing of the SP caches affects the test main.lock_sync, since it
forces opening and locking the table mysql.proc, but the test assumes
that each statement locks its tables only once during its execution. To
keep this invariant, the debug sync points with the names
"before_lock_tables_takes_lock" and "after_lock_tables_takes_lock" are
not activated on handling the table mysql.proc.
When rolling back and retrying a transaction in parallel replication, don't
release the domain ownership (for --gtid-ignore-duplicates) as part of the
rollback. Otherwise another master connection could grab the ownership and
double-apply the transaction in parallel with the retry.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
An UPDATE statement that is run in PS mode and uses a positional
parameter handles columns declared with the clause DEFAULT NULL
incorrectly in case the clause DEFAULT is passed as the actual value for
the positional parameter of the prepared statement. A similar issue
happens in case an expression is specified in the DEFAULT clause of a
table column's definition.
The reason for the incorrect processing of columns declared as DEFAULT
NULL is that setting of the null flag for a field being updated was
missed in the implementation of the method Item_param::assign_default().
The reason for the incorrect handling of an expression in the DEFAULT
clause is also a missed saving of a field inside the implementation of
the method Item_param::assign_default().
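A minimal illustration of the DEFAULT NULL case (hypothetical table):

  CREATE TABLE t1 (a INT DEFAULT NULL);
  INSERT INTO t1 VALUES (1);
  PREPARE stmt FROM 'UPDATE t1 SET a = ?';
  EXECUTE stmt USING DEFAULT; -- a must become NULL again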
Problem:
========
- InnoDB reads the length of the variable-length field wrongly
while applying the modification log of an instant table.
Solution:
========
rec_init_offsets_comp_ordinary(): For the temporary instant
file record, InnoDB should read the length of the variable-length
field from the record itself.
Aria temporary tables account allocated memory as specific to the current
THD. But this fails for slave threads, where the temporary tables need to be
detached from any specific THD.
Introduce a new flag to mark temporary tables in replication as "global",
and use that inside Aria to not account memory allocations as thread
specific for such tables.
Based on original suggestion by Monty.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
ha_innobase::check_if_supported_inplace_alter(): Require ALGORITHM=COPY
when creating a FULLTEXT INDEX on a versioned table.
row_merge_buf_add(), row_merge_read_clustered_index(): Remove the parameter
or local variable history_fts that had been added in the attempt to fix
MDEV-25004.
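For illustration, such an ALTER now needs the copy algorithm
(hypothetical versioned table):

  ALTER TABLE t ADD FULLTEXT INDEX ft1 (txt), ALGORITHM=COPY;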
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
This patch fixes the issue with passing DEFAULT or IGNORE values to
positional parameters for some kinds of SQL statements to be executed
as prepared statements.
The main idea of the patch is to associate the actual value passed
via the USING clause with the positional parameter represented by
the Item_param class. Such an association must be performed on execution
of an UPDATE statement in PS/SP mode. Other corner cases that resulted in
a server crash are the handling of CREATE TABLE with a positional
parameter placed after the DEFAULT clause, and a CALL statement passing
either the value DEFAULT or IGNORE as the actual value for the positional
parameter.
These cases are fixed by checking whether an error is set in the
diagnostics area at the function pack_vcols() on return from the function
pack_expression().
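Sketches of the corner cases described above (the exact statements are
assumptions):

  EXECUTE IMMEDIATE 'CREATE TABLE t2 (a INT DEFAULT ?)' USING DEFAULT;
  EXECUTE IMMEDIATE 'CALL p1(?)' USING IGNORE;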
row_discard_tablespace(): Do not invoke dict_index_t::clear_instant_alter()
because that would corrupt any adaptive hash index entries in the table.
row_import_for_mysql(): Invoke dict_index_t::clear_instant_alter()
after detaching any adaptive hash index entries.
In commit 179424db the file lowercase_table2.result was made executable
for no known reason, most likely just a mistake. Test result files
definitely should not be executable.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to
count how many format descriptor events (FDEs) have been executed,
to attempt to pause on a specific relay log FDE after executing
transactions. However, depending on when the IO thread is stopped,
it can send an extra FDE before sending the transactions, forcing
the test to pause before executing any transactions, resulting in a
table not existing, that is attempted to be read for COUNT.
This patch fixes this by no longer counting FDEs, but rather by
programmatically waiting until the SQL thread has executed the
transaction and then automatically activating the DEBUG_SYNC point
to trigger at the next relay log FDE.
Add a wait_condition between debug sync SIGNAL points and other
expected state conditions, and refactor the actual sync point to make it
easier to use in the test case.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Correct the used configuration and force server restarts before the test
case. Add a wait condition instead of a sleep to verify that
all expected nodes are back in the cluster.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add more inserts before wsrep_slave_threads is set to 1, and
add a wait_condition to wait until all of them are replicated before
the wait_condition on the number of wsrep_slave_threads.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that if wsrep_load_data_splitting was used,
streaming replication (SR) parameters were set
for a MyISAM table. Galera does not currently support SR for
MyISAM.
The fix is to ignore the wsrep_load_data_splitting setting (with a
warning) if the table is not an InnoDB table.
This is the 10.4-10.5 version of the fix.
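For illustration (hypothetical table and file):

  SET GLOBAL wsrep_load_data_splitting=ON;
  LOAD DATA INFILE 'data.txt' INTO TABLE t_myisam;
  -- proceeds without SR and emits a warning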
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem is that Galera starts TOI (total order isolation), i.e.
it sends the query to all nodes. Only later is it discovered that
the used engine or another feature is not supported by Galera.
Because TOI is executed in parallel on all nodes, appliers
could execute the given TOI, ignore the error, and
start inconsistency voting, causing a node to leave the
cluster, or we might have a crash as reported.
For example, the SEQUENCE engine does not support the GEOMETRY data
type, causing either inconsistency between nodes (because
some errors are ignored on the applier) or a crash.
Fixed by adding a new function wsrep_check_support to check
whether Galera can support the provided CREATE TABLE/SEQUENCE before TOI
is started; if not, a clear error message is provided to
the user.
Currently, not supported cases:
* CREATE TABLE ... AS SELECT when streaming replication is used
* CREATE TABLE ... WITH SYSTEM VERSIONING AS SELECT
* CREATE TABLE ... ENGINE=SEQUENCE
* CREATE SEQUENCE ... ENGINE!=InnoDB
* ALTER TABLE t ... ENGINE!=InnoDB where table t is SEQUENCE
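For illustration, statements like these now fail up front with a clear
error instead of causing inconsistency voting or a crash (hypothetical
names):

  CREATE TABLE t1 (a INT) ENGINE=SEQUENCE;
  CREATE SEQUENCE s1 ENGINE=MyISAM;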
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>