This is a port of the Percona Server commit 5265f42e290573e9591f8ca28ab66afc051f89a3,
which is the same as their bug PXB-1807: xtrabackup does not accept fractional values for
innodb_max_dirty_pages_pct.
Problem:
The variable is specified as a double in the MySQL server, but was read as a
long in xtrabackup. This caused xtrabackup to fail at startup when the value
contained a decimal point.
Fix:
Make xtrabackup interpret the value as a double, to be compatible with the
server.
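For reference, the server itself accepts fractional values for this variable,
which is why the backup tool must parse it the same way; a minimal
illustration:

  SET GLOBAL innodb_max_dirty_pages_pct = 75.5;  -- accepted by the server
  SELECT @@GLOBAL.innodb_max_dirty_pages_pct;    -- reports the fractional value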
Calculate the auto-inc value even if the long-unique duplicate check fails -
this is what the engine does for normal unique keys.
The auto-inc value is needed if it's a REPLACE.
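An illustrative scenario (hypothetical table; a UNIQUE key over a BLOB is
implemented as a hash-based long unique):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY,
                   b BLOB, UNIQUE KEY (b)) ENGINE=InnoDB;
  REPLACE INTO t1 (b) VALUES ('payload');
  -- this REPLACE hits the long-unique duplicate check, but the auto-inc
  -- value must still be calculated for the replacing row:
  REPLACE INTO t1 (b) VALUES ('payload');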
* treat FUNC/ARRAY variables as SESSION (otherwise they won't be shown)
* allow SHOW_SIMPLE_FUNC everywhere where SHOW_FUNC is
* increase row buffer size to avoid "too short" assert
The test uses +d,getnameinfo_fake_long_host to return a fake long
hostname in ip_to_hostname(). But this dbug keyword is only checked
after the lookup in the hostname cache.
The test has to flush the hostname cache in case previous tests
had populated it with fake IP addresses (perfschema tests do that).
Also, remove redundant `connection` directives.
* restore old ENUM values order, in case someone used
SET @@s3_protocol_version=1
* restore old behavior for Original and Amazon protocol values,
they behave as "Auto" and do not force a specific protocol version
Approved by Andrew Hutchings <andrew@mariadb.org>
ha_innobase::check_if_supported_inplace_alter(): On ALTER_OPTIONS,
if innodb_file_per_table=1 and the table resides in the system tablespace,
require that the table be rebuilt (and moved to an .ibd file).
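A hypothetical sequence that now forces such a rebuild:

  SET GLOBAL innodb_file_per_table = 0;
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;  -- created in the system tablespace
  SET GLOBAL innodb_file_per_table = 1;
  ALTER TABLE t1 ROW_FORMAT = DYNAMIC;    -- an ALTER_OPTIONS change: the table
                                          -- is rebuilt and moved to t1.ibd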
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
This test was using a sleep of 1 second in an attempt to ensure that the
timestamp that is part of an InnoDB status string would increase.
This not only prolongs the test execution time by 1+1 seconds, but it
is also inaccurate: the actual sleep duration may be less than a second.
Let us wait for the creation of the file ib_buffer_pool and then wait
for the buffer pool dump completion. In that way, the test can complete
in a dozen or two milliseconds (1% of the previous duration) and work
more reliably.
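Roughly, the test can trigger the dump and poll its completion instead of
sleeping (a sketch, not the exact .test code):

  SET GLOBAL innodb_buffer_pool_dump_now = ON;
  -- repeat until the status reports that the dump completed
  SELECT variable_value FROM information_schema.global_status
  WHERE variable_name = 'Innodb_buffer_pool_dump_status';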
Add OPTION_GTID_BEGIN to the applying side thread. This is needed to avoid
intermediate commits when CREATE TABLE AS SELECT is applied, which would cause
one more GTID to be consumed compared to the executing node.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Return an error if the user attempts to use SEQUENCEs in combination with
streaming replication in a Galera cluster. This is currently not
supported.
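A hypothetical session showing the rejected combination (streaming
replication enabled via wsrep_trx_fragment_size; exactly which statement
raises the error depends on the checks added):

  SET SESSION wsrep_trx_fragment_size = 1024;  -- enable streaming replication
  CREATE SEQUENCE s1;
  SELECT NEXTVAL(s1);  -- using the sequence is rejected with an error
                       -- while streaming replication is active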
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
AKA: rpl.rpl_parallel and binlog_encryption.rpl_parallel fail in
buildbot with a timeout in an include file.
A replication parallel worker thread can deadlock with another
connection running SHOW SLAVE STATUS. That is, if the replication
worker thread is in do_gco_wait() and is killed, it will already
hold the LOCK_parallel_entry, and during error reporting, try to
grab the err_lock. SHOW SLAVE STATUS, however, grabs these locks in
reverse order. It will initially grab the err_lock, and then try to
grab LOCK_parallel_entry. This leads to a deadlock when both threads
have grabbed their first lock but not yet the second.
This patch implements the MDEV-31894 proposed fix to optimize the
workers_idle() check to compare the last in-use relay log’s
queued_count==dequeued_count for idleness. This removes the need for
workers_idle() to grab LOCK_parallel_entry, as these values are
atomically updated.
Huge thanks to Kristian Nielsen for diagnosing the problem!
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
The problem is that these tests run optimistic parallel replication with
non-transactional mysql.gtid_slave_pos table. Very occasionally, InnoDB stats
updates may race with one another and cause a parallel replication deadlock
kill after the mysql.gtid_slave_pos table has been updated. Since
mysql.gtid_slave_pos is non-transactional, the update cannot be rolled back,
and transaction retry will fail with a duplicate key error in
mysql.gtid_slave_pos.
Fixed by altering the storage engine to InnoDB for the table.
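That is, effectively:

  -- make the table transactional so a deadlock-kill retry can roll back
  -- the update together with the rest of the transaction
  ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;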
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The testcase had a race in two places where a KILL QUERY is made towards a
running query in another connection. The query can complete early so the kill
is lost, and the test fails due to expecting ER_QUERY_INTERRUPTED.
Fix by removing the KILL QUERY. It is not needed, as the query completes by
itself after SHOW EXPLAIN FOR.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Add a test case that demonstrates a working setup as described in MDEV-26632.
This requires --gtid-ignore-duplicates=1 and --gtid-strict-mode=0.
In A->B->C, B filters some (but not all) events from A. C is promoted to
create A->C->B, and the current GTID position in B contains a GTID from A that
is not present in C (due to filtering). Demonstrate that B can still connect
with GTID to C, starting at the "hole" in the binlog stream on C originating
from A.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Omit `state` when selecting processlist to verify which threads are running.
The state changes as threads are running (enter_state()), and this causes
sporadic test failures.
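E.g., a check of this shape, with `state` left out of the select list:

  SELECT id, user, command, info
  FROM information_schema.processlist;  -- no `state`: it changes while
                                        -- threads run and is not stable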
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Make sure the old binlog dump thread is not still running when manipulating
binlog files; otherwise there is a small chance it will see an invalid
partial file and report an I/O error.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Let us remove a test that frequently fails with a result difference.
This test had been added in fc279d7ea2
to cover a bug in thd_destructor_proxy(), which was replaced with
simpler logic in 5e62b6a5e0 (MDEV-16264).
The test was populating unnecessarily large tables and
restarting the server several times for no real reason.
Let us hope that a smaller version of the test will produce more
stable results. Occasionally, some unencrypted contents of the table t2
were revealed in the old test.
The crash was caused by dereferencing a null pointer when getting
the number of nesting levels of the set function for the current
select_lex in the method Item_field::fix_fields.
The current select being processed is taken from the Name_resolution_context
that is filled in by the function set_new_item_local_context(), where the
initialization of the data member Name_resolution_context::select_lex
was mistakenly removed by the commit
d6ee351bbb
(Revert "MDEV-24454 Crash at change_item_tree").
To fix the issue, the correct initialization of the data member
Name_resolution_context::select_lex
that was removed by the commit d6ee351bbb
is restored.
This patch fixes a too-strong condition in an assert at the method
Item_func_group_concat::fix_fields
that holds in the case of a stored routine but is obviously broken
for a prepared statement.
Enable unusable key notes for non-equality predicates:
<, <=, >=, >, BETWEEN, IN, LIKE
Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:
SELECT * FROM t1
WHERE indexed_string_column >= 10
ORDER BY indexed_string_column
LIMIT 5;
This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is
not desirable in 10.6.
Details:
- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
to a new data type Item_func::Bitmap, so the caller can
choose with better granularity which predicates
should raise unusable key notes inside the range optimizer:
a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
b. all predicates except equality (=, <=>)
c. none of the predicates
"b." is needed because in some scenarios equality predicates (=, <=>)
send unusable key notes at an earlier stage, before the range optimizer,
during update_ref_and_keys(). Calling the range optimizer with
"all predicates" would produce duplicate notes for = and <=> in such cases.
- Fixing get_quick_record_count() to call the range optimizer
with "all predicates except equality" instead of "none of the predicates".
Before this change the range optimizer suppressed all notes for
non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
This actually fixes the reported problem.
- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
with "all predicates except equality" instead of "all predicates".
Before this change the range optimizer produced duplicate notes
for = and <=> during a rowid_filter optimization.
- Cleanup:
Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
and displaying the operation collation rather than the argument collation
in the unusable key note. This is important for operations with more than
two arguments: BETWEEN and IN, e.g.:
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
IN ('a', 'b' COLLATE utf8mb3_unicode_ci);
The note for 'a' now prints utf8mb3_unicode_ci as the collation,
which is the collation of the entire operation:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_unicode_ci`
Before this change it printed the collation of 'a',
so the note was confusing:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_general_ci`"
The data type of the column INFORMATION_SCHEMA.GLOBAL_STATUS.VARIABLE_VALUE
is a character string. Therefore, if we want to compare some values as
integers, we must explicitly cast them to integer type, to avoid an
awkward comparison where '10'<'9' because the first digit is smaller.
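For example (with a hypothetical status variable):

  SELECT CAST(variable_value AS UNSIGNED) >= 10 AS ok
  FROM information_schema.global_status
  WHERE variable_name = 'Threads_connected';
  -- as strings '10' < '9', but the cast makes the comparison numeric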
Also, at startup, let's not raise an "Error" on attempting to install a
mysql.plugin entry that is already there. We set the 'if_not_exists'
parameter to true to downgrade this to a "Note".
Also corrects: MDEV-32041 "plugin already loaded" should be a Warning, not an Error
recv_recovery_from_checkpoint_start(): Relax a too strict debug assertion
that occasionally fails in the test encryption.innodb-redo-nokeys
when fil_ibd_load() returns FIL_LOAD_INVALID due to missing crypt_info.
This assertion had been removed in MariaDB Server 10.8 as part of
commit 685d958e38 (MDEV-14425).
buf_flush_page_cleaner(): Pass pct_lwm=srv_max_dirty_pages_pct_lwm
(innodb_max_dirty_pages_pct_lwm) to
page_cleaner_flush_pages_recommendation() unless the dirty page ratio
of the buffer pool is below that. Starting with
commit d4265fbde5 we used to always
pass pct_lwm=0.0, which was not intended.
Reviewed by: Vladislav Vaintroub
Because --delete-master-logs immediately purges logs after flushing,
it is possible the binlog dump thread would still be using the old
log when the purge executes, preventing the file from being
deleted.
This patch institutes a work-around in the test as follows:
1) temporarily stop the slave so there is no chance the old binlog
is still being referenced.
2) set master_use_gtid=Slave_pos so the slave can still appear
up-to-date on the master after the master flushes/purges its logs
(while the slave is offline). Otherwise (i.e. if using binlog
file/pos), the slave would point to a purged log file, and receive
an error immediately upon connecting to the master.
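On the slave, the work-around is roughly (a sketch):

  STOP SLAVE;
  CHANGE MASTER TO master_use_gtid = slave_pos;
  -- the master flushes and purges its binary logs while the slave
  -- is offline
  START SLAVE;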
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>
The previous commit for MDEV-32884 fixed the s3_protocol_version option,
which was previously always using "Auto", no matter what it was set to. This
patch does several things to keep the old behaviour whilst correcting
for new behaviour and laying the groundwork for the future. This
includes:
* `Original` now means the v2 protocol, which it would have been due to the
option not working, so upgrades will still work.
* A new `Legacy` option has been added to mean v1 protocol.
* Options `Path` and `Domain` have been added, these will be the only
two options apart from `Auto` in a future release, and are more
aligned with what this variable means.
* Fixed the s3.debug test so that it works with v2 protocol.
* Fixed the s3.amazon test so that it works with region subdomains.
* Added additional modes to the s3.amazon test.
* Added s3.not_amazon test for the remaining modes.
This replaces PR #2902.
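For illustration, under the semantics described above (a sketch):

  SET GLOBAL s3_protocol_version = 'Legacy';  -- the old v1 protocol
  SET GLOBAL s3_protocol_version = 'Path';    -- new option, path-style
                                              -- addressing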
- Split the doublewrite test into two tests (doublewrite,
doublewrite_debug) to reduce the execution time of the test
- Removed the big_test tag for the newly added test case
- Made the doublewrite test a non-debug test
- Added a search pattern to make sure that InnoDB uses the doublewrite buffer
- Replaced all kill_mysqld.inc with shutdown_mysqld.inc and
a zero shutdown timeout
- Removed the case where fsp_flags got corrupted, because from commit
3da5d047b8 (MDEV-31851) onwards,
the doublewrite buffer removes the conversion of the fsp flags from the
buggy 10.1 format
Thanks to Marko Mäkelä for providing the non-debug test
srv_export_innodb_status(): Update
export_vars.innodb_buffer_pool_read_requests
with buf_pool.stat.n_page_gets. The regression was caused by the
incorrect merge commit 44c9008ba6.
Remove ORACLE from the (session) sql_mode in connections made with the sql
service to run init queries.
The connection is new and the global variable value takes effect
rather than the session value from the caller of spider_db_init.
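A sketch of the situation the fix addresses:

  SET GLOBAL sql_mode = 'ORACLE';
  -- new internal connections inherit the global value; the fix strips
  -- ORACLE from the session sql_mode before running the init queries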
This should fix certain CI builds where the spider suite test files
and the main suite test files do not follow the same relative path
relations as in the mariadb source.
$MYSQLD_CMD uses .1 as the defaults-group-suffix, which could cause
the use of the default port (3306) or socket, which will fail in
environments where these defaults are already in use by another server.
Adding an extra --defaults-group-suffix=.1.1 does not help, because
the first flag wins.
So we use $MYSQLD_LAST_CMD instead, which uses the correct suffix.
The extra InnoDB buffer pool warning is irrelevant to the goal of the
test (running --wsrep-recover with --plugin-load-add=ha_spider should
not cause a hang).
Fix spider init bugs (MDEV-22979, MDEV-27233, MDEV-28218) while
preventing regressions on old ones (MDEV-30370, MDEV-29904).
Two things are changed:
First, Spider initialisation is made fully synchronous, i.e. it no
longer happens in a background thread. Adapted from the original fix
by nayuta for MDEV-27233. This change by itself would cause failures when
spider is initialised early, by plugin-load-add, due to dependencies on
Aria and udf function creation, which are fixed in the second and
third parts below. Requires the SQL service; thus porting to earlier
versions requires MDEV-27595.
Second, if spider is initialised before udf_init(), create udf by
inserting into `mysql.func`, otherwise do it by `CREATE FUNCTION` as
usual. This change may be generalised in MDEV-31401.
Also factor out some clean-up queries from deinit_spider.inc for use
of spider init tests.
A minor caveat is that early spider initialisation will fail if the
server is bootstrapped for the first time, due to the missing `mysql`
database, which needs to be created by the bootstrap script.
Removing procedures that were created and dropped during init.
This also fixes a race condition where an mtr test with
plugin-load-add=ha_spider.so causes the post-test check to fail as it
expects the procedures to still be there.
There are several plugins in ha_spider: spider, spider_alloc_mem,
spider_wrapper_protocols, spider_rewrite etc.
INSTALL PLUGIN foo SONAME ha_spider, where foo is any one of these
plugins, causes all the other ones to be installed by the init queries.
This introduces unnecessary complexity. For example, it reads
mysql.plugin to find all other plugins, causing the hack of moving
spider plugin init to a separate thread.
To install all spider-related plugins, INSTALL SONAME ha_spider should
be used instead.
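I.e.:

  INSTALL SONAME 'ha_spider';  -- installs spider and all related plugins
  -- versus installing a single plugin only:
  INSTALL PLUGIN spider SONAME 'ha_spider.so';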
This also fixes spurious rows in mysql.plugin when installing say only
the spider plugin with `plugin-load-add=SPIDER=ha_spider.so`:
select * from mysql.plugin;
name dl
spider_alloc_mem ha_spider.so # should not be here
spider_wrapper_protocols ha_spider.so # should not be here
Adapted from part of the reverted commit
c160a115b8.
This commit fixes a bug where IST could be rejected in favor of SST
when ssl-mode=VERIFY_CA and when mariabackup is used. It also contains
a test and small code simplifications that will make it easier to find
bugs in the future.