Problem:
sp_cache erroneously looked up fully qualified SP names (e.g. `DB`.`SP`)
in a case insensitive way. This was wrong because only the "name"
part is always case insensitive, while the "db" part should be compared
according to lower_case_table_names (case sensitively for 0,
case insensitively for 1 and 2).
Fix:
Adding a "casedn_name" parameter make_qname() to tell
if the name part should be lower cased:
`DB1`.`SP` -> "DB1.SP" (when casedn_name=false)
`DB1`.`SP` -> "DB1.sp" (when casedn_name=true)
and using make_qname() with casedn_name=true when creating
sp_cache hash lookup keys.
Details:
As a result, it now works as follows:
- sp_head::m_db is converted to lower case if lower_case_table_names>0
during the sp_name initialization phase. So when make_qname() is called,
sp_head::m_db is already normalized. There are no changes here.
- The initialization phase of sp_head when creating sp_head::m_qname
now calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to sp_head::m_qname in lower case.
- sp_cache_lookup() now also calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to the temporary lookup key in lower case.
- sp_cache::m_hashtable now uses case sensitive comparison.
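A minimal sketch of the intended key-building behaviour, using std::string
as a stand-in (the helper name below is hypothetical; the real
Identifier_chain2::make_qname() builds a MEM_ROOT-allocated LEX_CSTRING):
  #include <algorithm>
  #include <cctype>
  #include <string>
  // The "db" part is used as-is (it was already normalized according to
  // lower_case_table_names when the sp_name was initialized), while the
  // "name" part is lower cased only when casedn_name is true.
  static std::string make_qname_sketch(const std::string &db,
                                       const std::string &name,
                                       bool casedn_name)
  {
    std::string key= db;
    key+= '.';
    std::string n= name;
    if (casedn_name)
      std::transform(n.begin(), n.end(), n.begin(),
                     [](unsigned char c) { return (char) std::tolower(c); });
    key+= n;
    return key;  // ("DB1","SP",false) -> "DB1.SP", ("DB1","SP",true) -> "DB1.sp"
  }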
Part#1 A non-functional change
Changing the signature of Identifier_chain2::make_qname() from
bool make_qname(MEM_ROOT *mem_root, LEX_CSTRING *dst) const;
to
LEX_CSTRING make_qname(MEM_ROOT *mem_root) const;
Now the result is returned from the function as a LEX_CSTRING rather than
being passed back through an output parameter.
The return value {NULL,0} means "EOM".
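A hedged sketch of the new calling convention (the types and helpers below
are simplified stand-ins, not the actual MariaDB declarations):
  #include <cstddef>
  // LEX_CSTRING-like pair: a pointer plus a length.
  struct LEX_CSTRING_sketch { const char *str; size_t length; };
  // New convention: the result is returned; {NULL,0} means "EOM".
  static LEX_CSTRING_sketch make_qname_returning_sketch(bool simulate_oom)
  {
    if (simulate_oom)
      return {nullptr, 0};                   // out of memory ("EOM")
    static const char qname[]= "db1.sp";
    return {qname, sizeof(qname) - 1};
  }
  static bool caller_sketch()
  {
    // Old style:  bool make_qname(MEM_ROOT *mem_root, LEX_CSTRING *dst);
    //             if (ident.make_qname(mem_root, &qname)) return true;
    // New style:  check the returned value instead of a bool error flag.
    LEX_CSTRING_sketch qname= make_qname_returning_sketch(false);
    if (!qname.str)
      return true;                           // propagate the EOM error
    return false;                            // qname.str / qname.length usable
  }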
This is a preparatory step to make it easier to fix and merge
MDEV-33019 The database part is not case sensitive in SP names.
The original MDEV-31991 commit comment:
- Moving some of Database_qualified_name methods into a new class
Identifier_chain2.
- Changing the data type of the following variables from
Database_qualified_name to Identifier_chain2:
* q_pkg_proc in LEX::call_statement_start()
* q_pkg_func in LEX::make_item_func_call_generic()
Rationale:
The data type of Database_qualified_name::m_db will be changed
to Lex_ident_db soon. So Database_qualified_name won't be able
to store the `pkg.routine` part of `db.pkg.routine` any more,
because `pkg` must not depend on lower-case-table-names.
MDEV-31003 introduced a second execution for SELECTs that execute
under the ps-protocol. The following tests in the galera suites do not
support this mode of execution, so disable it for them:
galera.MDEV-27862
galera.galera_log_output_csv
galera.galera_query_cache
galera.galera_query_cache_sync_wait
galera_3nodes_sr.GCF-336
galera_3nodes_sr.galera_sr_isolate_master
galera_sr.galera_sr_large_fragment
galera_sr.galera_sr_many_fragments
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Modified galera_sr.mysql-wsrep-features#165 test to be deterministic:
Added one wait condition to catch execution state after --send command.
Changed another wait condition to better match the execution state of the test thread.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The AES block cipher mode CTR is currently available only starting
with OpenSSL 1.0.1. Do not run this test case with the CTR
combination if it is not available.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Could not reproduce, and the bug report is incomplete, i.e. there
are no error logs to analyze and the 10.4 branch commit where the
failure was seen is not mentioned. Enable the test to get more information.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
In spider_db_mbase_util::print_item_func(), if the sql item_func has
an UNKNOWN_FUNC type, by default the spider group by handler (gbh)
transforms infix to prefix. But regexp should remain infix, so we add
an if condition to account for this.
This patch fixes cases where a transaction caused empty writeset to be
replicated. This could happen in the case where a transaction executes
a statement that initially manages to modify some data and therefore
appends some keys for certification. The statement is however rolled
back at some later stage due to an error (for example, a duplicate
key error). After the statement rollback the transaction is still alive,
but has no other changes. When committing such a transaction, an empty
writeset was replicated through Galera.
The fix is to call into the commit hook only when the transaction
has appended one or more keys for certification *and* has some data in
the binlog cache to replicate. Otherwise, the commit is considered empty
and goes through the usual empty commit path.
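A hedged sketch of the resulting decision (all names below are illustrative;
the real check lives in the wsrep commit path):
  #include <cstddef>
  // Replicate through Galera only if the transaction both appended
  // certification keys and produced data in the binlog cache.
  static bool should_run_commit_hook(size_t appended_cert_keys,
                                     size_t binlog_cache_bytes)
  {
    return appended_cert_keys > 0 && binlog_cache_bytes > 0;
  }
  // Commit-time decision, conceptually:
  //   if (should_run_commit_hook(keys, bytes))
  //     run_wsrep_commit_hooks();     // writeset certified and replicated
  //   else
  //     commit_locally_as_empty();    // usual empty-commit path, nothing replicated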
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Remove DB_LOCK_WAIT return code check as it should have been resolved to
one of the other errors by that point.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The previous patch for MDEV-10653 changes the rpl_parallel::workers_idle()
function to use Relay_log_info::last_inuse_relaylog to check for idle
workers. But the code was missing a NULL check. Also, there was one place
during SQL slave thread start which was missing mutex synchronisation when
updating inuse_relaylog.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The error-injection inject_mdev8031 simulates a deadlock kill in a specific
place, by setting killed_for_retry to RETRY_KILL_KILLED directly. If a real
deadlock kill triggers at the same time, it is possible for the thread to
complete its transaction retry and set rgi_slave to NULL before the real
deadlock kill can complete in the background. This will cause a segfault
due to null-pointer access.
Fix by changing the error injection to do a real background deadlock kill,
which ensures that the thread will wait for any pending background kills to
complete.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Item::val_str() sets the Item::null_value flag, so call it before checking
the flag, not after.
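A hedged sketch of the pattern with a simplified stand-in for Item (the type
and members below are illustrative, not the server's Item class):
  #include <optional>
  #include <string>
  // Minimal model: val_str() computes the value and, as a side effect,
  // refreshes null_value.
  struct Item_sketch
  {
    std::optional<std::string> value;
    bool null_value= false;
    const std::string *val_str(std::string *buf)
    {
      null_value= !value.has_value();  // the flag is only valid after this call
      if (null_value)
        return nullptr;
      *buf= *value;
      return buf;
    }
  };
  // Correct order: evaluate first, then inspect the freshly set flag.
  static bool is_null_sketch(Item_sketch *item)
  {
    std::string buf;
    (void) item->val_str(&buf);
    return item->null_value;           // checking before val_str() reads stale state
  }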
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Consider this query
SELECT t1.* FROM t1, (SELECT t2.b FROM t2 WHERE NOT EXISTS
(SELECT 1 FROM t3) GROUP BY b) sq where sq.b = t1.a;
If SELECT 1 FROM t3 is expensive, for example when t3 has more than
thd->variables.expensive_subquery_limit rows, its first evaluation is
deferred to mysql_derived_fill(). There it is noted that, in the above case,
NOT EXISTS (SELECT 1 FROM t3) is constant and false.
This causes the join variable zero_result_cause to be set to
"Impossible WHERE noticed after reading const tables" and the handler
for this join is never "opened" via handler::ha_open.
When mysql_derived_fill() is called for the next group of results, this
unopened handler is not taken into account.
reviewed by Igor Babaev (igor@mariadb.com)
row_ins_clust_index_entry_low(): Invoke btr_set_instant() in the same
mini-transaction that has successfully inserted the metadata record.
In this way, if inserting the metadata record fails before any
undo log record was written for it, the index root page will remain
consistent.
innobase_instant_try(): Remove the btr_set_instant() call.
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
Like all IF NOT EXISTS syntax, a Note should be generated.
The original commit of Sequences cleared the IF NOT EXISTS part
in the sql/sql_yacc.yy with lex->create_info.init(). Without this
bit set there was no way it could do anything other than error.
To remedy this removal, the sql_yacc.yy components have been
minimised, as they were all set at the beginning of the ALTER.
This way opt_if_not_exists correctly sets the IF_EXISTS flag.
In MDEV-13005 (bb4dd70e7c) the error code changed, requiring
ER_UNKNOWN_SEQUENCES to be handled in the function
No_such_table_error_handler::handle_condition.
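A hedged sketch of the handler change (structure and numeric codes below are
placeholders; the real No_such_table_error_handler implements the server's
internal error handler interface, whose handle_condition takes more parameters):
  enum sketch_errno
  {
    SKETCH_ER_NO_SUCH_TABLE= 1,
    SKETCH_ER_UNKNOWN_SEQUENCES= 2   // the newly handled code
  };
  struct No_such_table_error_handler_sketch
  {
    unsigned handled_errors= 0;
    unsigned unhandled_errors= 0;
    // Returns true when the condition is consumed (suppressed/downgraded).
    bool handle_condition(unsigned sql_errno)
    {
      if (sql_errno == SKETCH_ER_NO_SUCH_TABLE ||
          sql_errno == SKETCH_ER_UNKNOWN_SEQUENCES)
      {
        handled_errors++;
        return true;
      }
      unhandled_errors++;
      return false;
    }
  };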
Allow for a CI system to be almost out of space, or to have so
little usage, that the Total space is the same as the available or used space.
Thanks Otto Kekäläinen for the bug report and testing.
This is a port of the Percona Server commit 5265f42e290573e9591f8ca28ab66afc051f89a3
which is the same as their bug PXB-1807: xtrabackup does not accept fractional values for
innodb_max_dirty_pages_pct
Problem:
The variable is specified as a double in the MySQL server, but was
read as a long in xtrabackup. This causes xtrabackup to fail at
startup when the value contains a decimal point.
Fix:
Make xtrabackup interpret the value as a double, to be compatible
with the server.
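A hedged sketch of the intended interpretation (the helper below is
illustrative; in my_getopt terms the change roughly corresponds to using
GET_DOUBLE for the option instead of an integer type):
  #include <cstdlib>
  // The option value is interpreted as a double, matching the server's
  // definition of innodb_max_dirty_pages_pct, so fractional values such
  // as "75.5" are accepted instead of being rejected at startup.
  static double read_max_dirty_pages_pct(const char *value)
  {
    return std::strtod(value, nullptr);
  }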
ha_innobase::check_if_supported_inplace_alter(): On ALTER_OPTIONS,
if innodb_file_per_table=1 and the table resides in the system tablespace,
require that the table be rebuilt (and moved to an .ibd file).
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
This test was using a sleep of 1 second in an attempt to ensure that the
timestamp that is part of an InnoDB status string would increase.
This not only prolongs the test execution time by 1+1 seconds, but it
also is inaccurate. It is possible that the actual sleep duration is
less than a second.
Let us wait for the creation of the file ib_buffer_pool and then wait
for the buffer pool dump completion. In that way, the test can complete
in a dozen or two milliseconds (1% of the previous duration) and work
more reliably.
Add OPTION_GTID_BEGIN to the applier thread. This is needed to avoid
intermediate commits when CREATE TABLE AS SELECT is applied, which would
cause one more GTID to be consumed compared to the executing node.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Return an error if user attempts to use SEQUENCEs in combination with
streaming replication in a Galera cluster. This is currently not
supported.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
AKA "rpl.rpl_parallel, binlog_encryption.rpl_parallel fails in
buildbot with timeout in include".
A replication parallel worker thread can deadlock with another
connection running SHOW SLAVE STATUS. That is, if the replication
worker thread is in do_gco_wait() and is killed, it will already
hold the LOCK_parallel_entry, and during error reporting, try to
grab the err_lock. SHOW SLAVE STATUS, however, grabs these locks in
reverse order. It will initially grab the err_lock, and then try to
grab LOCK_parallel_entry. This leads to a deadlock when both threads
have grabbed their first lock without the second.
This patch implements the MDEV-31894 proposed fix to optimize the
workers_idle() check to compare the last in-use relay log’s
queued_count==dequeued_count for idleness. This removes the need for
workers_idle() to grab LOCK_parallel_entry, as these values are
atomically updated.
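A hedged sketch of the lock-free check (type and member names are
approximations of the server's in-use relay log counters):
  #include <atomic>
  #include <cstdint>
  struct inuse_relaylog_sketch
  {
    std::atomic<uint64_t> queued_count{0};    // events queued for the workers
    std::atomic<uint64_t> dequeued_count{0};  // events the workers have finished
  };
  // Idle when everything queued has also been dequeued. No LOCK_parallel_entry
  // is taken; the counters are updated atomically. The NULL check covers the
  // case where no relay log is in use yet (cf. the NULL-check fix above).
  static bool workers_idle_sketch(const inuse_relaylog_sketch *last_inuse_relaylog)
  {
    if (!last_inuse_relaylog)
      return true;
    uint64_t dequeued=
        last_inuse_relaylog->dequeued_count.load(std::memory_order_acquire);
    uint64_t queued=
        last_inuse_relaylog->queued_count.load(std::memory_order_relaxed);
    return queued == dequeued;
  }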
Huge thanks to Kristian Nielsen for diagnosing the problem!
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
Add a test case that demonstrates a working setup as described in MDEV-26632.
This requires --gtid-ignore-duplicates=1 and --gtid-strict-mode=0.
In A->B->C, B filters some (but not all) events from A. C is promoted to
create A->C->B, and the current GTID position in B contains a GTID from A that
is not present in C (due to filtering). Demonstrate that B can still connect
with GTID to C, starting at the "hole" in the binlog stream on C originating
from A.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Omit `state` when selecting processlist to verify which threads are running.
The state changes as threads are running (enter_state()), and this causes
sporadic test failures.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The test was populating unnecessarily large tables and
restarting the server several times for no real reason.
Let us hope that a smaller version of the test will produce more
stable results. Occasionally, some unencrypted contents of the table t2
were revealed in the old test.
This patch fixes a too strong condition in an assertion in the method
Item_func_group_concat::fix_fields
that holds in case of a stored routine but is obviously broken
for a prepared statement.
The data type of the column INFORMATION_SCHEMA.GLOBAL_STATUS.VARIABLE_VALUE
is a character string. Therefore, if we want to compare some values as
integers, we must explicitly cast them to integer type, to avoid an
awkward comparison where '10'<'9' because the first digit is smaller.
Also during startup, let's not raise an "Error" on attempting to install a
mysql.plugin that is already there. We set the 'if_not_exists'
parameter to true to downgrade this to a "Note".
Also corrects: MDEV-32041 "plugin already loaded" should be a Warning, not an Error
Because --delete-master-logs immediately purges logs after flushing,
it is possible the binlog dump thread would still be using the old
log when the purge executes, preventing the file from being
deleted.
This patch institutes a work-around in the test as follows:
1) temporarily stop the slave so there is no chance the old binlog
is still being referenced.
2) set master_use_gtid=Slave_pos so the slave can still appear
up-to-date on the master after the master flushes/purges its logs
(while the slave is offline). Otherwise (i.e. if using binlog
file/pos), the slave would point to a purged log file, and receive
an error immediately upon connecting to the master.
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>
Remove ORACLE from the (session) sql_mode in connections made with the sql
service to run init queries.
The connection is new, so the global variable value takes effect
rather than the session value from the caller of spider_db_init.
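A hedged sketch of the idea (the bit value below is a placeholder; the real
MODE_ORACLE flag and the sql_mode bitmap are defined by the server):
  #include <cstdint>
  static const uint64_t SKETCH_MODE_ORACLE= 1ULL << 22;  // placeholder bit
  // Conceptually: before running the init queries over the new sql service
  // connection, clear ORACLE from that connection's session sql_mode so the
  // queries parse the same way regardless of the global sql_mode.
  static uint64_t strip_oracle_mode(uint64_t session_sql_mode)
  {
    return session_sql_mode & ~SKETCH_MODE_ORACLE;
  }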
This should fix certain CI builds where the spider suite test files
and the main suite test files do not follow the same relative path
relations as in the mariadb source.
$MYSQLD_CMD uses .1 as the defaults-group-suffix, which could cause
the use of the default port (3306) or socket, which will fail in
environments where these defaults are already in use by another server.
Adding an extra --defaults-group-suffix=.1.1 does not help, because
the first flag wins.
So we use $MYSQLD_LAST_CMD instead, which uses the correct suffix.
The extra innodb buffer pool warning is irrelevant to the goal of the
test (running --wsrep-recover with --plugin-load-add=ha_spider should
not cause a hang).