Problem:
At some point, we made stored routines fail at CREATE time
instead of at execution time in case of this syntax:
IF unknown_variable
...
END IF
As a result, a trigger that was created before this change and contained an
unknown variable worked in a bad way after upgrade:
- It was displayed with an empty trigger name by SHOW CREATE TRIGGER
- It was displayed with an empty trigger name by INFORMATION_SCHEMA.TRIGGERS
- An attempt to DROP this trigger returned errors and nothing happened.
- DROP TABLE did not remove the .TRN file corresponding to this broken trigger.
Underlying code observations:
The old code assumed that the trigger name resides in the current lex:
  if (thd->lex->spname)
    m_trigger_name= &thd->lex->spname->m_name;
This is not always the case. Some SP statements (e.g. IF)
do the following at their beginning:
- create a separate local LEX
- set thd->lex to this new local LEX
- push the new local LEX to the stack in sp_head::m_lex
and the following at the end of the statement:
- pop the previous LEX from the stack sp_head::m_lex
- set thd->lex back to the popped value
So when a parse error happens inside e.g. an IF statement, thd->lex->spname
is a NULL pointer, because thd->lex points to the local LEX (without the SP name)
rather than to the top level LEX (with the SP name).
Fix:
- Adding a new method sp_head::find_spname_recursive(),
which walks the LEX stack sp_head::m_lex from
the top (the newest, most local) to the bottom (the oldest)
and finds the first one that contains a non-null spname pointer.
- Using the new method inside
Deprecated_trigger_syntax_handler::handle_condition():
First it still tests thd->lex->spname (as before this change)
and uses it if it is not null.
Otherwise (if thd->lex->spname is null), it calls
sp_head::find_spname_recursive() to find the LEX with a
non-null spname inside the LEX stack of the current sphead.
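Below is a minimal, self-contained sketch of the lookup pattern described
above; the types sp_name, LEX and sp_head are simplified stand-ins here,
not the real server definitions:

  #include <vector>
  #include <string>

  struct sp_name { std::string m_name; };
  struct LEX     { sp_name *spname= nullptr; };

  struct sp_head
  {
    // The newest (most local) LEX is at the back of the stack.
    std::vector<LEX*> m_lex;

    // Walk from the top (newest) to the bottom (oldest) and return the
    // first LEX that carries a non-null spname, or nullptr if none does.
    const sp_name *find_spname_recursive() const
    {
      for (auto it= m_lex.rbegin(); it != m_lex.rend(); ++it)
        if ((*it)->spname)
          return (*it)->spname;
      return nullptr;
    }
  };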
The --skip-write-binlog option added to mysql_tzinfo_to_sql in
MDEV-18778 was only effective if Galera was enabled on the server.
This is because it tied together three concepts under one option:
1. binary logging
2. wsrep replication, and
3. using innodb as a transitional table type.
Change 1: a small change to the option's help text to reflect this.
To solve the performance problem with Aria tables, LOCK TABLES WRITE
is used to eliminate the need to fdatasync until UNLOCK TABLES.
If Galera isn't enabled, then we also want to use the LOCK TABLES WRITE
mechanism.
The START TRANSACTION added in MDEV-23440 needed to be moved to
before LOCK TABLES, otherwise it would cancel their effect.
TRUNCATE TABLE statements also need to come before LOCK TABLES.
When changing back from InnoDB to Aria, include the ORDER BY that
was originally there in 6aaccbcbf7, matching the final ALTER
TABLE in the timezonedir branch.
Running mariadb-tzinfo-to-sql --skip-write-binlog /usr/share/zoneinfo
now generates 16 Aria_transaction_log_syncs, down from 7053.
The Spider storage engine ignored the implicit grouping when
an aggregation was converted to a constant by the query optimizer.
As a result, the Spider SE returned more rows than expected.
To fix the problem, we notify the Spider SE of the existence of
the implicit grouping via Query::distinct.
We need to start a Galera transaction for SELECT NEXT VALUE FOR
sequence if one is not yet started. Note that ALTER is handled
as TOI and a transaction is already started.
If the optimizer decides to rewrite a NOT IN predicand of the form
outer_expr IN (SELECT inner_col FROM ... WHERE subquery_where)
into the EXISTS subquery
EXISTS (SELECT 1 FROM ... WHERE subquery_where AND
(outer_expr=inner_col OR inner_col IS NULL))
then the pushed equality predicate outer_expr=inner_col can be used for
ref[or_null] access if inner_col is a reference to an indexed column.
In this case, if there is a selective range condition over this column, then
a Rowid filter may be employed coupled with the ref[or_null] access. The
filter is 'pushed' into the engine, and in InnoDB it currently cannot be
used with index look-ups by primary key. The ref[or_null] access can be
used only when outer_expr is not NULL. Otherwise the original predicand
is evaluated to TRUE only if the result set returned by the query
SELECT 1 FROM ... WHERE subquery_where
is empty. When performing this evaluation the executor switches to a
table scan by primary key. Before this patch the pushed filter still
remained marked as active and the engine tried to apply the filter. This
was incorrect, and in InnoDB this attempt to use the filter led to an
assertion failure.
This patch fixes the problem by disabling usage of the filter when
outer_expr is evaluated to NULL.
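The following is only a rough sketch (hypothetical names, not the actual
handler interface) of the behaviour this patch introduces: while the
executor evaluates the NULL-outer-value case with a primary-key scan, the
pushed filter is temporarily deactivated.

  struct RowidFilter { /* selective range condition, details omitted */ };

  struct HandlerWithFilter
  {
    RowidFilter *pushed_filter= nullptr;
    bool filter_active= false;

    void evaluate_subquery(bool outer_value_is_null)
    {
      bool saved= filter_active;
      // The filter is only valid together with ref[or_null] access on a
      // non-NULL outer value; for the NULL case we scan by primary key
      // and must not let the engine apply the filter.
      if (outer_value_is_null)
        filter_active= false;
      // ... perform the lookup / table scan here ...
      filter_active= saved;
    }
  };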
Federated and FederatedX cannot be used with ROR scans.
ha_federated::position() and ha_federatedx::position() store in 'ref' a
pointer into a local result set buffer. This means that one cannot
compare 'ref' values from different handler instances to see if they point to
the same physical record.
This bug caused federated.federatedx to return wrong results when the
optimizer tried to use index_merge to resolve some queries.
Fixed by introducing the table flag HA_NON_COMPARABLE_ROWID and using it
in the above handlers.
Todo:
- Fix multi_delete(), multi_update() and read_records() to use the primary key
instead of 'ref' in case HA_NON_COMPARABLE_ROWID is set. The current
code only works if we have only one range (like a table scan) for the
tables that will be updated in the second pass.
- Enable DBUG_ASSERT() in ha_federated::cmp_ref() and
ha_federatedx::cmp_ref().
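For illustration only, here is a small self-contained sketch (simplified,
hypothetical types) of why such 'ref' values are not comparable: each
handler instance stores a pointer into its own result buffer, so two
instances positioned on the same physical row still yield different refs.

  #include <cstring>
  #include <cstdio>

  // Simplified stand-in for a handler that keeps rows in a local buffer
  // and remembers a pointer into that buffer as its "ref" (row id).
  struct FederatedLikeHandler
  {
    char local_buffer[64];
    const char *ref= nullptr;

    void position(const char *row, size_t len)
    {
      memcpy(local_buffer, row, len);
      ref= local_buffer;              // instance-specific pointer
    }
  };

  int main()
  {
    FederatedLikeHandler h1, h2;
    h1.position("same row", 9);
    h2.position("same row", 9);
    // Same physical row, but the stored refs differ, so comparing refs
    // across handler instances (as index_merge/ROR does) gives wrong answers.
    printf("refs equal: %d\n", h1.ref == h2.ref);   // prints 0
    return 0;
  }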
This is the first part of the fixes for MDEV-24097. This commit
contains the fixes for instability when testing Galera and when
restarting nodes quickly:
1) Protection against a "stuck" old SST process during the execution
of the new SST (after restarting the node) is now implemented for
mariabackup / xtrabackup, which should help to avoid almost all
conflicts due to the use of the same ports, both during testing
with mtr and when restarting nodes quickly in a production
environment.
2) Added more protection to the scripts against unexpected non-zero
return codes (in the commands for deleting temporary files, etc.).
3) Added protection against unexpected crashes during binlog transfer
(in SST scripts for rsync).
4) Spaces and some special characters in binlog filenames shouldn't
be a problem now (at the script level).
5) Daemon process termination tracking has been made more robust
against crashes due to unexpected termination of the previous SST
process while new scripts are running.
6) Reading SSL encryption parameters has been moved from the individual
SST scripts to the common wsrep_sst_common.sh script, which allows
unified error handling and diagnostics and simplifies script
revisions in the future.
7) Improved diagnostics of errors related to the use of openssl.
8) Corrections have been made for xtrabackup-v2 (both in the tests and in
the script code) that restore the operation of xtrabackup with updated
versions of InnoDB.
9) Fixed some tests for galera_3nodes, although the complete solution
for the problem of starting three nodes at the same time on fast
machines will be done in a separate commit.
No additional tests are required as this commit fixes problems with
existing tests.
make_join_select() calls const_cond->val_int(). There are edge cases
where const_cond may have a not-yet-optimized subquery.
(The subquery will have used_tables() covered by join->const_tables. It
will still have const_item()==false, so other parts of the optimizer
will not try to evaluate it. We should probably mark such subqueries
as constant but that is outside the scope of this MDEV)
This could cause out-of-order wsrep checkpoints due to wsrep-specific leader
code not being executed in `MYSQL_BIN_LOG::write_transaction_to_binlog_events`.
Move the original result assignment to before the wsrep logic to prevent that.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
MDEV-22617 Galera node crashes when trying to log to slow_log table in
streaming replication mode
Other things:
- Renamed wsrep_after_row() (two arguments) to
wsrep_after_row_internal() (one argument) to avoid depending on a
function signature with unused arguments.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Added test case
Small follow-up to MDEV-23175 to ensure the faster option on FreeBSD
and compatibility with Solaris, which isn't high resolution.
ftime is left as a backup in case an implementation doesn't
contain any of these clocks.
FreeBSD
$ ./unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 11
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3610295566
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 899
# myt.ticks.frequency : 136
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 7
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 26
# myt.nanoseconds.overhead : 19140
# myt.microseconds.overhead : 19036
# myt.milliseconds.overhead : 578
# myt.ticks.overhead : 21544
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
The old code erroneously used default_charset_info to compare field names.
default_charset_info can point to an arbitrary collation,
including ucs2*, utf16* and utf32* collations, which do not
support strcasecmp().
my_charset_utf8mb4_unicode_ci, which is used in this scenario:
CREATE TABLE t1 ENGINE=InnoDB WITH SYSTEM VERSIONING AS SELECT 0;
does not support strcasecmp() either.
Fixing the code to use Lex_ident::streq(), which uses
system_charset_info instead of default_charset_info.
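A minimal sketch of the idea (a simplified stand-in, not the real
Lex_ident::streq() implementation): identifier comparison should go through
a fixed, ASCII-based case folding rather than through whatever collation
default_charset_info currently points to.

  #include <cctype>

  // Simplified stand-in for Lex_ident::streq(): compare two identifiers
  // case-insensitively using a fixed folding, independent of the
  // connection's default character set.
  static bool ident_streq(const char *a, const char *b)
  {
    for (; *a && *b; a++, b++)
      if (tolower((unsigned char) *a) != tolower((unsigned char) *b))
        return false;
    return *a == *b;
  }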
vsnprintf() takes the space needed for the trailing '\0' into consideration and
copies only n-1 characters to the destination buffer.
With the old code, only sizeof(buf)-2 characters were copied, so the last
character of the message could be lost.
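A minimal sketch of the off-by-one (generic code, not the exact server
function): vsnprintf() already reserves room for the terminating '\0' within
the size passed to it, so the full buffer size should be given; passing
sizeof(buf) - 1 truncates one character too early.

  #include <cstdio>
  #include <cstdarg>
  #include <cstddef>

  static void format_message(char *buf, size_t buflen, const char *fmt, ...)
  {
    va_list args;
    va_start(args, fmt);
    // Correct: pass the full buffer size; vsnprintf() writes at most
    // buflen - 1 characters plus the trailing '\0' by itself.
    vsnprintf(buf, buflen, fmt, args);     // old code passed buflen - 1
    va_end(args);
  }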
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Since commit fb335b48b5 we may have
a null pointer in purge_sys.query when fetch_data_into_cache() is
invoked and innodb_force_recovery>4. This is because the call to
purge_sys.create() would be skipped.
fetch_data_into_cache(): Load the purge_sys pseudo transaction pointer
into a local variable (a null pointer if purge_sys is not initialized).
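A small sketch of the load-once pattern described above (simplified
stand-in types; the real code is in InnoDB's fetch_data_into_cache()):

  // Simplified stand-ins for the InnoDB structures involved.
  struct trx_t {};
  struct que_t { trx_t *trx= nullptr; };
  struct PurgeSys { que_t *query= nullptr; };   // null if create() was skipped

  // Read the pointer once; a null result means purge_sys was never
  // initialized (e.g. innodb_force_recovery > 4), so there is no purge
  // view to consult and the caller must handle that case explicitly.
  static const trx_t *purge_view_trx(const PurgeSys &purge_sys)
  {
    return purge_sys.query ? purge_sys.query->trx : nullptr;
  }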
This commit contains a fix where the replication write set for a CREATE TABLE
will contain, as certification keys, the table names of all FK references.
With this, all DML on the FK parent tables will conflict with the CREATE TABLE
statement.
There is also new test galera.MDEV-27276 to verify the fix.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This commit has an mtr test where two transactions delete a row from
two separate tables, which will cascade an FK delete for the same row in
a third table. The second replica node is configured with 2 applier threads,
and the test will fail if these two transactions are applied in parallel.
The actual fix, in this commit, is to mark a transaction as unsafe for
parallel applying when it traverses into a cascading delete operation.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
1) Removed symlinks that are not very well supported in tar under Windows.
2) Added comment + changed code formatting in viosslfactories.c
3) Fixed a small bug in the yassl code.
4) Fixed a typo in the script code.
MDEV-25803 excluded some cases from the key sort upon ALTER TABLE. That
in particular depends on the ALTER_ADD_INDEX flag. Creating a column of the
SERIAL data type missed that flag, even though the equivalent operation
alter table t1 add x bigint unsigned not null auto_increment unique;
does have the ALTER_ADD_INDEX flag.
Repeated execution of a query containing an IN clause with string literals,
in an environment where the server variable in_predicate_conversion_threshold
is set, results in abnormal server termination in case the query is run
as a Prepared Statement and charset conversion of the string values in the
query is required.
The reason for the abnormal server termination is that the instances of the class
Item_string created on transforming the IN clause into a subquery were allocated
on the runtime memory root, which is deallocated on finishing execution of the
Prepared Statement. On the other hand, references to Items placed on the deallocated
memory root still exist in objects of the class table_value_constr. Subsequent
running of the same prepared statement leads to dereferencing pointers to already
deallocated memory, which could lead to undefined behaviour.
To fix the issue, the values being pushed into the values list for the TVC are
created by cloning their original items. This way the cloned items are allocated
on the PS memroot and, as a consequence, no dangling pointers remain.
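As a rough sketch of the allocation rule behind the fix (hypothetical,
simplified types, not the real server classes): anything the TVC keeps
across executions must be a clone owned by the PS memroot, never a pointer
to a per-execution object.

  #include <deque>
  #include <string>
  #include <vector>

  // Stand-in for an Item_string literal produced while converting the IN list.
  struct ItemString { std::string value; };

  // Arena meant to outlive a single execution of the prepared statement.
  struct PsMemRoot
  {
    std::deque<ItemString> storage;             // stable addresses on push_back
    ItemString *clone(const ItemString &src)
    {
      storage.push_back(src);                   // deep copy owned by the PS root
      return &storage.back();
    }
  };

  // The TVC values list must reference only objects owned by the PS root,
  // never the per-execution copies that are freed after EXECUTE finishes.
  static void push_tvc_value(std::vector<ItemString*> &tvc_values,
                             const ItemString &converted, PsMemRoot &ps_root)
  {
    tvc_values.push_back(ps_root.clone(converted));
  }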
Consider the following use case:
MariaDB [test]> CREATE TABLE t1 (field1 BIGINT DEFAULT -1);
MariaDB [test]> CREATE VIEW v1 AS SELECT DISTINCT field1 FROM t1;
Repeated execution of the following query as a Prepared Statement
MariaDB [test]> PREPARE stmt FROM 'SELECT * FROM v1 WHERE field1 <=> NULL';
MariaDB [test]> EXECUTE stmt;
results in a crash for a server built with DEBUG.
MariaDB [test]> EXECUTE stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query
Assertion failed: (!result), function convert_const_to_int, file item_cmpfunc.cc, line 476.
Abort trap: 6 (core dumped)
The crash inside the function convert_const_to_int() happens because
the value -1 is stored in an instance of the class Field_longlong
on restoring its original value in the statement
result= field->store(orig_field_val, TRUE);
which assigns the value 1 to the variable 'result', with a subsequent
crash in the DBUG_ASSERT statement following it:
DBUG_ASSERT(!result);
The main question here is why this assertion failure happens on the second
execution of the prepared statement and doesn't on the first one.
On first handling of the statement
'EXECUTE stmt;'
a temporary table is created for serving the query involving the view 'v1'.
The table is created by the function create_tmp_table() in the following
call trace (trace #1):
JOIN::prepare (at sql_select.cc:725)
st_select_lex::handle_derived
LEX::handle_list_of_derived
TABLE_LIST::handle_derived
mysql_handle_single_derived
mysql_derived_prepare
select_union::create_result_table
create_tmp_table
Note that the data member TABLE::status of the TABLE instance returned by the
function create_tmp_table() has the value 0.
Later the function setup_table_map() is called on the TABLE instance just
created for the sake of the temporary table (call trace #2 is below):
JOIN::prepare (at sql_select.cc:737)
setup_tables_and_check_access
setup_tables
setup_table_map
where the data member TABLE::status is set to the value STATUS_NO_RECORD.
After that, when execution of the method JOIN::prepare reaches the call of
the function setup_without_group(), the following call trace is invoked:
JOIN::prepare
setup_without_group
setup_conds
Item_func::fix_fields
Item_func_equal::fix_length_and_dec
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func::convert_const_compared_to_int_field
convert_const_to_int
There is the following code snippet in the function convert_const_to_int()
at line item_cmpfunc.cc:448:
bool save_field_value= (field_item->const_item() ||
                        !(field->table->status & STATUS_NO_RECORD));
Since field->table->status has the STATUS_NO_RECORD bit set, the variable
save_field_value is false, and therefore neither the method
Field_longlong::val_int() nor the method Field_longlong::store() is called
on the Field instance that has the numeric value -1.
That is the reason why the first execution of the Prepared Statement for the query
'SELECT * FROM v1 WHERE field1 <=> NULL'
is successful.
On the second run of the statement 'EXECUTE stmt' a new temporary table
is also created by running call trace #1, but trace #2 is not executed
because the data member SELECT_LEX::first_cond_optimization has been set
to false on the first execution of the prepared statement (in the method
JOIN::optimize_inner()). As a consequence, the data member TABLE::status of
the temporary table just created doesn't have the STATUS_NO_RECORD flag set, and
therefore on re-execution of the prepared statement the methods
Field_longlong::val_int() and Field_longlong::store() are called for the field
having the value -1, and the DBUG_ASSERT(!result) fires.
To fix the issue, the data member TABLE::status has to be assigned the value
STATUS_NO_RECORD in every place where the macro empty_record() is called
to clear the record of a just-instantiated TABLE object created on behalf of
a new temporary table.
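A short sketch of the rule this enforces (simplified stand-ins, not the
actual server code): wherever a freshly instantiated TABLE has its record
emptied, the status flag is set as well, so that code such as
convert_const_to_int() sees that the table is not positioned on a row.

  #include <cstring>

  static const unsigned STATUS_NO_RECORD= 1;

  struct Table
  {
    unsigned char record[128];
    unsigned status= 0;
  };

  static void empty_record(Table *table)
  {
    memset(table->record, 0, sizeof(table->record));
  }

  // Every place that empties the record of a just-created (temporary)
  // TABLE also marks it as having no current row.
  static void init_fresh_table(Table *table)
  {
    empty_record(table);
    table->status|= STATUS_NO_RECORD;
  }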
Example build: ./BUILD/compile-pentium64-valgrind-max
Fixes:
- sp-no-valgrind failed if the binary was built for valgrind, as in this case
mem_root is allocated in very small hunks which the test cannot handle.
Fixed by testing for a valgrind build.
- truncate_notembedded failed in reap because more memory was used.
Fixed by allowing reap to fail too.
Follow-up to the fix for MDEV-25858: when test_if_skip_sort_order() decides
to use an index to satisfy an ORDER BY ... LIMIT clause, it should
disable the "Range Checked for Each Record" optimization.
Do this in all cases.
This enables optimizer_trace output for the next SQL command.
This is identical to doing the following manually:
- Store value of @@optimizer_trace
- Set @@optimizer_trace="enabled=on"
- Run query
- SELECT * from OPTIMIZER_TRACE
- Restore value of @@optimizer_trace
This is a great time saver when one wants to quickly check the optimizer
trace for a query in an mtr test.
The issue is that when recovery is about to create a new data or index
page, it checks whether the page already exists.
If the page does not exist (the file is too short) or contains a wrong checksum,
then the recovery code will recreate the page.
The bug was that the code that checked whether the page existed didn't take
encrypted pages into account.
Fixed by adding a check for whether the page could be encrypted.
I also added some code to silence decryption errors for new pages.
Test case and some inspiration for how to solve this come from
the pull request by alexandr.miloslavsky
create_log_files(): Check log_set_capacity() before modifying
or creating any log files.
innobase_start_or_create_for_mysql(): If create_log_files()
fails and we were initializing a new database, delete the
system tablespace files before exiting.
1. Galera SST scripts should use ssl_capath (not ssl_ca) for CA
directory. The current implementation tries to automatically
detect the path using the trailing slash in the ssl_ca variable
value, but this approach is not compatible with the server
configuration. Now, by analogy with the server, SST scripts
also use a separate ssl_capath variable. In addition, a similar
tcapath variable has been added for the old-style configuration
(in the "sst" section).
2. Openssl utility detection made more reliable.
3. Removed extra spaces in automatically generated command lines -
to simplify debugging of the SST scripts.
4. In general, the code for detecting the presence or absence of
auxiliary utilities has been improved - it is made more reliable
in some configurations (and for shells other than bash).
The mtr test galera.galera_UK_conflict has rather distorted logic in test scenario 2.
The test would always fail with:
CURRENT_TEST: galera.galera_UK_conflict
mysqltest: In included file "./include/galera_wait_sync_point.inc":
included from /home/seppo/work/wsrep/mariadb-server/mysql-test/suite/galera/t/galera_UK_conflict.test at line 216:
At line 3: query 'SET SESSION wsrep_on = 0' failed: 1179: You are not allowed to execute this command in a transaction
This happens because wait_condition is called in the wrong connection (node_1, which is executing the MST transaction).
The test is fixed by using the control connection for the wait condition check, and a new result is recorded as well.
Fixes to make the galera.MDEV-20793 test deterministic.
Specifically, after --send COMMIT, there is now a sync point to catch a known state of the COMMIT execution.
It's not printed and not cleaned up without perfschema,
so it isn't supposed to be written into either.
This fixes "Memory not freed" warnings when early command-line
options produce warnings in non-perfschema builds.