InnoDB could sometimes hang when triggering a log checkpoint. This is
due to commit 7b1252c03d (MDEV-24278),
which introduced an untimed wait to buf_flush_page_cleaner().
The hang was noticed by occasional failures of IMPORT TABLESPACE tests,
such as innodb.innodb-wl5522, which would (unnecessarily) invoke
log_make_checkpoint() from row_import_cleanup().
The reason for the hang was that buf_flush_page_cleaner() would enter an
untimed sleep even though buf_flush_sync_lsn was set. The exact failure
scenario is unclear, because buf_flush_sync_lsn should actually be
protected by buf_pool.flush_list_mutex. We prevent the hang by
invoking buf_pool.page_cleaner_set_idle(false) whenever we are
setting buf_flush_sync_lsn and signaling buf_pool.do_flush_list.
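A minimal self-contained sketch of that rule, written with standard C++
primitives (the names below are illustrative stand-ins for
buf_pool.flush_list_mutex, buf_pool.do_flush_list, buf_flush_sync_lsn and
the page cleaner idle state; this is not the actual InnoDB code):

  #include <condition_variable>
  #include <mutex>

  struct flush_coordinator
  {
    std::mutex mtx;                       // stands in for buf_pool.flush_list_mutex
    std::condition_variable do_flush;     // stands in for buf_pool.do_flush_list
    unsigned long long sync_target= 0;    // stands in for buf_flush_sync_lsn
    bool idle= true;                      // page cleaner "idle" state

    void request_sync_flush(unsigned long long lsn)
    {
      std::lock_guard<std::mutex> guard(mtx);
      if (lsn > sync_target)
      {
        sync_target= lsn;
        idle= false;            // the fix: never leave the worker marked idle
        do_flush.notify_one();  // while a sync flush request is pending
      }
    }
  };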
The bulk of these changes was originally developed as a preparation
for MDEV-26827, to invoke buf_flush_list() from fewer threads,
and tested on 10.6 by Matthias Leich.
This fix was tested by running 100 repetitions of 100 concurrent instances
of the test innodb.innodb-wl5522 on a RelWithDebInfo build, using ext4fs
and innodb_flush_method=O_DIRECT on a SATA SSD with 4096-byte block size.
During the test, the call to log_make_checkpoint() in row_import_cleanup()
was present.
buf_flush_list(): Make static.
buf_flush_wait(): Wait for buf_pool.get_oldest_modification()
to reach a target, by work done in the buf_flush_page_cleaner.
If buf_flush_sync_lsn is going to be set, we will invoke
buf_pool.page_cleaner_set_idle(false).
buf_flush_ahead(): If buf_flush_sync_lsn or buf_flush_async_lsn
is going to be set and the page cleaner woken up, we will invoke
buf_pool.page_cleaner_set_idle(false).
buf_flush_wait_flushed(): Invoke buf_flush_wait().
buf_flush_sync(): Invoke recv_sys.apply() at the start in case
crash recovery is active. Invoke buf_flush_wait().
buf_flush_sync_batch(): A lower-level variant of buf_flush_sync()
that is only called by recv_sys_t::apply().
buf_flush_sync_for_checkpoint(): Do not trigger log apply
or checkpoint during recovery.
buf_dblwr_t::create(): Only initiate a buffer pool flush, not
a checkpoint.
row_import_cleanup(): Do not unnecessarily invoke log_make_checkpoint().
Invoking buf_flush_list_space() before starting to generate redo log
for the imported tablespace should suffice.
srv_prepare_to_delete_redo_log_file():
Set recv_sys.recovery_on in order to prevent
buf_flush_sync_for_checkpoint() from initiating a checkpoint
while the log is inaccessible. Remove a wait loop that is already
part of buf_flush_sync().
Do not invoke fil_names_clear() if the log is being upgraded,
because the FILE_MODIFY record is specific to the latest format.
create_log_file(): Clear recv_sys.recovery_on only after calling
log_make_checkpoint(), to prevent buf_flush_page_cleaner from
invoking a checkpoint.
innodb_shutdown(): Simplify the logic in mariadb-backup --prepare.
os_aio_wait_until_no_pending_writes(): Update the function comment.
Apart from row_quiesce_table_start() during FLUSH TABLES...FOR EXPORT,
this is being called by buf_flush_list_space(), which is invoked
by ALTER TABLE...IMPORT TABLESPACE as well as some encryption operations.
Upstream Salsa-CI refactored the build process in
58880fcef5
This broke our custom direct invocation of install-build-deps.sh, as the
Salsa-CI images no longer contain it. Adapt the .build-script
equivalent to follow the new Salsa-CI method so that builds work again.
trx_purge_truncate_history(): Avoid a deadlock with
buf_pool_t::release_freed_page(). Page latches are not supposed
to be waited for while holding a mutex like buf_pool.mutex or
buf_pool.flush_list_mutex.
This regression was caused by
commit aaef2e1d8c (MDEV-27058).
Before that, trx_purge_truncate_history() would buffer-fix the block,
release buf_pool.flush_list_mutex, and then wait for the
exclusive page latch.
This bug led to occasional failures of the test
innodb.undo_truncate_recover.
The reason for the double lock was an extraneous ha_flush_logs() call.
Unlike upstream, it is unnecessary in MariaDB, which relies on the binlog
checkpoint mechanism to keep PURGE or RESET MASTER from interfering with
transaction recovery. That is, should a transaction be prepared but its
binlog file gone, the transaction has already been committed on disk as well.
Those facts have always been verified by the existing tests
binlog.binlog_{checkpoint,xa_recover}.test.
A regression test for the bug is included nonetheless.
This is the first part of the fixes for MDEV-24097. This commit
contains the fixes for instability when testing Galera and when
restarting nodes quickly:
1) Protection against a "stuck" old SST process during the execution
of the new SST (after restarting the node) is now implemented for
mariabackup / xtrabackup, which should help to avoid almost all
conflicts due to the use of the same ports - both during testing
with mtr and when restarting nodes quickly in a production
environment.
2) Added more protection in the scripts against unexpected non-zero
return codes (in the commands for deleting temporary files, etc.).
3) Added protection against unexpected crashes during binlog transfer
(in SST scripts for rsync).
4) Spaces and some special characters in binlog filenames shouldn't
be a problem now (at the script level).
5) Daemon process termination tracking has been made more robust
against crashes due to unexpected termination of the previous SST
process while new scripts are running.
6) Reading SSL encryption parameters has been moved from specific
SST scripts to a common wsrep_sst_common.sh script, which allows
unified error handling, unified diagnostics and simplifies script
revisions in the future.
7) Improved diagnostics of errors related to the use of openssl.
8) Corrections have been made for xtrabackup-v2 (both in tests and in
the script code) that restore the operation of xtrabackup with updated
versions of InnoDB.
9) Fixed some tests for galera_3nodes, although the complete solution
for the problem of starting three nodes at the same time on fast
machines will be done in a separate commit.
No additional tests are required as this commit fixes problems with
existing tests.
make_join_select() calls const_cond->val_int(). There are edge cases
where const_cond may have a not-yet optimized subquery.
(The subquery will have used_tables() covered by join->const_tables. It
will still have const_item()==false, so other parts of the optimizer
will not try to evaluate it. We should probably mark such subqueries
as constant but that is outside the scope of this MDEV)
This could cause out-of-order wsrep checkpoints, because the wsrep-specific
leader code was not executed in `MYSQL_BIN_LOG::write_transaction_to_binlog_events`.
Move the original result assignment to before the wsrep logic to prevent that.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
MDEV-22617 Galera node crashes when trying to log to slow_log table in
streaming replication mode
Other things:
- Changed the name of wsrep_after_row() (two arguments) to
wsrep_after_row_internal() (one argument) so that it does not depend
on a function signature with unused arguments.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Added test case
Problem:
========
When writing an XA based event to the binary log, an assert was
always referencing thd->lex->xa_opt. This variable, however, is
only set when using XA START, XA END, and XA COMMIT. When an
XA PREPARE statement is being processed, it is not guaranteed that
the xa_opt variable will be set (e.g. if executed within a stored
procedure). This caused valgrind to complain about accessing an
uninitialized variable.
Solution:
========
Before referencing xa_opt, ensure the context is valid such that
it is set.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Small follow-up to MDEV-23175 to ensure the faster option is used on FreeBSD
and to keep compatibility with Solaris, where the clock is not high resolution.
ftime is left as a backup in case an implementation doesn't
provide any of these clocks.
FreeBSD
$ ./unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 11
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3610295566
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 899
# myt.ticks.frequency : 136
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 7
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 26
# myt.nanoseconds.overhead : 19140
# myt.microseconds.overhead : 19036
# myt.milliseconds.overhead : 578
# myt.ticks.overhead : 21544
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
The old code erroneously used default_charset_info to compare field names.
default_charset_info can point to any arbitrary collation,
including ucs2*, utf16*, utf32* collations and others that do not
support strcasecmp().
my_charset_utf8mb4_unicode_ci, which is used in this scenario:
CREATE TABLE t1 ENGINE=InnoDB WITH SYSTEM VERSIONING AS SELECT 0;
does not support strcasecmp().
Fixing the code to use Lex_ident::streq(), which uses
system_charset_info instead of default_charset_info.
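For illustration, a hedged fragment of the safe comparison (name1 and name2
are hypothetical identifier strings; this is not the actual patch):

  /* system_charset_info always supports case-insensitive identifier
     comparison, unlike default_charset_info, which may not. */
  bool same_name= !my_strcasecmp(system_charset_info, name1, name2);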
eval_substr(): Do not allow the string buffer of the first argument
to be extended. Trim the length of the returned result if it would
exceed the end of the buffer.
In the case when filesort does not use addon field packing (because the
potential savings are too small) and uses fixed-width addon fields instead,
the field->pack() call can store fewer bytes than the maximum
possible field length, e.g. in the case of VARCHAR.
The memory between the packed length and addonf->length (the maximum length)
stayed uninitialized, which was reported by Valgrind/MSAN.
The problem was introduced by f52bf92014 in 10.5,
which removed the tail initialization (probably unintentionally).
Restoring the bzero() in the fixed length branch,
so that in the case when pack() stores fewer bytes than addonf->length says,
the trailing bytes get initialized.
Note, before f52bf92014, the bzero()
was under HAVE_valgrind conditional compilation. Now it's being added
unconditionally:
- MSAN also reported the problem, so it's not only Valgrind specific.
- As Serg proposed, conditional initialization is bad - it can have
potential security problems, as the non-initialized memory fragments
can store various pieces of essential information, e.g. passwords.
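A minimal self-contained illustration of the restored behaviour
(hypothetical names; this is not the filesort code itself):

  #include <string.h>

  /* Pack a value into a fixed-width addon slot and zero the unused tail,
     so that no uninitialized bytes reach the sort buffer. */
  static void pack_fixed_addon(unsigned char *slot, size_t slot_len,
                               const unsigned char *value, size_t value_len)
  {
    if (value_len > slot_len)
      value_len= slot_len;                    /* truncate to the slot width */
    memcpy(slot, value, value_len);           /* what Field::pack() effectively does */
    memset(slot + value_len, 0, slot_len - value_len); /* the restored bzero() */
  }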
- InnoDB fails to write page0 while attempting to recover page0
from the doublewrite buffer, because an incorrect size is being passed
to os_file_write(). The fix is that InnoDB should
properly close the parenthesis of the os_file_write() call in
deferred_dblwr(), and InnoDB should free the newly created tablespace
in case of an error in deferred_dblwr().
vsnprintf takes the space needed for the trailing '\0' into consideration, and copies only n-1 characters to the destination buffer.
With the old code, only sizeof(buf)-2 characters were copied; this caused the last character of the message to be lost.
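A self-contained illustration of the corrected call (the wrapper and buffer
names are hypothetical, not the actual code):

  #include <cstdarg>
  #include <cstddef>
  #include <cstdio>

  static void format_message(char *buf, size_t buf_size, const char *fmt, ...)
  {
    va_list args;
    va_start(args, fmt);
    /* vsnprintf() reserves one byte of buf_size for the trailing '\0',
       so passing the full buffer size keeps the last message character;
       passing buf_size - 1, as the old code did, silently dropped it. */
    vsnprintf(buf, buf_size, fmt, args);
    va_end(args);
  }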
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Since commit fb335b48b5 we may have
a null pointer in purge_sys.query when fetch_data_into_cache() is
invoked and innodb_force_recovery>4. This is because the call to
purge_sys.create() would be skipped.
fetch_data_into_cache(): Load the purge_sys pseudo transaction pointer
into a local variable (it is a null pointer if purge_sys is not initialized).
This commit contains a fix, where the replication write set for a CREATE TABLE
will contain, as certification keys, table names for all FK references.
With this, all DML for the FK parent tables will conflict with the CREATE TABLE
statement.
There is also new test galera.MDEV-27276 to verify the fix.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
In mysql_execute_command(), move the optimizer trace initialization to be
after the run_set_statement_if_requested() call.
Unfortunately, mysql_execute_command() code uses "goto error" a lot, and
this means optimizer trace code cannot use RAII objects. Work this around
by:
- Make Opt_trace_start a non-RAII object, add init() method.
- Move the code that writes the top-level object and array into
Opt_trace_start::init().
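A minimal sketch of the resulting two-phase pattern (a simplified,
hypothetical class body; the real Opt_trace_start carries more state):

  class Opt_trace_start_like
  {
    bool started= false;
  public:
    Opt_trace_start_like()= default;  // the constructor no longer starts the trace
    void init()                       // called only after SET STATEMENT handling
    {
      started= true;                  // write the top-level object and array here
    }
    ~Opt_trace_start_like()
    {
      if (started)
      { /* close the top-level array and object */ }
    }
  };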
This commit has an mtr test where two transactions delete a row from
two separate tables, which will cascade an FK delete for the same row in
a third table. The second replica node is configured with 2 applier threads,
and the test will fail if these two transactions are applied in parallel.
The actual fix, in this commit, is to mark a transaction as unsafe for
parallel applying when it traverses into a cascading delete operation.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
1) Removed symlinks that are not very well supported in tar under Windows.
2) Added comment + changed code formatting in viosslfactories.c
3) Fixed a small bug in the yassl code.
4) Fixed a typo in the script code.
MDEV-25803 excluded some cases from key sorting upon ALTER TABLE. That
particularly depends on the ALTER_ADD_INDEX flag. Creating a column of the
SERIAL data type missed that flag, even though the equivalent operation
alter table t1 add x bigint unsigned not null auto_increment unique;
has the ALTER_ADD_INDEX flag.
Repeated execution of a query containing an IN clause with string literals,
in an environment where the server variable in_predicate_conversion_threshold
is set, results in abnormal server termination in case the query is run
as a Prepared Statement and conversion of charsets for the string values in the
query is required.
The reason for the abnormal server termination is that the instances of the class
Item_string created on transforming the IN clause into a subquery were allocated
on the runtime memory root, which is deallocated on finishing execution of the
Prepared Statement. On the other hand, references to Items placed on the deallocated
memory root still exist in objects of the class table_value_constr. Subsequent running
of the same prepared statement leads to dereferencing of pointers to already
deallocated memory, which could lead to undefined behaviour.
To fix the issue, the values being pushed into a values list for TVC are created
by cloning their original items. This way the cloned items are allocated on
the PS memroot and, as a consequence, no dangling pointers remain.
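A hedged fragment of the idea (tvc_value_list and the surrounding code are
hypothetical; only the cloning onto the PS memory root is the point):

  /* Clone each literal onto the memory root that lives as long as the
     prepared statement, instead of keeping pointers into the runtime
     mem_root that is freed after each execution. */
  Item *cloned= item->build_clone(thd);
  if (cloned)
    tvc_value_list.push_back(cloned, thd->stmt_arena->mem_root);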
Consider the following use case:
MariaDB [test]> CREATE TABLE t1 (field1 BIGINT DEFAULT -1);
MariaDB [test]> CREATE VIEW v1 AS SELECT DISTINCT field1 FROM t1;
Repeated execution of the following query as a Prepared Statement
MariaDB [test]> PREPARE stmt FROM 'SELECT * FROM v1 WHERE field1 <=> NULL';
MariaDB [test]> EXECUTE stmt;
results in a crash for a server built with DEBUG.
MariaDB [test]> EXECUTE stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query
Assertion failed: (!result), function convert_const_to_int, file item_cmpfunc.cc, line 476.
Abort trap: 6 (core dumped)
The crash inside the function convert_const_to_int() happens because
the value -1 is stored in an instance of the class Field_longlong
on restoring its original value in the statement
  result= field->store(orig_field_val, TRUE);
which leads to assigning the value 1 to the variable 'result', with a subsequent
crash in the DBUG_ASSERT statement following it:
  DBUG_ASSERT(!result);
The main question here is why this assertion failure happens on the second
execution of the prepared statement and doesn't on the first one.
On first handling of the statement
'EXECUTE stmt;'
a temporary table is created for serving the query involving the view 'v1'.
The table is created by the function create_tmp_table() in the following
call trace (trace #1):
JOIN::prepare (at sql_select.cc:725)
st_select_lex::handle_derived
LEX::handle_list_of_derived
TABLE_LIST::handle_derived
mysql_handle_single_derived
mysql_derived_prepare
select_union::create_result_table
create_tmp_table
Note that the data member TABLE::status of a TABLE instance returned by the
function create_tmp_table() has the value 0.
Later the function setup_table_map() is called on the TABLE instance just
created for the sake of the temporary table (call trace #2 below):
JOIN::prepare (at sql_select.cc:737)
setup_tables_and_check_access
setup_tables
setup_table_map
where the data member TABLE::status is set to the value STATUS_NO_RECORD.
After that, when execution of the method JOIN::prepare reaches the call to
the function setup_without_group(), the following call trace is invoked:
JOIN::prepare
setup_without_group
setup_conds
Item_func::fix_fields
Item_func_equal::fix_length_and_dec
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func::convert_const_compared_to_int_field
convert_const_to_int
There is the following code snippet in the function convert_const_to_int()
at the line item_cmpfunc.cc:448
bool save_field_value= (field_item->const_item() ||
                        !(field->table->status & STATUS_NO_RECORD));
Since field->table->status has the STATUS_NO_RECORD bit set, the variable
save_field_value is false and therefore neither the method
Field_longlong::val_int() nor the method Field_longlong::store() is called
on the Field instance that has the numeric value -1.
That is the reason why the first execution of the Prepared Statement for the query
'SELECT * FROM v1 WHERE field1 <=> NULL'
is successful.
On the second run of the statement 'EXECUTE stmt' a new temporary table
is also created by running call trace #1, but trace #2 is not executed,
because the data member SELECT_LEX::first_cond_optimization has been set
to false on the first execution of the prepared statement (in the method
JOIN::optimize_inner()). As a consequence, the data member TABLE::status of
the temporary table just created doesn't have the STATUS_NO_RECORD flag set, and
therefore on re-execution of the prepared statement the methods
Field_longlong::val_int() and Field_longlong::store() are called for the field
having the value -1 and the DBUG_ASSERT(!result) is fired.
To fix the issue, the data member TABLE::status has to be assigned the value
STATUS_NO_RECORD in every place where the macro empty_record() is called
to clear the record of a just-instantiated TABLE object created on behalf of
a new temporary table.
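A hedged illustration of that pattern (a fragment only; the real change
touches every place that instantiates such a TABLE object):

  empty_record(table);              // clear the record buffer
  table->status= STATUS_NO_RECORD;  // mark that no row has been read yet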
... '/var/lib/mysql/aria_log_control' for exclusive use, error: 11
As such, don't lock under aria_readonly (which is set by opt_help in the handler
initialization).
Fixes: 8eba777c2b
Example build: ./BUILD/compile-pentium64-valgrind-max
Fixes:
- sp-no-valgrind failed if the binary was built for valgrind, as in this case
mem_root is allocated in very small chunks which the test cannot handle.
Fixed by making the test check for a valgrind build.
- truncate_notembedded failed in reap because more memory was used.
Fixed by allowing reap to fail too.
Followup to the fix for MDEV-25858: When test_if_skip_sort_order() decides
to use an index to satisfy an ORDER BY ... LIMIT clause, it should
disable the "Range Checked for Each Record" optimization.
Do this in all cases.
This enables optimizer_trace output for the next SQL command.
Identical to what one would have done with:
- Store value of @@optimizer_trace
- Set @optimizer_trace="enabled=on"
- Run query
- SELECT * from OPTIMIZER_TRACE
- Restore value of @@optimizer_trace
This is a great time saver when one wants to quickly check the optimizer
trace for a query in an mtr test.
The issue is that when recovery is about to create a new data or index
page, it checks if the page already exists.
If the page does not exist (the file is too short) or contains a wrong checksum,
then the recovery code will recreate the page.
The bug was that the code that checked if the page existed didn't take
encrypted pages into account.
Fixed by adding a check for whether the page could not be encrypted.
I also added some code to silence decryption errors for new pages.
The test case and some inspiration for how to solve this come from
the pull request by alexandr.miloslavsky.
create_log_files(): Check log_set_capacity() before modifying
or creating any log files.
innobase_start_or_create_for_mysql(): If create_log_files()
fails and we were initializing a new database, delete the
system tablespace files before exiting.
1. Galera SST scripts should use ssl_capath (not ssl_ca) for the CA
directory. The current implementation tries to automatically
detect the path using the trailing slash in the ssl_ca variable
value, but this approach is not compatible with the server
configuration. Now, by analogy with the server, SST scripts
also use a separate ssl_capath variable. In addition, a similar
tcapath variable has been added for the old-style configuration
(in the "sst" section).
2. Openssl utility detection made more reliable.
3. Removed extra spaces in automatically generated command lines -
to simplify debugging of the SST scripts.
4. In general, the code for detecting the presence or absence of
auxiliary utilities has been improved - it is made more reliable
in some configurations (and for shells other than bash).