Some statements are always replicated in STATEMENT binlog format.
So upon their execution, the current binlog format is temporarily
switched to STATEMENT even though the session's format is different.
This state, stored in THD's current_stmt_binlog_format, was getting
incorrectly masked by wsrep_forced_binlog_format, causing assertions
and unintended generation of row events.
Backported galera.galera_forced_binlog_format and added a test
specific to this case.
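A minimal sketch of the intended precedence, using simplified stand-in
names (THD_sketch, effective_binlog_format and the enum are illustrative;
only current_stmt_binlog_format and wsrep_forced_binlog_format come from
the message above):

  // Sketch only: the per-statement format recorded in
  // current_stmt_binlog_format must not be masked by
  // wsrep_forced_binlog_format for statements that are always
  // logged as STATEMENT.
  enum binlog_format_t { BINLOG_FORMAT_STMT, BINLOG_FORMAT_ROW,
                         BINLOG_FORMAT_MIXED };

  struct THD_sketch {                            // simplified stand-in for THD
    binlog_format_t variables_binlog_format;     // session format
    binlog_format_t current_stmt_binlog_format;  // per-statement override
  };

  static binlog_format_t wsrep_forced_binlog_format_sketch = BINLOG_FORMAT_ROW;

  // Hypothetical helper: which format applies to the statement being executed?
  binlog_format_t effective_binlog_format(const THD_sketch &thd)
  {
    // A statement that temporarily switched itself to STATEMENT keeps
    // STATEMENT regardless of the forced format; otherwise the forced
    // format takes precedence over the session format.
    if (thd.current_stmt_binlog_format == BINLOG_FORMAT_STMT)
      return BINLOG_FORMAT_STMT;
    return wsrep_forced_binlog_format_sketch;
  }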
In RHEL7/RHEL7.1 the libcrack behavior seems to have been modified so
that the "foobar" password is considered bad (due to the descending
"ba" sequence) earlier than expected. For details, google
cracklib-2.9.0-simplistic.patch.
Adjusted affected passwords not to have descending and ascending sequences.
Analysis:
-- InnoDB has n (> 0) redo-log files.
-- The first page of the redo log contains 2 checkpoint records at fixed
locations (the checkpoint is not encrypted).
-- Every checkpoint record contains up to 5 crypt_keys holding the keys
used for encryption/decryption.
-- On crash recovery we read the checkpoints from every file.
-- Recovery starts by reading forward from the latest checkpoint.
-- The problem is that the latest checkpoint might not always contain the
key needed to decrypt all the redo-log blocks (see MDEV-9422 for one example).
-- Furthermore, there is no way to tell whether a log block is corrupted
or encrypted.
For example, a checkpoint can contain the following keys:
write chk: 4 [ chk key ]: [ 5 1 ] [ 4 1 ] [ 3 1 ] [ 2 1 ] [ 1 1 ]
so over time we could have a checkpoint
write chk: 13 [ chk key ]: [ 14 1 ] [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ]
killall -9 mysqld causes crash recovery, and on crash recovery we read
as many checkpoints as there are log files, e.g.
read [ chk key ]: [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ] [ 9 1 ]
read [ chk key ]: [ 14 1 ] [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ] [ 9 1 ]
This is problematic: we could still need to scan log blocks from, e.g.,
checkpoint 4, and we no longer know the correct key.
CRYPT INFO: for checkpoint 14 search 4
CRYPT INFO: for checkpoint 13 search 4
CRYPT INFO: for checkpoint 12 search 4
CRYPT INFO: for checkpoint 11 search 4
CRYPT INFO: for checkpoint 10 search 4
CRYPT INFO: for checkpoint 9 search 4 (NOTE: NOT FOUND)
For every checkpoint, the code generated a new encryption key based on
the key from the encryption plugin and random numbers. Only the random
numbers are stored in the checkpoint.
Fix: Generate only one key for every log file. If the checkpoint
contains only one key, use that key to encrypt/decrypt all log blocks.
If the checkpoint contains more than one key (this is the case for
databases created with MariaDB server versions 10.1.0 - 10.1.12 when
log encryption was used) and the looked-up checkpoint_no is found among
the keys in the checkpoint, we use that key to decrypt the log block.
For encryption we always use the first key. If the looked-up
checkpoint_no is not found among the keys in the checkpoint, we use the
first key.
Also modified the code so that if the log is not encrypted, we do not
generate any empty keys. If we have a log block and no keys are found
in the checkpoint, we assume the log block is unencrypted. Log
corruption or missing keys is detected by comparing log block
checksums. If we have keys but the current log block checksum is
correct, we again assume the log block is unencrypted. This is because
the current implementation stores only the checksum computed before
encryption; a new checksum after encryption but before the disk write
is not stored anywhere.
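The key-selection rule for decryption, as a hedged C++ sketch (the type
and function names are illustrative, not the actual InnoDB routine; per
the description above, encryption always uses the first key):

  #include <vector>

  struct crypt_key_t { unsigned checkpoint_no; unsigned key_version; };

  // Returns the key to use for decrypting a log block, or nullptr when the
  // block should be treated as unencrypted (no keys stored in the checkpoint).
  const crypt_key_t *select_crypt_key(const std::vector<crypt_key_t> &keys,
                                      unsigned block_checkpoint_no)
  {
    if (keys.empty())
      return nullptr;               // log not encrypted: no keys were generated
    if (keys.size() == 1)
      return &keys[0];              // single key: used for all log blocks
    // Legacy case (logs written by 10.1.0 - 10.1.12 with encryption enabled):
    for (const crypt_key_t &k : keys)
      if (k.checkpoint_no == block_checkpoint_no)
        return &k;                  // matching checkpoint_no found
    return &keys[0];                // otherwise fall back to the first key
  }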
don't allocate all the stack, leave some stack for
function calls.
To test, I added the following line:
alloca_size = available_stack_size() - X
With X=4096 or less mysqld crashed; with X=8192 the mtr test passed.
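In code the idea is roughly the following; available_stack_size() is
stubbed here and the reserve value follows the experiment above, but the
real margin used in the fix may differ:

  #include <alloca.h>
  #include <stddef.h>

  // Stub standing in for a real "remaining stack space" computation.
  static size_t available_stack_size() { return 64 * 1024; }

  void use_stack_buffer()
  {
    const size_t reserve = 8192;                 // headroom for further calls
    size_t avail = available_stack_size();
    if (avail <= reserve)
      return;                                    // not enough stack left
    char *buf = static_cast<char *>(alloca(avail - reserve));
    buf[0] = 0;                                  // ... use buf ...
  }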
This was because Tablist can be NULL when no local tables are in the list.
modified: storage/connect/tabtbl.cpp
modified: storage/connect/mysql-test/connect/r/tbl.result
modified: storage/connect/mysql-test/connect/t/tbl.test
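The guard presumably amounts to something like this (the list node type
and the iterating function are illustrative, not the actual tabtbl.cpp
code):

  // Sketch: bail out early when the list of local tables is empty.
  struct tbl_node { tbl_node *next; /* ... per-table data ... */ };

  void process_local_tables(tbl_node *Tablist)
  {
    if (Tablist == nullptr)
      return;                        // no local tables: nothing to process
    for (tbl_node *p = Tablist; p != nullptr; p = p->next) {
      // ... handle each local table ...
    }
  }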
It could have happened that one of the previous tests had already
executed a buffer pool dump and set the status variable value, so when
it was checked, the check passed too early, before the dump started and
the dump file was created. See the more detailed explanation in MDEV-9713.
Fixed by waiting for the current time to change in case it equals the
timestamp in the status variable, and then checking that the status
variable not only matches the expected pattern but also differs from
the previous value, whatever it was.
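The synchronization idea, expressed as a generic C++ sketch rather than
the actual mysqltest code (the polling interval and timeout are
assumptions):

  #include <chrono>
  #include <ctime>
  #include <functional>
  #include <string>
  #include <thread>

  // Wait until the wall clock leaves the old timestamp's second, then poll
  // until the status text both matches the expected pattern and differs
  // from the previously observed value.
  bool wait_for_new_dump_status(const std::string &old_status,
                                std::time_t old_ts,
                                const std::string &pattern,
                                const std::function<std::string()> &read_status)
  {
    while (std::time(nullptr) == old_ts)          // let the second tick over
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
    for (int i = 0; i < 600; i++) {               // poll for up to ~60 s
      std::string now = read_status();
      if (now != old_status && now.find(pattern) != std::string::npos)
        return true;
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return false;
  }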
Make sure that on decryption we do not try to dereference a NULL
pointer, and if a page contains an undefined FIL_PAGE_FILE_FLUSH_LSN
field when the page is not the first page or is not in the system
tablespace, clear it.
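A sketch of the two guards; FIL_PAGE_FILE_FLUSH_LSN is the 8-byte field
at offset 26 of the page header, while the function and parameters here
are simplified:

  #include <cstring>

  static const unsigned FIL_PAGE_FILE_FLUSH_LSN = 26;

  void decrypt_page_sketch(unsigned char *page, const void *crypt_data,
                           unsigned space_id, unsigned page_no)
  {
    // Guard 1: never dereference a missing crypt_data.
    if (crypt_data != nullptr) {
      // ... perform the actual decryption using crypt_data ...
    }
    // Guard 2: only page 0 of the system tablespace legitimately carries
    // a flush LSN; elsewhere the field is undefined, so clear it.
    if (!(space_id == 0 && page_no == 0))
      std::memset(page + FIL_PAGE_FILE_FLUSH_LSN, 0, 8);
  }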
In the row_search_for_mysql function in XtraDB there was old logic
where the null bytes were initialized. This caused the server to think
that the key value is NULL and to continue on an incorrect path.
The logrotate script assumed an error if mysqladmin failed to connect
to the server while a mysqld process existed. However, a non-system
instance of mysqld can be running (e.g. in docker), making this
assumption wrong.
Check for the existence of the pid file instead.
* only copy args[0] to args[2] after fix_fields (when all item
substitutions have already happened)
* change QT_ITEM_FUNC_NULLIF_TO_CASE (which allows printing NULLIF
as CASE) to QT_ITEM_ORIGINAL_FUNC_NULLIF (which prohibits it),
so that NULLIF-to-CASE printing is allowed by default and only
disabled explicitly for SHOW VIEW|FUNCTION|PROCEDURE and
mysql_make_view. By default it is allowed (in particular in error
messages and debug output, which can happen anytime before or after
the optimizer); see the sketch below.
don't do special SUM_FUNC_ITEM treatment in NULLIF for views
(as before), but do it for derived tables (when
context_analysis_only == CONTEXT_ANALYSIS_ONLY_DERIVED)
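Roughly, the flag from the change above gates how NULLIF is rendered; a
hedged sketch (the flag name comes from the message, the body is
illustrative):

  #include <string>

  enum { QT_ITEM_ORIGINAL_FUNC_NULLIF = 1 << 0 };   // simplified flag value

  // Print NULLIF(a,b) either in its original form or as the equivalent
  // CASE, depending on whether the caller asked for the original function.
  std::string print_nullif(unsigned query_type,
                           const std::string &a, const std::string &b)
  {
    if (query_type & QT_ITEM_ORIGINAL_FUNC_NULLIF)
      return "nullif(" + a + "," + b + ")";         // SHOW VIEW/FUNC/PROC path
    return "(case when " + a + " = " + b + " then NULL else " + a + " end)";
  }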
There was a race between end_slave() and cleanup code at the end of
handle_slave_sql(). This could cause access to master_info_index and
global_rpl_thread_pool after they had been freed.
Fix by skipping that cleanup if server shutdown is in progress, as is done
in other parts of the code as well (the cleanup, which stops worker threads
that are not needed anymore, is redundant anyway when the server is shutting
down).
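The shape of the fix, sketched with a stand-in shutdown flag (the exact
flag and function in the actual code may differ):

  // Stands in for the server's shutdown flag.
  static volatile bool shutdown_in_progress = false;

  void slave_sql_end_cleanup_sketch()
  {
    if (shutdown_in_progress)
      return;   // shutdown already stops the worker threads; skipping here
                // avoids touching master_info_index / global_rpl_thread_pool
                // after they may have been freed
    // ... normal cleanup: stop parallel replication worker threads ...
  }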
During SST, since wsrep_sst_rsync waits for mysqld to create
"tables_flushed" file after it has successfully executed FTWRL,
it would wait forever if FTWRL fails.
Fixed by introducing a mechanism to report failure to the script.
The main.merge test case was failing when tested using the row-based
binlog format.
While analyzing the issue, the following problems were found:
a) The server calls binlog-related code even when a statement will
not be binlogged;
b) The child table list was not present in the table structure by the
time the CREATE TABLE statement was generated;
c) The tables in the child table list are not yet opened when
generating the table create info using row-based replication;
d) CREATE TABLE LIKE TEMP_TABLE does not preserve the original table's
storage engine when using row-based replication.
This patch addresses all of the above issues.
@ sql/sql_class.h
Added a function to determine whether the binary log is disabled for
the current session. This is related to issue (a) above.
@ sql/sql_table.cc
Added code to skip binary-logging-related code if the statement will
not be binlogged. This is related to issue (a) above.
Added code to add the children to the query list of the table whose
CREATE TABLE will be generated. This is related to issue (b) above.
Added code to force the storage engine to be written into the
CREATE TABLE. This is related to issue (d) above.
@ storage/myisammrg/ha_myisammrg.cc
Added a check to skip getting info about a child table if the child
table is not open. This is related to issue (c) above.
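A sketch of the helper mentioned for issue (a), under simplified
stand-in names (the real check lives in THD and consults the binlog
state and the session's option bits):

  // Simplified stand-in for THD: the binary log is effectively disabled
  // for a session when the log itself is closed or the session has
  // turned binary logging off.
  class SessionSketch {
  public:
    bool binlog_is_open;     // stands in for mysql_bin_log.is_open()
    bool option_bin_log;     // stands in for OPTION_BIN_LOG in option_bits

    bool binlog_is_disabled() const
    {
      return !binlog_is_open || !option_bin_log;
    }
  };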