* mysqld_safe: Since the wsrep_on variable is mandatory in 10.1, skip wsrep
position recovery if it is OFF.
* mysqld: Remove "-wsrep" from server version
* mysqld: Remove wsrep patch version from @@version_comment
* mysqld: Introduce @@wsrep_patch_version
that was apparently lost in 20c23048:
commit 20c23048c1
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Sun May 17 14:14:16 2015 +0300
MDEV-8164: Server crashes in pfs_mutex_enter_func after fil_crypt_is_closing
This also reverts 8635c4b4:
commit 8635c4b4e6
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Thu May 21 11:02:03 2015 +0300
Fix test failure.
When the slave processes the master restart format_description event,
parallel replication needs to complete any prior events before processing
the restart event (which closes temporary tables and such stuff).
This happens in wait_for_workers_idle(); however, it was not waiting long
enough. The wait used wait_for_prior_commit(), but at that point tables
can still be open, which led to an assertion failure in this case.
So change wait_for_workers_idle() to wait until all worker threads have
reached finish_event_group(), at which point all tables should have been
closed.
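A minimal standalone sketch of that waiting scheme, using std::condition_variable
and stand-in names rather than the actual rpl_parallel.cc code: each worker
decrements a counter when it reaches the end of its event group, and the
coordinator waits for the counter to reach zero.

    #include <condition_variable>
    #include <mutex>

    class worker_pool {
      std::mutex mtx;
      std::condition_variable cond;
      int active_event_groups = 0;   // workers not yet at finish_event_group()

    public:
      void start_event_group() {
        std::lock_guard<std::mutex> lk(mtx);
        ++active_event_groups;
      }

      // Called by a worker at the very end of an event group, after all its
      // tables (including temporary tables) have been closed.
      void finish_event_group() {
        std::lock_guard<std::mutex> lk(mtx);
        if (--active_event_groups == 0)
          cond.notify_all();
      }

      // Called by the coordinator before it processes the restart
      // format_description event.
      void wait_for_workers_idle() {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [this] { return active_event_groups == 0; });
      }
    };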
Make sure that when we publish the crypt_data we access the
memory cache of the tablespace crypt_data. Make sure that
crypt_data is stored whenever it is really needed.
All this is not yet enough in my opinion because:
sql/encryption.cc has DBUG_ASSERT(scheme->type == 1) i.e.
crypt_data->type == CRYPT_SCHEME_1
However, from InnoDB's point of view we have a global crypt_data
for every tablespace. When we change fields of crypt_data
we take a mutex. However, when we use crypt_data for
encryption/decryption we use a pointer to this global
structure with no mutex protecting against concurrent changes to
crypt_data.
Tablespace encryption starts in fil_crypt_start_encrypting_space
from crypt_data that has crypt_data->type = CRYPT_SCHEME_UNENCRYPTED;
later we write CRYPT_SCHEME_1 to page 0 and finally we publish
that to the memory cache.
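A sketch of the kind of protection this comment asks for, with simplified
stand-in types rather than the real fil0crypt.cc structures: both publishing a
new scheme type and reading it on the encryption/decryption path go through the
crypt_data mutex, so readers cannot race with writers.

    #include <mutex>

    enum crypt_scheme { CRYPT_SCHEME_UNENCRYPTED = 0, CRYPT_SCHEME_1 = 1 };

    // Simplified stand-in for the shared per-tablespace crypt_data.
    struct crypt_data_t {
      std::mutex mutex;
      crypt_scheme type = CRYPT_SCHEME_UNENCRYPTED;
    };

    // Writers: change the type only under the mutex.
    void publish_scheme(crypt_data_t *crypt_data, crypt_scheme new_type) {
      std::lock_guard<std::mutex> lk(crypt_data->mutex);
      crypt_data->type = new_type;
    }

    // Readers on the encrypt/decrypt path: take a consistent snapshot under
    // the same mutex instead of dereferencing the shared pointer unprotected.
    crypt_scheme current_scheme(crypt_data_t *crypt_data) {
      std::lock_guard<std::mutex> lk(crypt_data->mutex);
      return crypt_data->type;
    }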
Analysis: The problem was that unencrypted tablespaces might not have
crypt_data stored on disk.
Fixed by always creating crypt_data in the memory cache of the tablespace.
MDEV-8138: strange results from encrypt-and-grep test
Analysis: crypt_data->type was not updated correctly in the memory
cache. This caused a problem with the transitions
encrypted => unencrypted => encrypted.
Fixed by correctly updating the cached crypt_data->type based on the
current srv_encrypt_tables value to either CRYPT_SCHEME_1 or
CRYPT_SCHEME_UNENCRYPTED.
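A tiny standalone sketch of the rule the fix enforces (simplified names; the
real code updates fil_space_crypt_t in fil0crypt.cc): the cached scheme type is
recomputed from the current srv_encrypt_tables value instead of keeping a stale
value.

    enum crypt_scheme { CRYPT_SCHEME_UNENCRYPTED = 0, CRYPT_SCHEME_1 = 1 };

    // Recompute the cached type whenever the rotation code re-evaluates a
    // tablespace, so encrypted => unencrypted => encrypted transitions are
    // reflected in the memory cache.
    crypt_scheme cached_scheme(bool srv_encrypt_tables) {
      return srv_encrypt_tables ? CRYPT_SCHEME_1 : CRYPT_SCHEME_UNENCRYPTED;
    }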
that is, after
commit 2300fe2e0e
Author: Sergei Golubchik <serg@mariadb.org>
Date: Wed May 13 21:57:24 2015 +0200
Identical key derivation code in XtraDB/InnoDB/Aria
* Extract it into the "encryption_scheme" service.
* Make these engines to use the service, remove duplicate code.
* Change MY_AES_xxx error codes, to return them safely
from encryption_scheme_encrypt/decrypt without conflicting
with ENCRYPTION_SCHEME_KEY_INVALID error
Analysis: The problem was that we did create crypt data for an encrypted table,
but this new crypt data was not written to page 0. Instead, a default crypt data
was written to page 0 at table creation.
Fixed by explicitly writing the new crypt data to page 0 after successful
table creation.
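A heavily simplified sketch of the ordering the fix enforces; the helper names
here are hypothetical and the real functions in fil0crypt.cc take many more
parameters.

    // Stand-ins for the real encryption metadata and page-0 writer.
    struct crypt_data_t { int type; unsigned key_id; };

    crypt_data_t *create_crypt_data(unsigned key_id) {
      return new crypt_data_t{1 /* CRYPT_SCHEME_1 */, key_id};
    }

    void write_crypt_data_to_page0(const crypt_data_t *) { /* flush to page 0 */ }

    bool create_encrypted_table(unsigned key_id) {
      crypt_data_t *crypt_data = create_crypt_data(key_id);
      if (!crypt_data)
        return false;
      // Previously page 0 ended up with a *default* crypt data written at
      // table creation; the fix writes the crypt data actually created for
      // the table once creation has succeeded.
      write_crypt_data_to_page0(crypt_data);
      delete crypt_data;
      return true;
    }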
fix encryption of the last partial block
* now really encrypt it, using key and iv
* support the case of very short plaintext (less than one block)
* recommend aes_ctr over aes_cbc, because the former
doesn't have problems with partial blocks
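A standalone OpenSSL illustration (not the MariaDB my_aes_* wrappers) of why
CTR is convenient for a partial last block: it is a stream mode, so a 5-byte
plaintext produces a 5-byte ciphertext with no padding to worry about.

    #include <openssl/evp.h>
    #include <cstdio>
    #include <cstring>

    int main() {
      unsigned char key[16], iv[16];
      memset(key, 0x11, sizeof(key));   // fixed demo key; never hard-code real keys
      memset(iv, 0x22, sizeof(iv));     // demo counter/IV

      const unsigned char plain[] = "hello";   // 5 bytes, less than one AES block
      unsigned char cipher[16];
      int out_len = 0, final_len = 0;

      EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
      EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), nullptr, key, iv);
      EVP_EncryptUpdate(ctx, cipher, &out_len, plain, 5);
      EVP_EncryptFinal_ex(ctx, cipher + out_len, &final_len);   // adds nothing in CTR
      EVP_CIPHER_CTX_free(ctx);

      printf("ciphertext bytes: %d\n", out_len + final_len);    // prints 5
      return 0;
    }

With OpenSSL installed this builds with g++ demo.cc -lcrypto and shows the
ciphertext staying the same length as the short plaintext; CBC would instead
force handling of a padded or specially-stolen final block.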
.. wsrep.binlog_format, wsrep.mdev_6832 fail in buildbot
Galera-3.9 logs an additional warning in the error log if
it fails to find the gvwstate.dat file. Update wsrep/suite.pm.
In optimistic parallel replication, it is not safe to try to run a following
transaction in parallel with a DDL statement, and there is code to prevent
this.
However, the code was missing the case where the DDL is the very first event
after slave start. In this case, following transactions could run in
parallel with the DDL, which can cause the slave to hang or, in unlucky
cases, even corrupt the slave.
table
Performance schema discovery fails if the connection has no active database set.
This happened due to a restriction in the SQL parser: a table name with no
database name is ambiguous in such a case.
Fixed by temporarily substituting the default database with the database of the
table being discovered.
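A sketch of the save/substitute/restore pattern the fix uses, with illustrative
stand-in types (the real change manipulates the session's default database
around discovery):

    #include <string>

    struct session {
      std::string current_db;   // empty when no default database is set
    };

    // Discovery needs unqualified table names to resolve against the database
    // of the table being discovered, so that database is made the default for
    // the duration and the user's original default is restored afterwards.
    std::string discover_with_substituted_db(session &s, const std::string &db,
                                             const std::string &table) {
      std::string saved_db = s.current_db;
      s.current_db = db;                    // temporary substitution

      std::string sql = "CREATE TABLE `" + table + "` ( /* discovered columns */ )";

      s.current_db = saved_db;              // restore the original default database
      return sql;
    }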
The problem was that the information schema tables innodb_tablespaces_encryption
and innodb_tablespaces_scrubbing were missing the required check of whether
InnoDB is enabled or not.
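A sketch of the missing guard (illustrative names; the real fill functions live
in InnoDB's i_s.cc and check the plugin state):

    // Stand-in for the real "is the InnoDB plugin enabled" check.
    static bool innodb_is_enabled() { return false; }

    static int i_s_tablespaces_encryption_fill() {
      if (!innodb_is_enabled())
        return 0;     // engine not available: report an empty result set
      // ... iterate tablespaces and fill INFORMATION_SCHEMA rows ...
      return 0;
    }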
The slave SQL thread was clearing serial_rgi->thd before deleting
serial_rgi, which could cause access to NULL THD.
The clearing was introduced in commit
2e100cc5a4 and is just plain wrong. So revert
that part (single line) of that commit.
Thanks to Daniel Black for bug analysis and test case.
There was a rare race, where a deadlock error might not be correctly
handled, causing the slave to stop with something like this in the error
log:
150423 14:04:10 [ERROR] Slave SQL: Connection was killed, Gtid 0-1-2, Internal MariaDB error code: 1927
150423 14:04:10 [Warning] Slave: Connection was killed Error_code: 1927
150423 14:04:10 [Warning] Slave: Deadlock found when trying to get lock; try restarting transaction Error_code: 1213
150423 14:04:10 [Warning] Slave: Connection was killed Error_code: 1927
150423 14:04:10 [Warning] Slave: Connection was killed Error_code: 1927
150423 14:04:10 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'master-bin.000001' position 1234
The problem was incorrect error handling. When a deadlock is detected, it
causes a KILL CONNECTION on the offending thread. This error is then later
converted to a deadlock error, and the transaction is retried.
However, the deadlock error was not cleared at the start of the retry, nor
was the lingering kill signal. So it was possible to get another deadlock
kill early during retry. If this happened with particular thread
scheduling/timing, it was possible that the new KILL CONNECTION error was
masked by the earlier deadlock error, so that the second kill was not
properly converted into a deadlock error and retry.
This patch adds code that clears the old error and killed flag before
starting the retry. It also adds code to handle a deadlock kill caught in a
couple of places where it was not handled before.
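A standalone sketch of that retry rule, with a simplified stand-in for the
per-worker state (the server keeps this spread across THD and rpl_group_info):

    struct worker_state {
      int  stored_errno = 0;   // e.g. the deadlock error from the last attempt
      bool killed = false;     // lingering KILL signal from the deadlock handling
    };

    void retry_event_group(worker_state &w) {
      // Wipe both the old error and the old kill flag before retrying, so a
      // fresh deadlock kill during the retry cannot be masked by stale state
      // and is again converted into a deadlock error plus another retry.
      w.stored_errno = 0;
      w.killed = false;
      // ... re-execute the events of the group from the relay log ...
    }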
This was a regression from the patch for MDEV-7668.
A test was incorrect, so the slave would not properly handle re-using
temporary tables, which led to replication failure in this case.
Make sure that in parallel replication, we execute wait_for_prior_commit()
before setting table->in_use for a temporary table. Otherwise we can end up
with two parallel replication worker threads competing with each other for
use of a temporary table.
Re-factor the use of find_temporary_table() to be able to handle errors
in the caller (as wait_for_prior_commit() can return an error in case of
a deadlock kill).
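A sketch of the ordering constraint, with illustrative stand-ins for THD and
TABLE and a stub in place of the real wait_for_prior_commit():

    struct THD {};
    struct TABLE { THD *in_use = nullptr; };

    // Stub for the real wait, which can block and can fail with a
    // deadlock-kill error.
    int wait_for_prior_commit_stub(THD *) { return 0; }

    int use_temporary_table(THD *thd, TABLE *table) {
      // Wait for the prior event group first: two worker threads must never
      // believe they own the same temporary table at the same time.
      if (int err = wait_for_prior_commit_stub(thd))
        return err;           // propagate a deadlock-kill error to the caller
      table->in_use = thd;    // only now is it safe to claim the table
      return 0;
    }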