- Adding a new class Item_args, representing the argument array of a
regular or aggregate function.
- Adding a new class Item_func_or_sum,
a parent class for Item_func and Item_sum.
- Moving Item_result_field::name() to Item_func_or_sum,
as name() is not needed at the Item_result_field level.
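A minimal sketch of the resulting hierarchy, keeping only the class names
Item_args, Item_func_or_sum, Item_func and Item_sum from the commit; the
rest is simplified stand-ins, not the real server classes:

    #include <cstddef>

    // Simplified stand-ins for the existing Item and Item_result_field classes.
    struct Item { virtual ~Item() {} };
    struct Item_result_field : public Item {};

    // New: holds the argument array shared by regular and aggregate functions.
    class Item_args
    {
    protected:
      Item **args;
      unsigned arg_count;
    public:
      Item_args() : args(NULL), arg_count(0) {}
      Item_args(Item **argv, unsigned argc) : args(argv), arg_count(argc) {}
      unsigned argument_count() const { return arg_count; }
    };

    // New: common parent of Item_func and Item_sum; name() lives here
    // instead of on Item_result_field, where it was not needed.
    class Item_func_or_sum : public Item_result_field, public Item_args
    {
    public:
      Item_func_or_sum() {}
      Item_func_or_sum(Item **argv, unsigned argc) : Item_args(argv, argc) {}
      virtual const char *name() const= 0;
    };

    class Item_func : public Item_func_or_sum
    {
    public:
      const char *name() const { return "func"; }
    };

    class Item_sum : public Item_func_or_sum
    {
    public:
      const char *name() const { return "sum"; }
    };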
Add some suppressions that were missing. They cover the case where a STOP
SLAVE is executed early during I/O thread startup, while the thread is still
negotiating with the master. The master connection may then be killed in the
middle of a mysql_real_query(), which is not a test failure if the cause is
a network error.
This also caught one real code error, fixed with this commit: The I/O thread
would fail to automatically reconnect if a network error happened while
fetching the value of @@GLOBAL.gtid_domain_id.
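The shape of the fix, as a sketch against the client API only; the
is_network_error() helper, the chosen error codes and the reconnect flag
are simplified stand-ins for the I/O thread's actual retry logic:

    #include <mysql.h>
    #include <errmsg.h>    // CR_SERVER_LOST, CR_SERVER_GONE_ERROR, ...
    #include <cstdlib>
    #include <cstring>

    // Client errors that should trigger a reconnect attempt rather than
    // a hard I/O thread failure.
    static bool is_network_error(unsigned int err)
    {
      return err == CR_CONNECTION_ERROR || err == CR_CONN_HOST_ERROR ||
             err == CR_SERVER_GONE_ERROR || err == CR_SERVER_LOST;
    }

    // Fetch @@GLOBAL.gtid_domain_id during master negotiation.  On a
    // network error, tell the caller to reconnect instead of failing.
    static int fetch_gtid_domain_id(MYSQL *mysql, unsigned long *domain_id,
                                    bool *reconnect)
    {
      const char *query= "SELECT @@GLOBAL.gtid_domain_id";
      *reconnect= false;
      if (mysql_real_query(mysql, query, (unsigned long) strlen(query)))
      {
        if (is_network_error(mysql_errno(mysql)))
          *reconnect= true;          // let the I/O thread reconnect and retry
        return 1;
      }
      MYSQL_RES *res= mysql_store_result(mysql);
      if (!res)
        return 1;
      MYSQL_ROW row= mysql_fetch_row(res);
      if (row && row[0])
        *domain_id= strtoul(row[0], NULL, 10);
      mysql_free_result(res);
      return 0;
    }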
Make sure that in parallel replication, we execute wait_for_prior_commit()
before setting table->in_use for a temporary table. Otherwise we can end up
with two parallel replication worker threads competing for the same
temporary table.
Refactor the use of find_temporary_table() so that errors can be handled
in the caller (wait_for_prior_commit() can return an error in case of a
deadlock kill).
[This commit was cherry-picked to be able to merge MDEV-7936, of which it
is a prerequisite, into both 10.0 and 10.1.]
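A simplified sketch of the required ordering; the types and the lookup loop
are stand-ins, only the wait_for_prior_commit()-before-in_use rule is the
point here:

    #include <cstring>

    struct THD;

    struct TABLE
    {
      THD *in_use;
      TABLE *next;
      char key[64];
    };

    struct THD
    {
      TABLE *temporary_tables;
      bool rgi_slave_parallel;          // executing in a parallel-replication worker

      // In the server this suspends until all prior transactions in the
      // replication group have committed; non-zero means deadlock kill.
      int wait_for_prior_commit() { return 0; }
    };

    // Look up a temporary table, but wait for all prior transactions to
    // commit *before* claiming it with table->in_use, so two parallel
    // workers never use the same temporary table at the same time.
    static int use_temporary_table(THD *thd, const char *key, TABLE **out)
    {
      *out= NULL;
      for (TABLE *t= thd->temporary_tables; t; t= t->next)
      {
        if (strcmp(t->key, key) == 0)
        {
          if (thd->rgi_slave_parallel && thd->wait_for_prior_commit())
            return 1;                   // deadlock kill: caller must handle it
          t->in_use= thd;               // only mark the table in use after the wait
          *out= t;
          break;
        }
      }
      return 0;
    }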
Parallel replication depends on locking (table locks, row locks, etc.) to
prevent two conflicting transactions from running and committing in parallel.
But temporary tables are designed to be visible only to one thread, and have
no such locking.
In the concrete issue, an intermediate master could commit a CREATE TEMPORARY
TABLE in the same group commit as an INSERT into that table. Thus, a
lower-level slave could attempt to run them in parallel and get an error.
More generally, we need protection from parallel replication trying to run
transactions in parallel that access a common temporary table.
This patch simply causes use of a temporary table from parallel replication
to wait for all previous transactions to commit, serialising the replication
at that point.
(More fine-grained locking could possibly be added later. However, using
temporary tables with statement-based replication is normally undesirable
in any case; for example, a server restart will lose the temporary tables
and can break replication.)
Note that row-based replication is not affected, as it does not use any
temporary tables on the slave side.
This patch also cleans up the locking that protects the list of
temporary tables in Relay_log_info. This used to take the
rli->data_lock at the end of every statement, which is very bad for
concurrency. With this patch, the lock is not taken unless temporary
tables (with statement-based binlogging) are in use on the slave.
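A sketch of the locking cleanup, with Relay_log_info reduced to the two
relevant members and the handover function invented for illustration:

    #include <pthread.h>
    #include <cstddef>

    struct TABLE;                              // opaque here

    // Reduced stand-in for Relay_log_info.
    struct Relay_log_info
    {
      pthread_mutex_t data_lock;
      TABLE *save_temporary_tables;            // shared list of slave temp tables
    };

    // End-of-statement handling.  Previously rli->data_lock was taken
    // unconditionally here; now it is only taken when there actually are
    // statement-based temporary tables to hand back to the shared list.
    static void save_temp_tables(Relay_log_info *rli, TABLE *thd_temp_tables)
    {
      if (!thd_temp_tables)
        return;                                // common case: no temp tables, no lock
      pthread_mutex_lock(&rli->data_lock);
      rli->save_temporary_tables= thd_temp_tables;
      pthread_mutex_unlock(&rli->data_lock);
    }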
that is, after
commit dd8f931957
Author: Sergei Golubchik <serg@mariadb.org>
Date: Fri Apr 10 02:36:54 2015 +0200
be less annoying about sysvar-based table attributes
do not *always* add them to the create table definition,
but only when a sysvar value is different from a default.
also, when adding them - don't quote numbers
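Roughly what that rule amounts to, as a self-contained sketch; the attribute
struct and helper names here are made up for illustration, only the
only-if-non-default and don't-quote-numbers behavior comes from the commit:

    #include <string>
    #include <cctype>

    // Made-up stand-in for a sysvar-backed table attribute.
    struct SysvarTableAttr
    {
      const char *name;
      std::string value;           // current session value
      std::string default_value;   // compiled-in default
    };

    static bool is_number(const std::string &v)
    {
      if (v.empty())
        return false;
      for (std::string::size_type i= 0; i < v.size(); i++)
        if (!isdigit((unsigned char) v[i]))
          return false;
      return true;
    }

    // Append the attribute to the CREATE TABLE text only when the value
    // differs from its default; numbers are emitted unquoted.
    static void append_sysvar_attr(std::string &create_def,
                                   const SysvarTableAttr &attr)
    {
      if (attr.value == attr.default_value)
        return;                    // default value: don't clutter SHOW CREATE TABLE
      create_def+= " `";
      create_def+= attr.name;
      create_def+= "`=";
      if (is_number(attr.value))
        create_def+= attr.value;
      else
        create_def+= "'" + attr.value + "'";
    }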
If a flag is supported only for C or only for C++, add it to the
corresponding compiler option list. The old behavior was to always add it
to both, but only if it was supported by both.
do not *always* add them to the create table definition,
but only when a sysvar value is different from a default.
also, when adding them - don't quote numbers
fix the sys_var->is_default() method (it was using the default_val property
of a global sys_var object to track per-session state):
* move timestamp to a dedicated Sys_var_timestamp class
(in fact, rename Sys_var_session_special_double to Sys_var_timestamp)
* make session_is_default a virtual method with a special implementation
for timestamps
* other variables have no special behavior for default values,
so their session_is_default() can simply always return false.
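A reduced sketch of the shape of that change; Sys_var_timestamp and
session_is_default() are from the commit, while THD and the stored state
are simplified placeholders:

    // Simplified session state: 0 means the user never set @@timestamp.
    struct THD
    {
      double user_set_timestamp;
    };

    // Base class: no per-session notion of "default", so the virtual
    // session_is_default() simply returns false for ordinary variables.
    class sys_var
    {
    public:
      virtual ~sys_var() {}
      virtual bool session_is_default(THD *) { return false; }
    };

    // Renamed from Sys_var_session_special_double: the timestamp variable
    // gets its own class with a real per-session default check, instead of
    // abusing default_val in the global sys_var object.
    class Sys_var_timestamp : public sys_var
    {
    public:
      virtual bool session_is_default(THD *thd)
      {
        return thd->user_set_timestamp == 0;
      }
    };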
With changes:
* update tests to pass (new encryption/encryption_key_id syntax).
* did not merge the code that makes the engine aware of the encryption mode
(CRYPT_SCHEME_1_CBC, CRYPT_SCHEME_1_CTR, storing it on disk, etc),
because the encryption plugin now handles it.
* compression+encryption did not work in either branch before the
merge - and it does not work after the merge. it might be more
broken after the merge though, as some of that code was not merged.
* page checksumming code was not moved (moving page checksumming
from fil_space_encrypt() to fil_space_decrypt() was not merged).
* restored deleted lines in buf_page_get_frame(), otherwise the
innodb_scrub test failed.