row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
These assertions were disabled in MariaDB 10.1.1 in
commit df4dd593f2
with a bogus comment referring to the function wsrep_fake_trx_id()
that was introduced in the very same commit.
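The key-numbering rule above can be illustrated with a minimal sketch; the helper name and parameters are hypothetical, not the actual row0log code. The clustered index occupies key slot 0 at the SQL layer only when it was built from a user-defined key; with an implicit clustered index the first secondary index is key 0, so the engine index number must be shifted by one when a duplicate-key error is reported.

    // Hypothetical illustration of the error_key_num mapping described above.
    // Precondition: a duplicate on the clustered index is only ever reported
    // when that index actually corresponds to a key at the SQL layer.
    unsigned sql_error_key_num(unsigned engine_index_no,
                               bool clust_index_is_sql_key)
    {
        // With an implicit clustered index the secondary indexes are the only
        // SQL-layer keys, so their numbers shift down by one.
        return clust_index_is_sql_key ? engine_index_no : engine_index_no - 1;
    }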
Problem:
The command was:
find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+
It was supposed to work as follows:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't descend into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)
Now, -exec ... \+ works like this:
every newly found path is appended to the end of the command line.
When the accumulated command-line length reaches `getconf ARG_MAX` (~2Gb),
the command is executed, and find continues, appending to a new command line.
What happens here is that find appends some directory to the command line,
then dives into it and starts appending files from that directory.
At some point the command line overflows, rm -rf gets executed and removes
the whole directory. Then find tries to continue scanning the directory
that was already removed.
Fix: don't dive into directories that will be recursively removed
anyway; use -prune for them. Basically, we should be pruning both paths
that have matched $cpat and paths that have not matched it. This is
achieved by pruning unconditionally, before the regex is tested:
find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+
Patch credit: Serg
Using systemd we can automate the creation of users and directories, so
generate and install the corresponding configuration files.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
Small change in cmake/install_layout.cmake compared to the original contributor
patch, to also install the SYSTEMD_SYSUSERS and SYSTEMD_TMPFILES directories. The
variables were being set, but the loop which defines the final install files
had not been updated.
In the function make_sortkey a tmp buffer was defined, and in the absence of
param->tmp_buffer the tmp buffer used the sort_keys buffer. The sort_keys buffer
has a length defined in sort_field->length, while the length of param->tmp_buffer
is stored in param->rec_length. Make sure to use the appropriate length
based on which buffer we are using, otherwise we will overflow.
Also added a cast to size_t in the calculation of the sort keys
buffer size to avoid an overflow if the buffer size exceeds 32 bits.
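A minimal sketch of both points, using hypothetical type and field names rather than the real filesort structures: the capacity must be chosen together with the buffer that is actually used, and the multiplication must be widened to size_t before it can wrap in 32-bit arithmetic.

    #include <cstddef>
    #include <cstdint>

    struct sort_field_t { uint32_t length; };   // length of one sort_keys slot

    struct sort_param_t {
        unsigned char *tmp_buffer;              // may be NULL
        uint32_t rec_length;                    // capacity of tmp_buffer
        uint32_t max_keys_per_buffer;
        uint32_t sort_length;
    };

    // The scratch buffer and its capacity have to be picked together.
    size_t scratch_capacity(const sort_param_t *param,
                            const sort_field_t *sort_field)
    {
        return param->tmp_buffer
            ? param->rec_length                 // private scratch buffer
            : sort_field->length;               // reusing a sort_keys slot
    }

    // Cast first: uint32_t * uint32_t would be evaluated in 32 bits.
    size_t sort_keys_buffer_size(const sort_param_t *param)
    {
        return static_cast<size_t>(param->max_keys_per_buffer) * param->sort_length;
    }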
The galera_events test shows a regression with the original fix for MW-416.
The reason was that Events::drop_event() can also be called from inside event
execution, where we have special treatment for an event that executes a
"DROP EVENT" statement and runs TOI replication inside the event processing body.
This resulted in executing WSREP_TO_ISOLATION twice for such a DROP EVENT statement.
The fix is to call WSREP_TO_ISOLATION_BEGIN only in Events::drop_event().
Changed the return code for a replication error to TRUE.
This is aligned with the native MySQL convention of returning TRUE (defined as 1) or FALSE (defined as 0) from a bool function.
This is wrong, but at least it follows the MySQL convention...
find_type_or_exit() client helper did exit(1) on error; the exit(1) was moved to
the clients.
mysql_read_default_options() did exit(1) on error; the error is now passed through
and handled by the callers.
my_str_malloc_default() did exit(1) on error, replaced my_str_ allocator
functions with normal my_malloc()/my_realloc()/my_free().
sql_connect.cc did many exit(1) on hash initialisation failure. Removed error
check since my_hash_init() never fails.
my_malloc() did exit(1) on error. Replaced with abort().
my_load_defaults() did exit(1) on error, replaced with return 2.
my_load_defaults() still does exit(0) when invoked with --print-defaults.
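The common pattern behind these changes, sketched with a hypothetical helper (not the real find_type_or_exit() signature): the library routine reports the failure to its caller instead of terminating the process, and the exit decision moves into the client.

    #include <cstdio>
    #include <cstring>

    // Library side: return an error instead of calling exit(1).
    static int find_type_or_error(const char *value, int *type_out)
    {
        // placeholder lookup: only two values are known in this sketch
        if (!std::strcmp(value, "tcp"))    { *type_out = 1; return 0; }
        if (!std::strcmp(value, "socket")) { *type_out = 2; return 0; }
        return 1;                          // unknown value: let the caller decide
    }

    // Client side: the exit(1) now lives here.
    int main(int argc, char **argv)
    {
        int type;
        if (argc > 1 && find_type_or_error(argv[1], &type)) {
            std::fprintf(stderr, "unknown option value: %s\n", argv[1]);
            return 1;
        }
        return 0;
    }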
The problem was that crypt_data->min_key_version is not a reliable way
to detect whether a tablespace is encrypted. It could lead to the key_version
used in the first page of the second system tablespace file (page 192, and
similarly for other files if more are configured) being replaced with zero,
leading to corruption, as on the next startup that page is thought to be corrupted.
Note that crypt_data->min_key_version is updated only after all
pages of the tablespace have been processed (i.e. key rotation is done)
and flushed.
fil_write_flushed_lsn
Use crypt_data->should_encrypt() instead.
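A minimal sketch of the distinction, with a simplified stand-in for the real crypt-data structure: min_key_version only reflects a key rotation that has already processed and flushed every page, so the page-write path should ask the crypt data for its current intent instead.

    #include <cstdint>

    struct fil_space_crypt_t {          // simplified stand-in
        uint32_t min_key_version;       // updated only after a full key rotation
        bool     encrypting;            // current intent for this tablespace

        bool should_encrypt() const { return encrypting; }
    };

    // Deciding whether the page carrying the flushed LSN must be encrypted.
    bool encrypt_flushed_lsn_page(const fil_space_crypt_t *crypt_data)
    {
        // Unreliable: crypt_data->min_key_version != 0 (stale during rotation).
        return crypt_data && crypt_data->should_encrypt();
    }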
Whenever we call merge_role_privileges on a role, we make use of
the role->counter variable to check if all of its children have had their
privileges merged. Only if all children have had their privileges merged
do we update the privileges on the parent. This is done to prevent extra work.
The same idea is employed during flush privileges. You only begin merging
from "leaf" roles. The recursive calls will merge their parents at some point.
A problem arises when we try to "re-merge" a parent. Take the following graph:
{noformat}
A (0) ---- C (2) ---- D (2) ---- USER
/ /
B (0) ----/ /
/
E (0) --------------/
{noformat}
In parentheses we have the "counter" value right before we start to iterate
through the roles hash and propagate values. It represents the number of roles
granted to the current role. The order in which we iterate through the roles
hash is alphabetical.
* First, we merge A, which leads to decreasing the counter for C to 1. Since C is
not 0, we don't proceed with merging into C.
* Second, we merge B, which leads to decreasing the counter for C to 0. Now
we proceed with merging into C. This reduces the counter for D to 1
as part of the C merge process.
* Third, as we iterate through the hash, we see that C has counter 0, so we
start the merge process *again*. This reduces the counter for
D to 0! We then attempt to merge D.
* Fourth, we start merging E. When E sees D as its parent, it attempts
(according to the code) to reduce D's counter, which underflows it. Now D's counter is
a very large number, thus E's privileges are not forwarded to D yet.
To correct this behavior we must make sure to only start merging from initial
leaf nodes.
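The corrected propagation can be sketched as follows, with hypothetical types instead of the actual ACL code: collect the roles whose counter is zero before the walk begins, and let only those start the recursive merge, so that counters decremented during the walk cannot turn an already-merged role into a second starting point.

    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Role {
        std::string name;
        unsigned counter = 0;           // number of roles granted to this role
        std::vector<Role*> parents;     // roles this role is granted to
    };

    static void merge_into_parents(Role *role)
    {
        for (Role *parent : role->parents) {
            // ... merge role's privileges into parent here ...
            if (--parent->counter == 0) // all children merged: continue upwards
                merge_into_parents(parent);
        }
    }

    void propagate_role_privileges(std::unordered_map<std::string, Role> &roles)
    {
        std::vector<Role*> initial_leaves;
        for (auto &entry : roles)
            if (entry.second.counter == 0)      // leaves *before* any merging
                initial_leaves.push_back(&entry.second);

        for (Role *leaf : initial_leaves)
            merge_into_parents(leaf);
    }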
When granting a role to another role, DB privileges get propagated. If
the grantee had no previous DB privileges, an extra ACL_DB entry is created to
house those "indirectly received" privileges. If, afterwards, DB
privileges are granted to the grantee directly, we must make sure to not
create a duplicate ACL_DB entry.
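A minimal sketch of that check, with hypothetical types instead of the real ACL structures: look for an existing entry for the same (user, host, db) and merge into it, and only create a new ACL_DB entry when none is found.

    #include <string>
    #include <vector>

    struct acl_db_entry {
        std::string user, host, db;
        unsigned long privileges = 0;
    };

    void grant_db_privileges(std::vector<acl_db_entry> &acl_dbs,
                             const acl_db_entry &grant)
    {
        for (acl_db_entry &e : acl_dbs) {
            if (e.user == grant.user && e.host == grant.host && e.db == grant.db) {
                e.privileges |= grant.privileges;  // merge into the existing entry
                return;                            // no duplicate entry is created
            }
        }
        acl_dbs.push_back(grant);                  // first grant for this (user, db)
    }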
The InnoDB background DROP TABLE queue is something that we should
really remove, but are unable to until we remove dict_operation_lock
so that DDL and DML operations can be combined in a single transaction.
Because the queue is not persistent, it is not crash-safe. In stable
versions of MariaDB, we can only try harder to drop all enqueued
tables before server shutdown.
row_mysql_drop_t::table_id: Replaces table_name.
row_drop_tables_for_mysql_in_background():
Do not remove the entry from the list as long as the table exists.
In this way, the table should eventually be dropped.
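The retry logic can be sketched like this, with hypothetical helpers standing in for the real dictionary operations: an entry stays in the queue for as long as the table still exists, so the drop keeps being retried until it finally succeeds.

    #include <cstdint>
    #include <list>

    struct drop_entry { uint64_t table_id; };   // identified by table_id, not by name

    // Stand-ins for the real dictionary lookup and drop operation.
    static bool table_exists(uint64_t)   { return false; }
    static void try_drop_table(uint64_t) {}     // may fail while the table is in use

    void drop_tables_in_background(std::list<drop_entry> &queue)
    {
        for (auto it = queue.begin(); it != queue.end(); ) {
            try_drop_table(it->table_id);
            if (table_exists(it->table_id))
                ++it;                           // keep the entry; retry on a later pass
            else
                it = queue.erase(it);           // really gone: remove the entry
        }
    }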
Moved TOI replication to happen after ACL checking for the following commands
(see the sketch after this list):
SQLCOM_CREATE_EVENT
SQLCOM_ALTER_EVENT
SQLCOM_DROP_EVENT
SQLCOM_CREATE_VIEW
SQLCOM_CREATE_TRIGGER
SQLCOM_DROP_TRIGGER
SQLCOM_INSTALL_PLUGIN
SQLCOM_UNINSTALL_PLUGIN
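The new order for these statements is sketched below with hypothetical helper names: the access check runs first, and the statement enters total order isolation (TOI) replication only if that check passed.

    // Hypothetical stand-ins for the real ACL check and WSREP_TO_ISOLATION_BEGIN.
    static bool check_access_for_stmt()    { return true; }
    static bool wsrep_to_isolation_begin() { return true; }

    bool execute_replicated_ddl()
    {
        if (!check_access_for_stmt())
            return false;                  // denied before anything is replicated
        if (!wsrep_to_isolation_begin())
            return false;                  // TOI starts only after the ACL check
        // ... execute the actual statement ...
        return true;
    }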
wsrep_sst_common: Setting "-c ''" for my_print_defaults just takes no values from the config at all. $MY_PRINT_DEFAULTS is already set at the top of the script to have --defaults-file and --defaults-extra-file. If WSREP_SST_OPT_CONF is set to "--defaults-file=/etc/my.cnf --defaults-extra-file=/etc/my.extra.cnf", then "my_print_defaults -c "" --defaults-file=/etc/my.cnf" succeeds, but if WSREP_SST_OPT_CONF is empty, no default values are taken at all.
wsrep_sst_xtrabackup-v2: innobackupex does not support --defaults-extra-file, so ${WSREP_SST_OPT_CONF} cannot be used as an argument; it has been changed to ${WSREP_SST_OPT_DEFAULT}. Removed --defaults-file= from the INNOMOVE line, because WSREP_SST_OPT_CONF already includes it (INNOBACKUP was fine, INNOMOVE was not).
trx_roll_must_shutdown(): During the rollback of recovered transactions,
report progress and check if the rollback should be interrupted because
of a pending shutdown.
trx_roll_max_undo_no, trx_roll_progress_printed_pct: Remove, along with
the messages that were interleaved with other messages.
row_undo_step(), trx_rollback_active(): Abort the rollback of a
recovered ordinary transaction if fast shutdown has been initiated.
trx_rollback_resurrected(): Convert an aborted-rollback transaction
into a fake XA PREPARE transaction, so that fast shutdown can proceed.
trx_rollback_resurrected(): If shutdown was initiated, fake all
remaining active transactions to XA PREPARE state, so that shutdown
can proceed. Also, make the parameter "all" an output that will be
set to FALSE in this case.
trx_rollback_or_clean_recovered(): Remove the shutdown check
(it was moved to trx_rollback_resurrected()).
trx_undo_free_prepared(): Relax assertions.
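The check performed by trx_roll_must_shutdown() can be sketched as follows, with hypothetical names and the server state reduced to a flag and a counter: report progress occasionally and abort the rollback as soon as a shutdown is pending.

    #include <atomic>
    #include <cstdint>
    #include <cstdio>

    static std::atomic<bool> shutdown_pending{false};   // stand-in for the server state

    // Returns true when the rollback of a recovered transaction must stop.
    bool rollback_must_shutdown(uint64_t undone_records, uint64_t total_records)
    {
        if (shutdown_pending.load())
            return true;                                 // fast shutdown takes priority

        if (total_records != 0 && undone_records % 10000 == 0)
            std::printf("rolling back recovered transaction: %.0f%% done\n",
                        100.0 * static_cast<double>(undone_records) / total_records);
        return false;
    }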
This was a missing bug fix from MySQL wsrep, i.e. Galera.
The problem was that if a stored procedure declares a handler that
catches the deadlock error, then the error may have been
cleared in the method sp_rcontext::handle_sql_condition().
Use wsrep_conflict_state correctly to determine whether the
error has already been sent to the client.
Add a test case for both this bug and MDEV-12837 (WSREP: BF
lock wait long). The test requires both fixes to pass.
This is the 10.1 version, where no merge error exists.
wsrep_on_check
New check function. Galera can't be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
In a Galera async kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, we refuse to start InnoDB.
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
lock_rec_lock_slow,
lock_table_other_has_incompatible,
lock_rec_insert_check_and_lock
Change the pointer to the conflicting lock into a normal pointer, as the
contents of this pointer could be changed later.
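The two VATS checks described above can be sketched with simplified signatures (the real ones go through the sys_var check hook and the InnoDB init code): reject turning wsrep_on ON while VATS is selected, and refuse to start InnoDB when both are requested.

    #include <cstdio>

    enum lock_sched_algorithm { LOCK_SCHED_FCFS, LOCK_SCHED_VATS };

    static lock_sched_algorithm innodb_lock_schedule_algorithm = LOCK_SCHED_FCFS;
    static bool wsrep_enabled = false;   // stand-in for wsrep_on

    // Simplified wsrep_on check: reject enabling Galera under VATS.
    bool wsrep_on_check(bool new_value)
    {
        if (new_value && innodb_lock_schedule_algorithm == LOCK_SCHED_VATS) {
            std::fprintf(stderr,
                "wsrep_on cannot be enabled when innodb-lock-schedule-algorithm=VATS\n");
            return false;
        }
        return true;
    }

    // Simplified startup counterpart in innobase_init(): refuse to start InnoDB.
    bool innodb_vats_startup_check()
    {
        if (wsrep_enabled && innodb_lock_schedule_algorithm == LOCK_SCHED_VATS) {
            std::fprintf(stderr,
                "InnoDB: VATS is not supported with Galera; refusing to start\n");
            return false;
        }
        return true;
    }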