Whenever we call merge_role_privileges on a role, we make use of
the role->counter variable to check whether all of its children have had
their privileges merged. Only if all children have had their privileges
merged do we update the privileges on the parent. This is done to prevent
extra work. The same idea is employed during FLUSH PRIVILEGES: merging only
begins from "leaf" roles, and the recursive calls merge their parents at
some point. A problem arises when we try to "re-merge" a parent. Take the
following graph:
{noformat}
A (0) ---- C (2) ---- D (2) ---- USER
          /          /
B (0) ---/          /
                   /
E (0) ------------/
{noformat}
In parentheses we have the "counter" value right before we start to iterate
through the roles hash and propagate values. It represents the number of roles
granted to the current role. The order in which we iterate through the roles
hash is alphabetical.
* First we merge A, which decreases the counter for C to 1. Since C is
not 0, we don't proceed with merging into C.
* Second we merge B, which decreases the counter for C to 0. Now we
proceed with merging into C. As part of the C merge process, the counter
for D is reduced to 1.
* Third, as we iterate through the hash, we see that C has counter 0, so
we start the merge process *again*. This reduces the counter for D to 0!
We then attempt to merge D.
* Fourth we start merging E. When E sees D as its parent (according to
the code), it attempts to reduce D's counter, which underflows. D's
counter is now a very large number, so E's privileges are not forwarded
to D yet.
To correct this behavior we must make sure to only start merging from initial
leaf nodes.
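Below is a minimal sketch of this counter discipline, with hypothetical
Role/merge_up stand-ins rather than the actual sql_acl.cc structures.
Snapshotting the initial leaves before any counter changes is what
prevents the double merge of C and the wrap-around on D:
{noformat}
#include <cstdint>
#include <vector>

struct Role {
  std::vector<Role*> parents;   // roles this role is granted to
  uint32_t counter;             // number of roles granted to this role
};

// Merge 'role' into its parents; recurse once a parent has seen all
// of its children.
static void merge_up(Role *role) {
  for (Role *parent : role->parents) {
    // ... merge role's privileges into parent here ...
    if (--parent->counter == 0)   // last child merged: parent is ready
      merge_up(parent);
  }
}

// FLUSH PRIVILEGES: start only from the *initial* leaf roles. Starting
// from any role whose counter merely became 0 during the walk (like C
// above) would decrement its parents a second time and wrap the
// unsigned counter.
static void merge_all(std::vector<Role*> &roles) {
  std::vector<Role*> initial_leaves;
  for (Role *r : roles)
    if (r->counter == 0)
      initial_leaves.push_back(r);  // snapshot before counters change
  for (Role *leaf : initial_leaves)
    merge_up(leaf);
}
{noformat}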
When granting a role to another role, DB privileges are propagated. If
the grantee had no previous DB privileges, an extra ACL_DB entry is
created to house those "indirectly received" privileges. If DB privileges
are later granted to the grantee directly, we must make sure not to
create a duplicate ACL_DB entry.
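A hedged sketch of the resulting find-or-create rule, with simplified
stand-ins for ACL_DB and the grant path (the real structures live in
sql_acl.cc):
{noformat}
#include <string>
#include <vector>

struct ACL_DB {
  std::string user, host, db;
  unsigned long access;          // directly granted privileges
  unsigned long initial_access;  // privileges received via roles
};

static std::vector<ACL_DB> acl_dbs;

static ACL_DB *find_acl_db(const std::string &user, const std::string &host,
                           const std::string &db) {
  for (ACL_DB &e : acl_dbs)
    if (e.user == user && e.host == host && e.db == db)
      return &e;   // entry exists, whether direct or "indirectly received"
  return nullptr;
}

// GRANT ... ON db.* TO grantee: reuse the entry that was created when
// role privileges were propagated, instead of inserting a duplicate.
static void grant_db_priv(const std::string &user, const std::string &host,
                          const std::string &db, unsigned long access) {
  if (ACL_DB *e = find_acl_db(user, host, db))
    e->access |= access;               // merge into the existing entry
  else
    acl_dbs.push_back({user, host, db, access, 0});
}
{noformat}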
When handling a user-defined variable defined by a recursive CTE, the
procedure check_dependencies_in_with_clauses, which checks dependencies
between the tables defined in the CTE and finds recursive definitions,
wasn't called.
The InnoDB background DROP TABLE queue is something that we should
really remove, but are unable to until we remove dict_operation_lock
so that DDL and DML operations can be combined in a single transaction.
Because the queue is not persistent, it is not crash-safe. In stable
versions of MariaDB, we can only try harder to drop all enqueued
tables before server shutdown.
row_mysql_drop_t::table_id: Replaces table_name.
row_drop_tables_for_mysql_in_background():
Do not remove the entry from the list as long as the table exists.
In this way, the table should eventually be dropped.
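A sketch of the retry discipline this gives the background queue, with
stand-in types and stubbed dictionary calls (the real code is in
row0mysql.cc):
{noformat}
#include <cstdint>
#include <list>

struct row_mysql_drop_t {
  uint64_t table_id;  // replaces table_name; survives a RENAME TABLE
};

static std::list<row_mysql_drop_t> drop_list;

// Stubs standing in for the dictionary lookup and the actual drop attempt.
static bool table_exists(uint64_t) { return true; }
static bool try_drop_table(uint64_t) { return false; }  // false: still busy

static void drop_tables_in_background() {
  for (auto it = drop_list.begin(); it != drop_list.end(); ) {
    if (!table_exists(it->table_id)) {
      it = drop_list.erase(it);    // already gone: safe to forget
      continue;
    }
    try_drop_table(it->table_id);  // may fail while the table is in use
    // Keep the entry even on failure: as long as the table exists we
    // retry on the next pass, so the table is eventually dropped.
    ++it;
  }
}
{noformat}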
Moved TOI replication to happen after ACL checking for the following
commands (a sketch of the new ordering follows the list):
SQLCOM_CREATE_EVENT
SQLCOM_ALTER_EVENT
SQLCOM_DROP_EVENT
SQLCOM_CREATE_VIEW
SQLCOM_CREATE_TRIGGER
SQLCOM_DROP_TRIGGER
SQLCOM_INSTALL_PLUGIN
SQLCOM_UNINSTALL_PLUGIN
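A hedged sketch of the new ordering for one of these commands; the
function names here are illustrative stand-ins, not the exact server
code, but the point is that a privilege failure is now reported locally
before anything is replicated to the cluster under total order isolation
(TOI):
{noformat}
enum wsrep_result { RESULT_OK, RESULT_ACL_DENIED, RESULT_TOI_FAILED };

// Stubs standing in for the real ACL check and TOI start.
static bool check_event_acl() { return true; }
static bool wsrep_to_isolation_begin() { return true; }

static wsrep_result execute_create_event() {
  // Before the fix, TOI began first, so a statement that failed the ACL
  // check had already been replicated cluster-wide. Now ACL comes first:
  if (!check_event_acl())
    return RESULT_ACL_DENIED;  // local privilege error, nothing replicated
  if (!wsrep_to_isolation_begin())
    return RESULT_TOI_FAILED;
  // ... execute CREATE EVENT under TOI ...
  return RESULT_OK;
}
{noformat}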
wsrep_sst_common: Passing "-c ''" to my_print_defaults makes it take no
values from the config at all. $MY_PRINT_DEFAULTS is already set at the
top of the script to include --defaults-file and --defaults-extra-file.
If WSREP_SST_OPT_CONF is set to "--defaults-file=/etc/my.cnf
--defaults-extra-file=/etc/my.extra.cnf", then
"my_print_defaults -c "" --defaults-file=/etc/my.cnf" succeeds, but if
WSREP_SST_OPT_CONF is empty, no default values are taken at all.
wsrep_sst_xtrabackup-v2: innobackupex does not support
--defaults-extra-file, so ${WSREP_SST_OPT_CONF} cannot be used as an
argument; it has been changed to ${WSREP_SST_OPT_DEFAULT}. Removed
--defaults-file= from the INNOMOVE line, because WSREP_SST_OPT_CONF
already includes it (INNOBACKUP was fine, INNOMOVE was not).
one) and affected result files.
Specifically for rpl_semi_sync_after_sync*, the changed results reflect
the fact that, thanks to fixes in the dump thread functionality, there is
no longer a zombie thread to kill, nor does such a thread represent a
semisync client (so the counter drops).
and specifically the ack receiving functionality.
Semisync is now built into the server statically instead of as a plugin,
so its functions are invoked at the same points as RUN_HOOKS.
The RUN_HOOKS calls and the observer interface remain, to be removed by a
later patch.
Todo:
React to the killed status in repl_semisync_master.wait_after_sync().
Currently Repl_semi_sync_master::commit_trx does not check the killed
status.
A few bug fixes that are present in MySQL were found, and it is unclear
whether/how they are covered. These include:
Bug#15985893: GTID SKIPPED EVENTS ON MASTER CAUSE SEMI SYNC TIME-OUTS
Bug#17932935 CALLING IS_SEMI_SYNC_SLAVE() IN EACH FUNCTION CALL
HAS BAD PERFORMANCE
Bug#20574628: SEMI-SYNC REPLICATION PERFORMANCE DEGRADES WITH A HIGH NUMBER OF THREADS
RUN_HOOK() is only called if semisync is enabled.
As the server can't disable the hooks while something is in progress, I
added a new variable, run_hooks_enabled, that is set the first time
semisync is used. This means that RUN_HOOK() has no overhead unless the
semisync master or slave has been enabled at least once.
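A minimal sketch of the one-way flag, using a plain atomic to keep the
sketch self-contained (the server uses its own primitives):
{noformat}
#include <atomic>

static std::atomic<bool> run_hooks_enabled{false};

// Called the first time semisync master or slave is enabled. The flag is
// never cleared: the server can't disable the hooks while something is
// in progress, so once set it stays set for the server's lifetime.
static void on_semisync_first_enable() {
  run_hooks_enabled.store(true, std::memory_order_relaxed);
}

// Guard at each former RUN_HOOKS point: a single relaxed load when
// semisync was never enabled, i.e. effectively no overhead.
static inline bool semisync_hooks_active() {
  return run_hooks_enabled.load(std::memory_order_relaxed);
}
{noformat}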
Some of the changes were just to get rid of warnings for the embedded
server.
Part of MDEV-13073 AliSQL Optimize performance of semisync
The idea is to use a dedicated lock to detect whether there is new data
in the master's binary log, instead of the overused LOCK_log; the
resulting signaling shape is sketched after the list below.
Changes:
- Use dedicated COND variables for the relay and binary log signaling.
  This was needed because the old 'update_cond' variable was used with
  different mutexes, which could cause deadlocks.
- The relay log now uses COND_relay_log_updated and LOCK_log
- The binary log now uses COND_bin_log_updated and LOCK_binlog_end_pos
- Renamed signal_cnt to relay_signal_cnt (as we now have two signals)
- Added some missing error handling in MYSQL_BIN_LOG::new_file_impl()
- Reformatted some comments that used the old style
- Renamed m_key_LOCK_binlog_end_pos to key_LOCK_binlog_end_pos
- Changed 'signal_update()' to update_binlog_end_pos(), which works for
  both the relay and binary log
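The signaling shape after the change, sketched with portable C++
primitives instead of mysql_mutex_t/mysql_cond_t:
{noformat}
#include <condition_variable>
#include <cstdint>
#include <mutex>

static std::mutex LOCK_binlog_end_pos;  // guards binlog_end_pos only
static std::condition_variable COND_bin_log_updated;
static uint64_t binlog_end_pos = 0;

// Writer side: called when the (relay or binary) log end position
// advances; replaces the old signal_update().
static void update_binlog_end_pos(uint64_t new_pos) {
  {
    std::lock_guard<std::mutex> guard(LOCK_binlog_end_pos);
    binlog_end_pos = new_pos;
  }
  COND_bin_log_updated.notify_all();
}

// Reader side (e.g. a dump thread): waits for new data without touching
// LOCK_log, so it cannot deadlock against writers holding LOCK_log and
// binlog writes do not stall behind slow readers.
static uint64_t wait_for_update(uint64_t seen_pos) {
  std::unique_lock<std::mutex> guard(LOCK_binlog_end_pos);
  COND_bin_log_updated.wait(guard, [&] { return binlog_end_pos > seen_pos; });
  return binlog_end_pos;
}
{noformat}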
Part of MDEV-13073 AliSQL Optimize performance of semisync
Did the following renames to match other similar variables:
key_ss_mutex_LOCK_binlog_ -> key_LOCK_binlog
key_ss_cond_COND_binlog_send_ -> key_COND_binlog_send
COND_binlog_send_ -> COND_binlog_send
LOCK_binlog_ -> LOCK_binlog
debian/mariadb-server-10.2.install does not install semisync libs.
* The version of tcmalloc is written to the system variable
'version_malloc_library' if tcmalloc is used, similarly to
jemalloc
* Extracted method guess_malloc_library()
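A hedged sketch of one way such detection can work, probing well-known
allocator entry points with dlsym; the actual guess_malloc_library() may
differ. tc_version() is tcmalloc's version call and mallctl is jemalloc's
control interface:
{noformat}
#include <dlfcn.h>
#include <cstdio>

static const char *guess_malloc_library() {
  typedef const char *(*tc_version_t)(int *, int *, const char **);
  if (tc_version_t tc_version =
          (tc_version_t) dlsym(RTLD_DEFAULT, "tc_version")) {
    int major = 0, minor = 0;
    static char buf[32];
    tc_version(&major, &minor, nullptr);
    snprintf(buf, sizeof buf, "tcmalloc %d.%d", major, minor);
    return buf;  // reported via the 'version_malloc_library' variable
  }
  if (dlsym(RTLD_DEFAULT, "mallctl"))  // jemalloc's control interface
    return "jemalloc";
  return "system";  // regular libc malloc
}
{noformat}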
The 'data' field in the XA RECOVER resultset was changed to charset_bin.
This seems right to me, and it also makes --binary-as-hex start working.
The XA RECOVER FORMAT='SQL' option is implemented. It returns the XID as
a string that fits directly as an argument for the XA ... statements.
The error
"Unsupported collation on string indexed column %s Use
binary collation (latin1_bin, binary, utf8_bin)."
is misleading. Change it:
- It is now a warning
- It is printed only for collations that do not support index-only access
(reversible collations that use unpack_info are ok)
- The new warning text is:
Indexed column %s.%s uses a collation that does not allow index-only
access in secondary key and has reduced disk space efficiency
in primary key.
lock_move_rec_list_start(): Relax a too strict assertion.
This function can be invoked on the leftmost leaf page, after all.
So, the first record of each page can be a 'default row' record,
but the 'default row' record must never be locked.
Contrary to what the comment said, trx_resurrect_table_locks()
does associate table locks with every recovered transaction that
modified any records, ever since this bug fix in MySQL 5.6.12:
Bug#16593427 ROLLBACK OF RECOVERED TRANSACTION CORRUPTS NON-ONLINE ADD INDEX
Actually there are 2 issues in the case of invisible columns:
1st, `select fields from t1` will have more fields than
`select * from t1`, so instead of `select * from t1` we use
`select a,b,invisible from t1`, where these fields are supplied from
`select fields from t1`.
2nd, we use --complete-insert when we detect that the table uses
invisible columns.