Adding an intermediate volatile variable to avoid using co-processor registers
on some platforms (e.g. 32-bit x86).
This change makes test results stable across all platforms.
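A minimal standalone sketch of the technique (an illustration, not the patched code): on 32-bit x86 the x87 FPU keeps intermediates in 80-bit registers, and routing the value through a volatile double forces rounding to a 64-bit result on every platform.

  #include <cstdio>

  static bool equals_quotient(double x, double a, double b)
  {
    volatile double q= a / b;  // forces a round-trip through memory,
                               // discarding x87 excess precision
    return x == q;             // now compares 64-bit doubles everywhere
  }

  int main()
  {
    std::printf("%d\n", (int) equals_quotient(1.0 / 3.0, 1.0, 3.0));
    return 0;
  }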
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for the SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
This patch contains a fix for the MDEV-17262/17243 issues and
a new mtr test.
These issues (MDEV-17262/17243) have two causes:
1) After an intermediate commit, a transaction loses its status
of "transaction registered in MySQL for the 2PC coordinator"
(in InnoDB), because since version 10.2 the write_row() function
(located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) while processing split
transactions. It is necessary to restore this call inside
write_row() when an intermediate commit has been made (for a split
transaction).
Similarly, we need to set the started-transaction flag
(m_prebuilt->sql_stat_start) after an intermediate commit.
The table->file->extra(HA_EXTRA_FAKE_START_STMT) call made from the
wsrep_load_data_split() function (located in sql_load.cc)
will also do this, but too late. As a result, the call
to the wsrep_append_keys() function from the InnoDB engine may be
lost, or the function may be called with an invalid transaction
identifier (a toy model of this fix is sketched below, after the links).
2) If a transaction with a LOAD DATA statement is divided into
logical mini-transactions (of 10K rows each) and the binlog is rotated,
then in rare cases, due to the wsrep handler re-registration at the
boundary of the split, the last portion of data may be lost. Since
the splitting of LOAD DATA into mini-transactions is a technicality,
I believe that we should not allow these mini-transactions to fall
into separate binlogs. Therefore, it is necessary to prohibit the
rotation of the binlog in the middle of processing a LOAD DATA statement.
https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
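A self-contained toy model of the fix in point 1 (illustrative names and
flags, not the actual server code): an intermediate commit drops both the
2PC registration and the statement-start flag, so write_row() must restore
them before the wsrep keys are appended.

  #include <cassert>

  struct Trx
  {
    bool registered_for_2pc= false;  // models trx_register_for_2pc()
    bool sql_stat_start= false;      // models m_prebuilt->sql_stat_start
  };

  static void intermediate_commit(Trx &trx)
  {
    // Committing a 10K-row chunk clears both flags.
    trx.registered_for_2pc= false;
    trx.sql_stat_start= false;
  }

  static void write_row(Trx &trx)
  {
    if (!trx.registered_for_2pc)
    {
      trx.registered_for_2pc= true;  // restored registration
      trx.sql_stat_start= true;      // restored statement-start flag
    }
    // ... the row insert and wsrep key appending would happen here ...
  }

  int main()
  {
    Trx trx;
    write_row(trx);
    intermediate_commit(trx);
    write_row(trx);                  // re-registers: keys are not lost
    assert(trx.registered_for_2pc && trx.sql_stat_start);
    return 0;
  }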
In the function make_cond_for_table_from_pred, a call of fix_fields()
was missing a check of its return code. As a result an extracted constant
condition could be malformed, and this caused an assertion failure.
Includes:
MDEV-17302 Add support for ALTER USER command in prepared statement
and
MDEV-17673 main.cte_recursive fails in bb-10.4-ps branch in --ps
Set correct SELECT_LEX linkage for recursive CTEs.
Do not delegate this job to TABLE_LIST::set_as_with_table,
because it is only run on prepare, while With_element::move_anchors_ahead
is run on both prepare and execute (fix by Igor).
* MDEV-16509 Improve wsrep commit performance with binlog disabled
Release the commit order critical section early, after trx_commit_low(), if
the binlog is not the transaction coordinator. In order to avoid two-phase
commit, binlog_hton is not registered for the THD during IO_CACHE population.
Implemented a test which verifies that the transactions release
commit order early.
This optimization changes the behavior during recovery, as the commit
is not two-phase when the binlog is off. Fixed and recorded wsrep-recover-v25
and wsrep-recover to match the behavior.
* MDEV-18730 Ordering for wsrep binlog group commit
Previously, out-of-order execution was allowed for wsrep commits.
Established proper ordering by populating wait_for_commit
for every wsrep THD and making the group commit leader wait for
prior commits before proceeding to trx_group_commit_leader()
(see the ordering sketch at the end of this entry).
* MDEV-18730 Added a test case to verify correct commit ordering
* MDEV-16509, MDEV-18730 Review fixes
Use WSREP_EMULATE_BINLOG() macro to decide if the binlog_hton
should be registered. Whitespace/syntax fixes and cleanups.
* MDEV-16509 Require binlog for galera_var_innodb_disallow_writes test
If the commit to InnoDB is done in one phase, the native InnoDB behavior
is that the transaction is committed in memory before it is persisted to
disk. This means that innodb_disallow_writes=ON may not prevent the
transaction from becoming visible to other readers before the commit is
completely over. On the other hand, if the commit is two-phase (as it is
with the binlog), the transaction will be blocked in the prepare phase.
Fixed the test to use the binlog, which enforces two-phase commit, which
in turn makes the commit block before the changes become visible to
other connections. This guarantees that the test produces the expected
result.
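A standalone sketch of the ordering technique referenced in the MDEV-18730
entry above (an illustration under the assumption that each commit carries
a sequence number; not the server's wait_for_commit code): every committer
waits until all earlier sequence numbers have committed.

  #include <condition_variable>
  #include <cstdio>
  #include <mutex>
  #include <thread>
  #include <vector>

  static std::mutex m;
  static std::condition_variable cv;
  static long committed_upto= 0;       // highest seqno committed so far

  static void commit_in_order(long seqno)
  {
    std::unique_lock<std::mutex> lk(m);
    // Wait for all prior commits, as the group commit leader does.
    cv.wait(lk, [seqno] { return committed_upto == seqno - 1; });
    std::printf("commit %ld\n", seqno);  // always printed in order
    committed_upto= seqno;
    cv.notify_all();
  }

  int main()
  {
    std::vector<std::thread> threads;
    for (long s= 3; s >= 1; s--)         // start deliberately out of order
      threads.emplace_back(commit_in_order, s);
    for (auto &t : threads)
      t.join();
    return 0;
  }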
If an IN subquery is used in a table-less select, the current code
should never consider it as a candidate for semi-join optimizations.
Yet the function check_and_do_in_subquery_rewrites() improperly
checked the property "to be a table-less select". As a result, when
such a select with an IN subquery was used in INSERT .. SELECT,
the IN subquery was registered as a semi-join subquery by mistake
and convert_subq_to_sj() was called for it. However, the code of
this function does not assume that the parent select of the subquery
could be a table-less select.
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c
and ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of nontrivial structures. Fixed as:
* the warning disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() which
does the bzero(), which is safe for these classes, but any other
bzero() will still cause a warning
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* remove constructor from READ_RECORD() to make it trivial
* replace some memcpy() with c++ copy assignments
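A minimal standalone illustration of the memcpy/memset warning and the
reset() pattern (the example classes are assumed for illustration, not the
server's):

  #include <cstdio>
  #include <cstring>
  #include <string>

  // Non-trivial class: memset()/bzero() over it triggers
  // -Wclass-memaccess on GCC 8+ (it would clobber the std::string).
  struct NonTrivial { std::string name; int flags; };

  // Trivially copyable struct: zeroing is well defined, so the
  // memset() is wrapped in a reset() method, mirroring TABLE::reset().
  struct Trivial
  {
    int flags;
    char name[16];
    void reset() { memset(this, 0, sizeof(*this)); }
  };

  int main()
  {
    Trivial t;
    t.reset();                     // fine: Trivial is trivially copyable
    // NonTrivial n;
    // memset(&n, 0, sizeof(n));   // would warn (and corrupt n.name)
    std::printf("%d\n", t.flags);  // 0
    return 0;
  }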
attempt to drop BLOB with long index
Restore the long table->key_info, so that in the case of an unsuccessful
ALTER, table->key_info has a correct value. In the case of a successful
ALTER the old table is flushed, which is why we don't see this error
after a successful alter.
The problem was that we skipped the background persistent statistics
calculation on applier nodes if the thread was marked as high priority
(a.k.a. BF). However, on applier nodes all replicated DDL is executed
as high priority, i.e. BF.
Fixed by allowing the background persistent statistics calculation on
applier nodes even when the thread is marked as BF. This could lead to
BF lock waits, but queries on that node need those statistics.
Removed redundant plugin_thdvar_cleanup() from end_connection(): it is
called by THD::free_connection(), which always follows end_connection().
Saves at least one lock(LOCK_plugin) and one
rdlock(LOCK_system_variables_hash).
Benchmarked on a 2-socket/20-core/40-thread Broadwell system using the
sysbench connect benchmark @40 threads (with select 1 disabled).
10.2 shows a moderate improvement: 136219.93 -> 137766.31 CPS.
10.3's improvement is somewhat better: 93018.29 -> 101379.77 CPS.
Also backported MyRocks memory leak fix from 10.4, which turned out to
be unrelated.
The patch features an optional shutdown behavior: hold on until
all connected slaves have been sent the last binlogged event.
A connected slave is one whose START SLAVE has been acknowledged and
that has not been stopped since, though it could technically be
reconnecting in the background.
The solution therefore disallows killing the dump thread until it has
found the EOF of the latest binlog file. It is up to the shutdown
requester (DBA) to set up a sufficiently large shutdown timeout value
for shutdown to wait patiently until lagging slaves have been
synchronized. On the other hand, if a specific slave needs to be excluded
from synchronization, the DBA would have to stop it manually, which
would terminate its dump thread.
`mysqladmin shutdown' is extended with a `--wait_for_all_slaves' option,
which translates to a `SHUTDOWN WAIT FOR ALL SLAVES' sql query,
to enable the feature on the client side.
The patch also performs a small refactoring of the server shutdown
around close_connections() to introduce kill-thread phases, of which
there are currently two.
The issue here was that when we had a subquery and a window function in an
expression in the select list, the subquery was computed after the window
function computation. This resulted in incorrect results because the
subquery was correlated and the fields in the subquery were pointing to the
base table instead of the temporary table.
The approach to fix this was to have an additional field in the temporary
table for the subquery and to execute the subquery before the window
function execution.
After execution the values of the subquery were stored in the temporary
table, so when we need to calculate the expression, all we do is read the
values for the subquery from the temporary table.
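A toy model of this approach (illustrative only, not the server's execution
code): materialize the subquery value into an extra temp-table column first,
then let the window computation read only the stored values.

  #include <cstdio>
  #include <vector>

  struct Row { int base; int subq_val; };   // subq_val: the extra column

  // Stands in for the correlated subquery evaluated against base rows.
  static int correlated_subquery(int base) { return base * 10; }

  int main()
  {
    std::vector<Row> tmp{{1, 0}, {2, 0}, {3, 0}};
    // Step 1: evaluate the subquery per row and store it in the
    // temporary table before any window function runs.
    for (Row &r : tmp)
      r.subq_val= correlated_subquery(r.base);
    // Step 2: the window function (a running SUM here) reads the
    // stored values and never touches the base table again.
    int running= 0;
    for (const Row &r : tmp)
    {
      running+= r.subq_val;
      std::printf("%d %d\n", r.base, running);
    }
    return 0;
  }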
Make mysqltest use --ps-protocol more:
use prepared statements for everything that the server supports
with the exception of CALL (for now).
Fix discovered test failures and bugs.
tests:
* PROCESSLIST shows Execute state, not Query
* SHOW STATUS increments status variables more than in text protocol
* multi-statements should be avoided (see tests with a wrong delimiter)
* performance_schema events have different names in --ps-protocol
* --enable_prepare_warnings
mysqltest.cc:
* make sure run_query_stmt() doesn't crash if there's
no active connection (in wait_until_connected_again.inc)
* prepare all statements that server supports
protocol.h
* Protocol_discard::send_result_set_metadata() should not send
anything to the client.
sql_acl.cc:
* extract the functionality of getting the user for SHOW GRANTS
from check_show_access(), so that mysql_test_show_grants() could
generate the correct column names in the prepare step
sql_class.cc:
* result->prepare() can fail, don't ignore its return value
* use correct number of decimals for EXPLAIN columns
sql_parse.cc:
* discard profiling for SHOW PROFILE. In text protocol it's done in
prepare_schema_table(), but in --ps it is called on prepare only,
so nothing was discarding profiling during execute.
* move the permission checking code for SHOW CREATE VIEW to
mysqld_show_create_get_fields(), so that it would be called during
prepare step too.
* only set sel_result when it was created here and needs to be
destroyed in the same block. Avoid destroying lex->result.
* use the correct number of tables in check_show_access(). Saying
"as many as possible" doesn't work when first_not_own_table isn't
set yet.
sql_prepare.cc:
* use correct user name for SHOW GRANTS columns
* don't ignore verbose flag for SHOW SLAVE STATUS
* support preparing REVOKE ALL and ROLLBACK TO SAVEPOINT
* don't ignore errors from thd->prepare_explain_fields()
* use select_send result for sending ANALYZE and EXPLAIN, but don't
overwrite lex->result, because it might be needed to issue execute-time
errors (select_dumpvar - too many rows)
sql_show.cc:
* check grants for SHOW CREATE VIEW here, not in mysql_execute_command
sql_view.cc:
* use the correct function to check privileges. Old code was doing
check_access() for thd->security_ctx, which is invoker's sctx,
not definer's sctx. Hide various view related errors from the invoker.
sql_yacc.yy:
* initialize lex->select_lex for LOAD, otherwise it'll contain garbage
data that happens to fail tests with views in --ps (but not otherwise).
This solves the following issues:
* unlike lex->m_sql_cmd and lex->sql_command, thd->query_plan_flags
is not reset in Prepared_statement::execute, it survives
till the log_slow_statement(), so slow logging behaves correctly in --ps
* using thd->query_plan_flags for both slow_log_filter and
log_slow_admin_statements means the definition of "admin" statements
for the slow log is the same no matter how it is filtered out.
This was caused by a combination of factors:
* MyISAM/Aria temporary tables historically never saved the state
to disk (MYI/MAI), because the state never needed to persist
* certain ALTER TABLE operations modify the original TABLE structure
and if they fail, the original table has to be reopened to
revert all changes (m_needs_reopen=1)
as a result, when ALTER fails and the MyISAM/Aria temp table gets
reopened, it reads the stale state from disk.
As a fix, MyISAM/Aria tables now *always* write the state to disk
on close, *unless* HA_EXTRA_PREPARE_FOR_DROP was done first. And
the server now always does HA_EXTRA_PREPARE_FOR_DROP before dropping
a temporary table.
ALTER TABLE ... ADD FOREIGN KEY may trigger assertion failure when
it has LOCK=EXCLUSIVE clause or concurrent FLUSH TABLES is being
executed.
In both cases the table being altered is marked as flushed, which forces
a subsequent attempt to open the parent table to re-open it. That, in
turn, is not allowed while a transaction is running.
Rather than opening parent table, just take appropriate MDL lock.
Also removed table_already_fk_prelocked() check: MDL itself has much
better methods to handle duplicate locks. E.g. the former won't acquire
MDL_SHARED_NO_WRITE if it already has MDL_SHARED_READ.
Removed redundant initialisation in unireg_init(): already done by
mysql_init_variables().
Slave threads already check THD::killed, which eliminates the need to
check abort_loop.
Removed unused wsrep_kill_mysql().
Part#2 (final): rewriting the code to pass the correct enum_sp_aggregate_type
to the sp_head constructor, so sp_head never changes its aggregation type
later on. The grammar has been simplified and defragmented.
This allowed checking aggregate-specific instructions right after
a routine body has been scanned, by calling new LEX methods:
sp_body_finalize_{procedure|function|trigger|event}().
Moving some C++ code from *.yy to a few new helper methods in LEX.
using Item_cond
This bug is similar to the bug MDEV-16765.
It appears because of a wrong pushdown into the HAVING clause, while this
pushdown shouldn't be made at all.
This happens because the function that checks whether an Item_cond can be
pushed always returns that it can be.
To fix it, a new method Item_cond::excl_dep_on_table() was added.
slave_list was used to provide data for SHOW SLAVE HOSTS and
Slaves_connected status variable.
Introduced binlog_dump_thread_count which is exposed via Slaves_connected
(replaces slave_list.records).
Store Slave_info on THD and access it by iterating server_threads
(replaces slave_list).
Added:
THD::slave_info
binlog_dump_thread_count
show_slave_hosts_callback()
Removed:
slave_list
SLAVE_LIST_CHUNK
SLAVE_ERRMSG_SIZE
slave_list_key()
slave_info_free()
init_slave_list()
end_slave_list()
all_slave_list_mutexes
init_all_slave_list_mutexes()
key_LOCK_slave_list
LOCK_slave_list
Moved:
SLAVE_INFO -> Slave_info
register_slave() -> THD::register_slave()
unregister_slave() -> THD::unregister_slave()
Also removed redundant end_slave() from close_connections(): it is called
again soon afterwards by clean_up().
Pre-requisite for clean MDEV-18450 solution.
If a splittable materialized derived table / view T is used in an inner nest
of an outer join with an impossible ON condition, then T is marked as a
constant table. Yet the execution plan to build T is still searched for,
in spite of the fact that it is not needed. So it should be set.
With wsrep_gtid_mode=ON, the appropriate commit hooks were not
called in all cases for applied streaming transactions.
As a fix, removed all special handling of the commit order critical
section from Wsrep_high_priority_service and Wsrep_storage_service.
Now the commit order critical section is always entered in ha_commit_trans().
The check for wsrep_run_commit_hook is now done in handler.cc and log.cc.
This makes it explicit that the transaction is an active wsrep
transaction which must go through commit hooks.
When the chosen execution plan accesses a join table employing a range
rowid filter, a quick select to scan this range has to be built. This
quick select is built by a call of SQL_SELECT::test_quick_select().
At this call the function should be allowed to evaluate only single index
range scans. In order to make this possible, a new parameter was added
to this function.
1. Always drop the merged_for_insert flag on cleanup (there could be errors which prevent the TABLE from being assigned)
2. Make the cleanup of the touched select parts more precise
st_select_lex::handle_derived() and mysql_handle_list_of_derived() had
exactly the same implementations.
- Adding a new method LEX::handle_list_of_derived() instead
- Removing public function mysql_handle_list_of_derived()
- Reusing LEX::handle_list_of_derived() in st_select_lex::handle_derived()
* update system versioning fields before generated columns
* don't presume that ha_write_row() means INSERT. It could still be UPDATE
* use the correct handler in check_duplicate_long_entry_key()
close table->update_handler in close_thread_tables().
it's not enough to do it in sql_update.cc only, because
sql_insert.cc can also do updates (REPLACE) and even
sql_delete.cc can (DELETE ... FOR PORTION OF)
when auto-adding a virtual LONG_UNIQUE_HASH_FIELD, fill in
a Virtual_column_info for it, so that fill_alter_inplace_info()
would know we're adding a virtual field (ALTER_ADD_VIRTUAL_COLUMN).
The bug manifested itself when executing a query with a materialized
view/derived/CTE whose specification was a SELECT query that contained
another materialized derived table, and an impossible WHERE/HAVING
condition was detected for this SELECT.
As soon as such a condition is detected, the join structures of all
derived tables used in the SELECT are destroyed. So optimization
of the queries specifying these derived tables is impossible. Besides,
it's not needed.
In 10.3 optimization of a materialized derived table is performed before
detection of impossible WHERE/HAVING condition in the embedding SELECT.
server shutdown code.
The fix addresses a race condition where an active connection either
writes, or will be writing, to the socket after it is closed.
The previous call to socket shutdown() is fully sufficient to wake up an
idle connection, so close_connection() is obsolete and dangerous
(see the sketch below).
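A standalone sketch of the wake-up behavior relied on here (POSIX sockets,
an illustration only, not the server code): shutdown() alone wakes a thread
blocked reading the socket, so no separate racy close is needed.

  #include <cstdio>
  #include <sys/socket.h>
  #include <thread>
  #include <unistd.h>

  int main()
  {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    std::thread reader([&] {
      char buf[16];
      ssize_t n= read(sv[0], buf, sizeof(buf));  // blocks until shutdown
      std::printf("read returned %zd\n", n);     // 0: orderly wake-up
    });
    shutdown(sv[0], SHUT_RDWR);  // wakes the idle reader safely
    reader.join();
    close(sv[0]);
    close(sv[1]);
    return 0;
  }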
Refactored the wsrep patch to not use LOCK_thread_count and COND_thread_count anymore.
This has partially been replaced by using the old LOCK_wsrep_slave_threads mutex.
For waiting on slave thread count changes, a new COND_wsrep_slave_threads signal has been added.
Added a LOCK_wsrep_cluster_config mutex to ensure that a cluster address change cannot happen in parallel.
Protected wsrep_slave_threads variable changes with the LOCK_wsrep_cluster_config mutex.
This is to avoid concurrent slave thread count changes and cluster joining operations.
Fixes according to Teemu's review.
If we have a 2+ node cluster which is replicating from an async master
and the binlog_format is set to STATEMENT and multi-row inserts are executed
on a table with an auto_increment column such that values are automatically
generated by MySQL, then the server node generates wrong auto_increment
values, which are different from what was generated on the async master.
In the title of MDEV-9519 it was proposed to ban START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.
The causes and fixes:
1. We need to improve the processing of auto-increment value changes
after a change of the cluster size.
2. If wsrep auto_increment_control is switched on during operation of
the node, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on during operation of the node, which leads to inconsistent
results on the different nodes in some scenarios.
3. If wsrep auto_increment_control is switched off during operation of the
node, then we must restore the original values of the auto_increment_increment
and auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables, which store
the latest values set by the user (see the sketch below).
https://jira.mariadb.org/browse/MDEV-9519
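A standalone sketch of the shadow-copy idea from point 3 (all names are
illustrative, not the server's variables):

  #include <cstdio>

  struct AutoIncSettings
  {
    unsigned long increment= 1, offset= 1;              // effective values
    unsigned long saved_increment= 1, saved_offset= 1;  // user-set shadows

    void user_set(unsigned long inc, unsigned long off)
    {
      saved_increment= inc;  // remember what the user asked for
      saved_offset= off;
      increment= inc;
      offset= off;
    }
    // Control switched on, or the cluster size changed: derive the
    // values from the node count and this node's index.
    void control_on(unsigned nodes, unsigned index)
    {
      increment= nodes;
      offset= index + 1;
    }
    // Control switched off: restore exactly what the user had set.
    void control_off()
    {
      increment= saved_increment;
      offset= saved_offset;
    }
  };

  int main()
  {
    AutoIncSettings s;
    s.user_set(2, 1);
    s.control_on(3, 0);                               // 3 nodes, node #0
    s.control_off();
    std::printf("%lu %lu\n", s.increment, s.offset);  // 2 1
    return 0;
  }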
The problem happened because Item_ident_for_show did not implement val_native().
Solution:
- Removing class Item_ident_for_show
- Implementing a new method Protocol::send_list_fields() instead,
which accepts a List<Field> instead of List<Item> as input.
Now no Item creation is done during mysqld_list_fields().
Adding helper methods, to make the code easier to reuse:
- Moved a part of Protocol::send_result_set_metadata(),
responsible for sending an individual field metadata,
into a new method Protocol_text::store_field_metadata().
Reusing it in both send_list_fields() and send_result_set_metadata().
- Adding Protocol_text::store_field_metadata()
- Adding Protocol_text::store_field_metadata_for_list_fields()
Note, this patch also automatically fixed another bug:
MDEV-18685 mysql_list_fields() returns DEFAULT 0 instead of DEFAULT NULL for view columns
The reason for this bug was that Item_ident_for_show::val_xxx() and get_date()
did not check field->is_null() before calling field->val_xxx()/get_date().
Now the default value is correctly sent by Protocol_text::store(Field*).
Wsrep-lib is now guaranteed to hold the underlying mutex
which is wrapped in the lock object passed to the Wsrep_client_service
interrupted() call. The library part will now take care of
checking the wsrep::transaction specific state, so it is
enough to check the thd->killed state for the result.
The InnoDB DeadlockChecker::check_and_resolve() was missing a
call to wsrep_handle_SR_rollback() in the case when the
transaction running deadlock detection was chosen as victim.
Refined wsrep_handle_SR_rollback() to skip store_globals() calls
if the transaction was BF aborting itself.
Made mysql-wsrep-features#165 more deterministic by waiting until
the update is in progress before sending the next update.
with UNION ALL after INTERSECT
EXPLAIN EXTENDED erroneously showed UNION instead of UNION ALL in
the warning if UNION ALL followed INTERSECT or EXCEPT operations.
The bug was in the function st_select_lex_unit::print() that printed
the text of the query used in the warning.
sql_field->key_length was 0 for blob fields when a field was
being added, but it was Field_blob::character_octet_length() on
subsequent ALTER TABLEs (when the Field object in the old table
already existed). This means mysql_prepare_create_table() couldn't
reliably detect whether the keyseg was a prefix.
This patch implements an engine-independent unique hash index.
Usage: a unique HASH index can be created automatically for a blob/varchar/text
column whose key length > handler->max_key_length(),
or it can be specified explicitly.
Automatic creation:
CREATE TABLE t1 (a BLOB UNIQUE);
Explicit creation:
CREATE TABLE t1 (a INT, UNIQUE(a) USING HASH);
Internal KEY_PART representations:
A long unique key_info has 2 representations.
(Let's understand this with an example: create table t1(a blob, b blob, unique(a, b));)
1. User Given Representation: the key_info->key_part array will be similar to what
the user has defined. So in the case of the example it will have 2 key_parts (a, b).
2. Storage Engine Representation: in this case there will be only one key_part and
it will point to the HASH_FIELD. This key_part will always be after the user-defined
key_parts.
So: User Given Representation     [a] [b] [hash_key_part]
    key_info->key_part ------------^
    Storage Engine Representation [a] [b] [hash_key_part]
    key_info->key_part --------------------^
table->s->key_info will have the User Given Representation, while table->key_info
will have the Storage Engine Representation. The representations can be changed
into each other by calling the re/setup_keyinfo_hash functions.
Working:
1. When the user specifies HASH_INDEX or key_length is > handler->max_key_length(),
one extra vfield is added in mysql_prepare_create_table (for each long unique key),
and key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash keypart are set (like
fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used
with a list of Item_fields; when an explicit length is given by the user, Item_left
is applied to the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which
creates the hash key from table->record[0] and then calls ha_index_read_map; if we
find a duplicate hash, we compare the rows field by field (see the sketch below).
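A standalone sketch of the hash-then-compare technique from step 4 (a toy
analogue using the standard library, not the server code): index the hash
of a long value and, on a hash match, compare the full values to rule out
collisions.

  #include <functional>
  #include <string>
  #include <unordered_map>

  struct LongUniqueIndex
  {
    std::unordered_multimap<size_t, std::string> by_hash;

    // Returns false on a duplicate, mimicking a unique-key violation.
    bool insert(const std::string &blob)
    {
      size_t h= std::hash<std::string>{}(blob);
      auto range= by_hash.equal_range(h);
      for (auto it= range.first; it != range.second; ++it)
        if (it->second == blob)   // full comparison on hash match
          return false;           // a real duplicate, not a collision
      by_hash.emplace(h, blob);
      return true;
    }
  };

  int main()
  {
    LongUniqueIndex idx;
    bool first= idx.insert("some very long blob value");
    bool second= idx.insert("some very long blob value");  // rejected
    return first && !second ? 0 : 1;
  }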
and, again, *don't use thd->clear_error()*
this fixed the main.sp_notembedded failure on various amd64 platforms
(where ER_STACK_OVERRUN_NEED_MORE happens to fire in open_stat_tables()
under Dummy_error_handler)
After FLUSH PRIVILEGES, remember if the connection started under
--skip-grant-tables and keep it all-powerful, not a lowly anonymous one.
One could use this connection to reset passwords as needed.
Also fix a crash in SHOW CREATE USER
post-merge changes:
* handle password expiration on old tables like everything else -
make changes in memory, even if they cannot be done on disk
* merge "debug" tests with non-debug tests, they don't use dbug anyway
* only run rpl password expiration in MIXED mode, it doesn't replicate
anything, so no need to repeat it thrice
* restore update_user_table_password() prototype, it should not change
ACL_USER, this is done in acl_user_update()
* don't parse json twice in get_password_lifetime and get_password_expired
* remove LEX_USER::is_changing_password, see if there was any auth instead
* avoid overflow in expiration calculations
* don't initialize Account_options in the constructor, it's bzero-ed later
* don't create ulong sysvars - they're not portable, prefer uint or ulonglong
* misc simplifications
This patch adds support for expiring user passwords.
The following statements are extended:
CREATE USER user@localhost PASSWORD EXPIRE [option]
ALTER USER user@localhost PASSWORD EXPIRE [option]
If no option is specified, the password is expired with immediate
effect. If the option is DEFAULT, the global policy applies according to
the default_password_lifetime system variable (if 0, the password never
expires; if N, the password expires every N days). If the option is NEVER,
the password never expires, and if the option is INTERVAL N DAY, the
password expires every N days.
The feature also supports the disconnect_on_expired_password system
var and the --connect-expired-password client option.
Closes #1166
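A minimal sketch of the expiry rule described above (an illustrative helper
under assumed inputs, not the server's implementation): a password is
expired once lifetime_days have elapsed since the last change, and a
lifetime of 0 means it never expires.

  #include <cstdio>
  #include <ctime>

  static bool password_expired(time_t last_change, unsigned lifetime_days,
                               time_t now)
  {
    if (lifetime_days == 0 || last_change > now)
      return false;                  // never expires / clock-skew guard
    const time_t day= 24 * 3600;
    // Divide instead of multiplying lifetime_days by day, so huge
    // lifetimes cannot overflow (cf. the overflow fix noted above).
    return (now - last_change) / day >= (time_t) lifetime_days;
  }

  int main()
  {
    time_t now= time(nullptr);
    std::printf("%d\n", password_expired(now - 40 * 24 * 3600, 30, now)); // 1
    std::printf("%d\n", password_expired(now - 10 * 24 * 3600, 30, now)); // 0
    return 0;
  }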
* inject portion of time updates into mysql_delete main loop
* triggered case emits delete+insert, no updates
* PORTION OF `SYSTEM_TIME` is forbidden
* `DELETE HISTORY .. FOR PORTION OF ...` is forbidden as well