The parser of CREATE USER accepts ACCOUNT LOCK before PASSWORD
EXPIRE but not the other way around.
This just changes SHOW CREATE USER to output SQL syntax that
is valid.
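For example, a sketch of the two orderings:

  -- accepted by the parser: ACCOUNT LOCK before PASSWORD EXPIRE
  CREATE USER u1@localhost ACCOUNT LOCK PASSWORD EXPIRE;
  -- rejected by the parser: PASSWORD EXPIRE before ACCOUNT LOCK
  CREATE USER u2@localhost PASSWORD EXPIRE ACCOUNT LOCK;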
Thanks to Robert Bindar for analysis.
During graceful shutdowns, client connections are closed and
eventually THD::awake() acquires the LOCK_thd_data mutex, which is
required later on in wsrep_thd_is_aborting(). Make sure LOCK_thd_data
is acquired, even if global wsrep_on is disabled.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Some DDL statements appear to acquire MDL locks for a table referenced by
a foreign key constraint of the actual table affected by the DDL statement.
OPTIMIZE, REPAIR and ALTER TABLE belong to this class of DDL statements.
Earlier MariaDB versions did not take this into consideration, and appended
only the affected table to the certification key list in the write set.
Because of the missing certification information, it could happen that e.g.
OPTIMIZE TABLE for an FK child table could be allowed to apply in parallel
with DML operating on the foreign key parent table, and this could lead to
unhandled MDL lock conflicts between two high priority appliers (BF).
The fix in this patch changes the TOI replication for OPTIMIZE, REPAIR and
ALTER TABLE statements so that before the execution of the respective DDL
statement, there is a foreign key parent search round. This FK parent search
consists of the following steps:
* open and lock the affected table (with permissive shared locks)
* iterate over foreign key constraints and collect an array of FK parent
  table names
* close all tables open for the THD and release MDL locks
* do the actual TOI replication with the affected table and FK parent
table names as key values
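As an illustration of the scenario this addresses (a sketch with
hypothetical table names):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT PRIMARY KEY, parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent (id)) ENGINE=InnoDB;
  -- node 1: DDL on the FK child table; with this patch the TOI
  -- write set carries certification keys for both child and parent
  OPTIMIZE TABLE child;
  -- node 2, concurrently: DML on the FK parent table; without the
  -- parent key this could apply in parallel with the OPTIMIZE and
  -- cause an MDL conflict between two high priority appliers
  INSERT INTO parent VALUES (1);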
The patch also contains a new mtr test for verifying that the above mentioned
DDL statements replicate without problems when operating on an FK child table.
The mtr test scenario #1 can be used to check whether some other DDL
(on top of OPTIMIZE, REPAIR and ALTER) could cause similar excessive FK
parent table locking.
Reviewed-by: Aleksey Midenkov <aleksey.midenkov@mariadb.com>
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
buf_read_ahead_random(): Do not leak a tablespace reference.
The reference was already acquired in fil_space_t::get(),
and we must only check that operations were not stopped.
This error was introduced when
commit 118e258aaa
merged n_pending_ios, n_pending_ops into a single n_pending.
This was not noticed earlier, because innodb_random_read_ahead
is OFF by default and our regression tests did not vary that
parameter at all.
Changed the test so that it does not rely on specific auto increment
ids. With Galera's default wsrep_auto_increment_control setting it is
not guaranteed that auto increments always start from 1. The test was
occasionally failing due to result content mismatch.
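A sketch of the pattern, with a hypothetical table (compare
deterministic columns instead of the generated ids):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, f2 CHAR(1));
  INSERT INTO t1 (f2) VALUES ('a'), ('b');
  -- do not select or compare the id values: with
  -- wsrep_auto_increment_control the offset and increment depend
  -- on the cluster size, so ids need not start from 1
  SELECT f2 FROM t1 ORDER BY f2;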
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Prepared statements which were run over the binary protocol crashed
the server if the statement did not have the CF_PS_ARRAY_BINDING_OPTIMIZED
flag and the statement was executed in bulk mode and a BF abort occurred.
This was because the bulk execution resulted in several statements without
calling wsrep_after_statement() in between, which confused wsrep transaction
state tracking.
As a fix, call wsrep_after_statement() in the bulk loop after each execution
if CF_PS_ARRAY_BINDING_OPTIMIZED is not set.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Bugs fixed:
- prepare_for_repair() didn't close all open files if the table open failed
  because of out-of-memory
- If dd_recreate_table() failed, the data file was not properly restored
  from its temporary name
- Aria repair initializing code didn't properly clear all used structs
  before calling error, which caused crashes in memory-free calls.
- maria_delete_table() didn't register if the table open failed. This could
  cause my_error() to be called without returning 1 to the caller, which
  caused failures in my_ok()
Note when merging to 10.5:
- Remove the #if MYSQL_VERSION from sql_admin.cc
instant_alter_column_possible(): Relax a too strict debug assertion.
The existence of an index stub or a corrupted index on virtual columns
does not imply that virtual columns exist.
This follows up
commit 94a520ddbe and
commit 7c5519c12d.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
Though this is an error message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is described as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, though key name generation was done close to
the last stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before some checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then some other inappropriate errors are
issued, like `ER_DUP_FIELDNAME`.
To have the key name specified in the error message, name generation should
be done before the checks, which resulted in more changes.
The final design for key initialization in `mysql_prepare_create_table`
follows. The initialization is done in three phases:
1. Calculate a total number of keys created with respect to keys ignored.
Allocate KEY* buffer.
2. Generate unique names; calculate a total number of key parts.
Make early checks. Allocate KEY_PART_INFO* buffer.
3. Initialize key parts, make the rest of the checks.
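For illustration: when no key name is given, the name is generated
from the first key part, and phase 2 makes the generated names
unique, so a check firing in phase 3 can already report the key by
name (a sketch; the exact error depends on which check fires):

  CREATE TABLE t1 (
    a INT,
    b INT,
    KEY (a),     -- generated key name: a
    KEY (a, b)   -- generated key name: a_2
  );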
`ha_heap::clone` was creating a handler by the share's handlerton, which is
the partition handlerton.
The handler's own handlerton should be used instead.
Here in particular, the HEAP handlerton will be used and it will create an
ha_heap handler.
The bug was fixed by the MDEV-22599 bugfix, which changed the `Field::cmp`
call to `Field::cmp_prefix` in `TABLE::check_period_overlaps`.
The trick is that `Field_bit::cmp` apparently calls `Field_bit::cmp_key`,
which considers its argument an actual pointer to data, which isn't correct
for `Field_bit`, since it stores data by `bit_ptr`, which is at the
beginning of the record; using `ptr` is incorrect (we access it through the
`ptr_in_record` call).
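A sketch of the kind of construct affected (hypothetical table): a
BIT column taking part in a WITHOUT OVERLAPS key, which makes the
overlap check compare Field_bit values:

  CREATE TABLE t1 (
    id INT,
    b BIT(8),
    s DATE,
    e DATE,
    PERIOD FOR p (s, e),
    UNIQUE (id, b, p WITHOUT OVERLAPS)
  );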
After Sergei's cleanup this assertion is no longer valid -- we can't
predict if the handler was used for a lookup, especially in a multi-update
scenario.
`position(old_data)` is done earlier in `ha_check_overlaps`, therefore it
is guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses ha_index_read,
which in case of failure overwrites table->status. Even though the handlers
are different, they share a common table, so the value is spoiled anyway.
This is bad, and table->status is badly designed and overloaded with
functionality, but nothing can be done about it, since the code related to
this logic is ancient and it's impossible to extract it with reasonable
effort.
So let's just save and restore the value in ha_update_row before and after
the checks.
Other operations like INSERT and simple UPDATE are not at risk, since they
don't use this table->status approach.
DELETE does not do any unique checks, so it's also safe.
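A sketch of the risky case (hypothetical table): a multi-row UPDATE
on a WITHOUT OVERLAPS table, where each updated row runs the overlap
check whose internal index read could fail and spoil table->status
for the rest of the update:

  CREATE TABLE t1 (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p (s, e),
    UNIQUE (id, p WITHOUT OVERLAPS)
  );
  INSERT INTO t1 VALUES (1, '2020-01-01', '2020-06-01'),
                        (2, '2020-01-01', '2020-06-01');
  UPDATE t1 SET id = id + 10;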
1. Subtracting table->record[0] from record is UB (non-contiguous buffers).
2. It is a very common pattern to use move_field_offset, which changes
Field::ptr but leaves table->record[0] unchanged. This makes the
ptr_in_record result incorrect, since it relies on the table->record[0]
value.
The check ensures the result is within the queried record boundaries.
Add --system={all, users, plugins, udfs, servers, stats, timezones}
This will dump system information from the server in
a logical form like:
* CREATE USER
* GRANT
* SET DEFAULT ROLE
* CREATE ROLE
* CREATE SERVER
* INSTALL PLUGIN
* CREATE FUNCTION
"stats" is the innodb statistics tables or EITS and
these are dumped as INSERT/REPLACE INTO statements
without recreating the table.
"timezones" is the collection of timezone tables
which are important to transfer to generate identical
results on restoration.
Two other options have an effect on the SQL generated by
--system=all. These are mutually exclusive:
* --replace
* --insert-ignore
--replace will include "OR REPLACE" into the logical form
like:
* CREATE OR REPLACE USER ...
* DROP ROLE IF EXISTS (MySQL-8.0+)
* CREATE OR REPLACE ROLE ...
* UNINSTALL PLUGIN IF EXISTS (10.4+) ... (before INSTALL PLUGIN)
* DROP FUNCTION IF EXISTS (MySQL-5.7+)
* CREATE OR REPLACE [AGGREGATE] FUNCTION
* CREATE OR REPLACE SERVER
--insert-ignore uses the "IF NOT EXISTS" construct where
the logical syntax supports it.
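For example, a sketch of the generated logical SQL for
--system=users (user name, hash and grant are placeholders; exact
quoting and attributes vary by version):

  -- with --replace:
  CREATE OR REPLACE USER `app`@`%` IDENTIFIED BY PASSWORD '<hash>';
  GRANT SELECT ON `db1`.* TO `app`@`%`;
  -- with --insert-ignore instead:
  CREATE USER IF NOT EXISTS `app`@`%` IDENTIFIED BY PASSWORD '<hash>';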
'CREATE OR REPLACE USER' includes protection against
being run as the same user that is importing the mysqldump.
Includes experimental support for dumping mysql-5.7/8.0
system tables and exporting logical SQL compatible with MySQL.
Updates the mysqldump man page, including this information and
removing an obsolete bug reference.
Reviewed-by: anel@mariadb.org
Per b9f3f06857, mysql_system_tables_data.sql creates
a mysql_native_password entry with a salted hash of "invalid" so that
`set password` will detect that a native password can be applied.
SHOW CREATE USER diligently uses this value in its output,
generating the SQL:
MariaDB [(none)]> show create user;
+---------------------------------------------------------------------------------------------------+
| CREATE USER for dan@localhost |
+---------------------------------------------------------------------------------------------------+
| CREATE USER `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket |
+---------------------------------------------------------------------------------------------------+
Attempting to execute this before this patch results in:
MariaDB [(none)]> CREATE USER `dan2`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket;
ERROR 1372 (HY000): Password hash should be a 41-digit hexadecimal number
As such, deep in the implementation of mysql_native_password we make
"invalid" valid (pun intended) such that the above CREATE USER will succeed.
We do this by storing "*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE"
(credit: Oracle MySQL), which is of an INCORRECT length for a scramble.
In native_password_authenticate we check the length of this cached value
and immediately fail if it is anything other than the scramble length.
native_password_get_salt is only called in the context of set_user_salt, so
all settings of native passwords to the hashed content of 'invalid' quite
literally create an invalid password.
So other forms of "invalid" are valid SQL for creating invalid passwords:
MariaDB [(none)]> set password = 'invalid';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> alter user dan@localhost IDENTIFIED BY PASSWORD 'invalid';
Query OK, 0 rows affected (0.000 sec)
closes #1628
Reviewer: serg@mariadb.com
buf_flush_try_neighbors(): Before invoking buf_page_t::ready_for_flush(),
check that the freshly looked up buf_pool.page_hash entry actually is
a buffer page and not a buf_pool.watch[] sentinel for purge buffering.
This race condition was introduced in MDEV-15053
(commit b1ab211dee).
It is rather hard to hit this bug, because
buf_flush_check_neighbors() already checked the condition.
The problem exists if buf_pool.watch_set() was invoked for
a page in the range after the check in buf_flush_check_neighbor()
had been finished.