This follows up commits 94a520ddbe and 7c5519c12d.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
Though this is an error message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is described as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, though key name generation was done close to
the last stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before some of the checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then some other inappropriate errors are
issued, like `ER_DUP_FIELDNAME`.
To have the key name specified in the error message, name generation should
be done before the checks, which resulted in more changes.
The final design for key initialization in `mysql_prepare_create_table`
follows. The initialization is done in three phases:
1. Calculate the total number of keys to be created, taking ignored keys
into account. Allocate the KEY* buffer.
2. Generate unique names; calculate the total number of key parts.
Make early checks. Allocate the KEY_PART_INFO* buffer.
3. Initialize key parts, make the rest of the checks.
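A minimal, self-contained sketch of that three-phase flow, using stand-in
types (Key, KeyPart) and an illustrative name generator instead of the real
KEY / KEY_PART_INFO structures and server helpers:

#include <cstddef>
#include <string>
#include <vector>

struct KeyPart { std::string field; };
struct Key     { std::string name; std::vector<KeyPart> parts; bool ignored= false; };

std::vector<Key> prepare_keys(const std::vector<Key> &requested)
{
  // Phase 1: count the keys that will really be created (ignored keys are
  // skipped) and allocate storage for them (stand-in for the KEY* buffer).
  std::vector<Key> keys;
  for (const Key &k : requested)
    if (!k.ignored)
      keys.push_back(k);

  // Phase 2: generate unique names and count key parts, so that every later
  // check can already report the key by its final name.
  std::size_t total_parts= 0;
  for (std::size_t i= 0; i < keys.size(); i++)
  {
    if (keys[i].name.empty())
      keys[i].name= "key_" + std::to_string(i);  // stand-in name generation
    total_parts+= keys[i].parts.size();
  }
  std::vector<KeyPart> part_buffer;               // stand-in for KEY_PART_INFO*
  part_buffer.reserve(total_parts);

  // Phase 3: initialize the key parts and run the remaining checks.
  for (const Key &k : keys)
    for (const KeyPart &p : k.parts)
      part_buffer.push_back(p);
  return keys;
}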
After Sergei's cleanup this assertion is no longer relevant -- we can't
predict whether the handler was used for a lookup, especially in a
multi-update scenario.
`position(old_data)` is now done earlier, in `ha_check_overlaps`, so it is
guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses ha_index_read,
which in case of failure overwrites table->status. Even though the handlers
are different, they share a common table, so the value gets spoiled anyway.
This is bad, and table->status is badly designed and overloaded with
functionality, but nothing can be done about it, since the code related to
this logic is ancient and it's impossible to extract it with reasonable effort.
So let's just save and restore the value in ha_update_row before and after
the checks.
Other operations like INSERT and simple UPDATE are not at risk, since they
don't use this table->status approach.
DELETE does not do any unique checks, so it's also safe.
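A minimal, self-contained sketch of the save/restore idea with a stand-in
Table type (the real change wraps the overlap checks inside ha_update_row):

struct Table { int status; };

// Stand-in for ha_check_overlaps(): the index read inside it may clobber
// table.status on a lookup miss even when the check itself succeeds.
int check_overlaps(Table &table)
{
  table.status= 1;                    // simulated ha_index_read() side effect
  return 0;
}

int update_row(Table &table)
{
  int saved_status= table.status;     // preserve the caller-visible value
  int error= check_overlaps(table);
  table.status= saved_status;         // restore it after the checks
  if (error)
    return error;
  /* ... perform the actual row update here ... */
  return 0;
}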
1. Subtracting table->record[0] from record is UB (the buffers are not
contiguous).
2. move_field_offset is used very widely; it changes Field::ptr but leaves
table->record[0] unchanged. This makes the ptr_in_record result incorrect,
since it relies on the table->record[0] value.
The check ensures the result is within the queried record boundaries.
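One way such a boundary check can look (a sketch, not necessarily the exact
server code): compare the addresses as integers first, and only treat the
offset as meaningful once the pointer is known to lie inside the record
buffer.

#include <cstddef>
#include <cstdint>

bool ptr_within_record(const unsigned char *ptr,
                       const unsigned char *record, std::size_t reclength)
{
  std::uintptr_t p=     (std::uintptr_t) ptr;
  std::uintptr_t start= (std::uintptr_t) record;
  return p >= start && p < start + reclength;  // only then compute ptr - record
}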
Per b9f3f06857, mysql_system_tables_data.sql creates
a mysql_native_password entry with a salted hash of "invalid" so that
`set password` will detect that a native password can be applied.
SHOW CREATE USER diligently uses this value in its output,
generating the SQL:
MariaDB [(none)]> show create user;
+---------------------------------------------------------------------------------------------------+
| CREATE USER for dan@localhost |
+---------------------------------------------------------------------------------------------------+
| CREATE USER `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket |
+---------------------------------------------------------------------------------------------------+
Attempting to execute this before this patch results in:
MariaDB [(none)]> CREATE USER `dan2`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket;
ERROR 1372 (HY000): Password hash should be a 41-digit hexadecimal number
As such, deep in the implementation of mysql_native_password we make "invalid"
valid (pun intended) such that the above create user will succeed. We do this
by storing "*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE" (credit: Oracle MySQL),
which is of an INCORRECT length for a scramble.
In native_password_authenticate we check the length of this cached value
and immediately fail if it is anything other than the scramble length.
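A minimal sketch of that check (names are illustrative, not the exact plugin
code):

#include <cstddef>

// The value cached for an "invalid" password does not have the scramble
// length, so authentication can reject it before any comparison is done.
bool salt_is_usable(std::size_t cached_salt_length, std::size_t scramble_length)
{
  return cached_salt_length == scramble_length;
}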
native_password_get_salt is only called in the context of set_user_salt, so
all settings of native passwords to the hashed content of 'invalid' quite
literally create an invalid password.
So other forms of "invalid" are valid SQL in creating invalid passwords:
MariaDB [(none)]> set password = 'invalid';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> alter user dan@localhost IDENTIFIED BY PASSWORD 'invalid';
Query OK, 0 rows affected (0.000 sec)
closes #1628
Reviewer: serg@mariadb.com
Invoking memcpy() on a NULL pointer is undefined behaviour
(even if the length is 0) and gives the compiler permission to
assume that the pointer is nonnull. Recent versions of GCC
(starting with version 8) are more aggressively optimizing away
checks for NULL pointers. This undefined behaviour would cause
a SIGSEGV in the test main.func_encrypt on an optimized debug build
on GCC 10.2.0.
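A minimal illustration of the pattern that avoids the undefined behaviour:
skip the call entirely when there is nothing to copy or a pointer is null.

#include <cstring>

void copy_bytes(void *dst, const void *src, size_t len)
{
  if (len > 0 && src != nullptr && dst != nullptr)
    memcpy(dst, src, len);
}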
The issue here was that the system variable max_sort_length was being applied
to decimals, truncating their values to the number of bytes set by
max_sort_length.
This was leading to a buffer overflow, as the values were written
to the buffer without truncation and then the offset was moved by
the number of bytes (set by max_sort_length) that are needed for comparison.
The fix is to not apply max_sort_length to fixed-size types like INT and
DECIMAL, and to only apply max_sort_length to CHAR, VARCHAR, TEXT and
BLOB.
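A minimal sketch of the resulting rule (the boolean parameter stands in for
the real type check): only variable-length string types are capped by
@@max_sort_length, so for fixed-size types the bytes written always match
the bytes compared.

#include <algorithm>
#include <cstddef>

std::size_t sort_key_length(std::size_t full_length,
                            bool is_variable_length_string,
                            std::size_t max_sort_length)
{
  return is_variable_length_string
         ? std::min(full_length, max_sort_length)  // CHAR/VARCHAR/TEXT/BLOB
         : full_length;                            // INT/DECIMAL: never truncated
}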
The problem was that opt_sum_query() was, as part of the MIN/MAX optimization,
doing read operations on constant tables that were already closed.
Fixed by ensuring we don't try to read from tables that are closed.
It's a virtual method and it can't be inlined anyway. This allows type
plugins (mysql_json in particular) to use Type_handler_blob and / or
subclass it, without needing to explicitly expose the
vers_type_timestamp object.
Implement a different fix for
"MDEV-19232: Floating point precision / value comparison problem"
Instead of truncating decimal values after every division,
truncate them for comparison purposes.
This reverts commit 62d73df6b2 but keeps the test.
In case of direct execution (stmtid=-1, mariadb_stmt_execute_direct in the C
API) the application is in control of how many parameters the client sends to
the server. If this number is not equal to the actual number of query
parameters, the server may start to interpret packet data incorrectly, e.g.
starting from the size of the null bitmap, and that could cause it to crash
at some point. The commit introduces some additional COM_STMT_EXECUTE
packet sanity checks:
- checking that the "types sent" byte is set and the value is equal to 1;
  if it's not direct execution, then that value is 0 or 1.
- checking that the parameter type value is a valid type, and the parameter
  flags value is 0 or has only the "unsigned" bit set
- added more checks that reads do not go beyond the end of the packet
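A minimal, self-contained sketch of these checks (the layout constants and
the type whitelist are simplified assumptions, not the exact server parser):

#include <cstddef>
#include <cstdint>

static bool is_known_type(uint8_t type)
{
  return type <= 250;                        // stand-in for the real whitelist
}

bool check_execute_packet(const uint8_t *pos, const uint8_t *packet_end,
                          unsigned param_count, bool direct_execution)
{
  size_t null_bitmap_len= (param_count + 7) / 8;
  if ((size_t)(packet_end - pos) < null_bitmap_len + 1)
    return false;                            // never read past the packet end
  pos+= null_bitmap_len;

  uint8_t types_sent= *pos++;
  if (direct_execution ? types_sent != 1
                       : (types_sent != 0 && types_sent != 1))
    return false;                            // direct execution must send types

  if (types_sent)
  {
    if ((size_t)(packet_end - pos) < 2 * (size_t) param_count)
      return false;                          // one type byte and one flag byte each
    for (unsigned i= 0; i < param_count; i++)
    {
      uint8_t type=  pos[2 * i];
      uint8_t flags= pos[2 * i + 1];
      if (!is_known_type(type))
        return false;
      if (flags != 0 && flags != 0x80)       // only the "unsigned" bit may be set
        return false;
    }
  }
  return true;
}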
This patch solves two key problems.
1. There is a type number clash between MySQL and MariaDB. The number
245, used for MariaDB Virtual Fields, is the same as MySQL's JSON.
This leads to corrupt FRM errors if unhandled. The code properly
checks the frm table version number, and if it matches 5.7+ (until 10.0+)
it assumes it is dealing with a MySQL table with the JSON
datatype (see the sketch after this list).
2. The MySQL JSON datatype uses a proprietary format to pack JSON data. The
patch introduces a datatype plugin which parses the format and converts
it to its string representation.
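A minimal sketch of the disambiguation in point 1 (the constants are
illustrative; the real check happens while the frm is read):

bool frm_type_is_mysql_json(unsigned frm_mysql_version, unsigned type_code)
{
  const unsigned CLASHING_TYPE= 245;    // MySQL JSON vs. MariaDB virtual field info
  return type_code == CLASHING_TYPE &&
         frm_mysql_version >= 50700 &&  // frm written by MySQL 5.7+
         frm_mysql_version <  100000;   // ... and not by MariaDB 10.0+
}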
The intended conversion path is to only use the JSON datatype within
ALTER TABLE <table> FORCE, to force a table recreate. This happens
during mysql_upgrade or via a direct ALTER TABLE <table> FORCE.
A wsrep transaction was started for EXECUTE IMMEDIATE, which
caused an assertion failure when the executed statement was
CREATE TABLE, which should be executed in TOI mode.
As a fix, don't start a wsrep transaction for EXECUTE IMMEDIATE,
to let the wsrep state logic be handled from inside the stored
procedure codepath.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
For a correlated subquery, filesort is executed multiple times.
During each execution, sortlength() computed the total sort key length in
Sort_keys::sort_length without resetting it first.
Eventually Sort_keys::sort_length got larger than @@sort_buffer_size, which
caused filesort() to be aborted with an error.
Fixed by making sortlength() compute lengths only during the first
invocation. Subsequent invocations return pre-computed values.
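A minimal, self-contained sketch of the compute-once behaviour (a stand-in
structure, not the real Sort_keys class):

#include <cstddef>
#include <vector>

struct SortKeys
{
  std::vector<std::size_t> key_lengths;
  std::size_t sort_length= 0;
  bool parameters_computed= false;
};

std::size_t sortlength(SortKeys &keys)
{
  if (keys.parameters_computed)
    return keys.sort_length;            // later filesort runs reuse the value
  for (std::size_t len : keys.key_lengths)
    keys.sort_length+= len;             // accumulate only once
  keys.parameters_computed= true;
  return keys.sort_length;
}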
session_track_system_variables and max_relay_log_size.
Lock LOCK_global_system_variables around the get_one_variable() call
in Session_sysvars_tracker::store_variable().
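A minimal sketch of the pattern (the call's arguments are elided; only the
locking is the point):

  mysql_mutex_lock(&LOCK_global_system_variables);
  /* read the variable's current value while nobody can change or free it */
  get_one_variable(/* ... */);
  mysql_mutex_unlock(&LOCK_global_system_variables);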
For a debug build of the MariaDB server, running the following test case
will hit the assert `thd->lex->sql_command == SQLCOM_UPDATE' in the function
check_fields() on attempting to execute the UPDATE statement.
CREATE TABLE t1 (a INT);
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
The stack trace leading to the fired assert statement
DBUG_ASSERT(thd->lex->sql_command == SQLCOM_UPDATE)
is listed below:
mysql_execute_command() ->
mysql_multi_update_prepare() ->
Multiupdate_prelocking_strategy::handle_end() ->
check_fields()
It's worth noting that this stack trace looks as if a multi-update
statement were being executed. The fired assert is checked inside the
function check_fields() when table->has_period() returns true, which
in turn happens when a temporal period is specified in the UPDATE
statement. The condition specified in the DBUG_ASSERT statement evaluates
to false since the data member thd->lex->sql_command has the
value SQLCOM_UPDATE_MULTI. So, the main question is why program control
flow goes down the path prescribed for handling a multi-update statement
despite the fact that an ordinary UPDATE statement is being executed.
The answer lies in the way the SQL grammar rules are written.
When the statement
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
is being parsed, an action for the rule 'table_primary_ident' (part of this
action is listed below to simplify the description) is invoked to handle the
table name 't1' specified in the clause 'SELECT 1 FROM t1'.
table_primary_ident:
table_ident opt_use_partition opt_for_system_time_clause
opt_table_alias_clause opt_key_definition
{
SELECT_LEX *sel= Select;
sel->table_join_options= 0;
if (!($$= Select->add_table_to_list(thd, $1, $4,
This action calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
Later, an action for the following grammar rule
update_table_list:
table_ident opt_use_partition for_portion_of_time_clause
opt_table_alias_clause opt_key_definition
{
SELECT_LEX *sel= Select;
sel->table_join_options= 0;
if (!($$= Select->add_table_to_list(thd, $1, $4,
is invoked to handle the clause 't1 FOR PORTION OF APPTIME FROM ... TO 2'.
This action also calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables being used by the statement.
As a result, the table name 't1' is contained twice in this list.
The presence of duplicate names for the table 't1' in the list of tables used
by the statement leads to the function unique_table(), called
from the function mysql_update(), returning true, which forces the
implementation of the function mysql_update() to return the value 2 as
a signal to fall through the case boundary of the switch statement placed
in the function mysql_execute_command() and start handling the case
for the sql_command SQLCOM_UPDATE_MULTI. The compound statement block for the
case SQLCOM_UPDATE_MULTI invokes the function mysql_multi_update_prepare(),
which executes the statement
set thd->lex->sql_command= SQLCOM_UPDATE_MULTI;
and after that calls the method
Multiupdate_prelocking_strategy::handle_end(). Finally, this method
invokes the check_fields() function and the assert is fired.
The above analysis shows that an update of a table that is simultaneously
specified both as the destination table of the UPDATE statement and as a table
taking part in a subquery is actually treated by the MariaDB server as a
multi-update statement. Taking into account that a multi-update statement for
a temporal period table is not yet supported by MariaDB, the correct way to
fix the bug is to return the error ER_NOT_SUPPORTED_YET for this case.
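A minimal sketch of that fix (the placement, condition and message text are
assumptions):

  if (table_list->has_period())
  {
    /* hypothetical message text */
    my_error(ER_NOT_SUPPORTED_YET, MYF(0),
             "updating and querying the same temporal periods table");
    DBUG_RETURN(1);
  }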
A deadlock is possible between an applier thread and a local committing thread
with an active FLUSH TABLE.
The applier thread should skip table share checks and locks when opening a table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
ANALYSIS:
=========
During bootstrap, while executing the statements from the sql
file passed to the init-file server option, the transaction
mem_root was being freed for every statement. This creates
an issue with multi-statement transactions, especially when a
statement in the transaction has to access the memory used
by the previous statement in the transaction.
FIX:
====
The transaction mem_root is freed whenever a transaction is
committed or rolled back. Hence explicitly freeing it is not
necessary in the bootstrap implementation.
Change-Id: I40f71d49781bf7ad32d474bb176bd6060c9377dc
`LOCK TABLES view_name` should require
* invoker to have SELECT and LOCK TABLES privileges on the view
* either the invoker or the definer (only if SQL SECURITY DEFINER) to
have SELECT and LOCK TABLES privileges on the used tables/views.
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption uses a larger buffer (it needs space for the
encrypted data, the decrypted data, the IO_CACHE_CRYPT struct that describes
the encryption parameters, etc).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read position,
which will confuse other IO_CACHEs using it.
The fix for these issues:
- Allocate the buffer to also include the extra size needed for encryption.
- Perform the seek again after one IO_CACHE reads the file.
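A minimal sketch of both fixes (the size and flag names, except
seek_not_done, are assumptions):

  /* Issue #1: size the buffer for the encryption bookkeeping as well */
  size_t alloc_size= buffer_length + (uses_encryption ? crypt_extra_size : 0);

  /* Issue #2: the clones share one file descriptor, so force a re-seek
     before the next read in case another IO_CACHE moved the position */
  cache->seek_not_done= 1;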