Invoking memcpy() on a NULL pointer is undefined behaviour
(even if the length is 0) and gives the compiler permission to
assume that the pointer is non-NULL. Recent versions of GCC
(starting with version 8) optimize away checks for NULL pointers
more aggressively. This undefined behaviour caused a SIGSEGV in
the test main.func_encrypt on an optimized debug build with
GCC 10.2.0.
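A minimal sketch of the defensive pattern (not the actual patch): guard
the call site, since even a zero-length memcpy() with a NULL argument is
undefined behaviour and licenses the compiler to delete later NULL checks.

  #include <cstring>
  #include <cstddef>

  // Never hand a NULL pointer to memcpy(), even for zero-length copies.
  void copy_bytes(unsigned char *dst, const unsigned char *src, size_t len)
  {
    if (len == 0 || dst == nullptr || src == nullptr)
      return;
    memcpy(dst, src, len);
  }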
The issue here was that the system variable max_sort_length was being
applied to decimals, truncating decimal values to the number of bytes
set by max_sort_length.
This led to a buffer overflow: the values were written to the buffer
without truncation, while the offset was advanced only by the number of
bytes (capped by max_sort_length) needed for comparison.
The fix is to not apply max_sort_length to fixed-size types like INT and
DECIMAL, and to apply it only to CHAR, VARCHAR, TEXT and BLOB.
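A hypothetical sketch of the resulting length computation (names are
stand-ins, not the server's sort code): only variable-length string
types are capped by max_sort_length, so the bytes written and the
offset advanced always agree.

  #include <algorithm>
  #include <cstddef>

  enum class SortType { INT, DECIMAL, CHAR, VARCHAR, TEXT, BLOB };

  static bool is_variable_size(SortType t)
  {
    return t == SortType::CHAR || t == SortType::VARCHAR ||
           t == SortType::TEXT || t == SortType::BLOB;
  }

  // Fixed-size types always use their full length; string types are capped.
  size_t sort_key_length(SortType t, size_t full_length, size_t max_sort_length)
  {
    return is_variable_size(t) ? std::min(full_length, max_sort_length)
                               : full_length;
  }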
The problem was that opt_sum_query() was, as part of the MIN/MAX
optimization, doing read operations on constant tables that were already
closed. Fixed by ensuring we don't try to read from tables that are closed.
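A minimal sketch of the guard, with a hypothetical stand-in for the
server's table structures:

  // Skip the MIN/MAX read when the constant table has already been closed.
  struct ConstTable { bool is_open; };

  bool read_min_max(ConstTable *table)
  {
    if (!table->is_open)
      return false;              // don't read from a closed table
    // ... perform the index read for the MIN/MAX value ...
    return true;
  }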
It's a virtual method, so it can't be inlined anyway. This allows type
plugins (mysql_json in particular) to use Type_handler_blob and/or
subclass it, without needing to explicitly expose the
vers_type_timestamp object.
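A hedged sketch with stand-in class names: keeping the accessor virtual
and out-of-line means a plugin can inherit it without the
vers_type_timestamp symbol being exported.

  struct Vers_type_handler {};

  class Type_handler_blob_like          // stand-in for Type_handler_blob
  {
  public:
    virtual ~Type_handler_blob_like() = default;
    // Virtual and defined out-of-line, so it is never inlined into plugins.
    virtual const Vers_type_handler *vers() const;
  };

  // Defined inside the server, next to the unexported object:
  static Vers_type_handler vers_type_timestamp_impl;
  const Vers_type_handler *Type_handler_blob_like::vers() const
  { return &vers_type_timestamp_impl; }

  // A plugin such as mysql_json simply inherits the behaviour:
  class Type_handler_mysql_json : public Type_handler_blob_like {};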
Implement a different fix for
"MDEV-19232: Floating point precision / value comparison problem"
Instead of truncating decimal values after every division,
truncate them for comparison purposes.
This reverts commit 62d73df6b2 but keeps the test.
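An illustrative sketch of the truncate-for-comparison idea only, using
plain doubles instead of the server's decimal type: keep full precision
through the computation and round only at the comparison.

  #include <cmath>

  static double round_to_scale(double v, int scale)
  {
    const double f = std::pow(10.0, scale);
    return std::round(v * f) / f;
  }

  // Compare after rounding both sides, instead of truncating each
  // intermediate division result.
  bool decimals_equal(double a, double b, int scale)
  {
    return round_to_scale(a, scale) == round_to_scale(b, scale);
  }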
In case of direct execution (stmtid=-1, mariadb_stmt_execute_direct in the
C API) the application is in control of how many parameters the client
sends to the server. If this number is not equal to the actual number of
query parameters, the server may start to interpret packet data
incorrectly, e.g. starting from the size of the null bitmap, and that
could cause it to crash at some point. The commit introduces some
additional COM_STMT_EXECUTE packet sanity checks (sketched below):
- checking that the "types sent" byte is set and that its value is equal
  to 1; if it's not direct execution, that value may be 0 or 1.
- checking that the parameter type value is a valid type, and that the
  parameter flags value is 0 or has only the "unsigned" bit set
- added more checks that reads do not go beyond the end of the packet
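A simplified sketch of these checks (constants and signatures are
stand-ins, not the server's actual COM_STMT_EXECUTE parser):

  #include <cstdint>
  #include <cstddef>

  static const uint8_t PARAM_FLAG_UNSIGNED = 0x80;  // assumed "unsigned" bit

  // Direct execution must send the types; otherwise 0 and 1 are both legal.
  bool check_types_sent(uint8_t types_sent, bool direct_execution)
  {
    return direct_execution ? types_sent == 1 : types_sent <= 1;
  }

  // Validate one parameter header without reading past the packet end.
  bool check_param_header(const uint8_t *pos, const uint8_t *end,
                          bool (*is_known_type)(uint8_t))
  {
    if (end - pos < 2)                  // type byte + flags byte
      return false;                     // read would leave the packet
    const uint8_t type=  pos[0];
    const uint8_t flags= pos[1];
    if (!is_known_type(type))
      return false;                     // reject unknown type codes
    return flags == 0 || flags == PARAM_FLAG_UNSIGNED;
  }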
This patch solves two key problems.
1. There is a type number clash between MySQL and MariaDB. The number
   245, used for MariaDB virtual fields, is the same as MySQL's JSON.
   This leads to corrupt FRM errors if unhandled. The code now properly
   checks the frm table version number, and if it matches 5.7+ (and is
   below 10.0) it assumes it is dealing with a MySQL table with the
   JSON datatype.
2. The MySQL JSON datatype uses a proprietary format to pack JSON data.
   The patch introduces a datatype plugin which parses the format and
   converts it to its string representation.
The intended conversion path is to use the JSON datatype only within
ALTER TABLE <table> FORCE, to force a table recreate. This happens
during mysql_upgrade or via a direct ALTER TABLE <table> FORCE.
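A hedged sketch of the disambiguation (constants are stand-ins; the
real code reads the version stored in the frm):

  #include <cstdint>

  static const uint8_t SHARED_TYPE_CODE = 245;  // MariaDB vcol info / MySQL JSON

  enum class FieldKind { MARIADB_VIRTUAL, MYSQL_JSON, OTHER };

  // frm_from_mysql_57 = frm version matches MySQL 5.7+ but not MariaDB 10.0+.
  FieldKind classify_field(uint8_t type_code, bool frm_from_mysql_57)
  {
    if (type_code != SHARED_TYPE_CODE)
      return FieldKind::OTHER;
    return frm_from_mysql_57 ? FieldKind::MYSQL_JSON
                             : FieldKind::MARIADB_VIRTUAL;
  }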
A wsrep transaction was started for EXECUTE IMMEDIATE, which
caused an assertion failure when the executed statement was
CREATE TABLE, which should be executed in TOI mode.
As a fix, don't start a wsrep transaction for EXECUTE IMMEDIATE,
and let the wsrep state logic be handled from inside the stored
procedure codepath.
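A minimal sketch of the idea (stand-in types; the real check lives in
the wsrep transaction-start logic):

  enum sql_command { SQLCOM_EXECUTE_IMMEDIATE, SQLCOM_OTHER };

  // EXECUTE IMMEDIATE itself opens no wsrep transaction; the statement
  // it runs (e.g. CREATE TABLE in TOI mode) is handled in the stored
  // procedure codepath.
  bool should_start_wsrep_transaction(sql_command cmd)
  {
    return cmd != SQLCOM_EXECUTE_IMMEDIATE;
  }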
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
For a correlated subquery, filesort is executed multiple times.
During each execution, sortlength() computed the total sort key length
in Sort_keys::sort_length without resetting it first.
Eventually Sort_keys::sort_length grew larger than @@sort_buffer_size,
which caused filesort() to be aborted with an error.
Fixed by making sortlength() compute the lengths only during the first
invocation. Subsequent invocations return the pre-computed values.
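A compute-once sketch of the fix (simplified stand-in for the server's
Sort_keys):

  #include <cstddef>
  #include <vector>

  struct Sort_keys
  {
    size_t sort_length= 0;
    bool   length_computed= false;
  };

  size_t sortlength(Sort_keys *keys, const std::vector<size_t> &field_lengths)
  {
    if (!keys->length_computed)
    {
      for (size_t len : field_lengths)
        keys->sort_length+= len;      // accumulate exactly once
      keys->length_computed= true;
    }
    return keys->sort_length;         // later invocations reuse the value
  }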
session_track_system_variables and max_relay_log_size:
lock LOCK_global_system_variables around the get_one_variable() call
in Session_sysvars_tracker::store_variable().
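A sketch of the locking pattern, with std::mutex standing in for the
server's mysql_mutex_t:

  #include <mutex>

  std::mutex LOCK_global_system_variables;   // stand-in for the server mutex

  void store_variable_snapshot(void (*get_one_variable)())
  {
    // Take the global lock so the tracker reads a consistent value.
    std::lock_guard<std::mutex> guard(LOCK_global_system_variables);
    get_one_variable();
  }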
For a debug build of the MariaDB server, running the following test case
will hit the assert `thd->lex->sql_command == SQLCOM_UPDATE' in the
function check_fields() on attempt to execute the UPDATE statement.
CREATE TABLE t1 (a INT);
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
The stack trace to the fired assert statement
DBUG_ASSERT(thd->lex->sql_command == SQLCOM_UPDATE)
is listed below:
mysql_execute_command() ->
mysql_multi_update_prepare() ->
Multiupdate_prelocking_strategy::handle_end() ->
check_fields()
It's worth noting that this stack trace looks as if a multi-update
statement were being executed. The fired assert is checked inside the
function check_fields() in case table->has_period() returns true,
which in turn happens when a temporal period is specified in the UPDATE
statement. The condition specified in the DBUG_ASSERT statement
evaluates to false since the data member thd->lex->sql_command has the
value SQLCOM_UPDATE_MULTI. So, the main question is why program control
flow takes the path prescribed for handling a multi-update statement
despite the fact that an ordinary UPDATE statement is being executed.
The answer lies in the way the SQL grammar rules are written.
When the statement
UPDATE t1 FOR PORTION OF APPTIME FROM (SELECT 1 FROM t1) TO 2 SET a = 1;
is parsed, an action for the rule 'table_primary_ident' (part of this
action is listed below to simplify the description) is invoked to handle
the table name 't1' specified in the clause 'SELECT 1 FROM t1'.
table_primary_ident:
  table_ident opt_use_partition opt_for_system_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4, ...
This action calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables used by the statement.
Later, an action for the following grammar rule
update_table_list:
  table_ident opt_use_partition for_portion_of_time_clause
  opt_table_alias_clause opt_key_definition
  {
    SELECT_LEX *sel= Select;
    sel->table_join_options= 0;
    if (!($$= Select->add_table_to_list(thd, $1, $4, ...
is invoked to handle the clause 't1 FOR PORTION OF APPTIME FROM ... TO 2'.
This action also calls the method st_select_lex::add_table_to_list()
to add the table name 't1' to the list of tables used by the statement.
As a result, the table name 't1' is contained twice in this list.
The presence of duplicate names for the table 't1' in the list of tables
used by the statement leads to the function unique_table(), called
from the function mysql_update(), returning true, which forces the
function mysql_update() to return the value 2 as a signal to fall
through the case boundary of the switch statement placed in the function
mysql_execute_command() and start handling the case for the sql_command
SQLCOM_UPDATE_MULTI. The compound statement block for the case
SQLCOM_UPDATE_MULTI invokes the function mysql_multi_update_prepare(),
which executes the statement
  thd->lex->sql_command= SQLCOM_UPDATE_MULTI;
and after that calls the method
Multiupdate_prelocking_strategy::handle_end(). Finally, this method
invokes the check_fields() function and the assert is fired.
The above analysis shows that an update of a table that is simultaneously
specified both as the destination table of the UPDATE statement and as a
table taking part in the subquery is actually treated by the MariaDB
server as a multi-update statement. Taking into account that multi-update
statements for temporal period tables are not yet supported by MariaDB,
the correct way to fix the bug is to return the error
ER_NOT_SUPPORTED_YET for this case.
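A hedged sketch of the fix (stand-in types; the real check runs during
multi-update preparation):

  struct TargetTable { bool has_period; };

  enum prepare_result { PREPARE_OK, PREPARE_NOT_SUPPORTED };

  // A single-table UPDATE silently converted to a multi-update must be
  // rejected when the target table has a period:
  // my_error(ER_NOT_SUPPORTED_YET, ...).
  prepare_result prepare_multi_update(const TargetTable &t)
  {
    if (t.has_period)
      return PREPARE_NOT_SUPPORTED;
    // ... regular multi-update preparation ...
    return PREPARE_OK;
  }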
Deadlock is possible between the applier thread and a local committing
thread with an active FLUSH TABLE.
The applier thread should skip table share checks and locks when opening
a table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
ANALYSIS:
=========
During bootstrap, while executing the statements from the sql
file passed to the init-file server option, the transaction
mem_root was being freed for every statement. This creates
an issue with multi-statement transactions, especially when a
statement in the transaction has to access memory used
by a previous statement in the transaction.
FIX:
====
The transaction mem_root is freed whenever a transaction is
committed or rolled back, hence explicitly freeing it is not
necessary in the bootstrap implementation.
Change-Id: I40f71d49781bf7ad32d474bb176bd6060c9377dc
`LOCK TABLES view_name` should require
* the invoker to have SELECT and LOCK TABLES privileges on the view
* either the invoker or the definer (only with SQL SECURITY DEFINER) to
  have SELECT and LOCK TABLES privileges on the underlying tables/views.
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption needs a larger buffer (it needs space
for the encrypted data, the decrypted data, the IO_CACHE_CRYPT struct
that describes the encryption parameters, etc).
Issue #2: IO_CACHE::seek_not_done.
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read
position, which will confuse the other IO_CACHEs using it.
The fixes for these issues are:
Allocate the buffer to also include the extra size needed for encryption.
Perform the seek again after one IO_CACHE reads the file.
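An illustration of issue #2 with plain stdio standing in for IO_CACHE:
two clones that share one file handle must re-seek before every read,
because the other clone may have moved the shared position in between.

  #include <cstdio>

  struct cache_clone
  {
    FILE *shared_file;   // the same FILE* in every clone
    long  my_pos;        // this clone's own logical position
  };

  size_t cache_read(cache_clone *c, void *buf, size_t len)
  {
    // Seek unconditionally: another clone may have read since our last call.
    fseek(c->shared_file, c->my_pos, SEEK_SET);
    size_t n= fread(buf, 1, len, c->shared_file);
    c->my_pos+= (long) n;
    return n;
  }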
- fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
- don't use precedence for printing CASE/WHEN/THEN/ELSE/END
- fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
- support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
  REGEXP
- use %nonassoc for unary operators
- fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
- remove the parser_precedence test as superseded by the precedence test
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (the TABLE_CATALOG column alone is 1538 bytes per row).
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs), then after the optimization it will stay Aria
even if all blobs were removed.
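A stand-in sketch of the footprint optimization (not the server's
create_tmp_table code): before instantiation, every column not in the
read_set is replaced by a zero-length field.

  #include <vector>
  #include <cstddef>

  struct Column { size_t length; bool in_read_set; };

  size_t shrink_unused_columns(std::vector<Column> &columns)
  {
    size_t row_size= 0;
    for (Column &col : columns)
    {
      if (!col.in_read_set)
        col.length= 0;        // the column becomes CHAR(0)
      row_size+= col.length;
    }
    return row_size;          // e.g. saves the 1538 bytes of TABLE_CATALOG
  }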
Note 1: when transforming the table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for the deleted mark, which create_tmp_table
didn't provide.
Note 2: the optimizer was doing handler::info() (to know the number of
rows) before the temp table was populated. That didn't make much sense.
Now it's done before the table is even instantiated. Preserve the old
behavior and report 0 rows.
This reverts e2664ee836 and a8458a2345.
When the first argument to JSON_MERGE_PATCH was NULL and the second was
an invalid JSON line, the error code was garbage, so it should be set
to 0 initially.
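A minimal illustration of the fix (names are stand-ins for the JSON
engine internals):

  struct json_engine { int error; };

  void json_engine_init(json_engine *je)
  {
    je->error= 0;   // was left uninitialized, yielding a garbage error code
    // ... set up the scanning state ...
  }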
MDEV-20945: BACKUP UNLOCK + FTWRL assertion failure | SIGSEGV in I_P_List
from MDL_context::release_lock on INSERT w/ BACKUP LOCK (on optimized
builds) | Assertion `ticket->m_duration == MDL_EXPLICIT' failed
BACKUP LOCK behavior is modified so it won't be used incorrectly:
- BACKUP LOCK should commit any active transactions.
- BACKUP LOCK should not be allowed in stored procedures.
- When BACKUP LOCK is active, don't allow any DDLs for that connection.
- FTWRL is forbidden on the same connection while BACKUP LOCK is active.
Reviewed-by: monty@mariadb.com
Some functions in ha_partition call the corresponding function on all
partitions, but handler::reset() is called only on the partitions marked
in m_partitions_to_reset. So Spider did not clear its pointers on the
unpruned partitions; if an unpruned partition is used by the next query,
Spider references a pointer that has already been freed.
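A stand-in sketch of the hazard (simplified types; clearing
unconditionally is one possible shape of the fix):

  #include <vector>

  struct partition_handler { void *cached_ptr; };

  // Clear per-partition state on every partition, not only the pruned
  // ones, so no stale pointer survives into the next query.
  void reset_all_partitions(std::vector<partition_handler> &parts)
  {
    for (partition_handler &p : parts)
      p.cached_ptr= nullptr;
  }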