The parser works as follows:
The rule expr_lex returns a pointer to a newly created sp_expr_lex
instance which is not yet linked to any MariaDB structures - it is
pointed to only by a Bison stack variable. The sp_expr_lex instance
gets linked to other structures (such as sp_instr_jump_if_not) later,
after some of the following grammar has been scanned.
Problem before the fix:
If a parse error happened immediately after expr_lex (before it got linked),
the created sp_expr_lex value was lost, causing a memory leak.
Fix:
- Using Bison's "destructor" directive to free the result of expr_lex
on parse/OOM errors (see the model grammar at the end of this commit
message).
- Moving the call for LEX::cleanup_lex_after_parse_error() out of
MYSQL_YYABORT and yyerror into parse_sql().
This is needed because Bison calls destructors after yyerror(),
while it's important to delete the sp_expr_lex instance before
LEX::cleanup_lex_after_parse_error().
The latter frees the memory root containing the sp_expr_lex instance.
After this change the code blocks are executed in the following order:
- yyerror() -- now only raises the error to DA (no cleanup done any more)
- %destructor { delete $$; } <expr_lex> -- destructs the sp_expr_lex instance
- LEX::cleanup_lex_after_parse_error() -- frees the memory root containing
the sp_expr_lex instance
- Removing the "delete sublex" related code from restore_lex():
- restore_lex() is called in most cases on success, when the delete is not needed.
- There is one place where restore_lex() is called on error:
in sp_create_assignment_instr(). But in this case LEX::sp_lex_in_use
is true anyway.
The patch adds a new DBUG_ASSERT(lex->sp_lex_in_use) to guard this.
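Below is a minimal, self-contained model of the technique (hypothetical
grammar and type names, not the actual sql_yacc.yy rules), showing how a
%destructor reclaims a heap-allocated semantic value that Bison discards
during error recovery:
```
%{
/* An Expr allocated in expr_lex lives only on the Bison stack until the
   enclosing rule takes ownership.  If a parse error happens first, Bison
   discards the value and runs the %destructor, so nothing leaks. */
#include <cstdio>
struct Expr { int v; };
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
%}
%union { Expr *expr; }
%type <expr> expr_lex
%destructor { delete $$; } <expr>       /* runs for discarded <expr> values */
%%
stmt:     expr_lex ';' { delete $1; };  /* normal path: the owner frees */
expr_lex: '0' { $$= new Expr{0}; };     /* value exists only on the stack */
%%
int yylex(void)
{
  int c;
  while ((c= getchar()) == ' ' || c == '\n') {}  /* skip whitespace */
  return c == EOF ? 0 : c;
}
int main() { return yyparse(); }
```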
The LooseScan code set opt_range_condition_rows to
MIN(loose_scan_plan->records, table->records),
totally ignoring possible quick range selects. If there was a quick
select $QUICK on another index with
$QUICK->records < loose_scan_plan->records
this would create a situation where
opt_range_condition_rows > $QUICK->records
which causes an assert in 10.6+ and potentially wrong query plan
choice in 10.5.
Fixed by making opt_range_condition_rows the minimum #rows of
any quick select.
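A minimal sketch of the corrected computation (hypothetical simplified
types, not the actual optimizer code):
```
#include <algorithm>
#include <cstdint>
#include <vector>

using ha_rows= uint64_t;

/* opt_range_condition_rows must not exceed the row estimate of ANY
   quick range select, not only the LooseScan plan's own estimate. */
ha_rows opt_range_condition_rows(ha_rows table_records,
                                 const std::vector<ha_rows> &quick_rows)
{
  ha_rows rows= table_records;
  for (ha_rows r : quick_rows)          // minimum over all quick selects
    rows= std::min(rows, r);
  return rows;
}
```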
Approved-by: Monty <monty@mariadb.org>
The code in choose_best_splitting() assumed that the join prefix is
in join->positions[].
This is not necessarily the case. This function might be called when
the join prefix is in join->best_positions[], too.
Fixed by following the approach of best_access_path(), which calls this
function: pass the current join prefix as an argument,
"const POSITION *join_positions", and use that.
This bug could affect queries containing a subquery over splittable derived
tables and having outer references in its WHERE clause. If such a subquery
contained an equality condition whose left part was a reference to a column
of the derived table and the right part referred only to outer columns,
then the server crashed in the function st_join_table::choose_best_splitting().
The crashing code was added in the commit ce7ffe61d8
that made the code of the function sensitive to the presence of the flag
OUTER_REF_TABLE_BIT in the KEYUSE_EXT::needed_in_prefix fields.
The field needed_in_prefix of the KEYUSE_EXT structure should not contain
table maps with OUTER_REF_TABLE_BIT or RAND_TABLE_BIT.
Note that this fix is quite conservative: for affected queries it just
returns the query plans that were used before the above mentioned commit.
In fact the equalities causing crashes should be pushed into derived tables
without any usage of split optimization.
Approved by Sergei Petrunia <sergey@mariadb.com>
EXPLAIN EXTENDED should always print the field item used in the left part
of an equality expression from the SET clause of an update statement as a
reference to a table column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The problem was that when JOIN_TAB::remove_duplicates() noticed there
can only be one possible row in the output, it adjusted limits but
didn't take into account any possible offset.
Fixed by not adjusting the limit offset when setting the one-row limit.
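A minimal model of the corrected adjustment (hypothetical names and
types):
```
#include <cstdint>

struct LimitModel
{
  uint64_t select_limit;
  uint64_t offset;
};

/* Only one row can possibly be produced, so cap the limit at one row,
   but keep the offset: with OFFSET 1 even that single row must be
   skipped. */
void set_one_row_limit(LimitModel *lim)
{
  lim->select_limit= 1;
  /* the bug also zeroed lim->offset here; the fix leaves it alone */
}
```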
The reason for the ASAN report was that the MERGE and MyISAM files
had different key definitions, which is not allowed.
Fixed by ensuring that the MERGE code does not copy more key stats
than what is in the MyISAM file.
Other things:
- Give an error if different MyISAM files have different numbers of
key parts.
When a query does implicit grouping and the join operation produces an
empty result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.
What happens is that end_send_group() is called with a
const row but without any rows matching the WHERE clause.
This last part is shown by 'join->first_record' not being set.
This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.
The fix used is to produce NULL-complemented records for constant tables
as well. Also, the constant table's records are reset back in case we're
in a subquery which may get re-executed.
An alternative fix would have item->no_rows_in_result() also work
with Item_field objects.
There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
maybe_null, which is required if the table rows should be regarded as
null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
didn't take into account all usage of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
function.
- join->clear() does not pass a table_map argument to clear_tables(),
which causes it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
was before clear_tables().
The main bug fix was to always use a table_map argument to clear_tables() and
to always use join->clear() and clear_tables() together with unclear_tables()
(see the model after the list of other fixes below).
Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
is called.
- Removed an unused argument from setup_end_select_func().
- More code comments
- Ensure that end_send_group() modifies the same fields as are in the
result set.
- Changed return_zero_rows() to use pointers instead of references,
similar to the rest of the code.
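A self-contained model of the clear_tables()/unclear_tables() pairing
described above (hypothetical simplified types, not the real JOIN code):
```
using table_map= unsigned long long;

struct TableModel { bool null_row; };

/* NULL-complement every table (constant tables included) and record
   exactly which ones were changed. */
void clear_tables(TableModel *tables, int n, table_map *cleared)
{
  for (int i= 0; i < n; i++)
    if (!tables[i].null_row)
    {
      tables[i].null_row= true;
      *cleared|= 1ULL << i;
    }
}

/* Restore only what clear_tables() touched, so a subquery that gets
   re-executed sees the constant tables' rows again. */
void unclear_tables(TableModel *tables, int n, table_map cleared)
{
  for (int i= 0; i < n; i++)
    if (cleared & (1ULL << i))
      tables[i].null_row= false;
}
```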
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The problem, introduced in the patch for MDEV-26301:
When check_join_cache_usage() decides not to use the join buffer, it must
adjust the access method accordingly. For BNL-H joins this means switching
from the pseudo "ref access" (with index=MAX_KEY) to some other access method.
Failing to do this causes assertions down the line when code that is
not aware of BNL-H tries to initialize index use for ref access with
index=MAX_KEY.
The fix is to follow the regular code path to disable the join buffer for
the join_tab ("goto no_join_cache") instead of just returning from
check_join_cache_usage().
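The control flow, as a hedged standalone model (hypothetical simplified
function, not the real check_join_cache_usage()):
```
const unsigned MAX_KEY_MODEL= 255;

/* Returning early would leave index == MAX_KEY behind for a join_tab
   that is no longer a BNL-H candidate; the shared no_join_cache path
   also resets the access method. */
unsigned check_join_cache_usage_model(bool can_use_buffer, unsigned *index)
{
  if (!can_use_buffer)
    goto no_join_cache;         // NOT a plain "return 0"
  return 1;                     // some join cache level was chosen

no_join_cache:
  if (*index == MAX_KEY_MODEL)
    *index= 0;                  // drop the pseudo "ref access"
  return 0;
}
```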
select_insert::store_values() must reset the has_value_set bitmap
before every row, just like mysql_insert() does,
because ON DUPLICATE KEY UPDATE and triggers modify it.
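A minimal model of the per-row reset (hypothetical simplified types):
```
#include <bitset>

struct InsertTarget
{
  std::bitset<64> has_value_set;   // fields that got explicit values

  void store_values_for_row()
  {
    /* Reset before EVERY row: triggers and ON DUPLICATE KEY UPDATE may
       have set bits while processing the previous row. */
    has_value_set.reset();
    /* ... fill the fields of the current row ... */
  }
};
```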
This patch optimizes the number of refills of the lateral derived table
to which a materialized derived table subject to split optimization is
converted. This optimized number of refills is now considered as the
expected number of refills of the materialized derived table when searching
for the best possible splitting of the table.
Variant #2.
When Histogram::point_selectivity() sees that the point value of interest
falls into one bucket, it tries to guess whether the bucket has many
different (unpopular) values or a few popular values. (The number of
rows is fixed, as it's a Height-balanced histogram).
The basis for this guess is the "width" of the value range the bucket
covers. Buckets covering wider value ranges are assumed to contain
values with proportionally lower frequencies.
This is just [brave] guesswork. For a very narrow bucket, it may
produce an estimate that's larger than the total #rows in the bucket
or even in the whole table.
Remove the guesswork and replace it with basic logic: return
either the per-table average selectivity of col=const or the
selectivity of one bucket, whichever is lower.
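The replacement estimate, as a one-line standalone model (hypothetical
function name):
```
#include <algorithm>

/* No width-based guessing: return whichever defensible bound is lower. */
double point_selectivity_model(double avg_sel_col_eq_const,
                               double one_bucket_selectivity)
{
  return std::min(avg_sel_col_eq_const, one_bucket_selectivity);
}
```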
Fix-up for commit 476b24d084
Author: Monty
Date: Thu Feb 16 14:19:33 2023 +0200
MDEV-20057 Distinct SUM on CROSS JOIN and grouped returns wrong result
which missed initializing sorder->suffix_length.
In this commit the initialization is implemented by passing the
MY_ZEROFILL flag to the allocation of the SORT_FIELD elements.
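A standalone model of the zero-filled allocation, with calloc() standing
in for my_malloc() with MYF(MY_ZEROFILL); names are hypothetical:
```
#include <cstdlib>

struct SortFieldModel { unsigned suffix_length; /* ... */ };

/* Zero-fill at allocation time so members such as suffix_length are
   never read uninitialized. */
SortFieldModel *alloc_sort_fields(size_t count)
{
  return static_cast<SortFieldModel*>(
      calloc(count, sizeof(SortFieldModel)));
}
```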
This bug could manifest itself at the first execution of a prepared statement
created for queries using a materialized view defined as a union. A crash
was certain to happen if the query contained a condition pushable into
the view and this condition was over a column defined via a complex string
expression requiring implicit conversion from one charset to another for
some of its sub-expressions. The bug could also cause crashes when executing
PS for some other queries whose optimization needed building clones of
such expressions.
This bug was introduced in the patch for MDEV-29988 where the class
Item_direct_ref_to_item was added. The implementations of the virtual
methods get_copy() and build_clone() were invalid for the class and this
could cause crashes after the method build_clone() was called for
expressions containing objects of the Item_direct_ref_to_item type.
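A hedged standalone model of the bug class (hypothetical simplified
hierarchy, not the real Item code):
```
struct ItemModel
{
  virtual ItemModel *get_copy() const { return new ItemModel(*this); }
  virtual ~ItemModel() = default;
};

struct DirectRefModel : ItemModel
{
  ItemModel *ref= nullptr;

  /* Without a valid override, cloning produces an object of the wrong
     type and later uses of the clone crash; the fix gives the class
     correct get_copy()/build_clone() implementations. */
  ItemModel *get_copy() const override { return new DirectRefModel(*this); }
};
```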
Approved by Sergei Golubchik <serg@mariadb.com>
This bug caused server crash when processing a multi-update statement that
used views if optimizer tracing was enabled.
The bug was introduced in the patch for MDEV-30539 which could incorrectly
detect the topmost selects of queries when views were used in them.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
A GROUP BY query which uses "MIN(pk)" and has "pk<>const" in the
WHERE clause would produce a wrong result when handled with "Using index
for group-by". Here the "pk" column is the table's primary key.
The problem was introduced by the fix for MDEV-23634. It made the range
optimizer not produce ranges for conditions of the form "pk != const".
However, the LooseScan code requires that the optimizer is able to
convert the condition on the MIN/MAX column into an equivalent range.
The range is used to locate the row that has the MIN/MAX value.
LooseScan checks this in check_group_min_max_predicates(). This fix
makes the code in that function take into account that "pk != const"
does not produce a range.
This bug could affect multi-update statements as well as single-table
update statements processed as multi-updates when the where condition
contained a range condition over a non-indexed varchar column. The
optimizer calculates the selectivity of such range conditions using
histograms. For each range, the buckets containing the endpoints of the
range are determined with a procedure that stores the values of the
endpoints in the space of the record buffer where values of the columns
are usually stored. For a range over a varchar column the value of an
endpoint may exceed the size of the buffer, and in such a case the value
is stored truncated. This truncation cannot affect the result of the
calculation of the range selectivity, as the calculation employs only the
beginning of the value string. However, it can trigger an unexpected
truncation error when an update statement is processed.
This patch suppresses truncation messages when the selectivity of a range
condition is calculated for a non-indexed column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Adding a new argument "flag" to MY_COLLATION_HANDLER::strnncollsp_nchars()
and a flag MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES.
The flag defines whether strnncollsp_nchars() should emulate trailing
spaces that were possibly trimmed earlier (e.g. in InnoDB CHAR compression).
This is important for NOPAD collations.
For example, with this input:
- str1= 'a ' (Latin letter a followed by one space)
- str2= 'a  ' (Latin letter a followed by two spaces)
- nchars= 3
if the flag is given, strnncollsp_nchars() will virtually restore
one trailing space to str1 up to nchars (3) characters and compare two
strings as equal:
- str1= 'a  ' (one extra trailing space emulated)
- str2= 'a  ' (as is)
If the flag is not given, strnncollsp_nchars() does not add trailing
virtual spaces, so in case of a NOPAD collation, str1 will be compared
as less than str2 because it is shorter.
- Field_string::cmp_prefix() now passes the new flag.
Field_varstring::cmp_prefix() and Field_blob::cmp_prefix() do
not pass the new flag.
- The branch in cmp_whole_field() in storage/innobase/rem/rem0cmp.cc
(which handles the CHAR data type) now also passes the new flag.
- Fixing UCA collations to respect the new flag.
Other collations are possibly also affected; however,
I had no success in making an SQL script demonstrating the problem.
Other collations will be extended to respect this flag in a separate
patch later.
- Changing the meaning of the last parameter of Field::cmp_prefix()
from "number of bytes" (internal length)
to "number of characters" (user visible length).
The code calling cmp_prefix() from handler.cc was wrong.
After this change, the call in handler.cc became correct.
The code calling cmp_prefix() from key_rec_cmp() in key.cc
was adjusted according to this change.
- The old strnncollsp_nchars() related tests in unittest/strings/strings-t.c
now pass the new flag.
A few new tests were also added, without the flag.
The test fails sporadically and very rarely on this:
```
let $org_queries= `SHOW STATUS LIKE 'Queries'`;
SELECT f1();
CALL p1();
let $new_queries= `SHOW STATUS LIKE 'Queries'`;
let $diff= `SELECT SUBSTRING('$new_queries',9)-SUBSTRING('$org_queries',9)`;
```
if the COM_QUIT from one of the earlier (in the test) disconnects
happens between the two SHOW STATUS commands,
because COM_QUIT increments "Queries".
The directly preceding test uses wait_condition to wait for
its disconnects to complete. But there are more disconnects earlier
in the test file and nothing waits for them.
Let's change the wait_condition to wait for *all* disconnects to complete.
When the LEFT() function is used with a string that has no character set,
the function crashes. This is because the function assumes that
the string has a charset, and tries to use it to calculate the
length of the string.
Two functions, UNHEX and WEIGHT_STRING, returned a string whose
charset was not set to a non-null value.
The fix is to set the charset when val_str is called on these two functions.
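A hedged standalone model of the invariant (hypothetical simplified
types; the real fix sets the charset on the String result inside the
two val_str() implementations):
```
#include <cstddef>

struct CharsetModel { unsigned mbmaxlen; };
static const CharsetModel binary_cs{1};

struct StringModel
{
  const char *ptr= nullptr;
  size_t length= 0;
  const CharsetModel *cs= nullptr;   // LEFT() uses this to count characters
  void set_charset(const CharsetModel *c) { cs= c; }
};

/* val_str() must never return a string with a null charset pointer. */
StringModel *val_str_unhex_model(StringModel *to)
{
  /* ... decode the hex input into to->ptr / to->length ... */
  to->set_charset(&binary_cs);       // the fix, in spirit
  return to;
}
```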
Reviewed-by: Alexander Barkov <bar@mariadb.com>
Reviewed-by: Daniel Black <daniel@mariadb.org>
Problem:
UNIX_TIMESTAMP() called for an expression of the TIME data type
returned NULL.
Inside Type_handler_timestamp_common::Item_val_native_with_conversion
the call to item->get_date() did not convert TIME to DATETIME
automatically (because it does not have to, by design).
As a result, Type_handler_timestamp_common::TIME_to_native() received
a MYSQL_TIME value with zero date 0000-00-00 and therefore returned "true"
(indicating SQL NULL value).
Fix:
Removing the call to item->get_date().
Instantiating Datetime(item) instead.
This forces automatic TIME to DATETIME conversion
(unless @@old_mode is zero_date_time_cast).
EXPLAIN EXTENDED for an UPDATE/DELETE/INSERT/REPLACE statement did not
produce the warning containing the text representation of the query
obtained after the optimization phase. Such a warning was produced for
SELECT statements, but not for DML statements.
The patch fixes this defect of EXPLAIN EXTENDED for DML statements.
mtr uses the group suffix, but some existing inc and test files use
server_id for expect file names. This patch aims to fix that.
For spider:
With this change we will not have to maintain a separate version of
restart_mysqld.inc for spider that duplicates code just because
spider tests use different names for expect files and shutdown_mysqld
requires magical names for them.
With this change spider tests will also be able to use other features
provided by restart_mysqld.inc without code duplication, like the
parameter $restart_parameters (see e.g. the testcase mdev_29904.test
in commit ef1161e5d4f).
Tests run after this change: default, spider, rocksdb, galera, using
the following commands:
```
mtr --parallel=auto --force --max-test-fail=0 --skip-core-file
mtr --suite spider,spider/*,spider/*/* \
    --skip-test="spider/oracle.*|.*/t\..*" --parallel=auto --big-test \
    --force --max-test-fail=0 --skip-core-file
mtr --suite galera --parallel=auto
mtr --suite rocksdb --parallel=auto
```