A defect in the subquery substitution code may lead to a server crash:
setting the substitution's name should be followed by setting its length
(to keep them in sync).
The result of materializing the right part of an IN subquery predicate
is placed into a temporary table. Each row of the materialized table is
distinct. A unique key over all fields of the temporary table is defined and
created. It allows performing key look-ups into the table.
The table created for a materialized subquery can be accessed by key like
any other table. The function best_access_path searches for the best access
method to join a table to a given partial join. With some WHERE conditions
this function considers the possibility of a ref_or_null access. If such an
access employs the unique key on the temporary table, then, when estimating
the cost of this access, the function tries to use the array rec_per_key. Yet
such an array is not built for this unique key. This causes the server to crash.
Rows returned by the subquery that contain NULLs don't have to be placed
into the temporary table, as they cannot match any row produced by the
left part of the subquery predicate. So all fields of the temporary table
can be defined as non-nullable. In this case any ref_or_null access
to the temporary table makes no sense, and there is no point in estimating
such an access.
The fix makes sure that the temporary table for a materialized IN subquery
is defined with columns that are all non-nullable. It also ensures that
any row with NULLs returned by the subquery is not placed into the
temporary table.
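As an illustration (hypothetical tables, not the original test case), for a
top-level IN predicate a NULL produced by the subquery can never turn a
non-matching outer row into a matching one, so it can safely be left out of
the materialized table:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1), (NULL);
  # the NULL in t2.b cannot match any t1.a, and at the top level of WHERE
  # an UNKNOWN result rejects the row just like FALSE does, so only the
  # value 1 matters for the materialized table:
  SELECT a FROM t1 WHERE a IN (SELECT b FROM t2);   # returns only a = 1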
The function subselect_uniquesubquery_engine::copy_ref_key has to take into
account that when EXPLAIN is processed the array of store_key objects created
for any TABLE_REF may contain elements for constant items. These items should
be ignored by the function.
Completed the fix for this bug.
Note: in 5.3 the affected 'if' statement in Item_in_subselect::single_value_transformer()
starting with the condition (thd->variables.sql_mode & MODE_ONLY_FULL_GROUP_BY)
should be removed altogether. The change in table.cc is not needed either.
This is because in 5.3
- min/max transformations for subqueries are done at the optimization phase
- evaluation of expensive subqueries is done at the execution phase.
Added an EXPLAIN EXTENDED to the test case for bug #12329653.
The MIN/MAX optimizer code in the function opt_sum_query erroneously
did not take into account conjunctive conditions that did not depend on
any table, yet were not identified as constant items. These could be
items containing rand() or PS/SP parameters. Such items are supposed
to be evaluated at the execution phase. That's why, if such conditions
can be extracted from the WHERE condition, the MIN/MAX optimization is
not applied, as currently it is always performed at the optimization phase.
(In 5.3 expensive subqueries are also evaluated only at the execution
phase. So, if a constant condition with such a subquery can be extracted
from the WHERE clause, the MIN/MAX optimization should not be applied
in 5.3.)
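For illustration (hypothetical table t1, not the original test case), a
conjunct such as rand() < 0.5 depends on no table yet is not a constant
item, so the MIN/MAX shortcut cannot be applied at the optimization phase:

  CREATE TABLE t1 (a INT, KEY(a));
  # the rand() conjunct must be evaluated at execution time, so MIN(a)
  # must not be replaced with a precomputed constant by opt_sum_query:
  SELECT MIN(a) FROM t1 WHERE rand() < 0.5;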
If an IN/ALL/SOME predicate with a constant left part is transformed
into an EXISTS subquery, the resulting subquery should not be considered
uncacheable if the right part of the predicate is not uncacheable.
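For example (hypothetical tables), the left part below is the constant 3,
so the transformed subquery can still be evaluated once and cached:

  # 3 IN (SELECT a FROM t2) is roughly rewritten to
  # EXISTS (SELECT 1 FROM t2 WHERE t2.a = 3); since neither the constant
  # left part nor the right part depends on the outer query, the subquery
  # result does not have to be re-evaluated for every row of t1:
  SELECT * FROM t1 WHERE 3 IN (SELECT a FROM t2);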
Backported the function dbug_print_item() from 5.3. The function is used
only for debugging.
The patch differs from the original MySQL patch as follows:
- All test case differences have been reviewed one by one, and
care has been taken to restore the original plan so that each
test case executes the code path it was designed for.
- A bug was found and fixed in MariaDB 5.3 in
Item_allany_subselect::cleanup().
- ORDER BY is not removed because we are unsure of all effects,
and it would prevent enabling ORDER BY ... LIMIT subqueries.
- ref_pointer_array.m_size is not adjusted because we don't do
array bounds checking, and because it looks risky.
Original comment by Jorgen Loland:
-------------------------------------------------------------
WL#5953 - Optimize away useless subquery clauses
For IN/ALL/ANY/SOME/EXISTS subqueries, the following clauses are
meaningless:
* ORDER BY (since we don't support LIMIT in these subqueries)
* DISTINCT
* GROUP BY if there is no HAVING clause and no aggregate
functions
This WL detects and optimizes away these useless parts of the
query during JOIN::prepare().
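For example (hypothetical tables), all of the marked clauses below can be
removed without changing the result of the predicate:

  SELECT * FROM t1
  WHERE t1.a IN (SELECT DISTINCT b        # DISTINCT is redundant for IN
                 FROM t2
                 GROUP BY b               # no HAVING and no aggregates
                 ORDER BY b);             # ORDER BY has no effect here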
- Part 1 of the fix: for semi-join merged subqueries, postpone calling
child_join->optimize() until we're done with all PS-lifetime
optimizations in the parent.
The problem was that for a single-row subquery that returns no rows, the
Item_cache objects that represent the result row were not NULL, and when
requested via element_index() they returned random values.
The fix is to set all Item_cache objects to NULL before executing the
query (in the reset() method), which guarantees a NULL value for the whole
query result or for any of its elements, however they are requested, when
no rows were found.
A set_null() method was added to Item_cache to guarantee a correct NULL
value when the cache is reset.
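For example (hypothetical table t1 with columns a and b), both the scalar
and the row form of an empty single-row subquery must behave as NULL,
however their elements are accessed:

  CREATE TABLE t1 (a INT, b INT);
  # t1 is empty, so both results must be NULL, never junk read from an
  # uninitialized cache:
  SELECT (SELECT a FROM t1);
  SELECT ROW(1, 2) = (SELECT a, b FROM t1);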
The patch also fixes an unrelated compiler warning.
Analysis:
The temporary table created during SJ-materialization
might be used for sorting for a group by operation. The
sort buffers for this internal temporary table were not
cleared by the execution code after each subquery
re-execution. This resulted in a memory leak detected
by valgrind.
Solution:
Clean up the sort buffers for the semi-join tables as well.
sql/item_subselect.cc:
- Fix a compiler warning and add logic to revert to table
scan partial match when there are more rows in the materialized
subquery than there can be bits in the NULL bitmap index used
for partial matching.
sql/opt_subselect.cc:
- Fixed a memory leak detected by valgrind
Analysis:
The bug is a result of an incomplete fix for bug lp:869036.
That fix didn't take into account that there may be a case
when there are no NULLs in the materialized subquery, yet
not all columns without NULLs are grouped in the only
non-NULL index. This is the case when the left subquery expression
has nullable columns.
Solution:
The patch handles two missing sub-cases of the case when there are
no value (non-NULL) matches for any outer expression, and there are
both NULLs and non-NULL values in the outer reference; an example
is given after the list.
a) If the materialized subquery contains no NULLs, there cannot be a
partial match, because there are no NULLs in those columns where
the outer reference has no NULLs.
b) If the materialized subquery contains NULLs, but there exists a
column such that its corresponding outer expression has no NULL
and this column also has no NULLs, then there cannot be a partial
match either.
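An example of sub-case a) (hypothetical table t2): the left operand
contains a NULL, there is no non-NULL match, and the materialized subquery
contains no NULLs, so the predicate is FALSE rather than UNKNOWN:

  CREATE TABLE t2 (x INT, y INT);
  INSERT INTO t2 VALUES (10, 20), (30, 40);
  # no row has y = 1 and t2 contains no NULLs at all, so there can be no
  # partial match and the result is 0 (FALSE), not NULL:
  SELECT (NULL, 1) IN (SELECT x, y FROM t2);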
Apart from the fix, the patch also adds a few more unrelated test
cases for partial matching, and fixes a few typos.
Analysis:
This bug uncovered that partial matching via rowid intersection
didn't handle the case when:
- the left IN argument has some NULLs,
- there are no non-null value matches, and there is no non-null
column,
- the subquery columns that are not covered with the NULLs in
the left IN argument contain at least one row, such that it
has NULL values in all columns where the left IN operand has
no NULLs.
In this case there is a partial match.
In addition, the analysis of the related code uncovered incorrect
handling of a few other related cases.
Solution:
The solution for the bug is to check whether there exists a row with
NULLs in all columns other than the ones having NULL in the
left IN operand.
The check is implemented by testing whether the bitmaps that
store NULL information in class Ordered_key have a non-empty
intersection for the relevant columns.
The intersection itself is implemented via the function
bitmap_exists_intersection() in my_bitmap.c.
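To illustrate the case from the Analysis above (hypothetical table t2):
the left operand is (NULL, 1), there is no non-NULL match in column y,
every subquery column contains a NULL somewhere, and the row (5, NULL) has
NULL exactly where the left operand is not NULL, so there is a partial
match:

  CREATE TABLE t2 (x INT, y INT);
  INSERT INTO t2 VALUES (NULL, 2), (5, NULL);
  # the row (5, NULL) makes the row comparison UNKNOWN rather than FALSE,
  # so the predicate must evaluate to NULL, not 0:
  SELECT (NULL, 1) IN (SELECT x, y FROM t2);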
In MariaDB, when running in ONLY_FULL_GROUP_BY mode,
the server produced an incorrect error message saying that there
is an aggregate function without GROUP BY, for the artificially
created MIN/MAX functions during subquery MIN/MAX optimization.
The fix introduces a way to distinguish between artificially
created MIN/MAX functions that result from a rewrite, and normal
ones present in the query. The test for an ONLY_FULL_GROUP_BY violation
now additionally checks whether a MIN/MAX function was part of a MIN/MAX
subquery rewrite.
In order to be able to distinguish these MIN/MAX functions, the
patch introduces an additional flag in Item_in_subselect::in_strategy -
SUBS_STRATEGY_CHOSEN. This flag is set when the optimizer makes its
final choice of a subquery strategy. In order to make the choice
consistent, access to Item_in_subselect::in_strategy is provided
via new class methods.
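For example (hypothetical tables), the rewrite of the ANY predicate below
injects a MIN() function that is not present in the user's query, and it
must not trigger the ONLY_FULL_GROUP_BY check:

  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  # the > ANY predicate is rewritten to a comparison with MIN(b); that
  # MIN() is artificially created and must not raise the "aggregate
  # function without GROUP BY" error:
  SELECT a FROM t1 WHERE a > ANY (SELECT b FROM t2);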
******
Fix MySQL BUG#12329653
Analysis:
Equality propagation propagated the constant '7' into
args[0] of the Item_in_optimizer that stands for the
"< ANY" predicate. At the same time the min/max subquery
rewrite swapped the order of the left and right operands
of the "<" predicate, but used Item_in_subselect::left_expr.
As a result, when the < ANY predicate was executed early in the
execution phase as a constant condition, instead of a constant
right (swapped) argument of the < predicate, there was a field
(t3.a). This field had no data, since the whole predicate is
considered constant and is evaluated before any tables are
read. Having junk in the field's row buffer produced a wrong result.
Solution:
Fix create_swap to pick the correct Item_in_optimizer left
argument.
The problem was that merged views have their own nest_level numbering, so
when we compare nest levels we should take into consideration the basis (i.e. the 0 level);
if the bases differ, the nest levels are not comparable.
This bug happened for queries over multi-table mergeable views
because the bitmaps TABLE::read_set of the underlying tables were not
updated after the views had been merged into the query.
Now these bitmaps are updated properly.
Also the bitmap TABLE::merge_keys is now updated to prevent
future bugs.
This bug happened due to the incompleteness of the fix for bug 872735:
the occurrences of fields in the conditions of correlated
subqueries were not taken into account when recalculating
the bitmaps of covering keys.
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. Don't forget to my_ok() it.
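For example (hypothetical table t1), the second statement creates nothing
but must still report success to the client:

  CREATE TABLE IF NOT EXISTS t1 (a INT);
  CREATE TABLE IF NOT EXISTS t1 (a INT);   # a no-op with a warning, not an error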
sql/sql_table.cc:
small cleanup
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it could lead to a crash.
The problem is the following:
An attempt to obtain table statistics for a subselect table in a killed query
fails with an error, so JOIN::optimize() for the subquery fails, but
this does not prevent further subquery evaluation.
At the first subquery execution JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE, which prevents
further subquery evaluation. At the second call
JOIN::optimize() does not happen, as 'JOIN::optimized' is TRUE,
and in case of an uncacheable subquery the 'executed' flag is set
to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and for an
indication of subquery execution. This seems wrong,
as the flag can be reset.
mysql-test/r/error_simulation.result:
test case
mysql-test/t/error_simulation.test:
test case
sql/item_subselect.cc:
Added a new flag, subselect_single_select_engine::optimize_error,
which is used to detect errors that happen at the optimize
stage.
sql/item_subselect.h:
Added the new flag subselect_single_select_engine::optimize_error.
sql/sql_select.cc:
test case
The problem: in an uncorrelated subquery, JOIN execution code may make
irreversible cleanups after it has been executed (because the subquery
won't be executed again). In particular, JTBM nests and their injected
IN-equalities will be invalidated (the materialized table will be dropped
and TABLE structure freed).
Solution: don't walk into an Item_subselect if it represents an uncorrelated
subselect that has already been executed. The idea is that the subselect
will not be executed again (calling Item_subselect_xxx::val_int() will fetch
the cached value), so it should not matter what is inside the subselect.
Analysis:
The cause of the bug was that the method
subselect_rowid_merge_engine::partial_match()
was not designed for re-execution within the
same query. Specifically, it didn't clean up
the bitmap of matching keys after completion.
The test query requires double execution of
the IN predicate because the predicate is first
checked as a constant condition. The second
evaluation, during regular execution, used the
bitmap of matching keys produced by the first one
instead of starting with a clean one.
Solution:
Clean up the bitmap of matching keys at the end of
the partial matching procedure.
Analysis:
The cause of the bug was the changed meaning of
subselect_partial_match_engine::has_covering_null_row.
Previously it meant that there is a row with NULLs in
all nullable fields of the materialized subquery table.
Later it was changed to mean a row with NULLs in all
fields of this table.
At the same time there was a shortcut in
subselect_rowid_merge_engine::partial_match() that
detected a special case where:
- there is no match in any of the columns with NULLs, and
- there is no NULL-only row that covers all columns with
NULLs.
With the change in the meaning of has_covering_null_row,
the condition that detected this special case was incomplete.
This resulted in an incorrect FALSE, when the result was a
partial match.
Solution:
Expand the condition that detected the special case with the
correct test for the existence of a row with NULL values in
all columns that contain NULLs (a kind of partially covering
NULL row).
- Provide a fix_after_pullout() function for Item_in_optimizer and other Item_XXX classes (basically, all of them
that have eval_not_null_tables, which means they have special rules for calculating the not_null_tables_cache value)