Analysis:
Queries with implicit grouping (there is an aggregate function, but no
GROUP BY) follow somewhat non-obvious semantics in the case of an empty
result set. Aggregate functions produce a special "natural" value depending
on the function: for instance, MIN/MAX return NULL and COUNT returns 0.
The complexity comes from non-aggregate expressions in the select list.
If the non-aggregate expression is a constant, it can be computed, so
we should return its value. However, if the expression is non-constant
and depends on columns from the empty result set, then the only meaningful
value is NULL.
The cause of the wrong result was that, for subqueries, the optimizer didn't
distinguish between constant and non-constant ones in the case of an
empty result with implicit grouping.
Solution:
In all implementations of Item_subselect::no_rows_in_result() check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping; instead, let it be evaluated.
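A minimal sketch of the check described in the solution (Item_singlerow_subselect is
only one of the affected classes, and the helper below is a hypothetical name used
for illustration):
  void Item_singlerow_subselect::no_rows_in_result()
  {
    /*
      A constant subquery can still be evaluated even when the outer query
      produced no rows, so leave it alone and let it be evaluated.
      Only non-constant subqueries get the implicit-grouping default (NULL).
    */
    if (const_item())
      return;
    set_default_value_for_empty_result();   /* hypothetical helper */
  }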
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it set the unit and select_lex
uncacheable flag to UNCACHEABLE_DEPENDENT_INJECTED unconditionally.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
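A rough sketch of the check (the exact condition and its placement in the
injection code are assumptions; 'injected_cond' is an illustrative name):
  /* After the IN->EXISTS predicate(s) have been injected: */
  if (injected_cond->used_tables() & OUTER_REF_TABLE_BIT)
  {
    /* The injected condition really refers to the outer query,
       so the subquery must be re-evaluated per outer row. */
    select_lex->uncacheable|= UNCACHEABLE_DEPENDENT_INJECTED;
    select_lex->master_unit()->uncacheable|= UNCACHEABLE_DEPENDENT_INJECTED;
  }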
The patch re-enables constant subquery execution during
query optimization; it was disabled during the development
of MWL#89 (cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).
The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive.
The approach is as follows:
- Constant subqueries are recursively optimized at the beginning of
JOIN::optimize of the outer query, by the new method
JOIN::optimize_constant_subqueries(). This is done so that the cost
of executing these subqueries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
the optimizer may request execution of non-expensive constant subqueries.
Each place where the optimizer may potentially execute an expensive
expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended
to use the number of examined rows (estimated by the optimizer) to
determine whether the subquery is expensive or not (see the sketch
after this list).
- The new system variable "expensive_subquery_limit" controls how many
examined rows are still considered inexpensive. The default is 100.
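A sketch of the extended test (member names such as get_examined_rows() and the
system-variable field are assumptions, not the literal implementation):
  bool Item_subselect::is_expensive()
  {
    double examined_rows= 0;
    for (SELECT_LEX *sl= unit->first_select(); sl; sl= sl->next_select())
    {
      JOIN *cur_join= sl->join;
      if (!cur_join || !cur_join->optimized)
        return true;                               /* no plan yet => expensive */
      examined_rows+= cur_join->get_examined_rows();  /* optimizer's estimate  */
    }
    /* Cheap enough to execute during optimization of the outer query? */
    return examined_rows > thd->variables.expensive_subquery_limit;
  }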
In addition, multiple changes were needed to make this solution work
with the changes made by MWL#89. These changes fix various crashes,
wrong results, and legacy bugs discovered during development.
make sure that stored routines are evaluated (that is, de facto cached) in convert_const_to_int().
revert the fix for lp:806943 because it can no longer be reproduced.
add a few tests for convert_const_to_int().
- After the exec_const_cond->val_int() call, check for errors and return.
(If we don't, we will eventually hit an error when trying to set an OK status in
the diagnostics area, which already has an error status.)
fixed several defects in the greedy optimization (a sketch of the first fix follows the list):
1) The greedy optimizer calculated the 'compare-cost' (CPU-cost)
for iterating over the partial plan result at each level in
the query plan as 'record_count / (double) TIME_FOR_COMPARE'
This cost was only used locally for 'best' calculation at each
level, and *not* accumulated into the total cost for the query plan.
This fix added the 'CPU-cost' of processing 'current_record_count'
records at each level to 'current_read_time' *before* it is used as
'accumulated cost' argument to recursive
best_extension_by_limited_search() calls. This ensured that the
cost of a huge join-fanout early in the QEP was correctly
reflected in the cost of the final QEP.
To get identical cost for a 'best' optimized query and a
straight_join with the same join order, the same change was also
applied to optimize_straight_join() and get_partial_join_cost()
2) Furthermore, to get equal cost for a 'best' optimized query and a
straight_join, the new code subtracts the same '0.001' in
optimize_straight_join() as was already done in
best_extension_by_limited_search().
3) When best_extension_by_limited_search() collected the 'best' plan, a
plan was considered 'best' by the check:
'if ((search_depth == 1) || (current_read_time < join->best_read))'
The term '(search_depth == 1)' incorrectly caused a new best plan to be
collected whenever the specified 'search_depth' was reached - even if
this partial query plan was more expensive than what we had already
found.
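A sketch of the change described in (1), using the variable names from the
description (not the literal diff):
  current_record_count= record_count * position->records_read;
  current_read_time=    read_time + position->read_time +
                        current_record_count / (double) TIME_FOR_COMPARE;
  /*
    current_read_time now includes the CPU cost of processing the fanout at
    this level, so the cost of a huge fanout early in the QEP is carried
    into the accumulated cost of the deeper levels, instead of being used
    only for the local 'best' comparison.
  */
  best_extension_by_limited_search(join,
                                   remaining_tables & ~real_table_bit,
                                   idx + 1,
                                   current_record_count,
                                   current_read_time,
                                   search_depth - 1,
                                   prune_level);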
The patch differs from the original MySQL patch as follows:
- All test case differences have been reviewed one by one, and
care has been taken to restore the original plan so that each
test case executes the code path it was designed for.
- A bug was found and fixed in MariaDB 5.3 in
Item_allany_subselect::cleanup().
- ORDER BY is not removed because we are unsure of all effects,
and it would prevent enabling ORDER BY ... LIMIT subqueries.
- ref_pointer_array.m_size is not adjusted because we don't do
array bounds checking, and because it looks risky.
Original comment by Jorgen Loland:
-------------------------------------------------------------
WL#5953 - Optimize away useless subquery clauses
For IN/ALL/ANY/SOME/EXISTS subqueries, the following clauses are
meaningless:
* ORDER BY (since we don't support LIMIT in these subqueries)
* DISTINCT
* GROUP BY if there is no HAVING clause and no aggregate
functions
This WL detects and optimizes away these useless parts of the
query during JOIN::prepare().
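A rough sketch of the removal (the guard that detects the subquery predicate
context is omitted; member names are from st_select_lex, but the exact code in
the WL differs):
  /* In JOIN::prepare() of an IN/ALL/ANY/SOME/EXISTS subquery: */
  select_lex->order_list.empty();             /* ORDER BY cannot matter        */
  select_lex->options&= ~SELECT_DISTINCT;     /* neither can DISTINCT          */
  if (!select_lex->with_sum_func && !select_lex->having)
    select_lex->group_list.empty();           /* GROUP BY w/o HAVING/aggregates */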
If the optimizer switch 'semijoin_with_cache' is set to 'off', then the
join cache cannot be used to join the inner tables of a semijoin.
Also fixed a bug in the function check_join_cache_usage() that led
to wrong EXPLAIN output for some test cases.
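A sketch of the new check in check_join_cache_usage() (the optimizer_switch
flag constant is an assumed name):
  if (tab->emb_sj_nest &&
      !optimizer_flag(join->thd, OPTIMIZER_SWITCH_SEMIJOIN_WITH_CACHE))
    goto no_join_cache;   /* inner table of a semijoin: join cache forbidden */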
Analysis:
Equality propagation propagated the constant '7' into
args[0] of the Item_in_optimizer that stands for the
"< ANY" predicate. At the same the min/max subquery
rewrite swapped the order of the left and right operands
of the "<" predicate, but used Item_in_subselect::left_expr.
As a result, when the <ANY predicate is executed early in the
execution phase as a contant condition, instead of a constant
right (swapped) argument of the < predicate, there was a field
(t3.a). This field had no data, since the whole predicate is
considered constant, and it is evaluated before any tables are
read. Having junk in the field row buffer produced wrong result
Solution:
Fix create_swap to pick the correct Item_in_optimizer left
argument.
Honor unique/non-unique when creating keys for internal temporary tables.
Added new variables to be used to limit how keys are created for internal temporary tables.
include/maria.h:
Added maria_max_key_length() and maria_max_key_segments()
include/myisam.h:
Added myisam_max_key_length() and myisam_max_key_segments()
mysql-test/r/mysql.result:
Drop all used tables
mysql-test/r/subselect4.result:
Added test case for lp:879939
mysql-test/t/mysql.test:
Drop all used tables
mysql-test/t/subselect4.test:
Added test case for lp:879939
sql/mysql_priv.h:
Added internal_tmp_table_max_key_length and internal_tmp_table_max_key_segments to be used to limit how keys for derived tables are created.
sql/mysqld.cc:
Added internal_tmp_table_max_key_length and internal_tmp_table_max_key_segments to be used to limit how keys for derived tables are created.
sql/share/errmsg.txt:
Added new error message for internal errors
sql/sql_select.cc:
Give an error if we try to create a wrong key (this error should never happen)
Honor unique/non-unique when creating keys for internal temporary tables.
storage/maria/ha_maria.cc:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
(Not having this caused an assert in the included test)
storage/maria/ha_maria.h:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
storage/maria/ma_check.c:
Fixed a bug in duplicate key error printing (the row position is now printed correctly)
storage/maria/ma_create.c:
maria_max_key_length() -> _ma_max_key_length()
storage/maria/ma_info.c:
Added extern function maria_max_key_length() to calculate the max key length based on current block size.
storage/maria/ma_open.c:
maria_max_key_length() -> _ma_max_key_length()
storage/maria/maria_def.h:
maria_max_key_length() -> _ma_max_key_length()
storage/myisam/ha_myisam.cc:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
(Not having this caused an assert in the included test)
storage/myisam/ha_myisam.h:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. Don't forget to my_ok() it.
sql/sql_table.cc:
small cleanup
Analysis:
During the first execution of the query through the stored
procedure, the optimization phase calls
substitute_for_best_equal_field(), which calls
Item_in_optimizer::transform(). The latter replaces
Item_in_subselect::left_expr with args[0] via assignment.
In this test case args[0] is an Item_outer_ref which is
created/deallocated for each re-execution. As a result,
during the second execution Item_in_subselect::left_expr
pointed to freed memory, which resulted in a crash.
Solution:
The solution is to use change_item_tree(), so that the
original left expression is restored after each execution.
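A minimal sketch of the fix in Item_in_optimizer::transform() (surrounding
context abbreviated):
  /* Instead of the plain assignment
       in_subs->left_expr= args[0];
     register the change, so the per-execution rollback machinery restores
     the original left expression after the statement finishes: */
  thd->change_item_tree((Item **) &in_subs->left_expr, args[0]);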
Analysis:
Both the wrong result and the valgrind warning were a result
of incomplete cleanup of the MIN/MAX subquery rewrite. At the
first execution of the query, the non-aggregate subquery is
transformed into an aggregate MIN/MAX subquery. During the
fix_fields phase of the MIN/MAX function, it sets the property
st_select_lex::with_sum_func to true.
The second execution of the query finds this flag to be ON.
When optimization reaches the same MIN/MAX subquery
transformation, it tests if the subquery is an aggregate or not.
Since select_lex->with_sum_func == true from the previous
execution, the transformation executes the second branch that
handles aggregate subqueries. This substitutes the subquery
Item with an Item_maxmin_subselect. At the same time, elsewhere
it is assumed that the subquery Item is of type
Item_allany_subselect. Ultimately this results in casting the
actual object to the wrong class and calling the wrong
any_value() method from empty_underlying_subquery().
Solution:
Clean up the st_select_lex::with_sum_func property in the case
when the MIN/MAX transformation was performed for a non-aggregate
subquery, so that the transformation can be repeated.
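A minimal sketch of the cleanup (where exactly it is performed, e.g. in the
cleanup of the transformed subquery item, is an assumption):
  /* The MIN/MAX rewrite injected an aggregate function into a subquery that
     originally had none; fix_fields() of that MIN/MAX item set with_sum_func.
     Reset it when the transformation is cleaned up, so a re-execution sees
     the subquery as non-aggregate again and can repeat the rewrite. */
  select_lex->with_sum_func= false;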
Analysis:
For some of the re-executions of the correlated subquery the
WHERE clause is false. In these cases the execution of the
subquery detects that it must generate a NULL row because of
implicit grouping. In this case the subquery execution reaches
the following code in do_select():
while ((table= li++))
  mark_as_null_row(table->table);
This code marks all rows in the table as complete NULL rows.
In the example, when evaluating the field t2.f10 for the second
row, all bits of Field::null_ptr[0] are set by the previous call
to mark_as_null_row(). Then the call to Field::is_null()
returns true, resulting in a NULL for the MAX function.
Thus the lines above are not suitable for subquery re-execution
because mark_as_null_row() changes the NULL bits of each table
field, and there is no logic to restore these fields.
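For reference, mark_as_null_row() does roughly the following (details may
differ slightly from the actual source); it shows why the NULL bits are
never restored:
  inline void mark_as_null_row(TABLE *table)
  {
    table->null_row= 1;
    table->status|= STATUS_NULL_ROW;
    /* Overwrites every NULL bit in the row buffer; nothing undoes this
       before the next re-execution of the subquery. */
    bfill(table->null_flags, table->s->null_bytes, 255);
  }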
Solution:
The call to mark_as_null_row() was added by the fix for bug
lp:613029. Therefore removing the fix for lp:613029 corrects
this wrong result. At the same time the test for lp:613029
behaves correctly because the changes of MWL#89 result in a
different execution path where:
- the constant subquery is evaluated via JOIN::exec_const_cond
- detecting that it has an empty result triggers the branch
if (zero_result_cause)
  return_zero_rows()
- return_zero_rows() calls mark_as_null_row().
- Set the default
- Adjust the testcases so that 'new' tests are run with optimizations turned on.
- Pull out relevant tests from "irrelevant" tests and run them with optimizations on.
- Run range.test and innodb.test with both mrr=on and mrr=off
Analysis:
This bug is yet another incarnation of the generic problem
where optimization of the outer query triggers evaluation
of a subquery, and this evaluation performs a destructive
change to the subquery plan. Specifically a temp table is
created for the DISTINCT operation that replaces the
original subquery table. Later, select_describe() attempts
to print the table name; however, there is no TABLE_LIST object
corresponding to the internal temp table, so we get a
crash. Execution works fine because it is not interested in
the corresponding TABLE_LIST object (or its name).
Solution:
Similar to other such bugs, block the evaluation of expensive
Items in convert_const_to_int().
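A sketch of the guard in convert_const_to_int() (the surrounding condition
is simplified):
  /* Fold the comparison constant into an integer only if evaluating it is
     cheap; an expensive item (e.g. a subquery) must not be executed here,
     during optimization of the outer query. */
  if (field_item->field_type() == MYSQL_TYPE_LONGLONG &&
      (*item)->const_item() && !(*item)->is_expensive())
  {
    /* ... evaluate *item and substitute an Item_int, as before ... */
  }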
Changed the default optimizer_switch flags from
semijoin=on,firstmatch=on,loosescan=on
to
semijoin=off,firstmatch=off,loosescan=off
Adjust the testcases:
- Modify subselect*.test and join_cache.test so that all tests
use the same execution paths as before (i.e. optimizations that
are being tested are enabled)
- Let all other test files run with the new default settings (i.e.
with new optimizations disabled)
- Copy subquery testcases from these files into t/subselect_extra.test
which will run them with new optimizations enabled.
Analysis:
This bug consists of two related problems that are
the result of too early evaluation of single-row subqueries
during the optimization phase of the outer query.
Several optimizer code paths try to evaluate single-row
subqueries in order to produce a constant and use that
constant for further optimization.
When the execution of the subquery performs destructive
changes to the representation of the subquery, and these
changes are not anticipated by the subsequent optimization
phases of the outer query, we typically get a crash or
failed assert.
Specifically, in this bug the inner-most subquery with
DISTINCT triggers a substitution of the original JOIN
object by a single-table JOIN object with a temp table
needed to perform the DISTINCT operation (created by
JOIN::make_simple_join).
This substitution breaks EXPLAIN because:
a) in the first example JOIN::cleanup can no longer
reach the original table of the innermost subquery and
close all indexes, and
b) in this second test query, EXPLAIN attempts to print
the name of the internal temp table, and crashes because
the temp table has no name (NULL pointer instead).
Solution:
a) fully disable subquery evaluation during optimization
in all cases - both for constant propagation and range
optimization, and
b) change JOIN::join_free() to perform cleanup irrespective
of EXPLAIN or not.
mysql-test/r/subselect4.result:
Moved test case for LP BUG#718593 into the correct test file subselect_mat_cost_bugs.test.
mysql-test/t/subselect4.test:
Moved test case for LP BUG#718593 into the correct test file subselect_mat_cost_bugs.test.
Analysis:
The subquery is evaluated first during ref-optimization of the outer
query because the subquery is considered constant from the perspective
of the outer query. Thus an attempt is made to evaluate the MAX subquery
and use the new constant to drive an index nested loops join.
During this evaluation the inner-most subquery replaces the JOIN_TAB
with a new one that fetches the data from a temp table.
The function select_describe crashes at the lines:
  TABLE_LIST *real_table= table->pos_in_table_list;
  item_list.push_back(new Item_string(real_table->alias,
                                      strlen(real_table->alias),
                                      cs));
because 'table' is an internal temp table with no corresponding table
reference, so 'real_table' is NULL and dereferencing real_table->alias
results in a crash.
Solution:
In the spirit of MWL#89 prevent the evaluation of expensive predicates
during optimization. This patch prevents the evaluation of expensive
predicates during ref optimization.
sql/item_subselect.h:
Remove unused class member. Not needed for the fix, but noticed now and removed.
Analysis:
During optimization of the subquery, in the call chain:
update_ref_and_keys -> add_key_fields ->
merge_key_fields -> Item_direct_ref::is_null -> Item_cache::is_null
The call to Item_cache::is_null() returns TRUE, which is wrong.
This results in Item_null replacing the field 'f3' in the KEY_FIELD,
then this Item_null is used for index access, producing a wrong result.
The reason why Item_cache::is_null returns a wrong result is that
this Item_cache object is a cache of the left operand of IN, and was
updated in Item_in_optimizer::val_int. In MWL#89 the latter method is
called during the execution phase, which is after we optimize the subquery.
Therefore during the optimization phase the left operand cache of IN was
not updated.
Solution:
Update the left operand cache during optimization if it is a constant.
This bug fix also uncovers and fixes a wrong IF statement in
convert_constant_item().
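A sketch of the idea (the exact place inside Item_in_optimizer where the
cache is refreshed is an assumption):
  /* If the left IN operand is constant and cheap, fill its cache already
     during optimization, so that is_null()/val_*() calls made from
     update_ref_and_keys() see the real value instead of a stale cache. */
  if (args[0]->const_item() && !args[0]->is_expensive())
  {
    cache->store(args[0]);
    cache->cache_value();
  }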
Analysis:
The method st_select_lex::optimize_unflattened_subqueries()
incorrectly propagated to each subquery the complete
select_options flag set for the whole query. Among other
flags in select_options, this propagated incorrectly the
STRAIGHT_JOIN flag from the upper query to the subquery.
Solution:
During EXPLAIN set only the SELECT_DESCRIBE bit in the
select_options of the subquery.
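A minimal sketch of the changed propagation (variable names are illustrative):
  /* In st_select_lex::optimize_unflattened_subqueries(): do not hand the
     subquery the complete outer select_options (that would leak flags such
     as STRAIGHT_JOIN); during EXPLAIN propagate only the DESCRIBE bit. */
  if (outer_select_options & SELECT_DESCRIBE)
    inner_join->select_options|= SELECT_DESCRIBE;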
Analysis:
The function build_equal_items_for_cond() rewrites the WHERE clause in such
a way that it may merge the list join->cond_equal->current_level with the
list of child Items in an AND condition of the WHERE clause.
The place where this is done is:
static COND *build_equal_items_for_cond(THD *thd, COND *cond,
                                        COND_EQUAL *inherited)
{
  ...
  if (and_level)
  {
    args->concat(&eq_list);
    args->concat((List<Item> *)&cond_equal.current_level);
  }
  ...
}
As a result, later transformations on the WHERE clause may change the
structure of the list join->cond_equal->current_level without knowing this.
Specifically in this bug, Item_in_subselect::inject_in_to_exists_cond
creates a new AND of the old WHERE clause and the IN->EXISTS conditions.
It then calls fix_fields() for the new AND. Among other things, fix_fields
flattens all nested ANDs into one by merging the AND argument lists.
When there is a cond_equal for the JOIN, its list of Item_equal objects
is attached to the end of the original AND. When a lower-level AND is
merged into the top-level one, the argument list of the lower-level AND
is concatenated to the list of multiple equalities in the upper-level AND.
As a result, when substitute_for_best_equal_field processes the
multiple equalities, it turns out that the multiple equality list contains
the Items from the lower-level AND which were concatenated to the end of
the join->cond_equal->current_level list. This results in a crash because
this list must not contain any other Items except for the previously found
Item_equal ones.
Solution:
When performing IN->EXISTS predicate injection, and the WHERE clause is an
AND, detach the list of Item_equal objects before calling fix_fields on
the injected WHERE clause.
After fix_fields is done, reattach the multiple-equality list to
the end of the argument list of the new AND.
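A sketch of the detach/reattach around fix_fields() (assuming a list helper
like List<Item>::disjoin(); names such as join_arg and where_item are
illustrative, not the literal patch):
  List<Item> *and_args= NULL;
  if (join_arg->cond_equal && where_item->type() == Item::COND_ITEM &&
      ((Item_cond*) where_item)->functype() == Item_func::COND_AND_FUNC)
  {
    and_args= ((Item_cond*) where_item)->argument_list();
    /* Temporarily remove the Item_equal objects that
       build_equal_items_for_cond() concatenated to the end of the AND
       argument list. */
    and_args->disjoin((List<Item> *) &join_arg->cond_equal->current_level);
  }

  where_item->fix_fields(thd, &where_item);   /* may flatten nested ANDs */

  if (and_args)
  {
    /* Reattach the multiple-equality list to the end of the (possibly
       flattened) AND argument list. */
    and_args->concat((List<Item> *) &join_arg->cond_equal->current_level);
  }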