- The problem was that JOIN::prepare() tried to set TABLE::maybe_null
for a table in the join. Non-merged semi-join tables 1) are present as
the join's base tables on the second EXECUTE, but 2) do not yet have a TABLE
object.
Worked around the problem by putting mixed_implicit_grouping into the JOIN
object and then passing it to JTBM tables in setup_jtbm_semi_joins().
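A minimal standalone sketch of that workaround, using simplified stand-ins for the server structures (only the names JOIN, mixed_implicit_grouping and setup_jtbm_semi_joins() come from the commit; everything else here is illustrative): the flag is computed once during prepare, stored on the join, and applied later to JTBM tables that do not have a TABLE object yet.

```cpp
#include <vector>

// Simplified stand-ins for the server structures (illustration only).
struct JtbmTable { bool maybe_null = false; };   // non-merged semi-join nest

struct Join {
  bool mixed_implicit_grouping = false;          // computed during prepare()
  std::vector<JtbmTable*> jtbm_tables;

  // prepare(): decide the flag once, while item/field information is available
  void prepare(bool has_aggregates, bool has_non_aggregated_fields) {
    mixed_implicit_grouping = has_aggregates && has_non_aggregated_fields;
  }

  // setup_jtbm_semi_joins(): runs later, when JTBM TABLE objects may not
  // exist yet, so it reads the flag from the JOIN instead of from a TABLE.
  void setup_jtbm_semi_joins() {
    for (JtbmTable *t : jtbm_tables)
      t->maybe_null = mixed_implicit_grouping;
  }
};

int main() {
  Join join;
  JtbmTable jtbm;
  join.jtbm_tables.push_back(&jtbm);
  join.prepare(/*has_aggregates=*/true, /*has_non_aggregated_fields=*/true);
  join.setup_jtbm_semi_joins();   // jtbm.maybe_null is now true
}
```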
- convert_subq_to_sj() must connect the child select's tables into the
parent select's TABLE_LIST::next_local chain.
- The problem was that it took the child's leaf_tables.head(), which
is a different list. This could cause certain tables (in this bug's case,
the child select's non-merged semi-join) not to be present in the
TABLE_LIST::next_local chain, so the non-merged semi-join was not
initialized in setup_tables(), which led to a NULL pointer dereference.
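A minimal sketch of the list splice described above, with a toy stand-in for TABLE_LIST (only the next_local member comes from the commit; the helper function is hypothetical):

```cpp
#include <cstdio>

// Minimal stand-in for TABLE_LIST: only the next_local chain matters here.
struct TableList {
  const char *alias;
  TableList  *next_local = nullptr;
};

// Append the child's complete local table chain to the parent's chain.
// The point of the fix: walk the child's *full* table list, not some other
// list (e.g. leaf_tables.head()), or tables such as non-merged semi-joins
// can be left out of next_local and never reach setup_tables().
void link_child_tables(TableList *parent_head, TableList *child_head) {
  TableList *tail = parent_head;
  while (tail->next_local)
    tail = tail->next_local;
  tail->next_local = child_head;
}

int main() {
  TableList t1{"t1"}, t2{"t2"}, sj{"<subquery>"};
  t2.next_local = &sj;                 // child list: t2 -> <subquery>
  link_child_tables(&t1, &t2);         // parent list: t1 -> t2 -> <subquery>
  for (TableList *t = &t1; t; t = t->next_local)
    std::printf("%s\n", t->alias);
}
```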
- Don't pull a table out of a semi-join if it is on the inner side of an outer join.
- Make the join->sort_by_table= get_sort_by_table(...) call after const table detection
is done. That way, the value of join->sort_by_table will match the actual execution,
which allows the code in setup_semijoin_dups_elimination() (search for
"Make sure that possible sorting of rows from the head table is not to be employed.")
to see that "Using filesort" is going to be used together with Duplicate Elimination,
and to change it to Using temporary + Using filesort.
1. The transformation of row IN subqueries is now done the same way as for single-value IN subqueries.
2. replace_where_subcondition() now works through several layers of OR/AND, because it is called on the expression before fix_fields() (see the sketch below).
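A toy sketch of what "working through several layers of OR/AND" means (the tree type and helper below are simplified stand-ins, not the real Item classes): the replacement must recurse through nested AND/OR nodes, since before fix_fields() the condition tree is not yet flattened.

```cpp
#include <memory>
#include <vector>

// Toy condition tree: leaves and AND/OR nodes (stand-ins for Item/Item_cond).
struct Cond {
  enum Kind { LEAF, AND_NODE, OR_NODE } kind = LEAF;
  int id = 0;                               // identifies a leaf condition
  std::vector<std::unique_ptr<Cond>> args;  // children of AND/OR
};

// Walk arbitrarily nested AND/OR levels and replace the condition 'old_id'
// with 'replacement'; returns true if a replacement was made.
bool replace_subcondition(std::unique_ptr<Cond> &node, int old_id,
                          std::unique_ptr<Cond> &replacement) {
  if (node->kind == Cond::LEAF) {
    if (node->id != old_id)
      return false;
    node = std::move(replacement);
    return true;
  }
  for (auto &arg : node->args)
    if (replace_subcondition(arg, old_id, replacement))
      return true;
  return false;
}

int main() {
  auto make_leaf = [](int id) {
    auto c = std::make_unique<Cond>();
    c->id = id;
    return c;
  };
  auto tree = std::make_unique<Cond>();
  tree->kind = Cond::AND_NODE;
  auto or_node = std::make_unique<Cond>();
  or_node->kind = Cond::OR_NODE;
  or_node->args.push_back(make_leaf(1));
  or_node->args.push_back(make_leaf(2));        // the predicate to replace
  tree->args.push_back(std::move(or_node));

  auto injected = make_leaf(42);                // the injected condition
  replace_subcondition(tree, 2, injected);      // found through AND -> OR
}
```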
Apply the patch from Patryk Pomykalski:
- create_internal_tmp_table_from_heap() will now return information on whether
the last row that we tried to write was a duplicate row.
(mysql-5.6 also has this change)
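A minimal sketch of the interface idea, assuming a much-simplified row writer (the function and types below are illustrative, not the real server API): the write path reports through an output parameter whether the row it just tried to write was a duplicate, so the caller can distinguish that case from a genuine error.

```cpp
#include <set>

// 'Row' and the std::set stand in for the real temp-table machinery.
using Row = int;

bool write_row_to_tmp_table(std::set<Row> &table, const Row &row,
                            bool *is_duplicate) {
  auto res = table.insert(row);
  *is_duplicate = !res.second;   // caller can now distinguish "duplicate key"
  return res.second;             // from a genuine write error
}

int main() {
  std::set<Row> tmp_table;
  bool is_duplicate = false;
  write_row_to_tmp_table(tmp_table, 1, &is_duplicate);  // written
  write_row_to_tmp_table(tmp_table, 1, &is_duplicate);  // is_duplicate == true
}
```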
- Added a test and extra code to ensure we don't leave keyread on for a handler table.
- Always create on-disk temporary files with long data pointers if SQL_SMALL_RESULT is not used. This ensures that we can handle temporary files bigger than 4G.
mysql-test/include/default_mysqld.cnf:
Run test suite with smaller aria keybuffer size
mysql-test/suite/maria/maria3.result:
Run test suite with smaller aria keybuffer size
mysql-test/suite/sys_vars/r/aria_pagecache_buffer_size_basic.result:
Run test suite with smaller aria keybuffer size
sql/handler.cc:
Disable key read (extra safety if something went wrong)
sql/multi_range_read.cc:
Ensure we don't leave keyread on for secondary_file
sql/opt_range.cc:
Simplify code with mark_columns_used_by_index_no_reset()
Ensure that read_keys_and_merge() disables keyread if it enabled it
sql/opt_subselect.cc:
Remove the no longer used argument of create_internal_tmp_table()
sql/sql_derived.cc:
Remove the no longer used argument of create_internal_tmp_table()
sql/sql_select.cc:
Use 'enable_keyread()' instead of calling HA_EXTRA_RESET. (Makes debugging easier)
Always create on-disk temporary files with long data pointers if SQL_SMALL_RESULT is not used. This ensures that we can handle temporary files bigger than 4G.
Remove the no longer used argument of create_internal_tmp_table()
More DBUG
sql/sql_select.h:
Remove the no longer used argument of create_internal_tmp_table()
This bug happened because the executor tried to use the wrong
TABLE REF object when building access keys. It constructed
keys from the fields of a materialized table using a ref object
that had been created to construct keys from the fields of the underlying
base table. This could happen only when the materialized table
was created for a non-correlated IN subquery and only
when the materialized table was used for lookups.
In this case we are guaranteed to be able to construct the
keys from the fields of tables that would be outer tables
for the tables of the IN subquery.
The patch makes sure that no ref objects constructed from
fields of materialized lookup tables are ever used.
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it unconditionally set the uncacheable flag of the
unit and select_lex to UNCACHEABLE_DEPENDENT_INJECTED.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
The patch re-enables constant subquery execution during
query optimization, after it was disabled during the development
of MWL#89 (cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).
The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive.
The approach is as follows:
- Constant subqueries are recursively optimized at the beginning of
JOIN::optimize of the outer query, via the new method
JOIN::optimize_constant_subqueries(), so that the cost
of executing these subqueries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
the optimizer may request execution of non-expensive constant subqueries.
Each place where the optimizer may potentially execute an expensive
expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended
to use the number of examined rows (estimated by the optimizer) as a
way to determine whether the subquery is expensive or not.
- The new system variable "expensive_subquery_limit" controls how many
examined rows are still considered inexpensive. The default is 100
(a sketch of this check follows below).
In addition, multiple changes were needed to make this solution work
in the light of the changes made by MWL#89. These changes were needed
to fix various crashes and wrong results, and legacy bugs discovered
during development.
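A sketch of the cost gate described in the list above, under the assumption of a much-simplified plan structure (only the names is_expensive and expensive_subquery_limit and the default of 100 come from the commit; everything else is illustrative):

```cpp
#include <cstdint>

// Simplified stand-in for the optimizer's per-subquery information.
struct SubqueryPlan {
  double examined_rows;          // optimizer estimate for one execution
  bool   is_constant;            // no dependency on the outer query
};

constexpr uint64_t expensive_subquery_limit = 100;   // system variable default

// A subquery is "expensive" when its estimated examined rows exceed the limit.
bool is_expensive(const SubqueryPlan &plan) {
  return plan.examined_rows > expensive_subquery_limit;
}

// Only constant, inexpensive subqueries may be executed during optimization.
bool can_execute_during_optimization(const SubqueryPlan &plan) {
  return plan.is_constant && !is_expensive(plan);
}

int main() {
  SubqueryPlan p{/*examined_rows=*/40.0, /*is_constant=*/true};
  return can_execute_during_optimization(p) ? 0 : 1;   // 40 <= 100: executable
}
```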
- Let fix_semijoin_strategies_for_picked_join_order() set
POSITION::prefix_record_count for POSITION records that it copies from
SJ_MATERIALIZATION_INFO::tables.
(These records do not have prefix_record_count set, because they are optimized
as joins-inside-semijoin-nests, without full advance_sj_state() processing).
Analysis:
The reason for the wrong result is the interaction between constant
optimization (in this case 1-row table) and subquery optimization.
- First the outer query is optimized, and 'make_join_statistics' finds that
table t2 has one row, reads that row, and marks the whole table as constant.
This also means that all fields of t2 are constant.
- Next, we optimize the subquery at the end of the outer 'make_join_statistics'.
The field 'f2' is considered constant, with value '3'. The subquery predicate
is rewritten as the constant TRUE.
- The outer query execution detects early that the whole query result is empty
and calls 'return_zero_rows'. Since the query is with implicit grouping, we
have to produce one row with special values for the aggregates (depending on
each aggregate function), and NULL values for all non-aggregate fields. This
function calls 'no_rows_in_result' to set each aggregate function to the
default value when it aggregates over an empty result, and then calls
'send_data', which in turn evaluates each Item in the SELECT list.
- When evaluation reaches the subquery predicate, it executes the subquery
with field 'f2' having a constant value '3', and the subquery produces the
incorrect result '7'.
Solution:
Implement Item::no_rows_in_result for all subquery predicates. In order to
make this work, it is also necessary to make all val_* methods of all subquery
predicates respect the Item_subselect::forced_const flag. Otherwise subqueries
are executed anyway, and override the default value set by no_rows_in_result
with whatever result the subquery evaluation produces.
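A minimal sketch of that behaviour (the class below is a simplified stand-in for Item_subselect; only the names forced_const, no_rows_in_result and val_int come from the commit): once no_rows_in_result() pins the item to its empty-result value, val_int() must return that value instead of executing the subquery.

```cpp
#include <cstdio>

struct SubselectItem {
  bool      forced_const = false;
  long long const_value  = 0;

  long long execute_subquery() { return 7; }   // placeholder for real execution

  void no_rows_in_result() {
    forced_const = true;          // pin the "empty result" value
    const_value  = 0;
  }

  long long val_int() {
    if (forced_const)             // do not run the subquery and overwrite
      return const_value;         // the value set by no_rows_in_result()
    return execute_subquery();
  }
};

int main() {
  SubselectItem item;
  item.no_rows_in_result();
  std::printf("%lld\n", item.val_int());   // prints 0, not 7
}
```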
- When doing join optimization, pre-sort the tables so that they mimic the execution
order we've had with 'semijoin=off'.
- That way, we will not get regressions when there are two query plans (the old and the
new) that have identical costs but different execution times (because of factors that
the optimizer was not able to take into account). A sketch of the idea follows below.
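A purely illustrative sketch of the pre-sort idea: a stable sort keeps the relative order that the 'semijoin=off' optimizer would have used when costs come out identical. The sort key below (found_records) is a placeholder assumption, not the actual criterion used by the server.

```cpp
#include <algorithm>
#include <vector>

struct JoinTab { const char *alias; double found_records; };

// Stable pre-sort: ties keep the order the pre-semijoin optimizer would use.
void presort_tables(std::vector<JoinTab> &tabs) {
  std::stable_sort(tabs.begin(), tabs.end(),
                   [](const JoinTab &a, const JoinTab &b) {
                     return a.found_records < b.found_records;
                   });
}

int main() {
  std::vector<JoinTab> tabs = {{"t1", 100}, {"t2", 100}, {"t3", 10}};
  presort_tables(tabs);   // t3 first; t1 stays ahead of t2 (stable tie-break)
}
```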
- The problem was with execution strategy for cases where FirstMatch's inner tables
were interleaved with outer-uncorrelated tables.
- I was unable to find any cases where such join orders would be practically useful,
so fixed it by disabling them.
include/mysql_com.h:
remove "shutdown levels" that aren't shutdown levels from mysql_enum_shutdown_level
mysys/my_addr_resolve.c:
my_snprintf in 5.5 (but not in 5.3) supports %p
sql/item_func.cc:
use a method (that exists only in 5.5) instead of directly accessing a member
sql/item_subselect.cc:
use a method (that exists only in 5.5) instead of directly accessing a member
sql/opt_subselect.cc:
use a method (that exists only in 5.5) instead of directly accessing a member
sql/sql_select.cc:
use a method (that exists only in 5.5) instead of directly accessing a member
- Fix equality propagation to work with SJM nests and OR clauses (full description of the problem and
solution is in the comment in the patch)
(The second commit with post-review fixes)
- The problem was that
= we had picked a LooseScan that used a full index scan (tab->type==JT_ALL) on a certain index,
= there was also a quick select (tab->quick!=NULL) that used other indexes,
= some old code assumes that (tab->type==JT_ALL && tab->quick) means that the
quick select should be used, which is not true.
Fixed by discarding the quick select as soon as we know we're using LooseScan
without using the quick select.
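A minimal sketch of the fix with toy stand-ins for the join tab and the quick select (the member names type and quick and the constant JT_ALL come from the commit; the helper function is hypothetical): once LooseScan with a full index scan is chosen, the leftover quick select is discarded so legacy (type==JT_ALL && quick) checks cannot pick it.

```cpp
struct QuickSelect { int index; };

struct JoinTab {
  enum Type { JT_ALL, JT_REF } type = JT_ALL;
  QuickSelect *quick = nullptr;
};

// LooseScan will scan its own index; the quick select (built over a
// different index) no longer describes the chosen access method.
void pick_loosescan_full_scan(JoinTab *tab) {
  delete tab->quick;
  tab->quick = nullptr;
}

int main() {
  JoinTab tab;
  tab.quick = new QuickSelect{1};
  pick_loosescan_full_scan(&tab);   // old (type==JT_ALL && quick) checks now see quick==NULL
}
```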
- Remove all references to MAX_TABLES from the JOIN struct and make these arrays dynamic
- Updated Join_plan_state to allocate only as many elements as are needed
sql/opt_subselect.cc:
Optimized version of Join_plan_state
sql/sql_select.cc:
Set join->positions and join->best_positions dynamically
Don't call update_virtual_fields() if table->vfield is not set.
sql/sql_select.h:
Remove all references to MAX_TABLES from the JOIN struct and Join_plan_state and make these dynamic
- Avoid needless loads/stores in my_hash_sort_simple due to possible aliasing (see the sketch below)
- Avoid the expensive Join_plan_state constructor in choose_subquery_plan when there is no subquery
- Avoid calling update_virtual_fields for every row when there are no virtual fields.
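An illustrative sketch of the aliasing pattern behind the first bullet (this is not the real my_hash_sort_simple, and the mixing step is only an approximation): keeping the accumulators in locals and storing through the output pointers once at the end lets the compiler avoid reloads it would otherwise need because the pointers could alias the key buffer.

```cpp
#include <cstddef>
#include <cstdint>

void hash_sort_simple(const unsigned char *key, size_t len,
                      uint64_t *nr1, uint64_t *nr2) {
  uint64_t n1 = *nr1, n2 = *nr2;          // work on locals, not *nr1 / *nr2
  for (size_t i = 0; i < len; i++) {
    n1 ^= (((n1 & 63) + n2) * key[i]) + (n1 << 8);
    n2 += 3;
  }
  *nr1 = n1;                              // single store at the end
  *nr2 = n2;
}
```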
- The problem was that convert_subq_to_jtbm() attached the semi-join
TABLE_LIST object to the wrong list: it used to attach it to the
end of parent_lex->leaf_tables.head()->next_local->...->next_local.
This was apparently incorrect, as one can construct an example where the
JTBM nest is attached to a table that is inside some mergeable VIEW, which
breaks name resolution (causes a crash) on subsequent statement
re-execution.
- Solution: attach to the "right" list. The linking code was copied from
st_select_lex::handle_derived.