Analysis:
The fix for bug lp:985667 implements the method Item_subselect::no_rows_in_result()
for all main kinds of subqueries. The purpose of this method is to be called from
return_zero_rows() and reset Items to a default value when a query returns no
rows. Aggregates and subqueries require special treatment in this case.
Every implementation of Item_subselect::no_rows_in_result() called
Item_subselect::make_const() to set the subquery predicate to its default value
irrespective of where the predicate was located in the query. Once the predicate
was set to a constant it was never executed.
At the same time, the JOIN object of the fake select for UNIONs (the one used for
the final result of the UNION) was created only after all subqueries in the union were
executed. Since we set the subquery as constant, it was never executed, and the
corresponding JOIN was never created.
In order to decide whether the result of NOT IN is NULL or FALSE, Item_in_optimizer
needs to check if the subquery result was empty or not. This is where we got the
crash, because subselect_union_engine::no_rows() checks for
unit->fake_select_lex->join->send_records, and the join object was NULL.
Solution:
If a subquery is in the HAVING clause it must be evaluated in order to know its
result, so that we can properly filter the result records. Once subqueries in the
HAVING clause are executed even in the case of no result rows, this specific
crash will be solved, because the UNION will be executed, and its JOIN will be
constructed. Therefore the fix for this crash is to narrow the fix for lp:985667,
and to apply Item_subselect::no_rows_in_result() only when the subquery predicate
is in the SELECT clause.
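The shape of the narrowed fix can be illustrated with a small stand-alone sketch
(stub type and hypothetical member names, not the actual server classes): the
"reset to default value" path is taken only for subquery predicates that appear
in the SELECT list, so predicates in HAVING keep their engines and are still
executed.

  /* Sketch only: models the narrowed behaviour, not the real Item_subselect. */
  struct subquery_predicate
  {
    bool in_select_list= false;  /* where the predicate appears in the query */
    bool forced_const= false;    /* models the effect of make_const()        */

    void no_rows_in_result()     /* called from return_zero_rows()           */
    {
      if (!in_select_list)
        return;                  /* HAVING predicates stay executable        */
      forced_const= true;        /* SELECT-list predicates get the default   */
    }
  };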
Analysis:
Queries with implicit grouping (there is an aggregate function, but no GROUP BY)
follow some non-obvious semantics in the case of an empty result set.
Aggregate functions produce some special "natural" value depending on
the function. For instance MIN/MAX return NULL, COUNT returns 0.
The complexity comes from non-aggregate expressions in the select list.
If a non-aggregate expression is constant, it can be computed, so we should
return its value. However, if the expression is non-constant and depends on
columns from the empty result set, then the only meaningful value is NULL.
The cause of the wrong result was that, in the case of an empty result with
implicit grouping, the optimizer didn't distinguish between constant and
non-constant subqueries.
Solution:
In all implementations of Item_subselect::no_rows_in_result(), check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping; instead, let it be evaluated.
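A minimal sketch of the added check (again with a stub type and hypothetical
members rather than the real Item hierarchy): a constant subquery predicate is
left alone so it can simply be evaluated, while a non-constant one is forced to
its default value.

  /* Sketch only: the const-ness guard added to no_rows_in_result(). */
  struct subquery_predicate
  {
    bool constant= false;        /* models Item::const_item()                */
    bool forced_const= false;

    void no_rows_in_result()
    {
      if (constant)
        return;                  /* constant: evaluating it gives the answer */
      forced_const= true;        /* non-constant: use the default value      */
    }
  };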
One of the reported problems manifested itself in the scenario where one
thread tried to get statistics on a key cache while a second thread
had not finished initialization of the key cache structure yet.
The problem was resolved by forcing serialization of such operations
on key caches.
To serialize the function calls that perform certain operations over a key cache,
a new mutex associated with the key cache is now used. It is stored in the
field op_lock of the KEY_CACHE structure and is locked while the operation
is performed. Some of the serialized key cache operations call other key
cache operations. To avoid recursive locking of op_lock, new functions that
perform key cache initialization, destruction and re-partitioning were
introduced, each taking an additional parameter. The parameter says whether
the operations on op_lock are to be performed or omitted. The old functions
for key cache initialization, destruction and re-partitioning now just call
the corresponding new functions with the additional parameter set to true,
requesting op_lock to be used, while all other calls of these new functions
have this parameter set to false.
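The locking pattern can be sketched as follows (simplified, with hypothetical
names; the real code operates on the KEY_CACHE structure): the old entry point
requests locking of op_lock, while internal callers pass false to avoid locking
the same mutex recursively.

  #include <pthread.h>

  struct key_cache_stub
  {
    pthread_mutex_t op_lock;
    /* ... the rest of the key cache structure ... */
  };

  /* New-style function: the caller says whether op_lock must be taken. */
  static int init_key_cache_internal(key_cache_stub *kc, bool use_op_lock)
  {
    if (use_op_lock)
      pthread_mutex_lock(&kc->op_lock);
    /* ... allocate and initialize the key cache structures ... */
    if (use_op_lock)
      pthread_mutex_unlock(&kc->op_lock);
    return 0;
  }

  /* Old-style function keeps its signature and always locks op_lock;
     other key cache operations call the internal variant with false. */
  static int init_key_cache(key_cache_stub *kc)
  {
    return init_key_cache_internal(kc, true);
  }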
Another problem reported in the bug entry concerned the operation of
assigning an index to a key cache. This operation can be called while
the key cache structures are not initialized yet. In this case any call
to flush_key_blocks() should return without performing any action.
No test case is provided with this patch.
- Item::get_seconds() now skips decimal arithmetic if decimals is 0. This significantly speeds up from_unixtime() if no fractional part is passed (see the sketch after this list).
- replace the sprintf() calls used to format temporal values with hand-coded formatting
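A rough sketch of the fast path (stub type and simplified arithmetic; the real
Item::get_seconds() works with the server's decimal machinery): when the item
declares no fractional digits, the integer value is taken directly and the
decimal round trip is bypassed.

  /* Sketch only: the decimals == 0 shortcut. */
  struct item_stub
  {
    unsigned decimals;          /* declared number of fractional digits      */
    long long int_value;        /* value as an integer                       */
    double real_value;          /* value with a possible fractional part     */

    void get_seconds(unsigned long long *sec, unsigned long *sec_part) const
    {
      if (decimals == 0)        /* fast path: skip decimal arithmetic        */
      {
        *sec= (unsigned long long) int_value;
        *sec_part= 0;
        return;
      }
      /* slow path (simplified here with doubles instead of decimals) */
      *sec= (unsigned long long) real_value;
      *sec_part= (unsigned long) ((real_value - (double) *sec) * 1000000.0);
    }
  };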
Query1 (original query in the bug report)
BENCHMARK(10000000,DATE_SUB(FROM_UNIXTIME(RAND() * 2147483648), INTERVAL (FLOOR(1 + RAND() * 365)) DAY))
Query2 (Variation of query1 that does not use fractional part in FROM_UNIXTIME parameter)
BENCHMARK(10000000,DATE_SUB(FROM_UNIXTIME(FLOOR(RAND() * 2147483648)), INTERVAL (FLOOR(1 + RAND() * 365)) DAY))
Prior to the patch, the runtimes were (32 bit compilation/AMD machine)
Query1: 41.53 sec
Query2: 23.90 sec
With the patch, the runtimes are
Query1: 32.32 sec (speed up due to removing sprintf)
Query2: 12.06 sec (speed up due to skipping decimal arithmetic)
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it set the unit and select_lex uncacheable
flags to UNCACHEABLE_DEPENDENT_INJECTED unconditionally.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
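A sketch of the changed decision (hypothetical names and a simplified table map;
the real code inspects the tables used by the injected conditions): the
UNCACHEABLE_DEPENDENT_INJECTED flag is raised only when an injected predicate
actually refers to the outer query.

  typedef unsigned long long table_map;

  /* Sketch only: one bit models "refers to the outer query". */
  static const table_map OUTER_REF_BIT= 1ULL << 63;
  static const unsigned UNCACHEABLE_DEPENDENT_INJECTED_FLAG= 1;

  struct subquery_stub
  {
    table_map injected_cond_tables;  /* tables used by injected predicates  */
    unsigned uncacheable= 0;
  };

  static void mark_after_in_to_exists(subquery_stub *subq)
  {
    if (subq->injected_cond_tables & OUTER_REF_BIT)
      subq->uncacheable|= UNCACHEABLE_DEPENDENT_INJECTED_FLAG;
    /* otherwise the subquery remains non-correlated (and cacheable) */
  }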
The constructor for Query_log_event allocated 2 bytes too few for the
extra space needed by the query cache. (Not sure if this is reproducible
in practice, as there are often a couple of extra bytes allocated
for unused string zero terminators, but better safe than sorry).
Analysis:
When a subquery that needs a temp table is executed during
the prepare or optimize phase of the outer query, at the end
of the subquery execution all the JOIN_TABs of the subquery
are replaced by a new JOIN_TAB that selects from the temp table.
However that temp table has no corresponding TABLE_LIST.
Once EXPLAIN execution reaches its last phase, it tries to print
the names of the subquery tables through its TABLE_LISTs, but in
the case of this bug there is no such TABLE_LIST (it is NULL),
hence a crash.
Solution:
The fix is to block subquery evaluation inside
Item_func_like::fix_fields and Item_func_like::select_optimize()
using the Item::is_expensive() test.
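The guard can be sketched like this (stub type and a hypothetical helper, not
the real Item_func_like code): an expensive pattern argument, such as a
subquery, is not evaluated during fix_fields/optimization; the decision is
deferred to execution time.

  /* Sketch only: skip early evaluation of expensive LIKE patterns. */
  struct item_stub
  {
    bool expensive;             /* models Item::is_expensive()               */
    const char *pattern;        /* pattern text, if cheaply available        */
  };

  static bool can_use_like_range_optimization(const item_stub *pat)
  {
    if (pat->expensive)
      return false;             /* do not evaluate a subquery this early     */
    /* a pattern starting with a wildcard cannot seed an index range */
    return pat->pattern && pat->pattern[0] != '%' && pat->pattern[0] != '_';
  }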
The optimizer fails to use an index with ST_Within(g, constant_poly).
per-file comments:
mysql-test/r/gis-rt-precise.result
test result fixed.
mysql-test/r/gis-rtree.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-dynamic.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-trans.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree.result
test result fixed.
storage/maria/ma_rt_index.c
Use MBR_INTERSECT mode when optimizing a select with ST_Within.
storage/myisam/rt_index.c
Use MBR_INTERSECT mode when optimizing a select with ST_Within.
- In JOIN::exec(), make the having->update_used_tables() call before we've
made the JOIN::cleanup(full=true) call. The latter frees SJ-Materialization
structures, which correlated subquery predicate items attempt to walk afterwards.
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.
Analysis:
When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.
The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately the use
of copy_field.
However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.
Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
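A simplified stand-in for the cleanup fix (not the exact TMP_TABLE_PARAM code):
the end pointers are reset together with copy_field, so a later re-evaluation
sees a consistent, empty copy-field range instead of a dangling end pointer.

  /* Sketch only: models TMP_TABLE_PARAM::cleanup() after the fix. */
  struct copy_field_stub { /* stands in for Copy_field */ };

  struct tmp_table_param_stub
  {
    copy_field_stub *copy_field= nullptr;
    copy_field_stub *copy_field_end= nullptr;
    copy_field_stub *save_copy_field_end= nullptr;

    void cleanup()
    {
      delete [] copy_field;
      copy_field= nullptr;
      /* the fix: reset the end pointers together with copy_field */
      copy_field_end= nullptr;
      save_copy_field_end= nullptr;
    }
  };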
Analysis:
The optimizer detects an empty result through constant table optimization.
Then it calls return_zero_rows(), which in turn indirectly calls
Item_maxmin_subselect::no_rows_in_result(). The latter method sets "value=0",
however "value" is a pointer to Item_cache, not just an integer value.
All of the Item_[maxmin | singlerow]_subselect::val_XXX methods do:
  if (forced_const)
    return value->val_real();
which of course crashes when value is a NULL pointer.
Solution:
When the optimizer discovers an empty result set, set
Item_singlerow_subselect::value to a FALSE constant Item instead of NULL.
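A sketch of the intended state after the fix (stub classes instead of the real
Item hierarchy): value points to a constant false item rather than staying
NULL, so the forced_const shortcut in the val_XXX methods has a valid object to
call.

  /* Sketch only: value becomes a constant-false item, never a NULL pointer. */
  struct item_cache_stub
  {
    double val_real() const { return 0.0; }   /* constant FALSE             */
  };

  struct singlerow_subselect_stub
  {
    bool forced_const= false;
    item_cache_stub *value= nullptr;

    void no_rows_in_result()
    {
      static item_cache_stub false_value;     /* shared constant item       */
      value= &false_value;                    /* instead of value= 0        */
      forced_const= true;
    }

    double val_real() const
    {
      return forced_const ? value->val_real() : 0.0; /* real exec omitted   */
    }
  };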
Handle 'set read_only=1' in a lighter way than FLUSH TABLES READ LOCK:
for transactional engines we don't wait for operations on those tables to finish.
per-file comments:
mysql-test/r/read_only_innodb.result
MDEV-136 Non-blocking "set read_only".
test result updated.
mysql-test/t/read_only_innodb.test
MDEV-136 Non-blocking "set read_only".
test case added.
sql/mysql_priv.h
MDEV-136 Non-blocking "set read_only".
close_cached_tables_set_readonly() declared.
sql/set_var.cc
MDEV-136 Non-blocking "set read_only".
Call close_cached_tables_set_readonly() for the read_only::set_var.
sql/sql_base.cc
MDEV-136 Non-blocking "set read_only".
Parameters added to the close_cached_tables() implementation;
close_cached_tables_set_readonly() declared.
Prevent blocking on transactional tables if
set_readonly_mode is on.
- make make_cond_after_sjm() correctly handle OR clauses where one branch refers to the semi-join table
while the other branch refers to the non-semijoin table.
If we did nothing to resolve a unique table conflict, we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only if we have materialized a table.
- Let fix_semijoin_strategies_for_picked_join_order() set
POSITION::prefix_record_count for POSITION records that it copies from
SJ_MATERIALIZATION_INFO::tables.
(These records do not have prefix_record_count set, because they are optimized
as joins-inside-semijoin-nests, without full advance_sj_state() processing).
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
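The shape of the fix can be sketched as follows (stub classes, not the real
Item hierarchy): the ALL/ANY/IN wrappers override not_null_tables() to report
an empty table map, so the optimizer no longer treats them as null-rejecting
and keeps the outer joins.

  typedef unsigned long long table_map;

  /* Sketch only: default Item_func behaviour vs. the overridden one. */
  struct item_func_stub
  {
    table_map args_tables= 0;                 /* tables used by arguments   */
    virtual table_map not_null_tables() const { return args_tables; }
    virtual ~item_func_stub() {}
  };

  struct item_in_optimizer_stub : item_func_stub
  {
    /* the fix: IN/ALL/ANY predicates reject no NULL-complemented rows */
    table_map not_null_tables() const override { return 0; }
  };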