Analysis (BUG#719198):
The assert failed because the execution code for
partial matching is designed with the assumption that
NULLs on the left side are detected as early as possible,
and a NULL result is returned before any lookups are
performed at all.
However, in the case of an Item_cache object on the left
side, the NULL was not detected, because detection was done
via Item::is_null(), which Item_cache does not override, so
the call resolved to the default Item::is_null(), which
always returns FALSE.
Solution:
Implement Item_cache::is_null().
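For illustration only, a minimal standalone C++ sketch of the failure pattern
(all names below are invented, not the server's Item hierarchy): a base-class
is_null() that defaults to FALSE hides a cached NULL unless the cache class
overrides it.

  #include <cassert>

  struct ItemBase {
    virtual bool is_null() { return false; }   // default: claims "not NULL"
    virtual ~ItemBase() = default;
  };

  struct CachedValue : ItemBase {
    bool cached_value_is_null = true;   // the cache knows its stored value is NULL

    // The fix in spirit: report NULL from the cache's own state instead of
    // inheriting the always-FALSE default.
    bool is_null() override { return cached_value_is_null; }
  };

  int main() {
    CachedValue cache;            // caches a NULL left-side value
    ItemBase *left = &cache;
    assert(left->is_null());      // NULL detected before any lookup is attempted
    return 0;
  }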
******
Analysis (BUG#730604):
The method Item_field::is_null() determines whether an item is NULL from its
Item_field::field object. However, for an Item_field that refers to a field of
an internal temporary table, Item_field::field points to the field of the original
table that was the source for the temporary table (in this case t1.f3).
Both in the committed test case and in the original bug report, the current
value of t1.f3 is not NULL. This results in an incorrect count of NULLs
for this column. As a consequence, all related Ordered_key buffers are
allocated with incorrect sizes. Depending on the exact query and data,
these incorrect sizes result in various crashes or failed asserts.
Solution:
The correct current value of the field of the internal temporary table is
in Item_field::result_field. Whether that value is NULL is determined by
Item::is_null_result().
This allows us to simplify and speed up some tests and also to remove get_cached_item().
sql/item.h:
Added Item::real_type()
Removed get_cached_item()
sql/opt_range.cc:
Simplify test
sql/sql_select.cc:
Simplify test
sql/sql_show.cc:
Simplify test
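To illustrate the NULL-counting point from the BUG#730604 analysis above, a
standalone sketch (invented names, not server code): a buffer sized from the
NULL count of the wrong column, here the source table's values instead of the
temp table's, gets the wrong size.

  #include <cassert>
  #include <cstddef>
  #include <optional>
  #include <vector>

  using Column = std::vector<std::optional<int>>;

  static std::size_t count_nulls(const Column &col) {
    std::size_t nulls = 0;
    for (const auto &v : col)
      if (!v.has_value()) ++nulls;
    return nulls;
  }

  int main() {
    Column temp_table_col = {std::nullopt, 2, std::nullopt};  // values stored in the temp table
    Column original_col   = {7, 8, 9};                        // current row values of the source table

    // Like the Ordered_key buffers, this buffer must be sized from the temp
    // table's column (2 NULLs), not from the source table (0 NULLs).
    std::vector<std::size_t> null_row_numbers;
    null_row_numbers.reserve(count_nulls(temp_table_col));

    assert(count_nulls(temp_table_col) == 2);
    assert(count_nulls(original_col) == 0);
    return 0;
  }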
Analysis:
The assert failed because the execution code for
partial matching is designed with the assumption that
NULLs on the left side are detected as early as possible,
and a NULL result is returned before any lookups are
performed at all.
However, in the case of an Item_cache object on the left
side, the NULL was not detected, because detection was done
via Item::is_null(), which Item_cache does not override, so
the call resolved to the default Item::is_null(), which
always returns FALSE.
Solution:
Use the property Item::null_value instead of is_null(), which
is properly updated for Item_cache objects as well.
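A standalone sketch of this pattern (invented names, not server code): the
value-fetching call updates a null_value flag as a side effect, and callers
consult that flag afterwards instead of calling is_null().

  #include <cassert>
  #include <optional>

  struct CachedItem {
    std::optional<long> cached;   // empty means the cached value is NULL
    bool null_value = false;      // updated by val_int(), like Item::null_value

    long val_int() {
      null_value = !cached.has_value();
      return cached.value_or(0);
    }
  };

  int main() {
    CachedItem left;                  // caches a NULL left-side value
    long v = left.val_int();          // fetch; this refreshes null_value
    (void)v;
    assert(left.null_value);          // detected before any lookup is attempted
    return 0;
  }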
If a join condition is of the form <t2.key>=<t1.no_key> then the server
performs no index look-ups when looking for matching rows of t2 for
the rows from t1 with t1.no_key=NULL. This happens because the function
add_not_null_conds() injects an additional condition of the form
IS NOT NULL(<t1.no_key>) into the WHERE condition.
However, if the join condition was of the form <t.key>=<outer_ref>, no
additional null-rejecting predicate was generated. This could lead
to extra records in the result set if the value of <outer_ref> happened
to be NULL.
The new code injects null-rejecting predicates of the form
IS NOT NULL(<outer_ref>) and evaluates them before the first row
of the subquery is constructed.
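A standalone sketch of the intended effect (invented names, not server code):
with the null-rejecting guard evaluated first, a NULL outer reference produces
an empty result without any index lookup.

  #include <cstdio>
  #include <map>
  #include <optional>
  #include <vector>

  using Index = std::multimap<int, int>;   // t2.key -> row id

  static std::vector<int> matching_rows(const Index &t2_key_index,
                                        std::optional<int> outer_ref) {
    // The IS NOT NULL(<outer_ref>) guard, evaluated before any lookup.
    if (!outer_ref.has_value()) return {};

    std::vector<int> rows;
    auto range = t2_key_index.equal_range(*outer_ref);
    for (auto it = range.first; it != range.second; ++it)
      rows.push_back(it->second);
    return rows;
  }

  int main() {
    Index idx = {{1, 10}, {1, 11}, {2, 20}};
    std::printf("NULL outer_ref: %zu rows\n", matching_rows(idx, std::nullopt).size());  // 0
    std::printf("outer_ref=1:    %zu rows\n", matching_rows(idx, 1).size());             // 2
    return 0;
  }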
The bug was a result of the fix for bug 668644, which turned out to be
not quite correct. A problem appeared with HAVING conditions containing
more than one predicate. If a query with an ORDER BY clause uses
such a HAVING condition and the required order can be obtained with
a range/index scan, then the HAVING condition has to be pushed into
two different formulas (items). To be able to do this, we have to create
a copy of the AND/OR structure of the pushed condition.
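A standalone sketch of the copying step (invented types, not server code):
a deep copy of the AND/OR tree gives each pushed condition its own structure
to modify.

  #include <memory>
  #include <string>

  struct Cond {
    enum Kind { AND, OR, PRED } kind;
    std::string pred;                     // used when kind == PRED
    std::unique_ptr<Cond> left, right;    // used for AND / OR
  };

  // Recursive deep copy; a shallow copy would share subtrees, and editing one
  // pushed condition would corrupt the other.
  static std::unique_ptr<Cond> clone(const Cond *c) {
    if (!c) return nullptr;
    auto copy = std::make_unique<Cond>();
    copy->kind = c->kind;
    copy->pred = c->pred;
    copy->left = clone(c->left.get());
    copy->right = clone(c->right.get());
    return copy;
  }

  int main() {
    Cond having{Cond::AND, "",
                std::make_unique<Cond>(Cond{Cond::PRED, "sum(a) > 1", nullptr, nullptr}),
                std::make_unique<Cond>(Cond{Cond::PRED, "max(b) < 9", nullptr, nullptr})};
    auto pushed_copy = clone(&having);    // independent tree for the second item
    return pushed_copy ? 0 : 1;
  }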
- The problem was that Mrr_ordered_index_reader's interrupt_read() and resume_read() would
save and restore 1) the index tuple and 2) the rowid (as bytes returned by handler->position()).
Clustered primary key columns were not saved/restored.
They are not explicitly present in the index tuple (i.e. table->key_info[secondary_key].key_parts
doesn't list them), but they are actually there; in particular,
table->field[clustered_primary_key_member].part_of_key(secondary_key) == 1. Index condition pushdown
code [correctly] uses the latter as an indication that the pushed index condition can refer to clustered PK
members.
The fix is to make interrupt_read()/resume_read() save/restore clustered primary key members as well,
so that we get correct values for them when evaluating the pushed index condition.
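A standalone sketch of the save/restore idea (invented structures, not the
handler interface): the interrupted-scan state must include the clustered PK
column in addition to the secondary-key columns and the rowid, or a pushed
condition referring to the PK sees a stale value after resume.

  #include <array>
  #include <cassert>

  struct RowBuffer {
    std::array<int, 2> secondary_key_cols{};  // columns listed in the secondary key
    int clustered_pk_col = 0;                 // implicitly part of every secondary index
    long rowid = 0;
  };

  struct ScanState {
    RowBuffer saved{};
    void interrupt_read(const RowBuffer &current) { saved = current; }  // save everything
    void resume_read(RowBuffer &current) const { current = saved; }     // restore everything
  };

  int main() {
    RowBuffer row{{1, 2}, 42, 7};
    ScanState state;
    state.interrupt_read(row);
    row = RowBuffer{};                    // buffer reused in the meantime
    state.resume_read(row);
    assert(row.clustered_pk_col == 42);   // PK value is intact for the pushed condition
    return 0;
  }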
[3rd attempt: remove the debugging aids, fix comments in testcase]
Analysis:
The reason for the crash was that the inner subquery was executed
via a scan on a final temporary table applied after all other
operations. This final operation is implemented by changing the
contents of the JOIN object of the subquery to represent a table
scan over the temp table. At the same time query optimization of
the outer subquery required evaluation of the inner subquery, which
happened before the actual EXPLAIN. The evaluation left the JOIN
object of the inner subquery in the changed state, where it represented
a table scan over a temp table, and EXPLAIN crashed because the temp
table is not associated with any table reference (TABLE_LIST object).
The reason the JOIN was not restored is that its saving/restoration
was controlled by the join->select_lex->uncacheable flag, which was
not set in the case of materialization.
Solution:
In the methods Item_in_subselect::[single | row]_value_transformer() set:
select_lex->uncacheable|= UNCACHEABLE_EXPLAIN;
In addition, for symmetry, set:
master_unit->uncacheable|= UNCACHEABLE_EXPLAIN;
instead of UNCACHEABLE_DEPENDENT, because if a subquery was not
dependent initially, the changed methods do not change this fact.
The subquery may later become correlated if it is transformed
into an EXISTS query, but it may stay uncorrelated if executed via
materialization.
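A standalone sketch of the flag pattern (names and bit values below are
invented, not the server's constants): restoration of the saved plan is gated
on an uncacheable bitmask, so the transformers must set a bit that forces the
restore even for non-dependent subqueries.

  #include <cassert>
  #include <cstdint>

  constexpr std::uint32_t UNCACHEABLE_DEPENDENT = 1u << 0;  // illustrative values only
  constexpr std::uint32_t UNCACHEABLE_EXPLAIN   = 1u << 1;

  struct Subselect {
    std::uint32_t uncacheable = 0;
    bool plan_modified = false;           // stands in for the rewritten JOIN

    void execute_during_optimization() { plan_modified = true; }

    void restore_for_explain() {
      // Without the EXPLAIN bit the modified plan would leak into EXPLAIN.
      if (uncacheable & UNCACHEABLE_EXPLAIN) plan_modified = false;
    }
  };

  int main() {
    Subselect s;
    s.uncacheable |= UNCACHEABLE_EXPLAIN;   // what the value transformers set in the fix
    s.execute_during_optimization();
    s.restore_for_explain();
    assert(!s.plan_modified);               // EXPLAIN sees the original plan
    return 0;
  }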
- Make DsMrr_impl::dsmrr_init() handle the case of
1. the 1st MRR scan using the DS-MRR strategy (i.e. doing key sorting and rowid sorting)
2. the 2nd MRR scan getting a buffer that's too small to fit one key element
and one rowid element, and so falling back to the default MRR implementation.
In this case, dsmrr_init() is invoked with {primary_handler, secondary_handler}
initialized for a DS-MRR scan and has to reset them to be initialized for the
default MRR scan.
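A standalone sketch of the fallback (invented names, not the DS-MRR code):
when the buffer cannot hold even one key element plus one rowid element,
init falls back to the simple strategy and resets the state left over from
the previous sorted scan.

  #include <cassert>
  #include <cstddef>

  enum class Strategy { DEFAULT_MRR, DS_MRR };

  struct MrrReader {
    Strategy strategy = Strategy::DEFAULT_MRR;
    bool split_handlers = false;   // stands in for the {primary, secondary} handler setup

    void init(std::size_t buffer_bytes, std::size_t key_len, std::size_t rowid_len) {
      if (buffer_bytes < key_len + rowid_len) {
        strategy = Strategy::DEFAULT_MRR;  // fallback: undo DS-MRR specific setup
        split_handlers = false;
        return;
      }
      strategy = Strategy::DS_MRR;
      split_handlers = true;
    }
  };

  int main() {
    MrrReader r;
    r.init(1024, 8, 8);   // 1st scan: buffer is large enough, sort keys and rowids
    assert(r.strategy == Strategy::DS_MRR && r.split_handlers);
    r.init(8, 8, 8);      // 2nd scan: too small, fall back to the default implementation
    assert(r.strategy == Strategy::DEFAULT_MRR && !r.split_handlers);
    return 0;
  }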
(attempt 2, with simplified testcase)
- Added ORDER BY to federated_server to get consistent results
- Sort slow tests first
mysql-test/lib/My/ConfigFactory.pm:
Remove usage of port as the test suite is not using that anymore and it causes some problems in buildbot
mysql-test/lib/mtr_cases.pm:
Sort slow tests first
If a test is marked as 'big_test' also mark it as 'long_test'
mysql-test/suite/federated/federated_server.result:
Added ORDER BY to get consistent results
mysql-test/suite/federated/federated_server.test:
Added ORDER BY to get consistent results