plans or wrong results because the JOIN_CACHE functions ignored the
possibility of interleaving materialized semijoin tables with tables
whose records were stored in join buffers.
These fixes would become mostly unnecessary if the new code of
MWL#90 were merged into 5.3 right now. Yet the fix in the code of
optimize_wo_join_buffering would be needed in any case.
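As a hedged illustration of the situation (the tables, data and optimizer
settings below are made up, not the actual test cases): a query whose IN
subquery is turned into a materialized semijoin while the remaining tables
are joined through join buffers.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (c INT, d INT);
  INSERT INTO t1 VALUES (1,1),(2,2),(3,3);
  INSERT INTO t2 VALUES (1,1),(2,2),(3,3);
  INSERT INTO t3 VALUES (1,1),(2,2);

  -- Assumed MariaDB 5.3 settings: enable semijoin materialization and
  -- join buffering for the non-first tables.
  SET optimizer_switch = 'semijoin=on,materialization=on';
  SET join_cache_level = 2;

  -- The materialized semijoin table for the subquery on t3 may be
  -- interleaved with t1/t2, whose records go through join buffers.
  EXPLAIN SELECT * FROM t1, t2
  WHERE t1.a = t2.a AND t1.b IN (SELECT d FROM t3);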
This bug in the MRR code for the Maria engine caused wrong results when
MRR was used to scan ranges for each record.
Added test cases for bugs 669420 and 669423 (as a duplicate of 669420).
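A sketch of the access pattern behind the MRR fix above, assuming an Aria
table (the engine previously named Maria) and MariaDB's join_cache_level
and optimizer_switch settings; the tables and values are illustrative only,
not the actual test cases.

  CREATE TABLE t1 (a INT) ENGINE=Aria;
  CREATE TABLE t2 (a INT, b INT, KEY (a)) ENGINE=Aria;
  INSERT INTO t1 VALUES (1),(2),(3);
  INSERT INTO t2 VALUES (1,10),(1,11),(2,20),(3,30);

  SET optimizer_switch = 'mrr=on';
  -- Assumed: join_cache_level 6 selects a BKA-style join, where MRR scans
  -- a range of t2 for each buffered t1 record.
  SET join_cache_level = 6;

  SELECT * FROM t1, t2 WHERE t2.a = t1.a;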
Prohibited the use of the hash join algorithm BNLH if the join attributes
need non-binary collations, because BNLH does not support joins over such
attributes yet (see the sketch below). Later this limitation will be lifted.
Changed the default collations for the schemas of some test cases
to preserve the old execution plans.
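A minimal sketch of why the restriction is needed (hypothetical tables):
under a case-insensitive collation the two values below must join, yet
their byte images differ, so a hash join that hashes and compares the raw
bytes would miss the match, which is why BNLH cannot be used here yet.

  CREATE TABLE t1 (a VARCHAR(10) COLLATE latin1_swedish_ci);
  CREATE TABLE t2 (a VARCHAR(10) COLLATE latin1_swedish_ci);
  INSERT INTO t1 VALUES ('abc');
  INSERT INTO t2 VALUES ('ABC');

  -- Must return one row: 'abc' = 'ABC' under the case-insensitive collation,
  -- even though the byte sequences of the two values are different.
  SELECT * FROM t1, t2 WHERE t1.a = t2.a;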
The set of Ordered keys of a rowid merge engine is dense. Thus when
we decide not to create a key for a column that has only NULLs, this
column shouldn't be counted.
Notice that the caller has already precomputed the correct total
number of keys that should be created.
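A hypothetical query of the kind this code serves, assuming MariaDB 5.3's
partial_match_rowid_merge strategy: the second column of the NOT IN
subquery contains only NULLs, so no ordered key is created for it, and it
must not be counted against the precomputed total.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1);
  INSERT INTO t2 VALUES (1, NULL), (2, NULL);   -- column b is all NULLs

  -- Assumed optimizer_switch flag names from MariaDB 5.3.
  SET optimizer_switch = 'materialization=on,partial_match_rowid_merge=on';

  SELECT * FROM t1 WHERE (a, b) NOT IN (SELECT a, b FROM t2);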
The main.ps_ddl test does SELECT * FROM mysql.general_log; this can be really
expensive with --valgrind if previous test cases have put lots of data in the
general log since the last server restart. Fixed by truncating the log at test start.
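A sketch of the fix, assuming the usual way to empty the log table at the
start of the test:

  -- Run at the start of main.ps_ddl so the later SELECT * FROM
  -- mysql.general_log stays cheap even under --valgrind.
  TRUNCATE TABLE mysql.general_log;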
The cause of this bug is that MariaDB 5.3 still processes derived tables
(subqueries in the FROM clause) by fully executing them during the parse
phase. This will be remedied by MWL#106 once it is merged into the main 5.3.
The assert statement is triggered when MATERIALIZATION is ON during EXPLAIN
EXTENDED for a derived table with an IN subquery, as follows:
- mysql_parse calls JOIN::exec for the derived table as if it is regular
execution (not explain).
- When materialization is ON, this call goes all the way to
subselect_hash_sj_engine::exec, which creates a partial match engine
because of NULL presence.
- In order to proceed with normal execution, the hash_sj engine substitutes
itself with the created partial match engine.
- After the parse phase it turns out that this execution was part of
EXPLAIN EXTENDED, which in turn calls
Item_cond::print -> ... -> Item_subselect::print,
which calls engine->print().
Since subselect_hash_sj_engine::exec substituted the current
Item_subselect engine with a partial match engine, eventually we call
its ::print() method. However, the partial match engines are designed only
for execution, hence there is no implementation of this print() method.
The fix temporarily removes the assert, until this code is merged with
MWL#106.
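A hypothetical statement of the shape that walks this path; the tables,
data and the setting below are illustrative, not the actual test case. The
derived table is executed during parsing, its IN subquery gets a
materialized engine, the NULLs force a partial match engine, and EXPLAIN
EXTENDED then needs the missing print().

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (1), (NULL);
  INSERT INTO t2 VALUES (1), (NULL);

  SET optimizer_switch = 'materialization=on';

  EXPLAIN EXTENDED
  SELECT * FROM (SELECT a FROM t1 WHERE a IN (SELECT a FROM t2)) AS dt;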
The bug was a result of missing logic to handle the case
when there are 'expensive' predicates that are not evaluated
during constant table optimization. Such is the case for
the IN predicate, which is considered expensive if it is
computed via materialization. In general, this bug can be
triggered by any expensive predicate, not only by IN.
When FALSE constant predicates are not evaluated during constant
optimization, the execution path changes so that instead of
setting JOIN::zero_result_cause after make_join_select, and
exiting JOIN::exec via the call to return_zero_rows(), execution
ends in JOIN::exec in the branch:
  if (join->tables == join->const_tables)
  {
    ...
    else if (join->send_row_on_empty_set())
      ...
      rc= join->result->send_data(*columns_list);
  }
Unlike return_zero_rows(), this branch did not evaluate the
HAVING clause of the query.
The patch adds a call to evaluate the HAVING clause of a query even
when all tables are constant, because even for an empty result set
some aggregate functions may produce a NULL value.
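A hypothetical query of the affected shape: the only base table is constant
(a single row), the IN predicate is expensive because it is computed via
materialization and happens to be FALSE, and the HAVING clause must still
be evaluated so that the NULL produced by MIN() over the empty result is
filtered out.

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  INSERT INTO t1 VALUES (1);          -- one row: t1 becomes a constant table
  INSERT INTO t2 VALUES (3, 4);

  SET optimizer_switch = 'materialization=on';

  -- Expected result: no rows.  Without evaluating HAVING, a single row
  -- containing NULL would be returned.
  SELECT MIN(a) FROM t1
  WHERE (1, 2) IN (SELECT a, b FROM t2)
  HAVING MIN(a) IS NOT NULL;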
When join buffers are employed, no index scan over the grouping columns of
the first table can be used, because join buffering does not preserve the
order of the rows read from that table (see the sketch after the file list below).
mysql-test/r/join_cache.result:
Added a test case for bug #664508.
Sorted results for some other test cases.
mysql-test/t/join_cache.test:
Added a test case for bug #664508.
Sorted results for some other test cases.
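A hypothetical illustration (assuming MariaDB's join_cache_level variable):
once t2 is joined through a join buffer, an index scan on t1.a no longer
delivers the rows in grouping order, so the GROUP BY has to be resolved
with a temporary table or filesort instead.

  CREATE TABLE t1 (a INT, KEY (a));
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (2), (1), (3);
  INSERT INTO t2 VALUES (10), (20);

  SET join_cache_level = 2;           -- join buffering for t2

  EXPLAIN SELECT t1.a, COUNT(*) FROM t1, t2 GROUP BY t1.a;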
After the patch for bug 663840 had been applied, the test case for
bug 663818 triggered the assert introduced by that patch.
This happened because the patch turned out to be incomplete:
the space needed for a key entry must be taken into account
not only for the record being written into the buffer, but also
for the next record, when figuring out whether the record being
written is the last one for the buffer or not.
When adding a new record into the join buffer employed by the
BNLH join algorithm, the writing procedure JOIN_CACHE::write_record_data
checks whether there is enough space for the record in the buffer.
When doing this it must take into account a possible new key entry
added to the buffer. It might happen, as the bug test case demonstrated,
that there is enough remaining space in the buffer for the record,
but not for the additional key entry for this record.
In this case the key entry overwrites the end of the record, which
might cause a crash or wrong results.
Fixed by taking into account the possible addition of a new key entry
when estimating the remaining free space in the buffer.
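A hypothetical way to reach this boundary case, assuming MariaDB's
join_cache_level and join_buffer_size variables: with a hashed join buffer
(BNLH) squeezed down to its minimum size, the last record written may fit
while its key entry does not, which is exactly what the corrected space
estimate has to account for.

  CREATE TABLE t1 (a INT, b VARCHAR(100));
  CREATE TABLE t2 (a INT, b VARCHAR(100));
  INSERT INTO t1 VALUES (1, REPEAT('x', 100)), (2, REPEAT('y', 100));
  INSERT INTO t2 VALUES (1, REPEAT('x', 100)), (2, REPEAT('y', 100));

  SET join_cache_level = 4;           -- BNLH: join buffer with key entries
  SET join_buffer_size = 128;         -- minimal buffer, so it fills almost at once

  SELECT COUNT(*) FROM t1, t2 WHERE t1.a = t2.a;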
BUG#26447 prefer a clustered key for an index scan, as secondary index is always slower
... which was fixed to cause
BUG#35850 queries take 50% longer
... and was reverted
and
BUG#39653 prefer a secondary index for an index scan, as clustered key is always slower
... which was fixed to cause
BUG#55656 mysqldump takes six days instead of half an hour
... and was amended with a special case workaround
sql/opt_range.cc:
move get_index_only_read_time() into the handler class
sql/sql_select.cc:
use the cost, not the index length, when choosing the cheaper index
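A hypothetical InnoDB table showing why key length alone is a poor
criterion: both the clustered primary key and the short secondary key k_a
can satisfy an index-only scan here, and the cheaper of the two should be
chosen by comparing estimated read costs, not key lengths.

  CREATE TABLE t1 (
    pk INT PRIMARY KEY,                -- clustered key: holds the whole row
    a INT,
    filler CHAR(200),
    KEY k_a (a)                        -- short secondary key, covering for the query below
  ) ENGINE=InnoDB;

  EXPLAIN SELECT COUNT(pk) FROM t1;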