The server crashed when executing queries referring to a view
with GROUP BY an expression containing a non-constant interval.
This happened because Item_date_add_interval::eq neglected the
fact that the method can be applied to an expression of the form
date(col) + interval time_to_sec(col) second
at a time when col cannot yet be evaluated.
An attempt to evaluate time_to_sec(col) in this method resulted
in a crash.
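A minimal sketch of a statement shape that could exercise this code
path (table and column names are hypothetical):

    CREATE TABLE t1 (col DATETIME);
    INSERT INTO t1 VALUES ('2006-01-01 12:00:00'), ('2006-01-02 13:30:00');

    CREATE VIEW v1 AS
      SELECT date(col) + interval time_to_sec(col) second AS dt
      FROM t1
      GROUP BY date(col) + interval time_to_sec(col) second;

    -- Resolving the view's GROUP BY compares the interval expressions
    -- via Item_date_add_interval::eq before col can be evaluated.
    SELECT * FROM v1;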
The optimizer could choose a worse execution plan than in 4.1
for some queries.
This happened because, under some conditions, the optimizer
always preferred range or full index scan access methods to
lookup access methods, even when the latter were much cheaper.
The problem was not observed in 4.1 for the reported query
because the WHERE condition was not of a form that could
cause the problem.
Equality propagation, introduced in 5.0, added an extra
predicate and changed the WHERE condition. The new condition
provoked the optimizer to make a bad choice.
The problem was fixed by the patch for bug 17379.
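For illustration, a hypothetical query of the affected shape (tables,
index, and constant are invented here, not taken from the reported
query):

    CREATE TABLE t1 (a INT, b INT, KEY (a));
    CREATE TABLE t2 (a INT, KEY (a));

    -- Equality propagation extends "t1.a = t2.a AND t2.a = 5" with the
    -- extra predicate t1.a = 5; under the affected cost logic the
    -- optimizer could then prefer a range or full index scan on t1 to
    -- the cheaper ref lookup on t1.a.
    EXPLAIN SELECT * FROM t1, t2 WHERE t1.a = t2.a AND t2.a = 5;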
When a view statement is compiled at CREATE VIEW time, most of the
optimizations should not be performed. Choosing the right
optimization strategy for a subquery is one of them.
Unfortunately, the optimizer resolves the column references of
the left expression of IN subqueries in the process of deciding
which optimization to use (if needed). So there has to be a
special case in Item_in_subselect::fix_fields(): check the
validity of the left expression of the IN subquery in CREATE VIEW
mode and then proceed as normal.
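A minimal sketch of a statement that takes this path (names are
hypothetical):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);

    -- Compiling the view body resolves the left expression of the IN
    -- subquery (the column a) while choosing a subquery strategy, even
    -- though no optimization should be applied at CREATE VIEW time.
    CREATE VIEW v1 AS
      SELECT a FROM t1 WHERE a IN (SELECT a FROM t2);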
Re-work best_access_path() and find_best() to reuse E(#rows(range access)) as
E(#rows(ref[_or_null](const) access)) only when it is appropriate.
[This is the final cumulative patch]
Correct a bug (that I introduced, after using Oracle's database software for
too many years) where the length of the database-sent data is incorrectly
used to infer NULLness.
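For illustration only (this shows the general point, not the affected
driver code): a zero-length value is not NULL, so length alone cannot
be used to infer NULLness:

    SELECT LENGTH(''), '' IS NULL, LENGTH(NULL), NULL IS NULL;
    -- returns 0, 0, NULL, 1: the empty string has length 0 yet is not NULL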
"alter table from MyISAM to MERGE lost data without errors and warnings"
Add a new handlerton flag that prevents the user from altering a
table's storage engine to a storage engine that would lose data.
Both 'blackhole' and 'merge' are marked with the new flag.
Tests included.
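A sketch of the scenario the flag guards against (table name is
hypothetical):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1), (2);

    -- A MERGE table stores no rows of its own, so this conversion used
    -- to discard the two rows silently; with the new flag the ALTER is
    -- rejected instead.
    ALTER TABLE t1 ENGINE=MERGE;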
When converting DISTINCT to GROUP BY, if the columns come from the
covering index and are referenced twice in the SELECT list, the
optimizer creates an improper processing sequence. This happens
because the columns of the covering index are not recognized as
such and are treated as non-index columns.
Generally speaking, duplicate columns can safely be removed from the
GROUP BY/DISTINCT list because this neither adds rows to nor removes
rows from the result set. Duplicates can be removed even if they are
not consecutive (unlike ORDER BY, where duplicate columns can be
removed only if they are consecutive).
So we can safely transform "SELECT DISTINCT a,a FROM ... ORDER BY a" to
"SELECT a,a FROM ... GROUP BY a ORDER BY a" instead of
"SELECT a,a FROM .. GROUP BY a,a ORDER BY a". We can even transform
"SELECT DISTINCT a,b,a FROM ... ORDER BY a,b" to
"SELECT a,b,a FROM ... GROUP BY a,b ORDER BY a,b".
The fix checks for duplicate columns in the SELECT list when
constructing the GROUP BY list in the DISTINCT-to-GROUP-BY
transformation and skips the columns that are already present.
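A concrete version of the transformation above, with a covering index
so that the duplicate references both come from the index (the schema
is illustrative):

    CREATE TABLE t1 (a INT, b INT, KEY cover_ab (a, b));

    -- With the fix the implicit GROUP BY list contains a only once,
    -- i.e. this is executed as: SELECT a,a FROM t1 GROUP BY a ORDER BY a
    SELECT DISTINCT a, a FROM t1 ORDER BY a;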