subquery returning multiple rows
Error handling was missing when processing subqueries in a WHERE
condition and when assigning a SELECT result to a @variable.
This could cause server crashes.
Fixed by adding error handling code to both the WHERE
condition evaluation and the assignment to a @variable.
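A minimal illustration of the @variable case (table and column names are made up):
  CREATE TABLE t1 (i INT);
  INSERT INTO t1 VALUES (1), (2);
  -- The subquery returns more than one row; before the fix this kind of
  -- assignment could crash the server, now it fails cleanly with the
  -- "Subquery returns more than 1 row" error.
  SET @v = (SELECT i FROM t1);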
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with predicates such as = and <=>
is incorrect.
Fix: disable spatial indexes for such queries.
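A hypothetical query of the affected kind (table, column, and index names are made up):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL INDEX sp_idx (g)) ENGINE=MyISAM;
  CREATE TABLE t2 (g GEOMETRY NOT NULL) ENGINE=MyISAM;
  -- Forcing the spatial index for an equality join predicate is what the
  -- fix now disallows; spatial indexes are only meant for searches such
  -- as MBRContains()/MBRWithin() in the WHERE clause.
  SELECT COUNT(*) FROM t1 FORCE INDEX (sp_idx) JOIN t2 ON t1.g = t2.g;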
Bug #40925: Equality propagation takes non indexed attribute
Query execution plans and execution time of queries like
select a, b, c from t1
where a > '2008-11-21' and b = a limit 10
depended on the order of the equality operator's parameters:
"b = a" and "a = b" were not treated the same.
The equality propagation algorithm has been fixed:
the substitute_for_best_equal_field function should not
substitute a field for an equal field if both fields belong
to the same table.
Various parts of the code used different 'precision' arguments for sprintf("%g") when converting
floating point numbers to a string. This led to differences in results in some cases
depending on whether the text-based or prepared statements protocol is used for a query.
Fixed by changing arguments to sprintf("%g") to always be 15 (DBL_DIG) so that results are
consistent regardless of the protocol.
This patch will be null-merged to 6.0 as the problem does not exist there (it was fixed
by the patch for WL#2934).
A table could be marked dependent because it is
either 1) an inner table of an outer join, or 2) part of a
STRAIGHT_JOIN. In the STRAIGHT_JOIN case table->maybe_null should not
be set. The fix is to set st_table::maybe_null to 'true' only
for those tables which are used in an outer join.
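For illustration (hypothetical tables), only the first query should set maybe_null for t2:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- t2 is the inner table of an outer join: its columns may be
  -- NULL-complemented, so maybe_null must be set for it.
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;
  -- STRAIGHT_JOIN is an inner join: there are no NULL-complemented rows,
  -- so maybe_null must not be set.
  SELECT * FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;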
The problem is not about intervals and does not actually cause a full table scan.
There is an optimization for DISTINCT: with
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the joined table once one conforming row has been found.
It stopped working in 5.0 because evaluate_join_record() returned
NESTED_LOOP_OK in that case, which does not break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
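A hypothetical query showing the shape of the optimization (names are made up):
  CREATE TABLE t1 (id INT, a INT);
  CREATE TABLE t2 (ref_id INT);
  -- Once one t2 row matching a given t1 row has been found, reading the
  -- remaining matching t2 rows cannot produce new DISTINCT values of
  -- t1.a, so the scan of t2 can stop for that t1 row.
  SELECT DISTINCT t1.a FROM t1, t2 WHERE t1.id = t2.ref_id;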
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check whether the constant expression is NULL
before attempting to store it in a field, since attempts to store NULL in a
NOT NULL field caused query errors.
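For example (hypothetical table):
  CREATE TABLE t1 (a BIGINT NOT NULL);
  -- The constant expression 1 + NULL evaluates to NULL; this used to fail
  -- with error 1048, now the query simply returns no rows.
  SELECT * FROM t1 WHERE a = 1 + NULL;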
BETWEEN was more lenient than the greater-than and less-than operators
about what it accepted as a DATE/DATETIME in comparisons. This ChangeSet
makes < and > comparisons similarly robust with regard to trailing garbage
(" GMT-1") and "missing" leading zeros. Now all three comparators behave
similarly in that they throw a warning for "junk" at the end of the data,
but then proceed anyway if possible. Previously, < and > fell back on a
string (rather than date) comparison when a warning condition was raised
during the string-to-date conversion. Now the fallback only happens on
actual errors, while warning conditions still result in a warning being
delivered to the client.
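For illustration (hypothetical table with a DATETIME column d):
  CREATE TABLE t1 (d DATETIME);
  -- The trailing " GMT-1" raises a warning, but the operand is still
  -- compared as a date instead of falling back to a string comparison.
  SELECT * FROM t1 WHERE d > '2008-11-21 12:00:00 GMT-1';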
variable in where clause.
Problem: the new_item() method of Item_uint used an incorrect
constructor. "new Item_uint(name, max_length)" calls
Item_uint::Item_uint(const char *str_arg, uint length) which assumes the
first argument to be the string representation of the value, not the
item's name. This could result in either a server crash or incorrect
results depending on usage scenarios.
Fixed by using the correct constructor in new_item():
Item_uint::Item_uint(const char *str_arg, longlong i, uint length).
tables or more
The problem was that the optimizer used the join buffer in cases where
the result set is ordered by filesort. This resulted in the ORDER BY
clause being ignored and the records being returned in the order
determined by the order of matching records in the last table of the join.
Fixed by relaxing the condition in make_join_readinfo() to take
filesort-ordered result sets into account, not only index-ordered ones.
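A hypothetical multi-table join of the affected shape (names are made up):
  CREATE TABLE t1 (id INT, a INT);
  CREATE TABLE t2 (id INT, b INT);
  CREATE TABLE t3 (id INT, c INT);
  -- The join buffer could be chosen here even though the result must be
  -- ordered by filesort, so the ORDER BY was effectively ignored.
  SELECT t1.a, t2.b, t3.c
  FROM t1, t2, t3
  WHERE t1.id = t2.id AND t2.id = t3.id
  ORDER BY t3.c;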
all space column names.
The parser has been modified to check VIEW column names
with the check_column_name function and to report an error
on empty and all space column names (same as for TABLE
column names).
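For illustration (the identifiers are made up), both statements now report an error:
  CREATE TABLE t1 (`   ` INT);            -- already rejected for tables
  CREATE VIEW v1 (`   `) AS SELECT 1;     -- now rejected for views as well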
Inserting Data.
The problem was that under some circumstances the Field class was not
properly initialized before the create_length_to_internal_length()
function was called, which led to an assertion failure.
The fix is to do the proper initialization.
The user-visible problem was that under some circumstances a
CREATE TABLE ... SELECT statement crashed the server or led
to a wrong error message (wrong results).
Declaring an all space column name in SELECT ... FROM DUAL or in a view
led to a misleading warning message:
"Leading spaces are removed from name ' '".
The Item::set_name method has been modified to raise a warning like
"Name ' ' has become ''" when an all space identifier is truncated
to an empty string identifier, instead of the
"Leading spaces are removed from name ' '" warning message.
The bug caused memory corruption for some queries with a top-level OR
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for the KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular, it was calculated incorrectly when the
WHERE condition was an OR formula whose disjuncts were AND formulas
including equalities and other sargable predicates.
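A hypothetical WHERE clause of the affected shape (names are made up):
  CREATE TABLE t1 (a INT, b INT, c INT, KEY (a));
  -- A top-level OR whose disjuncts are AND formulas mixing equalities
  -- with other sargable predicates, the pattern for which the allocation
  -- bound for KEY_FIELD/SARGABLE_PARAM was miscalculated.
  SELECT * FROM t1 WHERE (a = 1 AND b > 10) OR (a = 2 AND c < 5);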
A SELECT query with more than 31 nested dependent SELECT queries returned
a wrong result.
A new error message has been added: ER_TOO_HIGH_LEVEL_OF_NESTING_FOR_SELECT.
It is reported as: "Too high level of nesting for select".
A test case has been added.
The underlying problem was fixed by the fix for bug #17379.
The problem was that under certain conditions
the optimizer always preferred range or full index
scan access methods to lookup access methods, even
when the latter were much cheaper.
after single-row table substitution could lead to a wrong result set.
The bug happened because the function Item_field::replace_equal_field
erroneously assumed that any field included in a multiple equality
with a constant had already been substituted for this constant.
This is not true for fields that become constant after row substitutions
for constant tables.
were evaluated.
According to the new rules for string comparison, partial indexes on text
columns can be used in the same cases in which partial indexes on varchar
columns can be used.
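For illustration (hypothetical table):
  CREATE TABLE t1 (t TEXT, KEY (t(10)));
  -- Under the new comparison rules the prefix index t(10) can be used
  -- here, just as it could be for a varchar column.
  SELECT * FROM t1 WHERE t = 'abc';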
Ignoring error codes from type conversion allows default (wrong) values to
go unnoticed in the formation of index search conditions.
Fixed by correctly checking for conversion errors.
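One hypothetical way such a condition can arise (the column type and literal are assumptions for illustration):
  CREATE TABLE t1 (d DATE, KEY (d));
  -- '2007-20-99' is not a valid date; if the conversion error were
  -- ignored, a default (wrong) value could silently be used when forming
  -- the index search condition.
  SELECT * FROM t1 WHERE d = '2007-20-99';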
The bug could cause a sub-optimal execution plan to be chosen for
a single-table query if a unique index with many null keys was
defined for the table.
It happened because the code of the check_quick_keys function
assumed that any key may occur in a unique index
only once. Yet this is not true for keys with nulls, which may
have multiple occurrences in the index.
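For illustration (hypothetical table):
  CREATE TABLE t1 (a INT, b INT, UNIQUE KEY (a));
  INSERT INTO t1 VALUES (NULL, 1), (NULL, 2), (NULL, 3);
  -- The unique index on a contains several NULL keys, so the range
  -- optimizer must not assume each key value occurs at most once.
  SELECT * FROM t1 WHERE a IS NULL;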
We use the INT_RESULT type for the 'if', 'case', and 'coalesce' functions
if all arguments are of type INT, regardless of the arguments' unsigned flag,
so sometimes the result can exceed the INT bounds.
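For illustration:
  -- All arguments are integers, so the result type is INT_RESULT; the
  -- value 18446744073709551615 only fits the range when the unsigned
  -- flag of the arguments is taken into account.
  SELECT IF(1, 18446744073709551615, 1);
  SELECT COALESCE(18446744073709551615, 1);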
After the fix for bug#21798, JOIN stores the pointer to the buffer for sorting
fields. It is used while sorting for grouping and for ordering. If the ORDER BY
clause has more elements than the GROUP BY clause, a memory overrun occurs.
Now the length of the ORDER BY list is always passed to the
make_unireg_sortorder() function, and it allocates a buffer big enough
for the bigger list.
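A hypothetical query of the affected shape (names are made up):
  CREATE TABLE t1 (a INT, b INT, c INT);
  -- The ORDER BY list is longer than the GROUP BY list, so the sort
  -- buffer must be allocated for the larger of the two.
  SELECT a FROM t1 GROUP BY a ORDER BY a, MIN(b), MAX(c);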
With MySQL 3.23 and 4.0, the syntax 'LIMIT N, -1' is accepted, and returns
all the rows located after row N. This behavior, however, is not the
intended result, and defeats the purpose of LIMIT, which is to constrain
the size of a result set.
With MySQL 4.1 and later, this construct is correctly detected as a syntax
error.
This fix does not change the production code, and only adds a new test case
to improve test coverage in this area, to enforce in the test suite the
intended behavior.
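The construct in question (the table name is made up):
  CREATE TABLE t1 (a INT);
  -- Accepted by MySQL 3.23 and 4.0 (returning all rows after the first);
  -- rejected as a syntax error by MySQL 4.1 and later.
  SELECT * FROM t1 LIMIT 1, -1;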
account predicates that become sargable after reading const tables.
In some cases this resulted in choosing non-optimal execution plans.
Now info about such potentially sargable predicates is saved in
an array, and after reading the const tables we check whether these
predicates have become sargable.
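A hypothetical example (names are made up):
  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  CREATE TABLE t2 (a INT, KEY (a));
  -- t1 is read as a const table (single-row primary key lookup); only
  -- after that does t2.a = t1.v become a sargable predicate usable for
  -- an index lookup on t2.
  SELECT * FROM t1, t2 WHERE t1.id = 5 AND t2.a = t1.v;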
strings
MySQL was setting the flag HA_END_SPACE_KEYS for all keys that reference
text or varchar columns with a collation different from binary.
This was done to correctly handle the situation where a lookup on such a key
may return more than one row because of the presence of many rows that differ
only in the amount of trailing space in the table's string column.
Inserting such values, however, violates the unique checks on
INSERT/UPDATE. Thus the flag must not be set, as it prevents the optimizer
from choosing a faster access method.
This fix removes the setting of the HA_END_SPACE_KEYS flag.
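For illustration (hypothetical table): values that differ only in trailing spaces are already rejected by the unique check, so the flag is not needed for such keys:
  CREATE TABLE t1 (s VARCHAR(10), UNIQUE KEY (s));
  INSERT INTO t1 VALUES ('a');
  -- Fails with a duplicate-key error: under a non-binary collation 'a'
  -- and 'a ' compare as equal, so rows differing only in trailing space
  -- cannot coexist under a unique key.
  INSERT INTO t1 VALUES ('a ');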