column is used for ORDER BY
Problem: filesort is not meant to handle zero-length sort data
(e.g. CHAR(0)), which led to a server crash.
Fix: disregard sort order if sort data record length is 0 (nothing
to sort).
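A minimal repro sketch (table and data are hypothetical, not from the original report):
  CREATE TABLE t1 (a CHAR(0));
  INSERT INTO t1 VALUES (''), (''), (NULL);
  SELECT a FROM t1 ORDER BY a;  -- sort key length is 0; crashed before the fix
  DROP TABLE t1;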
NULLable BIGINT and INT columns in comparison
Problem: a consequence of the fix for bug 43668.
Some internal initialization of Arg_comparator was missed,
which could lead to unpredictable (wrong) comparison
results.
Fix: always properly initialize Arg_comparator
before it is used.
Arg_comparator uses Item_cache objects to store constants being compared when
they need a type conversion. Because this cache was not initialized properly,
Arg_comparator could produce wrong comparison results.
The Arg_comparator::cache_converted_constant function now initializes the cache
prior to use.
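A hedged illustration of the affected patterns (schema and data are hypothetical):
  CREATE TABLE t1 (a BIGINT, b INT);
  INSERT INTO t1 VALUES (4294967296, 1), (NULL, 2);
  SELECT * FROM t1 WHERE a = b;             -- nullable BIGINT/INT column comparison
  SELECT * FROM t1 WHERE a = '4294967296';  -- constant needing a type conversion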
field='const1' AND field='const2' in some cases
When building multiple equality predicates that contain a constant
compared as a datetime (with a field), we should take this fact
into account and compare that constant with the other possible
constants as datetimes as well.
E.g. for
SELECT ... WHERE a='2001-01-01' AND a='2001-01-01 00:00:00'
we should compare '2001-01-01' with '2001-01-01 00:00:00' as
datetimes, not as strings.
init_read_record() - (records.cc:274)
Item_cond::used_tables_cache was accessed in
records.cc#init_read_record() without being initialized. It had
not been initialized because it was wrongly assumed that the
Item's variables would not be accessed, and hence
quick_fix_field() was used instead of fix_fields() to save a few
CPU cycles at creation time.
The fix is to properly initialize the Item by replacing
quick_fix_field() with fix_fields().
memory
The server was performing an invalid class typecast, which set a
wrong value for the maximum number of items in an internal
structure used in equality propagation.
Fixed by removing the wrong typecast and asserting the type
of the Item where the cast is performed.
subquery returning multiple rows
Error handling was missing when handling subqueries in WHERE
clauses and when assigning a SELECT result to a @variable.
This caused server crashes.
Fixed by adding error handling code to both the WHERE
condition evaluation and the assignment to a @variable.
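The failing pattern, sketched (hypothetical table; the expected outcome is error 1242, ER_SUBQUERY_NO_1_ROW, rather than a crash):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  SET @v = (SELECT a FROM t1);                    -- subquery returns two rows
  SELECT * FROM t1 WHERE a = (SELECT a FROM t1);  -- same error path in WHERE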
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=>, and similar
predicates is incorrect.
Fix: disable spatial indexes for such queries.
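A hedged sketch of a query the fix now keeps away from the spatial index (schema is hypothetical; older servers require MyISAM for SPATIAL indexes):
  CREATE TABLE g1 (g GEOMETRY NOT NULL, SPATIAL INDEX (g)) ENGINE=MyISAM;
  CREATE TABLE g2 (g GEOMETRY NOT NULL) ENGINE=MyISAM;
  SELECT COUNT(*)
  FROM g1 FORCE INDEX (g), g2
  WHERE g1.g = g2.g;  -- '=' join predicate must not be served by the spatial index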
The TABLE::reginfo.impossible_range is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening, which could lead to an empty result on complex queries:
a query might set the impossible_range flag on a table, and when the query
finished, all tables were returned to the table cache. The next query that
used the table through an index would see the stale flag
and wrongly return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
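An illustrative, simplified sequence (hypothetical table; the original report involved more complex queries, but the mechanism is the same stale flag):
  CREATE TABLE t1 (a INT, KEY (a));
  INSERT INTO t1 VALUES (1), (2), (3);
  SELECT * FROM t1 WHERE a > 5 AND a < 2;  -- impossible range: flag gets set
  SELECT * FROM t1 WHERE a > 0;            -- could wrongly return nothing before the fix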
Bug #40925: Equality propagation takes non indexed attribute
Query execution plans and execution time of queries like
select a, b, c from t1
where a > '2008-11-21' and b = a limit 10
depended on the order of the equality operator's arguments:
"b = a" and "a = b" were not treated the same.
The equality propagation algorithm has been fixed:
the substitute_for_best_equal_field function no longer
substitutes a field for an equal field if both fields belong
to the same table.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer did not take into account the singular case
where all the predicates at an AND level of the WHERE clause are
eliminated. That could lead to wrong results.
Fix: replace (a=a AND a=a ...) with TRUE if all the
predicates have been eliminated.
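A sketch of the singular case (hypothetical table; the column is NOT NULL so a=a is trivially true):
  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 0;  -- must return all rows, not none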
Various parts of code used different 'precision' arguments for sprintf("%g") when converting
floating point numbers to a string. This led to differences in results in some cases
depending on whether the text-based or prepared statements protocol is used for a query.
Fixed by changing arguments to sprintf("%g") to always be 15 (DBL_DIG) so that results are
consistent regardless of the protocol.
This patch will be null-merged to 6.0 as the problem does not exist there (fixed by the
patch for WL#2934).
A table could be marked dependent because it is
either 1) an inner table of an outer join, or 2) part of a
STRAIGHT_JOIN. In the STRAIGHT_JOIN case table->maybe_null should not
be set. The fix is to set st_table::maybe_null to 'true' only
for those tables that are used in an outer join.
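A hedged contrast of the two cases (hypothetical tables):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT NOT NULL);
  SELECT t2.a FROM t1 LEFT JOIN t2 ON t1.a = t2.a;      -- t2 rows can be NULL-complemented
  SELECT t2.a FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;  -- join order is forced, but t2.a can never be NULL here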
Field_varstring::store
The code that temporarily saved the read set and write set bitmaps
(so that it could set them to all columns for debugging purposes) did not
expect that table->read_set and table->write_set can be the same bitmap,
and always saved both in sequence.
As a result the original value was never restored.
Fixed by saving and restoring the original value only once when the two
sets are the same (in a dedicated set of functions).
The problem is not about intervals and doesn't actually cause a full table scan.
There is an optimization for DISTINCT: with
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the joined table once one conforming row has been found.
It stopped working in 5.0 because evaluate_join_record() returned
NESTED_LOOP_OK in that case, which does not break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
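A sketch of the pattern the optimization targets (hypothetical tables): DISTINCT is computed only over columns of the first table, so one matching t2 row per t1 row is enough:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1), (1), (1);
  SELECT DISTINCT t1.a FROM t1, t2;  -- should stop reading t2 after the first match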
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
but not collation.
The problem here was that text literals in a view were always
dumped with a character set introducer. That led to losing
collation information.
The fix is to dump the character set introducer only if it was
present in the original query. This is now possible because,
after WL#4052, the view is dumped in its original character
set, so the character set of string literals can no longer be lost.
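A hedged example (assumes a latin1 connection character set; names are hypothetical):
  CREATE VIEW v1 AS SELECT 'abc' COLLATE latin1_bin AS c;
  SHOW CREATE VIEW v1;
  -- Before the fix the dumped body forced an introducer (_latin1'abc'),
  -- dropping the COLLATE clause; now the original literal text is kept.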
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check if the constant expression is NULL
before attempting to store it in a field. Attempts to store NULL in a
NOT NULL field caused query errors.
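The failing comparison, sketched (hypothetical table; 1 + NULL folds to NULL):
  CREATE TABLE t1 (a BIGINT NOT NULL);
  INSERT INTO t1 VALUES (1);
  SELECT * FROM t1 WHERE a = 1 + NULL;  -- raised error 1048 before; now returns an empty set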
BETWEEN was more lenient with regard to what it accepted as a DATE/DATETIME
in comparisons than the greater-than and less-than operators were. This
changeset makes < and > comparisons similarly robust with regard to trailing
garbage (" GMT-1") and "missing" leading zeros. Now all three comparators
behave alike in that they throw a warning for "junk" at the end of the data,
but then proceed anyway if possible. Previously, < and > fell back on a
string (rather than date) comparison when a warning condition was raised
during the string-to-date conversion. Now the fallback only happens on actual
errors, while warning conditions still result in a warning being delivered to the client.
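An illustrative pair of queries (hypothetical table) showing the unified behavior:
  CREATE TABLE t1 (d DATETIME);
  INSERT INTO t1 VALUES ('2008-11-21 00:00:01');
  SELECT * FROM t1 WHERE d > '2008-11-21 GMT-1';  -- warns about ' GMT-1', still compares as a date
  SELECT * FROM t1 WHERE d BETWEEN '2008-11-21 GMT-1' AND '2008-11-22';  -- same warning, same semantics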
variable in where clause.
Problem: the new_item() method of Item_uint used an incorrect
constructor. "new Item_uint(name, max_length)" calls
Item_uint::Item_uint(const char *str_arg, uint length) which assumes the
first argument to be the string representation of the value, not the
item's name. This could result in either a server crash or incorrect
results depending on usage scenarios.
Fixed by using the correct constructor in new_item():
Item_uint::Item_uint(const char *str_arg, longlong i, uint length).
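One hypothetical trigger (the exact statement depends on where the optimizer clones the unsigned item via new_item(); this is a guess, not the original test case):
  CREATE TABLE t1 (a BIGINT UNSIGNED);
  INSERT INTO t1 VALUES (18446744073709551615);
  SET @v = 18446744073709551615;
  SELECT * FROM t1 WHERE a = @v;  -- could crash or misevaluate before the fix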