UNIQUE (eq-ref) lookups result in the table being considered a "constant" table.
Queries that consist only of constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case for 5.0, as the issue shows up only in the slow query log, and
other counters can give subtly different values (with regard to counting in
create_sort_index(), synthetic rows in ROLLUP, etc.).
BETWEEN was more lenient than the greater-than and less-than operators with
regard to what it accepted as a DATE/DATETIME in comparisons. This ChangeSet
makes < and > comparisons similarly robust with regard to trailing garbage
(" GMT-1") and "missing" leading zeros. Now all three comparators behave
similarly in that they throw a warning for "junk" at the end of the data,
but then proceed anyway if possible. Previously, < and > fell back on a
string- (rather than date-) comparison when a warning condition was raised
in the string-to-date conversion. Now the fallback happens only on actual
errors, while warning conditions still result in a warning being delivered
to the client.
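Illustrative sketch of the unified behavior (table and literals invented):

    CREATE TABLE t (d DATE);
    INSERT INTO t VALUES ('2007-06-05');
    -- Trailing junk and missing leading zeros: all three comparators now
    -- warn about the junk but still compare the operands as dates,
    -- instead of < and > silently falling back to a string comparison.
    SELECT * FROM t WHERE d > '2007-6-1 GMT-1';
    SELECT * FROM t WHERE d < '2007-6-9 GMT-1';
    SELECT * FROM t WHERE d BETWEEN '2007-6-1' AND '2007-6-9 GMT-1';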
The bug is a regression introduced by the fix for bug30596. The problem
was that in cases where the groups in GROUP BY correspond to only one row
each and there is an ORDER BY, the GROUP BY was removed and the ORDER BY
rewritten to ORDER BY <group_by_columns> without checking whether the
columns in GROUP BY and ORDER BY are compatible. This led to incorrect
ordering of the result set, as it was sorted using the GROUP BY columns.
Additionally, the code discarded ASC/DESC modifiers from ORDER BY even
if its columns were compatible with the GROUP BY ones.
This patch fixes the regression by checking whether the ORDER BY columns
form a prefix of the GROUP BY ones, and rewriting the ORDER BY only in
that case, preserving the ASC/DESC modifiers. That check is sufficient,
since the GROUP BY columns contain a unique index.
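Sketch of the kind of query affected (schema invented): with a unique key
on a, every group has exactly one row, so the GROUP BY may be elided.

    CREATE TABLE t (a INT PRIMARY KEY, b INT);
    INSERT INTO t VALUES (1, 30), (2, 20), (3, 10);
    -- ORDER BY b is not a prefix of GROUP BY a; the unconditional rewrite
    -- to ORDER BY <group_by_columns> returned rows sorted by a, not b.
    SELECT a, b FROM t GROUP BY a ORDER BY b;
    -- Even with compatible columns, the DESC modifier used to be dropped:
    SELECT a, b FROM t GROUP BY a ORDER BY a DESC;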
Using CREATE FUNCTION or DROP FUNCTION while the server runs with
--skip-grant-tables caused spurious out-of-memory errors. The code in
mysql_create_function() and mysql_drop_function() assumed that the only
reason for UDFs being uninitialized at that point is an out-of-memory
error during initialization. However, another possible reason is the
--skip-grant-tables option, in which case UDF initialization is skipped
and UDFs are unavailable.
The solution is to check whether mysqld is running with
--skip-grant-tables and issue a proper error in such a case.
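Hypothetical session (UDF name and library borrowed from the bundled
udf_example) against a server started with mysqld --skip-grant-tables:

    -- Before the fix these reported a misleading out-of-memory error;
    -- now they report that UDFs are unavailable because initialization
    -- was skipped along with the grant tables.
    CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';
    DROP FUNCTION metaphon;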
HOUR(), MINUTE(), ... returned spurious results when used on a DATE-cast.
This happened because the DATE-cast object did not overload the get_time()
method of its superclass Item. The default method was inappropriate here
and misinterpreted the data.
The patch adds the missing method; get_time() on DATE-casts now returns
SQL NULL on NULL input, and 0 otherwise. This coincides with the way
DATE columns behave.
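Illustrative queries (values invented) showing the post-patch behavior:

    -- A DATE has no time part: get_time() now yields 0, not garbage.
    SELECT HOUR(CAST('2007-08-09 12:34:56' AS DATE));    -- 0
    SELECT MINUTE(CAST('2007-08-09 12:34:56' AS DATE));  -- 0
    -- NULL input still gives SQL NULL, as a DATE column would.
    SELECT HOUR(CAST(NULL AS DATE));                     -- NULL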
The problem (reported for a query with a variable in its WHERE clause):
the new_item() method of Item_uint used an incorrect constructor.
"new Item_uint(name, max_length)" calls
Item_uint::Item_uint(const char *str_arg, uint length), which assumes its
first argument to be the string representation of the value, not the
item's name. This could result in either a server crash or incorrect
results, depending on the usage scenario.
Fixed by using the correct constructor in new_item():
Item_uint::Item_uint(const char *str_arg, longlong i, uint length).
Calculating the estimated number of records for a range scan may take a
significant amount of time, and it was impossible for a user to interrupt
that process by killing the connection or the query.
Fixed by checking the thread's 'killed' status in check_quick_keys() and
interrupting the calculation process if it is set to a non-zero value.
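Sketch of the intended use (identifiers invented): while one connection is
still estimating ranges, another can now abort it.

    -- Connection 1: range analysis over a multi-part key can be costly.
    SELECT COUNT(*) FROM t
    WHERE k1 IN (1,2,3,4) AND k2 IN (1,2,3,4) AND k3 IN (1,2,3,4);
    -- Connection 2: this now also interrupts check_quick_keys(),
    -- instead of taking effect only after the estimation finished.
    KILL QUERY 42;  -- 42: connection 1's id from SHOW PROCESSLIST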
The HAVING clause is subject to the same rules as the SELECT list
regarding the use of aggregated and non-aggregated columns.
However, this was not enforced when implicit grouping arose from the
use of aggregate functions.
Fixed by performing the same checks for HAVING as for SELECT.
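Illustrative queries (schema invented): the aggregate makes the whole table
one implicit group, so a non-aggregated column in HAVING is now rejected
just as it would be in the SELECT list.

    CREATE TABLE t (a INT, b INT);
    -- Accepted: HAVING refers only to an aggregate.
    SELECT MAX(a) FROM t HAVING MAX(a) > 10;
    -- Rejected after the fix, like "SELECT MAX(a), b FROM t" would be:
    SELECT MAX(a) FROM t HAVING b > 10;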
The new default database engine for an altered table was reassigned to
the old one. That is wrong in itself, and, because the engine for a
subpartition gets that new value, it also leads to a DBUG_ASSERT failure
in mysql_unpack_partition().
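Hypothetical reproduction sketch (partition layout invented), changing the
engine of a subpartitioned table:

    CREATE TABLE t (a INT, b DATE) ENGINE=MyISAM
    PARTITION BY RANGE (YEAR(b))
    SUBPARTITION BY HASH (a) SUBPARTITIONS 2 (
      PARTITION p0 VALUES LESS THAN (2000),
      PARTITION p1 VALUES LESS THAN MAXVALUE
    );
    -- The subpartitions pick up the (wrongly reassigned) default engine,
    -- which previously could trip the assertion during unpacking.
    ALTER TABLE t ENGINE=InnoDB;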
Item_in_subselect's only externally callable method is val_bool().
However, the nullability in the wrapper class (Item_in_optimizer) was
established by calling the "forbidden" method val_int().
Fixed to use the correct method, val_bool(), to establish the nullability
of Item_in_subselect in Item_in_optimizer.
Item_func_inet_ntoa and Item_func_conv inherited the 'maybe_null' flag
from an argument, which is wrong: both can return NULL even for non-NULL
arguments. That is now fixed.
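For example (illustrative values), both can produce NULL from non-NULL input:

    SELECT INET_NTOA(4294967296);  -- NULL: exceeds the 32-bit IPv4 range
    SELECT CONV(10, 37, 10);       -- NULL: base 37 is outside 2..36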
There are two problems with ROUND(X, D) on an exact numeric
(DECIMAL, NUMERIC type) field of a table:
1) The implementation of the ROUND function would change the number of decimal
places regardless of the value decided upon in fix_length_and_dec. When the
number of decimal places is not constant, this would cause an inconsistent
state where the number of digits was less than the number of decimal places,
which crashes filesort.
Fixed by not allowing the ROUND operation to add more decimal places than
were decided in fix_length_and_dec.
2) fix_length_and_dec would allow the number of decimals to be greater than
the maximum configured value for constant values of D. This led to the same
crash as in (1).
Fixed by not allowing the above in fix_length_and_dec.
The fix is a copy of Martin Friebe's suggestion.
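Illustrative sketch (schema invented) of case (1), a non-constant D on a
DECIMAL column feeding filesort, and of case (2) with a too-large constant D:

    CREATE TABLE t (x DECIMAL(10,2), d INT);
    INSERT INTO t VALUES (1.23, 1), (4.56, 5);
    -- (1) The result scale is fixed once in fix_length_and_dec; at
    -- execution time ROUND may no longer exceed it, so sorting the
    -- result no longer crashes filesort.
    SELECT ROUND(x, d) AS r FROM t ORDER BY r;
    -- (2) A constant D beyond the configured maximum number of decimals
    -- is now capped in fix_length_and_dec as well.
    SELECT ROUND(x, 35) FROM t;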
Added a check of no_appended, which will be false if anything, including
the empty string, is in the result.
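Presumably the GROUP_CONCAT flag; illustrative queries (schema invented)
showing the distinction the check preserves:

    CREATE TABLE t (a VARCHAR(10));
    INSERT INTO t VALUES ('');
    -- An empty string was appended: no_appended is false, result is ''.
    SELECT GROUP_CONCAT(a) FROM t;
    -- Nothing was appended at all: result is NULL.
    SELECT GROUP_CONCAT(a) FROM t WHERE 1 = 0;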