In a UNION without parentheses, an ORDER BY at the end applies to the union as a
whole. It therefore must not interfere with the individual SELECT parts of the
union.
Fixed by changing our parser rules appropriately.
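As a minimal illustration (table and column names are hypothetical), the trailing
ORDER BY below sorts the combined result, not the last SELECT alone:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (3), (1);
  INSERT INTO t2 VALUES (2);

  SELECT a FROM t1
  UNION
  SELECT a FROM t2
  ORDER BY a;        -- sorts the union result as a whole: 1, 2, 3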
with null values
For queries containing GROUP_CONCAT(DISTINCT fields ORDER BY fields), there
was a limitation that the DISTINCT fields had to be the same as the ORDER BY
fields, because a single sorted tree was used to keep track of tuples,
ordering, and uniqueness. Fixed by introducing a second structure that handles
uniqueness, so that the original structure only has to order the result.
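A sketch of the kind of query affected (table and column names are hypothetical);
with the second structure in place, the DISTINCT column no longer has to match
the ORDER BY column:

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 3), (1, 1), (2, 2);

  -- DISTINCT on a, ordering on b: previously restricted, now supported
  SELECT GROUP_CONCAT(DISTINCT a ORDER BY b) FROM t1;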
subselects into account
It is forbidden to use the SELECT INTO construction inside a UNION statement
except in the last SELECT of the union. The parser records whether it has seen
INTO while parsing a UNION statement, but if the INTO was used legally in an
outer query, an error was thrown as soon as a UNION was seen in a subquery.
Fixed in 5.0 by remembering the nesting level of INTO tokens and raising the
error only when the INTO actually collides with the UNION.
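For example (hypothetical tables and user variable), an INTO in the outer query
is legal even though a UNION appears in a subquery, and no longer triggers the
error:

  SELECT a INTO @v
  FROM t1
  WHERE a IN (SELECT a FROM t2
              UNION
              SELECT a FROM t3)
  LIMIT 1;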
The problem was that when convert_constant_item() is called for subqueries,
this happens after we have already started executing the top-level query, so
the field argument of convert_constant_item() pointed to a valid table row.
convert_constant_item() in turn used the field's buffer to compute the value
of its item argument; this copied the item's value into the field and made
equalities with outer references always true.
The fix saves and restores the original value of the field when the field
belongs to an outer table.
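A hypothetical shape of an affected query (a DATETIME column and literal are
assumed here): a subquery comparing an outer column against a constant. Before
the fix, evaluating the constant could overwrite the outer row's field buffer,
making the equality appear true for every outer row:

  SELECT * FROM t1
  WHERE EXISTS (SELECT 1 FROM t2
                WHERE t1.d = '2007-04-25 18:30:22');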
Both arguments of the function NAME_CONST must be constant expressions.
This constraint is checked in the Item_name_const::fix_fields method.
Yet if an argument of the function was not a constant expression, no error
message was reported; as a result the client hung waiting for a response.
Now the function Item_name_const::fix_fields reports an error message
when any of the additional context conditions imposed on the function
NAME_CONST is not satisfied.
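For example:

  SELECT NAME_CONST('x', 3);        -- OK: both arguments are constant
  SELECT NAME_CONST('x', RAND());   -- now rejected with an error instead of
                                    -- leaving the client waiting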
The index (key_part_1, key_part_2) was erroneously considered as compatible
with the required ordering in the function test_if_order_by_key() when
a query with an ORDER BY clause contained a condition of the form
key_part_1=const OR key_part_1 IS NULL
and the order list contained only key_part_2. This happened because the value
of the const_key_parts field in the KEYUSE structure was not formed correctly
for the keys that could be used for ref_or_null access.
This was fixed in the code of the update_ref_and_keys() function.
The problem could not manifest itself for MyISAM databases because the
implementation of the keys_to_use_for_scanning() handler function always
returns an empty bitmap for the MyISAM engine.
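A hypothetical query of the affected shape; with ref_or_null access on
key_part_1 the rows come from two disjoint parts of the index, so the index
cannot be assumed to deliver them in key_part_2 order:

  CREATE TABLE t1 (k1 INT, k2 INT, v INT, KEY k (k1, k2)) ENGINE=InnoDB;

  SELECT v FROM t1
  WHERE k1 = 10 OR k1 IS NULL
  ORDER BY k2;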
Anti-patch. This patch undoes the previously pushed patch. It is
null-merged in versions 5.1 and above, since the original patch is still
desired there.
numbers into char fields" and bug #12860 "Difference in zero padding of
exponent between Unix and Windows"
Rewrote the code that determines what 'precision' argument should be
passed to sprintf() to fit the string representation of the input number
into the field.
We get finer control over conversion by pre-calculating the exponent, so
we are able to determine which conversion format, 'e' or 'f', will be
used by sprintf().
We also remove the leading zero from the exponent on Windows to make it
compatible with the sprintf() output on other platforms.
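A hypothetical illustration of the affected conversion: inserting
floating-point numbers into a CHAR column, where the string form must fit the
column and use the same exponent format on every platform:

  CREATE TABLE t1 (c CHAR(20));
  INSERT INTO t1 VALUES (1e-5), (1e+300);
  -- the stored strings now fit the column, and an exponent such as 'e-05'
  -- is padded the same way on Windows as on other platforms
  SELECT c FROM t1;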
filesort() uses the file->estimate_rows_upper_bound() call to allocate its
internal buffers. If this function returns a value smaller than the number of
rows that will later be returned by find_all_keys(), the server can crash.
Fixed by implementing ha_federated::estimate_rows_upper_bound() to
return the maximum possible number of rows.
The present estimation for FEDERATED always returns 0 when the remote table
it is linked to is a VIEW.
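A hypothetical reproduction shape (connection string and names are made up):
a FEDERATED table whose remote object is a VIEW, sorted with filesort:

  CREATE TABLE federated_t (a INT, b INT)
    ENGINE=FEDERATED
    CONNECTION='mysql://user@remote_host:3306/db/remote_view';

  -- remote_view is a VIEW on the remote server, so the row estimate is 0;
  -- the ORDER BY forces a filesort that previously could crash
  SELECT * FROM federated_t ORDER BY b;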
Default values of variables were not subject to upper/lower bounds and step,
while explicitly set values were. Bounds and step are now applied to defaults
as well; defaults are corrected quietly, whereas values given by the user are
corrected and a correction warning is thrown as needed. Lastly, very large
values could wrap around and start from 0 again. They are now bounded at the
maximum value for the respective data type if no lower maximum is specified
in the variable's definition.
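For example (the variable, value, and exact warning text are illustrative),
an out-of-range value given by the user is now clamped and a warning is
raised instead of the value silently wrapping:

  SET SESSION sort_buffer_size = 18446744073709551615;
  SHOW WARNINGS;   -- e.g. a truncation warning, with the value clamped to
                   -- the variable's maximum rather than wrapping to 0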
Killing a CREATE TABLE ... LIKE source_table statement while it is waiting for
a name-lock on the source table causes a bad lock interaction.
mysql_create_like_table() has a bug: if the connection is killed while waiting
for the name-lock on the source table, it jumps to the wrong error path and
tries to unlock the source table and LOCK_open, although neither was locked.
The solution is to simply return when the name-lock request is killed; this is
safe because no lock was acquired and no cleanup is needed.
The original bug report also describes other problems related to this
scenario, but they are either already fixed in 5.1 or will be addressed
separately (see the bug report for details).
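A hypothetical sketch of the scenario, using three connections (the exact way
the name-lock is made to wait may vary by version):

  -- connection 1: make the name-lock on the source table unavailable,
  -- e.g. by holding the table open
  LOCK TABLES t1 WRITE;

  -- connection 2: blocks waiting for the name-lock on the source table
  CREATE TABLE t2 LIKE t1;

  -- connection 3: kill connection 2 while it waits; before the fix this
  -- took the wrong error path in mysql_create_like_table()
  KILL 42;   -- 42 stands for connection 2's thread id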