When a merge table is opened, compare the column and key definitions of
the underlying tables against the column and key definitions of the
merge table. If any of the underlying tables has a different column/key
definition, refuse to open the merge table.
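A rough illustration of the check (hypothetical, simplified types; the
real server compares full column and key definitions, character sets,
lengths, and so on):

  #include <string>
  #include <vector>

  // Hypothetical, simplified column definition for this sketch only.
  struct ColumnDef {
    std::string name;
    int type;
    unsigned length;
    bool operator==(const ColumnDef &o) const {
      return name == o.name && type == o.type && length == o.length;
    }
  };

  struct TableDef {
    std::vector<ColumnDef> columns;
  };

  // Refuse to open the merge table if any underlying table differs.
  bool merge_definitions_match(const TableDef &merge,
                               const std::vector<TableDef> &underlying) {
    for (const TableDef &t : underlying)
      if (t.columns != merge.columns)
        return false;
    return true;
  }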
Depending on the query, we use different data processing methods and
can lose some data in the case of double (and, in 4.1, decimal) fields.
The fix consists of two parts:
1. The double comparison has been changed: double a is now equal to
double b if |a - b| is less than 5*0.1^(1 + max(a->decimals, b->decimals))
(sketched below). For example, if a->decimals==1 and b->decimals==2,
then a==b if |a - b| < 0.005.
2. If we use a temporary table, double values are stored there as is
to avoid any data conversion (rounding).
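A minimal standalone sketch of the comparison rule from part 1 (the
decimal counts are passed explicitly here rather than read from items):

  #include <algorithm>
  #include <cmath>

  // a and b compare equal if |a - b| < 5 * 0.1^(1 + max of the two
  // decimal counts), as described above.
  bool doubles_equal(double a, double b, int a_decimals, int b_decimals) {
    double eps = 5.0 * std::pow(0.1, 1 + std::max(a_decimals, b_decimals));
    return std::fabs(a - b) < eps;
  }

With decimals 1 and 2 the threshold is 5*0.1^3 = 0.005, so for example
1.2340 and 1.2301 compare equal, while 1.2340 and 1.2290 do not.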
The bug report demonstrated the following two problems.
1. If an ORDER/GROUP BY list includes a constant expression that is
optimized away and, at the same time, contains single-row
subselects that return more than one row, no error is reported.
Strictly speaking, the standard allows ignoring the error in this case.
Nevertheless, a corresponding fatal error is now reported in this case.
2. If a query requires sorting by expressions containing single-row
subselects that, however, return more than one row, then the execution
of the query may cause a server crash.
To fix this, code has been added that blocks execution of a subselect
item after a fatal error in the method Item_subselect::exec.
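The blocking pattern can be sketched as follows (hypothetical names and
structure, not the actual Item_subselect code):

  #include <functional>
  #include <utility>

  // Once a fatal error has occurred (e.g. a single-row subselect
  // returning more than one row), further executions are blocked
  // instead of proceeding and possibly crashing the server.
  class SubselectItem {
   public:
    explicit SubselectItem(std::function<bool()> run)
        : run_(std::move(run)) {}

    // Returns true on error.
    bool exec() {
      if (fatal_error_)
        return true;       // blocked: never re-execute after a fatal error
      if (run_()) {        // the subquery execution itself failed
        fatal_error_ = true;
        return true;
      }
      return false;
    }

   private:
    std::function<bool()> run_;  // stands in for real subquery execution
    bool fatal_error_ = false;
  };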
files. This helps the stability of multiple parallel automated test runs
by avoiding the situation where one bad build fills up the disk with
thousands of core files, causing failures in other test runs.
WL#3681 (ALTER TABLE ORDER BY)
Before this fix, the ALTER TABLE statement implemented an ORDER BY option
with the following characteristics:
1) The ORDER BY clause accepts a list of criteria, with optional ASC or
DESC keywords.
2) Each criterion can be a general expression involving operators,
native functions, stored functions, user-defined functions, subselects ...
With this fix:
1) has been left unchanged, since it is a de facto existing feature
that was already present in the code base and partially covered in the test
suite. Code coverage for ASC and DESC was missing and has been improved.
2) has been changed to limit the kinds of criteria that are permissible:
now only a column name is valid.
for queries using 'range checked for each record'.
The problem was fixed in 5.0 by the patch for bug 12291.
This patch back-ported the corresponding code from 5.0 into
QUICK_SELECT::init() and added a new test case.
correctly.
The Item_func::print method was used to print Item_func_encode and
Item_func_decode objects. The last argument to the ENCODE and DECODE
functions is a plain C string, so Item_func::print was unable to print it.
A print() method has been added to the Item_func_encode class; it correctly
prints both Item_func_encode and Item_func_decode objects.
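As a rough illustration of what such printing involves (a standalone
sketch with hypothetical names, not the actual method), the trailing
plain C string just needs to be emitted as a quoted literal after the
regular arguments:

  #include <string>

  // Build the printable form of an ENCODE(expr,'password')-style call.
  // printed_arg is the already-printed first argument; password is the
  // trailing plain C string that Item_func::print could not handle.
  std::string print_encode_call(const std::string &func_name,
                                const std::string &printed_arg,
                                const char *password) {
    std::string out = func_name;
    out += '(';
    out += printed_arg;
    out += ",'";
    out += password;  // the plain C string, printed as a quoted literal
    out += "')";
    return out;
  }

For example, print_encode_call("encode", "f1", "zxcv") produces
encode(f1,'zxcv').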
WHERE is present.
If a DELETE statement with ORDER BY and LIMIT contains a WHERE clause
with conditions that certainly cannot be used for index access (as in
WHERE @var:= field), execution always follows the filesort path.
This currently happens even when there is an index that could be used
to speed up sorting by the ORDER BY list.
Now, if a DELETE statement with ORDER BY and LIMIT contains WHERE clause
conditions that cannot be used to build any quick select, then
mysql_delete() tries to use an index as if there were no WHERE clause at all.
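The path selection can be sketched like this (hypothetical names; the
real decision is made inside mysql_delete()):

  // Which execution path a DELETE ... ORDER BY ... LIMIT takes.
  enum class DeletePath { kQuickSelect, kIndexScan, kFilesort };

  // where_usable: a quick select could be built from the WHERE conditions.
  // index_matches_order: some index returns rows in ORDER BY order.
  DeletePath choose_delete_path(bool has_where, bool where_usable,
                                bool index_matches_order) {
    if (has_where && where_usable)
      return DeletePath::kQuickSelect;
    // The fix: with an unusable WHERE clause, still try the index that
    // matches the ORDER BY list, as if there were no WHERE clause at all.
    if (index_matches_order)
      return DeletePath::kIndexScan;
    return DeletePath::kFilesort;  // previously the only fallback
  }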
In the method Item_field::fix_fields we try to resolve the name of
the field against the names of the aliases that occur in the select
list. This is done by a call to the function find_item_in_list.
When this function finds several occurrences of the field name,
it sends an error message to the error queue and returns 0.
Yet the code did not take into account that find_item_in_list
could return 0, and it tried to dereference the returned value.
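The shape of the fix can be sketched as follows (hypothetical,
simplified stand-ins for find_item_in_list and its caller):

  #include <cstring>
  #include <vector>

  struct Item { const char *name; };

  // Stand-in for find_item_in_list: on an ambiguous name the real
  // function queues an error message and returns 0.
  Item *find_in_select_list(const char *field_name,
                            std::vector<Item> &items) {
    Item *found = nullptr;
    for (Item &it : items) {
      if (std::strcmp(it.name, field_name) == 0) {
        if (found != nullptr)
          return nullptr;  // several occurrences: error queued, return 0
        found = &it;
      }
    }
    return found;
  }

  // The fix: check the result for null before dereferencing it.
  bool resolve_field(const char *field_name, std::vector<Item> &items,
                     Item **resolved) {
    Item *item = find_in_select_list(field_name, items);
    if (item == nullptr)
      return true;  // error already reported; do not dereference
    *resolved = item;
    return false;
  }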
The function mi_get_pointer_length() computed too small a
pointer size for very large tables.
The missing 'else' was inserted between the branches for very
large tables.
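The bug pattern can be sketched as follows (illustrative thresholds; the
point is the cascade, not the exact values): without the 'else', a file
large enough for the biggest branch also satisfies the next condition,
and the larger pointer size is immediately overwritten by a smaller one.

  #include <cstdint>

  // How many bytes a file pointer needs for a given file length.
  unsigned pointer_bytes(std::uint64_t file_length) {
    unsigned bytes = 2;
    if (file_length >= (std::uint64_t{1} << 48))
      bytes = 7;
    else if (file_length >= (std::uint64_t{1} << 40))  // the inserted 'else':
      bytes = 6;  // without it, a >= 2^48 file would also match here and
                  // get its 7-byte pointer size overwritten with 6
    else if (file_length >= (std::uint64_t{1} << 32))
      bytes = 5;
    else if (file_length >= (std::uint64_t{1} << 24))
      bytes = 4;
    else if (file_length >= (std::uint64_t{1} << 16))
      bytes = 3;
    return bytes;
  }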