A subquery transformation changes the HAVING clause of the embedding query if the subquery contains
a GROUP BY clause. Yet the split_sum_func2 function was not applied to the modified HAVING clause.
This could result in wrong answers, and possibly a server crash in mysqld 5.0.
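A query of roughly the following shape (an illustrative sketch with hypothetical tables t1 and t2, not the original test case) contains a subquery with a GROUP BY clause referenced from the embedding query's HAVING clause and so could hit the described path:

  SELECT a, SUM(b)
  FROM t1
  GROUP BY a
  HAVING SUM(b) > 0
     AND a IN (SELECT c FROM t2 GROUP BY c);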
The reported MyISAM table was created in mysqld 4.1 and contains a VARCHAR field.
When the binary files of that table are moved to 5.0, mysqld treats that VARCHAR
field as a string (VAR_STRING) field.
To perform grouping, the server calculates the size of the group buffer; because
that field is a string field, the server assumes it has a fixed length and does not
add space for the length bytes, but later the field is converted to a VARCHAR field.
Because of this, when the field values are actually copied, additional space for the
length bytes is taken and a buffer overrun occurs, which may lead to a server crash.
The calc_group_buffer() function now reserves additional space for the length bytes
of VAR_STRING fields, as it does for VARCHAR fields.
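An upgraded table of roughly this shape (hypothetical names) could trigger the overrun once its files are used by a 5.0 server:

  -- Created under mysqld 4.1; the .frm/.MYI/.MYD files are later copied to a 5.0 data directory.
  CREATE TABLE t1 (v VARCHAR(30), i INT) ENGINE=MyISAM;
  -- On the 5.0 server, grouping on the old-format VARCHAR column uses the group buffer
  -- whose size was calculated without the length bytes.
  SELECT v, COUNT(*) FROM t1 GROUP BY v;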
When an ambiguous field name is used in a GROUP BY clause, a warning is issued
in the find_order_in_list function by a call to push_warning_printf.
An expression that was not always valid was passed to this call as the field
name parameter.
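For illustration (a hypothetical table t1), a grouping name that matches both a select-list alias and a real table column is ambiguous and goes through this warning path:

  -- Assuming t1 has columns a and b; 'b' below matches both the alias and the real column.
  SELECT a AS b, COUNT(*) FROM t1 GROUP BY b;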
A query with GROUP BY and HAVING clauses could return a wrong
result set if the HAVING condition contained a constant conjunct
that evaluated to FALSE.
This happened because the pushdown condition for the table with the
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table,
which ignores constant conjuncts. This is apparently not correct when
constant FALSE conjuncts are present.
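A query of roughly this shape (illustrative only) has a constant FALSE conjunct in its HAVING clause and must return an empty result set:

  SELECT a, MIN(b) FROM t1 GROUP BY a HAVING a > 1 AND 1 = 0;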
The problem manifested itself in cases where there is a nested outer join
for which it can be inferred that one of the inner tables is a single-row table.
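A nested outer join of roughly this form (hypothetical tables; the condition t3.pk = 1 on a primary key lets the optimizer infer that t3 is a single-row table) illustrates the case:

  SELECT *
  FROM t1
    LEFT JOIN (t2 LEFT JOIN t3 ON t3.pk = 1 AND t3.v = t2.v)
      ON t2.k = t1.k;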
A subselect could produce a wrong result when a records cache and set
functions are involved.
When the subselect is a join with set functions and no record has been found in
it, end_send_group() sets the null_row flag for all tables so that the aggregate
functions calculate their values correctly. Normally this null_row flag is cleared
for each table in sub_select(), but flush_cached_records() does not do so.
Because of this, all fields from the table processed by flush_cached_records() are
always evaluated as NULLs and the whole select produces a wrong result.
flush_cached_records() now clears the null_row flag at the very beginning.
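A statement of roughly this shape (hypothetical tables) reaches the code path above when the inner join of the subselect finds no matching rows and the join cache (flush_cached_records()) is used for one of its tables:

  SELECT t1.a,
         (SELECT MAX(t2.b)
            FROM t2, t3
           WHERE t2.k = t3.k AND t2.b > t1.a) AS m
  FROM t1;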
Remove wrong fix for Bug#14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
Safety fix for bug #13855 "select distinct with group by caused server crash"
- Fixed tests
- Optimized new code
- Fixed some unlikely core dumps
- Better bug fixes for:
- #14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
- #14850 (ERROR 1062 when querying a view using a GROUP BY on a column that can be NULL
  according to the standard)
The idea is to use Field classes to implement stored routine
variables. Also, we should provide a facade to the Item hierarchy
via the Item_field class (this is necessary, since stored routine
variables take part in expressions); an illustrative example follows
the bug list below.
The patch fixes the following bugs:
- BUG#8702: Stored Procedures: No Error/Warning shown for inappropriate data
type matching;
- BUG#8768: Functions: For any unsigned data type, negative values can be passed
and returned;
- BUG#8769: Functions: For Int datatypes, out of range values can be passed
and returned;
- BUG#9078: STORED PROCEDURE: Decimal digits are not displayed when we use
DECIMAL datatype;
- BUG#9572: Stored procedures: variable type declarations ignored;
- BUG#12903: upper function does not work inside a function;
- BUG#13705: parameters to stored procedures are not verified;
- BUG#13808: ENUM type stored procedure parameter accepts non-enumerated
data;
- BUG#13909: Varchar Stored Procedure Parameter always BINARY string (ignores
CHARACTER SET);
- BUG#14161: Stored procedure cannot retrieve bigint unsigned;
- BUG#14188: BINARY variables have no 0x00 padding;
- BUG#15148: Stored procedure variables accept non-scalar values;
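For illustration (a hypothetical routine, not an original test case), declared variable types are now enforced, so an out-of-range assignment is handled according to the declared type instead of being silently ignored:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v TINYINT;   -- the declared type is no longer ignored
    SET v = 200;         -- out-of-range value is adjusted or rejected per TINYINT rules
    SELECT v;
  END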
The cause of the bug was the use of end_write_group instead of end_write
in the case when ORDER BY required a temporary table; this did not take
into account the fact that a loose index scan already computes the result
of the MIN/MAX aggregate functions (and performs the grouping).
The solution is to call end_write instead of end_write_group and to add
the MIN/MAX functions to the list of regular functions so that their
values are inserted into the temporary table.
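A query of roughly this form (assuming an index on (a, b) so that a loose index scan is chosen, and an ORDER BY on the aggregate that requires a temporary table) exercises the path described above:

  -- Hypothetical table with an index suitable for loose index scan:
  CREATE TABLE t1 (a INT, b INT, c INT, KEY(a, b));
  SELECT a, MIN(b) FROM t1 GROUP BY a ORDER BY MIN(b);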
Bad examples of usage of a string together with its length were fixed.
The incorrect length in the trigger file configuration descriptor was
fixed (BUG#14090).
A hook for unknown keys was added to the parser to support old .TRG files.
test_if_order_by_key() expected only Item_field objects in order->item and thus
failed to find an available index on a view's field, which resulted in the
reported error.
Now test_if_order_by_key() calls order->item->real_item() to get the field used
for choosing an index.
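A case of roughly this shape (hypothetical table and view) shows the situation: the ORDER BY item is a view reference rather than a plain Item_field, yet the underlying column is indexed:

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  CREATE VIEW v1 AS SELECT a, b FROM t1;
  SELECT a, b FROM v1 ORDER BY a;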