When processing aggregate functions, all table values are reset
to NULLs at the end of each group.
However, if no rows were found for a group, the const tables must
not be reset, because they are not recalculated by
do_select()/sub_select() for each group.
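A hypothetical sketch of the kind of query involved, where one table is
read as a const table and a group finds no matching rows (table names and
data are illustrative only, not from the original report):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);            -- single row: t1 is a const table
  CREATE TABLE t2 (a INT, b INT);
  INSERT INTO t2 VALUES (2, 10);
  -- The group for t1.a = 1 finds no matching t2 rows. t1 is read only
  -- once, before do_select()/sub_select() runs, so resetting it to NULLs
  -- together with the other tables would lose its value.
  SELECT t1.a, MAX(t2.b) FROM t1 LEFT JOIN t2 ON t1.a = t2.a GROUP BY t1.a;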
- Add prelocking for stored procedures that use stored procedures or
  stored functions (see the sketch after this list)
- Update the test result for sp_error (reported as bug#21294)
- Make a note about the new error message from sp-error (bug#17244)
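For reference, the prelocking case covered is a routine that invokes
another routine; a minimal hypothetical example:

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE PROCEDURE p1() SELECT f1();
  -- When p1 is called, f1 must be included in the prelocked set so that
  -- everything the statement uses is locked up front.
  CALL p1();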
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that they are
united into a single condition on the key and checked together, the server must
determine which value is the NULL value in a correct way: not only by using
->is_null, but also by checking that the expression does not depend on any
tables referenced in the current statement.
This additional check must be performed because the optimization takes place
before the actual execution of the statement, so if the field was initialized
to NULL by a previous statement the optimization would be applied incorrectly.
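The affected shape is one where the compared value is an outer reference
rather than a constant; a hypothetical sketch:

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  CREATE TABLE t2 (a INT);
  -- In the subquery, 't1.a = t2.a OR t1.a IS NULL' is merged into a single
  -- ref_or_null lookup on the key. t2.a is not known before execution, so
  -- finding it NULL at optimization time (left over from a previous
  -- statement) must not be taken to mean the equality branch is NULL.
  SELECT * FROM t2
  WHERE t2.a IN (SELECT b FROM t1 WHERE t1.a = t2.a OR t1.a IS NULL);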
The problem was that opt_sum_query() replaced MIN/MAX functions
with the corresponding constant found in a key, but due to the imprecise
representation of floating-point numbers, this comparison failed
when the WHERE clause was evaluated.
When the MIN/MAX optimization detects that all tables can be removed,
also remove all conjuncts in the WHERE clause that refer to these
tables. As a result of this fix, these conditions are not evaluated
twice, and in the case of floating-point comparisons we do not discard
result rows due to imprecise float representation.
As a side effect, this fix also corrects an unnoticed problem in
bug 12882.
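A hypothetical example of the affected query shape (values illustrative
only):

  CREATE TABLE t1 (f FLOAT, KEY(f));
  INSERT INTO t1 VALUES (5.2);
  -- opt_sum_query() fetches MIN(f) from the index and removes the table.
  -- Before the fix the leftover conjunct 'f = 5.2' was still evaluated,
  -- comparing the stored FLOAT with the double literal 5.2; the imprecise
  -- float representation made the comparison fail and the row was lost.
  SELECT MIN(f) FROM t1 WHERE f = 5.2;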
When no index is defined, filesort is used to sort the result of a
query. If there is a function in the select list and the result set should be
ordered by its value, then this function is evaluated twice: the first time to
get the value of the sort key, and the second time to send its value to the user.
This happens because filesort, when sorting a table, remembers only the values
of its fields but not the values of functions.
All functions are affected, but taking into account that stored functions and
UDFs can be both expensive and non-deterministic, a temporary table should be
used to store their results, which is then sorted, avoiding the double
evaluation and giving a correct result.
If an expression referenced in an ORDER BY clause contains an SP or UDF
function, force the use of a temporary table.
A new Item_processor function called func_type_checker_processor is added
to check whether the expression contains a function of a particular type.
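The double evaluation is observable with a stored function that has a side
effect; a minimal sketch (function and table names are hypothetical):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  DELIMITER //
  CREATE FUNCTION f1(p INT) RETURNS INT
  BEGIN
    SET @calls = @calls + 1;  -- side effect exposes repeated evaluation
    RETURN -p;
  END//
  DELIMITER ;
  SET @calls = 0;
  SELECT f1(a) FROM t1 ORDER BY f1(a);
  -- With filesort, @calls could reach 6 (two evaluations per row); with
  -- the temporary-table plan each row is evaluated once, so @calls is 3.
  SELECT @calls;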
When executing an INSERT over a view with calculated columns, the code assumed
that all elements of the fields collection are actually Item_field instances.
This may not be true when inserting into a view that has columns defined by
expressions that still allow updating (setting a collation, for example).
Corrected to access field information through the filed_for_view_update()
function, so that the field info is retrieved correctly even for
"update-friendly" non-Item_field items.
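A sketch of such a view (names hypothetical): the column is an expression,
not a bare Item_field, yet it remains updatable:

  CREATE TABLE t1 (s CHAR(10) CHARACTER SET latin1);
  CREATE VIEW v1 AS SELECT s COLLATE latin1_german1_ci AS s FROM t1;
  -- The view column is a collation expression rather than an Item_field,
  -- but it still allows updates, so this INSERT must be accepted.
  INSERT INTO v1 (s) VALUES ('duck');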
When calculating GROUP_CONCAT(), all blob fields are transformed
to varchar when making the temporary table.
However, a varchar has at most 2 bytes for the length, so it cannot hold
blobs longer than 65535 bytes.
This fix makes the conversion only for blobs whose maximum length
is below that limit.
Otherwise a blob field is created by a make_string_field() call.
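To illustrate the boundary (hypothetical table):

  CREATE TABLE t1 (a INT, t TINYBLOB, m MEDIUMBLOB);
  -- TINYBLOB (max 255 bytes) fits within the varchar length limit and may
  -- be converted in the temporary table; MEDIUMBLOB (max 16777215 bytes)
  -- exceeds it and remains a blob, created by make_string_field().
  SELECT GROUP_CONCAT(t), GROUP_CONCAT(m) FROM t1 GROUP BY a;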
Fixed a wrong result for a non-correlated single-row subquery over the
information schema.
The function get_all_tables(), which fills all information schema
tables, reset lex->sql_command to SQLCOM_SHOW_FIELDS. After
this the function could evaluate partial conditions related to
some columns. If these conditions contained a subquery over the
information schema, this led to a wrong evaluation and a wrong
result set.
This bug was already fixed in 5.1.
This patch follows the way it was done in 5.1, where
the value of lex->sql_command is set to SQLCOM_SHOW_FIELDS
in get_all_tables() only around the calls to
open_normal_and_derived_tables() and is restored after these
calls.
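The affected pattern is a condition on an information schema query whose
value comes from a single-row subquery that is itself over the information
schema; a hypothetical example:

  SELECT table_name
  FROM information_schema.tables
  WHERE table_name = (SELECT table_name
                      FROM information_schema.tables
                      WHERE table_name = 't1');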
Fixed a wrong result for subqueries over the information schema that use
MIN/MAX aggregation.
Execution of some correlated subqueries may set the null_row flag
to 1 for tables used in the subquery.
If the subquery is over the information schema, this causes
rejection of any row on the following executions of
the subquery when an optimization that filters
by some condition is applied.
The fix restores the value of the null_row flag for
each execution of a subquery over the information schema.
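A hypothetical example of such a correlated MIN/MAX subquery over the
information schema:

  SELECT t.table_name,
         (SELECT MAX(c.ordinal_position)
          FROM information_schema.columns c
          WHERE c.table_name = t.table_name) AS col_count
  FROM information_schema.tables t
  WHERE t.table_schema = 'test';
  -- Before the fix, a null_row flag left set by one execution of the
  -- subquery could make all following executions reject every row.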
The Item::tmp_table_field_from_field_type() function creates a Field_datetime
object instead of a Field_timestamp object for a timestamp field, thus always
changing the data type if a tmp table is used.
The Field_blob constructor which is used in
Item::tmp_table_field_from_field_type() always sets the packlength field of
the newly created blob to 4. This leads to changing the field's data type,
for example from blob to longblob, if a temporary table is used.
The Item::make_string_field() function always converts Field_string objects
to Field_varstring objects. This leads to changing the data type from
char/binary to varchar/varbinary.
Added an appropriate Field_timestamp constructor for use in the
Item::tmp_table_field_from_field_type() function.
Added a Field_blob constructor which sets the pack length according to
the max_length argument.
The Item::tmp_table_field_from_field_type() function now creates a
Field_timestamp object for a timestamp field.
Item_type_holder::display_length() now returns the correct NULL length.
The Item::make_string_field() function now doesn't change Field_string to
Field_varstring in the case of Item_type_holder.
The Item::tmp_table_field_from_field_type() function now uses the Field_blob
constructor which sets the packlength according to max_length.
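The type changes are observable whenever a temporary table materializes the
result, for example in a UNION; a sketch (the pre-fix types in the comments
are those implied by the description above):

  CREATE TABLE t1 (ts TIMESTAMP, b BLOB, c CHAR(10));
  CREATE TABLE t2 SELECT * FROM t1 UNION SELECT * FROM t1;
  SHOW CREATE TABLE t2;
  -- Before the fix: ts came out as DATETIME, b as LONGBLOB, and c as
  -- VARCHAR(10). With the fix the original column types are preserved.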