When processing aggregate functions, all table values are reset
to NULLs at the end of each group.
While doing that, if no rows were found for a group,
the const tables must not be reset, as they are not recalculated
by do_select()/sub_select() for each group.
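A minimal sketch of the kind of query assumed to exercise this path (all
names hypothetical): t1 is read once as a const table, and its values must
survive the end-of-group reset even when a group yields no t2 rows.

  CREATE TABLE t1 (a INT PRIMARY KEY);
  CREATE TABLE t2 (b INT, c INT);
  INSERT INTO t1 VALUES (1);
  -- t1 is a const table; do_select()/sub_select() will not re-read it,
  -- so resetting its values to NULL would lose t1.a in empty groups.
  SELECT t1.a, COUNT(t2.c) FROM t1 LEFT JOIN t2 ON t2.b = t1.a GROUP BY t2.b;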
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that they are
united into a single condition on the key and checked together, the server must
determine which value is the NULL value in a correct way: not only by using
->is_null, but also by checking that the expression does not depend on any
tables referenced in the current statement.
This additional check must be performed because the optimization takes place
before the actual execution of the statement, so if the field was initialized
to NULL by a previous statement, the optimization would be applied incorrectly.
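An illustrative sketch of the affected shape (names hypothetical): at
optimization time the Field object behind <some_val> may still hold a NULL
left over from a previous statement, so ->is_null alone cannot identify the
true NULL value.

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2), (NULL);
  INSERT INTO t2 VALUES (2);
  -- 'a = <some_val> OR a IS NULL' is united into one keyed lookup;
  -- here <some_val> is t2.b, which depends on a table in the statement.
  SELECT * FROM t1, t2 WHERE t1.a = t2.b OR t1.a IS NULL;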
Added HA_NULL_IN_KEY to the table flags to allow nullable unique indexes,
and added a test to verify this (a sketch follows the per-file notes below).
ha_federated.h:
BUG #15133 "unique index with nullable value not accepted in federated table"
added HA_NULL_IN_KEY to table flags to allow for nullable unique indexes
federated.test:
BUG #15133 "unique index with nullable value not accepted in federated table"
New test to show that nullable unique indexes work
federated.result:
BUG #15133 "unique index with nullable value not accepted in federated table"
New results for new test
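A sketch of the kind of table the new test covers (the connection string
and all names are hypothetical):

  CREATE TABLE t1 (
    id INT NOT NULL AUTO_INCREMENT,
    code VARCHAR(20) DEFAULT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY (code)
  ) ENGINE=FEDERATED
    CONNECTION='mysql://user:pass@remote_host:3306/db1/t1';
  -- With HA_NULL_IN_KEY in the table flags, the nullable unique key
  -- is accepted and multiple NULLs can be stored:
  INSERT INTO t1 (code) VALUES (NULL), (NULL);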
The problem was that opt_sum_query() replaced MIN/MAX functions
with the corresponding constant found in a key, but due to the imprecise
representation of float numbers, the comparison failed when the WHERE
clause was evaluated.
When the MIN/MAX optimization detects that all tables can be removed,
it now also removes all conjuncts in the WHERE clause that refer to these
tables. As a result of this fix, these conditions are not evaluated
twice, and in the case of float number comparisons we no longer discard
result rows due to the imprecise float representation.
As a side effect, this fix also corrects a previously unnoticed problem in
bug 12882.
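A hedged sketch of the failure shape (whether a given literal reproduces it
depends on its float representation):

  CREATE TABLE t1 (a FLOAT, KEY(a));
  INSERT INTO t1 VALUES (0.1);
  -- opt_sum_query() substitutes the key constant for MAX(a); before the
  -- fix, re-evaluating 'a = 0.1' against the imprecise FLOAT constant
  -- could reject the row, yielding NULL instead of the maximum.
  SELECT MAX(a) FROM t1 WHERE a = 0.1;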
When there is no index defined, filesort is used to sort the result of a
query. If there is a function in the select list and the result set is to be
ordered by its value, that function is evaluated twice: first to
obtain the value of the sort key, and a second time to send its value to the user.
This happens because filesort, when sorting a table, remembers only the values
of its fields but not the values of functions.
All functions are affected, but given that SP and UDF functions
can be both expensive and non-deterministic, a temporary table should be used
to store their results; sorting that table avoids evaluating the SP twice and
yields a correct result.
If an expression referenced in an ORDER BY clause contains an SP or UDF
function, force the use of a temporary table.
A new Item_processor function called func_type_checker_processor was added
to check whether an expression contains a function of a particular type.
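A sketch of the affected shape (the function name and body are hypothetical
stand-ins for an expensive or non-deterministic routine):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (3), (1), (2);
  CREATE FUNCTION f1(x INT) RETURNS INT NOT DETERMINISTIC RETURN x + 0;
  -- With filesort, f1() ran once for the sort key and again when the
  -- row was sent; forcing a temporary table runs it once per row.
  SELECT f1(a) FROM t1 ORDER BY f1(a);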
When executing INSERT over a view with calculated columns, the server assumed
that all elements of the fields collection are actually Item_field instances.
This may not be true when inserting into a view whose columns are
expressions that still allow updating (such as setting a collation, for example).
Corrected to access field information through the field_for_view_update() function,
which retrieves the field info correctly even for "update-friendly" non-Item_field items.
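An illustrative sketch (names hypothetical): the view column below is a
COLLATE expression, not a plain Item_field, yet it still allows updating.

  CREATE TABLE t1 (s VARCHAR(10) CHARACTER SET latin1);
  CREATE VIEW v1 AS SELECT s COLLATE latin1_german1_ci AS s FROM t1;
  -- The insert target resolves to a non-Item_field item, which is now
  -- handled through field_for_view_update():
  INSERT INTO v1 (s) VALUES ('abc');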
LIKE crashed with a pattern containing letters in the range 128..255
(e.g. A WITH ACUTE or C WITH CARON) because of a wrong cast from
signed char to unsigned int.
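A sketch using a hex literal for a pattern byte in the 128..255 range
(0xE8 is SMALL LETTER C WITH CARON in latin2):

  -- Before the fix, the pattern byte 0xE8 was mishandled by the
  -- signed-char-to-unsigned-int cast, crashing the LIKE matcher.
  SELECT _latin2 0xE8 LIKE _latin2 0xE8;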
The fix is: if the user has privileges to view the fields and has any of the
(INSERT, SELECT, DELETE, UPDATE) privileges on the underlying view,
then SHOW FIELDS and a SELECT from the I_S.COLUMNS table succeed.
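A sketch of the intended behavior (account and object names are
hypothetical):

  GRANT INSERT ON db1.v1 TO 'u1'@'localhost';
  -- Connected as u1, both of the following now succeed:
  SHOW FIELDS FROM db1.v1;
  SELECT column_name FROM information_schema.columns
   WHERE table_schema = 'db1' AND table_name = 'v1';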
When calculating GROUP_CONCAT, all blob fields are transformed
to VARCHAR when the temporary table is created.
However, a VARCHAR has at most 2 bytes for the length (i.e. a
maximum length of 65535).
This fix performs the conversion only for blobs whose maximum length
is below that limit.
Otherwise the blob field is created by a make_string_field() call.
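A sketch of the two cases (names hypothetical): a TINYBLOB's maximum
length (255) fits in a VARCHAR, a LONGBLOB's does not.

  CREATE TABLE t1 (grp INT, small TINYBLOB, big LONGBLOB);
  -- 'small' may be converted to VARCHAR in the temporary table;
  -- 'big' exceeds the 2-byte length limit and stays a blob field.
  SELECT grp, GROUP_CONCAT(small), GROUP_CONCAT(big) FROM t1 GROUP BY grp;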
Fixed a wrong result set for a non-correlated single-row subquery over
information schema.
The function get_all_tables(), which fills all information schema
tables, reset lex->sql_command to SQLCOM_SHOW_FIELDS. After
this, the function could evaluate partial conditions related to
some columns. If these conditions contained a subquery over
information schema, it led to a wrong evaluation and a wrong
result set.
This bug was already fixed in 5.1.
This patch follows the way it was done in 5.1, where
the value of lex->sql_command is set to SQLCOM_SHOW_FIELDS
in get_all_tables() only for the calls of the function
open_normal_and_derived_tables() and is restored after these
calls.
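A sketch of the affected query shape: a non-correlated single-row subquery
over information schema inside a condition on an information schema table.

  SELECT table_name
    FROM information_schema.tables
   WHERE table_name =
         (SELECT table_name
            FROM information_schema.tables
           WHERE table_schema = 'information_schema'
             AND table_name = 'COLUMNS');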
- Add a test case (execute perror)
- Check whether strerror() has returned NULL, and set msg to "Unknown Error" in that case
- Thanks to Steven Xie for pointing out how to fix this.
Fixed wrong results for subqueries on information schema that use MIN/MAX
aggregation.
Execution of some correlated subqueries may set the value
of null_row to 1 for tables used in the subquery.
If the subquery is on information schema, this causes
rejection of any row on subsequent executions of
the subquery when an optimization that filters
by some condition is applied.
The fix restores the value of the null_row flag for
each execution of a subquery on information schema.
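A sketch of the affected shape: a correlated MIN/MAX subquery over an
information schema table, re-executed once per outer row.

  SELECT t.table_schema, t.table_name
    FROM information_schema.tables t
   WHERE t.table_name =
         (SELECT MAX(c.table_name)
            FROM information_schema.columns c
           WHERE c.table_schema = t.table_schema);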
When a wildcard database name is given to mysqlshow, but that wildcard
matches one database *exactly* (the name itself contains a wildcard
character, such as the underscore in "information_schema"), we now
list the contents of that database instead of just listing the database
name as matching the wildcard. Probably the most common instance of users
encountering this behavior is "mysqlshow information_schema".