A table could be marked dependent because it is either
1) an inner table of an outer join, or 2) part of a
STRAIGHT_JOIN. In the STRAIGHT_JOIN case table->maybe_null should not
be set. The fix is to set st_table::maybe_null to 'true' only
for those tables which are used in an outer join.
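A minimal sketch of the distinction (table and column names here are assumptions, not taken from the original report):

  SELECT t2.a FROM t1 LEFT JOIN t2 ON t1.id = t2.id;      -- t2 rows can be NULL-complemented, so t2.a may be NULL
  SELECT t2.a FROM t1 STRAIGHT_JOIN t2 ON t1.id = t2.id;  -- join-order hint only; t2.a is never NULL-complemented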
Bug#37671 crash on prepared statement + cursor + geometry + too many open files!
If mysql_execute_command() returns an error, the materialized_cursor object is now freed.
The is_rnd_inited flag is added to satisfy the rnd_end() assertion
(the handler may be uninitialized in some cases).
ONLY_FULL_GROUP_BY
The check for non-aggregated columns in queries with aggregate functions, but without
GROUP BY, was treating all parts of the query as if they were in the SELECT list.
Fixed by ignoring non-aggregated fields in the WHERE clause.
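For illustration (hypothetical table and column names), a query of the affected shape has a non-aggregated column only in the WHERE clause and should therefore be accepted even under ONLY_FULL_GROUP_BY:

  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  SELECT MAX(b) FROM t1 WHERE a = 1;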
fails after the first time
Two separate problems:
1. When flattening joins the linked list used for name resolution
(next_name_resolution_table) was not updated.
Fixed by updating the pointers when extending the table list.
2. The items created by expanding a * (star) as a column reference
were marked as fixed, but no cached table was assigned to them
(unlike what Item_field::fix_fields does).
Fixed by assigning a cached table (so the re-preparation is done
faster).
Note that the fix for #2 hides the fix for #1 in most cases
(except when a table reference cannot be cached).
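As an illustration only (hypothetical tables, not the original test case), the affected pattern is a re-executed statement that expands a star over tables that get flattened into one join nest:

  PREPARE stmt FROM 'SELECT * FROM t1 JOIN (t2 JOIN t3 ON t2.id = t3.id) ON t1.id = t2.id';
  EXECUTE stmt;
  EXECUTE stmt;  -- the second execution exercised the stale name-resolution pointers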
Server crashed during a sort order optimization
of a dependent subquery:
SELECT
(SELECT t1.a FROM t1, t2
WHERE t1.a = t2.b AND t2.a = t3.c
ORDER BY t1.a)
FROM t3;
The bitmap of tables used by a reference to an outer table
column has, in addition to the regular table bit,
the OUTER_REF_TABLE_BIT bit set.
The only_eq_ref_tables function traverses this map
bit by bit simultaneously with the join->map2table list.
Obviously join->map2table never contains an entry
for the OUTER_REF_TABLE_BIT pseudo-table, so the
server crashed there.
The only_eq_ref_tables function has been modified
to traverse regular table bits only, like the
update_depend_map function does (resetting the
OUTER_REF_TABLE_BIT would be enough, but the whole
set of PSEUDO_TABLE_BITS is reset there to be safe).
with COALESCE and JOIN
The server returned the VARBINARY column type to the client
instead of the DATE type for the result of the COALESCE,
IFNULL, IF, CASE, GREATEST or LEAST functions if that result
was filesorted in an anonymous temporary table during
query execution.
For example:
SELECT COALESCE(t1.date1, t2.date2) AS result
FROM t1 JOIN t2 ON t1.id = t2.id ORDER BY result;
To create a column of various date/time types in a
temporary table the create_tmp_field_from_item() function
uses the Item::tmp_table_field_from_field_type() method
call. However, fields of the MYSQL_TYPE_NEWDATE type were
missed there, and the VARBINARY columns were created
by default.
The necessary condition has been added.
used causes server crash.
When the loose index scan access method is used, the values of aggregated
functions are precomputed by it. Aggregation of such functions shouldn't be
performed in this case and the functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account, which led to
a crash if a query had MIN/MAX aggregate functions and employed a temporary
table and a loose index scan.
Now the JOIN::exec and create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
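For illustration only (assumed schema; whether the optimizer actually picks a loose index scan and a temporary table depends on the data and statistics), the affected shape combines MIN/MAX, GROUP BY on an indexed prefix and a step that needs a temporary table:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT a, MIN(b), MAX(b) FROM t1 GROUP BY a ORDER BY MAX(b);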
Calling List<Cached_item>::delete_elements for the same list twice
caused a crash of the server in the function JOIN::cleanup.
Ensured that delete_elements() in JOIN::cleanup would be called only once.
Range scan in descending order for c <= <col> <= c type of
ranges was ignoring the DESC flag.
However some engines like InnoDB have the primary key parts
as a suffix for every secondary key.
When such a primary key suffix is used for ordering,
ignoring the DESC flag is not valid.
But we generally would like to do this because it's faster.
Fixed by performing only a reverse scan if the primary key is used.
Removed some dead code in the process.
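A sketch of the situation (assumed InnoDB schema; names are illustrative): the secondary key on b implicitly carries the primary key column a as a suffix, so an ORDER BY that also uses that suffix in descending order must not have its DESC flag ignored:

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY (b)) ENGINE=InnoDB;
  SELECT * FROM t1 WHERE b BETWEEN 5 AND 10 ORDER BY b DESC, a DESC;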
- In QUICK_INDEX_MERGE_SELECT::read_keys_and_merge: when we got table->sort from Unique,
tell init_read_record() not to use rr_from_cache() because a) rowids are already sorted
and b) it might be that the data is used by filesort(), which will need record rowids
(which rr_from_cache() cannot provide).
- Fully de-initialize the table->sort read in QUICK_INDEX_MERGE_SELECT::get_next(). This fixes BUG#35477.
(bk trigger: file as fix for BUG#35478).
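For illustration (assumed schema; whether index_merge is actually chosen depends on the data), an index_merge plan whose sorted rowids are then consumed by filesort is the kind of plan the first point is about:

  CREATE TABLE t1 (a INT, b INT, c INT, KEY (a), KEY (b));
  SELECT * FROM t1 WHERE a = 1 OR b = 2 ORDER BY c;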
with dependent subqueries
An IN subquery is executed on EXPLAIN when it's not correlated.
If the subquery required a temporary table for its execution,
not all the internal structures were restored from pointing to
the items of the temporary table back to the items of
the subquery.
Fixed by restoring the ref array when temporary tables were used in
executing the IN subquery during EXPLAIN EXTENDED.
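A sketch of the affected pattern (hypothetical tables): an uncorrelated IN subquery that needs a temporary table for its execution, examined with EXPLAIN EXTENDED:

  EXPLAIN EXTENDED
  SELECT * FROM t1 WHERE t1.a IN (SELECT b FROM t2 GROUP BY b);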
impossible WHERE/HAVING clause
(subselect_single_select_engine::exec).
The allocation and initialization of the joined table list t1, t2, ... of
subqueries like:
NOT IN (SELECT ... FROM t1,t2,... WHERE 0)
is optimized out; however, the server still tries to traverse this list.
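For example (the outer query is an assumption; the subquery shape follows the description above):

  SELECT * FROM t3
  WHERE t3.a NOT IN (SELECT t1.a FROM t1, t2 WHERE 0);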
Mixing aggregate functions and non-grouping columns is not allowed in the
ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't thrown because
of an insufficient check.
In order to check more thoroughly, the new algorithm employs a list of outer
fields used in a sum function and a SELECT_LEX::full_group_by_flag.
Each non-outer field is checked to find out whether it's aggregated or not and
the current select is marked accordingly.
All outer fields that are used under an aggregate function are added to the
Item_sum::outer_fields list and later checked by the Item_sum::check_sum_func
function.
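An illustrative sketch of the kind of query the new check has to reason about (hypothetical tables): the outer column t1.a is used only under an aggregate function inside the subquery, which is what Item_sum::outer_fields now records:

  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  SELECT (SELECT MAX(t1.a + t2.b) FROM t2) FROM t1;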
A view definition of the form SELECT ... FROM DUAL WHERE ... has
valid syntax, but use of such a view in SELECT or
SHOW CREATE VIEW statements causes an unexpected syntax error.
The server omits the FROM DUAL clause when storing the view body
string in a .frm file for further evaluation.
However, the syntax of a SELECT-without-FROM query is more
restrictive than the SELECT ... FROM DUAL syntax, and doesn't
allow the WHERE clause.
NOTE: this syntax difference is not documented.
The view registration procedure has been modified to
preserve the original structure of the view's body.
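For illustration, a minimal view of the affected form (names are assumptions):

  CREATE VIEW v1 AS SELECT 1 AS c FROM DUAL WHERE 1;
  SELECT * FROM v1;       -- failed with a syntax error before the fix
  SHOW CREATE VIEW v1;    -- likewise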
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
and Item_direct_ref constructor calls.
The order of the ref->field_name and ref->table_name arguments
of Item_ref and Item_direct_ref in the fix_inner_refs
function is inverted.
The problem is not about intervals and doesn't actually cause a 'full table scan'.
We have an optimization for DISTINCT: when we have
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the JOIN-ed table once we have found one conforming row.
It stopped working in 5.0 because we return NESTED_LOOP_OK when we come upon
that case in evaluate_join_record(), and that doesn't break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
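Illustration (assumed tables): with DISTINCT applied only to columns of the first table, the scan of the joined table can stop as soon as one matching row is found for the current t1 row:

  SELECT DISTINCT t1.a FROM t1, t2 WHERE t1.id = t2.t1_id;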
Two disjuncts containing equalities of the form key=const1 and key=const2 can
be merged into one if const1 is equal to const2. To check this, the common
collation of the constants was used rather than the collation of the field key.
For example, when the default collation of the constants was case insensitive
while the collation of the field was case sensitive, two OR-ed equality
predicates key='b' and key='B' were incorrectly merged into one, key='b'. As a
result, ref access was used instead of range access and wrong result sets were
returned in many cases.
Fixed the problem by comparing the constants in the OR-ed predicates using the
collation of the key field.
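Sketch of the affected pattern, assuming a schema where the column collation is case sensitive while the default collation of the constants is not:

  CREATE TABLE t1 (f VARCHAR(10) COLLATE latin1_bin, KEY (f));
  SELECT * FROM t1 WHERE f = 'b' OR f = 'B';
  -- under latin1_bin 'b' and 'B' differ, so the two disjuncts must not be merged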
The problem occurred when one had a subquery that had an equality X=Y where
Y referred to a named select list expression from the parent select. MySQL
crashed when trying to use the X=Y equality for ref-based access.
Fixed by allowing non-Item_field items in the described case.
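A hedged sketch of the described shape (hypothetical tables): y is a named select list expression of the parent select, and the subquery contains an equality against it that the optimizer may try to use for ref access:

  SELECT t1.a + 1 AS y,
         (SELECT 1 FROM t2 WHERE t2.b = y) AS sub
  FROM t1;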
There were two problems when inferring the correct field types resulting from
UNION queries.
- If the type is NULL for all corresponding fields in the UNION, the resulting
type would be NULL, while the type is BINARY(0) if there is just a single
SELECT NULL.
- If one SELECT in the UNION uses a subselect, a temporary table is created
to represent the subselect, and the result type defaults to a STRING type,
hiding the fact that the type was unknown (just a NULL value).
Fixed by remembering whether a field was created from a NULL value, passing
type NULL to the type coercion in that case, and creating a string field
as the result of the UNION only if the type would otherwise be NULL.
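A hedged illustration of the two cases, using CREATE TABLE ... SELECT to make the resulting column types visible (table names are assumptions); before the fix these could come out with inconsistent column types:

  CREATE TABLE u1 AS SELECT NULL AS c;                             -- single SELECT NULL
  CREATE TABLE u2 AS SELECT NULL AS c UNION SELECT NULL;           -- all branches NULL
  CREATE TABLE u3 AS SELECT (SELECT NULL) AS c UNION SELECT NULL;  -- one branch via a subselect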
The index (key_part_1, key_part_2) was erroneously considered as compatible
with the required ordering in the function test_if_order_by_key when
a query with an ORDER BY clause contained a condition of the form
key_part_1=const OR key_part_1 IS NULL
and the order list contained only key_part_2. This happened because the value
of the const_key_parts field in the KEYUSE structure was not formed correctly
for the keys that could be used for ref_or_null access.
This was fixed in the code of the update_ref_and_keys function.
The problem could not manifest itself for MyISAM databases because the
implementation of the keys_to_use_for_scanning() handler function always
returns an empty bitmap for the MyISAM engine.
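A sketch of the affected shape (assumed non-MyISAM schema, since the problem could not show up for MyISAM):

  CREATE TABLE t1 (key_part_1 INT, key_part_2 INT,
                   KEY k (key_part_1, key_part_2)) ENGINE=InnoDB;
  SELECT * FROM t1
  WHERE key_part_1 = 1 OR key_part_1 IS NULL
  ORDER BY key_part_2;
  -- the index k must not be assumed to return rows ordered by key_part_2 here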
Index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause. Reasons include e.g. conversion errors,
partial indexes, etc.
The optimizer was removing these parts of the WHERE
condition without any further checking.
This led to "false positives" when using indexes.
Fixed by checking the index reference conditions
(using WHERE) when using indexes with sub-queries.
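A heavily hedged sketch of the kind of situation meant here (assumed schema): the subquery lookup can use the index on t2.b, but comparing an INT to a VARCHAR involves a conversion, so the corresponding condition cannot simply be dropped from the WHERE clause:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b VARCHAR(10), KEY (b));
  SELECT * FROM t1 WHERE t1.a IN (SELECT b FROM t2);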
Loose index scan does the grouping so the temp table does
not need to do it, even when sorting.
Fixed by checking if the grouping is already done before
doing sorting and grouping in a temp table, and doing only
the sorting in that case.
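Illustration (assumed index on (a, b); whether a loose index scan is chosen depends on the data): grouping here can be done by the loose index scan, so the temporary table used for the ORDER BY only needs to sort:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT a, MIN(b) FROM t1 GROUP BY a ORDER BY MIN(b);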
Server failed in assert() when we tried to create a DECIMAL() temp field
with a scale of more than the allowed 30. Now we limit the scale to the
allowed maximum. A truncation warning is thrown as necessary.
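For example (a hedged sketch; the exact resulting scale depends on div_precision_increment), a division whose derived scale would exceed the DECIMAL maximum of 30 now gets its scale capped, with a truncation warning, instead of hitting the assertion:

  SELECT CAST(1 AS DECIMAL(31,30)) / 3;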