'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. This may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking if an error occurred before trying to send EOF
to the client.
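For illustration, a minimal sketch of the scenario (hypothetical
tables, not the original test case; the NULL-to-0 coercion assumes a
non-strict sql_mode):
  CREATE TABLE t1 (a INT PRIMARY KEY);   -- NOT NULL key column
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (0);
  -- The WHERE condition matches no rows, yet MAX() still produces
  -- one (NULL) row; NULL is coerced to 0 for the NOT NULL key and
  -- collides with the existing row, raising a duplicate-key error
  -- the result-row code did not expect.
  INSERT INTO t1 SELECT MAX(b) FROM t2 WHERE 1 = 0;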
EXPLAIN EXTENDED of a nested query containing an error:
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery. JOIN::destroy()
closes and frees temporary tables. However, fields of these
temporary tables may be listed in the st_select_lex::group_list
of the outer query, and that st_select_lex may not clean them
up properly. So, after the JOIN::destroy() call, that
st_select_lex::group_list may contain Item_field objects with
dangling pointers to freed temporary table Field objects. That
caused the crash.
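A sketch of the kind of statement involved (hypothetical tables;
no_such_col intentionally does not exist, so the subquery fails to
resolve with error 1054 while the outer query has a GROUP BY):
  CREATE TABLE t1 (a INT, b INT);
  EXPLAIN EXTENDED
  SELECT (SELECT no_such_col FROM t1) FROM t1 GROUP BY b;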
Original commentary:
Bug #37348: Crash in or immediately after JOIN::make_sum_func_list
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
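A hedged sketch of the pull-up (hypothetical tables; SUM(t1.a)
references only outer columns, so it is aggregated in the outer
select and the optimizer pulls it up, later substituting a temporary
table field for it):
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT);
  SELECT (SELECT SUM(t1.a) FROM t2 LIMIT 1) FROM t1 GROUP BY t1.b;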
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the singular case
where all the predicates at the AND level of the WHERE clause are
eliminated. That could lead to wrong results.
Fix: replace (a=a AND a=a...) with TRUE if we eliminated all the
predicates.
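A sketch of such a statement (hypothetical table; note that a = a can
only be eliminated as always true when column a is NOT NULL):
  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  -- All predicates at the AND level are eliminated; the remaining
  -- condition must evaluate to TRUE, so both rows are returned.
  SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 0;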
mysqld is optimized for the default case (up to 64 indices); for a
greater number of indices it goes through a different code path. As
that code path is enabled by a compile-time option and cannot easily
be covered in standard tests, bitrot occurred. Key fields need an
explicit initialization in the non-optimized case; this setup was
presumably not added when a new key vector was introduced.
The changeset adds the necessary initializations.
No test case is added due to the dependence on a compile-time option.
connections
The problem is that tables can enter the open table cache for a thread
without being properly cleaned up. This can happen if
make_join_statistics() fails to read a const table because of e.g. a
deadlock. It does set a member of the TABLE structure to a value it
allocates, but it neither cleans up this setting on error nor sets the
rest of the members in JOIN to allow for automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open table cache, it will get it with
TABLE::reginfo.join_tab pointing to a memory area that has been freed.
Fixed by making sure make_join_statistics() cleans up
TABLE::reginfo.join_tab on error.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be at most UINT_MAX in length
ORDER BY could cause a server crash
Dependent subqueries like:
  SELECT COUNT(*) FROM t1, t2
  WHERE t2.b IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the number of outer rows.
The make_simple_join() function has been converted into a JOIN class
method so that the join_tab_reexec and table_reexec values are stored
in the parent join only (make_simple_join() of tmp_join may access
these values via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is an
"out of memory" bug). See the bug #42037 page for test cases.
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery from the DUAL pseudo-table, the resulting HAVING
condition is an expression on constant values, so further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING,
and that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
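A sketch of the misbehavior (values chosen arbitrarily):
  -- Returned 1 (TRUE) before the fix, although the row values
  -- clearly differ; it must return 0.
  SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL);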
The greedy optimizer tracks the current level of nested joins and the position
inside these by setting and maintaining a state that's global for the whole FROM
clause.
This state was correctly maintained inside the selection of the next partial plan
table (in best_extension_by_limited_search()).
greedy_search() also moves the current position by adding the last
partial plan table when there are not enough tables in the partial
plan found by best_extension_by_limited_search().
This may require an update of the global state variables that
describe the current position in the plan if the last table placed by
greedy_search() is not a top-level join table.
Fixed by updating the state after placing the partial plan table in
greedy_search(), in the same way this is done on entering
best_extension_by_limited_search().
Also fixed the signature of the function called to update the state:
check_interleaving_with_nj().
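A hedged sketch of a query shape that exercises this path
(hypothetical tables; a minimal search depth makes greedy_search()
itself place the trailing tables, including one inside a nested
join):
  SET SESSION optimizer_search_depth = 1;
  SELECT COUNT(*)
  FROM t1
       LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a
       JOIN t4 ON t4.a = t1.a;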
A table could be marked dependent because it is either 1) an inner
table of an outer join, or 2) part of a STRAIGHT_JOIN. In the
STRAIGHT_JOIN case, table->maybe_null should not be set. The fix is
to set st_table::maybe_null to 'true' only for those tables which are
used in an outer join.
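For illustration (hypothetical tables):
  -- t2 is the inner table of an outer join: its columns may be
  -- NULL-complemented, so maybe_null must be set.
  SELECT t2.a FROM t1 LEFT JOIN t2 ON t1.a = t2.a;
  -- t2 only takes part in a STRAIGHT_JOIN: no NULL-complementing
  -- happens, so maybe_null must not be set.
  SELECT t2.a FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;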
Bug#37671 crash on prepared statement + cursor + geometry + too many open files!
If mysql_execute_command() returns an error, the materialized_cursor
object is now freed. The is_rnd_inited flag is added to satisfy the
rnd_end() assertion (the handler may be uninitialized in some cases).
ONLY_FULL_GROUP_BY
The check for non-aggregated columns in queries with aggregate
functions but without GROUP BY was treating all parts of the query as
if they were in the SELECT list.
Fixed by ignoring the non-aggregated fields in the WHERE clause.
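A sketch of a query that was wrongly rejected (hypothetical table
t1(a, b)): the non-aggregated column a appears only in the WHERE
clause, which is legal even under ONLY_FULL_GROUP_BY:
  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  SELECT MAX(b) FROM t1 WHERE a = 1;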
fails after the first time
Two separate problems:
1. When flattening joins, the linked list used for name resolution
(next_name_resolution_table) was not updated.
Fixed by updating the pointers when extending the table list.
2. The items created by expanding a * (star) as a column reference
were marked as fixed, but no cached table was assigned to them
(unlike what Item_field::fix_fields does).
Fixed by assigning a cached table (so the re-preparation is done
faster).
Note that the fix for #2 hides the fix for #1 in most cases
(except when a table reference cannot be cached).
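A hedged sketch of the re-execution scenario (hypothetical tables;
the star expansion over a flattenable nested join exercises both
problems):
  PREPARE s FROM
    'SELECT * FROM (t1 JOIN t2 ON t1.a = t2.a)
                   JOIN t3 ON t2.a = t3.a';
  EXECUTE s;
  EXECUTE s;  -- failed on re-execution before the fix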
Server crashed during a sort order optimization
of a dependent subquery:
SELECT
(SELECT t1.a FROM t1, t2
WHERE t1.a = t2.b AND t2.a = t3.c
ORDER BY t1.a)
FROM t3;
The bitmap of tables used by a reference to an outer table column
has, in addition to the regular table bit, the OUTER_REF_TABLE_BIT
bit set.
The only_eq_ref_tables function traverses this map bit by bit,
simultaneously with the join->map2table list. join->map2table never
contains an entry for the OUTER_REF_TABLE_BIT pseudo-table, so the
server crashed there.
The only_eq_ref_tables function has been modified to traverse regular
table bits only, like the update_depend_map function does (resetting
OUTER_REF_TABLE_BIT alone would be enough, but the whole set of
PSEUDO_TABLE_BITS is reset there to be safe).
with COALESCE and JOIN
The server returned to a client the VARBINARY column type
instead of the DATE type for a result of the COALESCE,
IFNULL, IF, CASE, GREATEST or LEAST functions if that result
was filesorted in an anonymous temporary table during
the query execution.
For example:
SELECT COALESCE(t1.date1, t2.date2) AS result
FROM t1 JOIN t2 ON t1.id = t2.id ORDER BY result;
To create a column of one of the various date/time types in a
temporary table, the create_tmp_field_from_item() function uses the
Item::tmp_table_field_from_field_type() method call. However, fields
of the MYSQL_TYPE_NEWDATE type were missed there, and VARBINARY
columns were created by default.
The necessary check has been added.
crashes server
When creating a temporary table that contains aggregate functions, a
non-reversible source transformation was performed to redirect
aggregate function arguments towards temporary table columns.
This caused EXPLAIN EXTENDED to fail because it was trying to resolve
references to the (freed) temporary table.
Fixed by preserving the original aggregate function arguments and
using them (instead of the transformed ones) for EXPLAIN EXTENDED.
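A sketch of the flavor of statement affected (hypothetical table,
possibly not the original test case; the SUM argument gets redirected
to a temporary table column, which EXPLAIN EXTENDED then tried to
print):
  CREATE TABLE t1 (a INT, b INT);
  EXPLAIN EXTENDED SELECT SUM(a) FROM t1 GROUP BY b;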
used causes server crash.
When the loose index scan access method is used, values of aggregate
functions are precomputed by it. Aggregation of such functions
shouldn't be performed in this case, and the functions should be
treated as normal ones.
The create_tmp_table function wasn't taking this into account, and
this led to a crash if a query had MIN/MAX aggregate functions and
employed both a temporary table and a loose index scan.
Now the JOIN::exec and create_tmp_table functions treat MIN/MAX
aggregate functions as normal ones when the loose index scan is used.
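A hedged sketch (hypothetical table with a composite index; DISTINCT
forces a temporary table while the (a, b) index allows a loose index
scan that precomputes MAX(b)):
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT DISTINCT a, MAX(b) FROM t1 GROUP BY a;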