The optimizer chose a less efficient execution plan due to the following
defects in the code:
1. the generic handler function handler::keyread_time did not take into account
that, for clustered primary keys, the record data is included in each index entry
2. the function make_join_readinfo erroneously decided that an index-only scan
could not be used if a join cache was employed.
Added no additional test case.
Adjusted some of the test results.
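A minimal illustrative sketch of the affected situation (schema and query are hypothetical, not taken from the test suite): with both defects fixed, a table read through a join buffer can again use an index-only scan, and the keyread cost of a clustered primary key reflects that each index entry carries the full record.

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT, b INT, KEY (b)) ENGINE=InnoDB;
  -- Only indexed columns of t2 are referenced, so its scan can be index-only
  -- ("Using index") even when "Using join buffer" is chosen for the join.
  EXPLAIN SELECT t1.a, t2.b FROM t1, t2 WHERE t2.b < 10;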
Changed HA_EXTRA_NORMAL to HA_EXTRA_NOT_USED (cleaner)
mysql-test/suite/maria/lock.result:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
mysql-test/suite/maria/lock.test:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
sql/sql_admin.cc:
Fix that REPAIR TABLE ... USE_FRM works with LOCK TABLES
sql/sql_base.cc:
Ensure that transactions are closed in Aria when doing flush
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
Don't call extra() multiple times for a table in close_all_tables_for_name()
Added a check that table_list->table is set, as it can be NULL in error situations
sql/sql_partition.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_reload.cc:
Fixed comment
sql/sql_table.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_trigger.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_truncate.cc:
HA_EXTRA_FORCE_REOPEN -> HA_EXTRA_PREPARE_FOR_DROP for truncate, as this speeds up truncate by not having to flush the cache to disk.
mysql-test/suite/maria/maria-partitioning.result:
New test case
mysql-test/suite/maria/maria-partitioning.test:
New test case
sql/sql_base.cc:
Ignore HA_EXTRA_NORMAL for wait_while_table_is_used()
More DBUG
sql/sql_partition.cc:
Don't use HA_EXTRA_FORCE_REOPEN for wait_while_table_is_used(), as the table is opened multiple times (in prep_alter_part_table)
This fixes the assert in Aria that checks whether the table is opened multiple times when HA_EXTRA_FORCE_REOPEN is issued
- 5.5 was missing calls to ha_extra(HA_PREPARE_FOR_DROP | HA_PREPARE_FOR_RENAME); Lost in merge 5.3 -> 5.5
sql/sql_admin.cc:
Updated arguments for close_all_tables_for_name
sql/sql_base.h:
Updated arguments for close_all_tables_for_name
sql/sql_partition.cc:
Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
Updated arguments for close_all_tables_for_name
Removed test of kill, as we have already called 'ha_extra(HA_PREPARE_FOR_DROP)' and the table may be inconsistent.
sql/sql_trigger.cc:
Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
For truncate that is done with drop + recreate, signal that the table will be dropped.
This will continue the test case even if there was an error
and makes it easier to run a test that contains many subtests against one engine.
(originally by Monty)
If we did nothing to resolve a unique table conflict, we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only if we materialized a table.
- Let fix_semijoin_strategies_for_picked_join_order() set
POSITION::prefix_record_count for POSITION records that it copies from
SJ_MATERIALIZATION_INFO::tables.
(These records do not have prefix_record_count set, because they are optimized
as joins-inside-semijoin-nests, without full advance_sj_state() processing).
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
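A hedged example of the hazard (tables hypothetical): if t3 is empty, the ALL predicate below is TRUE even for NULL-complemented t2 rows, so the predicate is not null-rejecting and the LEFT JOIN must not be converted to an inner join.

  SELECT *
  FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  WHERE t2.b > ALL (SELECT b FROM t3);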
The failures are missing entries in the slow query log. The reason for the failures is sleep() calls with a short duration (10ms), which is less than the default system timer resolution of the various WaitForXXXObject functions (15.6ms) and thus cannot work reliably.
The fix is to make the sleeps slightly longer (20ms instead of 10ms) in the test.
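A hedged sketch of the idea only (the actual test statements and threshold may differ): keep every deliberately slow statement comfortably above the ~15.6ms Windows timer resolution so it reliably exceeds the slow-log threshold.

  SET SESSION long_query_time = 0.01;  -- illustrative 10ms threshold
  SELECT SLEEP(0.02);                  -- a 20ms sleep stays above the timer resolution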
For the mysqltest assign-variable command with a backtick query, e.g.
  let x = `SELECT <something>`
the fix is to detect the "no active connection" condition, report an error and die.
Note that the check for no active connection was already in place for ordinary commands
and was missing only for the assign-variable command.
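A minimal mysqltest-style sketch of the scenario (connection name and query are hypothetical): once the last connection is gone, the backtick evaluation has nothing to run against and now fails with an explicit error instead of continuing.

  disconnect default;
  # With no active connection this now reports an error and aborts the test
  let $x = `SELECT 1`;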
The optimization of aggregate functions detected a constant under MAX() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
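An illustrative sketch of the class of query involved (table and values hypothetical): with a constant-false WHERE clause the statement selects no rows, so MAX() must return NULL rather than the pre-evaluated constant.

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  -- Expected result: NULL (no rows match), not 1
  SELECT MAX(1) FROM t1 WHERE 1 = 2;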
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
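A hedged sketch of the pattern described above (schema hypothetical): when loose index scan satisfies the GROUP BY, grouping is already resolved by the access method, so the temporary table forced by SQL_BUFFER_RESULT must not be relied on to form the groups again.

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  -- EXPLAIN may show "Using index for group-by"; the hint still forces a temporary table
  SELECT SQL_BUFFER_RESULT a, MIN(b) FROM t1 GROUP BY a;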
This is a backport of the fix for MySQL bug #13723054 in 5.6.
Original comment:
The crash is caused by overwriting an arbitrary memory area when, for BLOB fields, an
attempt is made to copy the BLOB field key image into the record buffer (the record
buffer is too small to hold the BLOB key part image).
note:
QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
because it uses the record buffer as a temporary buffer for key values;
however, this case is filtered out by the covering_keys() check
in get_best_group_min_max(), as BLOBs always require a key length
modifier in the key declaration, and a key containing a BLOB
can never be a covering key.
The fix is to use the 'max_used_key_length' key length instead of 0.
Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field, which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
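A small schema sketch illustrating the note above (names hypothetical): a BLOB key part must declare a prefix length, so an index containing a BLOB can never cover that column, which is why get_best_group_min_max() filters this case out via the covering_keys() check.

  -- b(255) is only a prefix of the BLOB, so k1 can never be a covering index for b
  CREATE TABLE t1 (a INT, b BLOB, KEY k1 (a, b(255)));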
The problem was the increment of the aborted_threads variable due to thd->killed, which was set when a threadpool connection was terminated. The fix is to not set thd->killed anymore; there is no real reason for doing it.
Added a test that checks that the status variable aborted_clients does not grow for ordinary disconnects, and that a successful KILL increments this variable.
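A minimal sketch of the kind of check the test performs (exact statements may differ): sample the counter, do an ordinary disconnect and then a KILL, and compare.

  -- The value must be unchanged after a clean disconnect,
  -- and incremented by one after a successful KILL
  SHOW GLOBAL STATUS LIKE 'Aborted_clients';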
The problem was in the code (update_const_equal_items()) that marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.
The solution is to mark as constant only top-level equalities connected by AND.
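An illustrative sketch of the distinction (schema hypothetical, loosely mirroring the t2_1.c case): the first equality is a top-level AND conjunct and may mark its index part constant; the second sits under OR and must not.

  SELECT * FROM t1, t2
  WHERE t2.a = 10                 -- top-level constant equality: index part on t2.a may be marked constant
    AND (t2.c = 1 OR t2.b = 5);   -- t2.c = 1 is under OR: must not mark the index part on t2.c constant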