Fixed some mtr test problems
dbug/tests.c:
Fixed compiler warnings
mysql-test/r/handlersocket.result:
Fixed that plugin_license is written
mysql-test/suite/innodb/t/innodb_bug60196.test:
Force sorted results as the output was sometimes different on Windows
mysql-test/suite/rpl/t/rpl_heartbeat_basic.test:
Prolong the test as it failed on Windows
mysql-test/t/handlersocket.test:
Fixed that plugin_license is written
plugin/handler_socket/handlersocket/handlersocket.cpp:
Use maria_declare_plugin
plugin/handler_socket/handlersocket/mysql_incl.hpp:
Fixed compiler warning
plugin/handler_socket/libhsclient/auto_addrinfo.hpp:
Fixed compiler warning
sql/handler.h:
Fixed typo
sql/sql_plugin.cc:
Fixed a bug that caused the plugin library name to appear twice in the error message
storage/maria/ma_checkpoint.c:
Fixed compiler warning
storage/maria/ma_loghandler.c:
Fixed compiler warning
unittest/mysys/base64-t.c:
Fixed compiler warning
unittest/mysys/bitmap-t.c:
Fixed compiler warning
unittest/mysys/my_malloc-t.c:
Fixed compiler warning
The fix is done by doing an autocommit for TRUNCATE TABLE inside Aria
storage/maria/ha_maria.cc:
Force a commit for TRUNCATE TABLE inside lock tables
Check that we don't call TRUNCATE with concurrent inserts going on.
Make ha_maria::implicit_commit faster when we don't have Aria tables in the transaction.
(Most of the patch is just re-indentation because I removed an if level)
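
As an illustration of the idea only (not the actual ha_maria.cc code), here is a minimal C++ sketch with made-up types (Transaction, TableRef, commit_in_engine): the implicit commit takes a fast path when no Aria table is part of the transaction, and is forced for TRUNCATE TABLE under LOCK TABLES.

  #include <vector>

  struct TableRef { bool is_aria; };

  struct Transaction {
    std::vector<TableRef> tables;   // tables touched by the current statement
    bool inside_lock_tables;        // true between LOCK TABLES and UNLOCK TABLES
  };

  // Stand-in for the engine-level commit call.
  bool commit_in_engine(Transaction &) { return true; }

  bool has_aria_tables(const Transaction &trx)
  {
    for (const TableRef &t : trx.tables)
      if (t.is_aria)
        return true;
    return false;
  }

  // force=true is what TRUNCATE TABLE under LOCK TABLES would pass, so its
  // changes are committed even though the tables stay locked.
  bool implicit_commit(Transaction &trx, bool force)
  {
    if (!has_aria_tables(trx))           // fast path: nothing Aria-related to commit
      return true;
    if (trx.inside_lock_tables && !force)
      return true;                       // normally the commit waits for UNLOCK TABLES
    return commit_in_engine(trx);
  }
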
- make make_cond_after_sjm() correctly handle OR clauses where one branch refers to the semi-join table
while the other branch refers to the non-semijoin table.
The optimizer chose a less efficient execution plan due to the following
defects in the code:
1. the generic handler function handler::keyread_time did not take into account
that with a clustered primary key the record data is included in each index entry
(see the sketch below)
2. the function make_join_readinfo erroneously decided that an index-only scan
could not be used if a join cache was employed.
No additional test case was added.
Adjusted some of the test results.
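
A minimal sketch of the costing idea behind defect 1 above, using invented names (IndexStats, keyread_cost) rather than the real handler::keyread_time() signature: with a clustered primary key the per-entry width is the full record, so an index-only read must be costed like reading the rows themselves.

  struct IndexStats {
    bool   clustered_primary;   // index entries carry the full record
    double key_length;          // bytes per key entry in a secondary index
    double record_length;       // bytes per full table record
    double block_size;          // I/O block size in bytes
  };

  double keyread_cost(const IndexStats &idx, double ranges, double rows)
  {
    // For a clustered primary key an "index only" read still pulls whole
    // records, so the per-entry width is the record length, not the key length.
    double entry_len = idx.clustered_primary ? idx.record_length : idx.key_length;
    double entries_per_block = idx.block_size / entry_len;
    if (entries_per_block < 1.0)
      entries_per_block = 1.0;
    // Rough model: one seek per range plus the number of index blocks scanned.
    return ranges + rows / entries_per_block;
  }
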
Changed HA_EXTRA_NORMAL to HA_EXTRA_NOT_USED (cleaner)
mysql-test/suite/maria/lock.result:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
mysql-test/suite/maria/lock.test:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
sql/sql_admin.cc:
Fix that REPAIR TABLE ... USE_FRM works with LOCK TABLES
sql/sql_base.cc:
Ensure that transactions are closed in Aria when doing flush
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
Don't call extra() multiple times for a table in close_all_tables_for_name()
Added a check that table_list->table is set, as it may not be in error situations
sql/sql_partition.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_reload.cc:
Fixed comment
sql/sql_table.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_trigger.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_truncate.cc:
HA_EXTRA_FORCE_REOPEN -> HA_EXTRA_PREPARE_FOR_DROP for truncate, as this speeds up truncate by not having to flush the cache to disk.
mysql-test/suite/maria/maria-partitioning.result:
New test case
mysql-test/suite/maria/maria-partitioning.test:
New test case
sql/sql_base.cc:
Ignore HA_EXTRA_NORMAL for wait_while_table_is_used()
More DBUG
sql/sql_partition.cc:
Don't use HA_EXTRA_FORCE_REOPEN for wait_while_table_is_used() as the table is opened multiple times (in prep_alter_part_table)
This fixes the assert in Aria that checks whether a table is opened multiple times when HA_EXTRA_FORCE_REOPEN is issued
- 5.5 was missing calls to ha_extra(HA_EXTRA_PREPARE_FOR_DROP | HA_EXTRA_PREPARE_FOR_RENAME); lost in the 5.3 -> 5.5 merge
sql/sql_admin.cc:
Updated arguments for close_all_tables_for_name
sql/sql_base.h:
Updated arguments for close_all_tables_for_name
sql/sql_partition.cc:
Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
Updated arguments for close_all_tables_for_name
Removed the test of kill, as we have already called 'ha_extra(HA_EXTRA_PREPARE_FOR_DROP)' and the table may be inconsistent.
sql/sql_trigger.cc:
Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
For truncate that is done with drop + recreate, signal that the table will be dropped.
This will continue the test case even if there was an error,
which makes it easier to run a test that contains many sub-tests against one engine.
(originally by Monty)
If we did nothing to resolve a unique table conflict we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only if we materialized a table.
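
A minimal sketch of that retry rule, with invented names (Resolution, recheck_unique_tables) and the conflict-resolution step supplied by the caller; not the actual server code:

  #include <functional>

  struct Resolution {
    bool conflict_found;   // a unique-table conflict is still present
    bool materialized;     // resolving it materialized a derived/temporary table
  };

  // Retry the unique-table check only when the previous round changed something.
  bool recheck_unique_tables(const std::function<Resolution()> &resolve_conflict)
  {
    for (;;) {
      Resolution r = resolve_conflict();
      if (!r.conflict_found)
        return true;           // no conflict left: done
      if (!r.materialized)
        return false;          // nothing was done, retrying would loop forever
      // a table was materialized, so the check is worth repeating
    }
  }
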
- Let fix_semijoin_strategies_for_picked_join_order() set
POSITION::prefix_record_count for POSITION records that it copies from
SJ_MATERIALIZATION_INFO::tables.
(These records do not have prefix_record_count set, because they are optimized
as joins-inside-semijoin-nests, without full advance_sj_state() processing).
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
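
The shape of the fix, sketched with stand-in class names (Item_func_sketch, Item_in_optimizer_sketch) rather than the real Item hierarchy: the subquery predicate overrides not_null_tables() to report no tables instead of inheriting the null-rejecting default.

  #include <cstdint>

  using table_map = std::uint64_t;   // bitmap of table bits, as in the server

  struct Item_sketch {
    virtual table_map not_null_tables() const = 0;
    virtual ~Item_sketch() = default;
  };

  // Default for functions: the result is assumed to reject NULL-extended rows
  // from every table its arguments use.
  struct Item_func_sketch : Item_sketch {
    table_map used_tables_map = 0;
    table_map not_null_tables() const override { return used_tables_map; }
  };

  // ALL/ANY/IN subquery predicates are not null-rejecting: they can still be
  // satisfied for NULL-extended rows, so they must not claim any table.
  struct Item_in_optimizer_sketch : Item_func_sketch {
    table_map not_null_tables() const override { return 0; }
  };
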
The failures are missing entries in the slow query log. The reason for the failure is sleep() calls with a short duration (10 ms), which is less than the default system timer resolution for the various WaitForXXXObject functions (15.6 ms) and thus cannot work reliably.
The fix is to make the sleeps slightly longer (20 ms instead of 10 ms) in the test.
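
For illustration only (not part of the test), a standalone C++ program that measures how long short sleeps actually take; on a Windows box with the default ~15.6 ms timer granularity a 10 ms request may take noticeably longer than asked for.

  #include <chrono>
  #include <cstdio>
  #include <initializer_list>
  #include <thread>

  int main()
  {
    using namespace std::chrono;
    for (int requested_ms : {10, 20}) {
      auto start = steady_clock::now();
      std::this_thread::sleep_for(milliseconds(requested_ms));
      auto got = duration_cast<milliseconds>(steady_clock::now() - start).count();
      // With coarse timer granularity the 10 ms request can come back as
      // roughly 15-16 ms, which is what broke the test's timing assumptions.
      std::printf("requested %d ms, slept %lld ms\n", requested_ms,
                  static_cast<long long>(got));
    }
    return 0;
  }
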
let x = `SELECT <something>`
The fix is to detect the "no active connection" condition, report an error and die.
Note that the check for no active connection was already in place for ordinary commands
and was missing only for the assign-variable command.
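
A rough sketch of the added guard, with invented names and illustrative error text; not the real mysqltest code:

  #include <cstdio>
  #include <cstdlib>

  struct Connection;   // opaque stand-in for mysqltest's connection object

  // Report the error and abort, as a test driver would.
  [[noreturn]] void die(const char *msg)
  {
    std::fprintf(stderr, "error: %s\n", msg);
    std::exit(1);
  }

  // Evaluating `let x = `SELECT ...`` needs a connection to run the query on,
  // so the assign-variable path gets the same guard ordinary commands already had.
  void eval_let_backtick(const Connection *cur_con, const char *query)
  {
    if (cur_con == nullptr)
      die("cannot evaluate a `...` query expression without an active connection");
    (void) query;   // ... run the query and assign its result to the variable ...
  }
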
The optimization of aggregate functions detected a constant under MAX() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
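
The intended semantics, sketched as a tiny C++ helper that only covers the case where the WHERE condition is a known constant (std::nullopt stands for SQL NULL); the real optimizer code checks many more cases.

  #include <optional>

  // MAX() over an empty result set is NULL, so an always-FALSE WHERE clause
  // must win over the "fold the constant argument" shortcut.
  std::optional<int> fold_max_of_constant(int constant_arg,
                                          bool where_is_always_false)
  {
    if (where_is_always_false)
      return std::nullopt;    // the WHERE clause selects no rows at all
    return constant_arg;      // otherwise the constant itself is the maximum
  }
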
The patch backports two patches from MySQL 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
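
A minimal sketch of that decision, with invented flag names; not the server's make_simple_join() logic:

  struct GroupingPlan {
    bool uses_tmp_table;           // SQL_BUFFER_RESULT, INSERT ... SELECT, ...
    bool tmp_table_does_grouping;  // the group list was stored in the tmp table
    bool group_optimized_away;     // the GROUP BY clause could be removed
    bool loose_index_scan;         // "Using index for group-by"
  };

  // true means the rows arriving from the tmp table are already grouped and
  // JOIN::group can safely be cleared.
  bool grouping_already_resolved(const GroupingPlan &p)
  {
    if (!p.uses_tmp_table)
      return false;                       // grouping still happens in the join
    if (p.tmp_table_does_grouping)
      return true;                        // the tmp table enforces the groups
    // Tmp table with an empty group list: only safe when loose index scan
    // already delivered exactly one row per group into the table.
    return p.group_optimized_away && p.loose_index_scan;
  }
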