* handler::get_auto_increment() was not expecting any errors from the storage engine.
That was wrong; errors can happen there.
* ha_partition::get_auto_increment() was doing index lookups in partitions under a mutex.
This was redundant (the engine's transaction isolation already covered it)
and harmful (it caused deadlocks).
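A minimal sketch of the error path, assuming the usual handler convention:
get_auto_increment() returns void and signals failure by setting the first
value to ULONGLONG_MAX, which the caller maps to a real error instead of
assuming success. The engine-side names here are illustrative.

    void ha_example::get_auto_increment(ulonglong offset, ulonglong increment,
                                        ulonglong nb_desired_values,
                                        ulonglong *first_value,
                                        ulonglong *nb_reserved_values)
    {
      if (engine_lookup_failed())       /* hypothetical engine-side failure */
      {
        *first_value= ULONGLONG_MAX;    /* conventional error marker */
        return;
      }
      /* otherwise reserve nb_desired_values values as usual */
    }

    /* caller side (cf. handler::update_auto_increment()): */
    get_auto_increment(off, inc, n, &nr, &n_reserved);
    if (nr == ULONGLONG_MAX)
      return HA_ERR_AUTOINC_READ_FAILED;  /* propagate the failure */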
The problem was caused by the following scenario:
- range optimizer picks an index IDX1 which doesn't match the ORDER BY ...
LIMIT clause.
- test_if_skip_sort_order() decides to switch to index IDX2 which matches
the ORDER BY ... LIMIT.
- it runs SQL_SELECT::test_quick_select() for the second time to produce
a quick select for IDX2.
- However, test_quick_select() would conclude that a full index scan on IDX1
is still cheaper (its cost calculations ignore the LIMIT n).
Fixed this by
- passing force_quick_range=true to test_quick_select()
- in test_quick_select(), not considering full index scans when that
parameter is true (see the sketch below).
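A sketch of the shape of the fix; the caller-side names and the cost
variable are assumptions, not quotes from the source.

    /* test_if_skip_sort_order(), after switching to IDX2: ask for a real
       range/quick access only */
    select->test_quick_select(thd, usable_keys, 0, select_limit,
                              /* force_quick_range= */ true,
                              /* ordered_output=    */ true);

    /* inside SQL_SELECT::test_quick_select(): */
    if (force_quick_range)
      read_time= DBL_MAX;   /* a full index scan must never win here;
                               its cost estimate ignores the LIMIT n */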
Numerous changes in .result files are caused by test_quick_select() being
run after the "early/late NULLs filtering" feature has injected a NOT NULL
condition.
- Key_value_records_iterator::get_next() should pass a pointer to the key
to handler->ha_index_next_same(). Because of a typo, a pointer-to-pointer
was passed instead in certain cases.
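The class of bug, sketched with illustrative variable names:

    uchar *key;                           /* points at the key image */
    /* intended call: */
    handler->ha_index_next_same(record, key, key_length);
    /* the typo: passing the address of the pointer, so the engine reads
       sizeof(uchar*) bytes of pointer value as if they were key bytes */
    handler->ha_index_next_same(record, (uchar*) &key, key_length);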
mysql-test/r/create_or_replace.result:
More tests for create or replace
mysql-test/t/create_or_replace.test:
More tests for create or replace
sql/log.cc:
Don't use binlog_hton if the binlog is not enabled
sql/sql_base.cc:
We have to call restart_trans_for_tables() also if tables were not locked with LOCK TABLES.
If not, we will get a crash in TokuDB
sql/sql_insert.cc:
Don't call binlog_reset_cache() if we don't have binary log open
sql/sql_table.cc:
Don't log to the binary log if it is not open
Better test for whether we were using create or replace ... select
storage/tokudb/mysql-test/tokudb_mariadb/r/create_or_replace.result:
More tests for create or replace
storage/tokudb/mysql-test/tokudb_mariadb/t/create_or_replace.test:
More tests for create or replace
MDEV-5839 TokuDB tables not properly cleaned on DROP DATABASE
TokuDB does not implement discover_table_names() and writes no files
in the database directory, so the automatic filename-based
discover_table_names() doesn't work either. It must therefore force the
.frm file to disk in ::create().
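A sketch of the idea; whether write_frm_image() is the exact server helper
used, and the error mapping, are assumptions.

    int ha_tokudb::create(const char *name, TABLE *form, HA_CREATE_INFO *info)
    {
      /* create the table inside TokuDB's own dictionary as before ... */

      /* TokuDB leaves nothing in the database directory, so DROP DATABASE's
         directory scan would miss the table; write the .frm image to disk
         so filename-based discovery keeps working: */
      if (form->s->write_frm_image())     /* assumed server helper */
        return HA_ERR_GENERIC;            /* illustrative error mapping */
      return 0;
    }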
Made stopping of the slave more robust
Fixed tokudb test cases that gave different results between runs
Speed up some slow tokudb tests by adding begin ... commit
mysql-test/extra/rpl_tests/rpl_stop_slave.test:
Ensure that slaves are properly synced before they are stopped.
(Otherwise some test results will differ between runs)
storage/innobase/buf/buf0buf.cc:
Fixed compiler warning
storage/tokudb/mysql-test/tokudb/r/cluster_filter_unpack_varchar_and_int_hidden.result:
The test case could be solved with either an index scan or a range scan,
giving different results between runs.
storage/tokudb/mysql-test/tokudb/t/cluster_filter_unpack_varchar_and_int_hidden.test:
The test case could be solved with either an index scan or a range scan,
giving different results between runs.
storage/tokudb/mysql-test/tokudb_bugs/r/5733_innodb.result:
Speed up test by adding begin...commit
storage/tokudb/mysql-test/tokudb_bugs/r/5733_tokudb.result:
Speed up test by adding begin...commit
storage/tokudb/mysql-test/tokudb_bugs/t/5733_innodb.test:
Speed up test by adding begin...commit
storage/tokudb/mysql-test/tokudb_bugs/t/5733_tokudb.test:
Speed up test by adding begin...commit
storage/tokudb/mysql-test/tokudb_mariadb/r/compression.result:
Added drop table (safety)
storage/tokudb/mysql-test/tokudb_mariadb/t/compression.test:
Added drop table (safety)
- the problem was caused by EXPLAIN INSERT SELECT. For that statement,
the code would call select_insert::prepare2(), which would call
handler->ha_start_bulk_insert(). The corresponding handler->end_bulk_insert()
call is made from select_insert::send_eof() or select_insert::abort_result_set(),
which are never called for EXPLAIN INSERT SELECT.
- Fixed by reusing the approach of mysql-5.6: don't call ha_start_bulk_insert()
if we are in EXPLAIN (see the sketch below).
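A sketch of the guard; the exact condition in select_insert::prepare2() may
differ, but thd->lex->describe is the server's EXPLAIN flag.

    int select_insert::prepare2(void)
    {
      DBUG_ENTER("select_insert::prepare2");
      if (thd->lex->current_select->options & OPTION_BUFFER_RESULT &&
          !thd->lex->describe)            /* skip under EXPLAIN: send_eof()/
                                             abort_result_set() never run,
                                             so end_bulk_insert() would be
                                             missed */
        table->file->ha_start_bulk_insert((ha_rows) 0);
      DBUG_RETURN(0);
    }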