This bug is caused by incorrect computation and evaluation of
keyinfo->key_length. The issues were (a corrected sketch follows the list):
* Using table->file->max_key_length() as an absolute value that must not be
reached by a key, while it actually represents the maximum number of bytes
possible for a table key.
* Incorrectly computing the keyinfo->key_length size during
KEY_PART_INFO creation: the key's metadata, such as the field length
(for strings), was added twice.
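A minimal corrected sketch, with hypothetical names (KeyPart, key_fits) standing in for the real KEY_PART_INFO plumbing: the length-prefix metadata is counted exactly once per part, and max_key_length is treated as an inclusive limit.

  #include <cstdint>
  #include <vector>

  struct KeyPart {
    uint16_t data_length;   // bytes of field data stored in the key
    uint16_t length_bytes;  // length-prefix bytes for strings, 0 otherwise
  };

  static bool key_fits(const std::vector<KeyPart> &parts,
                       uint32_t max_key_length)
  {
    uint32_t key_length= 0;
    for (const KeyPart &p : parts)
      key_length+= p.data_length + p.length_bytes; // metadata added once, not twice
    // max_key_length is the maximum possible size of a table key, so a key
    // of exactly that size is still valid: compare with <=, not <.
    return key_length <= max_key_length;
  }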
Do not use merge_for_insert for commands that use SELECT, because the optimizer can't work with such tables.
Fixes that make multi-delete work with normally merged views.
Reset default fields not for every modified row, but only once,
at the beginning, as the set of modified fields doesn't change.
Exception: for INSERT ... ON DUPLICATE KEY UPDATE the set of fields
does change per row, so in that case we reset default fields per row.
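A minimal sketch of the pattern, assuming hypothetical helpers (compute_default_fields(), reset_default_fields(), write_row()) in place of the real server calls:

  #include <vector>

  struct Row {};
  struct FieldSet {};

  // Hypothetical stubs, for illustration only.
  static FieldSet compute_default_fields() { return FieldSet{}; }
  static void reset_default_fields(const FieldSet &) {}
  static void write_row(const Row &, const FieldSet &) {}

  static void write_rows(const std::vector<Row> &rows, bool on_dup_key_update)
  {
    // Ordinary case: the set of modified fields is fixed for the whole
    // statement, so defaults are reset once, before the loop.
    FieldSet defaults= compute_default_fields();
    reset_default_fields(defaults);
    for (const Row &row : rows)
    {
      if (on_dup_key_update)
      {
        // INSERT ... ON DUPLICATE KEY UPDATE: the set of updated fields
        // can differ per row, so defaults are reset for every row.
        defaults= compute_default_fields();
        reset_default_fields(defaults);
      }
      write_row(row, defaults);
    }
  }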
Extend existing plugins to support
* SHOW QUERY_RESPONSE_TIME
* FLUSH QUERY_RESPONSE_TIME
* SHOW LOCALE
Move userstat tables to use the new API instead of
hand-coded syntax.
- "ANALYZE $stmt" should discard select's output, but it should still
evaluate the output columns (otherwise, subqueries in select list
are not executed)
- SHOW EXPLAIN's code practice of calling JOIN::save_explain_data()
after JOIN::exec() is disastrous for ANALYZE, because it resets
all counters after the first execution. It is stopped
= "Late" test_if_skip_sort_order() calls explicitly update their part
of the query plan.
= Also, I had to rewrite I_S optimization to actually have optimization
and execution stages.
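The first point can be illustrated with toy types (this is not the server's select_result hierarchy): the sink evaluates every output column, so subqueries in the select list still execute, but never sends the row to the client.

  #include <vector>

  struct Item {                    // stand-in for one output-column expression
    virtual ~Item() = default;
    virtual long long val()= 0;    // evaluating it may run a subquery
  };

  struct AnalyzeDiscardResult {
    bool send_data(const std::vector<Item*> &columns)
    {
      for (Item *item : columns)
        (void) item->val();        // evaluate for side effects...
      return false;                // ...but discard the row
    }
  };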
- Make JOIN::const_key_parts include keyparts for which
the WHERE clause has an equality of the form
"t.key_part=reference_outside_this_select".
- This allows us to avoid filesort in some cases (and also to
avoid a difficult choice between using filesort and using an index);
a sketch follows.
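A toy illustration of the effect: with an index on (a, b) and a WHERE equality binding t.a to a reference outside the select, "ORDER BY t.b" can be read in index order and the filesort skipped. The prefix check below is a hypothetical simplification of the real logic.

  #include <cstdint>

  // const_key_parts: bitmap of keyparts bound to constants within this
  // select; the patch adds equalities to references outside the select
  // as another source of such bits.
  static bool can_skip_sort_order(uint32_t const_key_parts,
                                  unsigned first_order_by_part)
  {
    // Every keypart preceding the first ORDER BY keypart must be constant
    // for rows to come out of the index already in the requested order.
    uint32_t prefix_mask= (1u << first_order_by_part) - 1;
    return (const_key_parts & prefix_mask) == prefix_mask;
  }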
MDEV-5839 TokuDB tables not properly cleaned on DROP DATABASE
TokuDB does not support discover_table_names() and writes no files
in the database directory, so automatic filename-based
discover_table_names() doesn't work either. Hence it must force the .frm
file to disk in ::create().
Let TABLE_SHARE::tdc.free_tables, TABLE_SHARE::tdc.all_tables,
TABLE_SHARE::tdc.flushed and corresponding invariants be protected by
per-share TABLE_SHARE::tdc.LOCK_table_share instead of global LOCK_open.
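A rough sketch of the locking change using standard-library types (the server uses mysql_mutex_t, and Share here abbreviates TABLE_SHARE): the lists and the flushed flag move under a mutex embedded in each share, so operations on different shares no longer contend on one global lock.

  #include <list>
  #include <mutex>

  struct TABLE;   // opaque here

  struct Share {
    struct {
      std::mutex LOCK_table_share;    // per-share, replaces global LOCK_open
      std::list<TABLE*> free_tables;
      std::list<TABLE*> all_tables;
      bool flushed= false;
    } tdc;
  };

  static void mark_flushed(Share &share)
  {
    // The invariant is now protected per share: other shares can be
    // manipulated concurrently without touching this mutex.
    std::lock_guard<std::mutex> guard(share.tdc.LOCK_table_share);
    share.tdc.flushed= true;
  }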
Now, if CREATE OR REPLACE fails but we have already deleted a table, we generate a DROP TABLE in the binary log. This fixes the issue.
In addition, for a failing CREATE OR REPLACE TABLE ... SELECT we don't log all the inserted rows, only the DROP TABLE.
I added code to not log DROP TEMPORARY TABLE for tables whose CREATE TABLE was not logged. This code will be activated in 10.1
by removing the code protected by DONT_LOG_DROP_OF_TEMPORARY_TABLES.
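A control-flow sketch of the logging decision above. binlog_reset_cache() is the helper this patch adds to sql/log.cc; the other names are placeholders for illustration.

  #include <cstdio>

  static void binlog_reset_cache() {}  // helper added by this patch (sql/log.cc)
  static void log_drop_table(const char *name)   // placeholder
  { std::printf("DROP TABLE %s\n", name); }

  static void on_failed_create_or_replace_select(const char *table,
                                                 bool table_was_deleted)
  {
    // Throw away the CREATE ... SELECT statement and any row events:
    // replaying them would recreate a table whose creation failed.
    binlog_reset_cache();
    if (table_was_deleted)
      log_drop_table(table);  // slaves must also drop the table we removed
    // If nothing changed on the master, nothing is logged at all.
  }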
mysql-test/suite/rpl/r/create_or_replace_mix.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_row.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_statement.result:
More test cases
mysql-test/suite/rpl/t/create_or_replace.inc:
More test cases
sql/log.cc:
Added binlog_reset_cache() to clear the binary log cache.
sql/log.h:
Added prototype
sql/sql_insert.cc:
If CREATE OR REPLACE TABLE ... SELECT fails:
- Don't log anything if nothing changed
- If table was deleted, log a DROP TABLE.
Remember whether the creation of temporary tables was logged.
sql/sql_table.cc:
Added log_drop_table()
Remember whether the creation of temporary tables was logged.
If CREATE OR REPLACE TABLE ... SELECT fails and a table was deleted, log a DROP TABLE.
sql/sql_table.h:
Added prototype
sql/sql_truncate.cc:
Remember whether the creation of temporary tables was logged.
sql/table.h:
Added table_creation_was_logged
This is a port of the fix for MySQL BUG#17647863.
revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM
Rename test() macro to MY_TEST() to avoid conflict with libc++.
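The rename is mechanical; the macro normalizes any scalar to 0/1, so every call site keeps its meaning. A sketch of the shape of the change (the old definition is kept under #if 0 for contrast):

  #if 0
  #define test(a) ((a) ? 1 : 0)    /* old name: clashes with libc++ headers */
  #endif
  #define MY_TEST(a) ((a) ? 1 : 0) /* new, prefixed name, same semantics */

  #include <cassert>
  int main()
  {
    assert(MY_TEST(42) == 1);      // any non-zero scalar normalizes to 1
    assert(MY_TEST(0)  == 0);
    return 0;
  }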
Move TABLE::in_use out of LOCK_open.
This is done with the assumption that foreign threads accessing TABLE::in_use
will only need a consistent value _after_ marking the table for flush and purging
unused table instances. In that case TABLE::in_use will always point to a
valid thread object.
Previously, a FLUSH TABLES thread could wait for tables flushed subsequently by
concurrent threads, which breaks the above assumption, e.g.:
open tables: t1 (version= 1)
thr1 (FLUSH TABLES): refresh_version++
thr1 (FLUSH TABLES): purge table cache
open tables: none
thr2 (SELECT * FROM t1): open tables: t1
open tables: t1 (version= 2)
thr2 (FLUSH TABLES): refresh_version++
thr2 (FLUSH TABLES): purge table cache
thr1 (FLUSH TABLES): wait for old tables (including t1 with version 2)
It is fixed so that FLUSH TABLES waits only for tables that were open
before it started.
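A toy sketch of the fixed behavior, with stand-in types: FLUSH TABLES snapshots the tables open at its start and waits only on that snapshot, so flushes performed later by concurrent threads cannot extend the wait.

  #include <vector>

  struct Share {
    bool in_use= false;           // stand-in for "some thread still uses it"
    void wait_until_closed() {}   // stub: would wait on a condition variable
  };

  static void flush_tables(const std::vector<Share*> &open_now)
  {
    // Snapshot the tables open at this point; tables opened or flushed
    // afterwards by other threads are not waited for.
    std::vector<Share*> snapshot(open_now);
    for (Share *s : snapshot)
      if (s->in_use)
        s->wait_until_closed();
  }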