SYNTAX: ATOMIC_WRITES=['DEFAULT','ON','OFF']
The idea is to allow innodb_doublewrite = 1 together with the following per-table rules:
ATOMIC_WRITES='DEFAULT' - if innodb_use_atomic_writes = 1, changes are not written to the doublewrite buffer;
                          if innodb_use_atomic_writes = 0, changes are written to the doublewrite buffer
ATOMIC_WRITES='ON' - do not write to the doublewrite buffer
ATOMIC_WRITES='OFF' - write to the doublewrite buffer
Note that the doublewrite buffer cannot be used at all if innodb_doublewrite = 0.
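A minimal sketch of the decision logic described above; the enum and helper below are illustrative stand-ins, not the actual InnoDB symbols:

  // Illustrative sketch only: stand-ins for the real server variables and
  // the per-table option, not the actual InnoDB code.
  enum atomic_writes_option { ATOMIC_WRITES_DEFAULT, ATOMIC_WRITES_ON, ATOMIC_WRITES_OFF };

  // Returns true when page changes for this table should go through the
  // doublewrite buffer.
  static bool use_doublewrite(bool innodb_doublewrite,
                              bool innodb_use_atomic_writes,
                              atomic_writes_option table_option)
  {
    if (!innodb_doublewrite)
      return false;               // doublewrite globally disabled: never used

    switch (table_option) {
    case ATOMIC_WRITES_ON:        // table relies on atomic writes
      return false;
    case ATOMIC_WRITES_OFF:       // table always uses doublewrite
      return true;
    case ATOMIC_WRITES_DEFAULT:   // follow the global atomic-writes setting
    default:
      return !innodb_use_atomic_writes;
    }
  }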
support ha_innodb.so as a dynamic plugin.
* remove obsolete *,innodb_plugin.rdiff files
* s/--plugin-load=/--plugin-load-add=/
* MYSQL_PLUGIN_IMPORT glob_hostname[]
* use my_error instead of push_warning_printf(ER_DEFAULT)
* don't use tdc_size and tc_size in a module
update test cases (XtraDB is 5.6.14, InnoDB is 5.6.10)
* copy new tests over
* disable some tests for (old) InnoDB
* delete XtraDB tests that no longer apply
small compatibility changes:
* s/HTON_EXTENDED_KEYS/HTON_SUPPORTS_EXTENDED_KEYS/
* revert unnecessary InnoDB changes to make it a bit closer to the upstream
fix XtraDB to compile on Windows (both as a static and a dynamic plugin)
disable XtraDB on Windows (deadlocks) and where no atomic ops are available (e.g. CentOS 5)
storage/innobase/handler/ha_innodb.cc:
revert a few unnecessary changes to make it a bit closer to the original InnoDB
storage/innobase/include/univ.i:
correct the version to match what it was merged from
restore the old InnoDB get_innobase_type_from_mysql_type() function,
recording all mysql_type->innodb_type mappings
(as generated by mysql-5.6).
add safety code to disable online ALTER when the internal types don't match
storage/innobase/dict/dict0stats.cc:
revert to 5.6 state
The merge is still missing a few hunks related to temporary tables and
InnoDB log file size. The associated code did not seem to exist in
10.0, so the merge of that needs more work. Until this is fixed, there
are a number of test failures as a result.
For compatibility purposes let InnoDB use DATA_INT for MYSQL_TYPE_ENUM and MYSQL_TYPE_SET.
Silence the warning for these types and let the index translation table be built anyway.
Test case by Jeremy Cole.
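A simplified sketch of the mapping idea: ENUM and SET columns are stored by the server as unsigned integers, so they map to DATA_INT like the other integer types. The enums below are stand-ins for the real MYSQL_TYPE_* / DATA_* constants:

  // Sketch only: stand-in enums for the real MYSQL_TYPE_* and DATA_* constants.
  enum mysql_col_type { MYSQL_TYPE_LONG, MYSQL_TYPE_ENUM, MYSQL_TYPE_SET,
                        MYSQL_TYPE_VARCHAR /* ... */ };
  enum innodb_col_type { DATA_INT, DATA_VARCHAR /* ... */ };

  static innodb_col_type innodb_type_from_mysql_type(mysql_col_type t)
  {
    switch (t) {
    case MYSQL_TYPE_LONG:
    case MYSQL_TYPE_ENUM:   // stored as an unsigned integer
    case MYSQL_TYPE_SET:    // stored as an unsigned integer bitmap
      return DATA_INT;
    case MYSQL_TYPE_VARCHAR:
    default:
      return DATA_VARCHAR;
    }
  }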
Add an error code to the wait_for_commit facility.
Now, when a transaction fails, it can signal the error to
any subsequent transaction that is waiting for it to commit.
The waiting transactions then receive the error code back from
wait_for_prior_commit() and can handle the error appropriately.
Also fix one race that could cause crash if @@slave_parallel_threads
were changed several times quickly in succession.
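A minimal, self-contained sketch of the error propagation (a simplified stand-in, not the actual wait_for_commit class): the waitee records an error code when it fails, and the waiter gets that code back from its wait call:

  #include <condition_variable>
  #include <mutex>

  // Sketch only: a simplified stand-in for the real wait_for_commit facility.
  struct commit_waiter
  {
    std::mutex mtx;
    std::condition_variable cond;
    bool prior_done= false;
    int  wakeup_error= 0;         // error code passed on from the waitee

    // Called by the waitee when it commits (err == 0) or fails (err != 0).
    void wakeup(int err)
    {
      std::lock_guard<std::mutex> lk(mtx);
      wakeup_error= err;
      prior_done= true;
      cond.notify_all();
    }

    // Called by the waiter; returns 0 on success or the waitee's error code,
    // so the waiter can handle the failure appropriately.
    int wait_for_prior_commit()
    {
      std::unique_lock<std::mutex> lk(mtx);
      cond.wait(lk, [this]{ return prior_done; });
      return wakeup_error;
    }
  };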
The following variables no longer require LOCK_open protection (see the sketch after this list):
- table_def_cache (renamed to tdc_hash) is protected by rw-lock
LOCK_tdc_hash;
- table_def_shutdown_in_progress doesn't need LOCK_open protection;
- last_table_id uses atomics;
- TABLE_SHARE::ref_count (renamed to TABLE_SHARE::tdc.ref_count)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- TABLE_SHARE::next, ::prev (renamed to tdc.next and tdc.prev),
oldest_unused_share, end_of_unused_share are protected by
LOCK_unused_shares;
- TABLE_SHARE::m_flush_tickets (renamed to tdc.m_flush_tickets)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- refresh_version (renamed to tdc_version) uses atomics.
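For the two counters that moved to atomics, a minimal sketch of the pattern (names simplified, not the actual server code):

  #include <atomic>
  #include <cstdint>

  // Sketch only: counters that used to be read/incremented under LOCK_open
  // can be plain atomics instead.
  static std::atomic<uint64_t> last_table_id{0};
  static std::atomic<uint64_t> tdc_version{1};

  static uint64_t assign_table_id()
  {
    // fetch_add returns the previous value; no mutex needed.
    return last_table_id.fetch_add(1, std::memory_order_relaxed) + 1;
  }

  static uint64_t bump_tdc_version()
  {
    return tdc_version.fetch_add(1, std::memory_order_relaxed) + 1;
  }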
remove unused HA_CAN_EXPORT
(with the merge of "FLUSH TABLE <table_list> FOR EXPORT",
this functionality should be added back, but differently -
no table flag, handler::extra() returns an error instead)
Problem:
When the user-specified foreign key name contains "_ibfk_", InnoDB wrongly
tries to rename it.
Solution:
When a table is renamed, its associated foreign keys are renamed along with it,
but only if the foreign key names were automatically generated. If a foreign
key name was given by the user, it must not be renamed, even if it contains
_ibfk_.
rb#2935 approved by Jimmy, Krunal and Satya
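A sketch of the rule above, with one plausible way to express the check (the helper is illustrative, not the actual InnoDB function): a name counts as auto-generated only when it is exactly "<table_name>_ibfk_<number>"; anything else, even if it contains "_ibfk_", is treated as user-given and left alone on rename.

  #include <cctype>
  #include <string>

  // Sketch only (not the actual InnoDB check).
  static bool is_generated_fk_name(const std::string &fk_name,
                                   const std::string &table_name)
  {
    const std::string prefix= table_name + "_ibfk_";

    if (fk_name.size() <= prefix.size() ||
        fk_name.compare(0, prefix.size(), prefix) != 0)
      return false;                     // wrong prefix: user-given name

    // The rest must be a number; e.g. "t1_ibfk_custom" stays user-given.
    for (size_t i= prefix.size(); i < fk_name.size(); i++)
      if (!isdigit(static_cast<unsigned char>(fk_name[i])))
        return false;
    return true;
  }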
includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share, now LOCK_ha_data is strictly for engines
* rea_create_table() always creates .par file (even in "frm-only" mode)
* fix a 5.6 bug, temp file leak on dummy ALTER TABLE
SERIALIZABLE
Problem:
The documentation claims that WITH CONSISTENT SNAPSHOT works for both the
REPEATABLE READ and SERIALIZABLE isolation levels, but it works only for
REPEATABLE READ. Also, the WITH CONSISTENT SNAPSHOT clause is silently
ignored when it is not applicable to the given isolation level.
Solution:
Generate a warning when the clause WITH CONSISTENT SNAPSHOT is ignored.
rb#2797 approved by Kevin.
Note: Support team wanted to push this to 5.5+.
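A minimal sketch of the intended behavior, with stand-in names (not the actual server code): the snapshot is only established at REPEATABLE READ; at any other isolation level the clause is ignored and a warning is raised instead of staying silent.

  #include <cstdio>

  // Sketch only: stand-ins for the server's isolation-level enum and
  // warning machinery.
  enum isolation_level { ISO_READ_UNCOMMITTED, ISO_READ_COMMITTED,
                         ISO_REPEATABLE_READ, ISO_SERIALIZABLE };

  // Returns true when a consistent snapshot is actually taken.
  static bool start_with_consistent_snapshot(isolation_level iso)
  {
    if (iso != ISO_REPEATABLE_READ)
    {
      // Previously this case was silent; now the user is told the clause
      // had no effect.
      std::fprintf(stderr,
                   "Warning: WITH CONSISTENT SNAPSHOT ignored at this "
                   "isolation level\n");
      return false;
    }
    return true;   // snapshot established as documented
  }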
Merged all ddl_logging code.
Merged sql_partition.cc
innodb_mysql_lock2.test and partition_cache.test now work.
Changed the strconvert() interface to make it easier to use with strings that are not \0-terminated.
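A sketch of the interface change with a simplified stand-in signature (the real strconvert() also takes the character sets involved and an error counter, omitted here): an explicit from_length lets callers pass a source buffer that is not NUL-terminated.

  #include <cstddef>

  // Sketch only: simplified stand-in for strconvert(). Passing from_length
  // explicitly means the source need not be NUL-terminated; the old
  // interface stopped at the first '\0'.
  size_t strconvert_sketch(const char *from, size_t from_length,
                           char *to, size_t to_size)
  {
    if (to_size == 0)
      return 0;
    size_t copied= 0;
    while (copied < from_length && copied + 1 < to_size)
    {
      to[copied]= from[copied];   // the real code converts character sets here
      copied++;
    }
    to[copied]= '\0';
    return copied;                // bytes written, not counting the terminator
  }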
sql/sql_partition.cc:
Full merge with 5.6
sql/sql_table.cc:
Merged all ddl_logging code
sql/strfunc.cc:
Added a from_length argument to strconvert() to make it possible to use it with strings that are not \0-terminated.
sql/strfunc.h:
Added a from_length argument to strconvert() to make it possible to use it with strings that are not \0-terminated.
mysql-test/r/mdl_sync.result:
Full merge with 5.6
mysql-test/t/mdl_sync.test:
Full merge with 5.6
sql/debug_sync.cc:
Full merge with 5.6
sql/debug_sync.h:
Full merge with 5.6
sql/mdl.cc:
Full merge with 5.6
sql/sql_base.cc:
Removed code not in 5.6 anymore
Implement a facility for the commit in one thread to wait for the commit of
another to complete first. The wait is done in a way that does not prevent
the waiter and the waitee from group-committing together with a single fsync()
in both the binlog and InnoDB. The wait is also done efficiently with respect
to locking.
The patch was originally made to support TaoBao parallel replication with
in-order commit; now it will be adapted to also be used for parallel
replication of group-committed transactions.
A waiter THD registers itself with a prior waitee THD. The waiter will then
complete its commit at the earliest in the same group commit as the waitee
(when using the binlog). The wait can also be done explicitly by the waiter.
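A simplified, self-contained sketch of the waiter/waitee registration described above (stand-in structs, not the actual THD/wait_for_commit code); the error-code handling added later is sketched separately above.

  #include <condition_variable>
  #include <mutex>
  #include <vector>

  struct waiter
  {
    std::mutex mtx;
    std::condition_variable cond;
    bool prior_committed= false;

    void wait_for_prior_commit()
    {
      std::unique_lock<std::mutex> lk(mtx);
      cond.wait(lk, [this]{ return prior_committed; });
    }
  };

  struct waitee
  {
    std::mutex mtx;
    std::vector<waiter*> subsequent;  // waiters registered against this commit

    // Called by a later transaction that must not commit before this one.
    void register_wait(waiter *w)
    {
      std::lock_guard<std::mutex> lk(mtx);
      subsequent.push_back(w);
    }

    // Called once this transaction's commit is done; the woken waiters can
    // then commit, possibly in the same binlog group commit.
    void wakeup_subsequent_commits()
    {
      std::lock_guard<std::mutex> lk(mtx);
      for (waiter *w : subsequent)
      {
        std::lock_guard<std::mutex> wlk(w->mtx);
        w->prior_committed= true;
        w->cond.notify_all();
      }
      subsequent.clear();
    }
  };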
Analysis
--------
The pthread_mutex commit_threads_m was initialized but never
used.
Fix
---
Removing the commit_threads_m mutex from the code base.
[ Approved by Marko rb#2475]
DDL AND I_S QUERIES
Skip partially created indexes (those whose name starts with TEMP_INDEX_PREFIX)
during stats gathering.
Because InnoDB reports HA_INPLACE_ADD_INDEX_NO_WRITE to MySQL, the latter
allows parallel execution of ha_innobase::add_index() and ha_innobase::info().
Reviewed by: Inaam (rb:2613)
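A sketch of the skip condition with stand-in types (not the actual dict0stats.cc code); the prefix byte value below is an assumption about TEMP_INDEX_PREFIX:

  // Sketch only: stand-ins for dict_index_t and the stats loop.
  static const char TEMP_INDEX_PREFIX_CHAR= '\377';  // assumed prefix byte

  struct index_stub { const char *name; };

  static bool skip_for_stats(const index_stub &index)
  {
    // Indexes still being built by a concurrent ADD INDEX carry the
    // temporary prefix in their name; they may be half-populated, so leave
    // them out of the statistics until the DDL finishes.
    return index.name != nullptr && index.name[0] == TEMP_INDEX_PREFIX_CHAR;
  }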
IF IT HAS A WRONG COUNT
If CHECK TABLE finds that a secondary index contains the wrong
number of entries, it used to report an error but not mark the
index as corrupt. The error means that the index should be rebuilt,
which can be done with ALTER TABLE DROP INDEX and ALTER TABLE ADD
INDEX. But just in case the DBA does not pay any attention to the
output of CHECK TABLE, the secondary index should be marked as
corrupted so that it is not used again.
Approved by Inaam in RB:2607
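A sketch of the new behavior with stand-in types (not the actual ha_innobase::check() code): when the entry count is wrong, the index is flagged corrupted in addition to reporting the error, so it is not used again until rebuilt.

  #include <cstdint>

  // Sketch only: stand-ins for the handler's check logic.
  struct index_info
  {
    uint64_t expected_rows;
    uint64_t counted_rows;
    bool     corrupted= false;
  };

  // Returns 0 when the index is consistent, non-zero otherwise.
  static int check_secondary_index(index_info &index)
  {
    if (index.counted_rows != index.expected_rows)
    {
      // Previously only the error was reported; now the index is also
      // marked corrupt so that later queries refuse to use it until it is
      // rebuilt (ALTER TABLE DROP INDEX / ADD INDEX).
      index.corrupted= true;
      return 1;
    }
    return 0;
  }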
AFTER A ROW IS READ
Approved by: Sunny Bains rb://2425
Don't release concurrency tickets when asked to release
btr_search_latch. This is a 5.5 only bug. It is already
fixed in 5.6 upwards.
TRANSACTION ROLLBACK
Problem:
=======
"prepare_commit_mutex" is acquired during "innobase_xa_prepare"
and it is freed only in "innobase_commit". After prepare,
if the commit operation fails the transaction is rolled back
but the mutex is not released.
Analysis:
========
During the transaction commit process the transaction is prepared and
"prepare_commit_mutex" is acquired to preserve the order of commits.
After the prepare, the write to the binlog is initiated.
File: sql/handler.cc

  if (error || (is_real_trans && xid &&
----->          (error= !(cookie= tc_log->log_xid(thd, xid)))))
  {
    ha_rollback_trans(thd, all);
In the above code the "tc_log->log_xid" operation fails.
When the write to the binlog fails, the transaction is rolled back
without freeing the mutex. A subsequent "INSERT" operation then
tries to acquire the same mutex during its commit process
and the server aborts.
Fix:
===
"prepare_commit_mutex" is freed during "innobase_rollback".
storage/innobase/handler/ha_innodb.cc:
Added code to free "prepare_commit_mutex"
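A minimal sketch of the fix's shape (simplified pthread-style stand-ins, not the actual ha_innodb.cc code): the rollback path now releases the mutex if the transaction still owns it from the prepare step.

  #include <pthread.h>

  // Sketch only: simplified stand-ins for the prepare/commit/rollback hooks.
  static pthread_mutex_t prepare_commit_mutex= PTHREAD_MUTEX_INITIALIZER;

  struct trx_stub { bool owns_prepare_commit_mutex= false; };

  static void xa_prepare(trx_stub *trx)
  {
    pthread_mutex_lock(&prepare_commit_mutex);   // preserve commit order
    trx->owns_prepare_commit_mutex= true;
  }

  static void commit(trx_stub *trx)
  {
    if (trx->owns_prepare_commit_mutex)
    {
      pthread_mutex_unlock(&prepare_commit_mutex);
      trx->owns_prepare_commit_mutex= false;
    }
  }

  static void rollback(trx_stub *trx)
  {
    // The fix: a rollback after a failed log_xid() must also release the
    // mutex, otherwise the next transaction's commit blocks on it and the
    // server aborts.
    if (trx->owns_prepare_commit_mutex)
    {
      pthread_mutex_unlock(&prepare_commit_mutex);
      trx->owns_prepare_commit_mutex= false;
    }
  }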
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There are two situations here: the constraint name is either explicitly
given by the user or automatically generated by InnoDB. A generated
constraint name is formed by prefixing the table name, and table names are
stored internally in my_charset_filename. A constraint name explicitly
given by the user is stored in UTF-8. So in some situations the constraint
name is in UTF-8 and in others it is in my_charset_filename format;
hence this problem.
Solution:
Always store the foreign key constraint name in UTF-8 even when
automatically generated.
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There was a memory leak in the function dict_create_add_foreign_to_dictionary():
the allocated pars_info_t object was not freed in the error code path.
Solution:
Allocate the pars_info_t object after the error checking.
rb#2368 in review
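A sketch of the pattern behind the fix (stand-in names, not the real dict0crea.cc code): moving the allocation after the error checks means there is nothing to leak on the early-return paths.

  // Sketch only: stand-ins for pars_info_t and its allocate/free functions.
  struct pars_info_stub { /* bound literals for an internal SQL statement */ };
  static pars_info_stub *pars_info_create_stub() { return new pars_info_stub(); }
  static void pars_info_free_stub(pars_info_stub *p) { delete p; }

  static int add_foreign_to_dictionary(bool id_ok, bool name_ok)
  {
    // Error checks first: returning here used to leak the already-allocated
    // pars_info_t object.
    if (!id_ok)
      return 1;
    if (!name_ok)
      return 2;

    // Allocate only once no early return can happen before it is consumed.
    pars_info_stub *info= pars_info_create_stub();
    /* ... bind values and run the internal SQL ... */
    pars_info_free_stub(info);   // released on the success path as well
    return 0;
  }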
OPENING MISSING PARTITION
In the ha_innobase::open() call, for normal tables, there is no retry logic.
But for partitioned tables, there is retry logic that was introduced as a fix for:
http://bugs.mysql.com/bug.php?id=33349
https://support.mysql.com/view.php?id=21080
Bug#33349 does not provide sufficient information to analyze the original
problem, and the problem it reports is also minor (just an annoyance and no
loss of functionality). Most importantly, the retry logic was introduced
without any associated test case.
So we are removing the retry logic for partitioned tables. When the original
problem occurs, a different solution will be explored.