* remove old 5.2+ InnoDB support for virtual columns
* enable corresponding parts of the innodb-5.7 sources
* copy corresponding test cases from 5.7
* copy detailed Alter_inplace_info::HA_ALTER_FLAGS flags from 5.7
- and more detailed detection of changes in fill_alter_inplace_info()
* more "innodb compatibility hooks" in sql_class.cc to
- create/destroy/reset a THD (used by background purge threads; see the sketch after this list)
- find a prelocked table by name
- open a table (from a background purge thread)
* different from 5.7:
- new service thread "thd_destructor_proxy" to make sure all THDs are
destroyed at the correct point in time during the server shutdown
- proper opening/closing of tables for vcol evaluations in
+ FK checks (use already opened prelocked tables)
+ purge threads (open the table, MDLock it, add it to tdc, close
when not needed)
- cache open tables in vc_templ
- avoid unnecessary allocations, reuse table->record[0] and table->s->default_values
- not needed in 5.7, because it overcalculates:
+ tell the server to calculate vcols for an on-going inline ADD INDEX
+ calculate vcols for correct error messages
* update other engines (mroonga/tokudb) accordingly
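Illustration of the THD lifecycle hooks mentioned above (create/destroy/reset a THD for
background purge threads). The names and signatures below are hypothetical and do not
claim to match the actual hooks in sql_class.cc:

  /* Illustrative only: hypothetical hook names, not the real ones. */
  extern "C" MYSQL_THD create_background_thd();           /* allocate a THD      */
  extern "C" void reset_background_thd(MYSQL_THD thd);    /* reset between jobs  */
  extern "C" void destroy_background_thd(MYSQL_THD thd);  /* free at shutdown    */

  /* How an InnoDB purge worker might use them (sketch): */
  static void purge_worker()
  {
          MYSQL_THD thd = create_background_thd();
          /* ... open the table, MDL-lock it, evaluate virtual columns ... */
          reset_background_thd(thd);
          destroy_background_thd(thd);
  }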
WL#7682 in MySQL 5.7 introduced the possibility to create light-weight
temporary tables in InnoDB. These are called 'intrinsic temporary tables'
in InnoDB, and in MySQL 5.7, they can be created by the optimizer for
sorting or buffering data in query processing.
In MariaDB 10.2, the optimizer temporary tables cannot be created in
InnoDB, so we should remove the dead code and related data structures.
MySQL WL#6671 "Improve scalability by not using thr_lock.c locks for InnoDB tables"
Don't use thr_lock.c locks for InnoDB tables. Below is a list of changes that
were needed to implement this:
- HANDLER OPEN acquires MDL_SHARED_READ instead of MDL_SHARED
- HANDLER READ calls external_lock() even if SE is not going to be locked by
THR_LOCK
- InnoDB lock wait timeouts are now honored; by default they are much shorter
than server lock wait timeouts (50 seconds vs. 1 year)
- with @@autocommit= 1, LOCK TABLES disables autocommit implicitly, though the
user still sees @@autocommit= 1
- the above starts an implicit transaction
- transactions started by LOCK TABLES are now rolled back on disconnect
(previously everything was committed due to autocommit)
- transactions started by LOCK TABLES are now rolled back by ROLLBACK
(previously everything was committed due to autocommit)
- it is now impossible to change BINLOG_FORMAT under LOCK TABLES (at least
to STATEMENT) due to the running transaction
- LOCK TABLES WRITE is additionally handled by MDL
- ...in contrast, LOCK TABLES READ protection against DML is purely InnoDB's
- combining transactional and non-transactional tables under LOCK TABLES
may cause rolled back changes in the transactional table and "committed"
changes in the non-transactional table
- a user may disable innodb_table_locks, which effectively makes LOCK TABLES
a no-op
Removed tests for BUG#45143 and BUG#55930 which cover InnoDB + THR_LOCK. To
operate properly these tests require the code flow to go through THR_LOCK debug
sync points, which is not the case after this patch. These tests are removed
by WL#6671 as well. An alternative is to port them to a different storage engine.
* update (some) tests from 5.7
* update results (e.g. cardinality is no longer reported)
* uncomment MYSQL_PLUGIN_FULLTEXT_PARSER/MYSQL_FTS_PARSER code
* initialize m_prebuilt->m_fts_limit manually,
as we do not use ft_init_ext_with_hints()
Contains also:
MDEV-10549 mysqld: sql/handler.cc:2692: int handler::ha_index_first(uchar*): Assertion `table_share->tmp_table != NO_TMP_TABLE || m_lock_type != 2' failed. (branch bb-10.2-jan)
Unlike MySQL, InnoDB still uses THR_LOCK in MariaDB
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
enable tests that were fixed in MDEV-10549
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
fix main.innodb_mysql_sync - re-enable online alter for partitioned innodb tables
Contains also
MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
The failure happened because 5.7 changed the signature of
the bool handler::primary_key_is_clustered() const
virtual function ("const" was added). InnoDB was still using the old
signature, so its override no longer matched and the function was not used.
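For illustration, the underlying C++ rule: a member function that differs only in
const-qualification does not override the base class virtual function (simplified
sketch, not the real handler classes):

  struct base_handler {
          virtual bool primary_key_is_clustered() const { return false; }
  };

  struct old_innodb_handler : base_handler {
          /* Non-const signature: this declares a new function instead of
             overriding the const virtual above, so callers going through
             the base class still get the default (false). */
          bool primary_key_is_clustered() { return true; }
  };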
MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
Fixed mutexing problem on lock_trx_handle_wait. Note that
rpl_parallel and rpl_optimistic_parallel tests still
fail.
MDEV-10156 : Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
Reason: incorrect merge
MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
Reason: incorrect merge
Problem was that in-place online ALTER TABLE was used on a table
that had a mismatch between the MySQL .frm file and the InnoDB data dictionary.
Fixed so that the traditional "Copy" method is used if the MySQL .frm file
and the InnoDB data dictionary are not consistent.
Step 2:
-- Introduce a temporary memory array in the buffer pool from which to allocate
temporary memory for encryption/compression
-- Rename PAGE_ENCRYPTION -> ENCRYPTION
-- Rename PAGE_ENCRYPTION_KEY -> ENCRYPTION_KEY
-- Rename innodb_default_page_encryption_key -> innodb_default_encryption_key
-- Allow enabling/disabling encryption for tables by changing
ENCRYPTION to an enum with the values DEFAULT, ON and OFF
-- In CREATE TABLE, store crypt_data if ENCRYPTION is ON or OFF
-- Do not encrypt tablespaces having ENCRYPTION=OFF
-- Store the encryption mode in crypt_data and the redo log
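A rough sketch of the per-table decision described above; the enum values mirror the
ENCRYPTION option, but the helper functions are illustrative only:

  /* Sketch; helper names are made up for illustration. */
  enum fil_encryption_t {
          FIL_ENCRYPTION_DEFAULT,  /* follow the default encryption setting */
          FIL_ENCRYPTION_ON,       /* ENCRYPTION=ON  */
          FIL_ENCRYPTION_OFF       /* ENCRYPTION=OFF */
  };

  /* CREATE TABLE: store crypt_data when ENCRYPTION is explicitly set. */
  static bool store_crypt_data(fil_encryption_t mode)
  {
          return mode == FIL_ENCRYPTION_ON || mode == FIL_ENCRYPTION_OFF;
  }

  /* Background encryption: never touch ENCRYPTION=OFF tablespaces. */
  static bool may_encrypt(fil_encryption_t mode)
  {
          return mode != FIL_ENCRYPTION_OFF;
  }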
Merged lp:maria/maria-10.0-galera up to revision 3880.
Added new functions to the handler API to forcefully abort a transaction
(abort_transaction), produce a fake_trx_id, and get_checkpoint/set_checkpoint
for XA. These were added for the future possibility of adding more storage
engines that could use Galera replication.
Merged lp:maria/maria-10.0-galera up to revision 3879.
Added new functions to the handler API to forcefully abort a transaction
(abort_transaction), produce a fake_trx_id, and get_checkpoint/set_checkpoint
for XA. These were added for the future possibility of adding more storage
engines that could use Galera replication.
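A hedged sketch of what the new handlerton entry points might look like; the member
names come from this commit message, but the signatures are assumptions:

  /* Sketch; signatures are assumptions, not copied from the source. */
  struct handlerton_wsrep_ext {
          int  (*abort_transaction)(handlerton *hton, THD *bf_thd,
                                     THD *victim_thd, my_bool signal);
          void (*fake_trx_id)(handlerton *hton, THD *thd);
          int  (*get_checkpoint)(handlerton *hton, XID *xid);
          int  (*set_checkpoint)(handlerton *hton, const XID *xid);
  };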
Update InnoDB to 5.6.14
Apply MySQL-5.6 hack for MySQL Bug#16434374
Move Aria-only HA_RTREE_INDEX from my_base.h to maria_def.h (breaks an assert in InnoDB)
Fix InnoDB memory leak
SYNTAX: ATOMIC_WRITES=['DEFAULT','ON','OFF']
The idea is to be able to set innodb_doublewrite = 1 but with the following rules:
ATOMIC_WRITES='DEFAULT' - if innodb_use_atomic_writes = 1, we do not write the changes to the doublewrite buffer;
if innodb_use_atomic_writes = 0, we write to the doublewrite buffer
ATOMIC_WRITES='ON' - do not write to the doublewrite buffer
ATOMIC_WRITES='OFF' - write to the doublewrite buffer
Note that the doublewrite buffer can't be used at all if innodb_doublewrite = 0.
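The rules above amount to roughly the following per-table decision (sketch; the option
values match the syntax above, the helper function is illustrative):

  enum atomic_writes_t {
          ATOMIC_WRITES_DEFAULT, ATOMIC_WRITES_ON, ATOMIC_WRITES_OFF };

  /* Illustrative helper: should this table's pages go through the
     doublewrite buffer? */
  static bool use_doublewrite(bool innodb_doublewrite,
                              bool innodb_use_atomic_writes,
                              atomic_writes_t table_option)
  {
          if (!innodb_doublewrite)
                  return false;              /* doublewrite disabled globally */

          switch (table_option) {
          case ATOMIC_WRITES_ON:
                  return false;              /* rely on atomic writes */
          case ATOMIC_WRITES_OFF:
                  return true;               /* always use doublewrite */
          case ATOMIC_WRITES_DEFAULT:
          default:
                  return !innodb_use_atomic_writes;
          }
  }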
The ha_innobase table handler contained two search key buffers
(srch_key_val1, srch_key_val2) of fixed size used to store the search
key. The size of these buffers was fixed at
REC_VERSION_56_MAX_INDEX_COL_LEN + 2, which is not sufficient
to hold the search key. Hence the following assert in
row_sel_convert_mysql_key_to_innobase() failed:
  /* Storing may use at most data_len bytes of buf */

  if (UNIV_LIKELY(!is_null)) {
          ut_a(buf + data_len <= original_buf + buf_len);
          row_mysql_store_col_in_innobase_format(
                  dfield, buf,
                  FALSE, /* MySQL key value format col */
                  key_ptr + data_offset, data_len,
                  dict_table_is_comp(index->table));
          buf += data_len;
  }
The buffer size is now calculated with the formula
MAX_KEY_LENGTH + MAX_REF_PARTS*2. This properly takes into account
the extra bytes needed to store the length of each column. An index
can contain a maximum of MAX_REF_PARTS columns, and for each
column 2 bytes are needed to store the length.
rb://1238 approved by Marko and Vasil Dimov.
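For illustration, the resized buffers then look roughly like this (sketch; the exact
declarations in ha_innodb.h may differ slightly):

  /* Sketch of the resized search key buffers in class ha_innobase.
     Each of the up-to-MAX_REF_PARTS key columns may need 2 extra
     length bytes in the MySQL key format. */
  uchar srch_key_val1[MAX_KEY_LENGTH + MAX_REF_PARTS*2];
  uchar srch_key_val2[MAX_KEY_LENGTH + MAX_REF_PARTS*2];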
BY A CONCURRENT TRANSACTION
The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we would do a locking read, while
for the other index we were doing a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operations
required, in particular setting ha_innobase->prebuilt->select_lock_type
correctly.
rb://1060 approved by Marko
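A minimal sketch of the new member function based on the description above; the real
implementation contains additional error handling:

  handler*
  ha_innobase::clone(const char* name, MEM_ROOT* mem_root)
  {
          /* Sketch only: let the base class create and open the copy ... */
          ha_innobase* new_handler =
                  static_cast<ha_innobase*>(handler::clone(name, mem_root));

          if (new_handler) {
                  /* ... then make both scans of a ROR-merged read use the
                     same (locking or consistent) read mode. */
                  new_handler->prebuilt->select_lock_type =
                          prebuilt->select_lock_type;
          }

          return new_handler;
  }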
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. Don't forget to my_ok it.
sql/sql_table.cc:
small cleanup
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update the .FRM file. If this happened, InnoDB
and the server got out of sync with regards to which indexes
actually existed. Therefore the patch for Bug#11815600 again
disabled concurrent reads.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(). The former is for creating indexes without
making them visible and the latter for committing (i.e. making
visible) new indexes or reverting the changes.
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
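Conceptually, the ALTER TABLE code path then looks like this; the context type and
parameter lists in the sketch are made up for illustration and do not match the real
declarations:

  /* Phase 1: build the new index while concurrent reads are allowed;
     the index exists but is not yet visible (illustrative call). */
  void* add_ctx= NULL;
  error= handler_add_index(table, key_info, num_of_keys, &add_ctx);

  /* ... upgrade the metadata lock, update the .FRM file ... */

  /* Phase 2: under the exclusive lock, activate the new index or
     revert the change (illustrative call). */
  error= table->file->final_add_index(add_ctx, /* commit */ error == 0);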
- Added a lot of code comments
- Updated get_best_ror_intersec() to prefer index scans on non-clustered keys over clustered keys.
- Use HA_CLUSTERED_INDEX to decide whether one should use HA_MRR_INDEX_ONLY
- When testing whether to use an index or filesort to resolve ORDER BY, use the HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered()
- Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide if ALTER TABLE ... ORDER BY will have any effect.
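In code, the optimizer checks change roughly as follows (simplified sketch of a call
site, not copied from the source):

  /* Sketch: how a call site changes from the virtual call to the flag. */
  static bool uses_clustered_pk(TABLE *table)
  {
          /* before: return table->file->primary_key_is_clustered(); */
          return (table->file->ha_table_flags() & HA_CLUSTERED_INDEX) != 0;
  }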
sql/ha_partition.h:
Added comment with warning for code unsafe to use with multiple storage engines at the same time
sql/handler.h:
Added HA_CLUSTERED_INDEX.
Documented primary_key_is_clustered()
sql/opt_range.cc:
Added code comments
Updated get_best_ror_intersec() to ignore clustered keys.
Optimized away cpk_scan_used and one instance of current_thd (Simpler code)
Use HA_CLUSTERED_INDEX to define if one should use HA_MRR_INDEX_ONLY
sql/sql_select.cc:
Changed comment to #ifdef
For test of using index or filesort to resolve ORDER BY, use HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered()
(The change is smaller than it looks because of an indentation change)
sql/sql_table.cc:
Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide if ALTER TABLE ... ORDER BY will have any effect.
storage/innobase/handler/ha_innodb.h:
Added support for HA_CLUSTERED_INDEX
storage/innodb_plugin/handler/ha_innodb.cc:
Added support for HA_CLUSTERED_INDEX
storage/xtradb/handler/ha_innodb.cc:
Added support for HA_CLUSTERED_INDEX
CMakeLists.txt: Remove the checks for mysql_storage_engine.cmake
and MYSQL_VERSION_ID.
ha_innodb.cc, ha_innodb.h: Remove the checks for MYSQL_VERSION_ID.
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
Approved by: Jimmy (rb://487)
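A minimal sketch of the renamed interface described above; the real functions in
ha_innodb.cc take more parameters and do more work:

  /* Sketch only; signatures simplified. */
  int ha_innobase::info(uint flag)
  {
          return info_low(flag, false);   /* not called from ::analyze() */
  }

  int ha_innobase::analyze(THD* thd, HA_CHECK_OPT* check_opt)
  {
          /* ... */
          int ret = info_low(HA_STATUS_TIME | HA_STATUS_CONST
                             | HA_STATUS_VARIABLE,
                             true);       /* called from ::analyze() */
          /* ... */
          return ret;
  }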
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row by
row delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key checks
with SET foreign_key_checks=0 before truncate. Alternatively, if
foreign key checks are necessary, please use a DELETE statement
without a WHERE condition.
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated with the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
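A sketch of the new handler method described above; the default body shown is an
assumption, not a copy of the real one:

  /* sql/handler.h (sketch): quickly remove all rows from the table.
     Invoked under an exclusive metadata lock when the engine does not
     truncate via drop-and-recreate of the whole table. */
  virtual int truncate()
  {
          return HA_ERR_WRONG_COMMAND;    /* assumed default: unsupported */
  }

  /* Engines that emulate truncate (heap, federated, myisammrg, ...)
     simply forward to delete_all_rows(), see the per-file notes below: */
  int ha_heap::truncate()
  {
          return delete_all_rows();
  }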
mysql-test/suite/innodb/t/innodb-truncate.test:
Add test cases for truncate and foreign key checks.
Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
FK is not necessary, test is related to auto-increment.
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
Use delete instead of truncate, test is used to check
the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/t/mdl_sync.test:
Modify test case to reflect and ensure that truncate takes
an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
sql/ha_partition.cc:
Reorganize the various truncate methods. delete_all_rows is now
passed directly to the underlying engines, as is truncate. The
code responsible for truncating individual partitions is moved
to ha_partition::truncate_partition, which is invoked when an
ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.
Since the partition truncate no longer can be invoked via
delete, the bitmap operations are not necessary anymore. The
explicit reset of the auto-increment value is also removed
as the underlying engines are now responsible for resetting
the value.
sql/handler.cc:
Wire up the handler truncate method.
sql/handler.h:
Introduce and document the truncate handler method. It assumes
certain use cases of delete_all_rows.
Add method to retrieve the list of foreign keys referencing a
table. Method is used to avoid truncating tables that are
parent in a foreign key relationship.
sql/share/errmsg-utf8.txt:
Add error message for truncate and FK.
sql/sql_lex.h:
Introduce a flag so that the partition engine can detect when
a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
Change the truncate table implementation to use the new truncate
handler method and to not rely on row-by-row delete anymore.
The truncate handler method is always invoked with an exclusive
metadata lock. Also, it is no longer possible to truncate a
table that is parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc:
Rename method as the description indicates that in the future
this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
Implement truncate as no operation for the blackhole engine in
order to remain compatible with older releases.
storage/federated/ha_federated.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/heap/ha_heap.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/innobase/handler/ha_innodb.cc:
Rename delete_all_rows to truncate. InnoDB now does truncate
under an exclusive metadata lock.
Introduce and reorganize methods used to retrieve the list
of foreign keys referenced by or referencing a table.
storage/myisammrg/ha_myisammrg.cc:
Introduce truncate method that invokes delete_all_rows.
This is required in order to remain compatible with earlier
releases where truncate would resort to a row-by-row delete.
errors
In the fix of BUG#39934 in 5.1-rep+3, errors are generated when
binlog_format=row and a statement modifies a table restricted to
statement-logging (ER_BINLOG_ROW_MODE_AND_STMT_ENGINE); or if
binlog_format=statement and a statement modifies a table restricted to
row-logging (ER_BINLOG_STMT_MODE_AND_ROW_ENGINE).
However, some DDL statements that lock tables (e.g. ALTER TABLE,
CREATE INDEX and CREATE TRIGGER) were causing spurious errors,
although no rows might be written to the binary log.
To fix the problem, we tagged statements that may generate
rows in the binary log, and hence the warning messages are
only printed out when the appropriate conditions hold and rows
might be changed.
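The hook and its use in InnoDB (named in the per-file notes below) follow roughly this
pattern; the bodies are a sketch, not the actual code, and the helper called inside is
an assumption:

  /* sql/sql_class.cc (sketch): does the current statement possibly
     generate row events in the binary log? */
  extern "C" bool thd_generates_rows(const MYSQL_THD thd)
  {
          /* i.e. is CF_CAN_GENERATE_ROW_EVENTS set for this command? */
          return sqlcom_can_generate_row_events(thd);   /* assumed helper */
  }

  /* storage/innobase/handler/ha_innodb.cc (sketch): only enforce the
     binlog_format / engine-capability checks for such statements. */
  if (thd_generates_rows(thd)) {
          /* ... existing ER_BINLOG_*_MODE_AND_*_ENGINE checks ... */
  }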
sql/log_event.cc:
Reorganized the Query_log_event's constructor based on the
CF_CAN_GENERATE_ROW_EVENTS flag and as such any statement
that has the associated flag should go through a cache
before being written to the binary log.
sql/share/errmsg-utf8.txt:
Improved the error message ER_BINLOG_UNSAFE_MIXED_STATEMENT according to Paul's
suggestion.
sql/sql_class.cc:
Created a hook to be used by InnoDB that checks if a statement
may write rows to the binary log; in other words, whether it has
the CF_CAN_GENERATE_ROW_EVENTS flag set.
sql/sql_class.h:
Defined the CF_CAN_GENERATE_ROW_EVENTS flag.
sql/sql_parse.cc:
Updated the sql_command_flags and added a function to check the
CF_CAN_GENERATE_ROW_EVENTS.
sql/sql_parse.h:
Added a function to check the CF_CAN_GENERATE_ROW_EVENTS.
storage/innobase/handler/ha_innodb.cc:
Added a call to the hook thd_generates_rows().
storage/innobase/handler/ha_innodb.h:
Defined an external reference to the hook thd_generates_rows().