Most notably, this includes MDEV-11623, which includes a fix and
an upgrade procedure for the InnoDB file format incompatibility
that is present in MariaDB Server 10.1.0 through 10.1.20.
In other words, this merge should address
MDEV-11202 InnoDB 10.1 -> 10.2 migration does not work
MySQL 5.7 allows temporary tables to be created in ROW_FORMAT=COMPRESSED.
The usefulness of this is questionable. WL#7899 in MySQL 8.0.0
prevents the creation of such compressed tables, so that all InnoDB
temporary tables will be located inside the predefined
InnoDB temporary tablespace.
Pick up and adjust some tests from MySQL 5.7 and 8.0.
dict_tf_to_fsp_flags(): Remove the parameter is_temp.
fsp_flags_init(): Remove the parameter is_temporary.
row_mysql_drop_temp_tables(): Remove. There cannot be any temporary
tables in InnoDB. (This never removed #sql* tables in the datadir
which were created by DDL.)
dict_table_t::dir_path_of_temp_table: Remove.
create_table_info_t::m_temp_path: Remove.
create_table_info_t::create_options_are_invalid(): Do not allow
ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE for temporary tables.
create_table_info_t::innobase_table_flags(): Do not unnecessarily
prevent CREATE TEMPORARY TABLE with SPATIAL INDEX.
(MySQL 5.7 does allow this.)
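As a rough sketch of the tightened check (the standalone function below is
illustrative; the real check lives in create_table_info_t and reports which
option is invalid):

    /* Reject compression options for temporary tables, which must
       live in the shared temporary tablespace. */
    bool temp_table_options_are_invalid(bool is_temporary,
                                        bool row_format_compressed,
                                        unsigned key_block_size)
    {
        return is_temporary
            && (row_format_compressed || key_block_size != 0);
    }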
fil_space_belongs_in_lru(): The only FIL_TYPE_TEMPORARY tablespace
is never subjected to closing least-recently-used files.
MySQL 5.7 introduced partial support for user-created shared tablespaces
(for example, import and export are not supported).
MariaDB Server does not support shared tablespaces at this point in time.
Let us remove most InnoDB code and data structures that are related
to shared tablespaces.
MariaDB will likely never support MySQL-style encryption for
InnoDB, because we cannot link with the Oracle encryption plugin.
This is preparation for merging MDEV-11623.
- Atomic writes are enabled by default
- Automatically detect if device supports atomic write and use it if
atomic writes are enabled
- Remove ATOMIC WRITE options from CREATE TABLE
- Atomic writes are a device option, not a table option, as a table
may become corrupted if the underlying media changes
- Add support for SHANNON SSD cards
The InnoDB source code contains quite a few references to a closed-source
hot backup tool which was originally called InnoDB Hot Backup (ibbackup)
and later incorporated in MySQL Enterprise Backup.
The open source backup tool XtraBackup uses the full database for recovery.
So, the references to UNIV_HOTBACKUP are only cluttering the source code.
This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with
the notable difference that the file format changes are limited to
repurposing a previously unused data field in B-tree pages.
For persistent InnoDB tables, write the last used AUTO_INCREMENT
value to the root page of the clustered index, in the previously
unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC.
Unlike some other previously unused InnoDB data fields, this one was
actually always zero-initialized, at least since MySQL 3.23.49.
The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the
root page. The SX latch will allow concurrent read access to the root
page. (The field PAGE_ROOT_AUTO_INC will only be read on the
first-time call to ha_innobase::open() from the SQL layer. The
PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so
read/write races are not possible.)
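As a self-contained sketch of the access (the offsets are assumptions taken
from page0page.h: the page header starts 38 bytes into the page and
PAGE_MAX_TRX_ID is 18 bytes into the header; the field is big-endian):

    #include <cstdint>

    /* Read PAGE_ROOT_AUTO_INC (the alias of PAGE_MAX_TRX_ID) from a
       clustered index root page. */
    inline uint64_t page_get_autoinc(const unsigned char* page)
    {
        const unsigned char* f = page + 38 + 18; /* PAGE_HEADER + PAGE_MAX_TRX_ID */
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) {
            v = (v << 8) | f[i];  /* big-endian 64-bit value */
        }
        return v;
    }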
During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level
function btr_cur_search_to_nth_level(), adding no extra page
access. [Adaptive hash index lookup will be disabled during INSERT.]
If some rare UPDATE modifies an AUTO_INCREMENT column, the
PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in
ha_innobase::update_row().
When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC
field.
During ALTER TABLE, the initial AUTO_INCREMENT value will be copied
from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will
update PAGE_ROOT_AUTO_INC in real time.
innodb_col_no(): Determine the dict_table_t::cols[] element index
corresponding to a Field of a non-virtual column.
(The MySQL 5.7 implementation of virtual columns breaks the 1:1
relationship between Field::field_index and dict_table_t::cols[].
Virtual columns are omitted from dict_table_t::cols[]. Therefore,
we must translate the field_index of AUTO_INCREMENT columns into
an index of dict_table_t::cols[].)
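A self-contained model of that translation (a simplified column descriptor
stands in for Field and dict_table_t; the real innodb_col_no() operates on
those types):

    #include <vector>

    struct column { bool is_virtual; };

    /* Count only the stored columns before field_index, because
       virtual columns are omitted from dict_table_t::cols[]. */
    unsigned innodb_col_no(const std::vector<column>& fields,
                           unsigned field_index)
    {
        unsigned col_no = 0;
        for (unsigned i = 0; i < field_index; i++) {
            if (!fields[i].is_virtual) {
                col_no++;
            }
        }
        return col_no;
    }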
Upgrade from old data files:
By default, the AUTO_INCREMENT sequence in old data files would appear
to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain
the value 0 in each clustered index page. In new data files,
PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain
any AUTO_INCREMENT column.
For backward compatibility, we use the old method of
SELECT MAX(auto_increment_column) for initializing the sequence.
btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format
data file.
btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc()
that will resort to reading MAX(auto_increment_column) for data files
that did not use AUTO_INCREMENT yet. It was manually tested that during
the execution of innodb.autoinc_persist the compatibility logic is
not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty
clustered index root pages).
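A minimal model of the fallback logic (the callback parameters are
illustrative stand-ins for the page read and the MAX() scan):

    #include <cstdint>
    #include <functional>

    uint64_t read_autoinc_with_fallback(
        const std::function<uint64_t()>& read_page_root_auto_inc,
        const std::function<uint64_t()>& select_max_autoinc_column)
    {
        uint64_t autoinc = read_page_root_auto_inc();
        /* 0 can only mean an old-format file or an empty table:
           fall back to SELECT MAX(auto_increment_column). */
        return autoinc ? autoinc : select_max_autoinc_column();
    }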
initialize_auto_increment(): Replaces
ha_innobase::innobase_initialize_autoinc(). This initializes
the AUTO_INCREMENT metadata. Only called from ha_innobase::open().
ha_innobase::info_low(): Do not try to lazily initialize
dict_table_t::autoinc. It must already have been initialized by
ha_innobase::open() or ha_innobase::create().
Note: The adjustments to class ha_innopart were not tested, because
the source code (native InnoDB partitioning) is not being compiled.
* remnant of 5.6, does not exist in 5.7. bad merge?
* also remove dict_table_get_col_name_for_mysql(), it was only
used when index_field_t::col_name was not NULL
WL#7682 in MySQL 5.7 introduced the possibility to create lightweight
temporary tables in InnoDB. These are called 'intrinsic temporary tables'
in InnoDB, and in MySQL 5.7 the optimizer can create them for
sorting or buffering data in query processing.
In MariaDB 10.2, the optimizer temporary tables cannot be created in
InnoDB, so we should remove the dead code and related data structures.
Contains also:
MDEV-10549 mysqld: sql/handler.cc:2692: int handler::ha_index_first(uchar*): Assertion `table_share->tmp_table != NO_TMP_TABLE || m_lock_type != 2' failed. (branch bb-10.2-jan)
Unlike in MySQL, InnoDB still uses THR_LOCK in MariaDB
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
enable tests that were fixed in MDEV-10549
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
Fix main.innodb_mysql_sync: re-enable online ALTER for partitioned InnoDB tables
Contains also
MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
The failure happened because 5.7 changed the signature of the
virtual function bool handler::primary_key_is_clustered() const
("const" was added). InnoDB was still using the old signature, so its
implementation no longer overrode the base function and was never called.
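A minimal illustration of the mismatch (hypothetical skeleton classes;
without "const" the derived function hides rather than overrides the base
virtual, so calls through the base class keep using the base
implementation):

    struct handler {
        virtual bool primary_key_is_clustered() const { return false; }
        virtual ~handler() {}
    };

    struct ha_innobase : handler {
        /* Old signature, missing "const": would hide, not override. */
        /* bool primary_key_is_clustered() { return true; } */

        /* Fixed signature: matches the base class, so it overrides. */
        bool primary_key_is_clustered() const override { return true; }
    };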
MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
Fixed mutexing problem on lock_trx_handle_wait. Note that
rpl_parallel and rpl_optimistic_parallel tests still
fail.
MDEV-10156 : Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
Reason: incorrect merge
MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
Reason: incorrect merge
Step 1:
-- Remove page encryption from the dictionary (per-table
encryption will be handled by storing crypt_data on page 0)
-- Remove encryption/compression from os0file and all functions
before that (compression will be added to buf0buf.cc)
-- Use the same CRYPT_SCHEME_1 for all encryption methods
-- Do some code cleanups to conform to the InnoDB coding style
Analysis: The MySQL table definition also contains virtual columns, and
the index fieldnr references MySQL table fields. However, the InnoDB table
definition does not contain virtual columns. Therefore, when matching a
MySQL key fieldnr we need to use the actual column name to find the
referenced InnoDB dictionary column.
Fix: Add a new function to match MySQL index key columns to the InnoDB
dictionary.
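A self-contained model of the matching (plain strings stand in for the
MySQL field and the InnoDB dictionary; the real function operates on
dict_table_t):

    #include <string>
    #include <vector>

    /* Find the InnoDB dictionary column by name, because the MySQL
       fieldnr also counts virtual columns that InnoDB does not store. */
    int innodb_find_col_by_name(const std::vector<std::string>& innodb_cols,
                                const std::string& mysql_col_name)
    {
        for (size_t i = 0; i < innodb_cols.size(); i++) {
            if (innodb_cols[i] == mysql_col_name) {
                return static_cast<int>(i);
            }
        }
        return -1;  /* not stored in InnoDB (for example, a virtual column) */
    }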
innodb_stats_sample_pages
Analysis: If the number of analyzed pages is set to a very
low number compared to the actual number of pages in
the table/index, the pages are picked at random
(default 8 pages), so a query issued after
ANALYZE TABLE can return different results each time.
If the index tree is small, smaller than 10 *
n_sample_pages + total_external_size, then the
estimate is ok. For bigger index trees it is
common that we do not see any borders between
key values in the few pages we pick. But still
there may be n_sample_pages different key values,
or even more, and the estimate just
approximates to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable
innodb_stats_sample_traditional that retains
the current design. Default false.
(2) If the traditional sample is not used, we use

    n_sample_pages = max(min(srv_stats_sample_pages,
                             index->stat_index_size),
                         log2(index->stat_index_size)
                         * srv_stats_sample_pages);
(3) Introduced a new dynamic configuration variable
stat_modified_counter (default = 0) which, if set,
bounds how many rows must be updated before statistics
are re-estimated.
If the user has provided such an upper bound, we use the minimum of the
provided value and 1/16 of the table. If no upper bound is provided
(srv_stats_modified_counter = 0, the default), then new statistics are
calculated when 1/16 of the table has been modified
since the last time a statistics batch was run.
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
/**
@param t table
@return true if the table has changed too much and stats need to be
recalculated */
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
16 + (t)->stat_n_rows / 16))
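Equivalently, as a self-contained function (plain integers stand in for
the table statistics):

    #include <cstdint>
    #include <algorithm>

    bool table_changed_too_much(int64_t stat_modified_counter,
                                int64_t stat_n_rows,
                                int64_t srv_stats_modified_counter)
    {
        /* 16 + n/16: roughly 1/16 of the table, with a floor of 16
           modifications so tiny tables are not re-analyzed constantly. */
        int64_t bound = 16 + stat_n_rows / 16;
        if (srv_stats_modified_counter) {
            bound = std::min(srv_stats_modified_counter, bound);
        }
        return stat_modified_counter > bound;
    }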
Merge Facebook commit 154c579b828a60722a7d9477fc61868c07453d08
and e8f0052f9b112dc786bf9b957ed5b16a5749f7fd authored
by Steaphan Greene from https://github.com/facebook/mysql-5.6
Optimize prefix index queries to skip cluster index lookup when possible.
Currently InnoDB will always fetch the clustered index (primary key
index) for all prefix columns in an index, even when the value of a
particular record is smaller than the prefix length. This change
optimizes that case to use the record from the secondary index and avoid
the extra lookup.
Also adds two status vars that track how effective this is:
innodb_secondary_index_triggered_cluster_reads:
Times secondary index lookup triggered cluster lookup.
innodb_secondary_index_triggered_cluster_reads_avoided:
Times prefix optimization avoided triggering cluster lookup.
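The core of the decision is simple; as an illustrative sketch (not the
actual row0sel.cc interface):

    #include <cstddef>

    /* A prefix-indexed value is stored in full in the secondary index
       record when it is shorter than the prefix length, so the
       clustered index lookup can be skipped for that column. */
    bool secondary_record_is_sufficient(size_t value_len, size_t prefix_len)
    {
        return value_len < prefix_len;
    }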
FROM A FUNCTION
Scenario:
In a stored procedure, a CREATE TABLE statement is not allowed, but an
exception is made for CREATE TEMPORARY TABLE: we can create a temporary
table in a stored procedure.
Let there be two stored functions f1 and f2 and two stored procedures p1 and
p2. Their properties are as follows:
. stored function f1() calls stored procedure p1().
. stored function f2() calls stored procedure p2().
. stored procedure p1() creates temporary table t1.
. stored procedure p2() does DML on t1.
Consider the following situation:
1. Autocommit mode is on.
2. select f1()
3. select f2()
Step 2: In this step, t1 would be created via p1(). A table-level
transaction lock would have been taken. ::external_lock() would not have
been called on this table. At the end of step 2, because autocommit mode
is on, this table-level lock will be released.
Step 3: When we execute DML on table t1 via p2(), we have two problems:
Problem 1:
The function ha_innobase::external_lock() would have been called, but since
it is a SELECT query, no table-level locks would have been taken. Hence the
following assertion fails:
ut_ad(lock_table_has(thr_get_trx(thr), index->table, LOCK_IX));
Solution:
The solution is to identify this situation, take a table-level lock,
and use the proper lock type prebuilt->select_lock_type = LOCK_X for DML
operations.
Problem 2:
Another problem is that in step 3, ha_innobase::open() is never called on
the table t1.
Solution:
The solution is to identify this situation and re-initialize the handler
of table t1.
rb#6429 approved by Krunal.
If mysql.innodb_table_stats or mysql.innodb_index_stats is not found or has
an unexpected structure, output that error only once instead of once for
every table that tries to use them. If they do exist, then print fetch or
recalculation errors only once per table or index.
SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000. All of them
are related to the same parent_table through a foreign key constraint.
When the parent_table is loaded into the dictionary cache, all the child
tables will also be loaded. This takes a lot of time. Since this operation
happens while the dictionary latch is held, the scenario leads to a "long
semaphore wait" situation and the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is caused by the
dict_foreign_find() function. It does a linear search on two linked lists,
table->foreign_list and table->referenced_list, looking for a particular
foreign key object with foreign->id as the key. This is called twice
for each foreign key object.
Solution:
Introduce red-black trees, table->foreign_rbt and table->referenced_rbt,
which serve as indexes on table->foreign_list and table->referenced_list
respectively, using foreign->id as the key. These rbt structures are
used solely by dict_foreign_find().
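A self-contained model of the change (std::map stands in for the InnoDB
rbt implementation and std::string for foreign->id):

    #include <map>
    #include <string>

    struct dict_foreign_t { std::string id; };

    /* O(log n) lookup replacing the linear scans over
       table->foreign_list and table->referenced_list. */
    dict_foreign_t* dict_foreign_find(
        const std::map<std::string, dict_foreign_t*>& rbt,
        const std::string& id)
    {
        auto it = rbt.find(id);
        return it == rbt.end() ? nullptr : it->second;
    }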
rb#5599 approved by Vasil
Update InnoDB to 5.6.14
Apply MySQL-5.6 hack for MySQL Bug#16434374
Move Aria-only HA_RTREE_INDEX from my_base.h to maria_def.h (breaks an assert in InnoDB)
Fix InnoDB memory leak
SYNTAX: ATOMIC_WRITES=['DEFAULT','ON','OFF']
The idea here is to be able to set innodb_doublewrite = 1 but with the
following rules:
ATOMIC_WRITES='DEFAULT' - if innodb_use_atomic_writes = 1, do not write
the changes to the doublewrite buffer;
if innodb_use_atomic_writes = 0, write to the doublewrite buffer
ATOMIC_WRITES='ON' - do not write to the doublewrite buffer
ATOMIC_WRITES='OFF' - write to the doublewrite buffer
Note that the doublewrite buffer can't be used if innodb_doublewrite = 0.
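A self-contained model of the resulting decision (the enum is an
illustrative stand-in for the parsed table option):

    enum atomic_writes_t {
        ATOMIC_WRITES_DEFAULT,
        ATOMIC_WRITES_ON,
        ATOMIC_WRITES_OFF
    };

    bool use_doublewrite(bool innodb_doublewrite,
                         bool innodb_use_atomic_writes,
                         atomic_writes_t option)
    {
        if (!innodb_doublewrite) {
            return false;  /* doublewrite buffer disabled globally */
        }
        switch (option) {
        case ATOMIC_WRITES_ON:      return false;
        case ATOMIC_WRITES_OFF:     return true;
        case ATOMIC_WRITES_DEFAULT: return !innodb_use_atomic_writes;
        }
        return true;
    }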