In MariaDB Server 10.4, btr_cur_instant_init_low() assumes that
all PRIMARY KEY columns that are internally variable-length will
be encoded in 0 bytes in the metadata record. Sometimes, CHAR
columns can be encoded as variable-length. We should not
unnecessarily reserve space for a dummy string value in the
metadata record.
DropIndex, CreateIndex: Remove. The file row0trunc.cc only exists
in MariaDB Server 10.3 so that the crash recovery of TRUNCATE TABLE
operations from older 10.2 and 10.3 servers will work. This dead code
was being used for implementing the MySQL 5.7 WL#6501 TRUNCATE TABLE
that was replaced with a backup-safe implementation in MDEV-13564.
buf_read_ibuf_merge_pages(): Discard any page numbers that are
outside the current bounds of the tablespace, by invoking the
function ibuf_delete_recs() that was introduced in MDEV-20934.
This could avoid an infinite change buffer merge loop on
innodb_fast_shutdown=0, because normally the change buffer merge
would only be attempted if a page was successfully loaded into
the buffer pool.
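A minimal sketch of the filtering idea follows; Tablespace, discard_buffered_changes() and schedule_merge_read() are hypothetical stand-ins for fil_space_t, ibuf_delete_recs() and the real read scheduling, not the actual InnoDB code.

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical stand-ins; the real code works on fil_space_t and
// invokes ibuf_delete_recs() for the out-of-bounds pages.
struct Tablespace { uint32_t size_in_pages; };

static void discard_buffered_changes(uint32_t page_no)
{ std::printf("discard buffered changes for page %u\n", page_no); }

static void schedule_merge_read(uint32_t page_no)
{ std::printf("read page %u for merge\n", page_no); }

// Pages past the current end of the tablespace can never be read into
// the buffer pool, so their buffered changes are dropped instead of
// being retried forever during a slow shutdown.
static void read_pages_for_change_buffer_merge(
    const Tablespace& space, const std::vector<uint32_t>& page_nos)
{
  for (uint32_t page_no : page_nos)
    if (page_no >= space.size_in_pages)
      discard_buffered_changes(page_no);
    else
      schedule_merge_read(page_no);
}

int main()
{
  read_pages_for_change_buffer_merge({100}, {7, 42, 150});
}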
dict_drop_index_tree(): Add the parameter trx_t*.
To prevent the DROP TABLE crash, do not invoke btr_free_if_exists()
if the entire .ibd file will be dropped. Thus, we will avoid a crash
if the BTR_SEG_LEAF or BTR_SEG_TOP of the index is corrupted,
and we will also avoid unnecessarily accessing the to-be-dropped
tablespace via the buffer pool.
In MariaDB 10.2, we disable the DROP TABLE fix if innodb_safe_truncate=0,
because the backup-unsafe MySQL 5.7 WL#6501 form of TRUNCATE TABLE
requires that the individual pages be freed inside the tablespace.
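The gist of that decision, as a hedged sketch (the names are illustrative, not the actual dict_drop_index_tree() signature):

// Hypothetical sketch: when the whole .ibd file is about to be deleted,
// there is no point in freeing its individual pages via the buffer pool,
// and doing so could crash on a corrupted BTR_SEG_LEAF or BTR_SEG_TOP.
void drop_index_tree_sketch(bool whole_tablespace_will_be_dropped)
{
  if (whole_tablespace_will_be_dropped)
    return;               // deleting the file reclaims all of its pages
  // btr_free_if_exists(...);  // only needed if the tablespace survives
}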
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will inquire partitions
about how much memory their MRR implementation needs by passing
*buffer_size=0. DS-MRR code didn't know about this (it actually used
uint for the buffer size calculation and would have had an underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size (see the sketch after this list).
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call handler->clone
with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() function of the affected storage engines ignore
the name argument and get it elsewhere.
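A hedged sketch of the buffer-size negotiation mentioned in the choose_mrr_impl() item above; MrrCostEstimate, choose_mrr_impl_sketch and the constants are illustrative, not the actual DsMrr_impl API.

#include <algorithm>
#include <cstddef>
#include <cstdio>

// Hypothetical sketch of the buffer-size negotiation between ha_partition
// and a DS-MRR implementation; the names and numbers are illustrative.
struct MrrCostEstimate {
  size_t needed_bytes;   // what DS-MRR would like to have
};

// The caller (think ha_partition) passes *buffer_size == 0 to ask
// "how much memory would you need?" instead of offering a buffer.
void choose_mrr_impl_sketch(const MrrCostEstimate& est,
                            size_t mrr_buffer_size_sysvar,
                            size_t* buffer_size)
{
  if (*buffer_size == 0) {
    // Size inquiry: report the need, capped at @@mrr_buffer_size.
    *buffer_size = std::min(est.needed_bytes, mrr_buffer_size_sysvar);
    return;
  }
  // A concrete buffer was offered; use signed arithmetic so that a
  // too-small offer cannot wrap around (the old uint underflow).
  std::ptrdiff_t spare = static_cast<std::ptrdiff_t>(*buffer_size) -
                         static_cast<std::ptrdiff_t>(est.needed_bytes);
  if (spare < 0)
    std::puts("offered buffer is smaller than needed; fall back");
}

int main()
{
  size_t buf = 0;                                    // inquiry mode
  choose_mrr_impl_sketch({1 << 20}, 262144, &buf);
  std::printf("DS-MRR asks for %zu bytes\n", buf);   // capped at 262144
}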
Apart from page latches (buf_block_t::lock), mini-transactions
keep track of at most one dict_index_t::lock and
fil_space_t::latch at a time, and in a rare case, purge_sys.latch.
Let us introduce interfaces for acquiring an index latch
or a tablespace latch.
In a later version, we may want to introduce mtr_t members
for holding a latched dict_index_t* and fil_space_t*,
and replace the remaining use of mtr_t::m_memo
with std::set<buf_block_t*> or with a map<buf_block_t*,byte*>
pointing to log records.
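A minimal sketch of what such interfaces could look like, using std::shared_mutex as a stand-in for the InnoDB rw-locks; MiniTransactionSketch and its members are hypothetical, not the actual mtr_t.

#include <shared_mutex>

// Hypothetical simplified stand-ins for dict_index_t::lock and
// fil_space_t::latch; the real mini-transaction tracks them in m_memo.
struct IndexLatch { std::shared_mutex lock; };
struct SpaceLatch { std::shared_mutex latch; };

class MiniTransactionSketch {
public:
  // Acquire and remember an index latch (at most one at a time).
  void x_lock_index(IndexLatch& index) {
    index.lock.lock();
    m_index = &index;
  }
  // Acquire and remember a tablespace latch (at most one at a time).
  void x_lock_space(SpaceLatch& space) {
    space.latch.lock();
    m_space = &space;
  }
  // Release everything at commit, mirroring mtr_t::commit().
  void commit() {
    if (m_index) { m_index->lock.unlock(); m_index = nullptr; }
    if (m_space) { m_space->latch.unlock(); m_space = nullptr; }
  }
private:
  IndexLatch* m_index = nullptr;
  SpaceLatch* m_space = nullptr;
};

int main()
{
  IndexLatch index;
  SpaceLatch space;
  MiniTransactionSketch mtr;
  mtr.x_lock_index(index);
  mtr.x_lock_space(space);
  mtr.commit();
}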
In the test innodb.instant_alter,4k we would be flagging an error
for too large row size. That error was previously only being reported
if the table was being rebuilt. Thus, this merge is fixing a small
omission in MDEV-11369 (instant ADD COLUMN).
Move row size check to early CREATE/ALTER TABLE phase. Stop checking
on table open.
dict_index_add_to_cache(): remove parameter 'strict', stop checking row size
dict_index_t::record_size_info_t: this is the result of the row size check operation
create_table_info_t::row_size_is_acceptable(): performs row size check.
Issues error or warning. Writes first overflow field to InnoDB log.
create_table_info_t::create_table(): add row size check
dict_index_t::record_size_info(): this is a refactored version
of dict_index_t::rec_potentially_too_big(). The new version doesn't change
the global state of the program but returns all relevant information, and it
is the callers who decide how to handle row size overflow (see the sketch below).
dict_index_t::rec_potentially_too_big(): removed
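The refactoring pattern (compute the facts, let the caller decide the severity) could be sketched roughly as follows; RecordSizeInfo and the two functions are simplified illustrations, not the actual dict_index_t::record_size_info() code.

#include <cstddef>
#include <cstdio>

// Hypothetical simplification: the size computation only reports facts;
// the caller decides whether an overflow is an error or a warning.
struct RecordSizeInfo {
  size_t max_record_bytes = 0;   // worst-case record size
  size_t page_limit_bytes = 0;   // what the page format allows
  int    first_overflow_field = -1;
  bool   too_big() const { return max_record_bytes > page_limit_bytes; }
};

RecordSizeInfo record_size_info_sketch(size_t fixed_part, size_t var_part,
                                       size_t page_limit)
{
  RecordSizeInfo info;
  info.max_record_bytes = fixed_part + var_part;
  info.page_limit_bytes = page_limit;
  if (info.too_big())
    info.first_overflow_field = 0;   // illustrative only
  return info;
}

// The caller (think CREATE/ALTER TABLE) chooses the severity.
bool row_size_is_acceptable_sketch(const RecordSizeInfo& info, bool strict)
{
  if (!info.too_big())
    return true;
  std::fprintf(stderr, "%s: row size %zu exceeds limit %zu (field %d)\n",
               strict ? "ERROR" : "Warning",
               info.max_record_bytes, info.page_limit_bytes,
               info.first_overflow_field);
  return !strict;
}

int main()
{
  return row_size_is_acceptable_sketch(
      record_size_info_sketch(6000, 3000, 8126), true) ? 0 : 1;
}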
A search with PAGE_CUR_GE may land on the supremum record on
a leaf page that is not the rightmost leaf page.
This could occur when all keys on the current page are
smaller than the search key, and the smallest key on the
successor page is larger than the search key.
ibuf_delete_recs(): Correct the debug assertion accordingly.
mtr_t::Impl, mtr_t::Command: Merge to mtr_t.
MTR_MAGIC_N: Remove.
MTR_STATE_COMMITTING: Remove. This state was only being set
internally during mtr_t::commit().
mtr_t::Command::m_locks_released: Remove (set-and-never-read member).
mtr_t::Command::m_start_lsn: Replaced with the return value of
finish_write() and a parameter to release_blocks().
mtr_t::Command::m_end_lsn: Removed as a duplicate of mtr_t::m_commit_lsn.
mtr_t::Command::prepare_write(): Replace a switch () with a
comparison against 0. Only 2 values of m_log_mode are allowed.
The XDES_CLEAN_BIT is always set for every element of
the page allocation bitmap in the extent descriptor pages.
Do not bother touching it, to avoid redundant writes.
page_rec_write_field(): Remove.
dict_create_index_tree_step(): If the SYS_INDEXES.PAGE does not change,
do not update it in the data dictionary. Typically, all index page numbers
would be unchanged before and after IMPORT TABLESPACE, except if some
secondary indexes were created after loading some data.
btr_root_fseg_adjust_on_import(): Remove the redundant mtr_t* parameter.
Redo logging is disabled during the page adjustments that IMPORT TABLESPACE
is performing.
fsp_alloc_seg_inode_page(): Ever since
commit 3926673ce7
all newly allocated pages are zero-initialized.
Assert that this is the case for the FSEG_ID fields.
btr_store_big_rec_extern_fields(): Remove the redundant initialization
of the most significant 32 bits of BTR_EXTERN_LEN. InnoDB never supported
BLOBs that are longer than 4GiB. In fact, dtuple_convert_big_rec()
would emit an error message if a clustered index record tuple
exceeded 1,000,000,000 bytes in length.
The BTR_EXTERN_LEN in the BLOB pointers in clustered index leaf page
records is zero-initialized at least since
commit 41bb3537ba
dict_index_add_to_cache(): Make the 'index' a reference to a pointer,
so that the caller will avoid the expensive call to
dict_index_get_if_in_cache_low().
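A small sketch of that calling convention (Index, index_cache and add_to_cache_sketch are hypothetical simplifications): because the pointer is passed by reference, the caller ends up holding the cached object without a second lookup.

#include <string>
#include <unordered_map>

struct Index { std::string name; };
std::unordered_map<std::string, Index*> index_cache;

// If a duplicate already lives in the cache, the caller's pointer is
// redirected to it in place, so no extra cache lookup is needed.
void add_to_cache_sketch(Index*& index)
{
  auto [it, inserted] = index_cache.emplace(index->name, index);
  if (!inserted) {
    delete index;         // a duplicate already lives in the cache
    index = it->second;   // caller now sees the cached object
  }
}

int main()
{
  Index* idx = new Index{"t1_pk"};
  add_to_cache_sketch(idx);   // idx now points at the cached object
}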
Due to a data corruption bug that may have occurred a long time earlier
(possibly involving physical backup and MySQL Bug #69122, which was
addressed in commit f166ec71b7)
it seems possible that the InnoDB change buffer might end up containing
entries, while no buffered changes exist according to the change buffer
bitmap pages in the .ibd files.
ibuf_delete_recs(): New function, to be invoked on slow shutdown only.
Remove all buffered changes for a specific page.
ibuf_merge_or_delete_for_page(): If the change buffer bitmap is clean
and a slow shutdown is in progress, invoke ibuf_delete_recs().
We do not want to do that during normal operation, due to the additional
overhead that is involved. The bitmap page should be consistent with
the change buffer in the first place.
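Roughly, the decision looks like the following hedged sketch (the two flags stand in for the change buffer bitmap lookup and the shutdown-state check):

// Hypothetical sketch of the decision in ibuf_merge_or_delete_for_page().
void merge_or_delete_for_page_sketch(bool bitmap_says_changes_buffered,
                                     bool slow_shutdown_in_progress)
{
  if (bitmap_says_changes_buffered)
    return;                    // normal path: merge the buffered changes
  if (slow_shutdown_in_progress) {
    // The bitmap claims nothing is buffered, but orphan change buffer
    // records may exist after earlier corruption; delete them so that
    // innodb_fast_shutdown=0 can finish.  (ibuf_delete_recs() here.)
  }
  // During normal operation the extra lookup is skipped: the bitmap is
  // expected to be consistent with the change buffer.
}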
InnoDB: Assertion failure in file .../dict/dict0dict.cc line ...
InnoDB: Failing assertion: table->can_be_evicted
This fixes a regression that was caused by the fix of MDEV-20621
(commit a41d429765).
MySQL 5.6 (and MariaDB 10.0) introduced eviction of tables from
the InnoDB data dictionary cache. Tables that are connected to
FOREIGN KEY constraints or FULLTEXT INDEX are exempt from eviction.
With the problematic change, a table that would already be exempt
from eviction due to FOREIGN KEY would cause the problem if there
also was a FULLTEXT INDEX defined on it.
dict_load_table(): Only prevent eviction if table->can_be_evicted holds.
Don't save/restore HP_INFO as it could be changed by a concurrent thread.
Different parts of HP_INFO are protected by different mutexes, and
the mutex that protects most of HP_INFO does not protect its open_list
data.
As a bonus, make heap_check_heap() take a const HP_INFO* and not
make any changes there whatsoever.
dict_table_rename_in_cache(): Use strcpy() instead of strncpy(),
because they are known to be equivalent in this case (the length
of old_name was already validated).
mariabackup: Invoke strncpy() with one less than the buffer size,
and explicitly add NUL as the last byte of the buffer.
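A minimal self-contained illustration of that pattern (the buffer size and string are made up):

#include <cstring>
#include <cstdio>

int main()
{
  // Copy at most sizeof(buf) - 1 bytes and terminate explicitly, since
  // strncpy() does not add a NUL when the source is too long.
  char buf[16];
  const char* src = "a/very/long/table/name";
  strncpy(buf, src, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';
  std::puts(buf);   // always NUL-terminated, possibly truncated
}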
- Add SLEEP() calls to the testcase to make it really test that the
time doesn't change.
- Always use the .frm file creation time as the creation timestamp
  (attempts to re-use an older create_time when the table DDL changes
  are not good, because create_time would then change after a server restart)
- Use the same method names as the upstream patch does
- Use std::atomic for m_update_time
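As a hedged sketch of the last point, assuming a simplified holder class (TableStatsSketch is not the real handler code):

#include <atomic>
#include <ctime>

// The cached update timestamp can be read by one thread while another
// thread refreshes it, so a plain time_t member would be a data race.
class TableStatsSketch {
public:
  void set_update_time(time_t t) {
    m_update_time.store(t, std::memory_order_relaxed);
  }
  time_t update_time() const {
    return m_update_time.load(std::memory_order_relaxed);
  }
private:
  std::atomic<time_t> m_update_time{0};
};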
innobase_drop_foreign_try(): Don't evict and reload the dict_foreign_t
during instant ALTER TABLE if the FOREIGN KEY constraint is being
dropped.
The MDEV-19630 fix (commit 07b1a26c33)
was incomplete, because it did not cover a case where the
FOREIGN KEY constraint is being dropped.
Apply the changes to InnoDB and XtraDB that had been
inadvertently skipped in the merge
commit ae476868a5
That merge failure sabotaged part of MDEV-20127:
>Revert a problematic auto_increment_increment 'fix' from 2014.
>This involves replacing the MDEV-8827 fix and in 10.1,
>removing some WSREP instrumentation.
The code changes were re-merged manually by executing the following:
# Get the parent of the problematic merge.
git checkout ae476868a5394041a00e75a29c7d45917e8dfae8^
# Perform the merge again.
git merge ae476868a5394041a00e75a29c7d45917e8dfae8^2
# Get the conflict resolution from that merge.
git checkout ae476868a5 .
# Note: Any changes to these files were removed (empty diff)!
git diff HEAD storage/{innobase,xtradb}/handler/ha_innodb.cc
# Apply the code changes:
git diff cf40393471b10ca68cc1d2804c22ab9203900978^2..MERGE_HEAD \
storage/{innobase,xtradb}/handler/ha_innodb.cc|
patch -p1
In debug builds, whenever a MEMORY table instance gets closed it performs
a consistency check without protection. This may cause a server crash if
executed concurrently with DML.
Moved the consistency check to ha_heap::external_lock(F_UNLCK), so that it
is protected by THR_LOCK.
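A rough sketch of the new placement, with hypothetical names (LOCK_RELEASE standing in for F_UNLCK, heap_is_consistent() for the real heap walk):

// The check runs while the statement still holds THR_LOCK on the table,
// not at close time, so concurrent DML cannot race with it.
enum LockRequest { LOCK_READ, LOCK_WRITE, LOCK_RELEASE };

bool heap_is_consistent() { return true; }   // placeholder for the real walk

int external_lock_sketch(LockRequest req)
{
#ifndef NDEBUG
  if (req == LOCK_RELEASE && !heap_is_consistent())
    return 1;   // still serialized by THR_LOCK at this point
#endif
  return 0;
}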