dict_foreign_qualify_index(): Avoid a redundant and harmful
computation of col_name of a virtual column. This fixes the
assertion failure.
dict_foreign_push_index_error(): Do not call dict_table_get_col_name()
on a virtual column. (It is unclear if this condition is actually
reachable.)
It does not hurt to delete nonexistent records from SYS_TABLESPACES
and SYS_DATAFILES. Because MariaDB does not support CREATE TABLESPACE,
only the system tablespace (space_id=0) can contain multiple tables.
However, there are no entries for the system tablespace in these tables
(which are themselves stored inside the system tablespace).
The purpose of the InnoDB buffer pool dump is to allow InnoDB to be
restarted with the same persistent data pages in the buffer pool.
The InnoDB temporary tablespace that was introduced in MariaDB 10.2.2
is always reinitialized on restart. Therefore, it does not make sense
to attempt to dump or restore any pages of the temporary tablespace.
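The gist of the filtering can be illustrated with a small standalone sketch; the page descriptor and the temporary-tablespace id below are stand-ins, not InnoDB's actual types:
#include <cstdint>
#include <cstdio>
#include <vector>
struct page_id { uint32_t space; uint32_t page_no; };
// Hypothetical id standing in for the temporary tablespace.
static const uint32_t TEMP_SPACE_ID = 0xFFFFFFFEu;
static void dump_buffer_pool(const std::vector<page_id>& pages, FILE* f)
{
    for (const page_id& p : pages) {
        if (p.space == TEMP_SPACE_ID) {
            // The temporary tablespace is recreated on startup: do not dump it.
            continue;
        }
        std::fprintf(f, "%u,%u\n", p.space, p.page_no);
    }
}
int main()
{
    std::vector<page_id> pages = {{0, 3}, {TEMP_SPACE_ID, 7}, {5, 1}};
    dump_buffer_pool(pages, stdout);    // prints only pages of persistent tablespaces
}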
ha_innobase::check_if_supported_inplace_alter(): Only check for
high_level_read_only. Do not unnecessarily refuse
ALTER TABLE...ALGORITHM=INPLACE if innodb_force_recovery was
specified as 1, 2, or 3.
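A standalone sketch of the intended check, with illustrative variable names rather than the actual InnoDB symbols:
#include <cstdio>
static bool     high_level_read_only = false;   // stands for read_only || innodb_read_only
static unsigned force_recovery       = 2;       // stands for innodb_force_recovery
enum alter_result { INPLACE_OK, INPLACE_NOT_SUPPORTED };
static alter_result check_if_supported_inplace_alter_sketch()
{
    if (high_level_read_only) {
        return INPLACE_NOT_SUPPORTED;           // genuinely read-only: refuse
    }
    // innodb_force_recovery values 1..3 still allow writes,
    // so the setting is deliberately not checked here.
    (void) force_recovery;
    return INPLACE_OK;
}
int main()
{
    std::printf("inplace ALTER %s\n",
                check_if_supported_inplace_alter_sketch() == INPLACE_OK
                ? "allowed" : "refused");
}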
innobase_start_or_create_for_mysql(): Block all writes from SQL
if the system tablespace was initialized with 'newraw'.
Problem:
=======
During the validation of a missing tablespace, the missing tablespace ID
is compared against the hash table of redo log records (recv_sys->addr_hash).
But if the hash table ran out of memory, it may not contain the redo log
records for all tablespaces. In that case, the server would start InnoDB
even though a tablespace is missing.
Solution:
========
If the recv_sys->addr_hash hash table ran out of memory, InnoDB needs
to scan the remaining redo log again to validate the missing tablespace.
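The decision logic can be sketched as follows; the types and helpers here are simplified stand-ins for recv_sys and the log scan:
#include <cstdint>
#include <set>
struct recv_state {
    std::set<uint32_t> spaces_in_hash;  // space ids present in recv_sys->addr_hash
    bool               ran_out_of_memory;
};
// Hypothetical helper standing in for a second scan of the redo log.
static bool rescan_log_contains(uint32_t space_id) { (void) space_id; return false; }
static bool missing_space_has_redo(const recv_state& s, uint32_t missing_space_id)
{
    if (s.spaces_in_hash.count(missing_space_id)) {
        return true;                    // record found in the hash table
    }
    if (s.ran_out_of_memory) {
        // The hash table is incomplete; only rescanning the remaining
        // redo log can prove that the tablespace has no records.
        return rescan_log_contains(missing_space_id);
    }
    return false;                       // hash table was complete: no records exist
}
int main()
{
    recv_state s{{1, 2, 3}, true};
    return missing_space_has_redo(s, 42) ? 1 : 0;
}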
The following INFORMATION_SCHEMA views were unnecessarily retrieving
the data from the SYS_TABLESPACES table instead of directly fetching
it from the fil_system cache:
information_schema.innodb_tablespaces_encryption
information_schema.innodb_tablespaces_scrubbing
InnoDB always loads all tablespace metadata into memory at startup
and never evicts it while the tablespace exists.
With this fix, accessing these views will be much faster, use less
memory, and include data about all tablespaces, including the undo
tablespaces.
The view information_schema.innodb_sys_tablespaces will still reflect
the contents of the SYS_TABLESPACES table.
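The idea can be sketched with a simplified stand-in for the fil_system cache (the structures below are illustrative, not InnoDB's):
#include <cstdint>
#include <cstdio>
#include <list>
#include <mutex>
#include <string>
struct space_info { uint32_t id; std::string name; };
struct fil_system_sketch {
    std::mutex            mutex;
    std::list<space_info> space_list;   // every tablespace known to the server
};
static fil_system_sketch fil_cache;
static void fill_i_s_view()
{
    std::lock_guard<std::mutex> guard(fil_cache.mutex);
    for (const space_info& s : fil_cache.space_list) {
        std::printf("%u\t%s\n", s.id, s.name.c_str());  // one row per tablespace
    }
}
int main()
{
    fil_cache.space_list.push_back({0, "innodb_system"});
    fil_cache.space_list.push_back({1, "test/t1"});
    fill_i_s_view();
}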
fil_iterate(), fil_tablespace_iterate(): Replace os_file_read()
with os_file_read_no_error_handling().
os_file_read_func(), os_file_read_no_error_handling_func():
Do not retry partial reads. Previously, there was an unlimited number
of retries. Because InnoDB extends both data and log files upfront,
partial reads should be impossible during normal operation.
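A minimal POSIX sketch of the new behaviour, not the actual os_file_read_func() code: one pread(), and a short read is reported as an error instead of being retried:
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
static bool read_exact(int fd, void* buf, size_t n, off_t offset)
{
    ssize_t got = pread(fd, buf, n, offset);
    if (got < 0) {
        std::perror("pread");
        return false;                   // hard I/O error
    }
    if (static_cast<size_t>(got) != n) {
        // Files are extended up front, so a partial read indicates a
        // truncated or corrupted file; report it instead of retrying.
        std::fprintf(stderr, "short read: %zd of %zu bytes\n", got, n);
        return false;
    }
    return true;
}
int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) return 1;
    char page[4096];
    bool ok = read_exact(fd, page, sizeof page, 0);
    close(fd);
    return ok ? 0 : 1;
}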
Initialize block.page.zip only once.
PageConverter::update(): Initialize m_page_zip_ptr
as late as possible.
(We should really remove it at some point.)
PageConverter::operator(): Refer to block->page.zip instead of
m_page_zip_ptr.
AbstractCallback::get_frame(): Define static. Refer
to block->page.zip.data directly.
fil_iterate(): Refer to block->page.zip.data directly.
fil_tablespace_iterate(): Initialize block.page.zip.data as soon
as possible.
Reduce unnecessary inter-module calls for IMPORT TABLESPACE.
Move some IMPORT-related code from fil0fil.cc to row0import.cc.
PageCallback: Remove. Make AbstractCallback the base class.
PageConverter: Define some member functions inline.
Assertion failure on UTF-8 columns
Problem:
=======
(1) Multi-byte characters are not considered during the prefix index
cluster optimization check. This leads to incorrect results being
fetched during read operations.
(2) A strict assertion in row_sel_field_store_in_mysql_format_func fails
when converting a prefix index record to the MySQL format.
Solution:
========
(1) Consider multi-byte characters during the prefix index
cluster optimization check.
(2) Relax the assertion in row_sel_field_store_in_mysql_format_func to allow
the conversion of prefix index records to the MySQL format.
The patch is taken from
1eee538087
Teach both Valgrind and AddressSanitizer (ASAN) to catch this bug:
mem_heap_t* heap = mem_heap_create(1024);
byte* p = reinterpret_cast<byte*>(heap) + sizeof(mem_heap_t);
*p = 123;
Overflows of the last allocation in a block will be caught too.
mem_heap_create_block(): poison newly allocated memory
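A standalone sketch of the poisoning idea (not mem_heap_create_block() itself), using AddressSanitizer's manual poisoning interface; the Valgrind client requests (VALGRIND_MAKE_MEM_NOACCESS and friends) work analogously and are omitted here:
#include <cstddef>
#include <cstdlib>
#if defined(__has_feature)
# if __has_feature(address_sanitizer)
#  define HAVE_ASAN 1
# endif
#elif defined(__SANITIZE_ADDRESS__)
# define HAVE_ASAN 1
#endif
#ifdef HAVE_ASAN
# include <sanitizer/asan_interface.h>
# define POISON(p, n)   ASAN_POISON_MEMORY_REGION(p, n)
# define UNPOISON(p, n) ASAN_UNPOISON_MEMORY_REGION(p, n)
#else
# define POISON(p, n)   ((void) (p), (void) (n))
# define UNPOISON(p, n) ((void) (p), (void) (n))
#endif
struct heap_block {
    unsigned char* base;    // start of the usable area
    size_t         size;    // total usable bytes
    size_t         used;    // bytes already handed out
};
static heap_block block_create(size_t size)
{
    heap_block b{static_cast<unsigned char*>(std::malloc(size)), size, 0};
    POISON(b.base, b.size);             // nothing has been allocated yet
    return b;
}
static void* block_alloc(heap_block& b, size_t n)
{
    if (b.used + n > b.size) return nullptr;
    void* p = b.base + b.used;
    UNPOISON(p, n);                     // only the returned bytes become usable
    b.used += n;
    return p;
}
int main()
{
    heap_block b = block_create(1024);
    unsigned char* p = static_cast<unsigned char*>(block_alloc(b, 16));
    p[0] = 123;         // fine
    // p[16] = 1;       // would be reported under -fsanitize=address
    UNPOISON(b.base, b.size);           // unpoison before returning to malloc
    std::free(b.base);
}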
lock_rec_trx_wait(): Merge to the only caller lock_prdt_rec_move().
lock_rec_reset_nth_bit(), lock_set_lock_and_trx_wait(),
lock_reset_lock_and_trx_wait(): Define in lock0priv.h.
By definition, c_lock->trx->lock.wait_lock==c_lock cannot hold.
That is, the owner transaction of a lock cannot be waiting for that
particular lock. It must have been waiting for some other lock.
Remove the dead code related to that. Also, test c_lock for NULLness
only once.
Refactor lock_grant(). With innodb_lock_schedule_algorithm=VATS
some callers were passing an incorrect parameter owns_trx_mutex
to lock_grant().
lock_grant_after_reset(): Refactored from lock_grant(), without
the call to lock_reset_lock_and_trx_wait().
lock_grant_have_trx_mutex(): A variant of lock_grant() where the
caller already holds the lock->trx->mutex. The normal lock_grant()
will acquire and release lock->trx->mutex.
lock_grant(): Define as a wrapper that will acquire lock->trx->mutex.
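The wrapper pattern can be sketched with std::mutex standing in for lock->trx->mutex; the real functions operate on InnoDB's lock_t and trx_t:
#include <cstdio>
#include <mutex>
struct trx_t  { std::mutex mutex; bool lock_wait; };
struct lock_t { trx_t* trx; };
// Does the work; the caller must already hold lock->trx->mutex.
static void lock_grant_have_trx_mutex(lock_t* lock)
{
    lock->trx->lock_wait = false;
    std::puts("lock granted");
}
// Thin wrapper that acquires and releases the transaction mutex.
static void lock_grant(lock_t* lock)
{
    std::lock_guard<std::mutex> guard(lock->trx->mutex);
    lock_grant_have_trx_mutex(lock);
}
int main()
{
    trx_t  trx;
    trx.lock_wait = true;
    lock_t lock{&trx};
    lock_grant(&lock);
}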
lock_table_create(): Move the WSREP parameter c_lock last,
and make it NULL by default, to avoid the need for a wrapper
function.
lock_table_enqueue_waiting(): Move the WSREP parameter c_lock last.
Revert the dead code for MySQL 5.7 multi-master replication (GCS),
also known as
WL#6835: InnoDB: GCS Replication: Deterministic Deadlock Handling
(High Prio Transactions in InnoDB).
Also, make innodb_lock_schedule_algorithm=VATS skip SPATIAL INDEX,
because the code does not seem to be compatible with spatial indexes.
Add FIXME comments to some SPATIAL INDEX locking code. It looks
like Galera write-set replication might not work with SPATIAL INDEX.
The merge only covered 10.1 up to
commit 4d248974e0.
Actually merge the changes up to
commit 0a534348c7.
Also, remove the unused InnoDB field trx_t::abort_type.
Contrary to what commit a54abf0175 claimed,
the caller of THD::awake() may actually hold the InnoDB lock_sys->mutex.
That commit introduced a deadlock of threads in the replication slave
when running the test rpl.rpl_parallel_optimistic_nobinlog.
lock_trx_handle_wait(): Expect the callers to acquire and release
lock_sys->mutex and trx->mutex.
innobase_kill_query(): Restore the logic for conditionally acquiring
and releasing the mutexes. THD::awake() can be called from inside
InnoDB while holding one or both mutexes, via thd_report_wait_for() and
via wsrep_innobase_kill_one_trx().
ha_innobase::unlock_row(): Use a relaxed version of the
trx_state_eq() debug assertion, because rr_unlock_row()
may be invoked after an error has been already reported
and the transaction has been rolled back.
By definition, c_lock->trx->lock.wait_lock==c_lock cannot hold.
That is, the owner transaction of a lock cannot be waiting for that
particular lock. It must have been waiting for some other lock.
Remove the dead code related to that. Also, test c_lock for NULLness
only once.
As this is the only moderately critical file that is fopen()ed for writing,
create an alternate path that uses open() and fdopen() on non-glibc platforms
that support O_CLOEXEC (the BSDs).
Tested on Linux (by modifying the GLIBC definition) to take this
alternate path:
$ cd /proc/23874
$ more fdinfo/71
pos: 0
flags: 02100001
mnt_id: 24
$ ls -la fd/71
l-wx------. 1 dan dan 64 Mar 14 13:30 fd/71 -> /dev/shm/var_auto_i7rl/mysqld.1/data/ib_buffer_pool.incomplete
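The alternate path can be sketched as follows, assuming POSIX open()/fdopen(); the helper name is illustrative, not the server's actual function:
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
static FILE* fopen_w_cloexec(const char* path)
{
#if defined(__GLIBC__)
    return std::fopen(path, "we");      // glibc: the 'e' flag sets FD_CLOEXEC
#elif defined(O_CLOEXEC)
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC, 0640);
    if (fd < 0) return nullptr;
    FILE* f = fdopen(fd, "w");
    if (!f) close(fd);                  // avoid leaking the descriptor
    return f;
#else
    return std::fopen(path, "w");       // no close-on-exec support available
#endif
}
int main()
{
    FILE* f = fopen_w_cloexec("ib_buffer_pool.incomplete");
    if (!f) return 1;
    std::fputs("dummy dump\n", f);
    return std::fclose(f);
}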
Problem:
=======
Mariabackup exits during the prepare phase if it encounters an
MLOG_INDEX_LOAD redo log record. The MLOG_INDEX_LOAD record
informs Mariabackup that the backup cannot be completed based
on the redo log scan, because some information was intentionally
omitted due to bulk index creation in ALTER TABLE.
Solution:
========
Detect the MLOG_INDEX_LOAD redo record during the backup phase and
exit Mariabackup with a proper error message.
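A standalone sketch of the detection; the record type value, the operation flag, and the error text are illustrative stand-ins:
#include <cstdio>
#include <cstdlib>
enum mlog_type { MLOG_WRITE = 1, MLOG_INDEX_LOAD = 61 };    // type values illustrative
enum operation { OPERATION_NORMAL, OPERATION_BACKUP };
static operation current_operation = OPERATION_BACKUP;
static void process_redo_record(mlog_type type, unsigned space_id)
{
    if (type == MLOG_INDEX_LOAD && current_operation == OPERATION_BACKUP) {
        std::fprintf(stderr,
            "error: tablespace %u was modified without redo logging "
            "(bulk index creation); the backup cannot be completed\n",
            space_id);
        std::exit(EXIT_FAILURE);
    }
    // ... otherwise copy or apply the record ...
}
int main()
{
    process_redo_record(MLOG_WRITE, 5);
    process_redo_record(MLOG_INDEX_LOAD, 5);    // terminates with the error
}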
buf_flush_page_cleaner_coordinator(): Signal the worker threads
to exit while waiting for them to exit. Apparently, signals are
sometimes lost, causing shutdown to occasionally hang when
multiple page cleaners (and buffer pool instances) are used,
that is, when innodb_buffer_pool_size is at least 1 GiB.
buf_flush_page_cleaner_close(): Merge with the only caller.
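The handshake can be sketched with std::condition_variable standing in for the InnoDB events: the coordinator keeps re-signalling on every timed wait, so a lost wakeup can no longer hang the shutdown:
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>
static std::mutex              m;
static std::condition_variable cv;
static bool                    shutting_down = false;
static int                     workers_alive = 4;
static void worker()
{
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return shutting_down; });
    --workers_alive;
    cv.notify_all();                    // tell the coordinator this worker is gone
}
static void coordinator()
{
    std::unique_lock<std::mutex> lk(m);
    shutting_down = true;
    while (workers_alive > 0) {
        cv.notify_all();                // keep signalling: a single notify may be lost
        cv.wait_for(lk, std::chrono::milliseconds(10));
    }
}
int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++) threads.emplace_back(worker);
    coordinator();
    for (std::thread& t : threads) t.join();
}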