- InnoDB fails to check the overflow buffer while applying
the operation to the table that was rebuilt. This is caused
by commit 3cef4f8f0f (MDEV-515).
The error was introduced by the MDEV-30165 fix in the following commit:
d13a57ae81
There is a logical error in lock_release_on_prepare_try():
  if (supremum_bit)
    lock_rec_unlock_supremum(*cell, lock);
  else
    lock_rec_dequeue_from_page(lock, false);
There can be other bits set in the lock's bitmap, and the lock type can
satisfy the release criteria, but the above logic releases only the
supremum bit of the lock.
The fix is to release the lock if it satisfies the release criteria, and
otherwise to unlock the supremum if the supremum bit is set.
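As a toy model (not InnoDB code; the bitmap, the release criterion and
the two release functions are stand-ins for the real lock_sys
structures), the corrected branching can be sketched like this:

  #include <bitset>
  #include <cstdio>

  // Stand-in for a record lock: one bit per heap number, bit 7 = supremum.
  struct lock_t {
    std::bitset<8> bitmap;
    bool releasable; // stand-in for the real release criteria
  };

  constexpr int SUPREMUM = 7;

  // Release the whole lock when it satisfies the release criteria;
  // otherwise only clear the supremum bit if it is set.
  void release_on_prepare_try(lock_t &lock, bool &dequeued)
  {
    if (lock.releasable)
      dequeued = true;              // lock_rec_dequeue_from_page()
    else if (lock.bitmap[SUPREMUM])
      lock.bitmap.reset(SUPREMUM);  // lock_rec_unlock_supremum()
  }

  int main()
  {
    lock_t lock{0b10000110, true};  // supremum bit plus two record bits
    bool dequeued = false;
    release_on_prepare_try(lock, dequeued);
    std::printf("dequeued=%d remaining bits=%zu\n",
                dequeued, lock.bitmap.count());
    return 0;
  }

Unlike the original branching, a lock whose other bitmap bits make it
eligible for release is now dequeued as a whole instead of losing only
its supremum bit.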
There is also a test for the case, which was reported by the QA team. I
placed it in separate files, because it requires a debug build.
Reviewed by: Marko Mäkelä
trx_t::set_skip_lock_inheritance() must be invoked at the very beginning
of lock_release_on_prepare(). Currently trx_t::set_skip_lock_inheritance()
is invoked at the end of lock_release_on_prepare(), when lock_sys and trx
are released, so there can be a window in which the locks have already
been released on prepare, but the "do not inherit gap locks" bit has not
yet been set, and a page split inherits a lock to the supremum record.
Also reset the supremum bit and rebuild the waiting queue when the XA
transaction is prepared.
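A skeletal sketch of the corrected ordering (placeholder type; the real
function also handles lock_sys latching and the actual lock release):

  // Placeholder transaction type; the real trx_t lives in trx0trx.h.
  struct trx_t {
    bool skip_lock_inheritance = false;
    void set_skip_lock_inheritance() { skip_lock_inheritance = true; }
  };

  void lock_release_on_prepare(trx_t &trx)
  {
    // Set the "do not inherit gap locks" bit before any lock is released,
    // so that a concurrent page split cannot inherit a just-released lock
    // to the supremum record.
    trx.set_skip_lock_inheritance();
    // ... release the transaction's record locks here ...
  }

  int main() { trx_t trx; lock_release_on_prepare(trx); return 0; }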
Reviewed by: Marko Mäkelä
fseg_free_extent(): After fsp_free_extent() succeeded, properly
mark the affected pages as freed. We failed to write FREE_PAGE records.
This bug was revealed or caused by
commit e938d7c18f (MDEV-32028).
lock_wait(): Never return the transient error code DB_LOCK_WAIT.
In commit 78a04a4c22 (MDEV-29869)
some assignments of trx->error_state = DB_SUCCESS were removed,
and it was possible that the field was left at its initial value
DB_LOCK_WAIT.
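To illustrate the invariant with a toy function (simplified error codes;
not the actual lock_wait() body):

  #include <cassert>

  enum dberr_t { DB_SUCCESS, DB_LOCK_WAIT, DB_DEADLOCK, DB_LOCK_WAIT_TIMEOUT };

  // DB_LOCK_WAIT is only a transient marker while the transaction is
  // suspended; it must be replaced by a definite result before returning.
  dberr_t finish_lock_wait(dberr_t error_state)
  {
    return error_state == DB_LOCK_WAIT ? DB_SUCCESS : error_state;
  }

  int main()
  {
    assert(finish_lock_wait(DB_LOCK_WAIT) == DB_SUCCESS);
    assert(finish_lock_wait(DB_DEADLOCK) == DB_DEADLOCK);
    return 0;
  }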
The test case for this is nondeterministic; without this fix, it
would only occasionally fail.
Reviewed by: Vladislav Lesin
innodb_monitor_validate(): Let item_val_str() allocate the memory
in THD, so that it will be available to innodb_monitor_update().
In this way, there is no need to allocate another buffer, and
no problem if the call to innodb_monitor_update() is skipped due
to an invalid value that is passed to another configuration parameter.
There are some other callers to st_mysql_sys_var::val_str()
that validate configuration parameters that are related to FULLTEXT INDEX,
but they will allocate memory by invoking thd_strmake().
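As a hedged sketch, a validator in that style could look like the
following (my_fts_option_validate is a hypothetical name; the plugin API
signatures and thd_strmake() are real):

  #include <mysql/plugin.h>

  // Copy the value into THD-owned memory with thd_strmake(), so the
  // string is still valid when the corresponding update function runs.
  static int my_fts_option_validate(MYSQL_THD thd,
                                    struct st_mysql_sys_var *var,
                                    void *save,
                                    struct st_mysql_value *value)
  {
    (void) var;
    char buff[64];
    int len = sizeof(buff);
    const char *str = value->val_str(value, buff, &len);
    if (!str)
      return 1; // reject NULL
    *static_cast<const char **>(save) = thd_strmake(thd, str, len);
    return *static_cast<const char **>(save) == nullptr;
  }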
The problem is that s390x is not using the default bzip library we use
on other platforms, which causes compressed string lengths to be different
from what the mtr tests expect.
Fixed by:
- Added have_normal_bzip.inc, which checks if compress() returns the
  expected length (a hedged sketch of such a check follows this list).
- Adjust the results to match the expected ones
  - main.func_compress.test & archive.archive
- Don't print lengths that depend on the compression library
  - mysqlbinlog compress tests & connect.zip
- Don't print DATA_LENGTH for SET column_compression_zlib_level=1
  - main.column_compression
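A hedged sketch of such a check in mtr syntax (the probe expression and
the expected length are illustrative; main.func_compress expects 21 for
this expression with the default library):

  # have_normal_bzip.inc (sketch): skip unless compress() produces the
  # length that the default library produces on most platforms.
  if (`SELECT LENGTH(COMPRESS(REPEAT('a',1000))) <> 21`)
  {
    --skip Test requires the default compression library
  }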
- Server aborts when the table doesn't have a referenced index.
This is caused by 5f09b53bdb (MDEV-31086).
While iterating the foreign key constraints, we fail to
consider that InnoDB doesn't have a referenced index for
them when foreign key checks are disabled.
Problem:
========
InnoDB fails to mark the page status as FREED while
freeing an extent of a segment. This behaviour affects
scrubbing: the freed pages are not written as all zeroes
in the file even though they are freed.
Solution:
========
InnoDB should mark the page status as FREED before
reinitializing the extent descriptor entry.
trx_t::commit_empty(): A special case of transaction "commit" when
the transaction was actually rolled back or the persistent undo log
is empty. In this case, we need to change the undo log header state to
TRX_UNDO_CACHED and move the undo log from rseg->undo_list to
rseg->undo_cached for fast reuse. Furthermore, unless this is the only
undo log record in the page, we will remove the record and rewind
TRX_UNDO_PAGE_START, TRX_UNDO_PAGE_FREE, TRX_UNDO_LAST_LOG.
We must also ensure that the system-wide transaction identifier
will be persisted up to this->id, so that there will not be warnings or
errors due to a PAGE_MAX_TRX_ID being too large. We might have modified
secondary index pages before being rolled back, and any changes of
PAGE_MAX_TRX_ID are never rolled back.
Even though it is not going to be written persistently anywhere,
we will invoke trx_sys.assign_new_trx_no(this), so that in the test
innodb.instant_alter everything will be purged as expected.
trx_t::write_serialisation_history(): Renamed from
trx_write_serialisation_history(). If there is no undo log,
invoke commit_empty().
trx_purge_add_undo_to_history(): Simplify an assertion and remove a
comment. This function will not be invoked on an empty undo log anymore.
trx_undo_header_create(): Add a debug assertion.
trx_undo_mem_create_at_db_start(): Remove a duplicated assignment.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
innodb_max_purge_lag_wait_update(): Return immediately if we are
in high_level_read_only mode.
srv_wake_purge_thread_if_not_active(): Relax a debug assertion.
If srv_read_only_mode holds, purge_sys.enabled() will not hold
and this function will do nothing.
trx_t::commit_in_memory(): Remove a redundant condition before
invoking srv_wake_purge_thread_if_not_active().
The test innodb.row_size_error_log_warnings_3 that was added in
commit 372b0e6355 (MDEV-20194)
failed to take into account the earlier adjustment in
commit cf574cf53b (MDEV-27634)
that is specific to many GNU/Linux distributions for the s390x.
The test innodb.alter_rename_files rather frequently hangs in
checkpoint_set_now. The test was removed in MariaDB Server 10.5
commit 37e7bde12a when the code that
it aimed to cover was simplified. Starting with MariaDB Server 10.5,
page flushing and log checkpointing are much simpler, handled
by the single buf_flush_page_cleaner() thread.
Let us remove the test to avoid occasional failures. We are not going
to fix the cause of the failure in MariaDB Server 10.4.
- InnoDB aborts when a column is being dropped from a table. This is
caused by 5f09b53bdb (MDEV-31086).
While iterating the altered table fields, we fail to consider
the dropped columns.
* invoke check_expression() for all vcol_info's in
mysql_prepare_create_table() to check for FK CASCADE
* also check for SET NULL and SET DEFAULT (see the example after this list)
* to check against existing FKs when a vcol is added in ALTER TABLE,
old FKs must be added to the new_key_list just like other indexes are
* check columns recursively, if vcol1 references vcol2,
flags of vcol2 must be taken into account
* remove check_table_name_processor(), put that logic under
check_vcol_func_processor() to avoid walking the tree twice
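For illustration, a hedged example of the kind of definition these
checks must reason about (whether this exact DDL is accepted depends on
the FK action checks described above):

  CREATE TABLE parent (id INT PRIMARY KEY);
  CREATE TABLE child (
    fk_col INT,
    vcol1 INT AS (fk_col),     -- vcol1 depends on the FK column
    vcol2 INT AS (vcol1 + 1),  -- vcol2 depends on vcol1, hence on fk_col
    FOREIGN KEY (fk_col) REFERENCES parent(id) ON DELETE SET NULL
  );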
This patch adds a second execution of SELECT queries for
"--ps-protocol".
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases, as shown below.
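For example, a test that must not run its SELECTs twice can wrap them
like this (t1 is a hypothetical table):

  --disable_ps2_protocol
  SELECT COUNT(*) FROM t1;
  --enable_ps2_protocol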
Before MDEV-24671, the wait time was derived from my_interval_timer() /
1000 (nanoseconds converted to microseconds, and not microseconds to
milliseconds like I must have assumed). The lock_sys.wait_time and
lock_sys.wait_time_max are already in milliseconds; we should not divide
them by 1000.
In MDEV-24738 the millisecond counts lock_sys.wait_time and
lock_sys.wait_time_max were changed to a 32-bit type. That would
overflow in 49.7 days. Keep using a 64-bit type for those millisecond
counters.
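(For reference, a 32-bit millisecond counter wraps after 2^32 ms =
4294967296/86400000 days ≈ 49.7 days.)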
Reviewed by: Marko Mäkelä
PROBLEM:
A deadlock was possible when a transaction tried to "upgrade" an already
held Record Lock to Next Key Lock.
SOLUTION:
This patch is based on observations that:
(1) a Next Key Lock is equivalent to Record Lock combined with Gap Lock
(2) a GAP Lock never has to wait for any other lock
In case we request a Next Key Lock, we check if we already own a Record
Lock of equal or stronger mode, and if so, then we change the requested
lock type to GAP Lock, which we either already have, or can be granted
immediately, as GAP locks don't conflict with any other lock types.
(We don't consider Insert Intention Locks a Gap Lock in the above
statements.)
The reason why we don't upgrade a Record Lock to a Next Key Lock is the
following.
Imagine a transaction which does something like this:
for each row {
  request lock in LOCK_X|LOCK_REC_NOT_GAP mode
  request lock in LOCK_S mode
}
Ideally, only two lock_t structs would be created for each page, one for
LOCK_X|LOCK_REC_NOT_GAP mode and one for LOCK_S mode, with their bitmaps
marking all records from the same page. But if we upgraded a Record Lock
to a Next Key Lock, the situation would look like this:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it upgrades it to X
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> creates a new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode (because we
// don't have any after we've upgraded!) and sets bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it upgrades it to X
...etc...etc..
Each iteration of the loop creates a new lock_t struct, and in the end we
have a lot (one for each record!) of LOCK_X locks, each with single bit
set in the bitmap. Soon we run out of space for lock_t structs.
If we create LOCK_GAP instead of upgrading the lock, the above scenario
works like the following:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it creates LOCK_S|LOCK_GAP only and sets bit for 1
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> reuses the lock_t for LOCK_X|LOCK_REC_NOT_GAP by setting bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it reuses LOCK_S|LOCK_GAP setting bit for 2
In the end we have just two locks per page, one for each mode:
LOCK_X|LOCK_REC_NOT_GAP and LOCK_S|LOCK_GAP.
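The bitmap reuse can be demonstrated with a self-contained toy model
(not InnoDB code; the 16-row page and the mode encoding are
illustrative):

  #include <bitset>
  #include <cstdio>
  #include <vector>

  struct Lock { unsigned mode; std::bitset<16> rows; };

  int main()
  {
    enum : unsigned { X_REC_NOT_GAP, S_GAP };
    std::vector<Lock> page_locks;
    // Request a lock, reusing an existing struct of the same mode if any.
    auto request = [&](unsigned mode, int row) {
      for (Lock &l : page_locks)
        if (l.mode == mode) { l.rows.set(row); return; }
      page_locks.push_back({mode, {}});
      page_locks.back().rows.set(row);
    };
    for (int row = 0; row < 16; row++) {
      request(X_REC_NOT_GAP, row); // LOCK_X|LOCK_REC_NOT_GAP
      request(S_GAP, row);         // LOCK_S|LOCK_GAP instead of an upgrade
    }
    // Stays at 2 regardless of the row count; with upgrading, each
    // iteration would have created a fresh LOCK_X struct instead.
    std::printf("lock structs per page: %zu\n", page_locks.size());
    return 0;
  }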
Another benefit of this solution is that it avoids the not-entirely
const-correct (and otherwise risky-looking) "upgrading".
The fix was ported from mysql/mysql-server@bfba840dfa and
mysql/mysql-server@75cefdb1f7.
Reviewed by: Marko Mäkelä
i_s_innodb_buffer_page_get_info(): Correct a condition.
After crash recovery, there may be some buffer pool pages in FREED state,
containing garbage (invalid data page contents). Let us ignore such pages
in the INFORMATION_SCHEMA output.
The test innodb.innodb_defragment_fill_factor will be removed, because
the queries that it is invoking on information_schema.innodb_buffer_page
would start to fail. The defragmentation feature was removed in
commit 7ca89af6f8 in MariaDB Server 11.1.
Tested by: Matthias Leich
The fix replaces waiting for the whole purge to finish with waiting
only for the purging of delete-marked records to finish.
Reviewed by: Marko Mäkelä
- When foreign_key_checks is disabled, allowing modification of a
column that is part of a foreign key constraint can lead to a later
refusal of TRUNCATE TABLE or OPTIMIZE TABLE. So it makes sense to
block the column modify operation when a foreign key is involved,
irrespective of the foreign_key_checks variable.
The correct way to modify the charset of a column when a foreign key is
involved:
SET foreign_key_checks=OFF;
ALTER TABLE child DROP FOREIGN KEY fk, MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE parent MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE child ADD CONSTRAINT FOREIGN KEY (m) REFERENCES PARENT(m);
SET foreign_key_checks=ON;
fk_check_column_changes(): Disregard the FOREIGN_KEY_CHECKS setting
while checking a column change against foreign key constraints. This
is a partial revert of commit 5f1f2fc0e4
and it changes the behaviour of the copy ALTER algorithm.
ha_innobase::prepare_inplace_alter_table(): Find the modified
column and check whether it is part of an existing or newly
added foreign key constraint.
- InnoDB throws an ASAN error while adding an index on a virtual column
of a system-versioned table. InnoDB wrongly assumes that the virtual
column's collation type changes and creates a new column with a
different character set. This leads to a failure while detaching the
column from indexes.