- InnoDB bulk insert fails to use the encryption buffer for encrypting
the temporary log file. Declare the members m_crypt_block and m_crypt_pfx
in row_merge_bulk_t to be used for encrypting the temporary file.
If a transaction does a bulk insert and disables foreign_key_checks,
then InnoDB fails with an assertion failure. InnoDB has a strict
assertion in innodb_prepare_commit_versioned() that check_foreigns
and unique_secondary_check should be enabled if the transaction
does a bulk insert.
The assumption that Field::is_null() is always false when
Field_fbt::val_native() or Field_fbt::to_fbt() is called
was wrong.
In some cases, e.g. when this helper Field method is called:
inline String *val_str(String *str, const uchar *new_ptr)
we temporarily reset Field::ptr to some alternative record buffer
but don't reset null_ptr, so null_ptr still points to null flags
of the original record. In such cases it's meaningless to test
the original Field::null_ptr when Field::ptr is temporarily reset:
they don't relate to each other.
Removing the DBUG_ASSERT.
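As a minimal self-contained illustration of that pattern (the struct and
the val_str_at() helper below are simplified stand-ins, not the real
Field/Field_fbt classes):

#include <cstddef>
#include <string>

// Simplified stand-ins, not the real Field/Field_fbt classes.
struct field_stub
{
  unsigned char *ptr;       // points into whichever record buffer is in use
  unsigned char *null_ptr;  // always points at the null flags of the ORIGINAL record
  unsigned char  null_bit;

  bool is_null() const { return *null_ptr & null_bit; }

  // Mirrors the val_str(String*, const uchar*) helper described above:
  // ptr is redirected to new_ptr, but null_ptr is left untouched.
  std::string val_str_at(unsigned char *new_ptr, std::size_t length)
  {
    unsigned char *save= ptr;
    ptr= new_ptr;                     // temporary redirection
    std::string s(reinterpret_cast<char*>(ptr), length);
    ptr= save;
    return s;
  }
  // While ptr was redirected, is_null() said nothing about the value
  // at new_ptr, so asserting !is_null() there would be meaningless.
};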
Starting with 10.5, InnoDB crash recovery tests seem to time out
more easily under Valgrind, which emulates multiple threads by
interleaving them in a single operating system thread.
These tests will still be covered by
AddressSanitizer and MemorySanitizer.
- InnoDB mistakenly identifies the non-unique FTS_DOC_ID index as
FTS_DOC_ID_INDEX while loading the table. dict_load_indexes()
should check whether the index is unique before assigning
fts_doc_id_index.
Before version 10, GCC would think that a right shift of an
unsigned char returns int. Let us explicitly cast that back,
to silence a bogus -Wconversion warning.
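For illustration only (a generic example, not code from the server;
the function name high_nibble is hypothetical):

// b >> 4 undergoes integral promotion to int; before GCC 10, converting
// the result back to a narrower unsigned type triggered a bogus
// -Wconversion warning unless the conversion is spelled out explicitly.
static inline unsigned char high_nibble(unsigned char b)
{
  return static_cast<unsigned char>(b >> 4);
}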
- In the case of a discarded tablespace, InnoDB can't read the root page
to assign n_core_null_bytes. A subsequent instant DDL operation fails
because of non-matching n_core_null_bytes.
fil_space_t::acquire_low(): Introduce a parameter that specifies
which flags should be avoided. At all times, referenced() must not
be incremented if the STOPPING flag is set. When fil_system.mutex
is not being held by the current thread, the reference must not be
incremented if the CLOSING flag is set (unless NEEDS_FSYNC is set,
in fil_space_t::flush()).
fil_space_t::acquire(): Invoke acquire_low(STOPPING | CLOSING).
In this way, the reference count cannot be incremented after
fil_space_t::try_to_close() invoked fil_space_t::set_closing().
If the CLOSING flag was set, we must retry acquire_low() after
acquiring fil_system.mutex.
fil_space_t::prepare_acquired(): Replaces prepare(true).
fil_space_t::acquire_and_prepare(): Replaces prepare().
This basically retries fil_space_t::acquire() after
acquiring fil_system.mutex.
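A minimal self-contained sketch of this acquisition protocol, assuming
simplified stand-in types (space_stub, a plain std::mutex in place of
fil_system.mutex, and flag bits packed into the same word as the
reference count); the real fil_space_t code differs in its details:

#include <atomic>
#include <mutex>

static std::mutex fil_system_mutex;   // stand-in for fil_system.mutex

enum : unsigned { CLOSING= 1U << 30, STOPPING= 1U << 31 };

struct space_stub
{
  std::atomic<unsigned> n{0};         // flag bits above, reference count in the low bits

  // Increment the reference count unless one of the avoided flags is set.
  bool acquire_low(unsigned avoid)
  {
    unsigned expected= n.load(std::memory_order_relaxed);
    while (!(expected & avoid))
      if (n.compare_exchange_weak(expected, expected + 1,
                                  std::memory_order_acquire,
                                  std::memory_order_relaxed))
        return true;
    return false;                     // an avoided flag was observed
  }

  bool acquire()
  {
    if (acquire_low(STOPPING | CLOSING))
      return true;                    // fast path: no mutex needed
    // CLOSING (or STOPPING) was observed: retry while holding
    // fil_system.mutex, which serializes this with set_closing().
    std::lock_guard<std::mutex> g(fil_system_mutex);
    return acquire_low(STOPPING);
  }
};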
- For a ROW_FORMAT=REDUNDANT InnoDB table, InnoDB fails to set flags2
while loading the table. This leads to an "Upgrade index name failure"
during an ALTER operation. InnoDB should set flags2 to FTS_AUX_HEX_NAME
when FTS is being loaded.
recv_sys_t::recover_deferred(): Hold the exclusive page latch until
the tablespace has been set up. Otherwise, the write of the page
may be lost because the tablespace does not exist yet. This race only
affects the recovery of the first page in a newly created tablespace.
This race condition was introduced in MDEV-24626.
Log MDL state transitions. Trace-friendly message
format. DBUG_LOCK_FILE replaced by thread-local storage.
Logged states legend:
  Seized    lock was acquired without waiting
  Waiting   lock is waiting
  Acquired  lock was acquired after waiting
  Released  lock was released
  Deadlock  lock was aborted due to deadlock
  Timeout   lock was aborted due to timeout > 0
  Nowait    lock was aborted due to zero timeout
  Killed    lock was aborted due to kill message
  OOM       can not acquire because out of memory
Usage:
mtr --mysqld=--debug=d,mdl,query:i:o,/tmp/mdl.log
Cleanup of garbage messages:
sed -i -re \
'/(mysql|performance_schema|sys|mtr)\// d; /MDL_BACKUP_/ d' \
/tmp/mdl.log
Ever since commit 9608773f75
the InnoDB persistent statistics are enabled on all InnoDB tables
by default. We must filter out any output that indicates that the
statistics tables are being internally accessed by InnoDB.
The issue was that flush_tables() didn't take an MDL lock on the cached
TABLE_SHARE before calling open_table() to do a HA_EXTRA_FLUSH call.
Most engines seem to have no issue with it, but apparently this conflicts
with InnoDB in 10.6 when using TRUNCATE.
Fixed by taking an MDL lock before trying to open the table in
flush_tables().
There is no test case, as it is hard to reproduce the scheduling that
causes the error. I did run the test case in MDEV-28897 to verify
that the bug is fixed.
The test was reported to fail sporadically with this diff:
--- mysql-test/main/information_schema_tables.result
+++ mysql-test/main/information_schema_tables.reject
@@ -21,6 +21,8 @@
disconnect con1;
connection default;
DROP VIEW IF EXISTS vv;
+Warnings:
+Note 4092 Unknown VIEW: 'test.vv'
in the "The originally reported non-deterministic test" part.
Disabling warnings around the DROP VIEW statement.
Spider mixes the comma join with other join types, and thus
ERROR 1054 occurs. This is a well-known issue caused by the higher
precedence of JOIN over the comma (,).
We can fix the problem simply by using JOINs instead of commas.
The bug is caused by a mechanism similar to that of MDEV-21027.
The function check_insert_or_replace_autoincrement() failed to open
all the partitions on INSERT SELECT statements, which resulted in
an assertion failure.
The heap-use-after-free is caused by the following mechanism:
* In the execution of FLUSH TABLE WITH READ LOCK, the function
spider_free_trx_conn() is called and the connections held by
SPIDER_TRX::trx_conn_hash are freed.
* Then, an instance of ha_spider still holds the freed connections
because they are also referenced from ha_spider::conns.
The ha_spider instance is kept in a lock structure until the
corresponding table is unlocked.
* Spider accesses ha_spider::conns on the implicit UNLOCK TABLE
issued by BEGIN.
In the first place, when the connections have been freed, it means
that there are really no remote tables locked by Spider.
Thus, there is no need for Spider to access ha_spider::conns on the
implicit UNLOCK TABLE.
We can fix the bug by removing the above mentioned access to
ha_spider::conns. We also modified spider_free_trx_conn() so that it
frees the connections only when no table is locked to reduce the
chance of another heap-use-after-free on ha_spider::conns.
prepare_inplace_alter_table_dict(): If the table will not be rebuilt,
preserve all of the original ROW_FORMAT, including the compressed
page size flags related to ROW_FORMAT=COMPRESSED.
btr_root_raise_and_insert(), btr_lift_page_up(),
rtr_page_split_and_insert(): Reset DB_FAIL from a failure to
copy records on a ROW_FORMAT=COMPRESSED page to DB_SUCCESS
before retrying.
This fixes a regression that was introduced by
commit 0b47c126e3 (MDEV-13542).
btr_root_raise_and_insert(): Remove a redundant condition.
btr_page_split_and_insert() will invoke itself recursively
if needed.
Now INSERT, UPDATE, and ALTER statements involving incompatible data type pairs, e.g.:
UPDATE t1 SET col_inet6=col_int;
INSERT INTO t1 (col_inet6) SELECT col_int FROM t2;
ALTER TABLE t1 MODIFY col_inet6 INT;
consistently return an error at statement preparation time:
ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
and abort the statement before starting to iterate over rows.
This error is the same as the one raised for queries like:
SELECT col_inet6 FROM t1 UNION SELECT col_int FROM t2;
SELECT COALESCE(col_inet6, col_int) FROM t1;
Before this change the error was caught only at execution time,
when Field_xxx::store_xxx() was called for the very first row.
The behavior was not consistent between various statements and could do
different things:
- abort the statement
- set a column to the data type default value (e.g. '::' for INET6)
- set a column to NULL
A typical old error was:
ERROR 22007: Incorrect inet6 value: '1' for column `test`.`t1`.`a` at row 1
EXCEPTION:
Note, there is an exception: a multi-row INSERT..VALUES, e.g.:
INSERT INTO t1 (col_a,col_b) VALUES (a1,b1),(a2,b2);
checks assignment compatibility at preparation time for the very first row only:
(col_a,col_b) vs (a1,b1)
Other rows are still checked at the execution time and return the old warnings
or errors in case of a failure. This is done because catching all rows at the
preparation time would change behavior significantly. So it still works
according to the STRICT_XXX_TABLES sql_mode flags and the table transaction ability.
This is too late to change this behavior in 10.7.
There is no a firm decision yet if a multi-row INSERT..VALUES
behavior will change in later versions.
On FreeBSD, tests run on persistent storage, and no asynchronous I/O
has been implemented. Warnings about 205-second waits on dict_sys.latch
may occur.