- A newly created InnoDB fulltext index has only one column and is
not associated with primary key fields during DDL.
init_change_cols() has a strict assertion that the number of fields
must be greater than the number of collation change columns.
This reverts part of commit 212994f704
(MDEV-28974) and implements a better fix that works in that special case
while avoiding other failures.
fil_name_process(): Do not rename the tablespace in deferred_spaces;
it already contains the latest name for the space id.
deferred_spaces::create(): In mariadb-backup --prepare,
replace the absolute data directory file path with a short name
relative to the backup directory and store it as the filename while
deferring tablespace file creation.
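The path shortening can be illustrated with a small standalone sketch;
std::filesystem stands in for the server's path handling and
backup_relative_name() is a hypothetical helper, not the actual
deferred_spaces::create() code:

    // Illustrative only: reduce an absolute data file path to the short
    // "database/table.ibd" name stored while tablespace creation is
    // deferred during --prepare.
    #include <filesystem>
    #include <iostream>

    namespace fs = std::filesystem;

    static fs::path backup_relative_name(const fs::path &abs_file)
    {
      // /var/lib/mysql/db1/t1.ibd -> db1/t1.ibd
      return abs_file.parent_path().filename() / abs_file.filename();
    }

    int main()
    {
      std::cout << backup_relative_name("/var/lib/mysql/db1/t1.ibd") << '\n';
    }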
The index root page contains the fields BTR_SEG_TOP and BTR_SEG_LEAF
which keep track of allocated pages in the index tree. These fields
are normally protected by an Update latch, so that concurrent read
access to other parts of the page will be possible.
When the index root page is already exclusively latched in the
mini-transaction, we must not try to acquire a lower-grade Update latch.
In fact, when the root page is already X or U latched in the
mini-transaction, there is no point to acquire another latch.
Moreover, after a U latch was acquired on top of an X-latch,
mtr_t::defer_drop_ahi() would trigger an assertion failure or
lock corruption in block->page.lock.u_x_upgrade() because X locks
already exist on the block.
This problem may have been introduced in
commit 03ca6495df (MDEV-24142).
btr_page_alloc_low(), btr_page_free(): Initially buffer-fix the root page.
If it is already U or X latched, release the buffer-fix. Else, upgrade
the buffer-fix to a U latch.
mtr_t::u_lock_register(): Upgrade a buffer-fix to U latch.
mtr_t::have_u_or_x_latch(): Check if U or X latches are already
registered in the mini-transaction.
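The latching rule can be summarized in a standalone sketch; mtr_t,
have_u_or_x_latch() and u_lock_register() are the names from this
commit, while MiniTransaction and the Latch enum below are purely
illustrative:

    // Standalone sketch of the rule described above; not InnoDB code.
    #include <cstdint>
    #include <unordered_map>

    enum class Latch { U, X };

    struct MiniTransaction {              // illustrative stand-in for mtr_t
      // page number -> latch already registered in this mini-transaction
      std::unordered_map<uint32_t, Latch> latches;

      // analogue of mtr_t::have_u_or_x_latch()
      bool have_u_or_x_latch(uint32_t page) const {
        return latches.count(page) != 0;
      }

      // analogue of mtr_t::u_lock_register(): upgrade a buffer-fix to U latch
      void u_lock_register(uint32_t page) { latches[page] = Latch::U; }
    };

    // Rule used by btr_page_alloc_low()/btr_page_free(): buffer-fix the
    // root first; if a U or X latch is already registered, just release
    // the buffer-fix, otherwise upgrade the buffer-fix to a U latch.
    void latch_root(MiniTransaction &mtr, uint32_t root_page) {
      if (mtr.have_u_or_x_latch(root_page))
        return;                   // root already protected by this mtr
      mtr.u_lock_register(root_page);
    }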
Reason:
=======
Race condition between btr_search_drop_hash_index() and
btr_search_lazy_free(). One thread resizes the buffer pool,
clears the ahi on all pages in the buffer pool, and frees the
index and table while removing the last reference. At the same time,
another thread accesses index->heap in btr_search_drop_hash_index().
Solution:
=========
Acquire the respective ahi latch before checking index->freed().
btr_search_drop_page_hash_index(): Added a new parameter to indicate
that ahi entries should be dropped only if the index is marked as freed.
btr_search_check_marked_free_index(): Acquire all ahi latches and
return true if the index was freed.
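The fix pattern, taking the latch before reading the freed flag, can be
sketched in isolation; std::shared_mutex stands in for an ahi partition
latch, and the real btr_search_check_marked_free_index() acquires all
partition latches rather than the single one shown here:

    // Standalone sketch; not the actual btr0sea code.
    #include <shared_mutex>

    struct Index {
      bool freed_flag = false;
      bool freed() const { return freed_flag; }
    };

    struct AhiPartition {
      std::shared_mutex latch;            // stands in for the ahi latch
    };

    // Hold the ahi latch while reading index->freed(), so the check cannot
    // race with btr_search_lazy_free() freeing the index concurrently.
    bool check_marked_free(AhiPartition &part, const Index &index) {
      std::shared_lock<std::shared_mutex> guard(part.latch);
      return index.freed();
    }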
TIMESTAMP columns were compared as strings in ALL/ANY comparisons,
which did not work well near a DST time change.
Change the ALL/ANY comparison to use the "Native" representation for
comparing TIMESTAMP columns, like the simple comparison does.
Ever since commit 09177eadc3
the test innodb.row_format_redundant cannot work when the
data file was created with innodb_checksum_algorithm=full_crc32.
Backport of 10.5 commit a9d0bb12e6
As pointed out in MDEV-29308, there are issues with the code as is.
MariaDB is built as C++11 / C99. aligned_alloc() is not guaranteed
to be exposed when building in any mode other than C++17 / C11.
The other *BSDs have their stdlib.h header expose the function
with C++11 anyway, but the issue exists in the C99 code too; the
build just does not use -Werror. Linux globally defines _GNU_SOURCE,
hiding the issue as well.
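On POSIX systems a portable wrapper for this situation might look like
the sketch below; my_aligned_alloc() is a hypothetical name and
posix_memalign() serves as the fallback when aligned_alloc() is not
exposed by the language mode:

    // Sketch only: aligned_alloc() is guaranteed by C11/C++17, so older
    // language modes may need to fall back to posix_memalign().
    #include <cstdlib>

    static void *my_aligned_alloc(std::size_t alignment, std::size_t size)
    {
    #if defined(__cplusplus) && __cplusplus >= 201703L
      // size must be a multiple of alignment for aligned_alloc()
      return std::aligned_alloc(alignment, size);
    #else
      void *ptr = nullptr;
      // alignment must be a power of two and a multiple of sizeof(void*)
      return posix_memalign(&ptr, alignment, size) == 0 ? ptr : nullptr;
    #endif
    }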
Reason:
======
This issue is caused by a race condition between fulltext DDL and
the purge thread. DDL signals the purge thread to stop processing
new undo log records and waits for the FTS table undo log records
that are already being processed to finish.
But in dict_acquire_mdl_shared(), InnoDB releases all InnoDB
table related locks before acquiring the MDL. At the same time,
DDL assumes that there are no purge threads working on the fts table.
There is a possibility that the purge thread can skip processing
valid undo log records if it checks purge_sys.must_wait_FTS() twice
in different places.
Solution:
==========
Add the purge_sys.must_wait_FTS() check in dict_acquire_mdl_shared()
so that the purge thread does not process undo log records when it
must wait.
dict_open_table_on_id(): Return -1 if the purge thread has to wait.
dict_acquire_mdl_shared(): Added a new parameter to indicate that the
purge thread is invoking the function; return -1 if the purge thread
has to wait.
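The double check can be sketched outside InnoDB as follows;
purge_sys.must_wait_FTS() is the name from this commit, while PurgeSys
and acquire_mdl_for_purge() below are illustrative only:

    // Standalone sketch: after the purge thread has dropped its table
    // locks to acquire the MDL, it must re-check must_wait_FTS() and
    // back off (return -1) instead of continuing.
    #include <atomic>

    struct PurgeSys {
      std::atomic<bool> stop_fts{false};      // set by the DDL thread
      bool must_wait_FTS() const { return stop_fts.load(); }
    };

    // Rough analogue of the added check in dict_acquire_mdl_shared()
    // when it is invoked by a purge thread.
    int acquire_mdl_for_purge(PurgeSys &purge_sys)
    {
      // ... table locks were released here in order to take the MDL ...
      if (purge_sys.must_wait_FTS())
        return -1;   // dict_open_table_on_id() propagates the wait
      // ... proceed with the MDL acquired ...
      return 0;
    }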
Even though commit b817afaa1c passed
the test mariabackup.compress_qpress, that test turned out to be
too small to reveal one more problem that had previously been prevented
by the existence of ctrl_mutex. I did not realize that there can be
multiple concurrent callers to compress_write(). One of them is the
log copying thread; further callers are data file copying threads
(default: --parallel=1).
By default, there is only one compression worker thread
(--compress-threads=1).
compress_write(): Fix a race condition between threads that would
use the same worker thread object. Make thd->data_avail contain the
thread identifier of the submitter, and add thd->avail_cond to
notify other compress_write() threads that are waiting for a slot.
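The slot hand-off can be sketched with standard threading primitives;
data_avail and avail_cond are the member names mentioned above, but the
comp_thread structure and submit_chunk() function here are simplified
illustrations, not the mariadb-backup code:

    // Standalone sketch: data_avail holds the id of the submitting thread
    // (a default-constructed id means "free"), and avail_cond lets other
    // submitters wait until the worker slot becomes free again.
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    struct comp_thread {
      std::mutex data_mutex;
      std::condition_variable avail_cond;
      std::thread::id data_avail{};          // submitter id, default = free
    };

    void submit_chunk(comp_thread &thd)
    {
      std::unique_lock<std::mutex> lock(thd.data_mutex);
      // wait until no other submitter owns this worker slot
      thd.avail_cond.wait(lock,
                          [&] { return thd.data_avail == std::thread::id(); });
      thd.data_avail = std::this_thread::get_id();  // claim the slot
      // ... hand the buffer to the worker and wait for completion ...
      thd.data_avail = std::thread::id();           // release the slot
      thd.avail_cond.notify_one();                  // wake another submitter
    }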
In MySQL 5.7, rollback segments 1 to 32 are used for temporary tables,
which is an unnecessary file format change from MySQL 5.6.
This format change was avoided in MariaDB Server by
commit 124bae082b (MDEV-12289).
An upgrade from MySQL 5.7 would crash due to dereferencing a null pointer,
which is a regression due to
commit 0b47c126e3 (MDEV-13542).
trx_rseg_t::get(): Return nullptr if no tablespace exists. This is where
the upgrade would crash.
trx_rseg_mem_restore(): Return DB_TABLESPACE_NOT_FOUND if the
undo tablespace does not exist. This is likely dead code.
- While creating a new InnoDB segment, InnoDB allocates an extent
before allocating the inode or the first page, even though pages
are still available in the fragment segment. This patch reserves
the extent only when InnoDB has run out of fragment pages in the
tablespace.
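The changed decision amounts to the following check; Tablespace and
must_reserve_extent() are illustrative names, not the InnoDB ones:

    // Sketch only: reserve a full extent for the new segment only when
    // the tablespace has no free fragment pages left for the inode and
    // the first pages.
    #include <cstdint>

    struct Tablespace {
      uint32_t free_frag_pages;
    };

    bool must_reserve_extent(const Tablespace &space)
    {
      return space.free_frag_pages == 0;
    }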
ha_spider::field_exchange() returns NULL and that results in a crash
or an assertion failure in spider_db_open_item_ident().
In the first place, there seems to be no need to call field_exchange()
for printing an identifier (column name with alias). Thus, we simply
remove the call.
Analysis: The result of the JSON functions
(JSON_ARRAY[OBJECT|ARRAY_APPEND|ARRAY_INSERT|INSERT|SET|REPLACE]) is
truncated when the function is called on a LONGTEXT field. The
overflow occurs when computing the result length, because the
LONGTEXT maximum length is the same as the uint32 maximum length,
which led to a wrong result length.
Fix: Add static_cast<ulonglong> to avoid the uint32 overflow and fix
the arguments used.
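The overflow itself is easy to reproduce in isolation; the sketch below
uses uint64_t in place of ulonglong and made-up lengths:

    // Standalone illustration: with a LONGTEXT argument the length can
    // approach UINT32_MAX, so adding lengths in 32-bit arithmetic wraps
    // around. Widening one operand first keeps the result length correct.
    #include <cstdint>
    #include <iostream>

    int main()
    {
      uint32_t json_len  = UINT32_MAX - 10;  // close to the LONGTEXT maximum
      uint32_t extra_len = 100;              // appended value, separators, ...

      uint32_t wrapped = json_len + extra_len;                        // wraps
      uint64_t correct = static_cast<uint64_t>(json_len) + extra_len; // 64-bit

      std::cout << wrapped << " vs " << correct << '\n';
    }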
Analysis: JSON_VALUE() returns "null" string instead of NULL pointer.
Fix: When the type is JSON_VALUE_NULL (which is also a scalar), set
null_value to true and return 0 instead of returning a string.
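The idea of the fix can be shown with a small sketch; JsonScalar and
json_value() below are illustrative types, not the server's Item
classes:

    // Sketch only: report SQL NULL through a null flag instead of
    // returning the four-character string "null".
    #include <string>

    enum class JsonType { STRING, NUMBER, BOOLEAN, NULL_VALUE };

    struct JsonScalar {
      JsonType type;
      std::string text;
    };

    // returns nullptr (with *is_null = true) for the JSON null scalar
    const std::string *json_value(const JsonScalar &v, bool *is_null)
    {
      if (v.type == JsonType::NULL_VALUE) {
        *is_null = true;   // corresponds to setting null_value and returning 0
        return nullptr;
      }
      *is_null = false;
      return &v.text;
    }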
The mysqld_safe script was using bad grep options.
The line that was fixed was likely meant to be 2 separate grep commands,
piped into each other, with each one removing any lines that matched.
The `-E` option was unneeded, as the command is not using regex.