In the function test_if_cheaper_ordering() we decide whether using an index is better than
using filesort for ordering. If we choose to do range access, then in test_quick_select() we
should make sure that the cost of a table scan is set to DBL_MAX so that it is not picked.
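A minimal sketch of the idea (the names are hypothetical, not the actual optimizer code): once range access has been chosen, the cost reported for a full table scan is forced to DBL_MAX so that no later cost comparison can prefer the scan for ordering.

    #include <cfloat>

    struct access_costs             // hypothetical container for candidate costs
    {
      double range_cost;
      double table_scan_cost;
      bool   range_chosen;
    };

    static void prefer_range_over_scan(access_costs &c)
    {
      if (c.range_chosen)
        c.table_scan_cost= DBL_MAX;  // a table scan can now never win the comparison
    }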
- Enable the test `sphinx.sphinx`, which was disabled by MDEV-10986,
commit ee0094d2fd
- Add a test case to `sphinx.sphinx` that uses the host localhost instead of `127.0.0.1`
- Add the result file for the single test
There is an annoying bug that prevents a Sphinx table from connecting to searchd using a host name.
As a result, the example table in the documentation https://mariadb.com/kb/en/library/about-sphinxse/#basic-usage, which points to "localhost", does not actually work.
After some investigation I found two errors. The first is a wrong check after the getaddrinfo() call. The second is a wrong usage of the struct it returns.
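A minimal sketch of the corrected pattern (illustrative only, not the actual ha_sphinx.cc code): getaddrinfo() returns 0 on success and a nonzero error code on failure, and the resolved address has to be taken from the returned addrinfo list rather than reinterpreted as a hostent-style structure.

    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>
    #include <cstring>

    static int connect_to_searchd(const char *host, const char *port)
    {
      struct addrinfo hints, *res= NULL;
      memset(&hints, 0, sizeof hints);
      hints.ai_family=   AF_UNSPEC;        // works for both IPv4 and IPv6
      hints.ai_socktype= SOCK_STREAM;

      if (getaddrinfo(host, port, &hints, &res) != 0)  // 0 means success
        return -1;

      int fd= socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0)
      {
        close(fd);
        fd= -1;
      }
      freeaddrinfo(res);
      return fd;
    }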
page_mem_free(): Define in the same file as the only caller
page_cur_delete_rec().
page_dir_slot_set_rec(): Add const qualifier to a parameter.
page_dir_delete_slot(): Merge with the only caller page_dir_balance_slot().
page_dir_add_slot(): Merge with the only caller page_dir_split_slot().
page_dir_split_slot(), page_dir_balance_slot(): Define in the
same compilation unit as the callers, and simplify the code.
Problem:
=======
While dropping a fulltext index, InnoDB waits for fts_optimize_remove_table()
while holding dict_sys->mutex and dict_operation_lock, even though the
table id is not present in the queue. Meanwhile, fts_optimize_thread waits
for dict_sys->mutex in order to process an unrelated table id from its slot.
Solution:
========
Whenever a table is added to fts_optimize_wq, update the fts_status
of the in-memory fts subsystem to TABLE_IN_QUEUE. Whenever drop index
wants to remove the table from the queue, it can check fts_status
to decide whether it should send MSG_DELETE_TABLE to the queue.
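A hedged sketch of the intended protocol (the struct layout and helpers are simplified, not the real fts0opt.cc code): the flag is set when the table is queued and consulted before sending the delete message, so the drop path never waits on the optimize thread for a table that was never queued.

    enum fts_status_t { TABLE_NOT_IN_QUEUE, TABLE_IN_QUEUE };  // simplified

    struct fts_t                     // hypothetical in-memory fts subsystem state
    {
      fts_status_t fts_status;
    };

    static void fts_optimize_add_table(fts_t *fts)
    {
      fts->fts_status= TABLE_IN_QUEUE;   // remember that a slot exists
      // ... enqueue the table in fts_optimize_wq ...
    }

    static void fts_optimize_remove_table(fts_t *fts)
    {
      if (fts->fts_status != TABLE_IN_QUEUE)
        return;                          // nothing queued: no need to wait
      // ... send MSG_DELETE_TABLE and wait for acknowledgement ...
    }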
Removed the following functions because they are all dead code:
dict_table_wait_for_bg_threads_to_exit(),
fts_wait_for_background_thread_to_start(), fts_start_shutdown(), fts_shutdown().
Problem:
=======
A transaction is left with a nonempty table_locks list. This leads to a
state where table_locks is not a subset of trx_locks, which is assumed
elsewhere. The problem is that lock_wait_timeout_thread() does not remove
the table lock from the transaction's table_locks.
Solution:
========
In lock_wait_timeout_thread(), also remove the lock from the transaction's
table lock vector.
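A minimal illustration of the restored invariant (the container names are simplified, not the real lock_sys data structures): when the waiting table lock is cancelled on timeout it must be removed from both lists, otherwise table_locks keeps an entry that trx_locks no longer has.

    #include <vector>
    #include <algorithm>

    struct lock_t;                           // opaque here

    struct trx_locks_t                       // simplified view of trx->lock
    {
      std::vector<lock_t*> trx_locks;        // every lock of the transaction
      std::vector<lock_t*> table_locks;      // must stay a subset of trx_locks
    };

    static void cancel_waiting_table_lock(trx_locks_t &trx, lock_t *lock)
    {
      auto erase_from= [lock](std::vector<lock_t*> &v)
      {
        v.erase(std::remove(v.begin(), v.end(), lock), v.end());
      };
      erase_from(trx.trx_locks);
      erase_from(trx.table_locks);           // the step that was missing
    }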
The failure is not reproducible, but the issue seems to be the same as in
MDEV-20490 and rocksdb.ttl_primary_read_filtering: a compaction caused
by DROP TABLE lags behind and compacts away the expired rows that the next
test depends on. Fix this in the same way.
In mysql-server/commit@f46329044f
the InnoDB function btr_cur_open_at_rnd_pos() was corrected so that
it would return a status that indicates whether the cursor was
successfully positioned. But this change was not correctly merged into
MariaDB in commit 2e814d4702.
btr_cur_open_at_rnd_pos(): In the code path that was introduced in
MDEV-8588, properly return failure status.
No deterministic test case was found for this failure.
It was caught after removing the function
page_copy_rec_list_end_to_created_page() in a development branch.
As a result, the fill factor of index trees would improve, and
supposedly, so would the probability of btr_cur_open_at_rnd_pos()
reaching the intentionally corrupted page in the test
innodb.leaf_page_corrupted_during_recovery.
The wrong return value would cause
btr_estimate_number_of_different_key_vals() to wrongly invoke
btr_rec_get_externally_stored_len() on a non-leaf page and
trigger an assertion failure at the start of that function.
- During trx_undo_report_rename(), InnoDB can fail to write the undo log
record if it does not fit in the undo page. In that case, InnoDB adds one
more undo log page and retries writing the rename undo log, as illustrated
in the sketch below. But the assertion is wrong: it does not allow the
write to fail even once.
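A hedged illustration of the intended retry pattern (the helper names are hypothetical; only the shape of the assertion matters): the first attempt is allowed to fail when the record does not fit, so the assertion should only guard against looping forever, not against that first failure.

    #include <cassert>

    // hypothetical helpers standing in for the real undo-log functions
    bool write_rename_undo_record();
    void add_undo_log_page();

    static void report_rename_with_retry()
    {
      for (int attempt= 0; ; attempt++)
      {
        if (write_rename_undo_record())   // record written, done
          break;
        // The first attempt may legitimately fail when the record does not
        // fit into the current undo page; only repeated failures are a bug.
        assert(attempt == 0);
        add_undo_log_page();
      }
    }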
Files for PAGE_COMPRESSED tables that were created with
innodb_checksum_algorithm=full_crc32 store the value of
innodb_compression_algorithm at the time of file creation.
The server-wide setting of innodb_compression_algorithm
may be changed after file creation. We must ignore any mismatch
when opening a data file, and for writes, we must use the
choice of algorithm that is stored in the file.
fil_space_t::is_flags_full_crc32_equal(): Ignore the
innodb_compression_algorithm but do compare innodb_page_size.
fil_space_t::is_flags_non_full_crc32_equal(): Ignore the
innodb_compression_algorithm.
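A minimal sketch of the comparison (the bit mask is made up for illustration; the real flag layout lives in fil_space_t): the compression-algorithm bits are masked out before comparing, while everything else, including the page-size bits, still has to match.

    #include <cstdint>

    // hypothetical bit mask, not the real tablespace flag layout
    static const uint32_t COMPRESSION_ALGO_MASK= 0x000000e0;

    static bool flags_full_crc32_equal(uint32_t expected, uint32_t actual)
    {
      // ignore the innodb_compression_algorithm bits; everything else,
      // including the page-size bits, must still be equal
      return (expected & ~COMPRESSION_ALGO_MASK)
          == (actual   & ~COMPRESSION_ALGO_MASK);
    }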
The setting innodb_change_buffering_debug=2 was supposed to inject
a crash during change buffer merge. There is no public test for
that functionality, and even if there were, it would be better
to use DEBUG_SYNC to halt the thread that does change buffer merge,
force a redo log flush from another thread, and finally kill the
server externally.
Remove debug output,
remove overriding of the Windows C runtime flags (linker warning),
and do not add code that depends on restsdk if the library is not going
to be linked.
freaking Connect
Support Create_time and Update_time for MyRocks tables.
- Create_time is stored in MyRocks' internal data dictionary.
- Update_time is in-memory only (like in InnoDB).