fil_page_decompress(): Remove a rather useless debug check.
We should have test coverage for reading page_compressed pages
from files, either due to buffer pool page eviction or due to
server restarts.
A similar check was removed from fil_space_encrypt() in
commit 0b36c27e0c (MDEV-20307).
The usage message for the innodb_compression_algorithm system variable
did not list snappy, which was added as an optional compression algorithm
in MariaDB 10.1.3 and might actually work since
commit 90c52e5291 (MDEV-12615)
in MariaDB 10.1.24.
Unfortunately, we will also include unavailable compression algorithms in
the list, because ENUM parameters allow numeric values, and we do not want
innodb_compression_algorithm=3 to change meaning depending on how the
source code was compiled.
InnoDB only reserves 13 bits for the heap number in the record header,
limiting the heap number to be at most 8191. But, when using
innodb_page_size=64k and secondary index records of 7 bytes each,
it is possible to exceed the maximum heap number.
btr_cur_optimistic_insert(): Let the operation fail if the
maximum number of records would be exceeded.
page_mem_alloc_heap(): Move to the same compilation unit with the
only caller, and let the operation fail if the maximum heap number
has been allocated already.
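A minimal sketch of the guard, using a hypothetical helper rather than the
actual page_mem_alloc_heap() code:
```cpp
#include <cstdint>

static const uint16_t MAX_HEAP_NO= 8191;   /* 13 bits: 2^13 - 1 */

/* n_heap: heap entries already allocated on the page; the next record would
   receive heap number n_heap.  Returns false when the insert has to fail
   instead of exceeding the 13-bit heap number field. */
static bool can_alloc_heap_no(uint16_t n_heap)
{
  return n_heap <= MAX_HEAP_NO;
}
```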
The debug assertion is bogus, and we had removed it in
commit b1ab211dee (MDEV-15053)
in the MariaDB Server 10.5 branch.
For a small data file, fil_space_extend_must_retry() would always
allocate a minimum size of 4*innodb_page_size.
It is possible that random read-ahead will be triggered for a smaller file
than this. In the observed case, the read-ahead was triggered for a 6-page
file that used ROW_FORMAT=COMPRESSED with an 8KiB page size. So, the
desired file size was 49152 bytes (6 × 8 KiB), but the actual size was
65536 bytes (4 × 16 KiB, the minimum allocation).
In 10.3, DBUG_ASSERT() may expand to something that includes
__builtin_expect(), which expects integer arguments, not pointers.
To avoid any compiler warnings, let us use an explicit rather than
implicit comparison to the null pointer.
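As a rough illustration (a simplified assertion macro, not the MariaDB
DBUG_ASSERT definition), the difference between the two forms looks like this:
```cpp
/* Simplified stand-in for an assertion macro that routes its condition
   through __builtin_expect(), which takes long integer arguments. */
#include <cassert>
#include <cstddef>

#define MY_ASSERT(A) assert(__builtin_expect((A), 1))

void check(const char *ptr)
{
  /* MY_ASSERT(ptr);            relies on an implicit pointer-to-integer
                                conversion and may draw a compiler diagnostic */
  MY_ASSERT(ptr != NULL);    /* explicit comparison to the null pointer */
}
```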
innobase_pk_order_preserved(): Treat an added AUTO_INCREMENT
column in the same way as an added existing column.
In either case, the column values are not guaranteed to
be constant, and thus the ordering may change if such a column
is added before any existing PRIMARY KEY columns.
prepare_inplace_alter_table_dict(): Initialize
dict_table_t::persistent_autoinc before invoking
innobase_pk_order_preserved().
fil_system_t::keyrotate_next(): If space && space->is_in_rotation_list
does not hold, iterate from the start of the list.
In debug builds, we would typically have hit SIGSEGV because the
iterator would have wrapped a null pointer. It might also be that
we are dereferencing a stale pointer.
There is no test case, because the encryption is very nondeterministic
in nature, due to the use of background threads.
This scenario can be hit by setting the following:
SET GLOBAL innodb_encryption_threads=5;
SET GLOBAL innodb_encryption_rotate_key_age=0;
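A minimal sketch of the iteration restart, with hypothetical simplified
types (the real fil_system_t keeps an intrusive list and a mutex, which are
omitted here):
```cpp
#include <algorithm>
#include <list>

struct space_t { bool is_in_rotation_list; };

/* Return the element after "space", or the first element when "space" is
   null or no longer attached to the rotation list; null at the end. */
static space_t *keyrotate_next_sketch(std::list<space_t*> &rotation_list,
                                      space_t *space)
{
  std::list<space_t*>::iterator it= rotation_list.begin();

  if (space && space->is_in_rotation_list)
  {
    /* Continue from the element after the current one. */
    it= std::find(rotation_list.begin(), rotation_list.end(), space);
    if (it != rotation_list.end())
      ++it;
  }
  /* else: the space was detached (or never attached); start from the head
     instead of dereferencing a stale or null iterator. */

  return it == rotation_list.end() ? nullptr : *it;
}
```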
The problem is caused by the fact that adding the
--defaults-group-suffix option to fix MDEV-18863 causes
mysqld to read all options from the appropriate sections
of the config file, including options specific to mysqld_multi.
Reading unknown options (which are not supported by mysqld)
causes mysqld to terminate with an error.
However, the MDEV-18863 problem has been completely fixed
by passing options on the command line, and now there is no
need to specify the --defaults-group-suffix option (we just
need to give priority to options passed through the command
line, so as not to break MDEV-18863).
Some tests relied on the fact that DATETIME->DATE conversion always
produces a truncation (with a warning). This is not the case when the SQL
statement is executed at the current time '00:00:00' sharp.
Adding new SET TIMESTAMP statements to make sure the time is not '00:00:00'.
The test encryption.create_or_replace would occasionally fail,
because some fil_space_t::n_pending_ops would never be decremented.
fil_crypt_find_space_to_rotate(): If rotate_thread_t::should_shutdown()
holds due to innodb_encryption_threads having been reduced, do
release the reference.
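The shape of the leak and of the fix, as a sketch with hypothetical
simplified types (not the actual fil_crypt_find_space_to_rotate() signature):
```cpp
#include <atomic>

struct space_t { std::atomic<unsigned> n_pending_ops{0}; };

struct rotate_thread_t
{
  bool shutdown_requested= false;
  bool should_shutdown() const { return shutdown_requested; }
};

static space_t *acquire(space_t *s) { s->n_pending_ops++; return s; }  /* take a reference */
static void release(space_t *s) { s->n_pending_ops--; }                /* drop the reference */

static bool find_space_to_rotate_sketch(rotate_thread_t *state, space_t *candidate)
{
  space_t *space= acquire(candidate);
  if (state->should_shutdown())
  {
    release(space);   /* the missing step: do not leak the reference on early return */
    return false;
  }
  /* ... otherwise rotate keys in this tablespace and release the reference later ... */
  release(space);
  return true;
}
```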
fil_space_remove_from_keyrotation(), fil_space_next(): Declare the
functions static, simplify a little, and define in the same compilation
unit with the only caller, fil_crypt_find_space_to_rotate().
fil_crypt_key_mutex: Remove (unused).
Example of the failure:
http://buildbot.askmonty.org/buildbot/builders/bld-p9-rhel7/builds/4417/steps/mtr/logs/stdio
```
main.mysqld--help 'unix' w17 [ fail ]
Test ended at 2020-06-20 18:51:45
CURRENT_TEST: main.mysqld--help
--- /opt/buildbot-slave/bld-p9-rhel7/build/mysql-test/main/mysqld--help.result 2020-06-20 16:06:49.903604179 +0300
+++ /opt/buildbot-slave/bld-p9-rhel7/build/mysql-test/main/mysqld--help.reject 2020-06-20 18:51:44.886766820 +0300
@@ -1797,10 +1797,10 @@
sync-relay-log-info 10000
sysdate-is-now FALSE
system-versioning-alter-history ERROR
-table-cache 421
+table-cache 2000
table-definition-cache 400
-table-open-cache 421
-table-open-cache-instances 1
+table-open-cache 2000
+table-open-cache-instances 8
tc-heuristic-recover OFF
tcp-keepalive-interval 0
tcp-keepalive-probes 0
mysqltest: Result length mismatch
```
mtr: table_open_cache_basic autosized:
Let's assume that more than 400 are available and that
we can set the value back to the start value.
All of these system variables are autosized and can
generate MTR output differences.
Closes #1527
fix_fields() for the arguments of the NTH_VALUE function was updating the
same reference for every argument, so the items for the second and later
arguments were not resolved to their corresponding fields from the view;
instead they kept overwriting the reference to the first argument.
Depending on the build configuration the error might be hidden; in
particular, liblz4.so and libjemalloc.so make it disappear, but with
-DWITH_INNODB_LZ4=NO -DWITH_JEMALLOC=NO it reappears.
Removing the ORDER BY clause from a UNION when the UNION is inside an
IN/ALL/ANY/EXISTS subquery. This rewrite is already done for subqueries,
but it was not done for the fake_select of the UNION.
The issue occurs when records are read from the temporary file
(the filesort result in this case) via a cache (rr_from_cache).
The cache is initialized with init_rr_cache.
For a correlated subquery the cache is allocated on each execution of the
subquery, but it was deallocated only once, when the execution of the whole
query was complete.
So generally for subqueries we do two types of cleanup:
1) Full cleanup: free all resources of the query (like temporary tables).
This is generally done when the query execution is complete or when the
subquery does not need to be re-executed (the case of an uncorrelated
subquery).
2) Partial cleanup: minor cleanup that is required if the subquery needs to
be re-executed. This is done for all structures that need to be allocated
for each execution (for example, the SORT_INFO for filesort is allocated
for each execution of a correlated subquery).
The fix is to free the cache used by rr_from_cache in the partial cleanup
phase.
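A rough sketch of the two phases with hypothetical names and simplified
structures (not the actual server classes); the point is only that the
per-execution cache belongs in the partial cleanup:
```cpp
#include <cstdlib>

struct subquery_runtime
{
  void *rr_cache= nullptr;    /* re-allocated for every execution (cf. init_rr_cache) */
  void *tmp_table= nullptr;   /* lives for the whole query */

  void execute()              /* one (re-)execution of the correlated subquery */
  {
    rr_cache= std::malloc(4096);
    /* ... read the sorted records through the cache ... */
    partial_cleanup();        /* the fix: release per-execution resources here */
  }

  void partial_cleanup()      /* runs after every re-execution */
  {
    std::free(rr_cache);
    rr_cache= nullptr;
  }

  void full_cleanup()         /* runs once, when the whole query is done */
  {
    partial_cleanup();
    std::free(tmp_table);
    tmp_table= nullptr;
  }
};
```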
lock_rec_has_to_wait_in_queue(): Remove an obviously redundant assertion
that was added in commit a8ec45863b
and also enclose a Galera-specific condition in #ifdef WITH_WSREP.
Problem:- rpl_parallel2 was failing non-deterministically
Analysis:-
When FLUSH TABLES WITH READ LOCK is executed, it will allow all worker
threads to complete their ongoing transactions and then it will pause them.
At this point FTWRL will proceed to acquire the global read lock. FTWRL first
blocks threads from starting new commits, then upgrades the lock to block
commits of existing transactions.
Step1:
FLUSH TABLES WITH READ LOCK - Blocks new commits
Step2:
* The STOP SLAVE command sets 'force_abort=1', which unblocks the workers;
  they continue to execute events.
* T1: Waits in the 'record_gtid' call to update the 'gtid_slave_pos' table
  with its current GTID, but it is blocked because of Step1.
* T2: Holds the COMMIT lock and waits for T1 to commit.
Step3:
FLUSH TABLES WITH READ LOCK - Waiting to get BLOCK_COMMIT.
This results in a deadlock. When the STOP SLAVE command allows the paused
workers to proceed, the workers should skip the execution of all further
events, similar to the 'conservative' parallel mode.
Solution:-
We will assign 1 to skip_event_group when we are aborted in do_ftwrl_wait.
rpl_parallel_entry->pause_sub_id is only reset when force_abort is off in
rpl_pause_after_ftwrl.
Lex_input_stream::scan_ident_delimited() could go beyond the end
of the input when a starting backtick (`) delimiter did not have a
corresponding ending backtick.
Fix: catch the case when yyGet() returns 0, which means either
end-of-query or a raw 0x00 byte inside the backticks, and make the parser
fail with a syntax error, reporting the left backtick as the place of the
syntax error.
In the case of the 'filename' character set, as in a script like this:
SET CHARACTER_SET_CLIENT=17; -- 17 is 'filename'
SELECT doc.`Children`.0 FROM t1;
the ending backtick was not recognized as such, because my_charlen() returns
0 for a raw backtick (backticks must normally be encoded as @0060 in
'filename').
The same fix works for 'filename': the execution skips the backtick and
reaches the end of the query, then yyGet() returns 0.
This fix is OK for now, but eventually 'filename' should either be
disallowed as a parser character set or fixed to handle encoded punctuation
properly.
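A minimal sketch of the scanning fix (a hypothetical stand-alone scanner,
not the actual Lex_input_stream code):
```cpp
#include <cstddef>

/* buf points just past the opening backtick; len is the remaining input.
   Returns the identifier length, or -1 when the closing backtick is missing,
   in which case the caller raises a syntax error at the opening backtick. */
static int scan_ident_delimited_sketch(const char *buf, size_t len)
{
  for (size_t i= 0; ; i++)
  {
    char c= (i < len) ? buf[i] : 0;   /* yyGet()-style: 0 signals end of input */
    if (c == 0)
      return -1;                      /* end-of-query or a raw 0x00 byte */
    if (c == '`')
      return (int) i;                 /* found the closing backtick */
  }
}
```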
and inaccurately
Analysis: The list of all privileges is 118 characters wide. However, the
format of the error message was "%-.32s command denied to user...", and
get_length() sets the maximum width to 32 characters. As a result, only the
first 32 characters of the list of privileges were stored.
Fix: Change the format to "%-.100T..." so that get_length() sets the width
to 100. Hence, the first 100 characters of the list of privileges are
stored, and the type specifier 'T' appends '...' so that the truncation is
visible.
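For illustration only, using the standard printf precision field (the
privilege list below is a made-up example, and the 'T' specifier itself is a
MariaDB formatting extension that additionally appends "..." on truncation):
```cpp
#include <cstdio>

int main()
{
  /* An illustrative privilege list; the real one is 118 characters wide. */
  const char *privileges=
    "SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, "
    "PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES";

  std::printf("old: %-.32s command denied to user ...\n", privileges);   /* cut at 32 chars */
  std::printf("new: %-.100s command denied to user ...\n", privileges);  /* cut at 100 chars */
  return 0;
}
```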
Diagnostics_area::sql_errno upon query from I_S with LIMIT ROWS EXAMINED
open_normal_and_derived_table() fails because the query was already killed,
as the number of rows examined by the query exceeded the limit. However,
this isn't a real error.
Fix: Check whether there is actually an error before calling
thd->sql_errno(), and later send a warning in handle_select() if there is
no real error.
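A minimal sketch of the guard, with hypothetical simplified structures
rather than the server's Diagnostics_area and THD:
```cpp
#include <cstdio>

struct diagnostics_t
{
  bool error_raised= false;
  unsigned error_code= 0;
  unsigned sql_errno() const { return error_code; }  /* only meaningful if error_raised */
};

static void report_result(const diagnostics_t &da, bool rows_examined_limit_hit)
{
  if (da.error_raised)
    std::printf("ERROR %u\n", da.sql_errno());       /* a real error: report it */
  else if (rows_examined_limit_hit)
    /* Not a real error: send a warning (cf. handle_select()) instead. */
    std::printf("Warning: query result may be incomplete (LIMIT ROWS EXAMINED reached)\n");
}
```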
lock_rec_has_to_wait
wsrep_kill_victim
lock_rec_create_low
lock_rec_add_to_queue
DeadlockChecker::select_victim()
A THD can't change from a normal transaction to a BF (brute force)
transaction here, thus there is no need to synchronize access in the
wsrep_thd_is_BF function.
lock_rec_has_to_wait_in_queue
Add a condition that the lock is not NULL, and add assertions if we are in
a strong state.