Recalculate the long unique hash in Write_rows_log_event
and Update_rows_log_event.
Normally, generated columns (stored, and indexed virtual ones)
are deterministic, and their values don't need to be recalculated
on the slave, as they're already present in the row image.
But the long unique hash function was changed in MDEV-27653,
so a row event from an old master will carry the old hash,
while a table created on the new slave needs the new hash.
The problem is in the manager/worker communication when a worker sends
WARNINGS and then TESTRESULT. If the manager has not yet read the
WARNINGS response, both responses end up in the same buffer; can_read()
will indicate that we have data only once, so we must read all the data
from the socket at once. Otherwise the TESTRESULT response is lost and
the manager waits for it forever.
The fix reads the socket in a loop instead of a single line. But if
there is only one response in the buffer, the second read would block,
waiting until new data arrives. That is overcome by blocking(0), which
puts the handle into non-blocking mode: if there is no data, the second
read simply returns undef.
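The same drain-the-buffer idea, sketched in C++ terms for illustration
only (the actual fix lives in MTR's Perl code; the helper name here is
made up):

    #include <cerrno>
    #include <string>
    #include <sys/socket.h>
    #include <sys/types.h>

    // After select() has reported readability once, keep reading until
    // the kernel buffer is empty, so that two responses arriving
    // together (WARNINGS + TESTRESULT) are both consumed.
    static std::string drain_socket(int fd)
    {
      std::string data;
      char buf[4096];
      for (;;)
      {
        ssize_t n= recv(fd, buf, sizeof buf, MSG_DONTWAIT); // non-blocking
        if (n > 0)
        {
          data.append(buf, static_cast<size_t>(n));
          continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
          break;   // drained: the analogue of undef from blocking(0) in Perl
        break;     // EOF or a real error
      }
      return data;
    }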
The problem is that non-blocking mode is not supported by all Perl
flavors on Windows. Strawberry and ActiveState do not support it;
Cygwin and MSYS2 do. There is a known ioctl() hack that was supposed
to "work", but it doesn't do what is expected (it does not return data
when there is data). So on Windows, if the Perl is not Cygwin's, we
disable the fix.
MSYS2 is basically Cygwin, except that it has an easier installation
(but with tools which are not used) and it has some more control over
path conversion via MSYS2_ARG_CONV_EXCL and MSYS2_ENV_CONV_EXCL. So it
should be more Windows-friendly than Cygwin.
Installation
Similar to Cygwin, except that installing patch requires an additional
command run from the shell:
pacman -S patch
MSYS2 still doesn't work, as it returns a weird "Bad address" when
exec-ing a forked process from create_process(). The same exec from a
standalone perl -e runs just fine... :(
Cygwin is more Unix-oriented. It does not treat \n as \r\n in regexps
(fixed by \R), and it supplies Unix-style paths (fixed by
mixed_path()). It does some cleanup on paths when running an exe, so
the path will look different in the exe output (as with $exe_mysqld;
comparing basename() is enough).
Cygwin installation
1. Just install the latest perl version (only the base package) and
patchutils from cygwin-setup;
2. Don't forget to add c:\cygwin64\bin to the system PATH
before any other perl flavors;
3. There is a path-style conflict (see below): you must replace
c:\cygwin64\bin\sh.exe with the wrapper. Run MTR with
--cygwin-subshell-fix=do for that. Make sure you are running Cygwin
perl for the option to work;
4. Restart buildbot via net stop buildbot; net start buildbot
Path-style conflict of Cygwin-ish Perl
Some exe paths are passed to mysqltest and are executed by a native
call, which requires native-style (\-style) paths. These exe paths are
also executed by Perl itself: by MTR itself, which is not so critical,
but also by tests' --perl blocks, which are impossible to change. And
if Perl detects shell expansion or uses a pipe command, it passes the
exe path to /bin/sh, which is Cygwin-compiled bash that cannot work
with \-style paths (at least not in -c processing). Thus we require
\-style paths in some parts of MTR execution and /-style paths in
others. Examples of tests which cover these different parts are:
main.mysqlbinlog_row_compressed \
main.sp_trans_log
It would be great to force Perl to use something other than /bin/sh,
but unfortunately /bin/sh is compiled into the binary. So the only
solution left is to overwrite /bin/sh with a wrapper script which
passes the command to cmd.exe instead of bash.
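A hypothetical illustration of the wrapper idea in C++ (the real fix
installs a script via --cygwin-subshell-fix=do; quoting is heavily
simplified here):

    #include <cstdlib>
    #include <string>

    // Stand-in /bin/sh: hand "-c <command>" off to cmd.exe instead of
    // bash, so \-style exe paths keep working.
    int main(int argc, char **argv)
    {
      // Perl invokes the subshell as: sh -c "<command>"
      if (argc >= 3 && std::string(argv[1]) == "-c")
      {
        std::string cmd= std::string("cmd.exe /c ") + argv[2];
        return std::system(cmd.c_str());
      }
      return 1; // other invocation modes are not handled in this sketch
    }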
See "Path-style conflict" in "MDEV-30836 MTR Cygwin fix" for explanation.
To install subshell fix use --cygwin-subshell-fix=do
To uninstall use --cygwin-subshell-fix=remove
This works only from Cygwin environment. As long as perl on PATH is
from Cygwin you are on Cygwin environment. Check it with
perl --version
This is perl 5, version 36, subversion 1 (v5.36.1) built for
x86_64-cygwin-threads-multi
run_test_server() is actually the manager main loop. We move this
function out into the Manager package and split it into run() and
parse_protocol(). The latter is needed for the fix. Moving it into a
separate package helps to share some common variables that used to be
local to run_test_server().
Functions from the main package are now prefixed with main:: (this
should be reorganized somehow later, or auto-imported).
- Server aborts when a table doesn't have a referenced index.
This is caused by 5f09b53bdb (MDEV-31086).
While iterating over the foreign key constraints, we fail to
consider that InnoDB has no referenced index for
them when foreign key checks are disabled.
innodb_max_purge_lag_wait_update(): Return immediately if we are
in high_level_read_only mode.
srv_wake_purge_thread_if_not_active(): Relax a debug assertion.
If srv_read_only_mode holds, purge_sys.enabled() will not hold
and this function will do nothing.
trx_t::commit_in_memory(): Remove a redundant condition before
invoking srv_wake_purge_thread_if_not_active().
The test innodb.row_size_error_log_warnings_3 that was added in
commit 372b0e6355 (MDEV-20194)
failed to take into account the earlier adjustment in
commit cf574cf53b (MDEV-27634)
that is specific to many GNU/Linux distributions for the s390x.
Restore the code that makes InnoDB choose the second transaction as
the deadlock victim if two transactions deadlock that need to commit
in order for parallel replication. This code was erroneously removed
when VATS was implemented in InnoDB.
Also add a test case for InnoDB choosing the right deadlock victim.
This also fixes the following bug, with a test case that reliably
reproduces it:
MDEV-28776: rpl.rpl_mark_optimize_tbl_ddl fails with timeout on sync_with_master
Note: This should be null-merged to 10.6, as a different fix is needed
there due to InnoDB locking code changes.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Remove the exception that InnoDB does not report auto-increment lock
waits to parallel replication.
There was an assumption that these waits could not cause conflicts
with in-order parallel replication and thus need not be reported.
However, this assumption is wrong, and it is possible to get conflicts
that lead to hangs for the duration of --innodb-lock-wait-timeout.
This can be seen with three transactions:
1. T1 is waiting for T3 on an autoinc lock
2. T2 is waiting for T1 to commit
3. T3 is waiting on a normal row lock held by T2
Here, T3 needs to be deadlock killed on the wait by T1.
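A tiny standalone walk of that waits-for cycle (illustration only; the
transaction names and edges come straight from the list above):

    #include <cstdio>
    #include <map>
    #include <string>

    int main()
    {
      // Waits-for edges from the scenario above.
      std::map<std::string, std::string> waits_for= {
        {"T1", "T3"},  // T1 waits for T3 on an autoinc lock
        {"T2", "T1"},  // T2 waits for T1 to commit (in-order replication)
        {"T3", "T2"},  // T3 waits on a row lock held by T2
      };
      std::string t= "T1";
      for (int i= 0; i < 3; i++)
      {
        std::printf("%s -> %s\n", t.c_str(), waits_for[t].c_str());
        t= waits_for[t];
      }
      // The walk ends back at T1: a cycle. Unless the autoinc wait
      // (T1 -> T3) is reported, the replication layer cannot see the
      // cycle and cannot deadlock-kill T3.
      return 0;
    }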
Note: This should be null-merged to 10.6, as a different fix is needed
there due to InnoDB lock code changes.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The test innodb.alter_rename_files rather frequently hangs in
checkpoint_set_now. The test was removed in MariaDB Server 10.5
commit 37e7bde12a when the code that
it aimed to cover was simplified. Starting with MariaDB Server 10.5,
page flushing and log checkpointing are much simpler, handled
by the single buf_flush_page_cleaner() thread.
Let us remove the test to avoid occasional failures. We are not going
to fix the cause of the failure in MariaDB Server 10.4.
Field_varstring::get_copy_func() did not take into account
that the functions do_varstring1[_mb] and do_varstring2[_mb] do not
support compressed data.
Change the return value of Field_varstring::get_copy_func()
to `do_field_string` if there is compression and truncation
at the same time. This fixes the problem, so now it works as follows:
- val_str() uncompresses the data
- the prefix is then calculated on the uncompressed data
Additionally, two new copying functions are introduced:
- do_varstring1_no_truncation()
- do_varstring2_no_truncation()
The new copying functions are used in cases when:
- a Field_varstring with length_bytes==1 is changing to a longer
Field_varstring with length_bytes==1
- a Field_varstring with length_bytes==2 is changing to a longer
Field_varstring with length_bytes==2
In these cases we care about neither compression nor
multi-byte prefixes: the entire data gets fully copied
from the source column to the target column as is.
This is a kind of new optimization, but it was also needed
to preserve existing MTR test results.
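A standalone sketch of that selection rule (the enum and function are
illustrative only, not the server's actual types):

    enum class CopyFunc
    {
      field_string,              // generic copy via val_str()
      varstring1, varstring2,    // length-aware copies, 1/2 length bytes
      varstring1_no_truncation, varstring2_no_truncation
    };

    // compressed: the source column uses compression
    // truncation: the target column is shorter than the source
    static CopyFunc pick_copy_func(bool compressed,
                                   unsigned length_bytes_src,
                                   unsigned length_bytes_dst,
                                   bool truncation)
    {
      if (compressed && truncation)
        return CopyFunc::field_string;  // uncompress first, then cut prefix
      if (length_bytes_src == length_bytes_dst && !truncation)
        return length_bytes_src == 1 ? CopyFunc::varstring1_no_truncation
                                     : CopyFunc::varstring2_no_truncation;
      return length_bytes_src == 1 ? CopyFunc::varstring1
                                   : CopyFunc::varstring2;
    }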
This is also related to
MDEV-31348 Assertion `last_key_entry >= end_pos' failed in virtual bool
JOIN_CACHE_HASHED::put_record()
Valgrind exposed a problem with the join_cache for hash joins:
==25636== Conditional jump or move depends on uninitialised value(s)
==25636==    at 0xA8FF4E: JOIN_CACHE_HASHED::init_hash_table()
(sql_join_cache.cc:2901)
The reason for this was that avg_record_length contained a random
value if one had used SET
optimizer_switch='optimize_join_buffer_size=off'.
This causes either 'random size' memory to be allocated (up to
join_buffer_size), which can increase memory usage, or, if
avg_record_length is less than the row size, memory overwrites in
thd->mem_root, which is bad.
Fixed by setting avg_record_length in JOIN_CACHE_HASHED::init()
before it is used.
There is no test case for MDEV-31893, as the valgrind run of
join_cache_notasan checks that.
I added a test case for MDEV-31348.
The test case accessed slave-relay-bin.000003 without waiting for the IO
thread to write it first. If the IO thread was slow, this could fail.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
- InnoDB aborts when a table is dropping a column. This is
caused by 5f09b53bdb (MDEV-31086).
While iterating over the altered table fields, we fail to consider
the dropped columns.
There were two related problems:
(1) A Galera node that is defined as a slave to an async MariaDB
master might at restart do an SST (state transfer), and as
part of that it will copy the mysql.gtid_slave_pos table.
The problem is that updates on that table are not replicated
in the cluster. Therefore, the table is copied from a donor
that is not a slave, and the joiner loses the gtid position it
was at and starts executing events from the wrong position in the
binlog. This incorrect position can break replication and
cause the node to be dropped, requiring user action.
(2) The slave sql thread might start executing events before
galera is ready (wsrep_ready=ON), and that can also
cause the node to be dropped from the cluster.
In this fix we enable replication of the mysql.gtid_slave_pos
table in the cluster. In this way all nodes in the cluster
will know the gtid slave position, and even after an SST the
joiner knows the correct gtid position to start from.
Furthermore, we wait for galera to be ready before the slave
sql thread executes any events, to prevent too-early
execution.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Don't construct open ranges from prefix blob keys for < (less than),
just as is already done for > (greater than),
because a prefix KEY_PART doesn't create a prefix Field for blobs
(see open_table_from_share() near "Create a new field for the key
part"), so stored_field_cmp_to_item() would compare the original field
to the value without taking the prefix length into account.
A simple "SET SESSION gtid_seq_no= DEFAULT" did not work, it would straight
up crash the server! Also, explicitly setting gtid_seq_no to 0 gave an error
in --gtid-strict-mode=1.
Setting to DEFAULT or 0 should disable any prior setting of
gtid_seq_no, so that the next transaction is allocated the next GTID
in sequence, as normal.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This patch adds, for "--ps-protocol", a second execution
of "SELECT" queries.
This patch also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in testcases.
MDEV-31749 sporadic assert in MDEV-30619 new test
If the workers of a parallel replica are busy (potentially with long
queues) but the SQL thread has no events left to distribute (so it
goes idle), then the next event that comes from the primary will
update mi->last_master_timestamp with its timestamp, even if the
workers have not yet finished.
This patch changes the parallel replica logic that updates
last_master_timestamp after idling from relying solely on
sql_thread_caught_up (added in MDEV-29639) to using the latter
together with the rli queued/dequeued event counters.
That is, if the queued count is equal to the dequeued count, all
events have been processed, and the replica is considered idle only
when the driver thread has also distributed all events.
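A minimal sketch of that idle condition (field names here are
hypothetical, not the server's actual members):

    #include <atomic>
    #include <cstdint>

    struct ParallelReplicaState
    {
      std::atomic<uint64_t> queued{0};    // bumped by the driver thread
      std::atomic<uint64_t> dequeued{0};  // bumped by worker threads
      std::atomic<bool> sql_thread_caught_up{false};

      // last_master_timestamp may be treated as "idle" only when the
      // driver has nothing left AND the workers have drained their
      // queues; either condition alone is not enough.
      bool idle() const
      {
        return sql_thread_caught_up.load() &&
               queued.load() == dequeued.load();
      }
    };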
Low-level details of the commit include:
- to make a more generalized test for Seconds_Behind_Master on
the parallel replica, rpl_delayed_parallel_slave_sbm.test
is renamed to rpl_parallel_sbm.test for this purpose;
- pause_sql_thread_on_next_event usage was removed
with the MDEV-30619 fixes; rather than removing it altogether,
we adapt it to the needs of this test case;
- a test case is added to cover the SBM spike on relay log read and
the last_master_timestamp (LMT) update that was fixed by MDEV-29639;
- rpl_seconds_behind_master_spike.test is made to use
the negate_clock_diff_with_master debug eval.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Restrict vcol_cleanup_expr() in close_thread_tables() to simple
locked tables mode only. Prelocked mode is cleaned up like a normal
statement: in close_thread_table().
The first UPDATE under START TRANSACTION does nothing (nstate=
nstate), but it generates history anyway. Since the update vector is
empty, we get into the (!uvect->n_fields) branch, which only adds a
history row but does not do the update. After that we get the current
row with a wrong (old) row_start value, and because of that the
second UPDATE tries to insert a history row again: it sees trx->id !=
row_start, which is the guard against inserting multiple trx_id-based
history rows under the same transaction (we have the same trx_id, so
we get a duplicate error, which is what this bug demonstrates). But
this attempt fails anyway, because the PK is based on row_end, which
is constant under the same transaction, so the PK didn't change.
The fix moves vers_make_update() to an earlier stage of
calc_row_difference(). It thus prepares the update vector before the
(!uvect->n_fields) check and never gets into that branch, so there is
no need to handle versioning inside that condition anymore.
Now trx->id and row_start are equal after the first UPDATE, and we
don't try to insert a second history row.
== Cleanups and improvements ==
ha_innobase::update_row():
vers_set_fields and vers_ins_row are cleaned up into a direct
condition check. The SQLCOM_ALTER_TABLE check is no longer used, as
this is dead code; an assertion is done instead.
upd_node->is_delete is set in calc_row_difference() just to keep the
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql(), as it is required for
ha_innobase::delete_row() as well.
row_ins_duplicate_error_in_clust():
Restrict DB_FOREIGN_DUPLICATE_KEY to more precise conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
On CREATE TABLE tmp AS SELECT ... we exited Item_func::fix_fields()
with an error. fix_fields_if_needed('foo' or 'bar') failed and we
returned true, but const_item_cache had already been changed. So the
item was left in an inconsistent state: fixed == false and
const_item_cache == false.
Now we clean up the item before returning when Item_func::fix_fields()
fails.
The constraints processing function row_ins_check_foreign_constraint()
was not called because row_upd_check_references_constraints() didn't
see the update as a delete: node->is_delete was false.
Since MDEV-30378 we check for TRG_EVENT_DELETE to detect a versioned
delete in ha_innobase::update_row().
Now we can use TRG_EVENT_DELETE to set upd_node->is_delete, so
constraints processing is triggered correctly.
1. Exclude merging history rows into the fts index.
The check !history_fts && (index->type & DICT_FTS) was just an
incorrect attempt to avoid history in the fts index.
2. Don't check for duplicates for history rows.
Also fixes:
MDEV-30982 UBSAN: runtime error: null pointer passed as argument 2, which is declared to never be null in my_strnncoll_binary on DELETE
Calling memcmp() with a NULL pointer is undefined behaviour
according to the C standard, even if the length argument is 0.
Add tests for length == 0 before calling memcmp() in:
- my_strnncoll_binary()
- my_strnncoll_8bit_bin()
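A self-contained sketch of the guard (the helper name is made up; the
real fix adds the length check inside the two functions above):

    #include <cstring>

    // memcmp() with a NULL pointer is UB even when len == 0, so test
    // the length first; either pointer may be NULL for empty strings.
    static int safe_prefix_cmp(const unsigned char *a,
                               const unsigned char *b, std::size_t len)
    {
      return len ? std::memcmp(a, b, len) : 0;
    }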
Problem:
Item_func_conv::val_str() copied the ASCII string with the numeric
base conversion result directly to the function result string. In
case of a tricky character set (e.g. utf32) this produced an
ill-formed string.
Fix:
Copy the base conversion result to the function result as-is only if
the function character set is ASCII-compatible; go through a
character set conversion otherwise.
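A standalone illustration of why the direct copy breaks, with utf32 as
the example (the function below is illustrative, not server code):

    #include <string>

    // A base-conversion result is a buffer of plain ASCII bytes. For an
    // ASCII-compatible charset those bytes are already a valid string;
    // for utf32 every character must become one 4-byte code unit, so a
    // byte-for-byte copy yields an ill-formed string.
    static std::u32string ascii_result_to_utf32(const std::string &ascii)
    {
      std::u32string out;
      out.reserve(ascii.size());
      for (unsigned char c : ascii)
        out.push_back(static_cast<char32_t>(c));  // widen, don't memcpy
      return out;
    }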
Change history in the affected code:
- Since 10.4.8 (MDEV-20397 and MDEV-23311), functions ROUND(), CEILING(),
FLOOR() return a TIME value for a TIME input.
- Since 10.4.14 (MDEV-23525), MIN() and MAX() calculate a result for a TIME
input using val_native() rather than val_str().
Problem:
The patch for MDEV-23525 did not take into account combinations like
MIN(ROUND(time)), MAX(FLOOR(time)), etc.
MIN() and MAX() with ROUND(time), CEILING(time), or FLOOR(time) as an
argument call the method val_native() of the underlying classes
Item_func_round and Item_func_int_val. However, these classes
implemented the method val_native() as DBUG_ASSERT(0).
Fix:
This patch adds TIME-specific code inside:
- Item_func_round::val_native()
- Item_func_int_val::val_native()
still with DBUG_ASSERT(0) for all other data types,
as other data types do not call val_native() of these classes.
We'll need a more generic solution eventually, e.g.
turning Item_func_round and Item_func_int_val into Item_handled_func.
However, that change would be too risky for 10.4 at this point.
MDEV-31503 ALTER SEQUENCE ends up in optimistic parallel slave binlog out-of-order
The OOO error was still possible even after MDEV-31077. This time
it occurred through open_table() when the sequence table was not in
the table cache *and* the table had been created before the last
server restart.
In that context an internal (read-only) transaction is committed,
and it was not blocked from doing a wakeup() call to subsequent
transactions.
Fixed by extending the suspend_subsequent_commits() effect to the
entirety of Sql_cmd_alter_sequence::execute().
An extended MDEV-31077 test proves the fixes for both failure
scenarios.
The bug condition also suggests a workaround: pre-SELECT sequence
tables before START SLAVE.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
The largest_started_sub_id needs to be set under LOCK_parallel_entry
together with the testing of stop_sub_id. However, in between was the
logic for do_ftwrl_wait(), which temporarily releases the mutex. This
could lead to inconsistent stopping among worker threads and lost
data.
Fix by moving all the stop-related logic out of the unrelated
do_gco_wait() and do_ftwrl_wait() and into its own function,
do_stop_handling().
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>