safety first - tell the mariadb client not to execute dangerous
CLI commands; they cannot legitimately be present in the dump anyway.
Wrapping the command in /*!999999 ..... */ guarantees that
if a non-mariadb-cli client loads the dump and sends it to the
server, the server will ignore the command it doesn't understand.
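For example, since this change a dump starts with a wrapped sandbox
command roughly like this (a sketch; the exact comment text may differ):

  /*!999999\- enable the sandbox mode */

An old client sends the line to the server as an executable comment and
the server skips it; a sandbox-aware mariadb client recognizes \- and
enables the sandbox mode itself.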
mysql --sandbox
disables system (\!), tee (\T), pager with an argument (\P foo), source (\.)
does *not* disable edit (\e). Use EDITOR=/bin/false to disable it,
or, for example, EDITOR=rnano for something more useful
does *not* disable pager (\P) without an argument. Use
PAGER=cat or, for example, PAGER=less LESSSECURE=1 for something
more useful
using a disabled command is an error, which can be ignored with --force
Also, a "sandbox" command (\-) - enables the sandbox mode until EOF
(current file or the session, if interactive)
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not, so the engine disables it too.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
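A sketch of the changed API (simplified; the exact declarations in
sql/handler.h may differ):

  /* before: the engine interpreted the enum and chose what to switch off */
  virtual int disable_indexes(uint mode); /* e.g. HA_KEY_SWITCH_NONUNIQ_SAVE */

  /* after: the server passes the exact set of keys that must stay enabled,
     so a logically unique long-unique key is never disabled by the engine */
  virtual int disable_indexes(key_map map, bool persist);
  virtual int enable_indexes(key_map map, bool persist);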
This commit fixes sporadic failures in the galera_3nodes_sr.GCF-336
test. The following changes have been made here:
1) A small addition to the test itself, which should make
it more deterministic by waiting for the non-primary state
before COMMIT;
2) More careful handling of the wsrep_ready variable in
the server code (it should always be protected by a mutex).
No additional tests are required.
dgcov.pl was putting gcov execution counters on the wrong lines
in the report (*.dgcov files were correct), because it was
incrementing the new-file line number for diff lines starting with '-'
(lines from the old file, not present in the new).
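The counting rule, sketched in C++ for clarity (dgcov.pl itself is Perl;
names here are illustrative): only '+' and context lines exist in the new
file, so only they may advance the new line number:

  /* how unified-diff prefixes map to the two line counters */
  void count_line(char prefix, unsigned &old_lineno, unsigned &new_lineno)
  {
    switch (prefix)
    {
    case '+': new_lineno++; break;                /* added: new file only */
    case '-': old_lineno++; break;                /* removed: old file only */
    default:  old_lineno++; new_lineno++; break;  /* context: both files */
    }
  }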
This fixes galera.galera_sst_mariabackup_table_options.
Note that `man snprintf` says
The functions snprintf() and vsnprintf() do not write more
than size bytes (including the terminating null byte
('\0')). If the output was truncated due to this limit, then
the return value is the number of characters (excluding the
terminating null byte) which would have been written to the
final string if enough space had been available.
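So a caller can detect truncation by comparing the return value with the
buffer size; a minimal illustration in C:

  char buf[8];
  int n= snprintf(buf, sizeof(buf), "%s", "a longer string");
  if (n >= (int) sizeof(buf))
  {
    /* truncated: buf holds 7 chars plus '\0', n is the full length (15) */
  }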
A UDF isn't supposed to use my_error(); it should return
the error message in the provided error message buffer.
Fixes valgrind:
==93993== Conditional jump or move depends on uninitialised value(s)
==93993== at 0x484ECCD: strnlen (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==93993== by 0x1AD2B2C: process_str_arg (my_vsnprintf.c:259)
==93993== by 0x1AD47E0: my_vsnprintf_ex (my_vsnprintf.c:696)
==93993== by 0x1A3B91E: my_error (my_error.c:120)
==93993== by 0xF87BE8: udf_handler::fix_fields(THD*, Item_func_or_sum*, unsigned int, Item**) (item_func.cc:3638)
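A minimal sketch of the convention, with a hypothetical UDF name: the init
function fills the caller-provided message buffer (MYSQL_ERRMSG_SIZE bytes)
and returns nonzero, instead of calling my_error():

  #include <mysql.h>
  #include <string.h>

  my_bool my_udf_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
  {
    if (args->arg_count != 1)
    {
      /* report the error via the provided buffer, not via my_error() */
      strncpy(message, "my_udf() requires exactly one argument",
              MYSQL_ERRMSG_SIZE - 1);
      message[MYSQL_ERRMSG_SIZE - 1]= '\0';
      return 1;  /* nonzero: init failed */
    }
    return 0;
  }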
followup for 267dd5a993
The test waits for the event to get stuck on MASTER_DELAY,
but on a slow/overloaded slave the event might pass MASTER_DELAY
before the test starts waiting.
Wait for the event to get stuck on the LOCK TABLES (after MASTER_DELAY)
instead; the event cannot avoid that.
Refinement of the original patch.
Move the code to reset the kill up into the parent class
Xid_apply_log_event, to also fix the similar issue for XA COMMIT.
Increase the number of slave retries in the test case
rpl.rpl_parallel_multi_domain_xa to fix some sporadic failures. The test
generates massive amounts of conflicting transactions in multiple
independent domains, which can cause multiple rollback+retry for a
transaction as it conflicts with transactions in other domains one-by-one.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Don't deadlock kill event groups in other domains if they are not
SPECULATE_OPTIMISTIC. Such event groups may not be able to safely roll back
and retry (e.g. DDL).
But do deadlock kill a transaction T2 from a blocked transaction U in another
domain, even if T2 has a lower sub_id than U. Otherwise, in case of a cycle
T2->T1->U->T2, we might not break the cycle if U is not SPECULATE_OPTIMISTIC.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When HA_DUPLICATE_POS is not supported, the row to replace was located with
ha_index_read_idx_map, which navigates by the hash alone.
Hence, given a hash collision, it may choose an incorrect row.
handler::position() would be correct and very convenient to use here:
dup_ref is already set by the handler independently of the engine
capabilities whenever an extra lookup is made (for a long unique or something
else, for example WITHOUT OVERLAPS); such an error is indicated by
file->lookup_errkey != -1.
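A hedged sketch of the fixed lookup (locate_dup_row is an illustrative
name, not the real server code): fetch the duplicate row by its saved
position instead of re-reading through the possibly-colliding hash index:

  /* assumes sql/handler.h; dup_ref was saved during the failed write */
  int locate_dup_row(handler *file, uchar *record)
  {
    int error= file->ha_rnd_init(false);    /* position-based access */
    if (!error)
    {
      /* exact row by position: immune to hash collisions */
      error= file->ha_rnd_pos(record, file->dup_ref);
      file->ha_rnd_end();
    }
    return error;
  }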
In strict mode a timestamp(0) column could be directly assigned from
another timestamp(N>0) column with the value '1970-01-01 00:00:00.1'
(at time zone '+00:00'), or with any other value '1970-01-01 00:00:00.XXXXXX'
with non-zero microsecond value XXXXXX.
This assignment happened silently without warnings or errors.
It worked as follows:
- The value {tv_sec=0, tv_usec=100000}, which is '1970-01-01 00:00:00.1'
was rounded to {tv_sec=0, tv_usec=0}, which is '1970-01-01 00:00:00.0'
- Then {tv_sec=0, tv_usec=0} was silently re-interpreted as zero datetime.
After the fix this assignment always raises a warning,
which in strict mode is escalated to an error.
The problem in this scenario is that '1970-01-01 00:00:00' cannot be stored,
because its timeval value {tv_sec=0, tv_usec=0} is reserved for zero datetimes.
Thus the warning should be raised no matter if sql_mode allows or disallows
zero dates.
Field_timestampf::val_native() checked only the
first four bytes to detect zero dates.
That was not enough. Fixing the code to check all packed_length()
bytes to detect zero dates.
The code in Field_timestamp::save_in_field() did not catch
zero datetime and stored it to the other field like a usual value
using store_timestamp_dec(), which knows nothing about zero date and
treats {tv_sec=0, tv_usec=0} as a normal timeval value corresponding to
'1970-01-01 00:00:00 +00:00'.
Fixing the code to catch the special combination (ts==0 && sec_part==0) and
store it using store_time_dec() with a zero datetime passed as an argument.
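A hedged sketch of the fixed branch in Field_timestamp::save_in_field()
(simplified; argument names and the exact calls in the real code may differ):

  if (ts == 0 && sec_part == 0)
  {
    /* {tv_sec=0, tv_usec=0} is the reserved zero-datetime sentinel */
    MYSQL_TIME zero;
    memset(&zero, 0, sizeof(zero));
    zero.time_type= MYSQL_TIMESTAMP_DATETIME;
    return to->store_time_dec(&zero, decimals());
  }
  struct timeval tv= { ts, (long) sec_part };   /* a normal timeval */
  return to->store_timestamp_dec(tv, decimals());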
Assertion "from->s->online_alter_binlog == NULL" fails in
copy_data_between_tables, signalizing that a table share is being reused
(in another alter) after a lock upgrade to EXCLUSIVE fails.
Commit 3059f27 relaxed the lock to be upgraded to MDL_SHARED_NO_WRITE, leaving
it to happen later by a common path wait_while_table_is_used() call.
However the error handling there is not enough for online alter case, where we
require (for now) the table to be flushed, in order to clean up the memory
properly.
* Add another lock upgrade (to MDL_EXCLUSIVE) after the second replication stage
in copy_data_between_tables.
The error from this upgrade will be handled by the branch presented further in
the function.
MDEV-33450 Assertion fails in main.alter_table_online_debug
A `TABLE_SHARE` that is being online-altered has a shared
`s->online_alter_binlog` member that all concurrent DMLs write to. The online
alter thread deletes it under the MDL_EXCLUSIVE lock. If upgrading the lock
to MDL_EXCLUSIVE fails, the table is marked as `flushed` and is freed
automatically when its usage count drops to zero.
In commit 3059f27 the lock upgrade was relaxed to MDL_SHARED_NO_WRITE to allow
concurrent SELECT threads during the final `online_alter_read_from_binlog()`
pass. An attempt to upgrade the lock to MDL_EXCLUSIVE still happened, but
much later, after the code that marked the table `flushed`.
That is, if the upgrade failed, the table was left with a stale
`s->online_alter_binlog`, triggering an assertion in a future online alter.
To fix this, upgrade the lock to MDL_EXCLUSIVE earlier, after the final
`online_alter_read_from_binlog()`.
When binlog_get_pending_rows_event was refactored, one usage in
binlog_need_stmt_format was overlooked.
As binlog_get_pending_rows_event now requires an existing cache_mngr, that
check is now made first.
When calculating the next wakeup timepoint for the timer thread, a 32-bit
int overflow can happen with large thread_pool_stall_limit values.
Fixed by making one operand 8 bytes wide.
Also fixed the type of tick_interval to be unsigned, so it does not
go negative for very large thread_pool_stall_limit values.
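An illustration of the overflow, standalone (values hypothetical, not the
server code):

  #include <stdint.h>

  uint32_t tick_interval= 5000000;        /* large stall limit, in ms */
  uint64_t bad=  tick_interval * 1000;    /* multiply done in 32 bits: wraps */
  uint64_t good= tick_interval * 1000ULL; /* one 8-byte operand: no overflow */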
This patch augments Gtid_log_event with the user thread id.
In particular, that compensates for the loss of this info in
Rows_log_events.
Gtid_log_event::thread_id becomes visible in mysqlbinlog output, e.g.
#231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10
as a 32-bit unsigned integer (the connection id can only be
32 bits; see MDEV-15089 for details).
While the size of the Gtid event has grown by 4 bytes,
replication between OLD <-> NEW is not affected by it. This patch
also slightly changes the logic that converts Gtid events to Query
events for older replicas which don't support Gtid. Instead of
hard-coding the padding of the sys-var section of the generated
Query event, the length to pad is dynamically calculated based
on the length of the Gtid event.
This work was started by the late Sujatha Sivakumar.
Brandon Nesterenko took it over, reviewed initial patches and
extended the work.
Also thanks to Andrei for his help in finalizing the fixes for
MDEV-33924, which were squashed into this patch.
Reviewed-by:
=============
Andrei Elkin <andrei.elkin@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
- ZLIB_LIBRARIES, not ZLIB_LIBRARY
- ZLIB_INCLUDE_DIRS, not ZLIB_INCLUDE_DIR
For building libmariadb, ZLIB_LIBRARY/ZLIB_INCLUDE_DIR are still defined.
This workaround will be removed later.
This is based on https://github.com/intel/intel-ipsec-mb/
and has been tested both on x86 and x86-64, with code that
was generated by several versions of GCC and clang.
GCC 11 or clang 8 or later should be able to compile this,
and so should recent versions of MSVC.
Thanks to Intel Corporation for providing access to hardware,
for answering my questions regarding the code, and for
providing the coefficients for the CRC-32C computation.
crc32_avx512(): Compute a reverse polynomial CRC-32 using
precomputed tables and carry-less product, for up to 256 bytes
of unaligned input per loop iteration.
Reviewed by: Vladislav Vaintroub
In our unit test, let us rely on our own reference
implementation using the reflected
CRC-32 ISO 3309 and CRC-32C polynomials. Let us also
test with various lengths.
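Such a reference can be a plain bitwise loop over the reflected polynomial
(a sketch, assuming the standard polynomials: 0xEDB88320 for CRC-32
ISO 3309, 0x82F63B78 for CRC-32C; the actual unit-test code may differ):

  #include <stddef.h>
  #include <stdint.h>

  static uint32_t crc32_ref(uint32_t poly, uint32_t crc,
                            const void *buf, size_t len)
  {
    crc= ~crc;
    for (const uint8_t *b= (const uint8_t *) buf; len--; b++)
    {
      crc^= *b;
      for (int i= 0; i < 8; i++)
        crc= (crc >> 1) ^ ((crc & 1) ? poly : 0);
    }
    return ~crc;
  }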
Let us refactor the CRC-32 and CRC-32C implementations
so that no special compilation flags will be needed and
so that some function call indirection will be avoided.
pmull_supported: Remove. We will have pointers to two separate
functions crc32c_aarch64_pmull() and crc32c_aarch64().
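A hedged sketch of the resulting dispatch (pmull_available() is a
hypothetical stand-in for the runtime capability check; the two
implementation names are from this commit):

  typedef unsigned (*my_crc32_t)(unsigned crc, const void *buf, size_t len);

  extern "C" unsigned crc32c_aarch64(unsigned, const void *, size_t);
  extern "C" unsigned crc32c_aarch64_pmull(unsigned, const void *, size_t);

  static const my_crc32_t crc32c_impl=
    pmull_available() ? crc32c_aarch64_pmull : crc32c_aarch64;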
The two I_S plugins SPIDER_ALLOC_MEM and SPIDER_WRAPPER_PROTOCOL
only make sense if the main SPIDER plugin is installed. Further,
SPIDER_ALLOC_MEM requires a mutex that requires SPIDER init in order
to fill the table.
We also update the spider init query to override
--transaction_read_only=on so that it does not affect the spider init.
Also fixed error handling in spider_db_init() so that failure in
spider table init does not result in a memory leak.
CURRENT_TEST: binlog_encryption.rpl_parallel_gco_wait_kill
mysqltest: In included file "./suite/rpl/t/rpl_parallel_gco_wait_kill.test":
included from /home/buildbot/amd64-ubuntu-2004-debug/build/mysql-test/suite/binlog_encryption/rpl_parallel_gco_wait_kill.test at line 2:
At line 334: Can't initialize replace from 'replace_result $thd_id THD_ID'
An SQL thread can reach the "Slave has read all relay log" state
and then start reading the relay log again. Let's use a more generic
pattern to retrieve the SQL thread id even if it's not
in the "read all relay log" state.
Clear any pending deadlock kill after completing XA PREPARE, and before
updating the mysql.gtid_slave_pos table in a separate transaction.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
One case is conflicting transactions T1 and T2 with different domain ids, in
optimistic parallel replication in non-GTID mode. Then T2 will
wait_for_prior_commit on T1; and if T1 got a row lock wait on T2, it would
hang, as the different domains caused the deadlock kill to be skipped in
thd_rpl_deadlock_check().
More generally, if we have transactions T1 and T2 in one domain/master
connection, and an independent transaction U in another, then we can
still deadlock like this:
T1 row lock wait on U
U row lock wait on T2
T2 wait_for_prior_commit on T1
This commit enforces the deadlock kill in these cases. If the waited-for
transaction is speculatively applied, then it will be deadlock killed in
case of a conflict, even if the two transactions are in different domains
or master connections.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>