The problem was an assertion assuming that we always hold the
THD::LOCK_thd_data mutex, which is not true. In most cases we do,
but the function is also used from the InnoDB lock manager, where
we cannot take THD::LOCK_thd_data without violating the mutex
ordering. Removed the assertion, as the wsrep transaction state
cannot change even in that case.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Connector/C 3.4 disables mysql_old_password by default, so
add an option for the `connect` command to support specifying the
allowed authentication plugins (MARIADB_OPT_RESTRICTED_AUTH).
Use it to enable mysql_old_password when needed for testing.
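For reference, a hedged sketch of how a client can specify the allowed
plugins through Connector/C (the plugin list shown is illustrative):

  /* allow only the listed authentication plugins for this connection */
  mysql_optionsv(mysql, MARIADB_OPT_RESTRICTED_AUTH,
                 "mysql_native_password,mysql_old_password");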
In the $case=2 branch, it is wrong to kill after the first binlog EOF,
because that might happen between INSERT(4) and INSERT(5).
So, wait for the slave to acknowledge INSERT(5) before killing
the master; that is, both connection threads must pass
repl_semisync_master.wait_after_sync().
The slave IO thread sets MYSQL_SET_CHARSET_DIR. The code for this option
however is not thread-safe in sql-common/client.c. The value set is
temporarily written to mysys global variable `charsets-dir` and can be seen
by other threads running in parallel, which can result in use-after-free
error.
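A minimal illustration of the unsafe pattern (names simplified; not the
actual client code):

  static const char *charsets_dir;        /* global, shared by all threads */

  void set_charset_dir(const char *dir)   /* called from one connection */
  {
    charsets_dir= dir;  /* briefly visible to every other thread; if this
                           connection later frees `dir`, a concurrent
                           reader hits a use-after-free */
  }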
Problem was visible as random failures of test cases in suite multi_source
with Valgrind or MSAN.
Work around this by not setting the option for the slave connection;
it is redundant anyway, as it merely sets the default value.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The root cause of the failure is a bug in the Linux network stack:
https://lore.kernel.org/netdev/87sf0ldk41.fsf@urd.knielsen-hq.org/T/#u
If the slave does a connect(2) at the exact same time that kill -9 of the
master process closes the listening socket, the FIN or RST packet is lost in
the kernel, and the slave ends up timing out waiting for the initial
communication from the server. This timeout defaults to
--slave-net-timeout=120, which causes include/master_gtid_wait.inc to time
out first and fail the test.
Work around this problem by reducing the --slave-net-timeout for this test
case. If this problem turns up in other tests, we can consider reducing the
default value for all tests.
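In mtr, such a per-test override can go into the test's slave options file;
a hypothetical <testname>-slave.opt containing:

  --slave-net-timeout=10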
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
As of version 3.2.0, OpenSSL changed some error messages
(https://github.com/openssl/openssl/commit/81b741f68984). Update the
tests and result files so that they are compatible with both the
original and the new error messages.
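One common way to keep a single result file working with either message is
mysqltest's replace_regex; a hedged sketch with placeholder texts (the real
patterns would be the two OpenSSL messages):

  --replace_regex /new OpenSSL error text/original OpenSSL error text/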
All new code of the whole pull request, including one or several files that
are either new files or modified ones, is contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services,
Inc.
mtr_t::memmove(): Revert to the parent of
commit a032f14b34
where there was supposed to be an equivalent change
that would avoid hitting a warning in some old version of GCC
when this change was part of another 10.6-based development branch.
For some reason, this change is not equivalent, but causes
massive numbers of backup failures in the stress tests
run by Matthias Leich, caught by
commit 4179f93d28 in 10.6.
ibuf_remove_free_page(): Correct the calculation of root_savepoint().
The first entry acquired by ibuf_tree_root_get() will be ibuf.index.lock
and not the change buffer root page.
Thanks to Matthias Leich for finding this bug in RQG.
Unfortunately, this code is very difficult to cover
in our regression test suite.
The libpmem dependency that had been added in
commit 3daef523af (MDEV-17084)
did not achieve any measurable performance improvement when
comparing the same PMEM device with and without "mount -o dax"
using the Linux ext4 file system.
Because Red Hat has deprecated libpmem, let us remove the code
altogether.
Note: This is a 10.6 version of
commit 3f9f5ca48e
which will retain PMEM support in MariaDB Server 10.11.
Because the Red Hat Enterprise Linux 8 core repository does not include
libpmem, let us implement the necessary subset ourselves.
pmem_persist(): Implement for 64-bit x86, ARM, POWER, RISC-V, Loongarch
in a way that should be compatible with the https://github.com/pmem/pmdk/
implementation of pmem_persist().
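For illustration, a minimal x86-64 sketch of the flush-and-fence pattern
(the function name and the fixed 64-byte line size are simplifications; the
real code selects CLFLUSHOPT or CLWB when the CPU supports them):

  #include <emmintrin.h>  /* _mm_clflush(), _mm_sfence() */
  #include <stddef.h>
  #include <stdint.h>

  /* Write back all cache lines covering [buf, buf + size). */
  static void my_pmem_persist(const void *buf, size_t size)
  {
    const uintptr_t line= 64;  /* typical x86-64 cache line size */
    uintptr_t p= (uintptr_t) buf & ~(line - 1);
    const uintptr_t end= (uintptr_t) buf + size;
    for (; p < end; p+= line)
      _mm_clflush((const void *) p);
    /* CLFLUSH is strongly ordered; the fence is strictly needed only when
       the weakly ordered CLFLUSHOPT or CLWB is used instead. */
    _mm_sfence();
  }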
The CMake option WITH_INNODB_PMEM can be used for enabling or disabling
this interface at compile time. By default, it is enabled on all applicable
systems that are covered by our CI system.
Note: libpmem had not been previously enabled for Loongarch in our
Debian packaging. It was enabled for RISC-V, but we will not enable it
by default on RISC-V or Loongarch because we lack CI coverage.
The generated code for x86_64 was reviewed and tested on two
Intel implementations: one that only supports clflush, and
another that supports both clflushopt and clwb.
The generated machine code was also reviewed on https://godbolt.org
using various compiler versions. Godbolt helpfully includes an option
to compile to binary code and display the encoding, which was
useful on POWER.
Reviewed by: Vladislav Vaintroub
bulk_insert_apply_for_table(dict_table_t*)
This issue is caused by
commit 188c5da72a (MDEV-32453).
trx_t::bulk_insert_apply_for_table(): Remove the assertion on
check_unique_secondary and check_foreigns. InnoDB can
apply the bulk insert operation even after the
check_foreigns and check_unique_secondary variables have been disabled.
The new option works just like --tab with respect to output (an SQL file
for the table definition and tab-separated data, with the same options,
e.g. --parallel).
Compared to --tab, it allows --databases and --all-databases.
When --dir is used, it creates a directory structure in the output
directory pointed to by --dir. For every database to be dumped, there will
be a directory with the database name.
All options that --tab supports are also supported by --dir, in particular
--parallel.
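A hypothetical invocation and resulting layout (paths and database names
are examples, assuming the mariadb-dump client):

  mariadb-dump --dir=/backups/dump --all-databases --parallel=4
  # per database: /backups/dump/db1/t1.sql, /backups/dump/db1/t1.txt, ...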
It's a pointer into the net buffer, so it might be overwritten by the
next read or write. The next plugin switch (in multi-auth) will then
try to compare it (in send_plugin_request_packet), which is normally
harmless but fails the assertion with Lex_ident::is_valid_ident().
This patch also fixes:
MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER
- Adding a wrapper function CHARSET_INFO::streq(), to compare
two strings for equality. For now it calls strnncoll() internally.
In the future it will turn into a virtual function.
- Adding new accent sensitive case insensitive collations:
- utf8mb4_general1400_as_ci
- utf8mb3_general1400_as_ci
They implement accent sensitive case insensitive comparison.
The weight of a character is equal to the code point of its
upper case variant. These collations use Unicode-14.0.0 casefolding data.
The result of
  my_charset_utf8mb3_general1400_as_ci.strcoll()
is very close to the former
  my_charset_utf8mb3_general_ci.strcasecmp().
There is a difference only in a couple dozen rare characters, because of:
- the switch from "tolower" to "toupper" comparison, to make
utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
- the switch from Unicode-3.0.0 to Unicode-14.0.0
This difference should be tolerable. See the list of affected
characters in the MDEV description.
Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
Unlike utf8mb4_general_ci, it does not treat all BMP characters
as equal.
- Adding classes representing names of the file based database objects:
Lex_ident_db
Lex_ident_table
Lex_ident_trigger
Their comparison collation depends on the underlying
file system case sensitivity and on --lower-case-table-names
and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.
- Adding classes representing names of other database objects,
whose names have case insensitive comparison style,
using my_charset_utf8mb3_general1400_as_ci:
Lex_ident_column
Lex_ident_sys_var
Lex_ident_user_var
Lex_ident_sp_var
Lex_ident_ps
Lex_ident_i_s_table
Lex_ident_window
Lex_ident_func
Lex_ident_partition
Lex_ident_with_element
Lex_ident_rpl_filter
Lex_ident_master_info
Lex_ident_host
Lex_ident_locale
Lex_ident_plugin
Lex_ident_engine
Lex_ident_server
Lex_ident_savepoint
Lex_ident_charset
engine_option_value::Name
- All the mentioned Lex_ident_xxx classes implement a method streq():
if (ident1.streq(ident2))
do_equal();
This method works as a wrapper for CHARSET_INFO::streq().
- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
in class members and in function/method parameters.
- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  with
    ident1.streq(ident2)
- Taking advantage of the C++11 user-defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Usage example:
const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
is now a shorter version of:
const Lex_ident_column primary_key_name=
Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
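A simplified sketch of how such a literal operator can be declared (the
actual declarations live in m_strings.h and lex_ident.h):

  Lex_ident_column operator""_Lex_ident_column(const char *str, size_t length)
  {
    return Lex_ident_column(LEX_CSTRING{str, length});
  }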
Add assertions about limitations one has when using Index Condition
Pushdown:
- add handler::assert_icp_limitations()
- call this function from functions that may attempt violations.
Verified that assert_icp_limitations() as well as calls to it are
compiled away in release build.
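A minimal sketch of the shape of such a check (the exact condition is
illustrative; DBUG_ASSERT expands to nothing in release builds, which is
why it all compiles away):

  void handler::assert_icp_limitations(uchar *buf)
  {
    /* with a pushed index condition, rows must be read into record[0] */
    DBUG_ASSERT(!pushed_idx_cond || buf == table->record[0]);
  }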
When the change buffer records for a page span multiple change buffer
leaf pages, or the starting record is at the beginning of a page with a
left sibling, ibuf_delete_recs deletes only the records in the first
page and fails to move on to the subsequent pages.
Subsequently a slow shutdown hangs trying to delete those left over
records.
Fix-A: Position the cursor on a user record in the B-tree and exit only
when all records are exhausted.
Fix-B: Make sure we call ibuf_delete_recs during slow shutdown for
pages with IBUF entries, to clean up any previously left-over records.
- When `libcurl` is installed in a path outside the default search path,
  as is common on Windows, `include_directories` failed to find
  `curl/curl.h`.
- Fix the `cmake` setup by using the modern syntax with an imported target
  and `find_package` (see the sketch after this list)
- Fix warnings treated as errors
- Remove the `HASHICORP_HAVE_EXCEPTIONS` macro and related code
- Add the package to the `Server` component on Windows
- Tested with `$ ./mysql-test/mtr --suite=vault`
- Closes PR #3068
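A hedged sketch of the modern CMake pattern (the target name here is
illustrative):

  find_package(CURL REQUIRED)
  target_link_libraries(hashicorp_key_management PRIVATE CURL::libcurl)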
- Reviewer: <wlad@mariadb.com>
<julius.goryavsky@mariadb.com>
If replicating an event in ROW format, and InnoDB detects a deadlock
while searching for a row, the row event will error and roll back in
InnoDB and indicate that the binlog cache also needs to be cleared,
i.e. by marking thd->transaction_rollback_request. In the normal
case, this will trigger an error in Rows_log_event::do_apply_event()
and cause a rollback. During the Rows_log_event::do_apply_event()
cleanup of a successful event application, there is a DBUG_ASSERT in
log_event_server.cc::rows_event_stmt_cleanup(), which sets the
expectation that thd->transaction_rollback_request cannot be set
because the general rollback (i.e. not the InnoDB rollback) should
have happened already. However, if the replica is configured to skip
deadlock errors, the rows event logic will clear the error and
continue on, as if no error happened. This results in
thd->transaction_rollback_request being set while in
rows_event_stmt_cleanup(), thereby triggering the assertion.
This patch fixes this in the following ways:
1) The assertion is invalid, and is thereby removed.
2) The rollback case is forced in rows_event_stmt_cleanup() if
transaction_rollback_request is set.
Note the differing behavior between transactions which are skipped
due to deadlock errors and other errors. When a transaction is
skipped due to an ignored deadlock error, the entire transaction is
rolled back and skipped (though note MDEV-33930 which allows
statements in the same transaction after the deadlock-inducing one
to commit). When a transaction is skipped due to ignoring a
different error, only the erroring statements are rolled back and
skipped; the rest of the transaction will execute as normal. The
effect of this can be seen in the test results. The added test case
to rpl_skip_error.test shows that only statements which are ignored
due to non-deadlock errors are ignored in larger transactions. A
diff between rpl_temporary_error2_skip_all.result and
rpl_temporary_error2.result shows that all statements in the errored
transaction are rolled back (diff pasted below):
: diff rpl_temporary_error2.result rpl_temporary_error2_skip_all.result
49c49
< 2 1
---
> 2 NULL
51c51
< 4 1
---
> 4 NULL
53c53
< * There will be two rows in t2 due to the retry.
---
> * There will be one row in t2 because the ignored deadlock does not retry.
57d56
< 1
59c58
< 1
---
> 0
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
On Windows systems, occurrences of ERROR_SHARING_VIOLATION due to
conflicting share modes between processes accessing the same file can
result in CreateFile failures.
mysys' my_open() already incorporates a workaround by implementing
wait/retry logic on Windows.
But this does not help if files are opened using shell redirection, as
mysqltest traditionally did it, i.e. via
  --exec echo "some text" > output_file
In such cases, it is cmd.exe that opens the output_file, and it
won't do any sharing-violation retries.
This commit addresses the issue by introducing a new built-in command,
'write_line', in mysqltest. This new command serves as a brief alternative
to 'write_file', with a single line of output, that also resolves variables
as 'exec' would.
Internally, this command uses my_open(), and therefore the retry-on-error
logic.
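A hedged usage sketch, replacing a shell-redirection write such as the one
above (the argument order shown is an assumption):

  --write_line wait $MYSQLTEST_VARDIR/tmp/mysqld.1.expect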
Hopefully this will eliminate the very sporadic "can't open file because
it is used by another process" error on CI.
We have quite a few assertions
ut_a(m_prebuilt->trx == thd_to_trx(ha_thd()));
in low-level functions.
These had better be debug assertions for performance reasons.
It should suffice to check that condition in the less frequently invoked
ha_innobase::change_active_index().
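That is, downgrade the hot-path checks to InnoDB's debug-only assertion
macro, e.g.:

  ut_ad(m_prebuilt->trx == thd_to_trx(ha_thd()));  /* no-op in release */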
convert_search_mode_to_innobase(): Return whether the mode is
unsupported, and optionally update ha_innobase::m_last_match_mode.
ha_innobase::index_read(): Only branch on find_flag once, and
simplify the error handling after invoking row_search_mvcc().
ha_innobase::rnd_pos(): Remove an assertion that is duplicating one
in ha_innobase::index_read(), which we are calling unconditionally.
ha_innobase::records_in_range(): Check only once whether
min_key, max_key are null pointers.
row_sel_convert_mysql_key_to_innobase(): Declare all parameters
except the conversion buffer pointer (buf) to be nonnull.
Reviewed by: Debarun Banerjee
Support index condition pushdown within partitioned tables.
- ha_partition will pass the pushed index condition into all of the used
partitions.
- We require all of the partitions to handle the pushed index condition
  in the same way.
- When using ICP, one may read rows (e.g. call h->index_read_map(buf, ...))
  only into buf= table->record[0], for two reasons:
  * The pushed index condition's Item_field objects point into record[0].
  * InnoDB requires this: it calls offset(), which assumes record[0].
So, when using ICP, ha_partition will read partition records into
table->record[0] and then copy the record away if it needs it to be
elsewhere.
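A hedged sketch of that copy-away step (simplified; buffer handling in the
real code differs):

  /* read into record[0] so the pushed condition can be evaluated there */
  error= m_file[part]->ha_index_read_map(table->record[0], key,
                                         keypart_map, find_flag);
  if (!error && buf != table->record[0])
    memcpy(buf, table->record[0], m_rec_length);  /* copy to caller's buf */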
rtr_pcur_getnext_from_path(): Remove a bogus assertion
that may cause a data race with buf_LRU_block_free_non_file_page().
If my_latch_mode == BTR_MODIFY_LEAF, we would have released all page
latches and buffer-fixes by invoking mtr->rollback_to_savepoint(1).
After this point, the btr_cur->page_cur.block is no longer valid and
must not be accessed.
Before 03ca6495df this assertion had
been disabled, because the preprocessor symbol UNIV_RTR_DEBUG
had never been enabled (except when explicitly specified in
CMAKE_CXX_FLAGS).
Reviewed by: Debarun Banerjee