Problem:
=======
- During a LOAD statement, the InnoDB bulk operation relies on the
temporary directory and crashes when tmpdir is exhausted.
Solution:
========
During bulk insert, the LOAD statement builds the clustered
index one record at a time instead of one page at a time. By doing
this, InnoDB does the following:
1) Avoids creating a temporary file for the clustered index.
2) Writes the undo log only for the first insert operation.
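A minimal, standalone sketch of the idea above (not the actual InnoDB
bulk-load code; the container is only a stand-in for the clustered index):
each row is inserted into the index as it is read, instead of being
buffered and sorted through temporary files first, so no tmpdir space is
needed.

  #include <set>
  #include <string>
  #include <vector>

  // Stand-in for the clustered index: rows go in one at a time, in order.
  void load_rows(const std::vector<std::string> &rows,
                 std::multiset<std::string> &clustered_index)
  {
    for (const std::string &row : rows)
      clustered_index.insert(row);   // one record at a time, no tmpdir spill
  }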
extra/mariabackup/xtrabackup.cc:3407:15: error: conversion from ‘lsn_t’ {aka ‘long long unsigned int’} to ‘size_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
followup for 6acada713a
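A generic illustration of the class of fix implied by the warning above
(not the actual xtrabackup.cc change; lsn_to_size is a hypothetical
helper): make the narrowing from the 64-bit lsn_t to size_t explicit so
that -Wconversion is satisfied.

  #include <cstddef>
  #include <cstdint>

  typedef uint64_t lsn_t;                  // lsn_t is a 64-bit type

  static size_t lsn_to_size(lsn_t lsn) noexcept
  {
    return static_cast<size_t>(lsn);       // explicit, intentional narrowing
  }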
This tests the compile using a spec file. The spec file is defined
by environment variables. If the spec file doesn't exist
(SLES/OpenSUSE), isn't supported (e.g. clang), or has incorrect
environment variables, the linker flag won't be used.
Stripping the output avoids a gcc ICE bug:
https://bugzilla.redhat.com/show_bug.cgi?id=2336272
A launcher is used to make the environment variables show up in the
build step rather than just the configure step.
Compiling build information into the executable allows core file
handlers to access information on the distro and source package
version. This information can sometimes be lost between the source
and an upstream bug report.
The Debian dh-package-notes package provides a makefile, included
from debian/rules, that sets the linker flags to the right values.
The jammy version of dh-package-notes does not include the same
makefile implementation as the others.
ref: https://systemd.io/ELF_PACKAGE_METADATA/
Fix recv_sys_t::parse(): correctly handle the storing==BACKUP case,
and simplify some logic around storing==YES as well.
The added test mariabackup.undo_truncate is based on an idea of
Thirunarayanan Balathandayuthapani. It nondeterministically (not on
every run) covers this logic, including the function backup_undo_trunc(),
for both innodb_encrypt_log=ON and innodb_encrypt_log=OFF.
Reviewed by: Debarun Banerjee
log_t::resize_start(): If the ib_logfile101 cannot be created,
be sure to reset log_sys.resize_lsn.
log_t::resize_abort(): In case SET GLOBAL innodb_log_file_size is
aborted, delete the ib_logfile101.
Let us enable pmem_persist() on RISC-V and LoongArch, because those are
available in the Debian CI.
In commit 3f9f5ca48e these were initially
disabled by default.
According to the available documentation, these instructions are
available in all ISA versions. On LoongArch there would also be
__builtin_loongarch_dbar() that generates the same code.
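A hedged sketch of an architecture-conditional barrier of the kind
discussed above (persist_barrier is a hypothetical name, and the hint
value passed to the builtin is an assumption; this is not the actual
pmem_persist() implementation):

  #include <atomic>

  static inline void persist_barrier() noexcept
  {
  #if defined __loongarch__
    __builtin_loongarch_dbar(0);          /* the builtin mentioned above;
                                             hint 0 assumed to be a full barrier */
  #else
    std::atomic_thread_fence(std::memory_order_seq_cst); /* portable fallback */
  #endif
  }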
innodb_log_file_mmap: Use a constant documentation string that
refers to persistent memory even when it is not available in the build.
HAVE_INNODB_MMAP: Remove, and unconditionally enable this code.
log_mmap(): On 32-bit systems, ensure that the size fits in 32 bits.
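A minimal illustration of the 32-bit check mentioned above (not the
actual log_mmap() code; mmap_size_is_representable is a hypothetical
helper): accept only a size that round-trips through size_t, which on a
32-bit system rejects anything that does not fit in 32 bits.

  #include <cstddef>
  #include <cstdint>

  static bool mmap_size_is_representable(uint64_t size) noexcept
  {
    return static_cast<uint64_t>(static_cast<size_t>(size)) == size;
  }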
log_t::resize_start(), log_t::resize_abort(): Only handle memory-mapping
if HAVE_PMEM is defined. The generic memory-mapped interface is only for
reading the log in recovery. Writable memory mappings are only for
persistent memory, that is, Linux file systems with mount -o dax.
Reviewed by: Debarun Banerjee, Otto Kekäläinen
log_t::persist(): Remove the parameter holding_latch, and assert
latch_holding_any(). We used to avoid acquiring a latch when log
resizing was not in progress. That allowed a race condition to occur
where log_t::write_checkpoint() had just completed log resizing.
In that case, we could wrongly invoke pmem_persist() on the old
log_sys.buf instead of the new one, which shortly before was known
as log_sys.resize_buf.
log_write_persist(): A non-inline wrapper function that will
invoke log_sys.persist() while holding a shared log_sys.latch.
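A standalone analogue of that pattern (using std::shared_mutex instead
of the real log_sys.latch; fake_log and its members are stand-ins, not
the actual InnoDB types): the non-inline wrapper always takes the latch
in shared mode before persisting, so a concurrent resize cannot swap
the buffer underneath it.

  #include <shared_mutex>

  struct fake_log                     // stand-in for log_sys
  {
    std::shared_mutex latch;
    void persist() noexcept {}        // placeholder for the pmem_persist() path
  };

  static fake_log log_example;

  void log_write_persist_example() noexcept
  {
    std::shared_lock<std::shared_mutex> guard(log_example.latch);
    log_example.persist();
  }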
By default, CMAKE_BUILD_TYPE RelWithDebInfo or Release implies -DNDEBUG,
which disables the assert() macro. MariaDB is deviating from that.
Let us be explicit to use assert() only in debug builds.
This fixes up 1b8358d943
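A minimal illustration of the point above: when NDEBUG is defined (as
it is by default for Release and RelWithDebInfo builds), assert()
expands to nothing and its argument is never evaluated.

  #include <cassert>
  #include <cstdio>

  int main()
  {
    int evaluated = 0;
    assert(++evaluated > 0);           // not evaluated when NDEBUG is defined
    std::printf("%d\n", evaluated);    // prints 0 under -DNDEBUG, 1 otherwise
    return 0;
  }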
The macros ut_ad() and DBUG_ASSERT() can evaluate their argument twice.
That is wrong for any read-modify-write arguments.
Thanks to Nikita Malyavin for pointing this out.
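A minimal illustration of the problem (BAD_ASSERT is a made-up macro,
not the real ut_ad() or DBUG_ASSERT()): a macro that expands its
argument twice turns a read-modify-write expression into two
modifications.

  #include <cstdio>

  /* expands x twice: the second test runs whenever the first one passes */
  #define BAD_ASSERT(x) \
    do { if (!(x) || !(x)) std::puts("assertion failed"); } while (0)

  int main()
  {
    int n = 0;
    BAD_ASSERT(++n == 1);      // second expansion sees n already incremented
    std::printf("n=%d\n", n);  // prints n=2; "assertion failed" was also printed
    return 0;
  }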
recv_sys_t::parse(): When parsing an OPTION record, invoke
l.copy_if_needed() before checking if the payload is OPT_PAGE_CHECKSUM
followed by a 32-bit page checksum.
This fixes up the merge 57d4a242da of
commit 4179f93d28 (MDEV-18976).
The impact of this can be observed by running a debug instrumented
build on the test encryption.recovery_memory. There should be over
5,000 invocations of log_phys_t::page_checksum(). Without this fix,
there should be less than 100 of them (when the OPT_PAGE_CHECKSUM
byte happens to encrypt to itself).
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
This fixes another regression that had been introduced in
commit b249a059da (MDEV-34850).
This should prevent failures of mariadb-backup --backup of
the following type:
mariabackup: Failed to read undo log tablespace space id …
and there is no undo tablespace truncation redo record.
This error has not been hit by our internal testing, and we
currently have no regression test to cover this.
recv_sys_t::parse<storing=NO>(): Do invoke
fil_space_set_recv_size_and_flags() and do parse enough of page 0
to facilitate that.
This fixes a regression that had been introduced in
commit b249a059da (MDEV-34850).
In a multi-batch crash recovery, we would fail to invoke
fil_space_set_recv_size_and_flags() while parsing the remaining log,
before starting the first recovery batch.
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
when testing MDEV-34539 create a table specifically for the test,
don't use a system table as a shortcut to save a couple of lines.
followup for 8d813f080b
use the same condition in
fill_schema_table_from_frm() when open_table_from_share() fails, as in
fill_schema_table_from_frm() when tdc_acquire_share() fails and as in
fill_schema_table_from_open() when open_table_from_share() fails
get_all_tables() skipped tables if the user had no privileges on
the schema itself and no granted privilege on any tables in the schema.
that is, it was skipping performance_schema tables (privileges
on them aren't explicitly granted, but internally hard-coded)
To fix:
* extend the ACL_internal_table_access::check() method with
`bool any_combination_will_do` (see the sketch after this list)
* fix all perfschema privilege checks to take it into account.
* don't reuse the table_acl_check object for all tables, initialize it
for every table, otherwise GRANT_INTERNAL_INFO will leak
* remove the incorrect privilege check from get_all_tables()
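A standalone sketch of the `any_combination_will_do` idea from the list
above (check_any_or_all and the bitmask arguments are hypothetical, not
the actual ACL_internal_table_access::check() signature): with the flag
set, holding any one of the wanted privilege bits is enough; without it,
all of them are required.

  #include <cstdint>

  static bool check_any_or_all(uint64_t granted, uint64_t wanted,
                               bool any_combination_will_do)
  {
    return any_combination_will_do ? (granted & wanted) != 0
                                   : (granted & wanted) == wanted;
  }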
cannot have an assert in Warning_info::push_warning()
because the SQL command SIGNAL can set an absolutely arbitrary
message, even an empty one or one ending with '\n'.
move the assert into push_warning() and my_message_sql() instead.
followup for 9508a44c37
Most InnoDB functions do not throw any exceptions, not even indirectly
std::bad_alloc, which could be thrown by a C++ memory allocation function.
Let us annotate many functions with noexcept in order to reduce the code
footprint related to exception handling.
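A small illustration of the effect (do_io is a hypothetical function):
the noexcept specification is visible to callers, so the compiler does
not have to generate exception-propagation paths around calls to it.

  static void do_io() noexcept { /* never throws */ }

  static_assert(noexcept(do_io()),
                "annotated functions are statically known not to throw");

  int main() { do_io(); return 0; }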
Reviewed by: Thirunarayanan Balathandayuthapani
The issue is caused by a logic error in the Item_sum::get_tmp_table_item()
method: it resets the arguments of the item to point to the result fields
during the change_ref_to_tmp_fields() call. However, Item_sum arguments
must not be modified. It is enough for Item_sum objects to call the
ancestor's implementation, Item::get_tmp_table_item().
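A standalone sketch of the shape of the fix described above (simplified
stand-in classes, not the actual sql/item.h hierarchy): the aggregate
item defers to the ancestor's implementation and leaves its own
arguments alone.

  struct Item
  {
    virtual Item *get_tmp_table_item() { return this; }
    virtual ~Item() = default;
  };

  struct Item_sum : Item
  {
    Item *get_tmp_table_item() override
    {
      // do not rewrite the aggregate's arguments here; the ancestor's
      // behaviour is sufficient
      return Item::get_tmp_table_item();
    }
  };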
This fix is in accordance with MySQL commit 2e3dc09087c24798c90e05163ed3d931f6b93db3
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Add a simple test to verify the server behaves in a safe manner if configured
with ciphers that aren't compatible with the server certificate.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Add a simple test to verify that the server will fail to start up when no valid
cipher suites are passed to `ssl-cipher`.
As different TLS libraries and versions have differing cipher suite support, it
would be a good idea to ensure the server behaves in a safe manner if it is
configured with invalid cipher suites.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
The LOCK_global_system_variables must not be held when taking mutexes
such as LOCK_commit_ordered and LOCK_log, as this causes inconsistent
mutex locking order that can theoretically cause the server to
deadlock.
To avoid this, temporarily release LOCK_global_system_variables in two
system variable update functions, like it is done in many other
places.
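A standalone sketch of the unlock/relock pattern described above (using
std::mutex and a made-up update function; the real code uses the
server's mysql_mutex_t and its own helpers): LOCK_global_system_variables
is released before the narrower mutex is taken, and re-acquired
afterwards, so the two are never held in the wrong order.

  #include <mutex>

  static std::mutex LOCK_global_system_variables;
  static std::mutex LOCK_log;

  static void example_sysvar_update()
  {
    // the caller holds LOCK_global_system_variables
    LOCK_global_system_variables.unlock();
    {
      std::lock_guard<std::mutex> guard(LOCK_log);
      // ... apply the new value under LOCK_log ...
    }
    LOCK_global_system_variables.lock();
  }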
Enforce the correct locking order at server startup, to more easily
catch (in debug builds) any remaining wrong orders that may be hidden
elsewhere in the code.
Note that when this is merged to 11.4, similar unlock/lock of
LOCK_global_system_variables must be added in update_binlog_space_limit()
as is done in binlog_checksum_update() and fix_max_binlog_size(), as this
is a new function added in 11.4 that also needs the same fix. Tests will
fail with wrong mutex order until this is done.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
ha_innobase::delete_table(): Clear trx->dict_operation_lock_mode
after, not before, invoking trx->rollback(), so that
row_undo_mod_parse_undo_rec() will be invoked with dict_locked=true
and dict_sys_t::freeze() will not be invoked for loading a table
definition. Inside dict_sys_t::freeze(), an assertion !have_any()
would fail when the current thread is already holding the latch.
This fixes up commit c5fd9aa562 (MDEV-25919).
Reviewed by: Debarun Banerjee
during FLUSH PRIVILEGES, allow_all_hosts temporarily goes out of sync
with acl_check_hosts and acl_wild_hosts.
As it's tested in acl_check_host() without a mutex, let's re-test it
under a mutex to make sure the value is correct.
Note that the unlocked test is just an optimization and it's ok to see
an outdated allow_all_hosts value here.
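A standalone sketch of the re-test described above (simplified types and
a made-up function name, not the actual sql_acl.cc code): the unlocked
read is only a fast path, and the value is read again under the mutex
before the host lists are consulted.

  #include <mutex>

  static std::mutex acl_cache_lock;
  static bool allow_all_hosts = true; // may be briefly stale when read unlocked

  static bool acl_check_host_example()
  {
    if (allow_all_hosts)              // unlocked read: only an optimization
      return true;
    std::lock_guard<std::mutex> guard(acl_cache_lock);
    if (allow_all_hosts)              // re-test under the mutex
      return true;
    // ... consult acl_check_hosts / acl_wild_hosts under the mutex ...
    return false;
  }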
* replace the message away in the test result
* remove "feedback plugin:" prefix, it's a server message, not plugin's
* downgrade to the warning, because
1) it's not a failure, no operation was aborted, server still works
2) it's something actionable, so not a [Note] either