Mariabackup (mariadb-backup) supports the --use-memory option, which
sets the InnoDB buffer pool size, but the current SST scripts do not
use it. This commit adds support for this option; its value can be
specified via the "use_memory" parameter in the configuration file,
in the [sst], [mariabackup] or [xtrabackup] sections (the latter is
supported only for compatibility with old configurations).
In addition, if the innodb_buffer_pool_size option is specified in
the user configuration (in the main server configuration sections)
or passed to the SST scripts or the server via arguments, its value
is also passed to mariadb-backup as the value for the --use-memory
option.
A new section name [mariabackup] has also been added, which can
be used instead of the deprecated [xtrabackup] (the section name
"mariabackup" was specified in the documentation, but was not
actually supported by SST scripts before this commit).
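For illustration, a configuration fragment along these lines (the value
shown is arbitrary) would now be honored by the SST scripts; the same
parameter is also recognized in the [mariabackup] and legacy [xtrabackup]
sections:

[sst]
use_memory=1G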
Fix a regression introduced by commit d98ac851 (MDEV-29935, MDEV-26247) causing
MAX_TABLES overflow in `setup_table_map()`. The check for MAX_TABLES was moved
outside of the loop that increments table numbers, allowing overflows during
loop iterations. Since setup_table_map() operates on a 64-bit bitmap, table
numbers exceeding 64 triggered the UBSAN check.
This commit moves the overflow check back inside the loop and adds a debug
assertion to `setup_table_map()` to ensure that no bitmap overrun occurs.
When running the ./mtr tests and getting failures, rather than providing a
dead link to mysql.com, this points developers to the Jira instance.
Signed-off-by: Eric Herman <eric@freesa.org>
opt_calc_index_goodness(): Correct an inaccurate condition.
We can very well use a clustered index of a table that is subject
to online rebuild. But we must not choose an index that has not been
committed (it is a secondary index that was not fully created)
or that is corrupted or not a normal B-tree index.
opt_search_plan_for_table(): Remove some redundant code, now that
opt_calc_index_goodness() checks against corrupted indexes.
The test case allows this code to be exercised. The main observation,
when running the following:
./mtr --rr innodb.stats_persistent
rr replay var/log/mysqld.1.rr/latest-trace
should be that when opt_search_plan_for_table() is being invoked by
dict_stats_update_persistent() on the being-altered statistics table
in the 2nd call after ha_innobase::inplace_alter_table(),
and the fix in opt_calc_index_goodness() is absent,
it would choose the code path if (n_fields == 0), that is, a full
table scan, instead of searching for the record. The GDB commands to
execute in "rr replay" would be as follows:
break ha_innobase::inplace_alter_table
continue
break opt_search_plan_for_table
continue
continue
next
next
…
Reviewed by: Vladislav Lesin
strerror_s on Linux will, for unknown error codes, display
'Unknown error <codenum>' and our tests are written with this assumption.
However, on macOS, strerror_s returns 'Unknown error: <codenum>' in the
same case, which breaks tests. Make my_strerror consistent across
platforms by removing the ':' when present.
The previous commit for fixing MDEV-35446 disabled setting
Galera errors on COM_STMT_PREPARE commands.
As a side effect, a number of tests started to fail
because the client received error codes different from the
ones expected in the test, depending on whether --ps-protocol
was used.
Also, in the case of the galera_ftwrl test, it was found that
during a COM_STMT_PREPARE command we are expected to possibly
perform a sync wait operation, which can fail with a
LOCK_WAIT_TIMEOUT error.
The revised fix consists of moving the call to
wsrep_after_command_before_result() earlier, so that we check for
BF aborts or errors during statement prepare, before sending
the statement metadata message back to the client.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Partition tests requiring lower_case_table_names = 2 (the default on macOS)
fail on macOS because the product has changed over time but the tests were
not run regularly enough to catch their breakage.
The limit on socket path length on Unix, according to libc, is 108 (see
sockaddr_un::sun_path), but in the table it is a string of maximum length
64. This results in truncation of the socket path and failure to connect
by plugins that use servers, such as Spider.
The test turns out to be sensitive to @@global.gtid_cleanup_batch_size.
With the rather small default value of the latter,
SELECTing from mysql.gtid_slave_pos may not be deterministic: tests
that run before may grow the batch pending automatic deletion.
The test is refined to set its own, virtually unreachable, value
for the batch size.
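For illustration, the kind of setting the test now applies (the value here
is arbitrary, not necessarily the one used by the test):

SET @@global.gtid_cleanup_batch_size = 1000000;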
Thanks to Kristian Nielsen for the analysis.
zlib-ng results in a different compression length. The compression
length isn't that important, as the test output examines the uncompressed
results.
fixes for zlib-ng
backport of 75488a57f2
This problem occurred for statements like `INSERT INTO t1 SELECT 1`,
which do not have tables in the SELECT part. In such scenarios
SELECT_LEX::insert_tables was not properly set at `setup_tables()`,
and this led to either incorrect execution or a crash.
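A minimal statement shape of the affected kind (the table definition is
hypothetical):

CREATE TABLE t1 (a INT);
INSERT INTO t1 SELECT 1;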
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Item_func_concat_ws::val_str():
- collects the result into the string "str" passed as a parameter.
- calls val_str(&tmp_buffer) to get arguments.
At some point, due to a heuristic, it decides to swap the buffers:
- collect the result into &tmp_buffer
- call val_str(str) to get arguments
Item_func_password::val_str_ascii() returns a String pointing to its
member tmp_value[SCRAMBLED_PASSWORD_CHAR_LENGTH+1].
As a result, it's possible that both str and tmp_buffer in
Item_func_concat_ws::val_str() point to Item_func_password::tmp_value.
Then memcmp() is called on overlapping memory fragments.
Fixing Item_func_password::val_str_ascii() to use Item::copy()
instead of Item::set().
This bug has the same nature as the issues
MDEV-34718: Trigger doesn't work correctly with bulk update
MDEV-24411: Trigger doesn't work correctly with bulk insert
To fix the issue for all use cases, the code that temporarily resets
thd->bulk_param to nullptr before invoking triggers and restores
its original value when a trigger finishes execution is moved to the method
Table_triggers_list::process_triggers,
which is ultimately invoked for any kind of trigger.
add_special_frame_cursors() did not check the return
value of offset_func->fix_fields(). It can return an error
if the data type does not support the "minus" operator.
When UNION ALL is used with LIMIT ROWS EXAMINED, and when the limit is
exceeded for a SELECT that is not the last in the UNION, interrupt the
execution and call end_eof on the result. This makes sure that the
results are sent, and the query result status is conclusive rather
than empty, which would cause an assertion failure.
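A sketch of the affected statement shape (tables, data and the limit value
are hypothetical), where the examined-rows limit can already be exceeded
while executing the first SELECT of the UNION:

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t1 VALUES (1),(2),(3);
INSERT INTO t2 VALUES (4),(5),(6);
SELECT a FROM t1 UNION ALL SELECT a FROM t2 LIMIT ROWS EXAMINED 2;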
Added get_footprint() implementation for FreeBSD (and for other
non-Linux systems), and added "apparent file size" mode for Linux
to take into account the real file size (without compression) when
used with filesystems like ZFS.
This commit fixes some functions in wsrep_sst_common
to ensure that now and in the future return codes from
a number of helper functions will be zero on success.
Fixed some issues in the script code, mainly related
to handling situations when a failure occurs:
1) the signal handler in the mariadb-backup SST script
was using an uninitialized variable when trying to kill
a hung streaming process;
2) inaccurate error messages were sometimes logged;
3) after completing SST, temporary or old (extra) files
could remain in database directories.
Because of CF_REEXECUTION_FRAGILE, a prepared SELECT statement needs to
check that the table has the same definition as before; otherwise a
re-prepare of the statement can occur.
When running many 'SELECT DEFAULT(name) FROM table1_containing_sequence'
statements in parallel, TABLE_LIST::is_the_same_definition may be called when
m_table_ref_type is TABLE_REF_NULL because it hasn't been checked yet.
In this case, populate the TABLE_LIST with the values determined by the
TABLE_SHARE and allow the execution to continue.
As a result of this, the main.ps_ddl test doesn't need to reprepare
as the definition hasn't changed. This is another case where
TABLE_LIST::is_the_same_definition is called when m_table_ref_type is
TABLE_REF_NULL, but that doesn't mean that the definition is different.
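A sketch of the kind of schema and statement involved (object names are
hypothetical); the SELECT is the statement run from many connections in
parallel:

CREATE SEQUENCE s1;
CREATE TABLE t1 (id INT NOT NULL DEFAULT (NEXT VALUE FOR s1), a INT);
SELECT DEFAULT(id) FROM t1;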
Fixing a wrong DBUG_ASSERT.
thd->start_time and thd->start_time_sec_part cannot be 0 at the same time.
But thd->start_time can be 0 when thd->start_time_sec_part is not 0,
e.g. after:
SET timestamp=0.99;
Most resource limit information is excessive, particularly for
limits that aren't actually limited.
We restructure the output based on the Linux format
of /proc/<pid>/limits, which has its soft limits beginning at offset
26. Lines whose soft limit starts with "u" (i.e. "unlimited") are skipped.
Example output:
Resource Limits (excludes unlimited resources):
Limit                     Soft Limit           Hard Limit           Units
Max stack size            8388608              unlimited            bytes
Max processes             127235               127235               processes
Max open files            32198                32198                files
Max locked memory         8388608              8388608              bytes
Max pending signals       127235               127235               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
This is 8 lines fewer than before.
The FreeBSD limits file was /proc/curproc/rlimit, with a different format,
so no loss is expected for non-Linux OSes that have procfs enabled.
Provide the bug URL in addition to how to report the bug.
Remove obsolete information like key_buffers and used connections
as they haven't meaningfully added value to a bug report for quite
a while. Remove information that comes from long-fixed interfaces
in glibc and the kernel.
Encourage the use of a full backtrace from the core with debug
symbols.
Let's be realistic about the error messages: it's users we are addressing,
not developers, so the key aspect is wording that gets the information
communicated.
All the user-readable text and instructions are in one place, since
text the user cannot understand is where they stop reading.
Remove the duplicate printing of the query.
Use my_progname rather than "mysqld" to reflect the program name.
So the signal handler output is now in the form:
1. User instructions
2. Server Information
3. Stacktrace
4. connection/query/optimizer_switch
5. Core information and resource limits
6. Kernel information
The segfault in wsrep_check_sequence is due to a
null pointer dereference on:
db_type= thd->lex->create_info.db_type->db_type;
where create_info.db_type is null. This occurred under
a used_engine==true condition, which is set in the calling
function based on create_info.used_fields==HA_CREATE_USED_ENGINE.
However, create_info.used_fields was a leftover
from the parsing of the previous, failed CREATE TABLE, where,
because of its failure, db_type wasn't populated.
This is corrected by cleaning the create_info when we start
to parse ALTER SEQUENCE statements.
Other paths to wsrep_check_sequence are via CREATE SEQUENCE
and CREATE TABLE ... LIKE, both of which initialize the create_info
correctly.
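A hypothetical repro shape under Galera (object names and the invalid
engine are made up): the failed CREATE TABLE leaves HA_CREATE_USED_ENGINE
set in create_info.used_fields without a valid db_type, and the subsequent
ALTER SEQUENCE then dereferenced the null pointer:

CREATE SEQUENCE s1;
CREATE TABLE t1 (a INT) ENGINE=no_such_engine; -- fails
ALTER SEQUENCE s1 INCREMENT BY 2;              -- previously crashed in wsrep_check_sequence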
os_innodb_umask was of the incorrect type, resulting in warnings
from clang-19. The correct type is mode_t.
As os_innodb_umask was set during innodb_init from my_umask,
the type is corrected there along with its companion my_umask_dir.
Because of this, the default mask values in InnoDB never
had an effect.
The resulting change exposed signedness differences in
my_create{,_nosymlink}, open_nosymlinks:
mysys/my_create.c:47:20: error: operand of ?: changes signedness from ‘int’ to ‘mode_t’ {aka ‘unsigned int’} due to unsignedness of other operand [-Werror=sign-compare]
47 | CreateFlags ? CreateFlags : my_umask);
Ref: clang-19 warnings:
[55/123] Building CXX object storage/innobase/CMakeFiles/innobase.dir/os/os0file.cc.o
storage/innobase/os/os0file.cc:1075:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1075 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1249:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1249 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1381:45: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1381 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
Threads can normally exit without an explicit pthread_exit call.
These calls seem to date back to old glibc bugs, many around glibc 2.2.5.
A semi-related bug was https://bugs.mysql.com/bug.php?id=82886.
To improve safety in the signal handlers, DBUG_* code was removed.
These changes were also needed to avoid some unresolved MSAN stack issues.
This is effectively a backport of 2719cc4925.
These tests rely on THR_KEY_mysys but it is not initialized. On
Linux, the corresponding thread variable is null, but on macOS it has a
nonzero value. In all cases, initialize the variable explicitly by
calling MY_INIT and my_end appropriately.
The Write_rows_log_event originally allocated the m_rows_buf up-front, and
thus is_valid() checks that the buffer is allocated correctly. But at some
point this was changed to allocate the buffer lazily on demand. This means
that a valid event can now have m_rows_buf==NULL. The is_valid() code was
not changed, and thus is_valid() could return false on a valid event.
This caused a bug for REPLACE INTO t() VALUES(), () which generates a
write_rows event with no after image; then the m_rows_buf was never
allocated and is_valid() incorrectly returned false, causing an error in
some other parts of the code.
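A minimal setup around the statement quoted above (the table definition is
hypothetical; row-based binary logging is assumed):

CREATE TABLE t (a INT DEFAULT 1) ENGINE=InnoDB;
SET SESSION binlog_format=ROW;
REPLACE INTO t () VALUES (), ();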
Also fix a couple of missing special cases in the code for mysqlbinlog to
correctly decode (in comments) row events with missing after image.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-alter-table cases, in which case
the value is stale and contains whatever was set by any earlier ALTER TABLE.
This could cause the wrong error code to be generated, which in some cases
can then cause replication to break with a "different errorcode" error.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Limit SHOW BINLOG/RELAYLOG EVENTS in show_rpl_debug_info.inc to 200 lines.
Reviewed-by: Daniel Black <daniel@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
A fractional part < 100000 microseconds was printed without leading zeros,
causing such timestamps to be applied incorrectly in mariadb-binlog | mysql.
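For illustration (hypothetical values), an event with a fractional part of
12345 microseconds was emitted as

SET TIMESTAMP=1700000000.12345;

instead of the correct

SET TIMESTAMP=1700000000.012345;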
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
MDEV-32329 (patch) pushdown from having into where: Server crashes at sub_select
When generating an Item_equal with an Item_ref that refers to a field
outside of a subselect, remove_item_direct_ref() causes the dependency
(depended_from) on the outer select to be lost, which causes trouble
for code downstream that can no longer determine the scope of the Item.
Not calling remove_item_direct_ref() retains the Item's dependency.
Test cases from MDEV-32395 and MDEV-32329 are included.
Some fixes from other developers:
Monty:
- Fixed wrong code in Item_equal::create_pushable_equalities()
  that could cause the wrong item to be used if there were no matching items.
Daniel Black:
- Added test cases from MDEV-32329
Igor Babaev:
- Provided fix for removing call to remove_item_direct_ref() in
eliminate_item_equal()
MDEV-32395: update_depend_map_for_order: SEGV at /mariadb-11.3.0/sql/sql_select.cc:16583
Include test cases from MDEV-32329.
gcc 7.5.0 does not understand __attribute__((no_sanitize("undefined"))).
I moved the usage of this attribute from sql/set_var.h to
include/my_attribute.h and created a macro for it, depending on the
compiler used.