Fixing buildbot failures on mariabackup.aria_log_dir_path_rel.
The problem was that directory_exists() was called with the
relative aria_log_dir_path value, while the current directory
in mariadb-backup is not necessarily equal to datadir when MTR is running.
Fix:
- Moving the code that builds the absolute path one level up:
from the function copy_back_aria_logs() to the function copy_back().
- Passing the built absolute path to both directory_exists() and
copy_back_aria_logs() as a parameter.
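As an illustration of the new call structure, here is a minimal, self-contained
sketch (the helper bodies are hypothetical; the real mariadb-backup code differs):
copy_back() resolves the possibly relative aria_log_dir_path against the target
datadir once and passes the absolute path down.
```
#include <iostream>
#include <string>

// Illustrative stand-ins, not the real mariadb-backup functions.
static bool directory_exists(const std::string &abs_path)
{ std::cout << "checking " << abs_path << "\n"; return true; }

static void copy_back_aria_logs(const std::string &abs_aria_log_dir)
{ std::cout << "copying aria_log* into " << abs_aria_log_dir << "\n"; }

// copy_back() builds the absolute path once and hands it to the callees,
// so they no longer depend on the current working directory.
static void copy_back(const std::string &datadir,
                      const std::string &aria_log_dir_path)
{
  const bool absolute= !aria_log_dir_path.empty() &&
    (aria_log_dir_path[0] == '/' || aria_log_dir_path[0] == '\\');
  const std::string abs_dir= absolute ? aria_log_dir_path
                                      : datadir + "/" + aria_log_dir_path;
  if (directory_exists(abs_dir))
    copy_back_aria_logs(abs_dir);
}

int main()
{
  copy_back("/var/lib/mysql", "aria_logs");   // relative: resolved against datadir
  copy_back("/var/lib/mysql", "/srv/aria");   // absolute: used as-is
}
```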
This bug caused a server crash when processing a multi-update statement that
used views if optimizer tracing was enabled.
The bug was introduced by the patch for MDEV-30539, which could incorrectly
detect the top-level selects of queries if views were used in them.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- `mariadb-backup --backup` was fixed to fetch the value of the
@@aria_log_dir_path server variable and copy aria_log* files
from @@aria_log_dir_path directory to the backup directory.
Absolute and relative (to --datadir) paths are supported.
Before this change aria_log* files were copied to the backup
only if they were in the default location in @@datadir.
- `mariadb-backup --copy-back` now understands a new my.cnf and command line
parameter --aria-log-dir-path.
`mariadb-backup --copy-back` in the main loop in copy_back()
(when copying back from the backup directory to --datadir)
was fixed to ignore all aria_log* files.
A new function copy_back_aria_logs() was added.
It consists of a separate loop copying back aria_log* files from
the backup directory to the directory specified in --aria-log-dir-path.
Absolute and relative (to --datadir) paths are supported.
If --aria-log-dir-path is not specified,
aria_log* files are copied to --datadir by default.
- The function is_absolute_path() was fixed to understand MTR style
paths on Windows with forward slashes, e.g.
--aria-log-dir-path=D:/Buildbot/amd64-windows/build/mysql-test/var/...
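A hedged sketch of such a check (illustrative only, not the server's actual
is_absolute_path()): on Windows a path counts as absolute when it starts with
a drive letter, a colon and either kind of slash.
```
#include <cctype>
#include <iostream>

// Accept "D:\dir\file" as well as MTR-style "D:/dir/file" on Windows,
// plus plain "/..." and "\\..." prefixes everywhere.
static bool is_absolute_path_example(const char *path)
{
  if (path[0] == '/' || path[0] == '\\')
    return true;
#ifdef _WIN32
  if (std::isalpha(static_cast<unsigned char>(path[0])) &&
      path[1] == ':' && (path[2] == '/' || path[2] == '\\'))
    return true;
#endif
  return false;
}

int main()
{
  std::cout << is_absolute_path_example("D:/Buildbot/amd64-windows/build") << "\n";
  std::cout << is_absolute_path_example("aria_logs") << "\n";
}
```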
fp->field_length was unsigned, so the condition checking it for a
negative value could never be true.
The backport of cc182aca93 fixes it; however, for consistent use of
types, pcf->Length needs to be unsigned too.
At one point pcf->Precision is assigned from pcf->Length, so that is
also unsigned.
GetTypeSize() is assigned to length and has a length argument.
A default value of -1 seemed dangerous to cast, so a default of 0 is
used instead, which should at least trigger an assertion if it is
ever hit.
trx_purge_free_segment(): The buffer-fix only prevents a block from
being freed completely from the buffer pool, but it will not prevent
the block from being evicted. Recheck the page identifier after
acquiring an exclusive page latch. If it has changed, backtrack and
invoke buf_page_get_gen() to look up the page normally.
Similar to 567b6812, continue to replace the use of strcat() and
strcpy() with the safer options strncat() and strncpy().
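Roughly, the pattern being applied (illustrative; the individual call sites in
the server of course vary):
```
#include <cstdio>
#include <cstring>

int main()
{
  char buf[16];
  const char *name= "slave", *suffix= "_relay_log";

  // Before: strcpy(buf, name); strcat(buf, suffix);  -- no bounds checking.
  // After: bounded copies that cannot write past the destination buffer.
  strncpy(buf, name, sizeof(buf) - 1);
  buf[sizeof(buf) - 1]= '\0';                        // strncpy may not NUL-terminate
  strncat(buf, suffix, sizeof(buf) - strlen(buf) - 1);

  printf("%s\n", buf);                               // "slave_relay_log"
}
```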
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
do_shutdown_server(): After sending SIGKILL, invoke wait_until_dead().
Thanks to Sergei Golubchik for pointing out that the previous fix
does not actually work.
A fixed-length column can be stored externally.
row_merge_buf_add(): has a strict assert that a fixed-length mismatch
shouldn't happen while rebuilding a redundant row format table.
btr_index_rec_validate(): a fixed-size column can be stored externally,
so the sum of the inline stored length and the externally stored length
of the column should be equal to the total column length.
This issue happens due to a race condition between DDL
and the fts optimize thread. DDL adds the new index to the fts cache.
At the same time, the fts optimize thread clears the cache
and reinitializes it. Take the cache init lock before reinitializing
the cache. fts_sync_commit() should take the dict_sys mutex
to avoid a deadlock with CREATE INDEX.
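A generic, self-contained sketch of the locking idea (simplified; this is not
the actual fts code): clearing and reinitializing the cache happens entirely
under the cache init lock, so a concurrent DDL adding an index sees either the
old cache or the fully rebuilt one.
```
#include <mutex>
#include <string>
#include <vector>

struct FtsCacheExample
{
  std::mutex init_lock;               // protects clear + reinit as one step
  std::vector<std::string> indexes;
};

// DDL path: add the new index while holding the init lock.
static void cache_add_index(FtsCacheExample &cache, const std::string &name)
{
  std::lock_guard<std::mutex> guard(cache.init_lock);
  cache.indexes.push_back(name);
}

// Optimize path: take the init lock *before* reinitializing, so the add
// above can never interleave with a half-cleared cache.
static void cache_clear_and_reinit(FtsCacheExample &cache,
                                   const std::vector<std::string> &current)
{
  std::lock_guard<std::mutex> guard(cache.init_lock);
  cache.indexes.clear();
  cache.indexes= current;
}

int main()
{
  FtsCacheExample cache;
  cache_add_index(cache, "ft_idx_1");
  cache_clear_and_reinit(cache, {"ft_idx_1", "ft_idx_2"});
  return (int) cache.indexes.size() - 2;   // 0 on success
}
```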
do_shutdown_server(): Call wait_until_dead() also when we are forcibly
killing the process (timeout=0). We have evidence that killing
the process may take some time and cause mystery failures in
crash recovery tests. For InnoDB, several failures were observed between
commit da094188f6 and
commit 0ee1082bd2
when no advisory file locking was being used by default.
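For illustration, a POSIX-only fragment of the kill-and-wait pattern (this is
not the actual mysqltest code; kill(pid, 0) works here because the server is a
separate process rather than a child we would have to reap):
```
#include <cerrno>
#include <csignal>
#include <unistd.h>

// Poll until the process with the given pid no longer exists,
// for at most timeout_sec seconds.
static bool wait_until_dead_example(pid_t pid, int timeout_sec)
{
  for (int i= 0; i < timeout_sec * 10; i++)
  {
    if (kill(pid, 0) == -1 && errno == ESRCH)
      return true;                          // process is gone
    usleep(100000);                         // 100 ms
  }
  return false;
}

static void shutdown_server_example(pid_t pid)
{
  kill(pid, SIGKILL);
  // Even SIGKILL is not instantaneous: give the kernel time to tear the
  // process down before touching its data files (e.g. in recovery tests).
  wait_until_dead_example(pid, 60);
}

int main()
{
  // Safe demo: our own pid exists, so the wait times out and returns false.
  bool dead= wait_until_dead_example(getpid(), 1);
  (void) &shutdown_server_example;
  return dead ? 1 : 0;
}
```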
Assertion `thd->mdl_context.is_lock_owner()` fires when a client is
disconnected while a transaction is active and a table is opened through
the `HANDLER` interface.
The reason for the assertion is that when a connection closes, its ongoing
transaction is eventually rolled back in
`Wsrep_client_state::bf_rollback()`. This method also releases explicit
locks, which are expected to survive beyond the transaction lifetime.
This patch also removes calls to `mysql_ull_cleanup()`. User level
locks are not supported in combination with Galera, making these calls
unnecessary.
A GROUP BY query which uses "MIN(pk)" and has "pk<>const" in the
WHERE clause would produce wrong result when handled with "Using index
for group-by". Here "pk" column is the table's primary key.
The problem was introduced by the fix for MDEV-23634. It made the range
optimizer not produce ranges for conditions of the form "pk != const".
However, LooseScan code requires that the optimizer is able to
convert the condition on the MIN/MAX column into an equivalent range.
The range is used to locate the row that has the MIN/MAX value.
LooseScan checks this in check_group_min_max_predicates(). This fix
makes the code in that function take into account that "pk != const"
does not produce a range.
The glibc headers declare fallocate only if _GNU_SOURCE is defined.
Without this change, the probe fails with C compilers which do not
support implicit function declarations even if the system does in
fact support the fallocate function.
Upstream rocksdb does not need this because the probe is run with the
C++ compiler, and current g++ versions define _GNU_SOURCE
automatically.
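For illustration, a probe along these lines fails to compile with a strict C
compiler unless _GNU_SOURCE is defined first (the exact probe source used by
the RocksDB build files may differ):
```
#ifndef _GNU_SOURCE
#define _GNU_SOURCE          /* needed for the fallocate() declaration in glibc */
#endif
#include <fcntl.h>

int main(void)
{
  /* Without the declaration, a compiler that rejects implicit function
     declarations errors out here and the feature is reported as missing. */
  int err= fallocate(0, 0, 0, 1024);
  (void) err;
  return 0;
}
```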
This commit adds the 'no-sni' option to the socat command used by the SST
script; it is required to properly authenticate with newer socat versions
(1.7.4 and later). The option disables the automatic use of the SNI feature
(Server Name Indication): the SST script directly specifies the
commonname if necessary, so automatic activation of the SNI feature
is unnecessary in such scenarios.
This bug could affect multi-update statements as well as single-table
update statements processed as multi-updates when the where condition
contained a range condition over a non-indexed varchar column. The
optimizer calculates selectivity of such range conditions using histograms.
For each range, the buckets containing the endpoints of the range are
determined with a procedure that stores the values of the endpoints in the
space of the record buffer where values of the columns are usually stored.
For a range over a varchar column the value of an endpoint may exceed the
size of the buffer, in which case the value is stored with truncation.
This truncation cannot affect the result of the calculation of the range
selectivity, as the calculation employs only the beginning of the value
string. However, it can trigger generation of an unexpected truncation
error when an update statement is processed.
This patch prohibits truncation messages when selectivity of a range
condition is calculated for a non-indexed column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This is a non-functional change. It simplifies the code logic:
- removing global variables ds_data and ds_meta
- passing these variables as parameters to functions instead
- adding helper classes: Datasink_free_list and Backup_datasinks
- moving some functions accepting a ds_ctxt parameter
into ds_ctxt as methods (as sketched below).
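A rough sketch of the shape of the refactoring (class and type names as in the
commit, members and bodies purely illustrative): the datasinks live in a small
holder that is created once and passed explicitly, instead of being reachable
through file-scope globals.
```
#include <memory>

struct ds_ctxt { /* datasink context; details omitted */ };

// Replaces the global ds_data / ds_meta pair.
class Backup_datasinks
{
public:
  std::unique_ptr<ds_ctxt> m_data;   // hypothetical member names
  std::unique_ptr<ds_ctxt> m_meta;
};

static bool backup_files_example(Backup_datasinks &ds)   // previously used globals
{
  return ds.m_data != nullptr && ds.m_meta != nullptr;
}

int main()
{
  Backup_datasinks ds;
  ds.m_data.reset(new ds_ctxt());
  ds.m_meta.reset(new ds_ctxt());
  return backup_files_example(ds) ? 0 : 1;
}
```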
If we are inside a stored function or trigger we should not commit
or roll back the current statement transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The server has systemd support and calls sd_notify() to communicate
its status to systemd.
mariabackup links in the whole server, but it should not notify
systemd, because it is not started or managed by systemd.
The query cache should be invalidated if we are not in the applier. For some
reason this condition was incorrect starting from 10.5, but it is
correct in 10.4.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Adding a new argument "flag" to MY_COLLATION_HANDLER::strnncollsp_nchars()
and a flag MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES.
The flag defines if strnncollsp_nchars() should emulate trailing spaces
which were possibly trimmed earlier (e.g. in InnoDB CHAR compression).
This is important for NOPAD collations.
For example, with this input:
- str1= 'a ' (Latin letter a followed by one space)
- str2= 'a  ' (Latin letter a followed by two spaces)
- nchars= 3
if the flag is given, strnncollsp_nchars() will virtually restore
one trailing space to str1 up to nchars (3) characters and compare the two
strings as equal:
- str1= 'a  ' (one extra trailing space emulated)
- str2= 'a  ' (as is)
If the flag is not given, strnncollsp_nchars() does not add virtual
trailing spaces, so in case of a NOPAD collation str1 will be compared
as less than str2 because it is shorter.
(A self-contained sketch illustrating these semantics follows after this list.)
- Field_string::cmp_prefix() now passes the new flag.
Field_varstring::cmp_prefix() and Field_blob::cmp_prefix() do
not pass the new flag.
- The branch in cmp_whole_field() in storage/innobase/rem/rem0cmp.cc
(which handles the CHAR data type) now also passes the new flag.
- Fixing UCA collations to respect the new flag.
Other collations are possibly also affected, however
I had no success in making an SQL script demonstrating the problem.
Other collations will be extended to respect this flag in a separate
patch later.
- Changing the meaning of the last parameter of Field::cmp_prefix()
from "number of bytes" (internal length)
to "number of characters" (user visible length).
The code calling cmp_prefix() from handler.cc was wrong.
After this change, the call in handler.cc became correct.
The code calling cmp_prefix() from key_rec_cmp() in key.cc
was adjusted according to this change.
- Old strnncollsp_nchars() related tests in unittest/strings/strings-t.c
now pass the new flag.
A few new tests were also added, without the flag.
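As mentioned above, here is a self-contained sketch of the described semantics
(this only models the behavior and is not the real MY_COLLATION_HANDLER API):
compare at most nchars characters and, when the emulation flag is set, treat
the shorter string as if its trimmed trailing spaces were still present up to
nchars.
```
#include <cstddef>
#include <iostream>
#include <string>

// Byte-wise NOPAD-style comparison of up to `nchars` characters.
// With emulate_trimmed=true, a string that ends early is virtually
// padded with spaces up to nchars before comparing.
static int compare_nchars_example(const std::string &a, const std::string &b,
                                  size_t nchars, bool emulate_trimmed)
{
  for (size_t i= 0; i < nchars; i++)
  {
    const bool a_end= i >= a.size(), b_end= i >= b.size();
    if (a_end && b_end)
      return 0;
    if (!emulate_trimmed)
    {
      if (a_end) return -1;            // NOPAD: the shorter string sorts first
      if (b_end) return 1;
    }
    const char ca= a_end ? ' ' : a[i]; // virtual trailing space
    const char cb= b_end ? ' ' : b[i];
    if (ca != cb)
      return ca < cb ? -1 : 1;
  }
  return 0;
}

int main()
{
  const std::string str1= "a ";        // one trailing space (e.g. trimmed by storage)
  const std::string str2= "a  ";       // two trailing spaces
  std::cout << compare_nchars_example(str1, str2, 3, true)  << "\n";  // 0: equal
  std::cout << compare_nchars_example(str1, str2, 3, false) << "\n";  // -1: str1 is shorter
}
```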
The tests innodb.import_tablespace_race, innodb.restart, and innodb.innodb-wl5522 move
the tablespace file between the data directory and the tmp directory specified by
global environment variables. However this is risky because it is not unusual for the
configured tmp directory (often under /tmp) to be mounted on another disk partition or device,
and the 'move_file' command may fail with "Errcode: 18 'Invalid cross-device link.'"
For innodb.import_tablespace_race and innodb.innodb-wl5522, moving files
across directories is not necessary. Modify the tests so they rename
files under the same directory. For innodb.restart, instead of moving
between datadir and MYSQL_TMPDIR, move the files under MYSQLTEST_VARDIR.
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
The tests innodb.import_tablespace_race, innodb.restart, and innodb.innodb-wl5522 move
the tablespace file between the data directory and the tmp directory specified by
global environment variables. However this is risky because it is not unusual for the
configured tmp directory (often under /tmp) to be mounted on another disk partition or device,
and the 'move_file' command may fail with "Errcode: 18 'Invalid cross-device link.'"
To stabilize mysqltest in the described scenario, and prevent such
behavior in the future, let make_file() check both the source and the
destination file paths and make sure they are both under either
MYSQLTEST_VARDIR or MYSQL_TMP_DIR.
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
This is allowed:
STRING_WITH_LEN("string literal")
This is not:
char *str = "pointer to string";
... STRING_WITH_LEN(str) ...
In C++ this is also allowed:
const char str[] = "string literal";
... STRING_WITH_LEN(str) ...
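The reason is the sizeof-based length calculation inside the macro. Its
definition is approximately the following (the exact form in include/m_string.h
may differ slightly):
```
#include <cstddef>
#include <cstdio>

// Approximate definition: sizeof() needs an array, not a pointer.
#define STRING_WITH_LEN(X) (X), ((size_t) (sizeof(X) - 1))

static void print_len(const char *s, size_t len) { printf("%s: %zu\n", s, len); }

int main()
{
  print_len(STRING_WITH_LEN("string literal"));   // OK: sizeof counts the literal's bytes

  const char arr[]= "string literal";
  print_len(STRING_WITH_LEN(arr));                // OK in C++: sizeof(arr) is the array size

  const char *ptr= "pointer to string";
  // print_len(STRING_WITH_LEN(ptr));             // wrong: sizeof(ptr) is just the pointer size
  (void) ptr;
  return 0;
}
```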
Test fails sporadically and very rarely on this:
```
let $org_queries= `SHOW STATUS LIKE 'Queries'`;
SELECT f1();
CALL p1();
let $new_queries= `SHOW STATUS LIKE 'Queries'`;
let $diff= `SELECT SUBSTRING('$new_queries',9)-SUBSTRING('$org_queries',9)`;
```
if a COM_QUIT from one of the earlier disconnects in the test
happens between the two SHOW STATUS commands,
because COM_QUIT increments "Queries".
The immediately preceding test uses wait_condition to wait for
its disconnects to complete. But there are more disconnects earlier
in the test file and nothing waits for them.
Let's change wait_condition to wait for *all* disconnects to complete.
MariaDB server prints the stack information if a crash happens.
It traverses the stack frames in function `print_with_addr_resolve`.
For *EACH* frame, it tries to parse the file name and line number of the
frame using `addr2line`, or prints `backtrace_symbols_fd` if `addr2line`
fails.
1. Logic in `addr_resolve` function uses addr2line to get the file name
and line numbers. It has a timeout of 500ms to wait for the response
from addr2line. However, that's not enough on small instances
especially if the debug information is in a separate file or
compressed.
Increase the timeout to 5 seconds to support some edge cases, as
experiments showed addr2line may take 2-3 seconds on some frames.
2. While parsing a frame inside of a shared library using `addr2line`,
the file name and line numbers could be `??`, empty or `0` if the
debug info is not loaded.
It's easy to reproduce when glibc-debuginfo is not installed.
Instead of printing a meaningless frame like:
:0(__GI___poll)[0x1505e9197639]
...
??:0(__libc_start_main)[0x7ffff6c8913a]
We want to print the frame information using `backtrace_symbols_fd`,
with the shared library name and a hexadecimal offset.
Stacktrace example on a real instance with this commit:
/lib64/libc.so.6(__poll+0x49)[0x145cbf71a639]
...
/lib64/libc.so.6(__libc_start_main+0xea)[0x7f4d0034d13a]
`addr_resolve` already handled the case of a meaningless combination of
file name and line number returned by `addr2line`, e.g. `??:?`.
However, results like `:0` and `??:0` were not handled, so now the
function falls back to `backtrace_symbols_fd` in the above cases as well.
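A hedged sketch of the additional check (illustrative only; the real
addr_resolve() differs in detail): treat '??', empty strings and a zero line
number as unusable and fall back to the backtrace_symbols_fd()-style output.
```
#include <cstring>

// True when addr2line produced something worth printing, i.e. neither a
// "??"/empty file name nor a zero/unknown line number.
static bool addr2line_output_usable(const char *file, const char *line)
{
  if (!file || file[0] == '\0' || strcmp(file, "??") == 0)
    return false;
  if (!line || line[0] == '\0' ||
      strcmp(line, "0") == 0 || strcmp(line, "?") == 0)
    return false;
  return true;
}

int main()
{
  // "??:0" and ":0" style results would trigger the fallback.
  return addr2line_output_usable("??", "0") ? 1 : 0;
}
```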
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
CREATE [TEMPORARY] SEQUENCE is internally CREATE+INSERT (initial value)
and it is replicated using statement based replication. In Galera
we use either TOI or RSU so we should skip commit time hooks
for it.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
With binlogs enabled, debug assertion ut_ad(xid_seqno > wsrep_seqno)
fired in trx_rseg_update_wsrep_checkpoint() when an applier thread
synced the seqno out of order for write set which had failed
certification. This was caused by releasing commit
order too early when binlogs were on, allowing group
commit to run in parallel and commit following transactions
too early.
Fixed by extending the commit order critical section to cover
call to wsrep_set_SE_checkpoint() also when binlogs are on.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When the LEFT() function is used with a string that has no character set,
the function crashes. This is because the function assumes that
the string has a character set and tries to use it to calculate the
length of the string.
Two functions, UNHEX and WEIGHT_STRING, returned a string without
setting its character set to a non-null value.
The fix is to set the character set when calling val_str on these two functions.
Reviewed-by: Alexander Barkov <bar@mariadb.com>
Reviewed-by: Daniel Black <daniel@mariadb.org>
Let us make innodb_buffer_pool_filename a read-only variable
so that a malicious user cannot cause an important file to be
deleted on InnoDB shutdown. An attempt to delete a directory
will fail because it is not a regular file, but what if the
variable pointed to (say) ibdata1, ib_logfile0 or some *.ibd file?
It does not seem to make much sense for this parameter to be
configurable in the first place, but we will not change that in order
to avoid breaking compatibility.
Problem:
UNIX_TIMESTAMP() called for an expression of the TIME data type
returned NULL.
Inside Type_handler_timestamp_common::Item_val_native_with_conversion
the call for item->get_date() did not convert TIME to DATETIME
automatically (because it does not have to, by design).
As a result, Type_handler_timestamp_common::TIME_to_native() received
a MYSQL_TIME value with zero date 0000-00-00 and therefore returned "true"
(indicating SQL NULL value).
Fix:
Removing the call for item->get_date().
Instantiating Datetime(item) instead.
This forces automatic TIME to DATETIME conversion
(unless @@old_mode is zero_date_time_cast).