This commit contains a fix to use modern syntax for selecting
character classes in the tr utility options.
Also, one of the tests for SST via rsync (galera_sst_rsync2) is made
more reliable (to avoid rare failures during automatic testing).
Fixing a typo in the fix for MDEV-19804, wrong return value in a bool function:
< return NULL;
> return true;
The problem was found because it did not compile on some platforms.
Strangely, it caused no visible problems on the platforms where it did
compile, even though "return NULL" should compile to "return false"
rather than "return true".
tv_usec is a suseconds_t, so we cast to it. This prevents the AIX (gcc-10) warning:
include/my_time.h: In function 'void my_timeval_trunc(timeval*, uint)':
include/my_time.h:249:65: warning: conversion from 'long int' to 'suseconds_t' {aka 'int'} may change value [-Wconversion]
249 | tv->tv_usec-= my_time_fraction_remainder(tv->tv_usec, decimals);
On macOS the warning is: conversion from 'long int' to '__darwin_suseconds_t' {aka 'int'} may change value
On Windows suseconds_t isn't defined so we use the existing
long return type of my_time_fraction_remainder.
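The pattern, as a small POSIX-only sketch (the helper below is a hypothetical
stand-in for my_time_fraction_remainder(), which keeps returning long):

#include <sys/time.h>   // struct timeval, suseconds_t (POSIX; absent on Windows)

// Hypothetical stand-in: the part of the microsecond value beyond the
// requested number of fractional digits (0..6).
static long usec_fraction_remainder(long usec, unsigned decimals)
{
  long scale= 1;
  for (unsigned i= decimals; i < 6; i++)
    scale*= 10;
  return usec % scale;
}

// Truncate tv_usec to 'decimals' fractional digits.  tv_usec is a
// suseconds_t (a 32-bit int on AIX and macOS), so the long result is cast
// back explicitly to silence -Wconversion.
static void timeval_trunc(struct timeval *tv, unsigned decimals)
{
  tv->tv_usec-= (suseconds_t) usec_fraction_remainder(tv->tv_usec, decimals);
}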
Reviewed by Marko Mäkelä
Closes: #2079
This bug report is about the same issue as MDEV-28129 and MDEV-21173.
The issue is that the macro YYABORT is called instead of MYSQL_YYABORT
on a parse error. As a result, the method LEX::cleanup_lex_after_parse_error
is not called to clean up the data structures created while parsing
the statement.
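For reference, the difference between the two macros, paraphrased (the real
definition lives in sql/sql_yacc.yy and may differ in detail):

// Paraphrased sketch, not the verbatim definition: YYABORT is Bison's bare
// "abandon yyparse()" macro, while MYSQL_YYABORT is the server's wrapper
// that first frees the partially built parse structures via
// LEX::cleanup_lex_after_parse_error() and only then aborts.
#define MYSQL_YYABORT                           \
  do                                            \
  {                                             \
    LEX::cleanup_lex_after_parse_error(thd);    \
    YYABORT;                                    \
  } while (0)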
mariabackup does not load the data dictionary during backup, but it does load
tablespaces; that is why fil_system.max_assigned_id is not set by
dict_check_tablespaces_and_store_max_id(). There is no point in issuing the
warning during backup.
A second execution of a prepared statement for a query containing a constant
subquery with a union that can be optimized away could result in abnormal server
termination for a debug build or an incorrect result set for a release build.
For example, the following test case crashes a debug build on the second
run of the statement EXECUTE stmt:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'EXPLAIN SELECT * FROM t1 HAVING 6 IN ( SELECT 6 UNION SELECT 5 )';
EXECUTE stmt;
EXECUTE stmt;
The reason for the incorrect result set or abnormal server termination
is careless handling of the data member fake_select_lex->options inside
the function mysql_explain_union(). Once the flag SELECT_DESCRIBE is set in
the data member fake_select_lex->options before calling the methods
SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute,
the original value of the options is never restored.
As a consequence, the next time the prepared statement is re-executed, the
fake_select_lex still has the flag SELECT_DESCRIBE set in the data member
fake_select_lex->options, which is incorrect. As a result, the method
Item_subselect::assigned()
is not invoked during evaluation of the constant condition (the constant
subquery with union) performed in the OPTIMIZE phase of query handling.
This leads to records in the temporary table not being deleted
before calling
table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL)
in the method st_select_lex_unit::optimize().
As a result, table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL) returns an error
and DBUG_ASSERT(0) is fired.
The stack trace down to the line where the error is generated on re-enabling
indexes for the next subselect iteration is below:
st_select_lex_unit::optimize (at sql_union.cc:954)
handler::ha_enable_indexes (at handler.cc:4338)
ha_heap::enable_indexes (at ha_heap.cc:519)
heap_enable_indexes (at hp_clear.c:164)
The code snippet that clarifies where the error is raised is also listed:
int heap_enable_indexes(HP_INFO *info)
{
  int error= 0;
  HP_SHARE *share= info->s;

  if (share->data_length || share->index_length)
    error= HA_ERR_CRASHED; /* <<== error is set to HA_ERR_CRASHED here,
                                   since share->data_length != 0 */
  ...
To fix this issue, the original value of unit->fake_select_lex->options
has to be saved before setting the flag SELECT_DESCRIBE and restored
on return from SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute.
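A simplified sketch of that save/restore pattern (illustrative only; the
function body and call sites are condensed, not the literal patch):

bool explain_union_sketch(THD *thd, SELECT_LEX_UNIT *unit, select_result *result)
{
  /* Save the original option bits before the EXPLAIN-only flag is set. */
  ulonglong saved_options= unit->fake_select_lex->options;
  unit->fake_select_lex->options|= SELECT_DESCRIBE;

  /* ... SELECT_LEX_UNIT::prepare() / SELECT_LEX_UNIT::execute() run here ... */
  bool res= false;

  /* Restore the saved value so re-execution of the prepared statement
     starts from a clean fake_select_lex. */
  unit->fake_select_lex->options= saved_options;
  return res;
}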
Since main() invokes parse_page() when either -S or -D is set, parse_page()
can be invoked even when no -D filename has been given. That is why
any attempt to write to the page dump file must be made only if the file
name was set with -D.
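A minimal sketch of the guard (the variable name is illustrative, not the one
used in innochecksum):

#include <cstdio>

// 'page_dump_file' stands in for whatever FILE* innochecksum keeps for the
// -D target; it stays NULL when -D was not given.
static FILE *page_dump_file= NULL;

static void dump_page_info(const char *line)
{
  if (page_dump_file != NULL)            // -D was given and the file is open
    fprintf(page_dump_file, "%s\n", line);
  // without -D, parse_page() still runs for -S, but nothing is written
}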
The bug is caused by 2ef7a5a13a
(MDEV-13443).
records_are_comparable() requires this condition:
bitmap_is_subset(table->write_set, table->read_set)
On the first iteration vers_update_fields() changes write_set and
read_set; on the second iteration the above condition fails.
Added the missing read bit for ROW_START. Also reorganized the
bitmap_set_bit() calls so they are made only when needed.
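A hedged sketch of the added bit (the helper name is illustrative; the real
placement is inside the versioning update path):

// Mark the system-versioning ROW_START column as read before the update,
// so that bitmap_is_subset(table->write_set, table->read_set) keeps holding
// in records_are_comparable().  Touch the bitmap only when the bit is missing.
void vers_mark_row_start_read(TABLE *table)
{
  Field *row_start= table->vers_start_field();
  if (!bitmap_is_set(table->read_set, row_start->field_index))
    bitmap_set_bit(table->read_set, row_start->field_index);
}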
Throw ER_NOT_FORM_FILE if the FRM data is wrong (the warning with
ER_VERS_FIELD_WRONG_TYPE is still printed to give deeper insight into what
happened).
Keep ER_VERS_FIELD_WRONG_TYPE for creating a partitioned table with
trx-versioning. Tested by MDEV-15951 in trx_id.test.
On the affected machine, the error happens sporadically in
innodb.instant_alter_limit.
Procmon shows SetRenameInformationFile failing with ERROR_ACCESS_DENIED.
In this case, the destination file had previously been opened, or oplocked, by
the Windows Defender antivirus.
The fix is to retry MoveFileEx on ERROR_ACCESS_DENIED.
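Sketched as a small self-contained helper (the retry count and sleep interval
here are illustrative, not the values used in the server):

#include <windows.h>

// If the rename fails with ERROR_ACCESS_DENIED (e.g. the target is
// transiently opened/oplocked by an antivirus), wait briefly and try again
// a few times before giving up.
static BOOL move_file_with_retry(const wchar_t *from, const wchar_t *to)
{
  for (int attempt= 0; attempt < 100; attempt++)   // retry budget is illustrative
  {
    if (MoveFileExW(from, to, MOVEFILE_REPLACE_EXISTING))
      return TRUE;
    if (GetLastError() != ERROR_ACCESS_DENIED)
      break;                                       // some other, permanent error
    Sleep(10);                                     // give the other opener time to close
  }
  return FALSE;
}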
In commit 437da7bc54 (MDEV-19534),
the default value of the global variable srv_checksum_algorithm
in innochecksum was changed from SRV_CHECKSUM_ALGORITHM_INNODB
to implied 0 (innodb_checksum_algorithm=crc32). As a result,
the function buf_page_is_corrupted() would by default invoke
buf_calc_page_crc32() in innochecksum, and crc32_inited would hold.
This would cause "innochecksum" to fail on a particular page.
The actual problem is older, introduced in 2011 in
mysql/mysql-server@17e497bdb7
(MySQL 5.6.3). It should affect the validation of pages of old
data files that were written with innodb_checksum_algorithm=innodb.
When using innodb_checksum_algorithm=crc32 (the default setting
since MariaDB Server 10.2), some valid pages would be rejected
only because exactly one of the two checksum fields accidentally
matches the innodb_checksum_algorithm=crc32 value.
buf_page_is_corrupted(): Simplify the logic of non-strict
checksum validation, by always invoking buf_calc_page_crc32().
Remove a bogus condition that if only one of the checksum fields
contains the value returned by buf_calc_page_crc32(), the page
is corrupted.
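Paraphrased as a sketch (this is not the actual buf_page_is_corrupted(); it
only shows the shape of the simplified non-strict check and omits the trailer
checksum field and compressed/encrypted pages):

bool page_checksum_ok_sketch(const byte *page)
{
  /* Compute the CRC-32C once, unconditionally. */
  const uint32_t crc32= buf_calc_page_crc32(page);
  const uint32_t stored= mach_read_from_4(page + FIL_PAGE_SPACE_OR_CHKSUM);

  if (stored == crc32)
    return true;                 /* written with innodb_checksum_algorithm=crc32 */
  if (stored == BUF_NO_CHECKSUM_MAGIC)
    return true;                 /* written with innodb_checksum_algorithm=none */
  /* Old data files written with innodb_checksum_algorithm=innodb: a single
     accidental match of one field with the crc32 value no longer makes the
     page "corrupted". */
  return stored == buf_calc_page_new_checksum(page);
}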
This bug affected queries with IN predicates that contain parameter markers
in the value list. Such queries are executed via prepared statements.
The problem appeared only if the number of elements in the value list
was greater than the set value of the system variable
in_predicate_conversion_threshold.
The patch unconditionally prohibits conversion of an IN predicate to the
equivalent IN subquery if the value list of the IN predicate contains
parameter markers.
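A hedged sketch of the check (the helper is hypothetical; the real test sits
in the IN-to-subquery conversion code):

// Refuse the IN-list-to-subquery conversion when any element of the value
// list is a parameter marker ('?'), since its value is only known at
// execution time of the prepared statement.
static bool in_list_has_param_markers(Item **args, uint arg_count)
{
  for (uint i= 1; i < arg_count; i++)     // args[0] is the left-hand expression
    if (args[i]->type() == Item::PARAM_ITEM)
      return true;                        // found '?': skip the conversion
  return false;
}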
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem: stale semisync plugin entries could be left behind in the
mysql.plugin table.
Fix: Since mysql_upgrade runs commands from mysql_system_tables.fix,
SQL commands were added to check for the semisync plugins in
INFORMATION_SCHEMA.PLUGINS and, if they are not there, delete them
from mysql.plugin.
Problem: In regular replication, when the master binlogged using statement format,
the slave might not write an event to its own binary log when the Query
event targeted a temporary table.
Specifically, this was observed with LOAD DATA INFILE.
This effect was possible because, unlike the master, the slave holds temporary
tables in its pool, so the master-side check for the existence of a
temporary table at the binlogging-format decision did not apply.
Solution: replace THD::has_thd_temporary_tables() with
THD::has_temporary_tables(), which allows identifying temporary table
presence on either side.
--
Reviewed by Andrei Elkin.
This commit adds an mtr test reproducing a scenario where, despite
innodb_disallow_writes blocking, writes to the file system can still happen.
The test launches a garbd node, which triggers one of the cluster nodes to switch to
the SST donor state. In this state all disk activity should be halted, and e.g.
innodb_disallow_writes has been set. The test records an md5sum aggregate over the mariadb
data directory when the node enters the donor state, and records another md5sum
when the node leaves the donor state. If there is no IO activity in the data directory, these
hashes should be equal.
For this test, the donor state processing has been instrumented so that the SST donor thread can be
stopped when entering the donor state. The test uses this new dbug sync point
to control when to record the md5sums.
A new SST script was added: wsrep_sst_backup. garbd uses the backup method to make the donor
node call this script and enter the donor state.
The backup script could later be extended into a general-purpose backup method for the cluster.
This commit also fixes a race condition in wsrep_sst_rsync, like this:
* the wsrep_sst_rsync script requests a flush of tables,
and then waits in a loop until mariadbd has created the file tables_flushed,
as confirmation that FLUSH TABLES has completed
* mariadbd's SST donor thread wakes up for the flush table request, performs FTWRL,
and after this creates the tables_flushed file
* note that the SST script will now continue and start up the rsync transfer
* mariadbd's SST donor thread only now calls sst_disallow_writes(),
so that InnoDB sets up the disk IO blockage; however, rsyncing may already be ongoing at this point
This race condition is fixed in this commit by performing all disk IO blocking before
creating the tables_flushed file.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
========
When the mysql.gtid_slave_pos table uses the InnoDB engine, and
mysqld starts, it reads the table and begins a transaction. After
reading the value, it should end the transaction and release all
associated locks. The bug reported in DBAAS-7828 shows that when
autocommit is off, the locks are not released, resulting in
indefinite hangs on future attempts to change gtid_slave_pos. In
particular, the transaction was not properly finalized because
thd->server_status was not updated to reflect the end of the
transaction.
Solution:
========
This patch updates the code to properly commit the transaction after
reading gtid_slave_pos during mysqld start-up.
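A hedged sketch of the finalization, using the generic helpers from
sql/transaction.h (the actual patch may differ in placement and error handling):

static void finish_gtid_pos_read_sketch(THD *thd)
{
  /* End the statement and the (implicitly started) transaction so InnoDB
     releases the row locks taken while reading mysql.gtid_slave_pos even
     when autocommit is off; committing also clears the in-transaction bits
     in thd->server_status. */
  if (trans_commit_stmt(thd) || trans_commit(thd))
    sql_print_warning("Failed to commit the gtid_slave_pos read transaction");
}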
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:
========
When using mariadb-binlog with --raw and --stop-never, events from
the master's currently active log file should be written to their
respective log file specified by --result-file and appear on disk.
There is a bug where mariadb-binlog does not flush the result file
to disk when new events are received.
Solution:
========
Add a function call to flush mariadb-binlog’s result file after
receiving an event in --raw mode.
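The pattern, as a small standalone sketch (names are illustrative; the real
change is in the mariadb-binlog client code):

#include <cstdio>

// In --raw --stop-never mode the client appends each received event to the
// per-binlog result file; flushing right after the write makes the event
// visible on disk instead of sitting in stdio buffers until a normal exit
// that never comes.
static bool write_raw_event(FILE *result_file, const unsigned char *buf, size_t len)
{
  if (fwrite(buf, 1, len, result_file) != len)
    return false;                  // write error
  return fflush(result_file) == 0; // the fix: push the event to disk now
}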
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:
========
When both semi-sync and slave compression are enabled, the numbering
on packet headers can become out of sync between the primary and
replica servers. More specifically, after the master flushes its
write, it should increment the counters that track packets. The
bug is such that the master only updates the normal packet counter
and leaves the compressed packet counter alone.
Solution:
========
After the master flushes, additionally increment the compressed
packet counter.
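A hedged sketch of the described change (NET::pkt_nr and NET::compress_pkt_nr
are the two sequence counters; the surrounding function is paraphrased):

// Paraphrased sketch, not the exact patch: after the master flushes the
// semi-sync reply, step both sequence counters so that a replica using
// slave_compressed_protocol sees consistent packet numbering.
static void after_master_flush(NET *net)
{
  net->pkt_nr++;             // was already incremented before the fix
  net->compress_pkt_nr++;    // the fix: keep the compressed counter in step
}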
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This bug could affect prepared statements for the command CREATE VIEW with a
specification that contained an unnamed basic constant in the select list. If
generation of a valid name for the corresponding view column required
resolution of conflicts with the names of other columns that were explicitly
defined, then execution of such a prepared statement followed by its
deallocation led to reading from freed memory.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
DECIMAL columns in I_S must be explicitly set to some value.
I_S columns do not have `DEFAULT 0` (after MDEV-18918), so during
restore_record() their record fragments pointed to by Field::ptr are
initialized to zero bytes 0x00.
But an array of 0x00's is not a valid binary DECIMAL value.
So val_decimal() called for such a Field_new_decimal generated a warning
when it saw a wrongly encoded binary DECIMAL value in the record.
Fix:
Explicitly setting INFORMATION_SCHEMA.PROCESSLIST.PROGRESS
to the decimal value of 0 if no progress information is available.
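A hedged sketch of that explicit store (the field pointer and helper usage
are illustrative, not the literal patch):

// Store a proper binary DECIMAL zero into the PROGRESS column instead of
// leaving the 0x00-filled record fragment from restore_record().
// 'progress_field' stands in for the Field* of
// INFORMATION_SCHEMA.PROCESSLIST.PROGRESS.
static void store_zero_progress(Field *progress_field)
{
  my_decimal zero;
  int2my_decimal(E_DEC_FATAL_ERROR, 0, /*unsigned_flag=*/ true, &zero);
  progress_field->store_decimal(&zero);
}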
Fix a 2015 typo in the build scripts:
--without-plugin=plugin_file_key_management translates to
-DPLUGIN_PLUGIN_FILE_KEY_MANAGEMENT=NO
Replace it with the line from 10.4 that builds the plugin
dynamically.
The problem was that the instructions sp_instr_cursor_copy_struct and
sp_instr_copen use the same LEX, adding and removing the "tail" of
prelocked tables while forgetting that the tail of the full table list is
kept in LEX::query_tables_last. If the LEX is used by only one instruction,
or the query has no prelocked tables, this does not matter.
But to work correctly in all cases, LEX::query_tables_last should
be reset so that new tables are added to the correct place in the list (after
the last table of the LEX instead of after the last table of the prelocking
"tail" that was cut off).
submodules.cmake: don't use "--depth 1" with old Git
Old Git may not work with "--depth 1" when the referenced commit hash
is far from HEAD.
Newer Git improves the situation. For example:
fb43e31f2b
It's safe to not use "--depth 1" with old Git.
Closes #2049
TYPELIBs for ENUM/SET columns could erroneously undergo redundant
hex-unescaping at table open time.
Fix:
- Prevent multiple unescaping of the same TYPELIB
- Prevent sharing TYPELIBs between columns with different mbminlen
Per Marko's comment in JIRA, sql_kill is passing the thread id
as long long. We change the format of the error messages to match,
and cast the thread id to long long in sql_kill_user.
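A hedged sketch of the resulting call site (the surrounding sql_kill_user
logic is omitted):

// Once the format strings of the KILL error messages use %lld, pass the
// thread id as long long so the value is read correctly regardless of
// platform word size or endianness.
static void report_kill_denied(my_thread_id id)
{
  my_error(ER_KILL_DENIED_ERROR, MYF(0), (longlong) id);
}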
The 10.5 test failure in main.grant_kill showed an incorrect
thread id on a big-endian architecture.
The cause of this is that the sql_kill_user function assumed the
error was ER_OUT_OF_RESOURCES, when the actual error was
ER_KILL_DENIED_ERROR. ER_KILL_DENIED_ERROR as an error message
requires a thread id to be passed as unsigned long, however a
user/host was passed instead.
ER_OUT_OF_RESOURCES doesn't even take a user/host, despite
the optimistic comment. We remove this argument to the function so
that when MDEV-21978 is implemented one less compiler format
warning is generated (which would have caught this error sooner).
Thanks Otto for reporting and Marko for analysis.
commit '6de482a6fefac0c21daf33ed465644151cdf879f'
10.3 no longer produces an error in truncate_notembedded.test,
but per the comments, a non-crash is all that we are after.