Author: Sergei Petrunia <sergey@mariadb.com>
Date: Wed Oct 11 19:02:25 2023 +0300
MDEV-32301: Server crashes at Arg_comparator::compare_row
In Item_bool_rowready_func2::build_clone(): if we're setting
clone->cmp.comparators=0
also set
const_item_cache=0
because the Item is currently in a state where it cannot be evaluated.
A subquery of the form "(SELECT not_null_value LIMIT 1 OFFSET 1)" will
produce no rows, which translates into a scalar SQL NULL value.
The code in Item_singlerow_subselect::fix_length_and_dec() failed to
take the LIMIT/OFFSET clause into account and used to set
item_subselect->maybe_null=0, despite the fact that SQL NULL will be produced.
If such a subselect was used in ORDER BY, this would cause a crash in
the filesort() code when it got a NULL value for a not-nullable item.
Also made subselect_engine::no_tables() a const function.
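A minimal reproduction sketch (hypothetical table and column names),
illustrating the maybe_null problem:

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1);
  -- The subquery scans a one-row table with OFFSET 1, so it returns
  -- no rows and evaluates to scalar SQL NULL, even though the selected
  -- value itself is not nullable.
  SELECT a FROM t1 ORDER BY (SELECT a FROM t1 LIMIT 1 OFFSET 1);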
The code inside Item_subselect::fix_fields() could fail to check
that the left expression contained an Item_row, like this:
(('x', 1.0), 1) IN (SELECT 'x', 1.23 FROM ... UNION ...)
In order to hit the failure, the first SELECT of the subquery had
to be a degenerate no-tables select. In this case, execution does
not enter Item_in_subselect::create_row_in_to_exists_cond()
and does not check whether left_expr is composed of scalars.
But the subquery is a UNION, so as a whole it is not degenerate.
We try to create an expression cache for the subquery.
We create a temporary table from the left_expr columns. No field is
created for the Item_row. Then, we crash when trying to add an index
over the non-existent field.
Fixed by moving the left_expr cardinality check to a point in
check_and_do_in_subquery_rewrites() which gets executed in all
cases.
It's better to make the check early so we don't have to worry about
the subquery rewrite code hitting an Item_row in left_expr.
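A hypothetical complete statement of the failing shape (the FROM/UNION
parts above are elided in the message, so the exact query here is
illustrative only):

  -- The first SELECT of the UNION is a degenerate no-tables select;
  -- the left expression contains an Item_row, not only scalars.
  SELECT * FROM t1
  WHERE (('x', 1.0), 1) IN (SELECT 'x', 1.23 UNION SELECT 'y', 4.56);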
SST for mariabackup may not destroy old files if the datadir or
another working directory is declared as a symlink, due to the lack
of the "-L" option among the find utility options; similarly, SST
for rsync in some cases may not transfer data directories if they
are created as symlinks. This fix adds the missing option and
generally unifies the handling of find utility options to avoid
failures in the interpretation of directories and regular
expressions.
While checking for an altered column in foreign key constraints,
InnoDB fails to ignore virtual columns. This issue was caused
by commit 5f09b53bdb4e973e7c7ec2c53a24c98321223f98 (MDEV-31086).
The crash inside my_vsnprintf_utf32() happened for a valid reason:
the caller methods
Field_string::sql_rpl_type()
Field_varstring::sql_rpl_type()
misused the charset library and sent pure ASCII data to the
virtual function snprintf() of a utf32 CHARSET_INFO.
It was wrong to use Field::charset() in sql_rpl_type().
We're printing the metadata (the data type) here, not the column data.
The string containing the data type of a CHAR/VARCHAR column
is a pure ASCII string.
Fixed by using res->charset() for printing, like all virtual
implementations of sql_type() do.
Review was done by Andrei Elkin.
Thanks to Andrei for proposing MTR test improvements.
The cmake configuration step is single-threaded and already consumes
too much time. We should not make it worse by adding invocations like
MY_CHECK_CXX_COMPILER_FLAG().
Let us prefer something that works on any supported version
of GCC (4.8.5 or later) or clang, as well as recent versions
of the Intel C compiler.
This replaces commit 1fde785315
When spider_db_delete_all_rows() is called, the supplied spider->conns
may have already been freed. The existing mechanism has spider_trx own
the connections in trx_conn_hash, and it may free a conn during the
cleanup after a query. When running a delete query, if the table is
in the table cache, ha_spider::open(), which would recreate the conn,
is not called. So we recreate the conn when necessary during the
delete by calling spider_check_trx_and_get_conn().
We also reduce code duplication, as delete_all_rows() and truncate()
have almost identical code, and there is no need to assign
wide_handler->sql_command in these functions because it has already
been correctly assigned.
The Windows C runtime does not implement line buffering mode for stdio.
This sometimes makes output from different tests interleaved in MTR.
MTR relies on this buffering (lines are not output until "\n") to work
correctly in parallel scenarios.
Implement do-it-yourself line buffering on Windows as a workaround.
The Windows C runtime does not implement line buffering mode for stdio.
This sometimes makes output from different tests interleaved in MTR.
MTR relies on this buffering (lines are not output until "\n") to work
correctly in parallel scenarios.
Implement do-it-yourself line buffering on Windows as a workaround.
Flush stdout when finalizing mysqldump/mysqlbinlog output
to avoid truncation.
The same patch has been applied to the mysqltest.cc code with
commit 34ff714b0d
Author: Magnus Svensson <msvensson@mysql.com>
Date: Fri Nov 14 11:06:56 2008 +0100
WL#4189 Make mysqltest flush log file at close if logfile is stdout
but not to mysqldump.c/mysqlbinlog.cc
copy_back(): Also copy the dummy empty ib_logfile0 so that
MariaDB Server 10.8 or later can be started after
--copy-back or --move-back.
Thanks to Daniel Black for reporting this.
mysqladmin's presumption about the cause of an error, made by looking
at the error code, was presumptuous. The server knows best, so pass
the error along.
Avoid returning -1 as an exit code: Linux turns this into 255, while
Windows keeps it as -1.
The MONITOR_OVLD_ROW_LOCK_CURRENT_WAIT monitor should have the
MONITOR_DISPLAY_CURRENT flag set in its definition, as it shows the
current state and does not accumulate anything.
Reviewed by: Marko Mäkelä
Fix the order_by_optimizer_innodb and order_by_innodb tests.
The problem was that the query could be run before InnoDB was
ready to provide a realistic statistic for #records in the table.
It provided a number that was too low, which caused the optimizer
to decide that the range access plan wasn't advantageous and discard it.
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:
outerLoop:
FOR
...
innerLoop:
WHILE
...
ITERATE innerLoop;
...
END WHILE;
...
END FOR;
It erroneously generated integer increment code for the outer FOR loop.
There were two problems (see the concrete sketch below):
1. "ITERATE innerLoop" worked like "ITERATE outerLoop".
2. The increment was always an integer increment, even in the case of
FOR cursor loops.
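A concrete version of the schematic script above (hypothetical procedure
and variable names); with the fix, ITERATE innerLoop continues the WHILE
loop instead of incrementing the outer FOR loop:

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE j INT;
    outerLoop:
    FOR i IN 1..3 DO
      SET j= 0;
      innerLoop:
      WHILE j < 3 DO
        SET j= j + 1;
        IF j = 2 THEN
          ITERATE innerLoop;  -- must continue the WHILE loop, not
                              -- generate an increment of the FOR loop
        END IF;
      END WHILE innerLoop;
    END FOR outerLoop;
  END$$
  DELIMITER ;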
Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated
stack entry.
The old code erroneously assumed that sp_pcontext::m_for_loop
either describes the innermost loop (in case the innermost loop is FOR),
or is empty (in case the innermost loop is not FOR).
But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
it describes the closest FOR loop, even if this FOR loop has nested
non-FOR loops inside.
So when we're near the ITERATE statement in the above script,
sp_pcontext::m_for_loop is not empty - it stores information about
the FOR loop labeled as "outerLoop:".
Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
to remember the explicit or the auto-generated label corresponding
to the start of the FOR body. It's used during generation
of "ITERATE loop_label" code to check if "loop_label" belongs
to the current FOR loop pointed by sp_pcontext::m_for_loop,
or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and
sp_for_loop_cursor_iterate() to reuse the code between
methods handling:
* ITERATE
* END FOR
- Adding a test for Lex_for_loop::is_for_loop_cursor()
and generating either cursor fetch code or integer increment code.
Before this change, it always erroneously generated the integer
increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method:
Lex_for_loop_st::init(const Lex_for_loop_st &other)
Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
Problem:
Item_func_date_format::val_str() and make_date_time() did not take into
account that the format string and the result string
(separately or at the same time) can be of a tricky character set
like UCS2, UTF16, UTF32. As a result, DATE_FORMAT() could generate
an ill-formed result which crashed on DBUG_ASSERTs testing well-formedness
in other parts of the code.
Fix:
1. class String changes
Removing String::append_with_prefill(). It was not compatible with
tricky character sets. Also it was inconvenient to use and required
too much duplicate code on the caller side.
Adding String::append_zerofill() instead. It's compatible with tricky
character sets and is easier to use.
Adding helper methods Static_binary_string::q_append_wc() and
String::append_wc(), to append a single wide character
(a Unicode code point in my_wc_t).
2. storage/spider changes
Removing spider_string::append_with_prefill().
It used String::append_with_prefill() inside, but was itself unused.
3. Changing tricky charset incompatible code pieces in make_date_time()
to compatible replacements:
- Fixing the loop scanning the format string to iterate in terms
of Unicode code points (using mb_wc()) rather than in terms
of "char" items.
- Using append_wc(my_wc_t) instead of append(char) to append
a single character to the result string.
- Using append_zerofill() instead of append_with_prefill() to
append date/time numeric components to the result string.
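A hypothetical way to reproduce the conditions: utf32 cannot be a client
character set, but it can be the connection character set, which makes
both the format string literal and the DATE_FORMAT() result utf32:

  SET character_set_connection= utf32;
  -- Both the format string and the result are now in a character set
  -- where one character occupies several bytes.
  SELECT DATE_FORMAT(DATE'2023-01-01', '%W %M %Y');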
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
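For example (a sketch; the exact warning text is not shown here):

  SET GLOBAL wsrep_forced_binlog_format= MIXED;
  -- CTAS is replicated with ROW format instead, and a warning is raised
  CREATE TABLE t2 AS SELECT * FROM t1;
  SHOW WARNINGS;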
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
otherwise, e.g.
./mtr main.mysql_install_db_win_admin,innodb
results in sporadic
mysql-test-run: *** ERROR: Could not run main.mysql_install_db_win_admin with 'innodb' combination(s)
depending on whether it processes the skip (not windows admin)
or the innodb.combinations first (if the skip is processed first,
the innodb combination isn't, making the code further on think that
the test doesn't have the innodb combination).
The error was specific to the threadpool/compressed protocol.
set_thd_idle() set the socket state to idle twice, causing an assert
failure. This happens if there is unread compressed data on the
connection after the query has finished. On the protocol level, this
means that a single compression packet contains multiple command
packets.
state() == s_prepared || state() == s_must_abort || state() == s_aborting ||
state() == s_cert_failed || state() == s_must_replay' failed
When the applier tries to execute a write rows event, it finds out
in table_def::compatible_with that a value is not compatible,
and sets the error and thd->is_slave_error, but thd->is_error()
is false. Later, in rpl_group_info::slave_close_thread_tables,
we commit the statement. This is bad for Galera because later in
apply_write_set we notice that the event apply was not successful
and try to roll back the transaction, but the wsrep transaction
is already in the s_committed state.
This is fixed in rpl_group_info::slave_close_thread_tables
so that in the Galera case we roll back the statement if
thd->is_slave_error or thd->is_error() is set. Then later we can
roll back the wsrep transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem is that the mysql.galera_slave_pos table is replicated,
thus it should use InnoDB to allow rolling back in case
of replay.
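One way to verify and adjust an existing installation (a sketch using
the table name from this message; not necessarily what the patch itself
does):

  SELECT engine FROM information_schema.tables
  WHERE table_schema= 'mysql' AND table_name= 'galera_slave_pos';
  -- the position table must be transactional to roll back on replay
  ALTER TABLE mysql.galera_slave_pos ENGINE= InnoDB;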
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The function setup_windows(), called at the prepare phase of processing a
select, builds a list of all window specifications used in the select. This
list is built on the statement memory and must be built only once.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MDEV-31455: main.events_stress or events.events_stress fails with view-protocol
MDEV-31457: main.delete_use_source fails (hangs) with view-protocol
Fixed tests:
main.sum_distinct-big, main.delete_use_source - disabled view-protocol
for some cases because they use transactions without autocommit;
main.events_stress, main.merge-big - disabled the service connection
for some queries since it is necessary that the SELECT query runs
in the same session.
.snapshot exists as a directory on NetApp storage and
should not be copied during the SST process.
Thanks to Daniel Czadek for the bug report.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that with the BINLOG statement you can execute
binlog events on the master as well (not only in the applier).
The fix removes the too-strict part wsrep_thd_is_applying from
the assertion. Note that the actual event in the test is intentionally
corrupted, to test that this error is ignored.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|| state() == s_prepared || state() == s_committing
|| state() == s_must_abort || state() == s_replaying'
failed.
CACHE INDEX and LOAD INDEX INTO CACHE are local operations.
Therefore, do not replicate them with Galera.
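The affected statements, for reference (hypothetical table and key cache
names):

  SET GLOBAL hot_cache.key_buffer_size= 1024*1024;
  CACHE INDEX t1 IN hot_cache;   -- local operation, not replicated
  LOAD INDEX INTO CACHE t1;      -- local operation, not replicated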
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=========
During commit, the server calls prepare_commit_versioned() to
determine whether the transaction modified system-versioned data.
Due to the binlog_do_db option, we disable the binlog for the
statement. But prepare_commit_versioned() is being
called only when the binlog is enabled for the statement.
Fix:
===
prepare_commit_versioned() should happen irrespective
of the binlog state. So if the server has any read-write operation
then we should call prepare_commit_versioned().
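A hypothetical scenario where this matters: the binlog is filtered out
for the statement, yet the transaction still modifies system-versioned
data:

  -- Assume the server was started with --binlog-do-db=other_db, so the
  -- binlog is disabled for statements against db1. The transaction is
  -- still read-write, so prepare_commit_versioned() must be called.
  CREATE TABLE db1.t1 (a INT) WITH SYSTEM VERSIONING;
  INSERT INTO db1.t1 VALUES (1);
  UPDATE db1.t1 SET a= 2;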
There are many filesystem-related errors that can occur with
MariaBackup. These are already output to stderr with a good description
of the error. Many of these are permission or resource (file descriptor)
limit issues, where the assertion and resulting core crash don't offer
developers anything more than the log message. To the user, assertions
and core crashes come across as poor error handling.
As such, we now return an error and handle it all the way up the stack.
The fix is to return a 3-state value from the Range_rowid_filter::build()
call:
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is
no need to roll back the transaction. For example, if the size of the
container to store row ids is not enough;
3. The filter was not built because of a fatal error, for example a
deadlock or a lock wait timeout from the storage engine. In this case we
should stop query plan execution and roll back the transaction.
Reviewed by: Sergey Petrunya