The semisync ack receiver thread (master side) is made to report
details of the errors it encounters.
In case of a 'magic byte' error, a hexdump of the received packet
is always written into the error log as a Note.
In other cases the exact server-level error is printed out
as a warning (as it may not be critical) under log_warnings > 2.
An MTR test is added for the magic byte error. The other cases are
covered by existing MTR tests, provided log_warnings > 2 is set.
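For instance, the extra diagnostics can be enabled at runtime (a sketch;
log_warnings is a dynamic variable):

  SET GLOBAL log_warnings = 3;  -- above 2, so the ack receiver writes
                                -- server-level errors as warnings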
test_if_skip_sort_order() should catch the join types JT_EQ_REF,
JT_CONST and JT_SYSTEM and skip sort order for these.
Such join types imply retrieving a single row of data, and sorting
of a single row can always be skipped.
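For example (a sketch with a hypothetical table t1), a lookup on a unique
key returns at most one row, so the ORDER BY below needs no filesort:

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, b INT);
  -- const access: at most one row, so sorting can be skipped
  EXPLAIN SELECT a FROM t1 WHERE pk = 1 ORDER BY b;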
Blaming hardware and poor libraries seems on the rare end of the
scale of things that go wrong. Accept the blame, apologize, and
kindly request a bug report.
Also, the URL printed with stack traces is changed to include mariadbd.
Thanks to Vlad for also raising that the blaming was wrong.
This commit validates changes of the wsrep_sst_method variable.
The validity check is the same as the one performed on the donor node
when an incoming SST request is parsed.
The commit also adds an MTR test, wsrep.wsrep_variables_sst_method, which
verifies that wsrep_sst_method can be successfully changed to acceptable
values and that the SET command results in an error if an invalid value
is entered.
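For example (a sketch):

  SET GLOBAL wsrep_sst_method = 'rsync';    -- accepted
  SET GLOBAL wsrep_sst_method = 'bogus!!';  -- now rejected with an error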
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Changing the code handling the sql_mode-dependent function DECODE():
- removing the parser tokens DECODE_MARIADB_SYM and DECODE_ORACLE_SYM
- removing the DECODE() related code from sql_yacc.yy/sql_yacc_ora.yy
- adding handling of DECODE() with the help of a new Create_func_func_decode
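As a reminder of the two behaviors (a sketch; t1 and crypt_str are
hypothetical):

  SET sql_mode=DEFAULT;
  SELECT DECODE(crypt_str, 'password') FROM t1;  -- decrypts an ENCODE()d string
  SET sql_mode=ORACLE;
  SELECT DECODE(2, 1, 'one', 2, 'two', 'def');   -- Oracle-style, returns 'two'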
If a replica is actively delaying a transaction when restarted (STOP
SLAVE/START SLAVE), then when the SQL thread is back up,
Seconds_Behind_Master will present as 0 until the configured
MASTER_DELAY has passed. That is, before the restart,
last_master_timestamp was updated to the timestamp of the delayed
event. Then, after the restart, the negation of sql_thread_caught_up
is skipped, because the event's timestamp has already been used
for last_master_timestamp, and the two updates are grouped together
in the same conditional block.
This patch fixes this by separating the negation of
sql_thread_caught_up out of the timestamp-dependent block, so it is
called any time an idle parallel slave queues an event to a worker.
Note that sql_thread_caught_up is still left in the check for internal
events, as SBM should remain idle in such cases so as not to "magically"
begin incrementing.
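A repro sketch (hypothetical 60-second delay):

  CHANGE MASTER TO MASTER_DELAY=60;
  START SLAVE;
  -- while an event is being delayed:
  STOP SLAVE;
  START SLAVE;
  -- before the fix, Seconds_Behind_Master reported 0 here until the
  -- configured delay had passed
  SHOW SLAVE STATUS;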
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
standard table KEY_COLUMN_USAGE should only show keys where
a user has some privileges on every column of the key
standard table TABLE_CONSTRAINTS should show tables where
a user has any non-SELECT privilege on the table or on any column
of the table
standard table REFERENTIAL_CONSTRAINTS is defined in terms of
TABLE_CONSTRAINTS, so the same rule applies. If the user
has no rights to see the REFERENCED_TABLE_NAME value, it should be NULL
SHOW INDEX (and STATISTICS table) is non-standard, but it seems
reasonable to use the same logic as for KEY_COLUMN_USAGE.
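For example (a sketch with hypothetical objects), a user holding a
privilege on only one column of a two-column key should not see that key:

  CREATE TABLE db1.t1 (a INT, b INT, UNIQUE KEY (a,b));
  GRANT SELECT (a) ON db1.t1 TO u1@localhost;
  -- as u1: the key (a,b) is not shown, u1 has no privilege on column b
  SELECT * FROM information_schema.KEY_COLUMN_USAGE
  WHERE TABLE_SCHEMA='db1' AND TABLE_NAME='t1';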
The reason for the assert was a missing FL_DDL flag on a CREATE-or-REPLACE
Query event.
The MDEV-27365 fix covered only the non-pre-existing-table execution branch,
so it did not account for the possibility of an implicit commit in
the middle of execution in the rollback branch, when the sequence table
being CREATEd actually replaces an existing one.
The pre-existing-table branch cleared the DDL modification
flag, so the query lost FL_DDL in the binlog, and its parallel execution
on the slave could end up with the assert indicating that the query
is raced by an event that follows it in binlog order.
Fixed by applying the MDEV-27365 pattern.
An MTR test is added to cover the rollback situation.
The described test passes with a generous number of mtr parallel
retries.
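The rollback branch is reached when the sequence name already exists,
e.g. (a sketch):

  CREATE SEQUENCE s1;
  -- pre-existing-table branch: s1 is dropped and re-created; the
  -- statement must keep FL_DDL when written to the binlog
  CREATE OR REPLACE SEQUENCE s1 START WITH 100;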
Backporting a part of MDEV-32026 (which also fixed MDEV-32025 in 11.3)
from 11.3 to 10.4.
The reported crash happened with --lower-case-table-names=2
on statements like:
ALTER DATABASE Db1 DEFAULT CHARACTER SET utf8;
ALTER DATABASE `#mysql50#D+b1` UPGRADE DATA DIRECTORY NAME;
lock_schema_name() expects a normalized database name
and asserts if a non-normalized name comes in.
mysql_alter_db_internal() and mysql_upgrade_db() get
a non-normalized database name in the parameter.
Fixing them to normalize the database name before passing
it to lock_schema_name().
The problem was that total order isolation (TOI) was started before
the storage engine implementing the sequence was known. This led to
a situation where the table implementing persistent storage
for a sequence, in the MyISAM case, was created on the applier, causing
errors later in test execution.
Therefore, in both CREATE SEQUENCE and ALTER TABLE on the table
implementing persistent storage, we need to check the implementing
storage engine after open_tables, and this check must be done on both
the master and the applier, because MyISAM as the implementing storage
engine does not support rollback.
Added tests to make sure that if the sequence-implementing storage
engine is MyISAM, or we try to alter it to MyISAM, the user gets an error
and the changes are not replicated.
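A sketch of the now-rejected statements (under Galera replication):

  CREATE SEQUENCE s1 ENGINE=MyISAM;  -- refused: no rollback support
  CREATE SEQUENCE s2 ENGINE=InnoDB;
  ALTER TABLE s2 ENGINE=MyISAM;      -- likewise refused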
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
In MDEV-31086, SET FOREIGN_KEY_CHECKS=0 cannot bypass checks that
make column types of foreign keys incompatible. An unfortunate
consequence is that adding AUTO_INCREMENT is considered
incompatible in Field_{num,decimal}::is_equal, even though for the
purpose of FK checks this isn't relevant.
innodb.foreign_key - pragmatically left wait_until_count_sessions.inc at
the end of the test to match the second line of the test.
Reporter: horrockss@github - https://github.com/MariaDB/mariadb-docker/issues/528
Co-Author: Marko Mäkelä <marko.makela@mariadb.com>
Reviewer: Nikita Malyavin
For the future reader, this was attempted:
Removing the AUTO_INCREMENT checks from Field_{num,decimal}::is_equal
failed in the following locations (noted for future fixing):
* MyISAM and Aria (not InnoDB) don't adjust AUTO_INCREMENT next number
correctly, hence added a test to main.auto_increment to catch
the next person that attempts this fix.
* InnoDB must perform an ALGORITHM=COPY to populate NULL values of
an original table (MDEV-19190 mtr test period.copy); this requires
ALTER_STORED_COLUMN_TYPE to be set in fill_alter_inplace_info,
which doesn't get hit because field->is_equal is true.
* InnoDB must not perform the change inplace (below patch)
* The innodb.innodb-alter-timestamp and main.partition_innodb tests would
also need further investigation.
For InnoDB's ha_innobase::check_if_supported_inplace_alter to support the
removal of the Field_{num,decimal}::is_equal AUTO_INCREMENT checks, the
following change would be needed:
diff --git a/storage/innobase/handler/handler0alter.cc b/storage/innobase/handler/handler0alter.cc
index a5ccb1957f3..9d778e2d39a 100644
--- a/storage/innobase/handler/handler0alter.cc
+++ b/storage/innobase/handler/handler0alter.cc
@@ -2455,10 +2455,15 @@ ha_innobase::check_if_supported_inplace_alter(
/* An AUTO_INCREMENT attribute can only
be added to an existing column by ALGORITHM=COPY,
but we can remove the attribute. */
- ut_ad((MTYP_TYPENR((*af)->unireg_check)
- != Field::NEXT_NUMBER)
- || (MTYP_TYPENR(f->unireg_check)
- == Field::NEXT_NUMBER));
+ if ((MTYP_TYPENR((*af)->unireg_check)
+ == Field::NEXT_NUMBER)
+ && (MTYP_TYPENR(f->unireg_check)
+ != Field::NEXT_NUMBER))
+ {
+ ha_alter_info->unsupported_reason = my_get_err_msg(
+ ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_AUTOINC);
+ DBUG_RETURN(HA_ALTER_INPLACE_NOT_SUPPORTED);
+ }
With this change the main.auto_increment test for bug #14573, under
innodb, will pass without the 2 --error ER_DUP_ENTRY entries.
The function header comment was updated to reflect the MDEV-31086
changes.
While cleaning up a failed CREATE TABLE LIKE <sequence>, `mysql_rm_table_no_locks`
erroneously attempted to remove all tables involved in the query, including
the source table (sequence).
Fixed by temporarily modifying `table_list` to ensure that only the
intended table is removed during the cleanup.
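A repro sketch (hypothetical names): if the CREATE fails part-way, the
cleanup must drop only the new table, not the source sequence:

  CREATE SEQUENCE s1;
  CREATE TABLE t1 LIKE s1;  -- on failure, only t1 may be removed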
The memory allocated for an instance of the class Item_direct_ref_to_item
was leaked on second execution of a query run as a prepared statement and
involving conversion of strings with different character sets.
The memory was leaked because a statement arena could already be
set by the moment the method
Type_std_attributes::agg_item_set_converter() is called.
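A sketch of the leaking pattern (hypothetical table; the conversion comes
from comparing columns with different character sets):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1,
                   b VARCHAR(10) CHARACTER SET utf8mb4);
  PREPARE stmt FROM 'SELECT a FROM t1 WHERE a = b';
  EXECUTE stmt;  -- first execution
  EXECUTE stmt;  -- second execution used to leak the converter item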
don't forget to reset mdl_context.m_deadlock_overweight when
taking the THD out of the cache - the history of previous connections
should not affect the weight in deadlock victim selection
(small cleanup of the test to help the correct merge)
When aggregating pairs BIT+NULL and NULL+BIT for result, e.g.
in COALESCE(), preserve the BIT data type (ignore explicit NULLs).
The same fix applied to YEAR.
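For example (a sketch):

  CREATE TABLE t1 (b BIT(8), y YEAR);
  CREATE TABLE t2 AS SELECT COALESCE(b, NULL) AS cb,
                            COALESCE(NULL, y) AS cy FROM t1;
  -- with the fix, cb is reported as BIT(8) and cy as YEAR
  SHOW CREATE TABLE t2;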
This bug could affect queries with IN subqueries in the WHERE clause
that use derived tables to which the split optimization potentially
could be applied.
When looking for the best split of a splittable derived table T, any key
access to T from a semi-join materialized table S used for lookups
must be excluded from consideration, because in the current implementation
of such tables as S, the values from its records cannot be used to access
other tables.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Author: Sergei Petrunia <sergey@mariadb.com>
Date: Wed Oct 11 19:02:25 2023 +0300
MDEV-32301: Server crashes at Arg_comparator::compare_row
In Item_bool_rowready_func2::build_clone(): if we're setting
clone->cmp.comparators=0
also set
const_item_cache=0
as the Item is currently in a state where one cannot compute it.
A subquery of the form "(SELECT not_null_value LIMIT 1 OFFSET 1)" will
produce no rows, which will translate into a scalar SQL NULL value.
The code in Item_singlerow_subselect::fix_length_and_dec() failed to
take the LIMIT/OFFSET clause into account and used to set
item_subselect->maybe_null=0, despite the fact that SQL NULL will be
produced.
If such a subselect was used in ORDER BY, this would cause a crash in
the filesort() code when it got a NULL value for a not-nullable item.
Also made subselect_engine::no_tables() a const function.
The code inside Item_subselect::fix_fields() could fail to check
whether the left expression contained an Item_row, like this:
(('x', 1.0) ,1) IN (SELECT 'x', 1.23 FROM ... UNION ...)
In order to hit the failure, the first SELECT of the subquery had
to be a degenerate no-tables select. In this case, execution will
not enter into Item_in_subselect::create_row_in_to_exists_cond()
and will not check if left_expr is composed of scalars.
But the subquery is a UNION so as a whole it is not degenerate.
We try to create an expression cache for the subquery.
We create a temporary table from the left_expr columns. No field is
created for the Item_row. Then, we crash when trying to add an index
over a non-existent field.
Fixed by moving the left_expr cardinality check to a point in
check_and_do_in_subquery_rewrites() which gets executed for all
cases.
It's better to make the check early so we don't have to care about
subquery rewrite code hitting Item_row in left_expr.
The crash inside my_vsnprintf_utf32() itself was legitimate, because
the caller methods:
Field_string::sql_rpl_type()
Field_varstring::sql_rpl_type()
misused the charset library and sent pure ASCII data to the
virtual function snprintf() of a utf32 CHARSET_INFO.
It was wrong to use Field::charset() in sql_rpl_type().
We're printing the metadata (the data type) here, not the column data.
The string containing the data type of a CHAR/VARCHAR column
is a pure ASCII string.
Fixing to use res->charset() for printing, like all virtual
implementations of sql_type() do.
Review was done by Andrei Elkin.
Thanks to Andrei for proposing MTR test improvements.
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:
outerLoop:
FOR
...
innerLoop:
WHILE
...
ITERATE innerLoop;
...
END WHILE;
...
END FOR;
It erroneously generated integer-increment code for the outer FOR loop.
There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop".
2. It was always an integer increment, even in the case of FOR cursor
loops (see the runnable sketch below).
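A runnable sketch of the affected pattern (hypothetical procedure):

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE j INT DEFAULT 0;
    outerLoop:
    FOR i IN 1..3
    DO
      innerLoop:
      WHILE j < 10
      DO
        SET j = j + 1;
        ITERATE innerLoop;  -- before the fix this behaved like
                            -- "ITERATE outerLoop"
      END WHILE;
    END FOR;
  END$$
  DELIMITER ;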
Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not create a dedicated
stack entry.
The old code erroneously assumed that sp_pcontext::m_for_loop
either describes the most inner loop (in case the inner loop is FOR),
or is empty (in case the inner loop is not FOR).
But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
it describes the closest FOR loop, even if this FOR loop has nested
non-FOR loops inside.
So when we're near the ITERATE statement in the above script,
sp_pcontext::m_for_loop is not empty - it stores information about
the FOR loop labeled as "outerLoop:".
Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
to remember the explicit or auto-generated label corresponding
to the start of the FOR body. It's used during generation
of "ITERATE loop_label" code to check if "loop_label" belongs
to the current FOR loop pointed by sp_pcontext::m_for_loop,
or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and
sp_for_loop_cursor_iterate() to reuse the code between
methods handling:
* ITERATE
* END FOR
- Adding a test of Lex_for_loop::is_for_loop_cursor()
to generate code for either a cursor fetch or an integer increment.
Before this change, it always erroneously generated the
integer-increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method:
Lex_for_loop_st::init(const Lex_for_loop_st &other)
Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
Problem:
Item_func_date_format::val_str() and make_date_time() did not take into
account that the format string and the result string
(separately or at the same time) can be of a tricky character set
like UCS2, UTF16, UTF32. As a result, DATE_FORMAT() could generate
an ill-formed result which crashed on DBUG_ASSERTs testing well-formedness
in other parts of the code.
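A sketch of the affected call, forcing a tricky character set on the
format string:

  -- the format string is UCS2; before the fix the result could be
  -- ill-formed and trip assertions in debug builds
  SELECT DATE_FORMAT(DATE'2023-01-01',
                     CONVERT('%Y-%m-%d %W' USING ucs2));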
Fix:
1. class String changes
Removing String::append_with_prefill(). It was not compatible with
tricky character sets. Also it was inconvenient to use and required
too much duplicate code on the caller side.
Adding String::append_zerofill() instead. It's compatible with tricky
character sets and is easier to use.
Adding helper methods Static_binary_string::q_append_wc() and
String::append_wc(), to append a single wide character
(a Unicode code point in my_wc_t).
2. storage/spider changes
Removing spider_string::append_with_prefill().
It used String::append_with_prefill() inside, but was itself unused.
3. Changing code pieces in make_date_time() that were incompatible with
tricky character sets to compatible replacements:
- Fixing the loop scanning the format string to iterate in terms
of Unicode code points (using mb_wc()) rather than in terms
of "char" items.
- Using append_wc(my_wc_t) instead of append(char) to append
a single character to the result string.
- Using append_zerofill() instead of append_with_prefill() to
append date/time numeric components to the result string.
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
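e.g. (a sketch):

  SET GLOBAL wsrep_forced_binlog_format = STATEMENT;
  CREATE TABLE t2 AS SELECT * FROM t1;  -- replicated as ROW, with a warning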
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The error was specific to the threadpool and the compressed protocol.
set_thd_idle() set the socket state to idle twice, causing an assert
failure.
This happens if there is unread compressed data on the connection after
the query has finished. On the protocol level, this means a single
compressed packet contains multiple command packets.
state() == s_prepared || state() == s_must_abort || state() == s_aborting ||
state() == s_cert_failed || state() == s_must_replay' failed
When the applier tries to execute a write rows event, it finds out
in table_def::compatible_with that a value is not compatible,
and sets an error and thd->is_slave_error, but thd->is_error()
is false. Later, in rpl_group_info::slave_close_thread_tables,
we commit the statement. This is bad for Galera, because later in
apply_write_set we notice that the event apply was not successful
and try to roll back the transaction, but the wsrep transaction
is already in the s_committed state.
This is fixed in rpl_group_info::slave_close_thread_tables
so that in the Galera case we roll back the statement if
thd->is_slave_error or thd->is_error() is set. Then later we can
roll back the wsrep transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The function setup_windows(), called at the prepare phase of processing a
select, builds a list of all window specifications used in the select.
This list is built on the statement memory and must be built only once.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The problem was that with the BINLOG statement you can execute
binlog events on the master as well (not only in the applier).
The fix removes the too-strict wsrep_thd_is_applying part from the
assertion. Note that the actual event in the test is intentionally
corrupted, to test whether this error is properly ignored.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|| state() == s_prepared || state() == s_committing
|| state() == s_must_abort || state() == s_replaying'
failed.
CACHE INDEX and LOAD INDEX INTO CACHE are local operations.
Therefore, do not replicate them with Galera.
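For example, neither of these statements is replicated any more
(hypothetical key cache and table):

  SET GLOBAL hot_cache.key_buffer_size = 1024*1024;
  CACHE INDEX t1 IN hot_cache;   -- local to this node
  LOAD INDEX INTO CACHE t1;      -- local to this node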
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=========
During commit, the server calls prepare_commit_versioned() to
determine whether the transaction modified system-versioned data.
Due to the binlog_do_db option, we may disable the binlog for the
statement, but prepare_commit_versioned() was being
called only when the binlog is enabled for the statement.
Fix:
===
prepare_commit_versioned() should happen irrespective
of the binlog state. So if there is any read-write operation,
we should call prepare_commit_versioned().
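A sketch of the affected setup (hypothetical names; the server runs with
--binlog-do-db=other_db, so binlogging is disabled for db1):

  CREATE TABLE db1.t1 (a INT) WITH SYSTEM VERSIONING;
  INSERT INTO db1.t1 VALUES (1);
  -- versioning metadata must still be prepared at commit, even though
  -- the statement is not binlogged
  UPDATE db1.t1 SET a = 2;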
The fix is to return a 3-state value from the Range_rowid_filter::build()
call:
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is
no need to roll back the transaction. For example, the size of the
container to store row ids was not big enough;
3. The filter was not built because of a fatal error, for example, a
deadlock or a lock wait timeout from the storage engine. In this case we
should stop query plan execution and roll back the transaction.
Reviewed by: Sergey Petrunya
When resolving a column from the HAVING clause, a new Item_field
object may be created inside Item_ref::fix_fields().
But the object is created with an empty name resolution context,
which then leads to a debug assertion failure during
Item_field::fix_fields().
The solution is to pass the correct name resolution context
when creating the Item_field object.
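A sketch of the affected query shape (hypothetical table):

  CREATE TABLE t1 (a INT, b INT);
  -- "a" in the HAVING clause resolves through an Item_ref; the new
  -- Item_field must receive a proper name resolution context
  SELECT a, MAX(b) FROM t1 GROUP BY a HAVING a > 0;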
Reviewer: Oleksandr Byelkin (sanja@mariadb.com)