In MDEV-31086, SET FOREIGN_KEY_CHECKS=0 cannot bypass checks that
flag the column types of foreign keys as incompatible. An unfortunate
consequence is that adding an AUTO_INCREMENT attribute is considered
an incompatible change in Field_{num,decimal}::is_equal, even though
this is irrelevant for the purpose of FK checks.
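A minimal sketch of the affected scenario (hypothetical table names,
not from the original report):

  SET FOREIGN_KEY_CHECKS=0;
  CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE t2 (fk INT, FOREIGN KEY (fk) REFERENCES t1 (id))
    ENGINE=InnoDB;
  # Adding AUTO_INCREMENT to the referenced column was rejected as an
  # incompatible FK change even with the checks disabled:
  ALTER TABLE t1 MODIFY id INT AUTO_INCREMENT;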
innodb.foreign_key - pragmatically left wait_until_count_sessions.inc
at the end of the test to match the second line of the test.
Reporter: horrockss@github - https://github.com/MariaDB/mariadb-docker/issues/528
Co-Author: Marko Mäkelä <marko.makela@mariadb.com>
Reviewer: Nikita Malyavin
For the future reader, this was attempted:
Removing the AUTO_INCREMENT checks from Field_{num,decimal}::is_equal
failed in the following locations (noted for future fixing):
* MyISAM and Aria (not InnoDB) don't adjust the AUTO_INCREMENT next
number correctly, hence a test was added to main.auto_increment to
catch the next person that attempts this fix.
* InnoDB must perform an ALGORITHM=COPY to populate NULL values of
an original table (MDEV-19190, mtr test period.copy); this requires
ALTER_STORED_COLUMN_TYPE to be set in fill_alter_inplace_info,
which doesn't get hit because field->is_equal is true.
* InnoDB must not perform the change inplace (below patch).
* The innodb.innodb-alter-timestamp and main.partition_innodb tests
would also need further investigation.
For InnoDB's ha_innobase::check_if_supported_inplace_alter to support
the removal of the Field_{num,decimal}::is_equal AUTO_INCREMENT checks,
the following change would be needed:
diff --git a/storage/innobase/handler/handler0alter.cc b/storage/innobase/handler/handler0alter.cc
index a5ccb1957f3..9d778e2d39a 100644
--- a/storage/innobase/handler/handler0alter.cc
+++ b/storage/innobase/handler/handler0alter.cc
@@ -2455,10 +2455,15 @@ ha_innobase::check_if_supported_inplace_alter(
/* An AUTO_INCREMENT attribute can only
be added to an existing column by ALGORITHM=COPY,
but we can remove the attribute. */
- ut_ad((MTYP_TYPENR((*af)->unireg_check)
- != Field::NEXT_NUMBER)
- || (MTYP_TYPENR(f->unireg_check)
- == Field::NEXT_NUMBER));
+ if ((MTYP_TYPENR((*af)->unireg_check)
+ == Field::NEXT_NUMBER)
+ && (MTYP_TYPENR(f->unireg_check)
+ != Field::NEXT_NUMBER))
+ {
+ ha_alter_info->unsupported_reason = my_get_err_msg(
+ ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_AUTOINC);
+ DBUG_RETURN(HA_ALTER_INPLACE_NOT_SUPPORTED);
+ }
With this change, the main.auto_increment test for bug #14573, under
innodb, will pass without the two --error ER_DUP_ENTRY entries.
The function header comment was updated to reflect the MDEV-31086
changes.
While cleaning up a failed CREATE TABLE LIKE <sequence>,
`mysql_rm_table_no_locks` erroneously attempted to remove all tables
involved in the query, including the source table (the sequence).
The fix temporarily modifies `table_list` to ensure that only the
intended table is removed during the cleanup.
The memory allocated for an instance of the class
Item_direct_ref_to_item was leaked on the second execution of a query
run as a prepared statement and involving conversion of strings with
different character sets.
The memory leak was caused by the fact that a statement arena could
already be set by the moment the method
Type_std_attributes::agg_item_set_converter() is called.
don't forget to reset mdl_context.m_deadlock_overweight when
taking the THD out of the cache - the history of previous connections
should not affect the weight in deadlock victim selection
(small cleanup of the test to help the correct merge)
When aggregating pairs BIT+NULL and NULL+BIT for result, e.g.
in COALESCE(), preserve the BIT data type (ignore explicit NULLs).
The same fix applied to YEAR.
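A minimal illustration of the intended behavior (hypothetical table):

  CREATE TABLE t1 (b BIT(8), y YEAR);
  CREATE TABLE t2 AS SELECT COALESCE(b, NULL) AS cb,
                            COALESCE(NULL, y) AS cy FROM t1;
  # cb should preserve BIT(8) and cy should preserve YEAR,
  # rather than being converted to another data type:
  SHOW CREATE TABLE t2;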
This bug could affect queries with IN subqueries in the WHERE clause
that use derived tables to which split optimization potentially could
be applied.
When looking for the best split of a splittable derived table T, any
key access to table T from a semi-join materialized table S used for
lookups must be excluded from consideration, because in the current
implementation of such tables as S the values from its records cannot
be used to access other tables.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Author: Sergei Petrunia <sergey@mariadb.com>
Date: Wed Oct 11 19:02:25 2023 +0300
MDEV-32301: Server crashes at Arg_comparator::compare_row
In Item_bool_rowready_func2::build_clone(): if we're setting
clone->cmp.comparators=0
also set
const_item_cache=0
as the Item is currently in a state where one cannot compute it.
A subquery of the form "(SELECT not_null_value LIMIT 1 OFFSET 1)" will
produce no rows, which translates into a scalar SQL NULL value.
The code in Item_singlerow_subselect::fix_length_and_dec() failed to
take the LIMIT/OFFSET clause into account and used to set
item_subselect->maybe_null=0, despite that SQL NULL will be produced.
If such a subselect was used in ORDER BY, this would cause a crash in
the filesort() code when it got a NULL value for a not-nullable item.
Also made subselect_engine::no_tables() a const function.
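A repro sketch following the pattern above (hypothetical table and
data):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1);
  # The subquery produces no rows, i.e. a scalar NULL, yet was marked
  # as not-nullable, which crashed filesort():
  SELECT a FROM t1 ORDER BY (SELECT a FROM t1 LIMIT 1 OFFSET 1);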
The code inside Item_subselect::fix_fields() could fail to check
that the left expression contained an Item_row, like this:
(('x', 1.0) ,1) IN (SELECT 'x', 1.23 FROM ... UNION ...)
In order to hit the failure, the first SELECT of the subquery had
to be a degenerate no-tables select. In this case, execution will
not enter into Item_in_subselect::create_row_in_to_exists_cond()
and will not check if left_expr is composed of scalars.
But the subquery is a UNION, so as a whole it is not degenerate.
We try to create an expression cache for the subquery.
We create a temp.table from left_expr columns. No field is created
for the Item_row. Then, we crash when trying to add an index over a
non-existent field.
Fixed by moving the left_expr cardinality check to a point in
check_and_do_in_subquery_rewrites() which gets executed for all
cases.
It's better to make the check early so we don't have to care about
subquery rewrite code hitting Item_row in left_expr.
The crash inside my_vsnprintf_utf32() happened for a valid reason:
the caller methods
Field_string::sql_rpl_type()
Field_varstring::sql_rpl_type()
misused the charset library and sent pure ASCII data to the
virtual function snprintf() of a utf32 CHARSET_INFO.
It was wrong to use Field::charset() in sql_rpl_type().
We're printing the metadata (the data type) here, not the column data.
The string containing the data type of a CHAR/VARCHAR column
is a pure ASCII string.
Fixing to use res->charset() to print, like all virtual implementations
of sql_type() do.
Review was done by Andrei Elkin.
Thanks to Andrei for proposing MTR test improvements.
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:
outerLoop:
FOR
  ...
  innerLoop:
  WHILE
    ...
    ITERATE innerLoop;
    ...
  END WHILE;
  ...
END FOR;
It erroneously generated the integer increment code for the outer FOR
loop. There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop".
2. It was always an integer increment, even in the case of FOR cursor
loops. (A runnable sketch of the first problem follows.)
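A self-contained repro sketch (hypothetical procedure, following the
schematic above):

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE total INT DEFAULT 0;
    outerLoop:
    FOR i IN 1..3 DO
      innerLoop:
      WHILE total < 10 DO
        SET total = total + 1;
        ITERATE innerLoop;       # must re-test the WHILE condition;
                                 # with the bug it advanced the FOR counter
        SET total = total + 100; # never reached
      END WHILE;
    END FOR;
    SELECT total;                # expect 10
  END$$
  DELIMITER ;
  CALL p1();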
Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated
stack entry.
The old code erroneously assumed that sp_pcontext::m_for_loop
either describes the innermost loop (in case the innermost loop is
FOR), or is empty (in case the innermost loop is not FOR).
But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
it describes the closest FOR loop, even if this FOR loop has nested
non-FOR loops inside.
So when we're near the ITERATE statement in the above script,
sp_pcontext::m_for_loop is not empty - it stores information about
the FOR loop labeled as "outerLoop:".
Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
to remember the explicit or auto-generated label corresponding
to the start of the FOR body. It's used during generation
of "ITERATE loop_label" code to check if "loop_label" belongs
to the current FOR loop pointed by sp_pcontext::m_for_loop,
or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and
sp_for_loop_cursor_iterate() to reuse the code between
methods handling:
* ITERATE
* END FOR
- Adding a test for Lex_for_loop::is_for_loop_cursor()
to generate code for either a cursor fetch or an integer increment.
Before this change, it always erroneously generated the integer
increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method:
Lex_for_loop_st::init(const Lex_for_loop_st &other)
Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
Problem:
Item_func_date_format::val_str() and make_date_time() did not take into
account that the format string and the result string
(separately or at the same time) can be of a tricky character set
like UCS2, UTF16, UTF32. As a result, DATE_FORMAT() could generate
an ill-formed result which crashed on DBUG_ASSERTs testing well-formedness
in other parts of the code.
Fix:
1. class String changes
Removing String::append_with_prefill(). It was not compatible with
tricky character sets. Also it was inconvenient to use and required
too much duplicate code on the caller side.
Adding String::append_zerofill() instead. It's compatible with tricky
character sets and is easier to use.
Adding helper methods Static_binary_string::q_append_wc() and
String::append_wc(), to append a single wide character
(a Unicode code point in my_wc_t).
2. storage/spider changes
Removing spider_string::append_with_prefill().
It used String::append_with_prefill() inside, but was itself unused.
3. Changing tricky charset incompatible code pieces in make_date_time()
to compatible replacements:
- Fixing the loop scanning the format string to iterate in terms
of Unicode code points (using mb_wc()) rather than in terms
of "char" items.
- Using append_wc(my_wc_t) instead of append(char) to append
a single character to the result string.
- Using append_zerofill() instead of append_with_prefill() to
append date/time numeric components to the result string.
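A minimal illustration of the tricky-charset case (hypothetical
values):

  # A format string in UCS2; before the fix this could produce an
  # ill-formed result and trigger the well-formedness DBUG_ASSERTs:
  SELECT HEX(DATE_FORMAT(DATE'2023-01-01',
                         CONVERT('%Y-%m-%d' USING ucs2)));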
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
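A sketch of the resulting behavior (hypothetical table names):

  SET GLOBAL wsrep_forced_binlog_format=MIXED;
  # The CTAS below proceeds with ROW format and raises a warning:
  CREATE TABLE t2 AS SELECT * FROM t1;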
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The error was specific to the threadpool/compressed protocol.
set_thd_idle() set the socket state to idle twice, causing an assert
failure. This happens if there is unread compressed data on the
connection after the query has finished. On a protocol level, this
means a single compression packet contains multiple command packets.
state() == s_prepared || state() == s_must_abort || state() == s_aborting ||
state() == s_cert_failed || state() == s_must_replay' failed
When the applier tries to execute a write rows event, it finds out
in table_def::compatible_with that a value is not compatible,
and sets an error and thd->is_slave_error, but thd->is_error()
is false. Later, in rpl_group_info::slave_close_thread_tables,
we commit the statement. This is bad for Galera, because later in
apply_write_set we notice that the event apply was not successful
and try to roll back the transaction, but the wsrep transaction
is already in the s_committed state.
This is fixed in rpl_group_info::slave_close_thread_tables
so that in the Galera case we roll back the statement if
thd->is_slave_error or thd->is_error() is set. Then later we can
roll back the wsrep transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The function setup_windows(), called at the prepare phase of processing
a select, builds a list of all window specifications used in the
select. This list is built on the statement memory and must be built
only once.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The problem was that with the BINLOG statement you can execute
binlog events on the master as well (not only in the applier).
The fix removes the too-strict part wsrep_thd_is_applying from the
assertion. Note that the actual event in the test is intentionally
corrupted, to test whether this error is ignored.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|| state() == s_prepared || state() == s_committing
|| state() == s_must_abort || state() == s_replaying'
failed.
CACHE INDEX and LOAD INDEX INTO CACHE are local operations.
Therefore, do not replicate them with Galera.
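For example (hypothetical MyISAM table t1; these statements now stay
local to the node):

  SET GLOBAL hot_cache.key_buffer_size = 131072;
  CACHE INDEX t1 IN hot_cache;
  LOAD INDEX INTO CACHE t1;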
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=========
During commit, the server calls prepare_commit_versioned to
determine whether the transaction modified system-versioned data.
Due to the binlog_do_db option, we disable the binlog for the
statement. But prepare_commit_versioned() is being
called only when the binlog is enabled for the statement.
Fix:
===
prepare_commit_versioned() should happen irrespective
of the binlog state. So if the server has any read-write operation,
then we should call prepare_commit_versioned().
The fix is to return a 3-state value from the
Range_rowid_filter::build() call:
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is
no need to roll back the transaction. For example, the size of the
container to store row ids may not be enough;
3. The filter was not built because of a fatal error, for example, a
deadlock or a lock wait timeout from the storage engine. In this case
we should stop query plan execution and roll back the transaction.
Reviewed by: Sergey Petrunya
When resolving a column from the HAVING clause, a new Item_field
object may be created inside Item_ref::fix_fields().
But the object is created with an empty name resolution context,
which then leads to a debug assertion failure during
Item_field::fix_fields().
The solution is to pass the correct name resolution context
when creating the Item_field object.
Reviewer: Oleksandr Byelkin (sanja@mariadb.com)
On creation of a VIEW that depends on a stored routine, an instance of
the class Item_func_sp is allocated on the memory root of the SP
statement. It happens since mysql_make_view() calls the method
THD::activate_stmt_arena_if_needed()
before parsing the definition of the view.
On the other hand, when sp_head's rcontext is created, an instance of
the class Field referenced by the data member
Item_func_sp::result_field
is allocated on the Item_func_sp's Query_arena (call arena) that is
set up inside the method
Item_sp::execute_impl
just before calling the method
sp_head::execute_function()
On return from the method sp_head::execute_function() all items allocated
on the Item_func_sp's Query_arena are released and its memory root is freed
(see implementation of the method Item_sp::execute_impl). As a consequence,
the pointer
Item_func_sp::result_field
references deallocated memory. Later, when the method
sp_head::execute
cleans up the items allocated for the just-executed SP instruction,
the method Item_func_sp::cleanup is invoked and tries to delete an
object referenced by the data member Item_func_sp::result_field that
points to already deallocated memory, which results in abnormal server
termination.
To fix the issue, the current active arena shouldn't be switched to
a statement arena inside the function mysql_make_view() that is invoked
indirectly by the method sp_head::rcontext_create. It is implemented by
introducing the new Query_arena state STMT_SP_QUERY_ARGUMENTS, which is
set when an explicit Query_arena is created for placing SP arguments
and other caller-side items used during SP execution. Then the method
THD::activate_stmt_arena_if_needed() checks the Query_arena's state and
returns immediately without switching to the statement's arena.
The SQL thread and a user connection executing SHOW SLAVE STATUS
have a race condition on Last_SQL_Errno, such that on a slave which
previously errored and stopped, on its next start, SHOW SLAVE STATUS
can show that the SQL Thread is running while the previous error is
also showing.
The fix is to move the clearing of the last error, which happens when
the SQL thread starts, to before the setting of the status of
Slave_SQL_Running.
Thanks to Kristian Nielsen for his work diagnosing the problem!
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
Remove TLSv1.1 from the default tls_version system variable.
Output a warning if TLSv1.0 or TLSv1.1 are selected.
Thanks Tingyao Nian for the feature request.
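To inspect the resulting default, or to opt back into the older
protocols if really needed (illustrative, not recommended):

  SHOW GLOBAL VARIABLES LIKE 'tls_version';
  # e.g. in my.cnf: tls_version=TLSv1.1,TLSv1.2,TLSv1.3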
The problem was that if wsrep_notify_cmd was set, it was called
with the new status "joined"; it tries to connect to the server
to update some table, but the server isn't initialized yet,
it's not listening for connections. So the server waits for the
script to finish, the script waits for the mariadb client to connect,
and the client cannot connect, because the server isn't listening.
The fix is to call the script only when Galera has already formed a
view or when it is synced or a donor.
This fix also enables the following test cases:
* galera.MW-284
* galera.galera_binlog_checksum
* galera_var_notify_ssl_ipv6
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that parallel replication of temporary tables using
statement-based binlogging could overlap the COMMIT in one thread with a DML
or DROP TEMPORARY TABLE in another thread using the same temporary table.
Temporary tables are not safe for concurrent access, so this caused
references to freed memory and possibly other nastiness.
The fix is to disable the optimisation with overlapping commits of one
transaction with the start of a later transaction, when temporary tables are
in use. Then the following event groups will be blocked from starting until
the one using temporary tables is completed.
This also fixes occasional test failures of rpl.rpl_parallel_temptable seen
in Buildbot.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Recalculate the long unique hash in Write_rows_log_event
and Update_rows_log_event.
Normally, generated columns (stored and indexed virtual)
are deterministic and their values don't need to be recalculated
on the slave, as they're already present in the row image.
But the long unique hash function was changed in MDEV-27653,
so a row event from an old master will have the old hash,
but a table created on the new slave will need a new hash.
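The affected tables are those with hash-based "long unique" keys,
e.g. (hypothetical table):

  CREATE TABLE t1 (a BLOB, UNIQUE KEY (a) USING HASH);
  # The hidden hash column backing this key is what must be
  # recalculated when applying row events from an old master.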
Creating the test file for the case-insensitivity check gives a basic
warning, and the next thing a user might see is an abort.
ProtectHome and other systemd settings protect system services from
accessing user data. Unfortunately some of our users do put things
under /home due to space or other reasons.
Rather than enumerate the systemd options in a very clunky, fragile
way, we put an error associated with the "Can't create test file"
message and hope the user can work it out from there.
%M tip thanks to Sergei.
Fixed a memory leak taking place on executing a prepared statement or
a stored routine that queries a view constructed
on an information schema table. For example,
let's consider the following definition of the view 'v1':
CREATE VIEW v1 AS SELECT table_name FROM information_schema.views
ORDER BY table_name;
Querying this view in PS mode results in an assert hit:
PREPARE stmt FROM "SELECT * FROM v1";
EXECUTE stmt;
EXECUTE stmt; (*)
Running the statement marked with (*) leads to a crash in case the
server is built with the mode to control allocation of memory from the
SP/PS memory root on the second and following executions of a PS/SP.
The reason for the memory leak is that the memory allocated on
processing of the FRM file for the view was requested from the PS/SP
memory root, meaning that this memory would be released only when the
stored routine is evicted from the SP cache or the prepared statement
is deallocated, which typically happens on termination of a user
session.
To fix the issue, switch to a memory root specially created for
allocation of the short-lived objects requested on parsing the FRM
file.
In case a table accessed by a PS/SP is dropped after the first
execution of the PS/SP, and a view is created with the same name as the
table just dropped, the second execution of the PS/SP leads to
allocation of memory on the SP/PS memory root already marked as read
only on the first execution.
For example, the following test case:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM "INSERT INTO t1 VALUES (1)";
EXECUTE stmt;
DROP TABLE t1;
CREATE VIEW t1 AS SELECT 1;
--error ER_NON_INSERTABLE_TABLE
EXECUTE stmt; # (*)
DROP VIEW t1;
will hit an assert on running the statement 'EXECUTE stmt' marked with
(*) when allocation of memory is performed on parsing the view.
Memory allocation is requested inside the function mysql_make_view
when a view definition is being parsed. In order to avoid an assertion
failure, the call of the function mysql_make_view() must be moved after
the invocation of the function check_and_update_table_version().
It will result in re-preparing the whole PS statement or the current
SP instruction, which will free the currently allocated items and reset
the read_only flag for the memory root.
Moved the call of the function check_and_update_table_version() to just
before the place where the function extend_table_list() is invoked,
in order to avoid allocation of memory on a PS/SP memory root
marked as read only. That allocation happens because the function
extend_table_list() invokes sp_add_used_routine() to add a trigger
created for the table in the time frame between executions of the
statement EXECUTE `stmt_id`.
For example, the following test case
create table t1 (a int);
prepare stmt from "insert into t1 (a) value (1)";
execute stmt;
create trigger t1_bi before insert on t1 for each row
set @message= new.a;
execute stmt; # (*)
adds the trigger t1_bi to the list of used routines, which involves
allocation of memory on the PS memory root that has already been marked
as read only on the first run of the statement 'execute stmt'.
As a result, when the statement marked with (*) is executed, it hits
the assert.
To fix the issue, call the function check_and_update_table_version()
before the invocation of extend_table_list() to force re-compilation of
the PS/SP, which resets the read-only flag of its memory root.
This patch adds support for controlling the memory allocation
done by an SP/PS that could happen on the second and following
executions. As soon as an SP or PS has been executed the first time,
its memory root is marked as read only, since no further memory
allocation should be performed on it. In case such an allocation does
take place, it hits the assert for the invariant that no new memory
allocations take place once the SP/PS has been marked as read only.
The feature for control of memory allocation made on behalf of an SP/PS
is turned on when both the debug build is on and the cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT is set.
The reason for introducing the new cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT
to control memory allocation on the second and following executions of
an SP/PS is that in the current server implementation there are too
many places where such memory allocation takes place. As soon as all
such incorrect allocations are fixed, the cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT
can be removed, and the control of memory allocation made on the second
and following executions can be turned on for every debug build. Until
every incorrect memory allocation is fixed, it makes sense to guard
the checking of memory allocation on a read-only memory root behind an
extra cmake option, else we would get a lot of failing tests on
buildbot.
Moreover, fixing all the incorrect memory allocations could take a
pretty long period of time, so to introduce the feature without having
to wait until all places throughout the source code are fixed, it makes
sense to add the new cmake option.
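For example, an illustrative build invocation enabling the check:

  cmake . -DCMAKE_BUILD_TYPE=Debug -DWITH_PROTECT_STATEMENT_MEMROOT:BOOL=ON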
For the clang compiler, the compiler flag -Wno-unused-but-set-variable
was set based on the compiler version. This approach could result in
false positive detection of the presence of the compiler option, since
only the first three groups of digits in the compiler version are taken
into account, and it could lead to inaccuracy in determining the
compiler features that are supported.
The correct way to detect the options supported by a compiler is to use
the macro MY_CHECK_CXX_COMPILER_FLAG and to check the result
variable with the prefix have_CXX__.
So, to check whether the compiler does support the option
-Wno-unused-but-set-variable
the macro
MY_CHECK_CXX_COMPILER_FLAG(-Wno-unused-but-set-variable)
should be called and the result variable
have_CXX__Wno_unused_but_set_variable
be tested for an assigned value.
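A sketch of the pattern (the surrounding cmake wiring is illustrative):

  MY_CHECK_CXX_COMPILER_FLAG(-Wno-unused-but-set-variable)
  IF(have_CXX__Wno_unused_but_set_variable)
    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-but-set-variable")
  ENDIF()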