don't construct open ranges from prefix blob keys for < (less than),
just as is already done for > (greater than). Prefix KEY_PART doesn't
create a prefix Field for blobs
(see open_table_from_share() near "Create a new field for the key part"),
so stored_field_cmp_to_item() would compare the original field to the
value without taking the prefix length into account.
A simple "SET SESSION gtid_seq_no= DEFAULT" did not work, it would straight
up crash the server! Also, explicitly setting gtid_seq_no to 0 gave an error
in --gtid-strict-mode=1.
Setting to DEFAULT or 0 should disable any prior setting of
gtid_seq_no, so that the next transaction is allocated the next GTID
in sequence, as normal.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
differently react to SQL_MODE => unusable SHOW CREATE
Use abort_on_warning dependent on strict mode when creating the new
table, like it is done for copying data and for inplace alter.
MDEV-31749 sporadic assert in MDEV-30619 new test
If the workers of a parallel replica are busy (potentially with long
queues), but the SQL thread has no events left to distribute (so it
goes idle), then the next event that comes from the primary will
update mi->last_master_timestamp with its timestamp, even if the
workers have not yet finished.
This patch changes the parallel replica logic which updates
last_master_timestamp after idling: instead of relying solely on
sql_thread_caught_up (added in MDEV-29639), it now also consults the
rli queued/dequeued event counters.
That is, if the queued count is equal to the dequeued count, it
means all events have been processed and the replica is considered
idle when the driver thread has also distributed all events.
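Schematically, the new idle check can be sketched standalone as below;
queued_count, dequeued_count, sql_thread_caught_up and
maybe_update_lmt() are illustrative stand-ins for the rli members, not
the actual server code:

  #include <atomic>
  #include <cassert>
  #include <cstdint>
  #include <ctime>

  std::atomic<uint64_t> queued_count{0};    // events queued by the driver
  std::atomic<uint64_t> dequeued_count{0};  // events finished by workers
  std::atomic<bool> sql_thread_caught_up{true};
  time_t last_master_timestamp= 0;

  // Called when a new event arrives after the SQL thread idled: only
  // treat the replica as idle (and update last_master_timestamp) if
  // the workers have also drained everything that was distributed.
  void maybe_update_lmt(time_t event_ts)
  {
    bool workers_idle= queued_count.load() == dequeued_count.load();
    if (sql_thread_caught_up.load() && workers_idle)
      last_master_timestamp= event_ts;
  }

  int main()
  {
    queued_count= 4; dequeued_count= 4;
    maybe_update_lmt(100);               // replica truly idle: updated
    assert(last_master_timestamp == 100);
    queued_count= 5;                     // one event still with workers
    maybe_update_lmt(200);               // skipped: workers still busy
    assert(last_master_timestamp == 100);
  }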
Low-level details of the commit include:
- rpl_delayed_parallel_slave_sbm.test is renamed to
rpl_parallel_sbm.test to make it a more generalized test for
Seconds_Behind_Master on the parallel replica.
- pause_sql_thread_on_next_event usage was removed with the MDEV-30619
fixes. Rather than removing it entirely, we adapt it to the needs of
this test case.
- added a test case to cover the SBM spike of relay log read and LMT
update that was fixed by MDEV-29639.
- rpl_seconds_behind_master_spike.test is made to use
the negate_clock_diff_with_master debug eval.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
We introduce a simple plugin dependency mechanism. A plugin init
function may return HA_ERR_RETRY_INIT. If this happens during server
startup, when the server is trying to initialise all plugins, the
failed plugins will be retried until no more plugins succeed in
initialisation or want to be retried.
This will fix spider init bugs, which are caused in part by its
dependency on Aria for initialisation.
The reason we need a new return code, instead of treating every
failure as a request for retry, is that it may be impossible to clean
up after a failed plugin initialisation. Take InnoDB for example: it
has a global variable `buf_page_cleaner_is_active`, which may not
satisfy an assertion during a second initialisation try, probably
because InnoDB does not expect the initialisation to be called
twice.
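A hedged, self-contained sketch of that retry loop; plugin_sketch,
init_all_plugins() and the enum values are illustrative stand-ins, not
the real plugin registry code:

  #include <cstddef>
  #include <functional>
  #include <vector>

  enum init_result { INIT_OK, INIT_FAIL, HA_ERR_RETRY_INIT };

  struct plugin_sketch
  {
    std::function<init_result()> init;
    bool ready= false;
    bool failed= false;
  };

  // Keep making passes over the plugins: a pass that initialises at
  // least one plugin may unblock others that asked for a retry.  Stop
  // when nothing succeeds any more or nothing wants to be retried.
  void init_all_plugins(std::vector<plugin_sketch> &plugins)
  {
    bool progress;
    std::size_t retry_wanted;
    do
    {
      progress= false;
      retry_wanted= 0;
      for (auto &p : plugins)
      {
        if (p.ready || p.failed)
          continue;
        switch (p.init())
        {
        case INIT_OK:
          p.ready= true;
          progress= true;         // may unblock a dependent plugin
          break;
        case HA_ERR_RETRY_INIT:
          retry_wanted++;         // leave it for the next pass
          break;
        case INIT_FAIL:
          p.failed= true;         // permanent: a second init try may
          break;                  // be impossible to clean up for
        }
      }
    } while (progress && retry_wanted);
  }

  int main()
  {
    bool aria_ready= false;
    std::vector<plugin_sketch> plugins= {
      // "spider" needs "aria", so its first pass asks for a retry
      { [&]{ return aria_ready ? INIT_OK : HA_ERR_RETRY_INIT; } },
      { [&]{ aria_ready= true; return INIT_OK; } },   // "aria"
    };
    init_all_plugins(plugins);
    return !(plugins[0].ready && plugins[1].ready);
  }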
Since TLS server certificate verification is a client-only option,
this flag is removed from both the client (C/C) and the MariaDB
server capability flags.
This patch reverts commit 89d759b93e
(MySQL Bug #21543) and stores the server certificate validation
option in mysql->options.extensions.
Restrict vcol_cleanup_expr() in close_thread_tables() to simple locked
tables mode only. Prelocked mode is cleaned up like a normal statement:
in close_thread_table().
The first UPDATE under START TRANSACTION does nothing (nstate= nstate),
but nevertheless generates history. Since the update vector is empty we
get into the (!uvect->n_fields) branch, which only adds the history row
but does not do the update. After that we get the current row with a
wrong (old) row_start value, and because of that the second UPDATE tries
to insert the history row again, since it sees trx->id != row_start,
which is the guard against inserting multiple trx_id-based history rows
under the same transaction (we have the same trx_id, so we get a
duplicate error, which is what this bug demonstrates). But that attempt
fails anyway because the PK is based on row_end, which is constant under
the same transaction, so the PK didn't change.
The fix moves vers_make_update() to an earlier stage of
calc_row_difference(). It thus prepares the update vector before the
(!uvect->n_fields) check and never gets into that branch, hence there
is no need to handle versioning inside that condition anymore.
Now trx->id and row_start are equal after the first UPDATE and we don't
try to insert a second history row.
== Cleanups and improvements ==
ha_innobase::update_row():
vers_set_fields and vers_ins_row are cleaned up into a direct condition
check. The SQLCOM_ALTER_TABLE check is not used any more as it is dead
code; an assertion is done instead.
upd_node->is_delete is set in calc_row_difference() just to keep the
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql() as it is required for
ha_innobase::delete_row() as well.
row_ins_duplicate_error_in_clust():
Restrict DB_FOREIGN_DUPLICATE_KEY to more precise conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
On CREATE TABLE tmp AS SELECT ... we exited Item_func::fix_fields()
with an error. fix_fields_if_needed('foo' or 'bar') failed and we
returned true, but had already changed const_item_cache. So the item
was left in an inconsistent state: fixed == false and const_item_cache
== false.
Now we clean up the item before returning if Item_func::fix_fields()
fails.
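A hedged standalone model of the fixed error path (ItemSketch is a
toy stand-in; the real change is in Item_func::fix_fields()):

  #include <cassert>

  struct ItemSketch
  {
    bool fixed= false;
    bool const_item_cache= true;

    bool fix_args() { return true; }  // pretend an argument failed

    // On failure, restore the partially-updated flag so the item is
    // not left half-fixed (fixed == false but const_item_cache
    // already changed).
    bool fix_fields()
    {
      bool saved= const_item_cache;
      const_item_cache= false;        // mutated while processing args
      if (fix_args())                 // error path
      {
        const_item_cache= saved;      // cleanup before returning
        return true;
      }
      fixed= true;
      return false;
    }
  };

  int main()
  {
    ItemSketch it;
    assert(it.fix_fields());          // fails...
    assert(it.const_item_cache);      // ...but state stays consistent
  }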
Constraints processing via row_ins_check_foreign_constraint() was not
called because row_upd_check_references_constraints() didn't see the
update as a delete: node->is_delete was false.
Since MDEV-30378 we check for TRG_EVENT_DELETE to detect a versioned
delete in ha_innobase::update_row().
Now we can use TRG_EVENT_DELETE to set upd_node->is_delete, so
constraints processing is triggered correctly.
Problem:
Item_func_conv::val_str() copied the ASCII string with the numeric base
conversion result directly to the function result string. In case of a
tricky character set (e.g. utf32) this produced an ill-formed string.
Fix:
Copy the base conversion result to the function result as-is only if
the function character set is ASCII-compatible; go through a
character set conversion otherwise.
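The shape of the fix can be sketched standalone as follows;
charset_info_sketch and the toy converter are illustrative stand-ins
for the server's CHARSET_INFO and String machinery:

  #include <cassert>
  #include <string>

  struct charset_info_sketch
  {
    bool ascii_compatible;   // true for latin1/utf8mb4, false for utf32
  };

  // Toy converter: widen each ASCII byte to a 4-byte utf32-like unit.
  static std::string widen_to_utf32(const std::string &ascii)
  {
    std::string out;
    for (char c : ascii)
      out.append({'\0', '\0', '\0', c});
    return out;
  }

  // Sketch of the fixed Item_func_conv::val_str() tail: copy the
  // ASCII base-conversion result as-is only when the result charset
  // can hold raw ASCII bytes; otherwise go through a conversion.
  std::string conv_result(const std::string &ascii_buf,
                          const charset_info_sketch &to_cs)
  {
    if (to_cs.ascii_compatible)
      return ascii_buf;               // already well-formed
    return widen_to_utf32(ascii_buf); // stands in for copy + convert
  }

  int main()
  {
    assert(conv_result("ff", {true}) == "ff");
    assert(conv_result("ff", {false}).size() == 8); // 4 bytes per char
  }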
Change history in the affected code:
- Since 10.4.8 (MDEV-20397 and MDEV-23311), functions ROUND(), CEILING(),
FLOOR() return a TIME value for a TIME input.
- Since 10.4.14 (MDEV-23525), MIN() and MAX() calculate a result for a TIME
input using val_native() rather than val_str().
Problem:
The patch for MDEV-23525 did not take into account combinations like
MIN(ROUND(time)), MAX(FLOOR(time)), etc.
MIN() and MAX() with ROUND(time), CEILING(time), FLOOR(time) as an
argument call the method val_native() of the underlying classes
Item_func_round and Item_func_int_val. However, these classes
implemented the method val_native() as DBUG_ASSERT(0).
Fix:
This patch adds a TIME-specific code inside:
- Item_func_round::val_native()
- Item_func_int_val::val_native()
still with DBUG_ASSERT(0) for all other data types,
as other data types do not call val_native() of these classes.
We'll need a more generic solution eventually, e.g.
turning Item_func_round and Item_func_int_val into Item_handled_func.
However, this change would be too risky for 10.4 at this point.
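The dispatch can be modelled standalone like this (type and value
names are illustrative; the real code checks the item's type handler):

  #include <cassert>

  enum data_type_sketch { TYPE_TIME, TYPE_OTHER };

  struct native_val { long long packed; bool is_null; };

  // Only the TIME path is implemented; every other type keeps the
  // assertion, because no other data type routes through val_native()
  // of Item_func_round / Item_func_int_val.
  native_val round_val_native(data_type_sketch t, long long packed_time)
  {
    if (t == TYPE_TIME)
      return { packed_time, false }; // stands in for real TIME rounding
    assert(0);                       // DBUG_ASSERT(0) in the server
    return { 0, true };
  }

  int main()
  {
    assert(!round_val_native(TYPE_TIME, 123456).is_null);
  }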
The pointer was used deep in the call path. Resolve this by setting
the pointer to NULL at the end of the function.
Tested with gcc-13.3.1 (fc38).
The warning-disabling commit 38fe266ea9 can be reverted in 10.6+ on
merge.
There was a memory leak under these conditions:
- YYABORT was called in the end-of-rule action of a rule containing expr_lex
- This expr_lex was not bound to any sp_lex_keeper
Bison did not call %destructor <expr_lex> in this case, because its stack
already contained a reduced upper-level rule.
Fixing rules starting with RETURN, CONTINUE, EXIT keywords:
Turning end-of-rule actions with YYABORT into mid-rule actions
by adding an empty trailing { } block. This prevents the upper level
rule from being reduced without calling %destructor <expr_lex>.
In the other rules expr_lex is not used immediately before the last
end-of-rule { } block, so they don't need changes.
Also fixing: MDEV-31719 Wrong result of: WHERE inet6_column IN ('','::1')
Problem:
When converting an Item value from string to INET6 it's possible
that the Item value itself is a not-NULL string value, while the
result of the subsequent string-to-INET6 conversion is NULL.
Methods cmp_item_xxx::set(), cmp_item_xxx::store_value_by_template(),
in_inet6::set() did not take this scenario into account and
tested source_item->null_value, which does not indicate if the conversion
failed.
Changing the return data type of the mentioned methods from "void" to "bool".
"true" means that:
- either the source Item was NULL
- or the source Item was not NULL, but the data type conversion to
the destination data type (INET6 in this issue) returned NULL.
"false" means that the Item was not NULL and the data type conversion
to the destination data type worked without error.
This patch fixes the INET6 data type.
After merging to 10.9, it should also fix the same problems in UUID.
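The new contract can be modelled standalone as below; the INET6
parsing is reduced to a toy validity check, and the point is that the
return value, not null_value, reports a failed conversion:

  #include <cassert>
  #include <optional>
  #include <string>

  // Toy stand-in for string-to-INET6: some inputs fail to convert.
  std::optional<std::string> to_inet6(const std::string &s)
  {
    if (s.empty() || s == "bad")
      return std::nullopt;   // not-NULL input, NULL conversion result
    return s;
  }

  struct cmp_item_inet6_sketch
  {
    std::optional<std::string> value;

    // Returns true if the stored value ends up NULL: either the
    // source was NULL, or it was not NULL but failed to convert.
    bool set(const std::optional<std::string> &src)
    {
      if (!src)
      {
        value.reset();
        return true;                 // source item was NULL
      }
      value= to_inet6(*src);
      return !value.has_value();     // conversion returned NULL
    }
  };

  int main()
  {
    cmp_item_inet6_sketch c;
    assert(c.set(std::nullopt));           // NULL source
    assert(c.set(std::string("bad")));     // non-NULL, conversion fails
    assert(!c.set(std::string("::1")));    // success
  }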
- Moving the code from a public function trim_whitespaces()
to the class Lex_cstring as methods. This code may
be useful in other contexts, and also this code becomes
visible inside sql_class.h
- Adding a helper method THD::strmake_lex_cstring_trim_whitespaces()
- Unifying the way CREATE PROCEDURE/CREATE FUNCTION and
CREATE PACKAGE/CREATE PACKAGE BODY work:
a) Now CREATE PACKAGE/CREATE PACKAGE BODY also calls
Lex->sphead->set_body_start() to remember the cpp body start inside
an sp_head member.
b) adding a "const char *cpp_body_end" parameter to
sp_head::set_stmt_end().
These changes made it possible to reuse sp_head::set_stmt_end() inside
LEX::create_package_finalize() and remove the duplicate code.
- Renaming sp_head::m_body_begin to m_cpp_body_begin and adding a comment
to make it clear that this member is used only during parsing, and
points to a fragment inside the cpp buffer.
- Changed sp_head::set_body_start() and sp_head::set_stmt_end()
to skip the calls related to "body_utf8" in cases when m_parent is not NULL.
A non-NULL m_parent means that we're inside a package routine.
"body_utf8" in such case belongs not to the current sphead itself,
but to parent (the package) sphead.
So an sphead instance of a package routine should neither initialize,
nor finalize, nor change in any other way the "body_utf8" related
members of Lex_input_stream, and should not take over or copy
"body_utf8" data from Lex_input_stream to "this".
MDEV-31503 ALTER SEQUENCE ends up in optimistic parallel slave binlog out-of-order
The OOO error was still possible even after MDEV-31077. This time
it occurred through open_table() when the sequence table was not in
the table cache *and* the table was created before the last server
restart.
In such a context an internal (read-only) transaction is committed,
and it was not blocked from doing a wakeup() call to subsequent
transactions.
Fixed by extending the suspend_subsequent_commits() effect for the
entirety of Sql_cmd_alter_sequence::execute().
An elaborated MDEV-31077 test proves the fixes of both failure scenarios.
Also the bug condition suggests a workaround: pre-SELECT the sequence
tables before START SLAVE.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
The case is statement binlog format with a mixed InnoDB/MyISAM
update and binlog_direct_non_trans_update disabled. Fix due to
Brandon Nesterenko.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The largest_started_sub_id needs to be set under LOCK_parallel_entry
together with testing stop_sub_id. However, in-between was the logic for
do_ftwrl_wait(), which temporarily releases the mutex. This could lead to
inconsistent stopping amongst worker threads and lost data.
Fix by moving all the stop-related logic out from unrelated do_gco_wait()
and do_ftwrl_wait() and into its own function do_stop_handling().
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The problem is that when a worker thread is (user) killed in
wait_for_prior_commit, the event group may complete out-of-order since the
wait for prior commit was aborted by the kill.
This fix ensures that event groups will always complete in-order, even
in the error case. This is done in finish_event_group() by doing an
extra wait_for_prior_commit(), if necessary, that ignores kills.
This fix supersedes the fix for MDEV-30780, so the earlier fix for
that is reverted in this patch.
Also fix that an error from wait_for_prior_commit() inside
finish_event_group() would not signal the error to
wakeup_subsequent_commits().
Based on earlier work by Brandon Nesterenko and Andrei Elkin, with
some changes to simplify the semantics of wait_for_prior_commit() and
make the code more robust to future changes.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The problem was an incorrect unmark_start_commit() in
signal_error_to_sql_driver_thread(). If an event group gets an error,
this unmark could run after the following GCO had started, and the
subsequent re-marking could access a de-allocated GCO.
The offending unmark_start_commit() looks obviously incorrect, and the fix
is to just remove it. It was introduced in the MDEV-8302 patch, the commit
message of which suggests it was added there solely to satisfy an assertion
in ha_rollback_trans(). So update this assertion instead to not trigger for
event groups that experienced an error (rgi->worker_error). When an error
occurs in an event group, all following event groups are skipped anyway, so
the unmark should never be needed in this case.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
At STOP SLAVE, worker threads will continue applying event groups until the
end of the current GCO before stopping. This is a left-over from when only
conservative mode was available. In optimistic and aggressive mode,
often _all_ queued events will be in the same GCO, and slave stop will
be needlessly delayed.
This patch instead records at STOP SLAVE time the latest (highest sub_id)
event group that has started. Then worker threads will continue to apply
event groups up to that event group, but skip any following. The result is
that each worker thread will complete its currently running event group, and
then the slave will stop.
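A toy model of the recorded cutoff; the sketch reduces the sub_id
bookkeeping to two integers, while the real code keeps this state
under LOCK_parallel_entry:

  #include <cassert>
  #include <cstdint>

  struct parallel_entry_sketch
  {
    uint64_t largest_started_sub_id= 0;  // highest started event group
    uint64_t stop_sub_id= UINT64_MAX;    // cutoff set at STOP SLAVE
  };

  // At STOP SLAVE: remember the latest started event group.  Workers
  // finish everything up to the cutoff and skip the rest, instead of
  // draining the whole current GCO.
  void record_stop(parallel_entry_sketch &e)
  {
    e.stop_sub_id= e.largest_started_sub_id;
  }

  bool should_apply(const parallel_entry_sketch &e, uint64_t sub_id)
  {
    return sub_id <= e.stop_sub_id;      // skip groups past the cutoff
  }

  int main()
  {
    parallel_entry_sketch e;
    e.largest_started_sub_id= 42;        // groups up to 42 have started
    record_stop(e);
    assert(should_apply(e, 42) && !should_apply(e, 43));
  }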
If the slave is caught up, and STOP SLAVE is run in the middle of an
event group that is already executing in a worker thread, then that
event group will be rolled back and the slave stops immediately, as
normal.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Where a read-only server permits writes through replication, it
should not permit user connections to commit/rollback XA
transactions prepared via replication. The bug reported in
MDEV-30978 shows that this can happen. This is because there is no
read-only check in the XA transaction logic; the most relevant one
occurs in ha_commit_trans() for normal statements/transactions.
This patch extends the XA transaction logic to check the read only
status of the server before performing an XA COMMIT or ROLLBACK.
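A hedged sketch of the added check; the session flags are illustrative
stand-ins, and the check parallels the read-only handling in
ha_commit_trans():

  #include <stdexcept>

  struct session_sketch
  {
    bool server_read_only;  // --read-only is set
    bool is_slave_thread;   // replication applier may still write
    bool has_super_priv;    // SUPER traditionally bypasses --read-only
  };

  // Before executing XA COMMIT or XA ROLLBACK on a prepared
  // transaction, refuse if the server is read-only and this is an
  // ordinary user connection.
  void check_xa_commit_allowed(const session_sketch &s)
  {
    if (s.server_read_only && !s.is_slave_thread && !s.has_super_priv)
      throw std::runtime_error(
        "ER_OPTION_PREVENTS_STATEMENT: --read-only");
  }

  int main()
  {
    check_xa_commit_allowed({true, true, false});    // applier: allowed
    try
    {
      check_xa_commit_allowed({true, false, false}); // user: refused
      return 1;
    }
    catch (const std::runtime_error &) { return 0; }
  }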
Reviewed By:
Andrei Elkin <andrei.elkin@mariadb.com>
The problematic query exposed a bug in the window function sorting
optimization. When multiple window functions are present in a query,
we order the sort keys (as defined by PARTITION BY and ORDER BY) from
generic to specific.
SELECT RANK() OVER (ORDER BY const_col) as r1,
RANK() OVER (ORDER BY const_col, a) as r2,
RANK() OVER (PARTITION BY c) as r3,
RANK() OVER (PARTITION BY c ORDER BY b) as r4
FROM table;
For these functions, the sorts we need to do for window function
computations are: [(const_col), (const_col, a)] and [(c), (c, b)].
Instead of doing 4 different sorts, the sorts grouped within [] are
compatible, and we can use the most *specific* sort to cover both
window functions.
The bug was caused by an incorrect flagging of which sort is most
specific for a compatible group of functions. In our specific test
case, instead of picking (const_col, a) as the most specific sort, it
would only sort by (const_col), which led to wrong results for the
rank function.
By ensuring that we pick the last sort key before an "incompatible sort"
flag is met in our "ordered array of sorting specifications", we
guarantee correct results.
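The selection rule can be modelled standalone: walk the ordered list
of sort specifications and, within each compatible group, reuse the
last (most specific) one. All names here are illustrative:

  #include <cassert>
  #include <cstddef>
  #include <string>
  #include <vector>

  struct sort_spec
  {
    std::string keys;             // e.g. "const_col" or "const_col,a"
    bool incompatible_with_prev;  // starts a new compatible group
  };

  // For each spec, return the index of the sort actually executed:
  // the last member of its compatible group, i.e. the most specific
  // key before the next "incompatible sort" flag.
  std::vector<std::size_t> pick_sorts(const std::vector<sort_spec> &specs)
  {
    std::vector<std::size_t> chosen(specs.size());
    for (std::size_t i= 0; i < specs.size(); )
    {
      std::size_t group_end= i;
      while (group_end + 1 < specs.size() &&
             !specs[group_end + 1].incompatible_with_prev)
        group_end++;                  // extend to the end of the group
      for (std::size_t j= i; j <= group_end; j++)
        chosen[j]= group_end;         // everyone reuses the last sort
      i= group_end + 1;
    }
    return chosen;
  }

  int main()
  {
    // [(const_col), (const_col,a)] and [(c), (c,b)] from the example.
    std::vector<sort_spec> s= { {"const_col", false},
                                {"const_col,a", false},
                                {"c", true},
                                {"c,b", false} };
    std::vector<std::size_t> c= pick_sorts(s);
    assert(c[0] == 1 && c[1] == 1 && c[2] == 3 && c[3] == 3);
  }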
If a procedure is changed in one connection while another connection
has already called the initial version of the procedure, a query to
INFORMATION_SCHEMA.PARAMETERS would use obsolete information from the
sp cache for that connection. That happens because the cache
invalidating method only increments the cache version and does not
flush (all) the cache(s), while changing a procedure only invalidates
the cache and removes the procedure's cache entry from the local
thread cache only.
The fix adds a check of whether the sp info obtained from the cache
for forming the results of the query to I_S is obsolete, and does not
use it if so.
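A minimal model of the staleness check; names are assumptions, with
the server's sp_cache_version() modelled as a global counter bumped on
invalidation:

  #include <atomic>
  #include <cassert>

  std::atomic<unsigned long> global_sp_cache_version{1};

  struct sp_entry_sketch
  {
    unsigned long cached_version;  // version when the entry was cached
  };

  // Invalidation only bumps the version; per-connection caches keep
  // their entries, so readers must verify freshness themselves.
  void invalidate_sp_cache() { global_sp_cache_version++; }

  bool entry_is_usable(const sp_entry_sketch &e)
  {
    return e.cached_version == global_sp_cache_version.load();
  }

  int main()
  {
    sp_entry_sketch e{ global_sp_cache_version.load() };
    assert(entry_is_usable(e));
    invalidate_sp_cache();          // another connection alters the SP
    assert(!entry_is_usable(e));    // I_S query must reload, not reuse
  }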
A test has been added to main.information_schema. It changes the SP in
one connection and ensures that the change is seen by a query to
I_S.PARAMETERS in another connection that has already called the
procedure before the change.
There was no actual execution of the SQL of a pushed derived table,
which caused "r_rows" to always be displayed as 0 and "r_total_time_ms"
to show inaccurate numbers.
This commit makes the derived table SQL be executed by the storage
engine, so the server is able to calculate the number of rows returned
and measure the execution time more accurately.
Fix that rpl_slave_state::load() was calling rpl_slave_state::update() without
holding LOCK_slave_state.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When CURSOR parameters get parsed, their sp_assignment_lex instances
(one instance per parameter) get collected to List<sp_assignment_lex>.
These instances get linked to sphead only at the end of the list.
If a syntax error happened in the middle of the parameter list,
these instances were not deleted, which caused memory leaks.
Fix:
using a Bison %destructor to free rules of the <sp_assignment_lex_list>
type (on syntax errors).
After the fix, these sp_assignment_lex instances from CURSOR
parameters are deleted as follows:
- If the CURSOR statement was fully parsed, then these instances
get properly linked to sp_head structures, so they are deleted
during ~sp_head (this did not change)
- If the CURSOR statement failed on a syntax error, then they are
deleted by Bison's %destructor (this is being added in the current
patch).
The parser works as follows:
The rule expr_lex returns a pointer to a newly created sp_expr_lex
instance which is not linked to any MariaDB structures yet - it is
pointed only from a Bison stack variable. The sp_expr_lex instance
gets linked to other structures (such as sp_instr_jump_if_not) later,
after scanning some following grammar.
Problem before the fix:
If a parse error happened immediately after expr_lex (before it got linked),
the created sp_expr_lex value got lost causing a memory leak.
Fix:
- Using Bison's "destructor" directive to free the results of expr_lex
on parse/oom errors.
- Moving the call for LEX::cleanup_lex_after_parse_error() from
MYSQL_YYABORT and yyerror inside parse_sql().
This is needed because Bison calls destructors after yyerror(),
while it's important to delete the sp_expr_lex instance before
LEX::cleanup_lex_after_parse_error().
The latter frees the memory root containing the sp_expr_lex instance.
After this change the code blocks are executed in the following order:
- yyerror() -- now only raises the error to DA (no cleanup done any more)
- %destructor { delete $$; } <expr_lex> -- destructs the sp_expr_lex instance
- LEX::cleanup_lex_after_parse_error() -- frees the memory root containing
the sp_expr_lex instance
- Removing the "delete sublex" related code from restore_lex():
- restore_lex() is called in most cases on success, when delete is not needed.
- There is one place where restore_lex() is called on error:
in sp_create_assignment_instr(). But in this case LEX::sp_lex_in_use
is true anyway.
The patch adds a new DBUG_ASSERT(lex->sp_lex_in_use) to guard this.
UBSAN: negation of -9223372036854775808 cannot be represented in type
'long long int'; cast to an unsigned type to negate this value
to itself (in Item_func_mul::int_op and Item_func_round::int_op)
Problems:
The code in multiple places in the following methods:
- Item_func_mul::int_op()
- Item_func_int_div::val_int()
- Item_func_mod::int_op()
- Item_func_round::int_op()
did not properly check for the corner values LONGLONG_MIN
and (LONGLONG_MAX+1) before doing negation.
This caused UBSAN to complain about undefined behaviour.
Fix summary:
- Adding helper classes ULonglong, ULonglong_null, ULonglong_hybrid
(in addition to their signed counterparts in sql/sql_type_int.h).
- Moving the code performing multiplication of ulonglong numbers
from Item_func_mul::int_op() to ULonglong_hybrid::ullmul().
- Moving the code responsible for extracting absolute values
from negative numbers to Longlong::abs().
It makes sure to perform negation without undefined behavior:
LONGLONG_MIN is handled in a special way.
- Moving negation related code to ULonglong::operator-().
It makes sure to perform negation without undefined behavior:
(LONGLONG_MAX + 1) is handled in a special way.
(A standalone sketch of this trick follows the fix details below.)
- Moving signed<=>unsigned conversion code to
Longlong_hybrid::val_int() and ULonglong_hybrid::val_int().
- Reusing old and new sql_type_int.h classes in multiple
places in Item_func_xxx::int_op().
Fix details (explain how sql_type_int.h classes are reused):
- Instead of straight negation of negative "longlong" arguments
*before* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_null::ullmul()
using Longlong_hybrid_null::abs() to pass arguments.
This fixes undefined behavior N1.
- Instead of straight negation of "ulonglong" result
*after* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_hybrid::val_int(),
which recursively calls ULonglong::operator-().
This fixes undefined behavior N2.
- Removing duplicate negating code from Item_func_mod::int_op().
Using ULonglong_hybrid::val_int() instead.
This fixes undefined behavior N3.
- Removing literal "longlong" negation from Item_func_round::int_op().
Using Longlong::abs() instead, which correctly handles LONGLONG_MIN.
This fixes undefined behavior N4.
- Removing the duplicate (negation related) code from
Item_func_int_div::val_int(). Reusing class ULonglong_hybrid.
There was no undefined behavior in here.
However, this change revealed a bug in
"-9223372036854775808 DIV 1".
The removed negation code appeared to be incorrect when
negating +9223372036854775808. It returned the "out of range" error.
ULonglong_hybrid::operator-() now handles all values correctly
and returns +9223372036854775808 as the negation of
-9223372036854775808.
Re-recording the previously wrong results for
SELECT -9223372036854775808 DIV 1;
Now instead of "out of range" it returns -9223372036854775808,
which is the smallest possible value for the expression data type
(signed) BIGINT.
- Removing "no UBSAN" branch from Item_func_splus::int_opt()
and Item_func_minus::int_opt(), as it made UBSAN happy but
in RelWithDebInfo some MTR tests started to fail.
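As promised above, the core trick fits in a few standalone lines: do
the negation in unsigned arithmetic, where wraparound is defined,
instead of negating a signed value that may be LONGLONG_MIN. The
helper names here are illustrative, not the actual sql_type_int.h
classes:

  #include <cassert>
  #include <climits>

  typedef long long longlong;
  typedef unsigned long long ulonglong;

  // Defined-behaviour absolute value: unsigned negation wraps, so
  // |LLONG_MIN| == 9223372036854775808 comes out right.
  ulonglong ull_abs(longlong v)
  {
    return v >= 0 ? (ulonglong) v : 0ULL - (ulonglong) v;
  }

  // Defined-behaviour "negate back": maps 9223372036854775808ULL to
  // LLONG_MIN without signed overflow (cf. ULonglong::operator-()).
  longlong neg_to_longlong(ulonglong u)
  {
    return (longlong) (0ULL - u);
  }

  int main()
  {
    assert(ull_abs(LLONG_MIN) == 9223372036854775808ULL);
    assert(neg_to_longlong(ull_abs(LLONG_MIN)) == LLONG_MIN); // DIV 1
  }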
- When foreign_key_checks is disabled, allowing modification of a
column which is part of a foreign key constraint can lead to later
refusal of TRUNCATE TABLE or OPTIMIZE TABLE. So it makes sense to
block the column modify operation when a foreign key is involved,
irrespective of the foreign_key_checks variable.
Correct way to modify the charset of the column when fk is involved:
SET foreign_key_checks=OFF;
ALTER TABLE child DROP FOREIGN KEY fk, MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE parent MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE child ADD CONSTRAINT FOREIGN KEY (m) REFERENCES PARENT(m);
SET foreign_key_checks=ON;
fk_check_column_changes(): Remove the FOREIGN_KEY_CHECKS condition
while checking the column change against foreign key constraints. This
is a partial revert of commit 5f1f2fc0e4
and it changes the behaviour of the copy alter algorithm.
ha_innobase::prepare_inplace_alter_table(): Find the modified
column and check whether it is part of an existing or a newly
added foreign key constraint.