There are 3 diffs in the result:
1) NULL value from SELECT
Due to incorrect truncation of the hex value, an incorrect value is
written to the view frm instead of the original value. This results in
reading an incorrect value from the frm, so the eventual result is NULL.
2) 'Name_exp1' in the column name (in gis.test)
This was because an identifier in the SELECT is longer than 64 characters,
so the 'Name_exp1' alias is also written to the view frm.
3) diff in EXPLAIN EXTENDED
This was because the query plan under the view protocol doesn't
contain the database name. As a fix, the view protocol is disabled for that
particular query.
This patch fixes two problems:
- The code inside my_strtod_int() in strings/dtoa.c could test the byte
behind the end of the string when processing the mantissa.
Rewriting the code to avoid this.
- The code in test_if_number() in sql/sql_analyse.cc called my_atof()
which is unsafe and makes the called my_strtod_int() look behind
the end of the string if the input string is not 0-terminated.
Fixing test_if_number() to use my_strtod() instead, passing the correct
end pointer.
Item_exists_subselect::fix_length_and_dec() sets explicit_limit to 1.
In the exists2in transformation it resets select_limit to NULL. For
consistency we should reset explicit_limit too.
This fixes a bug where a Spider table returns wrong results for queries
that go through the exists2in transformation when semijoin is off.
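For illustration, a query of roughly this shape (table and column names
hypothetical) goes through the exists2in transformation:
  SELECT * FROM local_t
  WHERE EXISTS (SELECT 1 FROM spider_t WHERE spider_t.a = local_t.a);
  -- internally rewritten to: ... WHERE local_t.a IN (SELECT a FROM spider_t)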
Gain MySQL compatibility by allowing table aliases in a single-table
statement. This now supports the syntax:
DELETE [delete_opts] FROM tbl_name [[AS] tbl_alias] [PARTITION (partition_name [, partition_name] ...)] ....
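For example (hypothetical table and partition names), the following
statements are now accepted:
  DELETE FROM t1 AS a WHERE a.x > 10;
  DELETE FROM t1 a PARTITION (p0) WHERE a.x > 10;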
The delete.test is from MySQL commit 1a72b69778a9791be44525501960b08856833b8d
/ Change-Id: Iac3a2b5ed993f65b7f91acdfd60013c2344db5c0.
Co-Author: Gleb Shchepa <gleb.shchepa@oracle.com> (for delete.test)
Reviewed by Igor Babaev (igor@mariadb.com)
Based on the current logic, objects of classes Item_func_charset and
Item_func_coercibility (responsible for CHARSET() and COERCIBILITY()
functions) are always considered constant.
However, SQL syntax allows their use in a non-constant manner, such as
CHARSET(t1.a), COERCIBILITY(t1.a).
In these cases, the `used_tables()` result corresponds to the tables used
in the function arguments, creating an inconsistency: the item is marked
as constant but accesses tables. This leads to crashes when
conditions with CHARSET()/COERCIBILITY() are pushed into derived tables.
This commit addresses the issue by setting `used_tables()` to 0 for
`Item_func_charset` and `Item_func_coercibility`. Additionally, the items
now store their return values during the preparation phase and return
them during the execution phase. This ensures that the items do not call
their arguments' methods during execution and are truly constant.
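A hedged sketch (hypothetical schema) of the kind of query that crashed
when the CHARSET() condition was pushed into the derived table:
  CREATE TABLE t1 (a VARCHAR(10));
  SELECT * FROM (SELECT CHARSET(a) AS cs FROM t1) AS dt
  WHERE dt.cs = 'utf8mb4';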
Reviewer: Alexander Barkov <bar@mariadb.com>
ENGINE_SUBSTITUTION only applies to CREATE TABLE and ALTER TABLE, but
Storage_engine_name::resolve_storage_engine_with_error() could be
called when executing any SQL command.
Moved:
- Lex_ident_cli* into a new file sql/lex_ident_cli.h
- Lex_ident_sys* into a new file sql/lex_ident_sys.h
- Well_formed_prefix into include/m_ctype.h
This change is needed for the optimizer hint parser coming soon.
AddressSanitizer knows how to detect stack overrun, so there's
no point in us doing it ourselves.
As evidenced by perfschema tests, where there were significant test failures
because this function failed under ASAN (MDEV-33210).
Also, since clang-16, we cannot assume much about how local
variables are allocated on the stack (MDEV-31605).
The idea of disabling the check is thanks to Sanja.
- Remove the single/trivial call of MYSQL_BIN_LOG::init() and remove the
function
- Remove the single jump to label end2 and inline the code instead
- Remove label end2
The `Item` class methods `get_copy()`, `build_clone()`, and `clone_item()`
face an issue where they may be defined in a descendant class
(e.g., `Item_func`) but not in a further descendant (e.g., `Item_func_child`).
This can lead to scenarios where `build_clone()`, when operating on an
instance of `Item_func_child` with a pointer to the base class (`Item`),
returns an instance of `Item_func` instead of `Item_func_child`.
Since this limitation cannot be resolved at compile time, this commit
introduces runtime type checks for the copy/clone operations.
A debug assertion will now trigger in case of a type mismatch.
`get_copy()`, `build_clone()`, and `clone_item()` are no longer virtual,
but virtual `do_get_copy()`, `do_build_clone()`, and `do_clone_item()`
are added to the protected section of the class `Item`.
Additionally, const qualifiers have been added to certain methods
to enhance code reliability.
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Trivial patch, using the handler statistics already collected for
the slow query log.
The reason for the changes in test cases was mainly to switch to using
select TABLE_SCHEMA ... from information_schema.table_statistics instead
of 'show table_statistics', to avoid future changes to test results
if we add more columns to table_statistics.
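A sketch of the new test-query style (column list illustrative, assuming
the usual TABLE_SCHEMA/TABLE_NAME/ROWS_READ columns):
  SELECT TABLE_SCHEMA, TABLE_NAME, ROWS_READ
  FROM information_schema.table_statistics
  WHERE TABLE_SCHEMA = 'test';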
This commit adds the "materialization" block to the output of
EXPLAIN/ANALYZE FORMAT=JSON when materialized subqueries are involved
in processing. In the case of ANALYZE additional runtime information
is displayed, such as:
- chosen strategy of materialization
- number of partial match/index lookup loops
- sizes of partial match buffers
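For example (hypothetical tables), a materialized subquery with partial
matching now produces the block:
  ANALYZE FORMAT=JSON
  SELECT * FROM t1 WHERE (a, b) NOT IN (SELECT a, b FROM t2);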
The bug is caused by refixing of the constant subquery in the pushdown from
HAVING into WHERE optimization.
Similarly to MDEV-29363, in the problematic query two references to the
constant subquery are used. After the pushdown one of the references to the
subquery is pushed into the WHERE-clause and the second one remains as part
of the HAVING-clause.
Before this fix, the constant subquery reference that was going to
be pushed into WHERE was cleaned up and fixed. That changed the
subquery itself and, therefore, the second reference that
remained in HAVING. These changes caused a crash.
To fix this problem all constant objects that are going to be pushed into
WHERE should be marked with an IMMUTABLE_FL flag. Objects marked with this
flag are not cleaned up or fixed in the pushdown optimization.
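A minimal sketch (hypothetical schema) where one reference to the constant
subquery is pushed into WHERE while the other remains in HAVING:
  SELECT a, MAX(b) FROM t1 GROUP BY a
  HAVING a = (SELECT 1) AND MAX(b) = (SELECT 1);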
Approved by Igor Babaev <igor@mariadb.com>
The issue was that the test did not take into account that the IO thread
could have been in the COMMAND=Connecting state, which happens before the
COMMAND=Slave_IO state.
The test is a bit fragile as it depends on the COMMAND state being
synchronised with the Slave_IO_State, which is not the case.
I added a new proc state and some more information to the error
output to be able to diagnose future failures more easily.
Improve performance of queries like
SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.
The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing. We rely on THD::where to accomplish this. To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields. Consequently, this patch also
changes all usages of THD::where to conform likewise.
There are two problems.
First, replication fails when XA transactions are used where the
slave has replicate_do_db set and the client has touched a different
database when running DML such as inserts. This is because XA
commands are not treated as keywords, and are thereby not exempt
from the replication filter. The effect of this is that during an XA
transaction, if its logged “use db” from the master is filtered out
by the replication filter, then XA END will be ignored, yet its
corresponding XA PREPARE will be executed in an invalid state,
thereby breaking replication.
Second, if the slave replicates an XA transaction which results in
an empty transaction, the XA START through XA PREPARE first phase of
the transaction won’t be binlogged, yet the XA COMMIT will be
binlogged. This will break replication in chain configurations.
The first problem is fixed by treating XA commands in
Query_log_event as keywords, thus allowing them to bypass the
replication filter. Note that Query_log_event::is_trans_keyword() is
changed to accept a new parameter to define its mode, to either
check for XA commands or regular transaction commands, but not both.
In addition, mysqlbinlog is adapted to use this mode so its
--database filter does not remove XA commands from its output.
The second problem is fixed by overwriting the XA state in the XID
cache to be XA_ROLLBACK_ONLY, so at commit time, the server knows to
roll back the transaction and skip its binlogging. If the XID cache
is cleared before an XA transaction receives its completion command
(e.g. on server shutdown), then before reporting ER_XAER_NOTA when
the completion command is executed, the replication filter is first checked
to see whether the database is ignored, and if so, the error is ignored.
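A hedged sketch of the first problem (database/table names hypothetical),
with the slave configured with replicate_do_db=db1:
  USE db2;                        -- "use db2" is filtered out on the slave
  XA START 'xid1';
  INSERT INTO db1.t1 VALUES (1);
  XA END 'xid1';                  -- previously ignored by the filter
  XA PREPARE 'xid1';              -- executed in an invalid state
  XA COMMIT 'xid1';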
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
The server does not log errors after startup when it is started without the
--console parameter and not as a service. This issue arises due to an
undocumented behavior of FreeConsole() in Windows when only a single
process (mariadbd/mysqld) is attached to it, causing the window to close.
In this case stderr is redirected to a file before FreeConsole()
is called. Procmon shows FreeConsole closing the file handle;
subsequent writes to stderr then fail with ERROR_INVALID_HANDLE because
WriteFile() cannot operate on the closed handle. This results in losing
all messages after startup, including warnings, errors, notes, and
crash reports.
Additionally, some users reported stderr being redirected to
multi-master.info and failing at startup, but this could not be reproduced
here.
The workaround involves calling FreeConsole() right before the redirection of
stdout/stderr. This fix has been tested with XAMPP and via cmd.exe using
"start mysqld". Automated testing using MTR is challenging for this case.
The fix is only applicable to version 10.5. In later versions, the
FreeConsole() call has been removed.
PURGE BINARY LOGS did not always purge binary logs. This commit fixes
some of the issues and adds notifications if a binary log cannot be
purged.
User visible changes:
- 'PURGE BINARY LOG TO log_name' and 'PURGE BINARY LOGS BEFORE date'
worked differently. 'TO' ignored 'slave_connections_needed_for_purge'
while 'BEFORE' did not. Now both versions ignore the
'slave_connections_needed_for_purge' variable.
- 'PURGE BINARY LOG..' commands now return a note if a binary log cannot
be deleted, like the following (see the example after this list):
Note 1375 Binary log 'master-bin.000004' is not purged because it is
the current active binlog
- Automatic binary log purges, based on date or size, will write a
note to the error log if a binary log matching the size or date
cannot yet be deleted.
- If 'slave_connections_needed_for_purge' is not set from a config or
command line, it is set to 0 if Galera is enabled and 1 otherwise
(the old default). This ensures that automatic binary log purge works
with Galera as before the addition of
'slave_connections_needed_for_purge'.
If the variable is changed to 0, a warning will be printed to the error
log.
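An illustrative session showing the new note (log name taken from the
note above):
  PURGE BINARY LOGS TO 'master-bin.000004';
  SHOW WARNINGS;
  -- Note 1375: Binary log 'master-bin.000004' is not purged because it is
  -- the current active binlog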
Code changes:
- Added THD argument to several purge_logs related functions that needed
THD.
- Added 'interactive' options to purge_logs functions. This allowed
me to remove testing of sql_command == SQLCOM_PURGE.
- Changed purge_logs_before_date() to first check if the log is applicable
before calling can_purge_logs(). This ensures we do not get a
notification for logs that do not match the removal criteria.
- MYSQL_BIN_LOG::can_purge_log() will write notifications to the user
or error log if a log cannot yet be removed.
- log_in_use() will return the reason why a binary log cannot be removed.
Changes to keep code consistent:
- Moved the check of binlog_format for Galera to after Galera is
initialized (the old check never worked). If Galera is enabled,
we now change the binlog_format to ROW, with a warning, instead of
aborting the server. When this change happens, a warning is also
printed to the error log.
- Print a warning if Galera or FLASHBACK changes the binlog_format
to ROW. Before it was done silently.
Reviewed by: Sergei Golubchik <serg@mariadb.com>,
Kristian Nielsen <knielsen@knielsen-hq.org>
st_select_lex::update_correlated_cache() fails to take JSON_TABLE
functions in subqueries into account.
Reviewed by Sergei Petrunia (sergey@mariadb.com)
The IO thread can report error code 2013 into the error log when it
is stopped during the initial connection process to the primary, as
well as when trying to read an event. However, because the IO thread
is being stopped, its connection to the primary is force-killed by
the signaling thread (see THD::awake_no_mutex()), and therefore these
connection errors should be ignored.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
my_like_range*() can create longer keys than Field::char_length().
This caused warnings during print_range().
Fix:
Suppressing warnings in print_range().
Rectify cases of mismatched brackets and address
possible cases of division by zero by checking if
the denominator is zero before dividing.
No functional changes were made.
All new code of the whole pull request, including one or several
files that are either new files or modified ones, are contributed
under the BSD-new license. I am contributing on behalf of my
employer Amazon Web Services, Inc.
Several variables declared in mysqld.h appear to be old system variables
that have been left over after deprecation. Delete them using IDE
refactoring to automatically search for other uses. Most cases had no
other uses in the code.
slave_allow_batching had a test that was effectively unused, as the
result was only
-ERROR HY000: Unknown system variable 'slave_allow_batching'
so that was deleted as well.
Build and test still work without issue, as expected.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
The current semi-sync binlog fail-over recovery process uses
rpl_semi_sync_slave_enabled==TRUE as its condition to truncate a
primary server’s binlog, as it is anticipating the server to re-join
a replication topology as a replica. However, for servers configured
with both rpl_semi_sync_master_enabled=1 and
rpl_semi_sync_slave_enabled=1, if a primary is just re-started (i.e.
retaining its role as master), it can truncate its binlog to drop
transactions which its replica(s) have already received and executed.
If this happens, when the replica reconnects, its gtid_slave_pos can
be ahead of the recovered primary’s gtid_binlog_pos, resulting in an
error state where the replica’s state is ahead of the primary’s.
This patch changes the condition for semi-sync recovery to truncate
the binlog: it now uses the configuration variable
--init-rpl-role, truncating only when it is set to SLAVE. This allows for both
rpl_semi_sync_master_enabled and rpl_semi_sync_slave_enabled to be
set for a primary that is restarted, and no transactions will be
lost, so long as --init-rpl-role is not set to SLAVE.
Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
The special logic used by the memory storage engine
to keep slaves in sync with the master on a restart can
break replication. In particular, after a restart, the
master writes DELETE statements in the binlog for
each MEMORY-based table so the slave can empty its
data. If the DELETE is not executable, e.g. due to
invalid triggers, the slave will error and fail, whereas
the master will never see the problem.
Instead of DELETE statements, use TRUNCATE to
keep slaves in-sync with the master, thereby bypassing
triggers.
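Illustration (hypothetical table name): on restart, the master now binlogs
the second form instead of the first:
  DELETE FROM db1.t_mem;     -- old: can fail on the slave, e.g. invalid triggers
  TRUNCATE TABLE db1.t_mem;  -- new: bypasses triggers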
Reviewed By:
===========
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
The symptoms were: take a server with no activity and a table that's
not in the buffer pool. Run a query that reads the whole table and
observe that r_engine_stats.pages_read_count shows about 2% of the table
was read. Who reads the rest?
The cause was that page prefetching done inside InnoDB was not counted.
This patch counts page prefetch requests made in buf_read_ahead_random() and
buf_read_ahead_linear() and makes them visible in:
- ANALYZE: r_engine_stats.pages_prefetch_read_count
- Slow Query Log: Pages_prefetched:
This patch intentionally doesn't attempt to count the time to read the
prefetched pages:
* there's no obvious place where one can do it
* prefetch reads may be done in parallel (right?), it is not clear how
to count the time in this case.
The crash is caused by the attempt to refix the constant subquery during
pushdown from HAVING into WHERE optimization.
Every condition that is going to be pushed into the WHERE clause is first
cleaned up, then refixed. Constant subqueries are not cleaned or refixed
because they will remain the same after refixing, so this complicated
procedure can be omitted for them (introduced in MDEV-21184).
Constant subqueries are marked with the flag IMMUTABLE_FL, which helps skip
the cleanup stage for them. Also they are marked as fixed, so refixing is
also not done for them.
Because of multiple equality propagation, several references to the same
constant subquery can exist in the condition that is going to be pushed
into WHERE. Before this patch, the problem appeared in the following way.
After the first reference to the constant subquery is processed, the flag
IMMUTABLE_FL for the constant subquery is disabled.
So, when the second reference to this constant subquery is processed, the
flag is already disabled and the subquery goes through the procedure of
cleaning and refixing. That causes a crash.
To solve this problem, IMMUTABLE_FL should be disabled only after all
references to the constant subquery are processed, so after the whole
condition that is going to be pushed is cleaned up and refixed.
Approved by Igor Babaev <igor@maridb.com>
Applying the COLLATE clause to a parameter caused an error:
COLLATION '...' is not valid for CHARACTER SET 'binary'
Fix:
- Changing the collation derivation for a non-prepared Item_param
to DERIVATION_IGNORABLE.
- Allowing to apply any COLLATE clause to expressions with DERIVATION_IGNORABLE.
This includes:
1. A non-prepared Item_param
2. An explicit NULL
3. Expressions derived from #1 and #2
For example:
SELECT ? COLLATE utf8mb4_unicode_ci;
SELECT NULL COLLATE utf8mb4_unicode_ci;
SELECT CONCAT(?) COLLATE utf8mb4_unicode_ci;
SELECT CONCAT(NULL) COLLATE utf8mb4_unicode_ci;
- Additional change: preserving the collation of an expression when
the expression gets assigned to a PS parameter and evaluates to SQL NULL.
Before this change, the collation of the parameter was erroneously set
to &my_charset_binary.
- Additional change: removing the multiplication by mbmaxlen from the
fix_char_length_ulonglong() argument, because the multiplication already
happens inside fix_char_length_ulonglong().
This fixes a too large column size created for a COLLATE clause.
In 10.0 there was an assert to ensure that there were semi-sync
clients before removing one, but it was removed in 10.1.
This patch adds the assertion back.
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right-hand
side of the SET clause. In this case, invocation of the method
table->stat_records()
could return zero, which results in taking the 'if' branch that
handles an impossible WHERE condition. The issue is that this branch
missed saving the leaf tables, which has to be performed as the first
condition optimization activity. Later the PS statement memory root
is marked as read-only on finishing the first execution of the prepared
statement. Next time the same statement is executed it hits the assertion
on an attempt to allocate memory on the PS memory root marked as read-only.
This memory allocation takes place by the sequence of the following
invocations:
Prepared_statement::execute
mysql_execute_command
Sql_cmd_dml::execute
Sql_cmd_update::execute_inner
Sql_cmd_update::update_single_table
st_select_lex::save_leaf_tables
List<TABLE_LIST>::push_back
To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control
whether the method SELECT_LEX::save_leaf_tables() has to be called or
it has been already invoked and no more invocation required.
A similar issue could take place on running the DELETE statement with
the LIMIT clause in PS/SP mode. The reason for the memory leak is the same
as for the UPDATE case, and it is fixed in the same way.
Caused by:
5d37cac7 MDEV-33348 ALTER TABLE lock waiting stages are indistinguishable.
In that commit, progress reporting was moved to
mysql_alter_table from copy_data_between_tables.
The temporary table case wasn't taken into consideration: there the
execution of mysql_alter_table ends earlier than usual, at the
'end_temporary' label, where thd_progress_end() was missing.
Fix:
Add missing thd_progress_end() call in mysql_alter_table.
The feedback plugin server_uid variable and the calculate_server_uid()
function are moved from feedback/utils.cc to sql/mysqld.cc
server_uid is added as a global variable (shown in 'show variables') and
is written to the error log on server startup together with server version
and server commit id.
We have an issue if a user has the following in a configuration file:
log_slow_filter="" # Log everything to slow query log
log_queries_not_using_indexes=ON
This sets log_slow_filter to 'not_using_index', which disables
slow query logging of most queries.
In effect, one should never use log_slow_filter="" in config files but
instead use log_slow_filter=ALL.
Fixed by changing log_slow_filter="" that comes either from a
configuration file or from the command line, when starting the server,
to log_slow_filter=ALL.
A warning will be printed when this happens.
Other things:
- One can now use =ALL for any 'set' variable to set all options at once.
(backported from 10.6)
When getaddrinfo() returns an error, the contents
of ai are invalid, so we cannot continue based
on its data structures.
In the previous branch of the if statement, we
abort there if there is an error, so for consistency
we abort here too.
The test case fixes the port number to UINTMAX32
for both an enumerated bind-address and the
default bind-address, covering the two calls to
getaddrinfo().
Review thanks to Sanja.
Item_func_hex::fix_length_and_dec() evaluated a too short data type
for signed numeric arguments, which resulted in a 'Data too long for column'
error on CREATE..SELECT.
Fixing the code to take into account that a short negative
number can produce a long HEX value: -1 -> 'FFFFFFFFFFFFFFFF'
Also fixing Item_func_hex::val_str_ascii_from_val_real().
Without this change, the MTR test with HEX with negative floating point
arguments failed on some platforms (aarch64, ppc64le, s390-x).
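For example (table and column names hypothetical):
  SELECT HEX(-1);  -- 'FFFFFFFFFFFFFFFF', 16 digits from a short negative number
  CREATE TABLE t1 AS SELECT HEX(a) AS h FROM t2;
  -- previously could fail with 'Data too long for column'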
InnoDB transactions may be reused after being committed:
- when taken from the transaction pool
- during a DDL operation execution
In this case the wsrep flag on the trx object is cleared, which may cause
wrong execution logic afterwards (wsrep-related hooks are not run).
Make the trx->wsrep flag initialize from the THD object only once on InnoDB
transaction start and don't change it throughout the transaction's lifetime.
The flag is reset at commit time as before.
Unconditionally set wsrep=OFF for THD objects that represent InnoDB background
threads.
Make Wsrep_schema::store_view() operate in its own transaction.
Fix streaming replication transactions' fragments rollback to not switch
THD->wsrep value during transaction's execution
(use THD->wsrep_ignore_table as a workaround).
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Affects:
MDEV-34150 Assertion failure in Diagnostics_area::set_error_status upon binary
logging hitting tmp space limit
MDEV-9101 Limit size of created disk temporary files and tables
This bug was caused by moving flushing of the in-memory-row-events from
close_thread_tables() to binlog_commit() in MDEV-34150.
This was needed to be able to handle the case where binlog writes could
fail.
Galera has two cases where the change caused problems:
- Flushing of row events in commit_one_phase_2() was not done when the
standard binary log was not enabled but Galera was using the binary log
internally.
- Galera disabled the call to binlog_commit_flush_stmt_cache() for
transactions that were not ending.
Fixed by adding code that flushes the in-memory-row-events to the binary
log (write, but no sync) in the two above cases if Galera is enabled.
As part of commit 685d958e38 (MDEV-14425)
the parameter innodb_log_write_ahead_size was removed, because it was
thought that determining the physical block size would be a sufficient
replacement.
However, we can only determine the physical block size on Linux or
Microsoft Windows. On some file systems, the physical block size
is not relevant. For example, XFS uses a block size of 4096 bytes
even if the underlying block size may be smaller.
On Linux, we failed to determine the physical block size if
innodb_log_file_buffered=OFF was not requested or possible.
This will be fixed.
log_sys.write_size: The value of the reintroduced parameter
innodb_log_write_ahead_size. To keep it simple, this is read-only
and a power of two between 512 and 4096 bytes, so that the previous
alignment guarantees are fulfilled. This will replace the previous
log_sys.get_block_size().
log_sys.block_size, log_t::get_block_size(): Remove.
log_t::set_block_size(): Ensure that write_size will not be less
than the physical block size. There is no point to invoke this
function with 512 or less, because that is the minimum value of
write_size.
innodb_params_adjust(): Add some disabled code for adjusting
the minimum value and default value of innodb_log_write_ahead_size
to reflect the log_sys.write_size.
log_t::set_recovered(): Mark the recovery completed. This is the
place to adjust some things if we want to allow write_size>4096.
log_t::resize_write_buf(): Refer to write_size.
log_t::resize_start(): Refer to write_size instead of get_block_size().
log_write_buf(): Simplify some arithmetic and remove a goto.
log_t::write_buf(): Refer to write_size. If we are writing less than
that, do not switch buffers, but keep writing to the same buffer.
Move some code to improve the locality of reference.
recv_scan_log(): Refer to write_size instead of get_block_size().
os_file_create_func(): For type==OS_LOG_FILE on Linux, always invoke
os_file_log_maybe_unbuffered(), so that log_sys.set_block_size() will
be invoked even if we are not attempting to use O_DIRECT.
recv_sys_t::find_checkpoint(): Read the entire log header
in a single 12 KiB request into log_sys.buf.
Tested with:
./mtr --loose-innodb-log-write-ahead-size=4096
./mtr --loose-innodb-log-write-ahead-size=2048
log_event_old.cc is included by mysqlbinlog.cc.
With -DWITHOUT_SERVER the include path for the wsrep
headers isn't there.
As these aren't needed by mariadb-binlog, move
them under an #ifndef MYSQL_CLIENT preprocessor guard.
Caused by MDEV-18590
One of the possible use cases that reproduce the memory leak is listed below:
set timestamp= unix_timestamp('2000-01-01 00:00:00');
create or replace table t1 (x int) with system versioning
partition by system_time interval 1 hour auto
partitions 3;
create table t2 (x int);
create trigger tr after insert on t2 for each row update t1 set x= 11;
create or replace procedure sp2() insert into t2 values (5);
set timestamp= unix_timestamp('2000-01-01 04:00:00');
call sp2;
set timestamp= unix_timestamp('2000-01-01 13:00:00');
call sp2; # <<=== Memory leak happens here. In case the MariaDB server is
built with the option -DWITH_PROTECT_STATEMENT_MEMROOT,
the second execution would hit an assertion failure.
The reason for the memory leak is that once a new partition is created,
the table has to be closed and re-opened. This results in calling the
function extend_table_list(), which indirectly invokes the function
sp_add_used_routine() to add routines implicitly used by the statement,
making a new memory allocation.
To fix it, don't remove routines and tables the statement implicitly depends
on when a table is being closed for subsequent re-opening.
Executing an INSERT statement in PS mode with a positional parameter
bound to an array could result in an incorrect number of inserted rows
in case there is a BEFORE INSERT trigger that executes yet another
INSERT statement to put a copy of the row being inserted into some table.
The reason for the incorrect number of inserted rows is that the data
structure used for binding a positional argument to its actual values is
stored in THD (this is thd->bulk_param) and reused on processing every
INSERT statement. This leads to the actual values bound to the top-level
INSERT statement being consumed by other INSERT statements used in
triggers' bodies.
To fix the issue, temporarily reset thd->bulk_param to nullptr
before invoking triggers and restore its value when they finish execution.
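A sketch of the trigger setup involved (names hypothetical); the top-level
INSERT is executed in PS mode with an array bound to its positional
parameter via the client API:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t1_copy (a INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
  FOR EACH ROW INSERT INTO t1_copy VALUES (NEW.a);
  -- client side: INSERT INTO t1 VALUES (?) executed with an array of values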
Having Item_func_not items in item trees breaks assumptions during the
optimization phase about transformation possibilities in fix_fields().
Remove Item_func_not by extending normalization during parsing.
Reviewed by Oleksandr Byelkin (sanja@mariadb.com)
New error codes can only be added in the latest major version.
Adding ER_KILL_DENIED_HIGH_PRIORITY would shift by one all
error codes that were added in MariaDB Server 10.6 or later.
This amends commit 1001dae186
Suggested by: Sergei Golubchik
RAND() and UUID() are treated differently with respect to subquery
materialization; both should be marked as uncacheable, forcing
materialization.
Altered Create_func_uuid(_short)::create_builder().
Added a comment in the header about UNCACHEABLE_RAND also meaning
unmergeable.
Apart from better performance when accessing thread local variables,
we'll get rid of things that depend on initialization/cleanup of
pthread_key_t variables.
Where it makes sense, use compiler-dependent pre-C++11 thread-local
equivalents to avoid the initialization check overhead
that non-static thread_local can suffer from.
There were erroneous calls to charpos() in key_hashnr() and key_buf_cmp().
These functions are never called with prefix segments.
The charpos() calls were wrong. Before the change, BNHL joins
- could return wrong result sets, as reported in MDEV-34417
- were extremely slow for multi-byte character sets, because
the hash was calculated on string prefixes, which increased
the amount of collisions drastically.
This patch fixes the wrong result set as reported in MDEV-34417,
as well as (partially) the performance problem reported in MDEV-34352.
A user reported that the MariaDB server got a signal 6 but still accepted
new connections and did not crash. I have not been able to find a way to
repeat this or find the cause of the issue. However, to make it easier to
notice that the server is unstable, I added code to disable new
connections when the handle_fatal_signal() handler has been called.
The problem is seen on CI, where TEMP pointed to a directory outside of
the usual vardir when testing mysql_install_db.exe.
A likely cause for this error is that TEMP was periodically cleaned up
by some automation running on the host, perhaps by buildbot itself.
To fix this, mysql_install_db.exe will now use the datadir as --tmpdir
for the bootstrap run. This will minimize the chances of running into any
environment problems.
This commit introduces a reset of the password errors counter on any ALTER
USER command for the altered user. This is done so as not to require a
complete privilege system reload.
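For example (account name hypothetical), either of these now clears the
failed-login counter for the user:
  ALTER USER 'app'@'%' IDENTIFIED BY 'new_password';
  ALTER USER 'app'@'%' ACCOUNT UNLOCK;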
The problem was that there were two non-conflicting idle local
transactions in node_1 that both inserted a key into a primary key.
Then two transactions from other nodes also inserted
keys into the primary key, such that the insert from node_2 conflicted
with one of the local transactions in node_1, so there would
be a duplicate key if both were committed. For this, the insert
from the other node tries to acquire an S-lock on this record,
and because this insert is a high-priority brute force (BF)
transaction, it kills the idle local transaction.
Concurrently, the second insert from node_3 conflicts with the second
idle insert transaction in node_1. Again, it tries to acquire an
S-lock on this record and kills the idle local transaction.
At this point we have two non-conflicting high-priority
transactions holding S-locks on different records in node_1.
For example like this: rec s-lock-node2-rec s-lock-node3-rec rec.
Because these high-priority BF transactions do not wait for
each other, the insert from node_3, which has a later seqno than
the insert from node_2, can continue. It will try to acquire an
insert intention lock for the record it tries to insert (to avoid
a duplicate key being inserted by a local transaction). However,
it will note that there is a conflicting S-lock in the same gap
between records. This leads to a deadlock error, as we have
defined that BF transactions may not wait for record locks,
but we can't kill the conflicting BF transaction because
it has a lower seqno and should commit first.
BF transactions are executed concurrently because their
primary key values are different, i.e. they do not
conflict.
Galera certification will make sure that inserts from
other nodes, i.e. these high-priority BF transactions,
can't insert duplicate keys. Local transactions naturally
can, but they will be killed when the BF transaction
acquires the required record locks.
Therefore, we can allow the situation where there are a conflicting
S-lock and insert intention lock regardless of their seqno
order, and let both continue with no wait. This leads
to a situation where we need to allow a BF transaction
to wait when lock_rec_has_to_wait_in_queue is called,
because this function is also called from
lock_rec_queue_validate, and because the lock is waiting
there would be an assertion in ut_a(lock->is_gap()
|| lock_rec_has_to_wait_in_queue(cell, lock));
lock_wait_wsrep_kill
  Add debug sync points for BF transactions killing a
  local transaction.
wsrep_assert_no_bf_bf_wait
  Also print the requested lock information.
lock_rec_has_to_wait
  Add a function to handle wsrep transaction lock wait
  cases.
lock_rec_has_to_wait_wsrep
  New function to handle wsrep transaction lock wait
  exceptions.
lock_rec_has_to_wait_in_queue
  Remove the wsrep exception; in this function all
  conflicting locks need to wait in the queue.
  Conflicts between BF and local transactions
  are handled in lock_wait.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Changed the error code for Galera unkillable threads to
ER_KILL_DENIED_HIGH_PRIORITY, giving the message:
This is a high priority thread/query and cannot be killed
without compromising the consistency of the cluster
Also a warning is produced:
Thread %lld is [wsrep applier|high priority] and cannot be killed
After MDEV-4013, the maximum length of replication passwords was extended to
96 ASCII characters. After a restart, however, slaves only read the first 41
characters of MASTER_PASSWORD from the master.info file. This led to slaves
being unable to reconnect to the master after a restart.
After a slave restart, if a master.info file is detected, use the full
allowable length of the password rather than 41 characters.
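Illustration (host/user hypothetical, password elided): a replica
configured like this now reconnects after a restart:
  CHANGE MASTER TO
    MASTER_HOST='primary.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='<up to 96 ASCII characters>';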
Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
In the error case of THD::register_slave(), there is undefined
behavior with the Slave_info si, because it is allocated via malloc()
(my_malloc) but cleaned up via delete.
This patch makes these consistent by switching si's cleanup
to use my_free.
Review followup: RANGE_OPT_PARAM statement_should_be_aborted()
checks for thd->is_fatal_error and thd->is_error(). The first is
redundant when the second is present.
The optimizer deals with Rowid Filters this way:
1. First, range optimizer is invoked. It saves information
about all potential range accesses.
2. A query plan is chosen. Suppose, it uses a Rowid Filter on
index $IDX.
3. JOIN::make_range_rowid_filters() calls the range optimizer
again to create a quick select on index $IDX which will be used
to populate the rowid filter.
The problem: KILL command catches the query in step #3. Quick
Select is not created which causes a crash.
Fixed by checking if query was killed. Note: the problem also
affects 10.6, even if error handling for
SQL_SELECT::test_quick_select is different there.
(Variant for 10.6: return error code from SQL_SELECT::test_quick_select)
Rowid Filter cannot be used with reverse-ordered scans, for the
same reason as IndexConditionPushdown cannot be.
test_if_skip_sort_order() already has logic to disable ICP when
setting up a reverse-ordered scan. Added logic to also disable
Rowid Filter in this case, factored out the code into
prepare_for_reverse_ordered_access(), and added a comment describing
the cause of this limitation.