Having Item_func_not items in item trees breaks assumptions during the
optimization phase about transformation possibilities in fix_fields().
Remove Item_func_not by extending normalization during parsing.
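A minimal sketch of the idea (not the actual patch; negated_item() stands in
for whatever inversion hook the parser uses):

    /* Hedged sketch: fold NOT into its argument at parse time so that no
       Item_func_not survives into the item tree. */
    Item *negate_expression(THD *thd, Item *expr)
    {
      if (Item *inverted= expr->negated_item(thd))   // assumed hook
        return inverted;                   // NOT (a < b)  ->  a >= b
      /* forms without an inverse item; the real patch normalizes
         these as well */
      return new (thd->mem_root) Item_func_not(thd, expr);
    }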
Reviewed by Oleksandr Byelkin (sanja@mariadb.com)
New error codes can only be added in the latest major version.
Adding ER_KILL_DENIED_HIGH_PRIORITY would shift by one all
error codes that were added in MariaDB Server 10.6 or later.
This amends commit 1001dae186
Suggested by: Sergei Golubchik
RAND() and UUID() are treated differently with respect to subquery
materialization; both should be marked as uncacheable, forcing materialization.
Altered Create_func_uuid(_short)::create_builder().
Added comment in header about UNCACHEABLE_RAND meaning also unmergeable.
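A hedged sketch of the intended effect (the real change lives in the created
function items; exact flags and call sites may differ):

    /* Mark the current query block the same way RAND() does, so a
       subquery wrapping UUID()/UUID_SHORT() is materialized, never merged.
       Per the header comment, UNCACHEABLE_RAND also means unmergeable. */
    thd->lex->current_select->uncacheable|= UNCACHEABLE_RAND;
    thd->lex->safe_to_cache_query= false;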
- A few of the test cases should make sure that InnoDB does hit
the debug sync point during startup of the server.
InnoDB can remove the double quotes around the debug point
in the restart parameters.
- InnoDB tries to write FILE_CHECKPOINT marker during
early recovery when log file size is insufficient.
While updating the log checkpoint at the end of the recovery,
InnoDB must already have written out all pending changes
to the persistent files. To complete the checkpoint, InnoDB
has to write some log records for the checkpoint and to
update the checkpoint header. If the server gets killed
before updating the checkpoint header, the log file is left
unrecoverable.
- This patch avoids the FILE_CHECKPOINT marker during early
recovery and narrows the window of opportunity for
making the log file unrecoverable.
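In outline, the ordering constraint looks like this (a hedged sketch, not
the real InnoDB code):

    /* The checkpoint only becomes valid once the header is updated; a
       kill between the last two steps used to leave the log unusable. */
    void make_checkpoint()
    {
      flush_all_pending_changes();     // dirty pages reach the data files
      write_checkpoint_log_records();  // e.g. the FILE_CHECKPOINT marker
      update_checkpoint_header();      // commit point of the checkpoint
    }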
There were erroneous calls to charpos() in key_hashnr() and key_buf_cmp().
These functions are never called with prefix segments, so the
charpos() calls were wrong. Before the change BNHL joins
- could return wrong result sets, as reported in MDEV-34417
- were extremely slow for multi-byte character sets, because
the hash was calculated on string prefixes, which increased
the amount of collisions drastically.
This patch fixes the wrong result set as reported in MDEV-34417,
as well as (partially) the performance problem reported in MDEV-34352.
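A hedged sketch of the change in key_hashnr() (key_buf_cmp() is analogous;
names other than charpos() are approximations):

    /* The segment is never a prefix segment here, so hash the full byte
       length. The removed charpos() call computed a character-count
       prefix, which truncated multi-byte values and shrank the hashed
       input, increasing collisions. */
    size_t length= seg->length;
    /* removed:
       length= my_charpos(cs, pos, pos + seg->length,
                          seg->length / cs->mbmaxlen); */
    cs->coll->hash_sort(cs, pos, length, &nr1, &nr2);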
Changed the error code for Galera unkillable threads to
ER_KILL_DENIED_HIGH_PRIORITY, giving the message

  This is a high priority thread/query and cannot be killed
  without compromising the consistency of the cluster

A warning is also produced:

  Thread %lld is [wsrep applier|high priority] and cannot be killed
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This problem was earlier fixed by this commit:
> commit 08c7ab404f
> Author: Aleksey Midenkov <midenok@gmail.com>
> Date: Mon Apr 18 12:44:27 2022 +0300
>
> MDEV-24176 Server crashes after insert in the table with virtual
> column generated using date_format() and if()
Adding an mtr test only.
After MDEV-4013, the maximum length of replication passwords was extended to
96 ASCII characters. After a restart, however, slaves only read the first 41
characters of MASTER_PASSWORD from the master.info file. This led to slaves
being unable to reconnect to the master after a restart.
After a slave restart, if a master.info file is detected, use the full
allowable length of the password rather than 41 characters.
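A hedged sketch of the change (helper and member names are assumptions):

    /* Read MASTER_PASSWORD from master.info using the full buffer size,
       not the pre-MDEV-4013 41-character limit. */
    init_strvar_from_file(mi->password, sizeof(mi->password),
                          &mi->file, "");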
Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
As all MariaDB Server errors now have a dedicated web page, the
perror utility is extended to include a link to the KB page of
the corresponding error code.
All new code of the whole pull request, including one or several
files that are either new files or modified ones, are contributed
under the BSD-new license. I am contributing on behalf of my
employer Amazon Web Services, Inc.
The optimizer deals with Rowid Filters this way:
1. First, the range optimizer is invoked. It saves information
about all potential range accesses.
2. A query plan is chosen. Suppose it uses a Rowid Filter on
index $IDX.
3. JOIN::make_range_rowid_filters() calls the range optimizer
again to create a quick select on index $IDX which will be used
to populate the rowid filter.
The problem: a KILL command catches the query in step #3. The quick
select is then not created, which causes a crash.
Fixed by checking whether the query was killed. Note: the problem also
affects 10.6, even if error handling for
SQL_SELECT::test_quick_select is different there.
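A hedged sketch of the check (simplified; the surrounding error handling
differs between versions):

    /* After the second range-optimizer call in step 3: a KILL may have
       prevented the quick select from being created, so check before use. */
    if (thd->is_killed() || !sel.quick)
      return true;   /* abort the statement instead of crashing */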
Rowid Filter cannot be used with reverse-ordered scans, for the
same reason as IndexConditionPushdown cannot be.
test_if_skip_sort_order() already has logic to disable ICP when
setting up a reverse-ordered scan. Added logic to also disable
Rowid Filter in this case, factored out the code into
prepare_for_reverse_ordered_access(), and added a comment describing
the cause of this limitation.
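A hedged sketch of the factored-out helper (member and handler names are
approximations):

    /* Both ICP and a Rowid Filter check rows in forward index order,
       so neither can be used once the scan direction is reversed. */
    static void prepare_for_reverse_ordered_access(JOIN_TAB *tab)
    {
      tab->table->file->cancel_pushed_idx_cond();  // existing ICP logic
      tab->range_rowid_filter_info= NULL;          // new: drop the filter
    }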
To make this possible, it was also necessary to enhance the mariadb
client with the option --print-query-on-error.
This option can also be very useful when running a batch of queries
through the mariadb client and one wants to find out where things go
wrong.
TODO: It would be good to enhance mariadb_upgrade to not call the mariadb
client for executing queries but instead do this internally. This
would have made this patch much easier!
Reviewed by: Sergei Golubchik <serg@mariadb.com>
The --skip-not-found switch tells mtr to skip tests that are not found
instead of aborting. But it failed to skip a test if the suite name was
not found. This problem also made the *last-N-failed buildbot builders fail
to run `mtr --skip-not-found` if the last commit removed a file in
the mysql-test/include/ directory.
This commit fixes it: now the not found test is properly skipped,
no matter which component of the test name was not found:
$ ./mtr main.foo --skip-not-found foo.main
...
==============================================================================
TEST WORKER RESULT TIME (ms) or COMMENT
--------------------------------------------------------------------------
foo.main [ skipped ] not found
main.foo [ skipped ] not found
--------------------------------------------------------------------------
Adding a test of the length of lex->name to show_create_db().
Without this test, writes beyond the end of db_name_buff were possible
for a too long database name.
This is a regression from commit 3228c08fa8. The problem is that
when the table's storage engine is determined, there should be a
check whether the table is partitioned, and if it is, the storage
engine implementing the partitions should be determined.
The reported bug is reproducible only with --log-bin, so make sure
that the tests changed by 3228c08fa8 and the new test are run both
with --log-bin and with the binlog disabled.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
number of non-user tablespace.
fil_space_t::try_to_close(): Don't try to close
a tablespace that is acquired by the caller of
the function.
Added a suppression message to the open_files_limit test case.
number of non-user tablespace.
- InnoDB only closes user tablespaces when the number of open
files exceeds the innodb_open_files limit. In that case, InnoDB should
make sure that the innodb_open_files value is greater
than the number of undo, system and temporary tablespace files.
Some galera tests start 6 galera nodes. Each galera node requires
three ports: 6*3 = 18. Plus 6 ports are needed for 6 mariadbd servers.
Since the number of ports is rounded up to 10 everywhere in mtr, we
will take 30 as the default value for the port group size parameter.
Avoid starting transactions on the wsrep-lib side when wsrep is
disabled. It is unnecessary, and causes spurious deadlock errors on
transaction clean up.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that updates to the mysql.gtid_slave_pos table were
replicated even when they were never used and, because of that,
never deleted. Avoid replicating the mysql.gtid_slave_pos
table if wsrep_gtid_mode=OFF.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=======
- This commit is a merge of mysql commit 129ee47ef994652081a11ee9040c0488e5275b14.
InnoDB FTS can be left in an inconsistent state when a sync
operation terminates the server before committing the operation. This
could lead to an incorrect synced doc id and incorrect query results.
Solution:
========
- During the sync commit operation, InnoDB should pass
the sync transaction to update the max doc id
in the config table.
fts_read_synced_doc_id(): This function is used
to read only the synced doc id from the config table.
non-default collation_connection
Analysis:
Due to the different collation, the string has nothing to chop off.
Fix:
Got rid of chop(); append " ," only when we have more elements to
add to the result.
The problem was in the Aria part of the range optimizer,
maria_records_in_range(), which wrongly concluded that there were no rows
in the range.
This error would happen in the unlikely case when searching for a range
on a partial key and there was a match for the first key part in the
upper part of the b-tree (node) and also a match in the underlying
node page.
In other words, for this bug to happen one has to use Aria, have a multi-part
key with a lot of identical values for the first key part, and do a
range search on the second part of the key.
Fixed by ensuring that we do not stop searching for partial keys found
on node pages.
Other things:
- Added some comments
- Changed a variable name to more clearly explain its purpose.
- Fixed a wrong cast in _ma_record_pos() that could cause problems on 32 bit
systems.
This bug could affect queries containing a join of derived tables over
grouping views such that one of the derived tables contains a window
function while another uses view V with dependent subquery DSQ containing
a set function aggregated outside of the subquery in the view V. The
subquery also refers to the fields from the group clause of the view. Due to
this bug execution of such queries could produce wrong result sets.
When the fix_fields() method performs context analysis of a set function AF,
at the very beginning the function Item_sum::init_sum_func_check()
is called. The function copies the pointer to the embedding set function,
if any, stored in THD::LEX::in_sum_func into the corresponding field of the
set function AF simultaneously changing the value of THD::LEX::in_sum_func
to point to AF. When, at the very end of the fix_fields() method, the function
Item_sum::check_sum_func() is called, it is supposed to restore the value
of THD::LEX::in_sum_func to point to the embedding set function. And in
fact Item_sum::check_sum_func() did this, but only for regular set functions,
not for those used in window functions. As a result after the context
analysis of AF had finished THD::LEX::in_sum_func still pointed to AF.
This confused the further context analysis. In particular it led to wrong
resolution of Item_outer_ref objects in the fix_inner_refs() function.
This wrong resolution forced reading the values of the grouping fields referred
to in DSQ not from the temporary table used for aggregation, from which they
were supposed to be read, but from the table used as the source table for
aggregation.
This patch guarantees that the value of THD::LEX::in_sum_func is properly
restored after the call of fix_fields() for any set function.
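A hedged, heavily simplified sketch of the save/restore contract the patch
enforces (the real methods do much more):

    bool Item_sum::init_sum_func_check(THD *thd)
    {
      in_sum_func= thd->lex->in_sum_func;  // save embedding set function
      thd->lex->in_sum_func= this;         // AF is current in fix_fields()
      return false;
    }

    bool Item_sum::check_sum_func(THD *thd, Item **ref)
    {
      /* The bug: this restore was skipped on the window-function path. */
      thd->lex->in_sum_func= in_sum_func;
      return false;
    }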
Field_string::val_int(), Field_string::val_real(), Field_string::val_decimal()
passed the whole buffer of field_length bytes to the data type conversion routines.
This made the conversion routines print redundant trailing spaces in case of warnings.
Adding a method Field_string::to_lex_cstring() and using it inside
val_int(), val_real(), val_decimal(), val_str().
After this change the conversion routines get the same value as what
val_str() returns, and no redundant trailing spaces are displayed.
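A hedged sketch of the new method (helper names are approximations):

    /* Return the CHAR(N) value without its trailing pad spaces, so the
       conversion routines see the same bytes as val_str(). */
    LEX_CSTRING Field_string::to_lex_cstring() const
    {
      const char *str= (const char *) ptr;
      CHARSET_INFO *cs= charset();
      /* lengthsp() returns the length with trailing spaces stripped */
      return {str, cs->cset->lengthsp(cs, str, field_length)};
    }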
GTID events are applied without a running server transaction, so
we need to set the next transaction ID for the Wsrep transaction.
The whole Galera cluster now has a single GTID value (including
the server ID throughout the cluster), so fix the config accordingly.
Add force restart so that repeated MTR test execution prints
consistent GTID values, otherwise they would have been recovered
from the previous run.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
It's possible to establish Galera multi-cluster setups connected
through the native replication when every Galera cluster is configured
to have a separate domain ID.
For this setup to work, we need to replace the domain ID values in
generated GTID events, when they are written at transaction commit,
with the values configured by Wsrep replication.
At the same time, it's possible that the GTID event already contains
a correct domain ID if it comes through the native replication from
another Galera cluster.
In this case, when such an event is applied either through a native
replication slave thread or through Wsrep applier, we write GTID event
on transaction start and avoid writing it during transaction commit.
The code contained multiple problems that were fixed:
- applying GTID events didn't work because they are applied without a
running server transaction, and the Wsrep transaction was not started
- GTID event generation on transaction start didn't contain proper
"standalone" and "is_transactional" flags that the original applied
GTID event contained
- the condition determining that a GTID event is written on transaction
start (to avoid writing it on commit) relied on the fact that the GTID
event is the first found in the transaction/statement caches, which wasn't
the case and resulted in duplicate GTID events being written
- instead of relying on the caches to find a GTID event, a simple check
is introduced that follows the exact rules for determining whether an event
is written at transaction start, as described above
- the test case is improved to check that exact GTID events are
applied after two Galera clusters have synced.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The test that triggers multi-master conflict between two CTAS commands
uses LOCK/UNLOCK TABLES to block local CTAS from progressing. It could
result in a race where the UNLOCK TABLES command is issued a bit earlier
than needed, causing local CTAS to run further and change the wsrep
transaction state, so that a different code path is taken later and
the original error gets overridden, causing the test to fail.
The solution is to replace LOCK/UNLOCK TABLES with debug sync points.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error()
with an empty string {"",0} instead of NULL string {nullptr,0}.
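A hedged sketch of the my_string_metadata_get_mb() guard (struct fields are
approximations):

    /* Return early for empty input: with str == nullptr, even computing
       str + length for the scan loop's end pointer is the UB in question. */
    if (!length)
    {
      metadata->char_length= 0;
      metadata->repertoire= MY_REPERTOIRE_ASCII;
      return;
    }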
- Fixing the code in get_interval_value() to use Longlong_hybrid_null.
This makes it possible to handle correctly:
- Signed and unsigned arguments
(the old code assumed the argument to be signed)
- The corner case with LONGLONG_MIN, avoiding undefined negation behavior
This fixes the UBSAN warning:
negation of -9223372036854775808 cannot be represented
in type 'long long int';
- Fixing the code in get_interval_value() to avoid overflow in
the INTERVAL_QUARTER and INTERVAL_WEEK branches.
This fixes the UBSAN warning:
signed integer overflow: -9223372036854775808 * 7 cannot be represented
in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle
huge numbers correctly. Before the change, huge positive numbers
were treated as their negative complements.
Note, some other branches still can be affected by this problem
and should also be fixed eventually.
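A standalone illustration of the two UBSAN findings (this is not the
MariaDB code, which uses Longlong_hybrid_null):

    #include <climits>

    /* Negating LLONG_MIN directly is UB; compute the magnitude in
       unsigned arithmetic, where wraparound is well defined. */
    unsigned long long magnitude(long long v)
    {
      return v < 0 ? 0ULL - (unsigned long long) v
                   : (unsigned long long) v;
    }

    /* Multiplying first can overflow (e.g. LLONG_MIN * 7); range-check
       the operand instead, as the QUARTER/WEEK branches now do. */
    bool weeks_to_days(long long weeks, long long *days)
    {
      if (weeks > LLONG_MAX / 7 || weeks < LLONG_MIN / 7)
        return false;             // overflow: caller raises the error
      *days= weeks * 7;
      return true;
    }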
- InnoDB page compression works only on COMPACT or DYNAMIC row
format tables. So InnoDB should report an error when ALTER TABLE
tries to enable PAGE_COMPRESSED for a ROW_FORMAT=REDUNDANT table.
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
input buffer to avoid writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
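A typical corrected call pattern (buffer and argument names are just for
illustration):

    char path[FN_REFLEN];
    /* strxnmov() can store up to 'len' bytes plus the terminating '\0',
       so the length argument must be one less than the buffer size. */
    strxnmov(path, sizeof(path) - 1, dir, "/", name, reg_ext, NullS);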
Fixing the condition to raise an overflow when the ulonglong
representation of the number is greater than or equal to 0x8000000000000000ULL.
Before this change the condition did not catch -9223372036854775808
(the smallest possible signed negative longlong number).
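Restated as code (names are placeholders):

    /* Raise the overflow when the ulonglong representation is at or above
       0x8000000000000000ULL, so that -9223372036854775808 is now caught. */
    if (ulonglong_value >= 0x8000000000000000ULL)
      return raise_overflow();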
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
Backporting the fixes from 11.5 to 10.5
BUF_LRU_MIN_LEN (256) is too high a value for small buffer pool (BP) sizes.
For example, with a 16K page size the limit amounts to 256 * 16K = 4MB,
which is more than 5% of the total BP for BP sizes below 80MB, and 80%
of the smallest allowed BP of 5MB.
Non-data objects like explicit locks could occupy part of the BP,
reducing the pages available for the LRU. If the LRU reaches the minimum
limit and no free pages are available, the server would hang, with the
page cleaner not able to free any more pages.
Fix: To avoid such a hang, we adjust the LRU limit to be lower than the
limit for data objects as checked in buf_LRU_check_size_of_non_data_objects(),
i.e. one page less than 5% of the BP.
trx_free_at_shutdown(): Similar to trx_t::commit_in_memory(),
clear the detailed_error (FOREIGN KEY constraint error) before
invoking trx_t::free(). We only do this on debug-instrumented
builds in order to avoid a debug assertion failure on shutdown.