The @@global.character_set_client variable could erroneously be set
to a non-default collation of its character set, which in turn made
the `SET NAMES DEFAULT` statement crash the server.
Fixing the code to make sure that the global values of these variables:
@@character_set_client
@@character_set_connection
@@character_set_server
@@character_set_database
point to the default compiled collations of their character sets.
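A minimal sketch of the idea, using the mysys charset API (the
assignment target shown is illustrative):

    /* MY_CS_PRIMARY resolves a character set name to its default
       compiled collation, never to a non-default one */
    CHARSET_INFO *cs= get_charset_by_csname(cs_name, MY_CS_PRIMARY,
                                            MYF(0));
    if (cs)
      global_system_variables.character_set_client= cs;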
- Change the comments in class ha_handler_stats to say the members
are in ticks, not milliseconds.
- In sql_explain.cc, adjust the scale to print milliseconds.
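How the scaling might look (a hedged sketch; the member and the
frequency source are illustrative, not the exact sql_explain.cc code):

    /* members of ha_handler_stats are raw timer ticks; convert to
       milliseconds only when printing */
    double ms= 1000.0 * (double) stats.pages_read_time /
               (double) ticks_per_second;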
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error()
with an empty string {"",0} instead of NULL string {nullptr,0}.
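A minimal sketch of the my_string_metadata_get_mb() guard (simplified
signature; the real function collects more metadata):

    static void metadata_get(const char *str, size_t length)
    {
      if (length == 0)
        return;              /* handles {nullptr,0}: no str + length */
      const char *end= str + length;  /* safe: str != nullptr here */
      for (const char *p= str; p < end; p++)
      {
        /* ... accumulate metadata ... */
      }
    }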
- Fixing the code in get_interval_value() to use Longlong_hybrid_null.
This allows the following to be handled correctly:
- Signed and unsigned arguments
  (the old code assumed the argument to be signed)
- The corner case with LONGLONG_MIN, avoiding undefined negation
  behavior (see the sketch after this list)
This fixes the UBSAN warning:
negation of -9223372036854775808 cannot be represented
in type 'long long int';
- Fixing the code in get_interval_value() to avoid overflow in
the INTERVAL_QUARTER and INTERVAL_WEEK branches.
This fixes the UBSAN warning:
signed integer overflow: -9223372036854775808 * 7 cannot be represented
in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle
huge numbers correctly. Before the change, huge positive numbers
were treated as their negative complements.
Note, some other branches still can be affected by this problem
and should also be fixed eventually.
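A sketch of the UB-free negation that Longlong_hybrid-style code
performs (an illustrative helper, not the actual class):

    /* |LONGLONG_MIN| does not fit in longlong, so negate in the
       unsigned domain, where the arithmetic is well defined */
    static inline ulonglong ull_abs(longlong v)
    {
      return v >= 0 ? (ulonglong) v : 0ULL - (ulonglong) v;
    }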
to_natsort_key(): Zero-initialize also num_start. This silences a
compiler warning. There is no impact on correctness, because
before the first read of num_start, !n_digits would always hold
and hence num_start would have been initialized.
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
input buffer to avoid writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
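The correct calling pattern (strxnmov() and NullS are from the mysys
string API; buff and path are illustrative):

    char buff[FN_REFLEN];
    /* the second argument is sizeof(buff) - 1 because strxnmov() can
       write up to that many characters plus the terminating '\0' */
    strxnmov(buff, sizeof(buff) - 1, path, reg_ext, NullS);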
Preserve compatibility with fmt 7.1.3 by including the previous
non-const format() function (a sketch follows the error below).
The error was:
fmt/format.h:3466:8: note: candidate function template not
viable: no known conversion from 'const formatter<String, [2 * ...]>' to
'formatter<fmt::basic_string_view<char>, [2 * ...]>' for object argument
3466 | auto format(const T& val, FormatContext& ctx) ->
decltype(ctx.out()) {
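A hedged sketch of the shim inside the formatter<String>
specialization (assumed to derive from
fmt::formatter<fmt::basic_string_view<char>>); keeping both overloads
lets old fmt find the non-const format() and new fmt the const one:

    template <typename FormatContext>
    auto format(const String &str, FormatContext &ctx) const
        -> decltype(ctx.out())
    {
      /* newer fmt requires a const format() */
      return fmt::formatter<fmt::basic_string_view<char> >::format(
          fmt::basic_string_view<char>(str.ptr(), str.length()), ctx);
    }
    template <typename FormatContext>
    auto format(const String &str, FormatContext &ctx)
        -> decltype(ctx.out())
    {
      /* the previous non-const overload, kept for fmt 7.1.3 */
      return static_cast<const formatter &>(*this).format(str, ctx);
    }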
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
Backporting the fixes from 11.5 to 10.5
Similar to #2480.
567b681 introduced safe_strcpy() to minimize the use of strcpy(),
whose potential for unsafe memory overflows makes its use
discouraged.
Replace instances of strcpy() with safe_strcpy() where possible, limited
here to files in the `sql/` directory.
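The typical replacement pattern, assuming the
safe_strcpy(dst, dst_size, src) signature from 567b681 (a truncating
copy that always NUL-terminates):

    char buff[FN_REFLEN];
    /* before: strcpy(buff, name);  -- may overflow buff */
    safe_strcpy(buff, sizeof(buff), name);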
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
A memory leak happens on the second execution of a query that runs in PS mode
and uses the function ROWNUM().
The memory leak took place on allocation of an instance of the class
Item_int for storing a limit value, performed in the function
set_limit_for_unit indirectly called from JOIN::optimize_inner. A
typical trace to the place where the memory leak occurred is below:
JOIN::optimize_inner
optimize_rownum
process_direct_rownum_comparison
set_limit_for_unit
new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);
To fix this memory leak, the function optimize_rownum() has to be
called only once, on the first execution, and never called after
that. To control it, the new data member
  first_rownum_optimization
was added to the structure st_select_lex.
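A sketch of the guard with the names used above (the exact arguments
of optimize_rownum() are assumptions):

    /* run the ROWNUM() rewrite only on the first execution; later
       executions must not allocate new Item_int on thd->mem_root */
    if (select_lex->first_rownum_optimization)
    {
      optimize_rownum(thd, unit, cond);
      select_lex->first_rownum_optimization= false;
    }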
Workaround patch: Do not remove the GROUP BY clause when it has
one or more subqueries in it.
remove_redundant_subquery_clauses() removes a redundant GROUP BY
clause from queries of the form:
expr IN (SELECT no_aggregates GROUP BY ...)
expr {CMP} {ALL|ANY|SOME} (SELECT no_aggregates GROUP BY ...)
This hits problems when the GROUP BY clause itself has subqueries.
This patch is just a workaround: it disables removal of the GROUP BY
clause if the clause has one or more subqueries in it.
Tests:
- subselect_elimination.test has all known crashing cases.
- subselect4.result, insert_select.result are updated.
Note that in some cases results of SELECT are changed too (not just
EXPLAINs). These are caused by non-deterministic SQL: when running a
query like:
x > ANY( SELECT col1 FROM t1 GROUP BY constant_expression)
without removing the GROUP BY, the executor is free to pick the value
of t1.col1 from any row in the GROUP BY group (denote it $COL1_VAL).
Then, it computes x > ANY(SELECT $COL1_VAL).
When running the same query and removing the GROUP BY:
x > ANY( SELECT col1 FROM t1)
the executor will actually check all rows of t1.
Due to this bug a wrong result might be produced by queries with
an IN subquery predicate in the WHERE clause and a derived table in the
FROM clause to which split optimization could be applied.
The function JOIN::fix_all_splittings_in_plan() used the value of the
bitmap JOIN::sjm_lookup_tables as it had been left after the
search for the best plan for the select containing the splittable
derived table. That value is not guaranteed to be correct, so the
bitmap has to be recalculated to exclude the plans with key
accesses from SJM lookup tables.
Approved by Igor Babaev <igor@mariadb.com>
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique index is logically unique, because on
the engine level it is not. And the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
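A hedged sketch of the reworked call (the key_map-based signature is
described above; the persist flag and the chosen key are assumptions):

    key_map to_keep;
    to_keep.clear_all();
    /* the server decides what stays enabled -- e.g. the key backing a
       long unique constraint -- and the engine just obeys the bitmap */
    to_keep.set_bit(unique_key_nr);
    table->file->disable_indexes(to_keep, /*persist=*/ true);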
This commit fixes sporadic failures in the galera_3nodes_sr.GCF-336
test. The following changes have been made here:
1) A small addition to the test itself which should make
it more deterministic by waiting for non-primary state
before COMMIT;
2) More careful handling of the wsrep_ready variable in
the server code (it should always be protected with a mutex).
No additional tests are required.
Refinement of the original patch.
Move the code to reset the kill up into the parent class
Xid_apply_log_event, to also fix the similar issue for XA COMMIT.
Increase the number of slave retries in the test case
rpl.rpl_parallel_multi_domain_xa to fix some sporadic failures. The test
generates massive amounts of conflicting transactions in multiple
independent domains, which can cause multiple rollback+retry for a
transaction as it conflicts with transactions in other domains one-by-one.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Don't deadlock kill event groups in other domains if they are not
SPECULATE_OPTIMISTIC. Such event groups may not be able to safely roll back
and retry (eg. DDL).
But do deadlock kill a transaction T2 from a blocked transaction U in another
domain, even if T2 has a lower sub_id than U. Otherwise, in case of a cycle
T2->T1->U->T2, we might not break the cycle if U is not SPECULATE_OPTIMISTIC.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When HA_DUPLICATE_POS is not supported, the row to replace was located
by ha_index_read_idx_map, which navigates by hash only. Thus, given a
hash collision, it may choose an incorrect row.
handler::position would be correct and very convenient to use here.
dup_ref is already set by the handler independently of the engine
capabilities: when an extra lookup is made (for a long unique key or
something else, for example WITHOUT OVERLAPS), such an error is
indicated by file->lookup_errkey != -1.
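A sketch of the resulting navigation (member names follow the
description above; error handling is elided):

    /* when the extra lookup already ran, dup_ref holds the exact
       position of the duplicate row: position on it directly instead
       of re-reading by a possibly colliding hash */
    if (file->lookup_errkey != -1)
      error= file->ha_rnd_pos(table->record[1], file->dup_ref);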
In strict mode a timestamp(0) column could be directly assigned from
another timestamp(N>0) column with the value '1970-01-01 00:00:00.1'
(at time zone '+00:00'), or with any other value '1970-01-01 00:00:00.XXXXXX'
with non-zero microsecond value XXXXXX.
This assignment happened silently without warnings or errors.
It worked as follows:
- The value {tv_sec=0, tv_usec=100000}, which is '1970-01-01 00:00:00.1'
was rounded to {tv_sec=0, tv_usec=0}, which is '1970-01-01 00:00:00.0'
- Then {tv_sec=0, tv_usec=0} was silently re-interpreted as zero datetime.
After the fix this assignment always raises a warning,
which in strict mode is escalated to an error.
The problem in this scenario is that '1970-01-01 00:00:00' cannot be stored,
because its timeval value {tv_sec=0, tv_usec=0} is reserved for zero datetimes.
Thus the warning should be raised no matter if sql_mode allows or disallows
zero dates.
Field_timestampf::val_native() checked only the
first four bytes to detect zero dates.
That was not enough. Fixing the code to check all packed_length()
bytes to detect zero dates.
The code in Field_timestamp::save_in_field() did not catch the
zero datetime and stored it into the other field like a usual value
using store_timestamp_dec(), which knows nothing about zero dates and
treats {tv_sec=0, tv_usec=0} as a normal timeval value corresponding to
'1970-01-01 00:00:00 +00:00'.
Fixing the code to catch the special combination (ts==0 && sec_part==0)
and store it using store_time_dec() with a zero datetime passed as an
argument.
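A sketch of that fix (set_zero_time() is the standard helper; the
variable names are illustrative):

    if (ts == 0 && sec_part == 0)
    {
      /* {tv_sec=0, tv_usec=0} is the reserved zero-datetime marker,
         not '1970-01-01 00:00:00 +00:00' */
      MYSQL_TIME zero;
      set_zero_time(&zero, MYSQL_TIMESTAMP_DATETIME);
      return to->store_time_dec(&zero, decimals());
    }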
When calculating next wakeup timepoint for the timer thread, with large
thread_pool_stall_limit values, 32bit int overflow can happen.
Fixed by making one operand 8 bytes wide.
Also fixed the type of tick_interval to be unsigned, so it does not
go negative for very large thread_pool_stall_limit values.
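The overflow and the fix in miniature (variable names are
illustrative):

    uint tick_interval= stall_limit_ms;   /* unsigned now */
    /* on 32-bit int, tick_interval * 1000 could wrap; casting one
       operand to 64 bits makes the whole multiplication 64-bit */
    ulonglong next_wakeup= now + (ulonglong) tick_interval * 1000;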
- ZLIB_LIBRARIES, not ZLIB_LIBRARY
- ZLIB_INCLUDE_DIRS, not ZLIB_INCLUDE_DIR
For building libmariadb, ZLIB_LIBRARY/ZLIB_INCLUDE_DIR are still
defined. This workaround will be removed later.
The two I_S plugins SPIDER_ALLOC_MEM and SPIDER_WRAPPER_PROTOCOL
only make sense if the main SPIDER plugin is installed. Further,
SPIDER_ALLOC_MEM requires a mutex that requires SPIDER init to fill
the table.
We also update the spider init query to override
--transaction_read_only=on so that it does not affect the spider init.
Also fixed error handling in spider_db_init() so that a failure in
spider table init does not result in a memory leak.
Clear any pending deadlock kill after completing XA PREPARE, and before
updating the mysql.gtid_slave_pos table in a separate transaction.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
One case is conflicting transactions T1 and T2 with different domain id, in
optimistic parallel replication in non-GTID mode. Then T2 will
wait_for_prior_commit on T1; and if T1 got a row lock wait on T2 it would
hang, as different domains caused the deadlock kill to be skipped in
thd_rpl_deadlock_check().
More generally, if we have transactions T1 and T2 in one domain/master
connection, and independent transactions U in another, then we can
still deadlock like this:
T1 row lock wait on U
U row lock wait on T2
T2 wait_for_prior_commit on T1
This commit enforces the deadlock kill in these cases. If the waited-for
transaction is speculatively applied, then it will be deadlock killed in
case of a conflict, even if the two transactions are in different domains
or master connections.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Improve detection for DES support in OpenSSL, to allow compilation
against system OpenSSL without DES.
Note that MariaDB needs to be compiled against an OpenSSL-like library
that itself has DES support, which cmake detects. Positive detection
is indicated by the CMake variable HAVE_des being set to 1.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@surgut.co.uk>
The problem was that the signal thread was not killed when using
unireg_abort().
The bug was introduced by:
MDEV-30260: Slave crashed:reload_acl_and_cache during shutdown
Other things fixed:
- Don't produce memory-leak reports with safemalloc if all threads were
  not ended properly (such reports are not useful)
Analysis:
When we scan the json to get to a beginning according to the path, we
end up scanning the json even if we have exhausted it, which eventually
returns an error.
Fix:
Continue scanning the json only if we have not exhausted it, and return
the result accordingly.
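A sketch with MariaDB's json_lib (json_scan_next() advances the
engine; je.s.error is set on malformed input):

    json_engine_t je;
    json_scan_start(&je, cs, (const uchar *) js,
                    (const uchar *) js + js_len);
    while (json_scan_next(&je) == 0)
    {
      /* follow the path one step at a time */
    }
    if (je.s.error)
      return 1;   /* exhausted/malformed: report instead of rescanning */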
Analysis:
When scanning the json and getting the exact path at each step, if a
path is reached, we end up adding the item to the result and
immediately getting the next item, which changes the current path.
Fix:
Instead of immediately returning the item, count the occurrences of the
path in the argument and append them to the result as needed.
(returns NULL) and for Date/DateTime returns "INTEGER"
Analysis:
When the first character of the json is scanned and it is a number,
"INTEGER" is returned based on that alone.
Fix:
Scan the rest of the json before returning the final result, to ensure
the json is valid in the first place and hence has a valid type.