Similar to #2480.
Commit 567b681 introduced safe_strcpy() to minimize the use of strcpy(),
a C function whose use is discouraged because it can overflow the
destination buffer.
Replace instances of strcpy() with safe_strcpy() where possible, limited
here to files in the `sql/` directory.
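Below is a minimal, self-contained sketch of what such a truncation-safe
helper can look like. The name safe_strcpy_sketch, its signature, and its
return convention are assumptions for illustration only, not the actual
definition introduced by 567b681:

  #include <cstdio>
  #include <cstring>

  // Hypothetical stand-in for the server's safe_strcpy(): copies at most
  // dst_size - 1 bytes, always NUL-terminates, and reports truncation
  // instead of silently overflowing the way strcpy() can.
  static bool safe_strcpy_sketch(char *dst, size_t dst_size, const char *src)
  {
    if (dst_size == 0)
      return true;                          // no room to write anything
    size_t len= strlen(src);
    if (len >= dst_size)                    // src does not fit
    {
      memcpy(dst, src, dst_size - 1);
      dst[dst_size - 1]= '\0';
      return true;                          // truncated
    }
    memcpy(dst, src, len + 1);              // fits, including the NUL
    return false;
  }

  int main()
  {
    char buf[8];
    // strcpy(buf, "definitely-too-long");  // would overflow buf
    bool truncated= safe_strcpy_sketch(buf, sizeof(buf),
                                       "definitely-too-long");
    printf("%s (truncated=%d)\n", buf, truncated);  // definit (truncated=1)
    return 0;
  }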
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Spider calls info() with HA_STATUS_AUTO to update auto increment info,
which may attempt to connect to the data node. If the connection fails,
it may emit an error and return the same error. This error should not
be of lower priority than any possible error from the later call to
handler::update_auto_increment().
Without this change, certain errors from update_auto_increment() such
as HA_ERR_AUTOINC_ERANGE may get ignored, causing my_insert() to call
my_ok(), which fails an assertion because an error was already emitted
in the info() call (Diagnostics_area::is_set() returns true).
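The intended priority can be pictured with a minimal, self-contained
sketch; the function name and the plain int error codes below are
hypothetical stand-ins for the handler API:

  #include <cstdio>

  // Hypothetical error codes standing in for the server's handler errors.
  enum { OK= 0, ERR_CONNECT_FAILED= 1, ERR_AUTOINC_ERANGE= 167 };

  // Sketch of the fixed control flow: an error emitted while updating
  // auto-increment info (which may require connecting to the data node)
  // must win over any later error from update_auto_increment(), so an
  // already-set Diagnostics_area is never followed by my_ok().
  static int insert_sketch(int info_result, int update_result)
  {
    if (info_result != OK)
      return info_result;   // error already emitted; do not mask it
    return update_result;   // otherwise report the later error, if any
  }

  int main()
  {
    printf("%d\n", insert_sketch(ERR_CONNECT_FAILED, ERR_AUTOINC_ERANGE)); // 1
    printf("%d\n", insert_sketch(OK, ERR_AUTOINC_ERANGE));                 // 167
    return 0;
  }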
A memory leak happens on the second execution of a query that runs in
PS mode and uses the function ROWNUM().
The leak takes place on allocation of an instance of the class Item_int
for storing a limit value, performed in the function set_limit_for_unit()
indirectly called from JOIN::optimize_inner(). A typical trace to the
place where the memory leak occurs is below:
  JOIN::optimize_inner
    optimize_rownum
      process_direct_rownum_comparison
        set_limit_for_unit
          new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);
To fix this memory leak, the function optimize_rownum() has to be
called only once, on the first execution, and never after that. To
control this, the new data member first_rownum_optimization is added
to the structure st_select_lex.
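A minimal sketch of the guard, with a hypothetical structure standing in
for st_select_lex:

  #include <cstdio>

  // Hypothetical stand-in for st_select_lex, showing only the new flag.
  struct select_lex_sketch
  {
    bool first_rownum_optimization= true;
  };

  // The rewrite that allocates an Item_int on thd->mem_root runs only on
  // the first execution; re-executions of the prepared statement skip it,
  // so nothing is allocated again and leaked.
  static void optimize_inner_sketch(select_lex_sketch *sel)
  {
    if (sel->first_rownum_optimization)
    {
      printf("optimize_rownum(): allocate the limit Item_int once\n");
      sel->first_rownum_optimization= false;
    }
  }

  int main()
  {
    select_lex_sketch sel;
    optimize_inner_sketch(&sel);   // first PS execution: rewrite happens
    optimize_inner_sketch(&sel);   // second execution: skipped, no leak
    return 0;
  }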
The accounting of the limit variable, which represents the amount of
space left in the buffer, was incorrect. Additionally, the last 1 or 2
bytes were written without the buffer length being checked.
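The corrected pattern can be sketched with a hypothetical helper; the
function below is illustrative only, not the code being fixed:

  #include <cstring>

  // Sketch: `limit` tracks the space still available in buf and is
  // decremented by every write; the trailing 1-2 bytes (closing quote
  // and NUL) are also checked against it before being written.
  static size_t append_quoted_sketch(char *buf, size_t limit, const char *src)
  {
    char *pos= buf;
    if (limit < 1)
      return 0;
    *pos++= '\'';                   // opening quote: checked above
    limit--;
    size_t len= strlen(src);
    if (len > limit)
      len= limit;                   // never write past the buffer
    memcpy(pos, src, len);
    pos+= len;
    limit-= len;                    // keep the accounting in sync
    if (limit < 2)
      return pos - buf;             // no room for closing quote + NUL
    *pos++= '\'';
    *pos= '\0';
    return pos - buf;
  }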
Review: Sanja Byelkin
MariaDB-backup needs to check for SLAVE MONITOR, as that is what
SHOW GRANTS returns.
Update test to ensure that warnings about missing privileges
do not occur when the backup is successful.
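A minimal sketch of such a check; the function name is hypothetical and
the real mariadb-backup code may parse grants differently:

  #include <cstdio>
  #include <cstring>

  // Look for the SLAVE MONITOR token in a grant line, since that is the
  // spelling SHOW GRANTS reports.
  static bool grant_has_slave_monitor(const char *grant_line)
  {
    return strstr(grant_line, "SLAVE MONITOR") != nullptr;
  }

  int main()
  {
    const char *grants=
      "GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR, SLAVE MONITOR "
      "ON *.* TO `backup`@`localhost`";
    printf("%d\n", grant_has_slave_monitor(grants));   // 1
    return 0;
  }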
Reviewer: Andrew Hutchings
Thanks Eugene for reporting the issue.
Workaround patch: Do not remove the GROUP BY clause when it has
subqueries in it.
remove_redundant_subquery_clauses() removes a redundant GROUP BY clause
from queries of the form:
expr IN (SELECT no_aggregates GROUP BY ...)
expr {CMP} {ALL|ANY|SOME} (SELECT no_aggregates GROUP BY ...)
This hits problems when the GROUP BY clause itself contains subqueries.
This patch is just a workaround: it disables removal of GROUP BY clause
if the clause has one or more subqueries in it.
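A miniature of the workaround logic, with hypothetical types (the real
code inspects the actual GROUP BY list items for subqueries):

  #include <cstdio>
  #include <vector>

  struct GroupItem { bool contains_subquery; };

  // Keep the GROUP BY if any grouping item contains a subquery;
  // otherwise it is safe to remove it as before.
  static bool can_remove_group_by(const std::vector<GroupItem> &group_list)
  {
    for (const GroupItem &item : group_list)
      if (item.contains_subquery)
        return false;     // workaround: leave the GROUP BY in place
    return true;
  }

  int main()
  {
    printf("%d\n", can_remove_group_by({{false}, {false}}));  // 1: removable
    printf("%d\n", can_remove_group_by({{false}, {true}}));   // 0: kept
    return 0;
  }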
Tests:
- subselect_elimination.test has all known crashing cases.
- subselect4.result, insert_select.result are updated.
Note that in some cases results of SELECT are changed too (not just
EXPLAINs). These are caused by non-deterministic SQL: when running a
query like:
x > ANY( SELECT col1 FROM t1 GROUP BY constant_expression)
without removing the GROUP BY, the executor is free to pick the value
of t1.col1 from any row in the GROUP BY group (denote it $COL1_VAL).
Then, it computes x > ANY(SELECT $COL1_VAL).
When running the same query and removing the GROUP BY:
x > ANY( SELECT col1 FROM t1)
the executor will actually check all rows of t1.
The problem was twofold:
- REPAIR TABLE t1 USE_FRM did not work for transactional Aria tables
  (the table was thought to be repaired when it was not), which caused
  issues in later usage of the table.
- When swapping the tmp_data file to the data file, the sort_info files
  were not updated. This caused problems if there were several unique
  keys and there was a duplicate for the second key.
Due to this bug a wrong result might be returned by queries with an IN
subquery predicate in the WHERE clause and a derived table in the FROM
clause to which split optimization could be applied.
The function JOIN::fix_all_splittings_in_plan() used the value of the
bitmap JOIN::sjm_lookup_tables as it had been left after the search for
the best plan for the select containing the splittable derived table.
That value is not guaranteed to be correct, so the bitmap has to be
recalculated to exclude the plans with key accesses to SJM lookup
tables.
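The idea of the fix can be sketched with hypothetical types: rebuild the
bitmap from the chosen plan instead of trusting the leftover value:

  #include <cstdint>
  #include <cstdio>

  using table_map= uint64_t;   // one bit per join table

  // Recompute which tables are SJM lookup tables from the plan actually
  // chosen, so plans with key accesses to them can be reliably excluded.
  static table_map recalc_sjm_lookup_tables(const bool *is_sjm_lookup,
                                            int n_tables)
  {
    table_map map= 0;
    for (int i= 0; i < n_tables; i++)
      if (is_sjm_lookup[i])
        map|= table_map(1) << i;
    return map;
  }

  int main()
  {
    bool plan[]= {false, true, false};   // table 1 is an SJM lookup table
    printf("0x%llx\n",
           (unsigned long long) recalc_sjm_lookup_tables(plan, 3)); // 0x2
    return 0;
  }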
Approved by Igor Babaev <igor@mariadb.com>
In the absence of insight into the cause of the spider.spider_fixes_part
failure described in MDEV-30929, this is a workaround which could help
narrow the possibility down to whether the slave SQL thread attempts to
read from a file that may not yet be on disk. It does not otherwise
affect the coverage of the test.
I have pushed this commit 4 times, but have yet to encounter the
failure described in MDEV-30929, so it could also fix the test and stop
the CI pollution.
Also replaced START SLAVE; with --source include/start_slave.inc
inside the slave_test_init.inc files.
Safety first - tell the mariadb client not to execute dangerous CLI
commands; they cannot be present in the dump anyway.
Wrapping the command in /*!999999 ..... */ guarantees that if a
non-mariadb-cli client loads the dump and sends it to the server, the
server will ignore the command it doesn't understand.
mysql --sandbox
- disables system (\!), tee (\T), pager with an argument (\P foo),
  and source (\.)
- does *not* disable edit (\e). Use EDITOR=/bin/false to disable it,
  or, for example, EDITOR=rnano for something more useful
- does *not* disable pager (\P) without an argument. Use PAGER=cat,
  or, for example, PAGER=less LESSSECURE=1 for something more useful
- using a disabled command is an error, which can be ignored with
  --force
Also, a "sandbox" command (\-) enables the sandbox mode until EOF
(current file or the session, if interactive).
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does not know
that the long unique is logically unique, because on the engine level
it is not. So the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
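A miniature of the reworked API shape, with hypothetical types standing
in for handler and key_map:

  #include <bitset>
  #include <cstdio>

  using key_map_sketch= std::bitset<64>;   // one bit per index

  // New-style calls: the argument fully describes which indexes should
  // remain enabled, so the server (which knows a long unique is
  // logically unique) decides, and the engine merely applies the map.
  struct handler_sketch
  {
    key_map_sketch enabled_keys;

    void disable_indexes(const key_map_sketch &to_enable)
    { enabled_keys= to_enable; }

    void enable_indexes(const key_map_sketch &to_enable)
    { enabled_keys= to_enable; }
  };

  int main()
  {
    handler_sketch h;
    key_map_sketch keep;       // bulk load: server keeps the long unique
    keep.set(0);               // bit 0 = the logically-unique key
    h.disable_indexes(keep);   // other (non-unique) indexes get disabled
    printf("%zu key(s) still enabled\n", h.enabled_keys.count());  // 1
    return 0;
  }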