When creating a recursive CTE, the column types are taken from the
non-recursive part of the CTE (this is according to the SQL standard).
This patch adds code to abort the CTE if the calculated values in the
recursive part do not fit in the fields of the created temporary table.
The new code only affects recursive CTEs, so it should not cause any notable
problems for old applications.
Other things:
- Fixed that we get correct row numbers for warnings generated with
WITH RECURSIVE
Reviewer: Alexander Barkov <bar@mariadb.com>
Part #2: make sure we allocate space for two JOIN_TABs that
use temporary tables.
The dbug_join_tab_array_size is still set to catch cases where
we try to access more JOIN_TAB objects than we thought we would have.
The problem was caused by use of COLLATION(AVG('x')). This is an
item whose value is a constant.
Name Resolution code called convert_const_to_int() which removed AVG('x').
However, the item representing COLLATION(...) still had with_sum_func=1.
This inconsistent state confused the code that handles grouping and
DISTINCT: JOIN::get_best_combination() decided to use one temporary
table and allocated one JOIN_TAB for it, but then
JOIN::make_aggr_tables_info() attempted to use two and made writes
beyond the end of the JOIN::join_tab array.
The fix:
- Do not replace constant expressions which contain aggregate functions.
- Add JOIN::dbug_join_tab_array_size to catch attempts to use more
JOIN_TAB objects than we've allocated.
- query->intersection fails to get freed if the query exceeds
innodb_ft_result_cache_limit
- errors from init_ftfuncs were not propagated by the DELETE command
This is taken from percona/percona-server@ef2c0bcb9a
This bug manifested itself for INSERT...SELECT and DELETE statements whose
WHERE condition used an IN/ANY/ALL predicand or an EXISTS predicate with
a grouping subquery such that:
- its GROUP BY clause could be eliminated, and
- the GROUP BY clause contained a subquery over a mergeable derived table
referencing the updated table.
The bug ultimately caused a server crash when the prepare phase of the
statement processing was executed. This happened after the removal of the
redundant subqueries used in the eliminated GROUP BY clause from the
statement tree. The function that excluded these subqueries from the tree
did not do it properly. As a result, the specification of any derived table
contained in a removed subquery was not marked as excluded.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
mysqlimport starts many worker threads. when one of the workers
encounters an error, it frees global memory and calls exit().
it suppresses the memory leak detector because, as the comment says,
"dirty exit, some threads are still running"; indeed, it cannot
free the memory belonging to other threads.
but precisely because some threads are still running, they
might use this global memory, so it cannot be freed.
fix: if we know that some threads are still running and accept
that we cannot free all memory anyway, let's not free global
allocations either
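a sketch of the resulting exit path (free_globals() is a hypothetical
stand-in for mysqlimport's real cleanup, not its actual code):

  #include <cstdlib>

  static void free_globals();   /* hypothetical cleanup helper */

  static void fatal_exit(int exit_code, bool other_threads_running)
  {
    if (!other_threads_running)
      free_globals();
    /* otherwise the globals are deliberately left alone: running worker
       threads may still use them, and exit() lets the OS reclaim the
       whole process anyway */
    std::exit(exit_code);
  }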
This is particularly important for Azure where there is no
MyISAM support in their MariaDB cloud product.
As mysqldump does, a view can satisfy the requirement
in place of a table, without constraints. Views in frm files are
stored in text form and don't have column limits.
Thanks to Thomas Casteleyn for the suggestion.
A global non-default max-statement-time that is shorter than the time
mysqldump's queries take when doing a backup makes the dump fail.
To solve this, add a max-statement-time option, defaulting to 0 (unlimited time).
Also, like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't ever expected to get that
close to the limit, but let's adopt the standard of mariabackup as no-one has
challenged it as having a detrimental effect.
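Conceptually, the dump session now starts out like this sketch (not the
literal mysqldump code; the statements are derived from the options above):

  #include <mysql.h>

  static void init_dump_session(MYSQL *mysql)
  {
    /* --max-statement-time=0: no statement time limit for the dump */
    mysql_query(mysql, "SET SESSION max_statement_time=0");
    /* same default as mariabackup uses for its session */
    mysql_query(mysql, "SET SESSION wait_timeout=DEFAULT");
  }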
Reviewer and test case author Daniel Black
This commit restores defaults and functionality regarding binlogs
to the way it was prior to MDEV-27524. The mariabackup utility no
longer saves binlog files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes and, in combination with GTIDs support enabled,
mariabackup transfers one (most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in the rsync mode, it works the
same way as before MDEV-27524: by default it transfers the last
binlog file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
Let's simplify the test.
The update_time is stored in the table metadata (dict_table_t);
it has nothing to do with buffer pool page eviction or replacement.
look for an installed plugin with the same name _and the same type_
(in case there are many plugins with the same name and different type,
which is, technically, possible for built-in plugins).
it's not "non deterministic", it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session func it needs
to be re-fixed at the beginning of every statement.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write), in case
of a read/write error the client doesn't always return error 2013 (server
has gone away), so in addition we need to check for errors 2026
(TLS/SSL error) and 5014 (write error).
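A hedged sketch of what the adjusted checks amount to; the helper name is
invented, the error numbers are the ones listed above:

  #include <mysql.h>

  /* After a TLS read/write failure the client may report any of these
     codes, so the tests have to accept all three. */
  static bool is_expected_tls_failure(MYSQL *mysql)
  {
    unsigned int err= mysql_errno(mysql);
    return err == 2013      /* CR_SERVER_LOST: server has gone away */
        || err == 2026      /* CR_SSL_CONNECTION_ERROR: TLS/SSL error */
        || err == 5014;     /* Connector/C write error */
  }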
Starting with commit da094188f6 (MDEV-24393),
MariaDB will no longer acquire advisory file locks on InnoDB data
files by default, because it would create a large number of
entries in Linux /proc/locks.
The motivation for acquiring the file locks is to prevent accidental
concurrent startup of multiple server processes on the same data files.
Such mistakes still turn out to be relatively common, based on
corruption bug reports from the community.
To prevent corruption due to concurrent startup attempts, the
Aria storage engine would unconditionally acquire an advisory lock
on one of its log files.
Solution: InnoDB will always lock its system tablespace files.
(Ever since commit 685d958e38
the InnoDB log file will not necessarily be open while the
server is running, because it can be accessed via memory-mapped I/O.)
If more protection is desired, then the option --external-locking
can be used.
The mandatory advisory lock also fixes intermittent failures of
some crash recovery tests. It turns out that when the mtr test harness
kills and restarts the server, it will not actually ensure that the
old process has terminated before starting the new one.
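For illustration only, a minimal standalone example of the kind of advisory
lock described above (this is not InnoDB's actual locking code):

  #include <fcntl.h>

  /* Take an exclusive advisory lock on an already-open data file; a second
     server process attempting the same lock on the same file fails here
     instead of starting up on the same data and corrupting it. */
  bool lock_data_file(int fd)
  {
    struct flock lk= {};
    lk.l_type= F_WRLCK;      /* exclusive lock */
    lk.l_whence= SEEK_SET;
    lk.l_start= 0;
    lk.l_len= 0;             /* 0 means: lock the whole file */
    return fcntl(fd, F_SETLK, &lk) == 0;
  }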
This bug could cause a crash of the server when executing queries containing
ANY/ALL predicands with redundant subqueries in GROUP BY clauses.
These subqueries are eliminated by remove_redundant_subquery_clause()
together with elimination of GROUP BY list containing these subqueries.
However the references to the elements of the GROUP BY remained in the
JOIN::all_fields list of the right operand of the ALL/ANY predicand.
Later these references confused make_aggr_tables_info() when forming
proper execution structures after ALL/ANY predicands had been replaced
with expressions containing MIN/MAX set functions.
The patch just removes these references from JOIN::all_fields list used
by the subquery of the ALL/ANY predicand when its GROUP BY clause is
eliminated.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Import tablespace re-evicts and reloads the table definition. During that
time, InnoDB has to load the table even though the secondary fts index
is marked as corrupted
- InnoDB should ignore a single word followed by an apostrophe while
tokenising the document. For example, if the input string is O'brien,
InnoDB currently separates it into two tokens, O and brien. After
this patch, InnoDB ignores the token 'O' and considers
only 'brien'.
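A simplified, standalone sketch of the rule described above (this is not
the actual InnoDB fulltext tokenizer):

  #include <string>

  /* For a single word token: drop the part before an apostrophe and keep
     what follows, so "O'brien" yields "brien"; a word without an
     apostrophe is returned unchanged. */
  std::string strip_apostrophe_prefix(const std::string &word)
  {
    std::string::size_type pos= word.find('\'');
    return pos == std::string::npos ? word : word.substr(pos + 1);
  }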
Unlike GCC, clang could optimize away alloca() and thus the
ALLOCATE_MEM_ON_STACK() instrumentation. To make it harder, let us
invoke a non-inline function on the entire allocated buffer.
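A hedged, standalone sketch of the idea (not the server's actual
ALLOCATE_MEM_ON_STACK code): a function the compiler cannot inline writes
to the whole buffer, so the allocation cannot be elided.

  #include <alloca.h>    /* Linux/glibc */
  #include <string.h>

  /* Deliberately not inlinable: the compiler has to assume the buffer
     really is used, so the alloca() in the caller cannot be dropped. */
  __attribute__((noinline)) static void touch_stack_buffer(void *buf, size_t len)
  {
    memset(buf, 0xa5, len);
    __asm__ __volatile__("" : : "r"(buf) : "memory");  /* keep the store */
  }

  static void simulate_stack_allocation(size_t len)
  {
    void *p= alloca(len);    /* clang may otherwise optimize this away */
    touch_stack_buffer(p, len);
  }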
This commit replaces sprintf(buf, ...) with
snprintf(buf, sizeof(buf), ...),
specifically in the "easy" cases where buf is allocated with a size
known at compile time.
The changes make sure we do not write outside array/string bounds, which
would lead to undefined behaviour. In case the code tries to write
outside the bounds, the safe versions of the functions simply truncate the
string messages, so this is handled gracefully.
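For illustration, a minimal before/after of the pattern (buffer name and
message are invented for the example):

  #include <stdio.h>

  void report_error(int code)
  {
    char buf[64];
    /* before: sprintf(buf, "error %d", code);  -- may write past the end */
    /* after: the write is bounded by the compile-time buffer size, and an
       over-long message is truncated instead of overflowing */
    snprintf(buf, sizeof(buf), "error %d", code);
    fputs(buf, stderr);
  }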
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services,
Inc.
bsonudf.cpp warnings cleanup by Daniel Black
Reviewer: Daniel Black
Problem:
=======
This patch addresses two issues:
1. An incident event can be incorrectly reported for transactions
which are rolled back successfully. That is, an incident event
should only be generated for failed “non-transactional transactions”
(i.e., those which modify non-transactional tables) because they
cannot be rolled back.
2. When the mariadb slave stops with an error upon receiving the incident
event, there is no description of what led to it, neither in the event
nor in the master's error log.
Solution:
========
Before reporting an incident event for a transaction, first validate
that it is “non-transactional” (i.e. cannot be safely rolled back).
To determine if a transaction is non-transactional,
lex->stmt_accessed_table(LEX::STMT_WRITES_NON_TRANS_TABLE)
is used because it is set previously in
THD::decide_logging_format().
Additionally, when an incident event is written, write an error
message to the server’s error log to indicate the underlying issue.
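A hedged sketch of the check (not the literal patch): the accessor and flag
are the ones named above; the binlog call and error text are illustrative.

  if (thd->lex->stmt_accessed_table(LEX::STMT_WRITES_NON_TRANS_TABLE))
  {
    /* only non-transactional writes justify an incident event */
    sql_print_error("Failed transaction modified non-transactional tables; "
                    "writing an incident event into the binary log");
    mysql_bin_log.write_incident(thd);
  }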
Reviewed by:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
dict_load_foreigns(): Use a correctly sized buffer for the maximum-length
SYS_FOREIGN.ID. In case of overflow, do not crash the server but instead
return DB_CORRUPTION.
This commit is a fixup for MDEV-28762
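A hedged sketch of the pattern (not the actual dict_load_foreigns() code):
copy the ID into a correctly sized buffer and report failure, which the
caller maps to DB_CORRUPTION, instead of overrunning the buffer.

  #include <string.h>

  static bool copy_foreign_id(char *dst, size_t dst_size,
                              const char *src, size_t src_len)
  {
    if (src_len >= dst_size)
      return false;          /* oversized SYS_FOREIGN.ID: corruption */
    memcpy(dst, src, src_len);
    dst[src_len]= '\0';
    return true;
  }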
Analysis: Some recursive json functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it is not
used.
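A hedged sketch of the guard, assuming the server's
check_stack_overrun(THD*, long, uchar*) helper; the surrounding function
name and the margin constant are illustrative only.

  /* illustrative recursive JSON helper -- only the guard matters here */
  static bool json_walk_value(THD *thd, json_engine_t *je)
  {
    if (check_stack_overrun(thd, STACK_MIN_SIZE, NULL))
      return true;           /* the error has already been reported */
    /* ... recurse into nested objects/arrays via je ... */
    return false;
  }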