When creating a recursive CTE, the column types are taken from the
non-recursive part of the CTE (this is according to the SQL standard).
This patch adds code to abort the CTE if the values calculated in the
recursive part do not fit in the columns of the created temporary table.
The new code only affects recursive CTEs, so it should not cause any
notable problems for existing applications.
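A minimal sketch of the enforced behavior (hypothetical query): the type
of column `a` comes from the non-recursive SELECT, so a longer value
produced by the recursive part no longer fits and aborts the CTE:

    WITH RECURSIVE t(a) AS (
      SELECT CAST('x' AS CHAR(1))      -- type of `a` is taken from here: CHAR(1)
      UNION ALL
      SELECT CONCAT(a, 'x') FROM t WHERE LENGTH(a) < 3
    )
    SELECT * FROM t;                   -- recursive part produces 'xx': aborted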
Other things:
- Fixed the row numbers reported for warnings generated with
  WITH RECURSIVE
Reviewer: Alexander Barkov <bar@mariadb.com>
1. Store assignment failures on incompatible data types now raise errors if:
- STRICT_ALL_TABLES or STRICT_TRANS_TABLES sql_mode is used, and
- IGNORE is not used
Otherwise, only a warning is raised and the statement continues.
2. Changed the error/warning text as follows:
-ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
+ERROR HY000: Cannot cast 'int' as 'inet6' in assignment of `db`.`t`.`col`
so that in the case of a big table it is easier to see which column has the problem.
The new error text is also applied to SP variables.
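A hedged illustration of both changes, using a hypothetical INET6 column
(names are illustrative, not the original test case):

    CREATE TABLE t (col INET6);
    SET sql_mode='STRICT_ALL_TABLES';
    INSERT INTO t VALUES (1);          -- error: Cannot cast 'int' as 'inet6'
                                       -- in assignment of `test`.`t`.`col`
    INSERT IGNORE INTO t VALUES (1);   -- IGNORE: warning only, statement continues
    SET sql_mode='';
    INSERT INTO t VALUES (1);          -- non-strict mode: warning only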
Make SEL_ARG::make_root() maintain SEL_ARG::weight.
Also, an unrelated change: fix dbug_print_sel_arg() to correctly
print SQL NULL for the right endpoint.
New Feature:
============
This patch adds a new system variable, @@slave_max_statement_time,
which limits the execution time of a slave's events, implementing
an equivalent of @@max_statement_time for the slave applier.
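A hedged usage sketch (assuming the variable takes seconds, mirroring
@@max_statement_time):

    -- Abort any replicated event on the slave applier that runs longer
    -- than 30 seconds; 0 (the default) means unlimited.
    SET GLOBAL slave_max_statement_time = 30;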
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
The problem was caused by use of COLLATION(AVG('x')). This is an
item whose value is a constant.
Name Resolution code called convert_const_to_int() which removed AVG('x').
However, the item representing COLLATION(...) still had with_sum_func=1.
This inconsistent state confused the code that handles grouping and
DISTINCT: JOIN::get_best_combination() decided to use one temporary
table and allocated one JOIN_TAB for it, but then
JOIN::make_aggr_tables_info() attempted to use two and made writes
beyond the end of the JOIN::join_tab array.
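A hypothetical shape of a failing query (not the exact test case):
comparing an INT column to the constant item makes name resolution call
convert_const_to_int(), while DISTINCT plus the aggregate engages the
grouping code:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1),(2);
    SELECT DISTINCT a = COLLATION(AVG('x')) FROM t1;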
The fix:
- Do not replace constant expressions which contain aggregate functions.
- Add JOIN::dbug_join_tab_array_size to catch attempts to use more
JOIN_TAB objects than we've allocated.
This bug manifested itself for INSERT...SELECT and DELETE statements whose
WHERE condition used an IN/ANY/ALL predicand or an EXISTS predicate with
a grouping subquery such that:
- its GROUP BY clause could be eliminated,
- the GROUP BY clause contained a subquery over a mergeable derived table
referencing the updated table.
The bug ultimately caused a server crash when the prepare phase of the
statement processing was executed. This happened after the removal of
redundant subqueries used in the eliminated GROUP BY clause from the
statement tree. The function that excluded the subqueries from the
statement tree did not do it properly.
As a result the specification of any derived table contained in a removed
subquery was not marked as excluded.
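A hedged sketch of the statement shape described above (hypothetical
tables, not the exact test case):

    CREATE TABLE t1 (a INT, b INT);
    CREATE TABLE t2 (c INT);
    DELETE FROM t1
    WHERE a IN (SELECT c FROM t2
                GROUP BY (SELECT dt.b FROM (SELECT * FROM t1) dt LIMIT 1));
    -- the IN subquery's GROUP BY is redundant and can be eliminated; it
    -- contains a subquery over a mergeable derived table referencing the
    -- updated table t1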
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This is particularly important for Azure where there is no
MyISAM support in their MariaDB cloud product.
As mysqldump does, a view can satisfy the requirement
like a table, without constraints. Views in frm files are stored in
text form and don't have column limits.
Thanks to Thomas Casteleyn for the suggestion.
Don't assume anymore that OPTIONS_WRITTEN_TO_BIN_LOG is fixed once
and forever. Instead, deduce the master's OPTIONS_WRITTEN_TO_BIN_LOG
from the master's version in the binlog.
With a global, non-default max-statement-time set, mysqldump queries
that exceed that time interval fail when doing a backup.
To solve this, add a max-statement-time option, defaulting to 0
(unlimited time).
Also, like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't ever expected to get that
close to the limit, but let's adopt the standard of mariabackup, as no-one
has challenged it as having a detrimental effect.
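A sketch of what the dump session effectively sets, assuming the new
option maps onto the session variable of the same name (values
illustrative):

    SET SESSION max_statement_time = 0;   -- --max-statement-time=0: unlimited
    SET SESSION wait_timeout = DEFAULT;   -- 28800, matching mariabackup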
Reviewer and test case author: Daniel Black
MDEV-20704 changed the rules of how the (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags are added. Older FRMs from before that fix had
these flags for DOUBLE indexes. After that fix, when ALTER sees such an
old FRM, it thinks it cannot do an instant alter because
compare_keys_but_name() fails: it compares the flags against the tmp
table created by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which affected DOUBLE. So there is no direct knowledge
that other types were not affected.
The proposed fix makes CHECK TABLE check whether the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior to
the MDEV-20704 fix, and if so issues "needs upgrade". When mysqlcheck and
mysql_upgrade see this status, they issue ALTER TABLE FORCE and upgrade
the table to the server's version.
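A hedged illustration of the resulting upgrade path (hypothetical table
name):

    CHECK TABLE t1 FOR UPGRADE;   -- reports "needs upgrade" for affected FRMs
    ALTER TABLE t1 FORCE;         -- rebuilds the table to the server's version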
The handler for the existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by calling
its ha_write_row() (the handler is the storage engine of the new
partition). From there we go through the conditions below:
if (this->inited == RND)
  table->clone_handler_for_update();
handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the point of the this->inited check: here it is
the new partition and this handler is not inited, so we assign
table->file, which is ha_partition and is really not known to be inited
or not. The logic assumes (this == table->file); otherwise we are
outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.
Second, we call check_duplicate_long_entries() for table->file, which
calls ha_partition::index_init(), which in turn calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions(), and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries()
per-partition as we've already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION that means the existing row was
already checked by previous INSERT/UPDATE commands, so there is no need
to check it again (see the NOTE in handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, the
per-partition duplicate check is not really usable.
Starting with commit 36cdd5c3cd
there is an ASAN stack-buffer-overflow error, because we append a NULL
terminator beyond the length of the allocated memory.
Reviewed by: Monty and Nayuta Yanagisawa
Bring the 5 warnings of
  select random_bytes(cast('x' as unsigned)+1);
back to two: one from Item_func_random_bytes::fix_length_and_dec() and
one from Item_func_random_bytes::val_str(). The warnings come from
args[0]->val_int().
Setting max_length to a negative value in
Item_func_random_bytes::fix_length_and_dec() underflowed, resulting in a
debug assertion in the optimizer.
Also set the maximum to 1024 rather than MAX_BLOB_WIDTH, because we
aren't going to return more than that.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write), in
case of a read/write error the client doesn't always return error 2013
(server has gone away), so in addition we need to check for error 2026
(TLS/SSL error) and 5014 (write error).