1. Store assignment failures on incompatible data types now raise errors if:
- STRICT_ALL_TABLES or STRICT_TRANS_TABLES sql_mode is used, and
- IGNORE is not used
Otherwise, only a warning is raised and the statement continues.
2. Changing the error/warning text as follows:
-ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
+ERROR HY000: Cannot cast 'int' as 'inet6' in assignment of `db`.`t`.`col`
so that in case of a big table it's easier to see which column has the
problem; see the example below.
The new error text is also applied to SP variables.
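A rough illustration of the resulting behavior (hypothetical database and
table names, not the actual test case):
  CREATE TABLE t (col INET6);
  SET sql_mode='STRICT_ALL_TABLES';
  INSERT INTO t VALUES (1);
  -- ERROR HY000: Cannot cast 'int' as 'inet6' in assignment of `test`.`t`.`col`
  INSERT IGNORE INTO t VALUES (1);
  -- only a warning is raised and the statement continues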
A message used to say "failed to read or decrypt"
but the "or decrypt" part was removed in
commit 0b47c126e3
without adjusting rarely needed error message suppressions in some
encryption tests.
Let us improve the error message so that it mentions the file name,
and adjust all error message suppressions in tests.
Thanks to Oleksandr Byelkin for noticing one test failure.
Make SEL_ARG::make_root() maintain SEL_ARG::weight.
Also, an unrelated change: fix dbug_print_sel_arg() to correctly
print SQL NULL for the right endpoint.
The test mariabackup.mdev-14447 inserts a relatively large amount of
data while concurrently backing up the data. The test often fails
on CI systems, possibly due to an inherent race condition between
the producer (server) and consumer (backup) that would be solved
if the backup was being produced by the server (MDEV-14992).
The written data volume was increased somewhat by
commit 4179f93d28 (MDEV-18976).
Let us trim the log volume by not writing PAGE_CHECKSUM records
or row-level undo log records, and make the test cover what it was
intended to cover by creating the table in the system tablespace.
Part #2: make sure we allocate space for two JOIN_TABs that
use temporary tables.
The dbug_join_tab_array_size is still set to catch cases where
we try to access more JOIN_TAB objects than we thought we would have.
The problem was caused by use of COLLATION(AVG('x')). This is an
item whose value is a constant.
Name Resolution code called convert_const_to_int() which removed AVG('x').
However, the item representing COLLATION(...) still had with_sum_func=1.
This inconsistent state confused the code that handles grouping and
DISTINCT: JOIN::get_best_combination() decided to use one temporary
table and allocated one JOIN_TAB for it, but then
JOIN::make_aggr_tables_info() attempted to use two and made writes
beyond the end of the JOIN::join_tab array.
The fix:
- Do not replace constant expressions which contain aggregate functions.
- Add JOIN::dbug_join_tab_array_size to catch attempts to use more
JOIN_TAB objects than we've allocated.
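A minimal sketch of the kind of query affected (hypothetical table, not the
actual test case):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  -- COLLATION(AVG('x')) is a constant item, yet it kept with_sum_func=1
  SELECT DISTINCT COLLATION(AVG('x')) FROM t1;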
- query->intersection fails to get freed if the query exceeds
innodb_ft_result_cache_limit
- errors from init_ftfuncs were not propagated by the DELETE command
This is taken from percona/percona-server@ef2c0bcb9a
with its 0049-Selectively-disallow-SHA1-signatures.patch
in the openssl source rpm.
let's allow them for now; this fixes the tls_version and tls_version1 tests.
This bug manifested itself for INSERT...SELECT and DELETE statements whose
WHERE condition used an IN/ANY/ALL predicand or an EXISTS predicate with
a grouping subquery such that:
- its GROUP BY clause could be eliminated,
- the GROUP BY clause contained a subquery over a mergeable derived table
referencing the updated table.
The bug ultimately caused a server crash when the prepare phase of the
statement processing was executed. This happened after the removal of
redundant subqueries used in the eliminated GROUP BY clause from the
statement tree. The function that excluded the subqueries from the
statement tree did not do it properly.
As a result the specification of any derived table contained in a removed
subquery was not marked as excluded.
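A rough sketch of the statement shape described above (hypothetical tables,
not the actual test case):
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  CREATE TABLE t2 (a INT PRIMARY KEY, b INT);
  INSERT INTO t1
  SELECT a, b FROM t2
  WHERE b IN (SELECT b FROM t2 AS s
              GROUP BY s.a,  -- eliminable: grouping by a unique NOT NULL key
                       (SELECT a FROM (SELECT * FROM t1) AS dt LIMIT 1));
  -- dt is a mergeable derived table referencing the updated table t1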
Approved by Oleksandr Byelkin <sanja@mariadb.com>
mysqlimport starts many worker threads. when one of the workers
encounters an error, it frees global memory and calls exit().
it suppresses memory leak detector, because, as the comment says
"dirty exit, some threads are still running", indeed, it cannot
free the memory from other threads.
but precisely because some threads are still running, they
might use this global memory, so it cannot be freed.
fix: if we know that some threads are still running and accept
that we cannot free all memory anyway, let's not free global
allocations either
This is particularly important for Azure where there is no
MyISAM support in their MariaDB cloud product.
Like mysqldumper does, a view can satisfy the requirement just as
a table would, without constraints. Views in frm files are stored in
text form and don't have column limits.
Thanks Thomas Casteleyn for the suggestion.
don't assume anymore that OPTIONS_WRITTEN_TO_BIN_LOG is fixed once
and forever. Instead, deduce the master's OPTIONS_WRITTEN_TO_BIN_LOG
from the master's version in the binlog.
With a global non-default max-statement-time set to a time interval that
is exceeded by the queries mysqldump runs when doing a backup, those
queries get aborted and the dump fails.
To solve both, add a max-statement-time option, defaulting to 0 (unlimited time).
Also like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't expected to ever get that
close, but let's adopt the standard of mariabackup as no-one has
challenged it as having a detrimental effect.
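Roughly, assuming the new option simply maps onto the corresponding session
variables, the dump session now amounts to:
  SET SESSION max_statement_time=0;  -- the new default: unlimited time
  SET SESSION wait_timeout=DEFAULT;  -- 28800, the same value mariabackup uses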
Reviewer and test case author Daniel Black
MDEV-20704 changed the rules of how (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags are added. Older FRMs from before that fix had
these flags for a DOUBLE index. After that fix, when ALTER sees such an
old FRM it thinks it cannot do an instant alter because of a failed
compare_keys_but_name(): it compares the flags against the tmp table created
by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which affected DOUBLE. So there is no direct knowledge
that any other types were not affected.
The proposed fix under CHECK TABLE checks if the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior to
MDEV-20704, and if so issues "needs upgrade". When mysqlcheck and
mysql_upgrade see such a status they issue ALTER TABLE FORCE and upgrade
the table to the version of the server.
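Roughly, for a table with such an old FRM (hypothetical table name, e.g. one
with an index over a DOUBLE column created before MDEV-20704):
  -- as issued by mysqlcheck --check-upgrade: now reports "needs upgrade"
  CHECK TABLE t1 FOR UPGRADE;
  -- what mysqlcheck / mysql_upgrade then issue to rebuild the table
  ALTER TABLE t1 FORCE;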
The handler for an existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by calling
its ha_write_row() (the handler is the storage engine of the new
partition). From there we go through the conditions below:
  if (this->inited == RND)
    table->clone_handler_for_update();
  handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the point of the this->inited check. Here we have
a new partition and this handler is not inited, so we assign
table->file, which is ha_partition and is not really known to be inited
or not. It is supposed that (this == table->file); otherwise we are
outside the logic for using update_handler. This patch adds a DBUG_ASSERT
for that.
Second, we call check_duplicate_long_entries() for table->file and
that calls ha_partition::index_init() which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions() and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries()
per-partition as we've already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION that means an existing row was
already checked by previous INSERT/UPDATE commands, so there is no need
to check it again (see NOTE in handler::ha_write_row()).
The fix also optimizes ha_update_row() so
check_duplicate_long_entries_update() is not called per-partition
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
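A hypothetical example of the kind of table and statement exercising this
path (a hash-based long UNIQUE key on a partitioned table; not the actual
test case):
  CREATE TABLE t (a INT, b TEXT,
                  -- a UNIQUE key over a blob becomes a long hash-based key
                  UNIQUE KEY (a, b))
  PARTITION BY RANGE (a)
  (PARTITION p0 VALUES LESS THAN (10),
   PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t VALUES (1, 'x'), (20, 'y');
  -- copy_partitions() index-inits the handlers of the existing partitions;
  -- the long-unique duplicate check is now done once for ha_partition
  -- rather than once per partition
  ALTER TABLE t REORGANIZE PARTITION p1 INTO
  (PARTITION p1 VALUES LESS THAN (30),
   PARTITION p2 VALUES LESS THAN MAXVALUE);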
This commit restores the defaults and functionality regarding binlogs
to the way they were prior to MDEV-27524. The mariabackup utility no
longer saves binlog files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes and, in combination with GTIDs support enabled,
mariabackup transfers one (most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in the rsync mode, it works the
same way as before MDEV-27524 - by default it transfers the most recent
binlog file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
fil_name_process(): Upon processing a new file name for FILE_RENAME, rename
the tablespace in deferred_spaces.
recv_sys_t::parse(): Rearrange some code to slightly improve the locality
of reference. Invoke fil_name_process() with FILE_MODIFY, FILE_DELETE
or the new case FILE_RENAME, for the new file name of the file.
This should be a regression due to
commit 86dc7b4d4c (MDEV-24626)
and it might also affect the crash recovery of DDL operations.
btr_page_reorganize_low(): Restore mtr->set_log_mode() before returning.
innobase_instant_try(): Do not invoke rec_get_offsets() if
btr_cur_pessimistic_update() failed. The cursor position may be
invalid.