Make SEL_ARG::make_root() maintain SEL_ARG::weight.
Also, an unrelated change: fix dbug_print_sel_arg() to correctly
print SQL NULL for the right endpoint.
Part #2: make sure we allocate space for two JOIN_TABs that
use temporary tables.
The dbug_join_tab_array_size is still set so that we catch cases where
we try to access more JOIN_TAB objects than we thought we would have.
The problem was caused by the use of COLLATION(AVG('x')). This is an
item whose value is a constant.
The Name Resolution code called convert_const_to_int(), which removed
the AVG('x'). However, the item representing COLLATION(...) still had
with_sum_func=1.
This inconsistent state confused the code that handles grouping and
DISTINCT: JOIN::get_best_combination() decided to use one temporary
table and allocated one JOIN_TAB for it, but then
JOIN::make_aggr_tables_info() attempted to use two and made writes
beyond the end of the JOIN::join_tab array.
The fix:
- Do not replace constant expressions which contain aggregate functions.
- Add JOIN::dbug_join_tab_array_size to catch attempts to use more
JOIN_TAB objects than we've allocated.
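A minimal sketch of the idea behind both parts of the fix, with simplified
stand-in types (Item, Join, may_convert_const_to_int() and check_join_tab()
here are illustrative, not the real server classes):

  #include <cassert>
  #include <cstddef>

  struct Item {
    bool const_item;     // value is known to be constant
    bool with_sum_func;  // contains an aggregate function somewhere inside
  };

  // Part 1: never fold an item to a plain integer constant when it
  // contains an aggregate function, so the grouping/DISTINCT code later
  // sees a consistent item.
  bool may_convert_const_to_int(const Item *item) {
    return item->const_item && !item->with_sum_func;
  }

  // Part 2: a debug-only bound remembering how many JOIN_TABs were
  // allocated, so any access past the end asserts instead of silently
  // writing beyond the array.
  struct Join {
    std::size_t dbug_join_tab_array_size;
    void check_join_tab(std::size_t idx) const {
      assert(idx < dbug_join_tab_array_size);
    }
  };

  int main() {
    Item collation_of_avg{true, true};   // COLLATION(AVG('x'))
    assert(!may_convert_const_to_int(&collation_of_avg));
    Join j{2};                           // two JOIN_TABs for temp tables
    j.check_join_tab(1);
    return 0;
  }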
- query->intersection fails to get freed if the query exceeds
innodb_ft_result_cache_limit
- errors from init_ftfuncs() were not propagated by the DELETE command
This is taken from percona/percona-server@ef2c0bcb9a
with its 0049-Selectively-disallow-SHA1-signatures.patch
in the openssl source rpm.
Let's allow them for now; this fixes the tls_version and tls_version1 tests.
This bug manifested itself for INSERT...SELECT and DELETE statements whose
WHERE condition used an IN/ANY/ALL predicand or an EXISTS predicate with
a grouping subquery such that:
- its GROUP BY clause could be eliminated, and
- the GROUP BY clause contained a subquery over a mergeable derived table
referencing the updated table.
The bug ultimately caused a server crash when the prepare phase of the
statement processing was executed. This happened after the removal of the
redundant subqueries used in the eliminated GROUP BY clause from the
statement tree. The function that excluded those subqueries from the
statement tree did not do it properly. As a result, the specification of
any derived table contained in a removed subquery was not marked as
excluded.
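A condensed sketch of the intended exclusion walk; Unit, Select and
exclude() are hypothetical stand-ins for the real st_select_lex_unit /
st_select_lex machinery:

  #include <cassert>
  #include <vector>

  struct Unit;
  struct Select {
    std::vector<Unit*> inner_units;   // subqueries, incl. derived tables
  };
  struct Unit {
    bool excluded = false;
    std::vector<Select*> selects;
    // Excluding a unit must recursively mark every unit nested inside it,
    // including the specifications of derived tables; the bug left those
    // nested units unmarked.
    void exclude() {
      excluded = true;
      for (Select *sl : selects)
        for (Unit *u : sl->inner_units)
          u->exclude();
    }
  };

  int main() {
    Unit derived_spec, subquery;
    Select sel;
    sel.inner_units.push_back(&derived_spec);
    subquery.selects.push_back(&sel);
    subquery.exclude();
    assert(derived_spec.excluded);    // must not stay false
    return 0;
  }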
Approved by Oleksandr Byelkin <sanja@mariadb.com>
mysqlimport starts many worker threads. when one of the workers
encounters an error, it frees global memory and calls exit().
it suppresses the memory leak detector because, as the comment says,
"dirty exit, some threads are still running"; indeed, it cannot
free the memory belonging to other threads.
but precisely because some threads are still running, they
might use this global memory, so it cannot be freed.
fix: if we know that some threads are still running and accept
that we cannot free all memory anyway, let's not free the global
allocations either.
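a minimal sketch of the idea; running_workers, die() and
free_global_memory() are hypothetical names, not the actual mysqlimport
symbols:

  #include <atomic>
  #include <cstdlib>

  std::atomic<int> running_workers{0};

  static void free_global_memory() { /* free shared buffers etc. */ }

  // on a dirty exit, while other worker threads may still be running and
  // touching global state, leak the globals on purpose instead of
  // freeing them under the feet of those threads.
  static void die(int exit_code) {
    if (running_workers.load() == 0)
      free_global_memory();
    std::exit(exit_code);
  }

  int main() {
    running_workers = 3;   // pretend other workers are still active
    die(0);                // exits without touching the global memory
  }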
This is particularly important for Azure, where there is no
MyISAM support in their MariaDB cloud product.
As mysqldump already does, a view can satisfy the requirement
just like a table, but without its constraints. Views in frm files are
stored in text form and don't have column limits.
Thanks to Thomas Casteleyn for the suggestion.
don't assume anymore that OPTIONS_WRITTEN_TO_BIN_LOG is fixed once
and for all. Instead, deduce the master's OPTIONS_WRITTEN_TO_BIN_LOG
from the master's version in the binlog.
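a sketch of the idea; the constants below are placeholders, the real bits
and version boundaries come from the server's history, not from this
example:

  #include <cstdint>

  static const uint32_t BASE_OPTIONS     = 0x0Fu;    // placeholder bits
  static const uint32_t NEWER_OPTION_BIT = 0x10u;    // placeholder bit
  static const uint32_t NEWER_SINCE      = 100500u;  // "10.5.0" encoded

  // the mask of option bits a master writes into its binlog events is a
  // function of the master's version as recorded in the binlog, not a
  // compile-time constant of the server reading it.
  uint32_t options_written_to_bin_log(uint32_t master_version) {
    uint32_t mask = BASE_OPTIONS;
    if (master_version >= NEWER_SINCE)
      mask |= NEWER_OPTION_BIT;
    return mask;
  }

  int main() { return options_written_to_bin_log(100500u) == 0x1Fu ? 0 : 1; }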
If a non-default global max-statement-time is set and the queries
mysqldump runs when doing a backup exceed it, the backup fails.
To solve this, add a max-statement-time option to mysqldump, defaulting
to 0 (unlimited time).
Also, like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't ever expected to get that
close to the limit, but let's adopt the standard of mariabackup, as no-one
has challenged it as having a detrimental effect.
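For illustration only, the session setup could be sent through the C API
roughly as below; set_dump_session_limits() is a hypothetical helper and
the exact statement text in the real patch may differ:

  #include <mysql.h>

  /* 0 means unlimited statement time; wait_timeout goes back to its
     server default (28800). */
  static int set_dump_session_limits(MYSQL *mysql) {
    return mysql_query(mysql,
        "SET SESSION max_statement_time=0, wait_timeout=DEFAULT");
  }

This would be called once right after connecting, before any of the dump
queries run.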
Reviewer and test case author: Daniel Black
MDEV-20704 changed the rules of how the (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags are added. Older FRMs created before that fix
had these flags for DOUBLE indexes. After that fix, when ALTER sees such
an old FRM, it thinks it cannot do an instant alter because
compare_keys_but_name() fails: it compares the flags against the tmp
table created by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which affected DOUBLE. So there is no direct knowledge
that any other types were not affected.
The proposed fix makes CHECK TABLE check whether the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior to
MDEV-20704, and if so it issues "needs upgrade". When mysqlcheck and
mysql_upgrade see that status, they issue ALTER TABLE FORCE and upgrade
the table to the current server version.
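A simplified sketch of the check; the flag values below are placeholders
for the real server constants, and key_needs_upgrade() is an illustrative
name:

  #include <cstdint>

  static const uint32_t HA_BINARY_PACK_KEY = 1u << 0;  // placeholder bits
  static const uint32_t HA_VAR_LENGTH_KEY  = 1u << 1;

  // An FRM written before the MDEV-20704 fix that still carries these
  // flags on its keys gets flagged "needs upgrade"; mysqlcheck and
  // mysql_upgrade then rebuild it with ALTER TABLE ... FORCE.
  bool key_needs_upgrade(uint32_t key_flags, bool created_before_fix) {
    return created_before_fix &&
           (key_flags & (HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY)) != 0;
  }

  int main() {
    return key_needs_upgrade(HA_BINARY_PACK_KEY, true) ? 0 : 1;
  }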
The handler for an existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by calling
its ha_write_row() (the handler is the storage engine of the new
partition). From there we go through the conditions below:
  if (this->inited == RND)
    table->clone_handler_for_update();
  handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the meaning of the this->inited check. Here we
have a new partition whose handler is not inited, so we assign
table->file, which is ha_partition and is not really known to be inited
or not. It is supposed that (this == table->file); otherwise we are
outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.
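That assertion is, in essence:

  DBUG_ASSERT(this == table->file);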
Second, we call check_duplicate_long_entries() for table->file, and
that calls ha_partition::index_init(), which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions(), and we fail on the assertion.
The fix implies that we don't need check_duplicate_long_entries()
per partition, as we've already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION that means the existing row was
already checked at previous INSERT/UPDATE commands, so there is no need
to check it again (see the NOTE in handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
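A condensed illustration of the resulting rule, with stand-in types
(PartitionedTable and write_row() are hypothetical; the real logic lives
in the ha_partition write/update paths):

  #include <vector>

  struct PartHandler { /* one underlying partition */ };

  struct PartitionedTable {
    std::vector<PartHandler> parts;
    bool has_long_unique = true;
    // stand-in for check_duplicate_long_entries() at table level
    bool check_duplicate_long_entries() { return false; }
  };

  // Long-unique duplicate checks run once on the top-level ha_partition
  // handler; the per-partition handlers are written to directly, without
  // re-running the check (and without re-initing their indexes).
  int write_row(PartitionedTable &t /*, const row &r */) {
    if (t.has_long_unique && t.check_duplicate_long_entries())
      return 1;   // duplicate found at table level
    // ... route the row to the right partition's handler; intentionally
    // no further duplicate check down there ...
    return 0;
  }

  int main() { PartitionedTable t; return write_row(t); }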
This commit restores defaults and functionality regarding binlogs
to the way it was prior to MDEV-27524. The mariabackup utility no
longer saves binlogs files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes: with GTID support enabled, mariabackup
transfers one (the most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in the rsync mode, it works the
same way as before MDEV-27524: by default it transfers the most recent
binlog file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
Let's simplify the test.
The update_time is stored in the table metadata (dict_table_t);
it has nothing to do with buffer pool page eviction or replacement.
(addressed review input)
The issue was introduced by @@optimizer_max_sel_arg_weight code.
key_or() calls SEL_ARG::update_weight_locally(). That function
takes O(tree->elements) time.
Without that call, key_or(big_tree, one_element_tree) would take
O(log(big_tree)) when one_element_tree doesn't overlap with elements
of big_tree.
This means update_weight_locally() can cause a big slowdown.
The fix:
1. key_or() actually doesn't need to call update_weight_locally().
It calls SEL_ARG::tree_delete() and SEL_ARG::insert(). These functions
update SEL_ARG::weight.
It also manipulates the SEL_ARG objects directly, but these
modifications do not change the weight of the tree.
I've just removed the update_weight_locally() call.
2. and_all_keys() also calls update_weight_locally(). It manipulates the
SEL_ARG graph directly.
Removed that call and added the code to update the SEL_ARG graph weight.
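A rough sketch of the O(1) weight update idea, with SelArg as a
simplified stand-in for SEL_ARG (the weight semantics here are the
stand-in's own: intervals in the tree plus attached next-key-part
weights):

  #include <cstddef>

  struct SelArg {
    std::size_t elements = 1;         // intervals in this tree
    std::size_t weight   = 1;         // elements + next_key_part weights
    SelArg *next_key_part = nullptr;
  };

  // When every interval of key1 gets key2 attached as its next key part,
  // each of key1's elements now carries key2's weight, so the new weight
  // can be computed directly instead of re-walking the whole tree (which
  // is what update_weight_locally() did in O(tree->elements)).
  void attach_next_key_part(SelArg *key1, SelArg *key2) {
    key1->next_key_part = key2;
    key1->weight = key1->elements * (1 + key2->weight);
  }

  int main() {
    SelArg a, b;
    a.elements = 3; a.weight = 3;
    attach_next_key_part(&a, &b);
    return a.weight == 6 ? 0 : 1;   // 3 intervals, each carrying b (w=1)
  }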
Tests main.range and main.range_not_embedded already contain the queries
that have test coverage for the affected code.
look for an installed plugin with the same name _and the same type_
(in case there are many plugins with the same name but different types,
which is, technically, possible for built-in plugins).
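a sketch of the lookup rule, with stand-in types:

  #include <string>
  #include <vector>

  struct Plugin { std::string name; int type; };

  // match on both name and type: built-in plugins may share one name
  // across different plugin types.
  const Plugin *find_installed_plugin(const std::vector<Plugin> &installed,
                                      const std::string &name, int type) {
    for (const Plugin &p : installed)
      if (p.name == name && p.type == type)
        return &p;
    return nullptr;
  }

  int main() {
    std::vector<Plugin> v{{"innodb", 0}, {"innodb", 1}};
    return find_installed_plugin(v, "innodb", 1) ? 0 : 1;
  }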
it's not "non deterministic", it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session func it needs
to be re-fixed at the beginning of every statement.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write), in
case of a read/write error the client doesn't always return error 2013
(server has gone away), so in addition we need to check for error 2026
(TLS/SSL error) and 5014 (write error).