Make SEL_ARG::make_root() maintain SEL_ARG::weight.
Also, an unrelated change: fix dbug_print_sel_arg() to correctly
print SQL NULL for the right endpoint.
The test mariabackup.mdev-14447 inserts a relatively large amount of
data while the data is concurrently being backed up. The test often fails
on CI systems, possibly due to an inherent race condition between
the producer (server) and consumer (backup) that would be solved
if the backup were produced by the server itself (MDEV-14992).
The written data volume was increased somewhat by
commit 4179f93d28 (MDEV-18976).
Let us trim the log volume by not writing PAGE_CHECKSUM records
or row-level undo log records, and make the test cover what it was
intended to cover by creating the table in the system tablespace.
The problem was caused by the use of COLLATION(AVG('x')). This is an
item whose value is a constant.
The Name Resolution code called convert_const_to_int(), which removed AVG('x').
However, the item representing COLLATION(...) still had with_sum_func=1.
This inconsistent state confused the code that handles grouping and
DISTINCT: JOIN::get_best_combination() decided to use one temporary
table and allocated one JOIN_TAB for it, but then
JOIN::make_aggr_tables_info() attempted to use two and made writes
beyond the end of the JOIN::join_tab array.
The fix:
- Do not replace constant expressions which contain aggregate functions.
- Add JOIN::dbug_join_tab_array_size to catch attempts to use more
JOIN_TAB objects than we've allocated (see the sketch below).
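A minimal sketch of the debug-guard idea behind JOIN::dbug_join_tab_array_size;
the JOIN/JOIN_TAB shapes and the tab() accessor below are simplified stand-ins
for illustration, not the real server classes:

  #include <cassert>

  struct JOIN_TAB { /* execution plan node; details omitted */ };

  struct JOIN {
    JOIN_TAB *join_tab;                  // array allocated by the optimizer
    unsigned  dbug_join_tab_array_size;  // how many slots were really allocated

    JOIN_TAB *tab(unsigned idx)
    {
      // Catch attempts to use more JOIN_TAB objects than were allocated,
      // instead of silently writing past the end of the array.
      assert(idx < dbug_join_tab_array_size);
      return join_tab + idx;
    }
  };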
- query->intersection fails to get freed if the query exceeds
innodb_ft_result_cache_limit
- errors from init_ftfuncs were not propagated by the DELETE command
This is taken from percona/percona-server@ef2c0bcb9a
with its 0049-Selectively-disallow-SHA1-signatures.patch
in the openssl source rpm.
let's allow them for now; this fixes the tls_version and tls_version1 tests.
This bug manifested itself for INSERT...SELECT and DELETE statements whose
WHERE condition used an IN/ANY/ALL predicand or an EXISTS predicate with
a grouping subquery such that:
- its GROUP BY clause could be eliminated,
- the GROUP BY clause contained a subquery over a mergeable derived table
referencing the updated table.
The bug ultimately caused a server crash when the prepare phase of the
statement processing was executed. This happened after the removal of the
redundant subqueries used in the eliminated GROUP BY clause from the
statement tree. The function that excluded these subqueries from the tree
did not do it properly. As a result the specification of any derived table
contained in a removed subquery was not marked as excluded.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This is particularly important for Azure, where there is no
MyISAM support in their MariaDB cloud product.
Like mysqldumper does, a view can satisfy the requirement
like a table does, without constraints. Views in frm files are
in text form and don't have column limits.
Thanks to Thomas Casteleyn for the suggestion.
don't assume anymore that OPTIONS_WRITTEN_TO_BIN_LOG is fixed once
and forever. Instead, deduce the master's OPTIONS_WRITTEN_TO_BIN_LOG
from the master's version in the binlog.
With a global non-default max-statement-time set, the queries that
mysqldump runs when doing a backup can exceed it and fail.
To solve this, add a max-statement-time option, defaulting to 0 (unlimited time).
Also, like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't expected to ever get that
close, but let's adopt the standard of mariabackup as no-one has
challenged it as having a detrimental effect.
Reviewer and test case author Daniel Black
MDEV-20704 changed the rules of how the (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags are added. Older FRMs created before that fix had
these flags for indexes on DOUBLE columns. After that fix, when ALTER sees
such an old FRM it thinks it cannot do an instant alter because
compare_keys_but_name() fails: it compares the flags against the tmp table
created by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which affected DOUBLE. So there is no direct knowledge
that other types were not affected as well.
The proposed fix makes CHECK TABLE check whether the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior to
MDEV-20704, and if so it issues "needs upgrade". When mysqlcheck and
mysql_upgrade see such a status, they issue ALTER TABLE FORCE and upgrade
the table to the version of the server.
The handler for the existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by calling
its ha_write_row() (the handler is the storage engine handler of the new
partition). From there we go through the conditions below:
  if (this->inited == RND)
    table->clone_handler_for_update();
  handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the meaning of the this->inited check. Here it is the
new partition and this handler is not inited, so we assign
table->file, which is ha_partition and is not really known to be inited
or not. It is supposed that (this == table->file); otherwise we are
outside the logic for using update_handler. This patch adds a DBUG_ASSERT
for that.
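In isolation, the added assertion amounts to the following (the invariant and
the names are taken from the description above):

  // Writing through the update_handler logic only makes sense when this
  // handler is the table's main handler.
  DBUG_ASSERT(this == table->file);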
Second, we call check_duplicate_long_entries() for table->file and
that calls ha_partition::index_init(), which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions() and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries()
per partition as we've already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION that means an existing row was
already checked by previous INSERT/UPDATE commands, so there is no need to
check it again (see the NOTE in handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
This commit restores defaults and functionality regarding binlogs
to the way it was prior to MDEV-27524. The mariabackup utility no
longer saves binlog files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes and, in combination with GTIDs support enabled,
mariabackup transfers one (most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in the rsync mode, it works the
same way as before MDEV-27524 - by default it transfers the last
binlog file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
Let us simplify the test.
The update_time is stored in the table metadata (dict_table_t);
it has nothing to do with buffer pool page eviction or replacement.
Starting with commit 36cdd5c3cd
there is an ASAN stack-buffer-overflow error because we append a null
terminator beyond the length of the allocated memory.
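As a generic illustration of this bug class (not the actual MariaDB code;
copy_terminated() below is a hypothetical helper), the destination must
reserve one extra byte for the terminator:

  #include <cstddef>
  #include <cstring>

  void copy_terminated(char *dst, size_t dst_size, const char *src, size_t len)
  {
    if (len + 1 > dst_size)  // reserve room for the '\0' as well; checking only
      return;                // len > dst_size would leave dst[len] out of bounds
    memcpy(dst, src, len);
    dst[len]= '\0';
  }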
Reviewed by: Monty and Nayuta Yanagisawa
it's not "non deterministic", it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session func it needs
to be re-fixed at the beginning of every statement.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write), in case
of a read/write error the client doesn't always return error 2013 (server
has gone away), so in addition we need to check for errors 2026
(TLS/SSL error) and 5014 (write error).
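A sketch of the relaxed check; is_acceptable_client_error() below is a
hypothetical helper used only to illustrate the set of accepted error codes:

  static bool is_acceptable_client_error(unsigned int err)
  {
    return err == 2013 ||  // server has gone away
           err == 2026 ||  // TLS/SSL error
           err == 5014;    // write error
  }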
In 3b662c6ebd, it was discovered that the
values of the 'wsrep_is_on' and 'wsrep_cannot_replicate_tz' variables need
to be overridden for embedded builds for the test to pass.
However, there are other build configurations where these variables also
have NULL values. The mariadb-tzinfo-to-sql script (implemented in
sql/tztime.cc) can be slightly modified to set its 'wsrep_is_on' and
'wsrep_cannot_replicate_tz' variables more predictably in all such cases,
thus allowing the mysql_tzinfo_to_sql_symlink.test test to pass without
any special-casing for particular build types.
See comments:
- 3b662c6ebd (r78994411)
- https://jira.mariadb.org/browse/MDEV-28782?focusedCommentId=230038&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-230038
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services,
Inc.
Starting with commit da094188f6 (MDEV-24393),
MariaDB will no longer acquire advisory file locks on InnoDB data
files by default, because it would create a large number of
entries in Linux /proc/locks.
The motivation for acquiring the file locks is to prevent accidental
concurrent startup of multiple server processes on the same data files.
Such a mistake still turns out to be relatively common, based on
corruption bug reports from the community.
To prevent corruption due to concurrent startup attempts, the
Aria storage engine would unconditionally acquire an advisory lock
on one of its log files.
Solution: InnoDB will always lock its system tablespace files.
(Ever since commit 685d958e38
the InnoDB log file will not necessarily be open while the
server is running, because it can be accessed via memory-mapped I/O.)
If more protection is desired, then the option --external-locking
can be used.
The mandatory advisory lock also fixes intermittent failures of
some crash recovery tests. It turns out that when the mtr test harness
kills and restarts the server, it will not actually ensure that the
old process has terminated before starting the new one.
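For reference, an advisory lock of this kind can be taken with fcntl(); the
following is only a minimal POSIX sketch of the mechanism (lock_data_file()
is a hypothetical helper), not the actual InnoDB code:

  #include <fcntl.h>
  #include <unistd.h>

  // Try to take an exclusive advisory lock on an already open data file.
  // A second process attempting the same lock fails, which is what
  // prevents accidental concurrent startup on the same files.
  static bool lock_data_file(int fd)
  {
    struct flock fl{};
    fl.l_type= F_WRLCK;    // exclusive (write) lock
    fl.l_whence= SEEK_SET;
    fl.l_start= 0;         // from the start of the file ...
    fl.l_len= 0;           // ... to the end (0 = whole file)
    return fcntl(fd, F_SETLK, &fl) != -1;
  }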
This bug could cause a crash of the server when executing queries containing
ANY/ALL predicands with redundant subqueries in GROUP BY clauses.
These subqueries are eliminated by remove_redundant_subquery_clause()
together with the elimination of the GROUP BY list containing these subqueries.
However, the references to the elements of the GROUP BY remained in the
JOIN::all_fields list of the right operand of the ALL/ANY predicand.
Later these references confused make_aggr_tables_info() when forming
proper execution structures after ALL/ANY predicands had been replaced
with expressions containing MIN/MAX set functions.
The patch just removes these references from the JOIN::all_fields list used
by the subquery of the ALL/ANY predicand when its GROUP BY clause is
eliminated.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
prepare_inplace_add_virtual(): Over-estimate the size of the arrays
by not subtracting table->s->virtual_fields (which may refer to
stored, not virtual generated columns). InnoDB only distinguishes
virtual columns.
... on semisync slave
To provide semisync master crash-recovery, transactions with the same
server-id were made to be accepted for execution on the semisync slave when
the strict gtid mode is in effect (see MDEV-27760).
That, however, caused an out-of-order error in a circular setup when a
master's own transaction was replicated back to it.
The error was fair in the sense of the gtid strict mode rule, as indeed
under the conditions of the circular setup the replicated transaction
already exists in the local binlog.
This is fixed by making the gtid strict mode semisync slave ignore those
gtids that already exist in the slave's binlog, which effectively restores
the default same-server-id ignore policy.
At the same time the fix complies with MDEV-21117 semisync slave recovery,
which accepts same-server-id transactions that do not exist in the local
binlog.