don't assume anymore that OPTIONS_WRITTEN_TO_BIN_LOG is fixed once
and forever. Instead, deduce the master's OPTIONS_WRITTEN_TO_BIN_LOG
from the master's version recorded in the binlog.
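A minimal sketch of the idea, with stand-in option bits and version
cut-offs (the real mask and thresholds live in the replication code):
#include <cstdint>

// All values below are hypothetical, for illustration only.
constexpr uint32_t OPT_BASE_MASK= 0x0f;      // options every version writes
constexpr uint32_t OPT_ADDED_LATER= 1u << 4; // an option introduced later

uint32_t options_written_to_bin_log(uint32_t master_version)
{
  uint32_t mask= OPT_BASE_MASK;
  if (master_version >= 50600)  // hypothetical cut-off version
    mask|= OPT_ADDED_LATER;
  return mask;
}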
MDEV-20704 changed the rules of how the (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags are added. Older FRMs created before that fix
had these flags set for indexes over DOUBLE columns. After that fix,
when ALTER sees such an old FRM, it concludes that it cannot do an
instant alter because compare_keys_but_name() fails: it compares the
flags against the tmp table created by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which happened to affect DOUBLE. So there is no direct
knowledge that other types were not affected as well.
The proposed fix makes CHECK TABLE verify whether the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior
to MDEV-20704, and if so it issues "needs upgrade". When mysqlcheck
and mysql_upgrade see this status, they issue ALTER TABLE FORCE and
upgrade the table to the current server version.
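The check itself can be pictured with a sketch like the following,
where the flag constants and the version cut-off are stand-ins for the
real definitions in the server headers:
#include <cstdint>

// Stand-in values, not the real server definitions.
constexpr uint32_t HA_BINARY_PACK_KEY= 1u << 0;
constexpr uint32_t HA_VAR_LENGTH_KEY=  1u << 1;
constexpr uint32_t MDEV_20704_VERSION= 100500; // hypothetical fix version

bool frm_needs_upgrade(uint32_t key_flags, uint32_t frm_created_version)
{
  return (key_flags & (HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY)) &&
         frm_created_version < MDEV_20704_VERSION;
}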
The handler for an existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by
calling its ha_write_row() (the handler is the storage engine of the
new partition). From there we go through the conditions below:
if (this->inited == RND)
table->clone_handler_for_update();
handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the point of the this->inited check: here we
are writing to a new partition and this handler is not inited, so we
assign table->file, which is ha_partition and is not actually known to
be inited or not. The logic for using update_handler presupposes
(this == table->file); otherwise we are outside of it. This patch adds
a DBUG_ASSERT for that, as shown below.
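That is, the added assertion amounts to:
DBUG_ASSERT(this == table->file);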
Second, we call check_duplicate_long_entries() for table->file, and
that calls ha_partition::index_init(), which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions(), and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries() per
partition, as we have already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION this means that the existing
rows were already checked by previous INSERT/UPDATE commands, so there
is no need to check them again (see the NOTE in
handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
This commit restores the defaults and functionality regarding binlogs
to the way they were prior to MDEV-27524. The mariabackup utility no
longer saves binlog files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes and, in combination with GTID support enabled,
mariabackup transfers one (the most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in rsync mode, it works the same
way as before MDEV-27524 - by default it transfers one last binlog
file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
fil_name_process(): Upon processing a new file name for FILE_RENAME, rename
the tablespace in deferred_spaces.
recv_sys_t::parse(): Rearrange some code to slightly improve the locality
of reference. Invoke fil_name_process() with FILE_MODIFY, FILE_DELETE
or the new case FILE_RENAME, for the new file name of the file.
This appears to be a regression due to
commit 86dc7b4d4c (MDEV-24626),
and it might also affect the crash recovery of DDL operations.
btr_page_reorganize_low(): Restore mtr->set_log_mode() before returning.
innobase_instant_try(): Do not invoke rec_get_offsets() if
btr_cur_pessimistic_update() failed. The cursor position may be
invalid.
Let's simplify the test.
The update_time is stored in the table metadata (dict_table_t);
it has nothing to do with buffer pool page eviction or replacement.
(addressed review input)
The issue was introduced by the @@optimizer_max_sel_arg_weight code.
key_or() calls SEL_ARG::update_weight_locally(), which takes
O(tree->elements) time.
Without that call, key_or(big_tree, one_element_tree) would take
O(log(big_tree)) when one_element_tree doesn't overlap with the
elements of big_tree.
This means update_weight_locally() can cause a significant slowdown.
The fix:
1. key_or() doesn't actually need to call update_weight_locally().
It calls SEL_ARG::tree_delete() and SEL_ARG::insert(), and these
functions already update SEL_ARG::weight.
It also manipulates the SEL_ARG objects directly, but these
modifications do not change the weight of the tree.
The update_weight_locally() call is simply removed.
2. and_all_keys() also calls update_weight_locally() while it
manipulates the SEL_ARG graph directly.
That call is removed and code is added to update the SEL_ARG graph
weight incrementally; see the sketch below.
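An illustration of the incremental idea, with hypothetical types
standing in for the actual SEL_ARG structures:
struct Elem  { unsigned weight; };
struct Graph { unsigned weight; /* element container elided */ };

static void graph_insert(Graph *g, Elem *e)
{
  /* O(log n) structural insert elided */
  g->weight+= e->weight;  // O(1); no O(n) update_weight_locally() pass
}

static void graph_delete(Graph *g, Elem *e)
{
  /* O(log n) structural delete elided */
  g->weight-= e->weight;
}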
Tests main.range and main.range_not_embedded already contain queries
that provide coverage for the affected code.
Starting with commit 36cdd5c3cd
there is an ASAN stack-buffer-overflow error, because we append a NUL
terminator beyond the length of the memory allocated.
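The pattern is the classic off-by-one, shown here as a generic sketch
rather than the actual code path:
#include <cstddef>
#include <cstring>

void terminate(char *dst, const char *src, size_t len)
{
  // dst points to a buffer of exactly len bytes (e.g. on the stack)
  memcpy(dst, src, len);
  dst[len]= '\0';  // one byte past the end: ASAN flags a buffer overflow
}
Reserving len + 1 bytes for the terminator avoids the overflow.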
Reviewed by: Monty and Nayuta Yanagisawa
look for an installed plugin with the same name _and the same type_
(in case there are many plugins with the same name but different
types, which is, technically, possible for built-in plugins).
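A sketch of the lookup rule with hypothetical types (the real code
iterates the server's plugin arrays):
#include <string>
#include <vector>

struct Plugin { std::string name; int type; };

const Plugin *find_installed(const std::vector<Plugin> &installed,
                             const std::string &name, int type)
{
  for (const Plugin &p : installed)
    if (p.name == name && p.type == type)  // match name AND type
      return &p;
  return nullptr;
}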
The function rec_get_offsets_func() used to hit ut_error
due to an invalid rec_get_status() value of a
ROW_FORMAT!=REDUNDANT record. This fix is twofold:
we will not only avoid a crash on corruption in this case,
but we will also make more effort to validate each record
every time we iterate over index page records.
rec_get_offsets_func(): Do not crash on a corrupted record.
page_rec_get_nth(): Return nullptr on error.
page_dir_slot_get_rec_validate(): Like page_dir_slot_get_rec(),
but validate the pointer and return nullptr on error.
page_cur_search_with_match(), page_cur_search_with_match_bytes(),
page_dir_split_slot(), page_cur_move_to_next():
Indicate failure in a return value.
page_cur_search(): Replaced with page_cur_search_with_match().
rec_get_next_ptr_const(), rec_get_next_ptr(): Replaced with
page_rec_get_next_low().
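The recurring pattern in the changes above is to validate before
dereferencing and to report corruption through the return value; a
generic sketch with hypothetical names, not the actual page accessors:
#include <cstddef>

const unsigned char *rec_at(const unsigned char *page, size_t offs,
                            size_t page_size)
{
  if (offs == 0 || offs >= page_size)  // invalid/corrupted record pointer
    return nullptr;                    // let the caller report corruption
  return page + offs;                  // ... instead of crashing via ut_error
}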
TODO: rtr_page_split_initialize_nodes(), rtr_update_mbr_field(),
and possibly other SPATIAL INDEX functions fail to properly handle
errors.
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
Performance tested by: Axel Schwenke
it's not "non-deterministic": it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session func it needs
to be re-fixed at the beginning of every statement.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write),
in the case of a read/write error the client doesn't always return
error 2013 (server has gone away), so in addition we need to check for
error 2026 (TLS/SSL error) and 5014 (write error).
In 3b662c6ebd, it was discovered that the
values of the 'wsrep_is_on' and 'wsrep_cannot_replicate_tz' variables
need to be overridden for embedded builds for the test to pass.
However, there are other build configurations where these variables also
have NULL values. The mariadb-tzinfo-to-sql script (implemented in
sql/tztime.cc) can be slightly modified to set its 'wsrep_is_on' and
'wsrep_cannot_replicate_tz' variables more predictably in all such cases,
thus allowing the mysql_tzinfo_to_sql_symlink.test test to pass without
any special-casing for particular build types.
See comments:
- 3b662c6ebd (r78994411)
- https://jira.mariadb.org/browse/MDEV-28782?focusedCommentId=230038&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-230038
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Starting with commit da094188f6 (MDEV-24393),
MariaDB will no longer acquire advisory file locks on InnoDB data
files by default, because it would create a large number of
entries in Linux /proc/locks.
The motivation for acquiring the file locks is to prevent accidental
concurrent startup of multiple server processes on the same data files.
Such a mistake still turns out to be relatively common, based on
corruption bug reports from the community.
To prevent corruption due to concurrent startup attempts, the
Aria storage engine would unconditionally acquire an advisory lock
on one of its log files.
Solution: InnoDB will always lock its system tablespace files.
(Ever since commit 685d958e38
the InnoDB log file will not necessarily be open while the
server is running, because it can be accessed via memory-mapped I/O.)
If more protection is desired, then the option --external-locking
can be used.
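For reference, an advisory lock of this kind can be pictured with
plain POSIX fcntl(); this is an illustrative sketch, not the server's
actual locking code:
#include <fcntl.h>
#include <unistd.h>

int lock_whole_file(int fd)
{
  struct flock fl= {};
  fl.l_type= F_WRLCK;    // exclusive advisory lock
  fl.l_whence= SEEK_SET;
  fl.l_start= 0;
  fl.l_len= 0;           // 0 means "up to the end of the file"
  return fcntl(fd, F_SETLK, &fl); // -1 if another process holds the lock
}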
The mandatory advisory lock also fixes intermittent failures of
some crash recovery tests. It turns out that when the mtr test harness
kills and restarts the server, it will not actually ensure that the
old process has terminated before starting the new one.
This bug could cause a crash of the server when executing queries containing
ANY/ALL predicands with redundant subqueries in GROUP BY clauses.
These subqueries are eliminated by remove_redundant_subquery_clause()
together with elimination of GROUP BY list containing these subqueries.
However, references to the elements of the GROUP BY remained in the
JOIN::all_fields list of the right operand of the ALL/ANY predicand.
Later these references confused make_aggr_tables_info() when it formed
proper execution structures after the ALL/ANY predicands had been
replaced with expressions containing MIN/MAX set functions.
The patch just removes these references from JOIN::all_fields list used
by the subquery of the ALL/ANY predicand when its GROUP BY clause is
eliminated.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
prepare_inplace_add_virtual(): Over-estimate the size of the arrays
by not subtracting table->s->virtual_fields (which may count
stored, not virtual, generated columns). InnoDB only distinguishes
virtual columns.
... on semisync slave
To support semisync master crash recovery, transactions with the same
server id were made acceptable for execution on the semisync slave in
gtid strict mode (see MDEV-27760).
That, however, caused an out-of-order gtid error in a circular setup
when a master's own transaction returned to that server.
The error was fair in the sense of the gtid strict mode rule, as under
the conditions of the circular setup the replicated transaction indeed
already exists in the local binlog.
This is fixed by the commit so that on the gtid strict mode semisync
slave those gtids that already exist in the slave's binlog are
ignored, which effectively restores the default same-server-id ignore
policy.
At the same time the fix complies with the MDEV-21117 semisync slave
recovery in that same-server-id transactions that do not exist in the
local binlog are still accepted, as sketched below.
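The resulting acceptance rule can be sketched as follows (hypothetical
types and helper names, not the actual replication code):
#include <cstdint>

struct Gtid { uint32_t domain_id, server_id; uint64_t seq_no; };

bool binlog_contains(const Gtid &g);  // hypothetical local-binlog lookup

bool semisync_slave_should_apply(const Gtid &g, uint32_t own_server_id)
{
  if (g.server_id == own_server_id && binlog_contains(g))
    return false;  // circular setup: already in the local binlog -> ignore
  return true;     // not logged locally: accept (MDEV-21117 recovery case)
}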