As part of this MDEV the following changes were made:
1) mariadb-named executables are used instead of mysql-named executables in scripts.
2) mysql-test-run and mysql-stress-test were renamed to mariadb-test-run and
mariadb-stress-test, and a symlink was created.
In commit 49e2c8f0a6 (MDEV-25743)
we made dict_sys_t::find() incompatible with the rest of the
table name hash table operations in case the table name contains
non-ASCII octets (using a compatibility mode that facilitates the
upgrade to the MySQL 5.0 filename-safe encoding) and the target
platform has a signed char type.
ut_fold_string(): Remove; replace with my_crc32c(). This also makes
table name hash value calculations independent of whether char
is unsigned or signed.
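
For illustration, a minimal software CRC-32C version of the new name
folding; the server actually calls my_crc32c(), and this standalone
sketch only demonstrates why hashing the name as raw bytes makes the
result independent of char signedness:

  #include <cstdint>
  #include <cstring>

  // Bitwise CRC-32C (Castagnoli polynomial, reflected form): a slow
  // stand-in for the server's my_crc32c(), which is hardware-accelerated
  // where possible.
  static uint32_t crc32c(uint32_t crc, const void *data, size_t len)
  {
    const uint8_t *p= static_cast<const uint8_t*>(data);
    crc= ~crc;
    while (len--)
    {
      crc^= *p++;                     // bytes are consumed as unsigned
      for (int i= 0; i < 8; i++)
        crc= (crc >> 1) ^ ((crc & 1) ? 0x82F63B78U : 0);
    }
    return ~crc;
  }

  // Hypothetical fold function replacing ut_fold_string(): the result
  // cannot depend on whether plain char is signed on the platform.
  static uint32_t table_name_hash(const char *name)
  {
    return crc32c(0, name, strlen(name));
  }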
The server may still abort if there is not enough free space in the
ring buffer to resubmit the I/O job, but the behavior is equivalent to
a failure of os_aio() -> submit_io().
In commit de407e7cb4 a debug assertion
was added that would not always hold: we could have TRX_STATE_PREPARED
here. But in that case, the transaction should not have been chosen
as a deadlock victim.
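
A hedged sketch of the corrected invariant, with trx_t reduced to the
two fields that matter here (the structure is illustrative, not
InnoDB's actual definition):

  #include <cassert>

  enum trx_state_demo { TRX_STATE_ACTIVE, TRX_STATE_PREPARED };

  struct trx_demo
  {
    trx_state_demo state;
    bool was_chosen_as_deadlock_victim;
  };

  // PREPARED is a legal state at this point, but a PREPARED transaction
  // must never have been chosen as a deadlock victim.
  void assert_victim_invariant(const trx_demo &trx)
  {
    assert(trx.state == TRX_STATE_ACTIVE ||
           (trx.state == TRX_STATE_PREPARED &&
            !trx.was_chosen_as_deadlock_victim));
  }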
* read_command_buf is now a pointer, so sizeof() no longer reflects its
  length; read_command_buflen does (see the sketch after this list).
* my_safe_print_str() prints multiple screens of '\0' bytes after the
  query end and up to read_command_buflen. Use fprintf() instead.
* when setting connection->name to "-closed_connection-", update
  connection->name_len to match.
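
The sizeof() pitfall behind the first point, in standalone form (names
are reused for readability; the surrounding mysqltest code differs):

  #include <cstdio>
  #include <cstring>

  int main()
  {
    // As a fixed array, sizeof() used to give the buffer length.
    char fixed_buf[1024];
    memset(fixed_buf, 0, sizeof(fixed_buf));          // 1024 bytes

    // As a heap-allocated buffer, sizeof() yields the pointer size
    // (for example 8 on a 64-bit build), so the length must be tracked
    // separately, here in read_command_buflen.
    size_t read_command_buflen= 1024;
    char *read_command_buf= new char[read_command_buflen];
    printf("sizeof(array)=%zu sizeof(pointer)=%zu buflen=%zu\n",
           sizeof(fixed_buf), sizeof(read_command_buf),
           read_command_buflen);
    delete[] read_command_buf;
    return 0;
  }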
Problem:
=======
When the semisync master crashes and is restarted as a slave, it could
recover transactions that former slaves may never have seen.
The known method of clearing out all prepared transactions
with --tc-heuristic-recover=rollback does not care to adjust
the binlog accordingly.
Fix:
===
Binlog-based recovery is made aware of the semisync slave role of the
post-crash restarted server.
No change in behavior is made to the "normal" binlogging server
or the semisync master.
When the restarted server is configured with
--rpl-semi-sync-slave-enabled=1
the refined recovery attempts to roll back prepared transactions
and truncate the binlog accordingly.
A partially committed transaction (that is, one committed in at least
one of the engine participants) gets committed instead.
It is guaranteed that no committed (even partially committed)
transactions exist beyond the truncate position.
If a non-transactional replication event (which is, in effect, a
committed transaction) exists past the computed truncate position,
the recovery ends with an error.
After a master crash and failover to a slave, the demoted-to-slave
ex-master must be ready to face and accept its own (self-generated)
events without the normally required --replicate-same-server-id.
So the acceptance conditions are relaxed for the semisync slave
to accept its own events without that option.
While gtid_strict_mode=ON ensures that no duplicate transaction can be
(re-)executed, a master_use_gtid=none slave still has to be
configured with --replicate-same-server-id.
*NOTE* for reviewers:
this patch does not handle user XA, which is done
in the next git commit.
The test checks whether short options work on the server command line.
* remove 'show variables' for variables not affected by short options
* remove options that are not short
* remove options that cannot be tested from SQL
* in particular, -T12 doesn't affect the test output,
  but causes a ~30sec delay on shutdown
* use -W1, as -W2 is the default and thus doesn't affect the test output
also avoid an oxymoron of using `MYSQL_PLUGIN_IMPORT` under
`#ifdef MYSQL_SERVER`, and empty_clex_str is so trivial that a plugin
can define it if needed.
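
For illustration, what such a plugin-local definition could look like
(a hedged sketch; the real LEX_CSTRING typedef comes from the server's
common headers):

  #include <cstddef>

  // Stand-in for the server's LEX_CSTRING: a constant string plus its
  // length.
  struct LEX_CSTRING_demo
  {
    const char *str;
    size_t length;
  };

  // The whole "import": an empty string of length zero, trivially
  // defined by the plugin itself instead of via MYSQL_PLUGIN_IMPORT.
  static const LEX_CSTRING_demo empty_clex_str= {"", 0};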
We no longer maintain the unstable-tests collection.
In commit 3635280cf7 we added
a smoke test collection, which independent MariaDB package maintainers
can expect to pass.
btr_cur_n_non_sea should be updated only when adaptive hashing is turned on.
Currently, btr_cur_n_non_sea is used to track searches that miss the
adaptive hash index. The adaptive hash index is turned off by default,
but the variable is always updated, even though its value makes sense
only when the adaptive hash index is enabled. It is meant to count how
many searches did not go through the adaptive hash index.
Being a global variable that is updated on each search path, it causes
contention under a multi-threaded workload.
The patch moves the updates of the said variables inside the code path
that is executed only when the adaptive hash index is enabled, which in
theory should also reduce the update frequency of the variable, as the
majority of requests should be serviced through the adaptive hash index.
The variables (btr_cur_n_non_sea and btr_cur_n_sea) are also converted
to distributed counters to avoid contention.
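
A hedged sketch of a distributed (sharded) counter in the spirit of
InnoDB's counters: each thread increments its own cache line, and a
read sums all shards, so the hot-path increment no longer bounces a
single cache line between CPUs. The sharding-by-thread-id scheme here
is illustrative:

  #include <atomic>
  #include <cstdint>
  #include <functional>
  #include <thread>

  template <size_t SHARDS= 64>
  class distributed_counter
  {
    // Each shard lives on its own cache line to avoid false sharing.
    struct alignas(64) shard { std::atomic<uint64_t> value{0}; };
    shard shards[SHARDS];

    static size_t my_shard()
    {
      return std::hash<std::thread::id>()(std::this_thread::get_id())
             % SHARDS;
    }
  public:
    void inc()
    { shards[my_shard()].value.fetch_add(1, std::memory_order_relaxed); }

    // Statistics reads may be slightly stale; that is acceptable for
    // counters like btr_cur_n_sea and btr_cur_n_non_sea.
    uint64_t read() const
    {
      uint64_t sum= 0;
      for (const shard &s : shards)
        sum+= s.value.load(std::memory_order_relaxed);
      return sum;
    }
  };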
User-visible changes:
This also means that users will now see
Innodb_adaptive_hash_non_hash_searches (as part of SHOW STATUS)
only if the code is compiled with -DWITH_INNODB_AHI=ON (the default), and
it will be updated only if innodb_adaptive_hash_index=1; otherwise it is
reported as 0.
The check-testcase record uses a mysqltest connection
to the database to do the recording. With the server listening on
an abstract socket, the mysqltest client cannot connect and
fails.
We work around this by starting the server as normal and then
restarting it with an abstract socket and testing that.
This didn't affect Windows, as it just used a TCP connection.
So this affected all Unix-socket-based systems except Linux,
which was the only one that supported abstract sockets.
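
For background, a hedged sketch of what makes a socket "abstract" on
Linux: the address begins with a NUL byte and never appears on the
filesystem, a namespace other Unixes simply do not implement (and
Windows clients connect over TCP instead):

  #include <cstddef>
  #include <cstring>
  #include <sys/socket.h>
  #include <sys/un.h>

  // Illustrative only: bind a socket to the abstract name "mysqld-test".
  int bind_abstract(int fd)
  {
    const char name[]= "mysqld-test";
    sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family= AF_UNIX;
    // sun_path[0] stays '\0': this marks the abstract namespace.
    memcpy(addr.sun_path + 1, name, sizeof(name) - 1);
    socklen_t len= offsetof(sockaddr_un, sun_path) + 1 + sizeof(name) - 1;
    return bind(fd, reinterpret_cast<sockaddr*>(&addr), len);
  }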
Not all environments have 'diff' installed. Most notably CentOS 8
does not have diff out-of-the-box. Thus users running 'cmake .' and
'make' would fail to build MariaDB, and they would think the error
was due to ABI incompatibilities, because of the error message emitted
by CMake, when in reality 'diff' was simply missing.
This fixes it and makes the developer experience better by simply
skipping the diffing if 'diff' is not found.
Closes #1846
Removed Field_map, since it was used only in a single function.
Fixed is_indexed_agg_distinct(), since it relied on the Bitmap being
initialized by its constructor.
Fixes MDEV-25888 in 10.4
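
A hedged sketch of the pattern behind the fix, with the server's
Bitmap<> reduced to a toy wrapper: the point is that the caller clears
the map explicitly rather than trusting the constructor to do it:

  #include <bitset>
  #include <cstddef>

  // Toy stand-in for the server's Bitmap<>; the real class wraps raw
  // words, and whether they started out zeroed was exactly the bug.
  template <size_t N>
  struct Bitmap_demo
  {
    std::bitset<N> bits;
    void clear_all() { bits.reset(); }
    void set_bit(size_t i) { bits.set(i); }
    bool is_clear_all() const { return bits.none(); }
  };

  bool indexed_fields_demo()
  {
    Bitmap_demo<64> indexed_fields;
    indexed_fields.clear_all();   // the fix: initialize before first use
    indexed_fields.set_bit(3);
    return !indexed_fields.is_clear_all();
  }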
The idea of main.failed_auth_unixsocket was to have an existing
user account (root) authenticate with unix_socket, then log in with a
non-existent user name. A non-existent user name forces the server
to perform the authentication in the name of some random existing
user. But it must still fail in the end, as the user name is wrong.
In 10.4 a second predefined user was added, mariadb.sys, so root
is not the only user in mysql.global_priv, and unix_socket auth
must be forced for all existing user accounts, because we cannot
know which user account the server will randomly pick for non-existent
user auth.
RocksDB failed to package on Fedora 32 and 33 with
*** ERROR: ambiguous python shebang in /usr/bin/myrocks_hotbackup: #!/usr/bin/env python. Change it to python3 (or python2) explicitly.
When doing an ALTER TABLE ... RENAME, MariaDB doesn't rename the
original table to #sql-backup, as it does in other cases,
but instead drops the original table directly. However,
this optimization doesn't work in the case of an InnoDB table
with a foreign key constraint.
During the copy algorithm, InnoDB fails to rename the foreign key
constraint (MDEV-25855). With this optimization, InnoDB also
fails to drop the original table, because a FOREIGN KEY constraint
for the table still exists in the INNODB_SYS_FOREIGN table.
This leads to an orphan .ibd file in the InnoDB dictionary.
So this optimization is disabled when a foreign key is involved.
Reviewer: monty@mariadb.org
This is a complete rewrite of DROP TABLE, also as part of other DDL,
such as ALTER TABLE, CREATE TABLE...SELECT, TRUNCATE TABLE.
The background DROP TABLE queue hack is removed.
If a transaction needs to drop and create a table by the same name
(like TRUNCATE TABLE does), it must first rename the table to an
internal #sql-ib name. No committed version of the data dictionary
will include any #sql-ib tables, because whenever a transaction
renames a table to a #sql-ib name, it will also drop that table.
Either the rename will be rolled back, or the drop will be committed.
Data files will be unlinked after the transaction has been committed
and a FILE_RENAME record has been durably written. The file will
actually be deleted when the detached file handle returned by
fil_delete_tablespace() is closed, after the latches have been
released. It is possible that a purge of the delete of the SYS_INDEXES
record for the clustered index will execute fil_delete_tablespace()
concurrently with the DDL transaction. In that case, the thread that
arrives later will wait for the other thread to finish.
HTON_TRUNCATE_REQUIRES_EXCLUSIVE_USE: A new handler flag.
ha_innobase::truncate() now requires that all other references to
the table be released in advance. This was implemented by Monty.
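
A hedged sketch of how such a handlerton capability flag is typically
declared and tested; the actual bit value and call sites in the server
differ:

  #include <cstdint>

  // Illustrative bit value only; the real constant is defined among the
  // other HTON_* flags in the server's handler interface.
  static const uint32_t HTON_TRUNCATE_REQUIRES_EXCLUSIVE_USE= 1U << 0;

  struct handlerton_demo { uint32_t flags; };

  // When the flag is set, the SQL layer must ensure that no other
  // handler instances reference the table before calling
  // handler::truncate().
  bool truncate_requires_exclusive_use(const handlerton_demo *hton)
  {
    return (hton->flags & HTON_TRUNCATE_REQUIRES_EXCLUSIVE_USE) != 0;
  }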
ha_innobase::delete_table(): If CREATE TABLE..SELECT is detected,
we will "hijack" the current transaction, drop the table in
the current transaction and commit the current transaction.
This essentially fixes MDEV-21602. There is a FIXME comment about
making the check less failure-prone.
ha_innobase::truncate(), ha_innobase::delete_table():
Implement a fast path for temporary tables. We will no longer allow
temporary tables to use the adaptive hash index.
dict_table_t::mdl_name: The original table name for the purpose of
acquiring MDL in purge, to prevent a race condition between a
DDL transaction that is dropping a table, and purge processing
undo log records of DML that had executed before the DDL operation.
For #sql-backup- tables during ALTER TABLE...ALGORITHM=COPY, the
dict_table_t::mdl_name will differ from dict_table_t::name.
dict_table_t::parse_name(): Use mdl_name instead of name.
dict_table_rename_in_cache(): Update mdl_name.
For the internal FTS_ tables of FULLTEXT INDEX, purge would
acquire MDL on the FTS_ table name, but not on the main table,
and therefore it would be able to run concurrently with a
DDL transaction that is dropping the table. Previously, the
DROP TABLE queue hack prevented a race between purge and DDL.
For now, we introduce purge_sys.stop_FTS() to prevent purge from
opening any table, while a DDL transaction that may drop FTS_
tables is in progress. The function fts_lock_table(), which will
be invoked before the dictionary is locked, will wait for
purge to release any table handles.
trx_t::drop_table_statistics(): Drop statistics for the table.
This replaces dict_stats_drop_index(). We will drop or rename
persistent statistics atomically as part of DDL transactions.
On lock conflict for dropping statistics, we will fail instantly
with DB_LOCK_WAIT_TIMEOUT, because we will be holding the
exclusive data dictionary latch.
trx_t::commit_cleanup(): Separated from trx_t::commit_in_memory().
Relax an assertion around fts_commit() and allow DB_LOCK_WAIT_TIMEOUT
in addition to DB_DUPLICATE_KEY. The call to fts_commit() is
entirely misplaced here and may obviously break the consistency
of transactions that affect FULLTEXT INDEX. It needs to be fixed
separately.
dict_table_t::n_foreign_key_checks_running: Remove (MDEV-21175).
The counter was a work-around for missing meta-data locking (MDL)
on the SQL layer, and not really needed in MariaDB.
ER_TABLE_IN_FK_CHECK: Replaced with ER_UNUSED_28.
HA_ERR_TABLE_IN_FK_CHECK: Remove.
row_ins_check_foreign_constraints(): Do not acquire
dict_sys.latch either. The SQL-layer MDL will protect us.
This was reviewed by Thirunarayanan Balathandayuthapani
and tested by Matthias Leich.
The implementation of MDEV-24626 was not entirely correct.
We could occasionally fail to remove some *.ibd files on recovery.
deferred_spaces: Keep track of FILE_DELETE records.
deferred_spaces.add(): Do not allow duplicate file names.
recv_rename_files(): Preserve some of renamed_spaces entries for
deferred_spaces.reinit_all().
Thanks to Thirunarayanan Balathandayuthapani for noticing that
deferred_spaces.add() must filter out duplicate file names,
as well as some debugging help.
* Make Item_in_optimizer::fix_fields inherit the with_window_func
  attribute of the subquery's left expression (the subquery itself
  cannot have window functions that are aggregated in this select).
* Make Item_cache_wrapper::Item_cache_wrapper() inherit the
  with_window_func attribute of the item it is caching.
Both changes are sketched below.
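
A hedged sketch of the flag propagation, with Item reduced to the
single attribute involved:

  struct Item_demo
  {
    bool with_window_func= false;
  };

  struct Item_in_optimizer_demo : Item_demo
  {
    Item_demo *args[2];

    void fix_fields_demo()
    {
      // Inherit the flag from the left expression of IN; the subquery
      // itself cannot have window functions aggregated in this select.
      with_window_func|= args[0]->with_window_func;
    }
  };

  struct Item_cache_wrapper_demo : Item_demo
  {
    explicit Item_cache_wrapper_demo(const Item_demo *orig)
    {
      // The wrapper must report the same attribute as the item it caches.
      with_window_func= orig->with_window_func;
    }
  };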
Occasionally, the test innodb.alter_copy would fail in MariaDB 10.6.1,
reporting DB_MISSING_HISTORY during CHECK TABLE. It started to occur during
the development of MDEV-25180, which introduced purge_sys.stop_SYS().
If we delay purge more during DDL operations, then the test would
almost always fail. The reason is that during startup we will restore
a purge view, and CHECK TABLE would still use REPEATABLE READ
even though innodb_read_only is set and isolation levels other
than READ UNCOMMITTED are not guaranteed to work.
ha_innobase::check(): Use READ UNCOMMITTED isolation level if
innodb_read_only is set or innodb_force_recovery exceeds 3.
dict_set_corrupted(): Do not update the persistent data dictionary
if innodb_force_recovery exceeds 3.
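
A hedged sketch of the resulting rule in ha_innobase::check(); the
names are simplified stand-ins for the server's variables:

  enum isolation_demo { READ_UNCOMMITTED, REPEATABLE_READ };

  // Under innodb_read_only, or when innodb_force_recovery exceeds 3,
  // undo logs may be unavailable, so a consistent (REPEATABLE READ)
  // view cannot be guaranteed and CHECK TABLE must not rely on one.
  isolation_demo check_table_isolation(bool innodb_read_only,
                                       unsigned innodb_force_recovery)
  {
    if (innodb_read_only || innodb_force_recovery > 3)
      return READ_UNCOMMITTED;
    return REPEATABLE_READ;
  }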
- Report the test name in case not all tests are completed and the
server closed the connection.
- Report the failure of the last test with the server log in case of
server shutdown.
- Ignore stackdump files (obtained on Windows).
Reviewed by: wlad@mariadb.com
If an internal temporary table is in use, `hton` will not be set. Skip
the check of whether DDL should be replicated in this case.
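
A hedged sketch of the guard; types and the decision itself are reduced
to the essentials:

  struct handlerton;   // opaque here; only its presence matters

  // An internal temporary table may have no handlerton assigned yet, in
  // which case the DDL-replication check is skipped entirely.
  bool should_check_ddl_replication(const handlerton *hton)
  {
    return hton != nullptr;
  }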
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
buf_read_ibuf_merge_pages(): If space->size is 0, invoke
fil_space_get_size() to determine the size of the tablespace
by reading the header page. Only after that proceed to delete
any entries that are beyond the end of the tablespace.
Otherwise, we could be deleting valid entries that actually
need to be applied.
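
A hedged sketch of the corrected order of operations; the real
buf_read_ibuf_merge_pages() reads the header page via
fil_space_get_size(), modeled here as a callback:

  #include <cstdint>
  #include <functional>

  // Returns true if a buffered change for page_no should be deleted as
  // stale. A cached size of 0 means "unknown", so the tablespace header
  // must be read first; judging against 0 would discard entries that
  // still need to be applied.
  bool ibuf_entry_is_stale(uint32_t page_no, uint32_t cached_size,
                           const std::function<uint32_t()> &header_size)
  {
    const uint32_t size= cached_size ? cached_size : header_size();
    return page_no >= size;
  }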
This fixes a regression that had been introduced in
commit b80df9eba2 (MDEV-21069),
which aimed to avoid crashes during DROP TABLE of corrupted tables.