Add compression plugins:
* bzip2
* lz4
* lzma
* lzo
* snappy
to autopkgtest's basic smoke-test dependencies.
Also make sure that the MariaDB service is reloaded
when the smoke-test is executed.
As MariaDB 10.5 has been removed from Debian Sid and MariaDB 10.6 has
entered it, the Salsa-CI testing needs to adapt.
To achieve this, essentially sync most of the salsa-ci.yml contents from
https://salsa.debian.org/mariadb-team/mariadb-server/-/tree/debian/latest
This includes removing the Stretch builds, as Stretch supports neither the
uring nor the pmem libraries, which MariaDB 10.6 depends on.
Also add a couple of Lintian overrides to make Salsa-CI pass.
NOTE TO MERGERS: This commit is made on the 10.6 branch and can be merged
into all later branches (10.7, 10.8, 10.9..) for now, but later somebody needs
to go in and update all the testing stages to do the upgrade testing
correctly for 10.6->10.7->10.8->10.9 etc.
- The InnoDB bulk insert operation fails to roll back when it detects
a DB_DUPLICATE_KEY error, which leaves orphaned records in the primary
index. A subsequent update/delete operation assumes that the record
also exists in the secondary index, and fails.
- Since MDEV-24621, InnoDB buffers the bulk insert operation for all
indexes except spatial ones. But inserting into a spatial index
performs a primary key lookup, and that lookup fails. So InnoDB
should avoid bulk insert when the table has a spatial index
(see the sketch below).
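A minimal self-contained C++ sketch of the intended check, using
hypothetical miniature types (the real code would walk the InnoDB
dict_table_t index list instead):

    #include <vector>

    enum class IndexType { BTREE, SPATIAL };
    struct Index { IndexType type; };
    struct Table { std::vector<Index> indexes; };

    // Refuse bulk-insert buffering as soon as any index is spatial,
    // because the spatial insert performs a primary key lookup.
    static bool bulk_insert_allowed(const Table &t)
    {
      for (const Index &i : t.indexes)
        if (i.type == IndexType::SPATIAL)
          return false;
      return true;
    }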
The issue was introduced by commit 59a0236da4.
The initial intention of the commit was to speed up
"mariabackup --prepare".
The call stack of binlog position reading is the following:
▾ trx_rseg_mem_restore
▾ trx_rseg_array_init
▾ trx_lists_init_at_db_start
▸ srv_start
Both trx_lists_init_at_db_start() and trx_rseg_mem_restore() contain
special cases for the srv_operation == SRV_OPERATION_RESTORE condition,
and under this condition only the rseg headers are read to parse the
binlog position. The performance impact of the revert is therefore small.
The solution is to revert 59a0236da4.
ib_id_t is a uint64. On AIX this is not a long long unsigned, so to
prevent compiler warnings and a potentially wrong type, the UINT64PFx
definition is corrected.
As INT64PF is unused (its last use was in XtraDB in 10.2), it is removed,
to avoid giving the impression that INT64PF and UINT64PFx would otherwise
be for different types.
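For illustration, a minimal standalone program showing the portable
formatting approach; the <cinttypes> macros match the underlying type of
uint64_t on every platform, whatever that type is:

    #include <cinttypes>
    #include <cstdio>

    int main()
    {
      // uint64_t may be "unsigned long" (e.g. 64-bit AIX) or
      // "unsigned long long" elsewhere; PRIx64 always matches it.
      uint64_t id = 0xdeadbeefULL;
      std::printf("%" PRIx64 "\n", id);
      return 0;
    }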
This bug report is not about an ASAN use-after-free issue. It is about
a missed call to the method LEX::cleanup_lex_after_parse_error that
should happen on a parse error.
The aforementioned method calls sp_head::restore_thd_mem_root to clean up
resources acquired on processing a stored routine; particularly,
it restores the original mem root and resets LEX::sphead to nullptr.
The method LEX::cleanup_lex_after_parse_error is invoked by the macro
MYSQL_YYABORT. Unfortunately, some grammar rules for handling
user variables in SQL use YYABORT instead of MYSQL_YYABORT to handle
parser errors. As a consequence, when a statement that sets a user
variable is run inside a stored routine, an assertion fails in the
sp_head destructor.
To fix the issue, the macro YYABORT is replaced by MYSQL_YYABORT in the
grammar rules that handle assignment of user variables; a sketch of the
difference between the two macros follows.
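A simplified sketch of the difference (the MYSQL_YYABORT shape follows
sql_yacc.yy, reconstructed here):

    /* YYABORT is Bison's built-in macro: it only aborts the parse.
       MYSQL_YYABORT additionally runs the LEX cleanup first, which
       restores the original mem root and resets LEX::sphead. */
    #define MYSQL_YYABORT                         \
      do                                          \
      {                                           \
        LEX::cleanup_lex_after_parse_error(thd);  \
        YYABORT;                                  \
      } while (0)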
CMAKE_SYSTEM_PROCESSOR on AIX is "powerpc". To deconflict with the
32-bit Linux arch of the same name, CMAKE_SYSTEM_NAME is used in the
CMakeLists.txt test to enable -mhtm, in the same way as was required
for the Linux ppc64{,le} compilers in MDEV-27936.
Configuring UDFs via plugin variables does not look like a good idea.
The more variables Spider has, the more complex it becomes.
Further, I expect that only a few users use Spider UDFs.
Deprecate the following plugin variables regarding Spider UDFs:
* spider_udf_ds_bulk_insert_rows
* spider_udf_ds_table_loop_mode
* spider_udf_ds_use_real_table
* spider_udf_ct_bulk_insert_interval
* spider_udf_ct_bulk_insert_rows
spider_udf_table_lock_mutex_count and spider_udf_table_mon_mutex_count
are also for tweaking UDFs but they are already read-only. So,
there is no need to deprecate them.
"#ifdef WITH_PARTITION_STORAGE_ENGINE ... #endif" appears frequently
in the Spider code base. However, there is no need to maintain such
ifdefs because Spider is disabled if the partitioning engine is disabled.
Analysis: In case of an error while processing the JSON document, we go
to the error label, which eventually returns 1 instead of 0.
Fix: Return 0 instead of 1 in case of error.
1) When at least one of the two json documents is of scalar type:
1.a) If the value and the json document are both scalars, return true
     if they have the same type and value.
1.b) If the json document is scalar but the other is an array (or vice
     versa), return true if the array has at least one element of the
     same type and value as the scalar.
1.c) If one is a scalar and the other is an object, return false,
     because they cannot be compared.
2) When both arguments are of non-scalar type, return true if the
   applicable condition below is satisfied:
2.a) When both arguments are arrays:
     Iterate over the value and the json document; return true if the
     other array has at least one element of the same type and value as
     some element of the value.
2.b) If both arguments are objects:
     Iterate over the value and the json document; return true if at
     least one key-value pair is common to the two objects.
2.c) If one of the json document and the value is an array and the
     other is an object:
     Iterate over the array; if an element of type object is found,
     compare it with the object (the other argument), and return true
     if the entire object matches, i.e. all the key-value pairs match.
(A sketch of these rules follows.)
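A self-contained C++20 sketch of these rules, using a hypothetical
miniature JSON value type (the real implementation operates on the
server's JSON engine, not on a value tree like this):

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Json
    {
      enum Kind { SCALAR, ARRAY, OBJECT } kind{SCALAR};
      std::string scalar;            // encodes type and value, e.g. "int:1"
      std::vector<Json> elems;       // ARRAY elements
      std::vector<std::string> keys; // OBJECT keys
      std::vector<Json> vals;        // OBJECT values, parallel to keys
      bool operator==(const Json &) const = default; // deep equality
    };

    static bool json_overlaps(const Json &a, const Json &b)
    {
      // 1.a: two scalars overlap when type and value match.
      if (a.kind == Json::SCALAR && b.kind == Json::SCALAR)
        return a == b;
      // 1.b: scalar vs array: the array must contain the scalar.
      if (a.kind == Json::SCALAR && b.kind == Json::ARRAY)
        return std::find(b.elems.begin(), b.elems.end(), a) != b.elems.end();
      if (a.kind == Json::ARRAY && b.kind == Json::SCALAR)
        return json_overlaps(b, a);
      // 1.c: scalar vs object cannot be compared.
      if (a.kind == Json::SCALAR || b.kind == Json::SCALAR)
        return false;
      // 2.a: arrays overlap when they share at least one element.
      if (a.kind == Json::ARRAY && b.kind == Json::ARRAY)
      {
        for (const Json &x : a.elems)
          if (std::find(b.elems.begin(), b.elems.end(), x) != b.elems.end())
            return true;
        return false;
      }
      // 2.b: objects overlap when they share one key-value pair.
      if (a.kind == Json::OBJECT && b.kind == Json::OBJECT)
      {
        for (std::size_t i = 0; i < a.keys.size(); i++)
          for (std::size_t j = 0; j < b.keys.size(); j++)
            if (a.keys[i] == b.keys[j] && a.vals[i] == b.vals[j])
              return true;
        return false;
      }
      // 2.c: array vs object: an array element must equal the object.
      if (a.kind == Json::OBJECT)
        return json_overlaps(b, a);
      return std::find(a.elems.begin(), a.elems.end(), b) != a.elems.end();
    }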
The comparison of the checkpoint age (the number of log bytes
written since the previous checkpoint) is inaccurate, because
the previous FILE_CHECKPOINT record could span two 512-byte
log blocks, which will cause the LSN to increase by the size of the
log block header and footer.
We will still generate a redundant checkpoint if the previous
checkpoint wrote some FILE_MODIFY records before the FILE_CHECKPOINT
record.
Whenever we retrieve an older version for READ COMMITTED,
it is better to release the undo page latches
so that we can freely move to the next clustered index record
without potentially violating any latching order.
Only one checkpoint may be in progress at a time.
The counter log_sys.n_pending_checkpoint_writes
was being protected by log_sys.mutex.
Let us replace it with the Boolean log_sys.checkpoint_pending.
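The shape of the change, in a simplified sketch (assumed declarations,
not the real ones):

    struct log_sketch
    {
      // before: ulint n_pending_checkpoint_writes; // 0 or 1 in practice
      bool checkpoint_pending= false; // still protected by log_sys.mutex
    };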
srv_start(): Set srv_startup_is_before_trx_rollback_phase before
starting the buf_flush_page_cleaner() thread, so that it will not
invoke log_checkpoint() before the log file has been created.
This race condition was reproduced with https://rr-project.org.
This fixes up commit 15efb7ed48.
buf_pool_t::watch_unset(): Reorder some code so that
no warning will be emitted in CMAKE_BUILD_TYPE=RelWithDebInfo.
It is unclear why invoking watch_is_sentinel() before
buf_fix_count() would make the warning disappear.
For GTID consistency, a GTID event was artificially added before
replication happened. This event should not contain a calculated
CHECKSUM.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
On the affected machine, the error happens sporadically in
innodb.instant_alter_limit.
Procmon shows SetRenameInformationFile failing with ERROR_ACCESS_DENIED.
In this case, the destination file had previously been opened and
oplocked by the Windows Defender antivirus.
The fix is to retry MoveFileEx on ERROR_ACCESS_DENIED.
In commit 437da7bc54 (MDEV-19534),
the default value of the global variable srv_checksum_algorithm
in innochecksum was changed from SRV_CHECKSUM_ALGORITHM_INNODB
to an implied 0 (innodb_checksum_algorithm=crc32). As a result,
the function buf_page_is_corrupted() would by default invoke
buf_calc_page_crc32() in innochecksum, and crc32_inited would hold.
This would cause "innochecksum" to fail on a particular page.
The actual problem is older, introduced in 2011 in
mysql/mysql-server@17e497bdb7
(MySQL 5.6.3). It should affect the validation of pages of old
data files that were written with innodb_checksum_algorithm=innodb.
When using innodb_checksum_algorithm=crc32 (the default setting
since MariaDB Server 10.2), some valid pages would be rejected
only because exactly one of the two checksum fields accidentally
matches the innodb_checksum_algorithm=crc32 value.
buf_page_is_corrupted(): Simplify the logic of non-strict
checksum validation, by always invoking buf_calc_page_crc32().
Remove a bogus condition that if only one of the checksum fields
contains the value returned by buf_calc_page_crc32(), the page
is corrupted.
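A simplified sketch of the corrected non-strict acceptance logic, with
hypothetical helper names standing in for the real InnoDB routines, and
omitting the innodb_checksum_algorithm=none case:

    #include <cstdint>

    uint32_t calc_crc32(const uint8_t *page);      // buf_calc_page_crc32()
    uint32_t calc_innodb_new(const uint8_t *page); // legacy header algorithm
    uint32_t calc_innodb_old(const uint8_t *page); // legacy trailer algorithm
    uint32_t stored_header(const uint8_t *page);   // first checksum field
    uint32_t stored_trailer(const uint8_t *page);  // second checksum field

    bool page_valid_non_strict(const uint8_t *page)
    {
      const uint32_t crc= calc_crc32(page); // now always computed
      // A crc32 page carries the same crc32 value in both fields.
      if (stored_header(page) == crc && stored_trailer(page) == crc)
        return true;
      // A legacy innodb page is checked field by field against its own
      // algorithm; an accidental match of just one field with crc must
      // not cause rejection (the removed bogus condition).
      return stored_header(page) == calc_innodb_new(page) &&
             stored_trailer(page) == calc_innodb_old(page);
    }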
In commit 7a4fbb55b0 (MDEV-25105)
the innochecksum option --write (-w) was removed altogether.
It should have been made a Boolean option, so that old data files
may be converted to a format that is compatible with
innodb_checksum_algorithm=strict_crc32 by executing the following:
innochecksum -n -w ibdata* */*.ibd
It would be better to use an older-version innochecksum
for such a conversion, so that page checksums will be validated
before updating the checksum.
It never was possible for innochecksum to convert files to the
innodb_checksum_algorithm=full_crc32 format that is the default
for new InnoDB data files.
This also fixes MDEV-20198: Instant ALTER TABLE is not crash safe
InnoDB dictionary recovery wrongly used the READ UNCOMMITTED isolation
level, causing mismatches. For example, if a table was renamed or
replaced in a transaction, according to READ UNCOMMITTED the table might
not exist at all.
We implement READ COMMITTED isolation level for accessing the dictionary
tables SYS_TABLES, SYS_COLUMNS, SYS_INDEXES, SYS_FIELDS, SYS_VIRTUAL,
SYS_FOREIGN, SYS_FOREIGN_COLS. For most of these tables, no secondary
index exists. For the secondary indexes (on SYS_TABLES.ID,
SYS_FOREIGN.FOR_NAME, SYS_FOREIGN.REF_NAME), we will always look up
the primary key in the clustered index and check if the record actually
is a committed version.
dict_check_sys_tables(): Recover tablespaces also from delete-marked
committed records, so that if a matching .ibd file exists, it will
be removed by fil_delete_tablespace() when the committed delete-marked
SYS_INDEXES record of the clustered index is purged
in row_purge_remove_clust_if_poss_low().
fil_ibd_open(): Change the Boolean parameter "validate" to a ternary
one, to suppress error messages when the file might not exist.
It is possible that a .ibd file was deleted and the server shut down
before the SYS_INDEXES and SYS_TABLES records were purged. Hence, if
dict_check_sys_tables() finds a committed delete-marked record,
we must not complain if the tablespace file is not found.
On Windows, we must treat ERROR_PATH_NOT_FOUND (directory not found)
in the same way as ERROR_FILE_NOT_FOUND. This fixes a few failures where
a previous test successfully executed DROP DATABASE (and deleted all
files and the directory), but a committed delete-marked SYS_TABLES
record had not been purged before server restart.
dict_getnext_system_low(): Do not filter out delete-marked records.
dict_startscan_system(), dict_getnext_system(): Do filter out
delete-marked records, for accessing the INFORMATION_SCHEMA tables.
dict_sys_tables_rec_read(): Return the DB_TRX_ID of the committed
version of the record. This is needed in dict_load_table_low().
dict_load_foreign_cols(), dict_load_foreign(): Add a parameter for
the current transaction identifier. In some DDL operations, the
FOREIGN KEY constraints are being loaded from the data dictionary
before the DDL transaction has been committed. For SYS_FOREIGN
and SYS_FOREIGN_COLS, we must implement the special case of
READ COMMITTED that the changes of the uncommitted current transaction
are visible.
dict_load_foreign(): Validate the table name. We could find a
SYS_FOREIGN.ID via a committed delete-marked secondary index record
that does not match the REF_NAME or FOR_NAME of the secondary index record.
dict_load_index_low(): Optionally take the table as a parameter,
so that table->def_trx_id can be updated in case of a
committed delete-marked SYS_INDEXES record corresponding
to DROP INDEX, but not corresponding to an index stub of ADD INDEX.
dict_load_indexes(): Do not update table->def_trx_id
in case of delete-marked records.
rec_is_metadata(), rec_offs_make_valid(), rec_get_offsets_func(),
row_build_low(): Relax some assertions. We may now have
!index->is_instant() even if a metadata record is present in the index.
Previously, the recovery of instant ADD/DROP COLUMN assumed
that READ UNCOMMITTED of the data dictionary will be performed.
Now, we will have a READ COMMITTED copy of the data dictionary
cache, and a READ UNCOMMITTED copy of the metadata record.
btr_page_reorganize_low(): Correctly update the FIL_PAGE_TYPE
when rolling back an instant ADD/DROP COLUMN operation.
row_rec_to_index_entry_impl(): Relax some assertions,
and disallow accessing "extra" fields. This fixes the recovery
of a crash during an instant ADD COLUMN after a successful
instant DROP COLUMN, in the test innodb.instant_alter_crash.
Tested by: Matthias Leich
InnoDB background statistics recalculation may acquire a metadata lock
also on the table itself, not only on the tables that store the
statistics.
Hence, it is better to disable InnoDB persistent statistics altogether.
This fixes up commit 9b8d9a1db3.
The autopkgtest was failing due to a missing *.changes file. That file
is part of the source build, so revert autobake-deb.sh back to NOT using
-b for Gitlab-CI/Salsa-CI runs.
This bug affected queries with IN predicates that contain parameter markers
in the value list. Such queries are executed via prepared statements.
The problem appeared only if the number of elements in the value list
was greater than the set value of the system variable
in_predicate_conversion_threshold.
The patch unconditionally prohibits conversion of an IN predicate to the
equivalent IN predicand if the value list of the IN predicate contains
parameter markers.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
from mysql.plugin table
Fix: Since mysql_upgrade runs commands from mysql_system_tables.fix,
SQL commands were added that check for the semisync plugins in
INFORMATION_SCHEMA.PLUGINS and, if they are not present there, delete
them from mysql.plugin.