Redundant InnoDB table fails to set flags2 while loading
the table. This leads to an "Upgrade index name failure" during an ALTER
operation. InnoDB should set FTS_AUX_HEX_NAME in flags2 when
FTS is being loaded.
Log MDL state transitions. Trace-friendly message
format. DBUG_LOCK_FILE replaced by thread-local storage.
Logged states legend:
Seized   - lock was acquired without waiting
Waiting  - lock is waiting
Acquired - lock was acquired after waiting
Released - lock was released
Deadlock - lock was aborted due to deadlock
Timeout  - lock was aborted due to timeout > 0
Nowait   - lock was aborted due to zero timeout
Killed   - lock was aborted due to kill message
OOM      - lock was not acquired because of out of memory
Usage:
mtr --mysqld=--debug=d,mdl,query:i:o,/tmp/mdl.log
To clean up garbage messages:
sed -i -re \
'/(mysql|performance_schema|sys|mtr)\// d; /MDL_BACKUP_/ d' \
/tmp/mdl.log
The test was reported to fail sporadically with this diff:
--- mysql-test/main/information_schema_tables.result
+++ mysql-test/main/information_schema_tables.reject
@@ -21,6 +21,8 @@
disconnect con1;
connection default;
DROP VIEW IF EXISTS vv;
+Warnings:
+Note 4092 Unknown VIEW: 'test.vv'
in the "The originally reported non-deterministic test" part.
Disabling warnings around the DROP VIEW statement.
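A minimal sketch of the usual mysqltest idiom for such a fix (the exact
placement inside the test file is an assumption, not taken from the patch):
--disable_warnings
DROP VIEW IF EXISTS vv;
--enable_warnings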
Spider mixes the comma join with other join types, and thus
ERROR 1054 occurs. This is a well-known issue caused by the higher
precedence of JOIN over the comma (,).
We can fix the problem simply by using JOINs instead of commas.
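A hedged illustration of the precedence issue, using made-up tables
t1, t2 and t3 (not the actual Spider-generated query):
-- ERROR 1054: Unknown column 't1.a' in 'on clause', because JOIN binds
-- tighter than the comma, so the ON clause is resolved against
-- (t2 JOIN t3) only and cannot see t1.
SELECT * FROM t1, t2 JOIN t3 ON t1.a = t3.a;
-- Works: explicit JOINs keep t1 visible to the ON clause.
SELECT * FROM t1 JOIN t2 JOIN t3 ON t1.a = t3.a;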
The bug is caused by a mechanism similar to that of MDEV-21027.
The function check_insert_or_replace_autoincrement failed to open
all the partitions on INSERT ... SELECT statements, which results in the
assertion error.
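A hypothetical shape of the failing statement, with made-up table names
and the Spider/partitioning setup omitted (illustrative only, not the
actual test case):
-- tp: a Spider-backed partitioned table with an AUTO_INCREMENT key;
-- src: any table supplying the rows.
INSERT INTO tp (val) SELECT val FROM src;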
The heap-use-after-free is caused by the following mechanism:
* In the execution of FLUSH TABLE WITH READ LOCK, the function
spider_free_trx_conn() is called and the connections held by
SPIDER_TRX::trx_conn_hash are freed.
* Then, an instance of ha_spider still holds the freed connections
because they are also referenced from ha_spider::conns.
The ha_spider instance is kept in a lock structure until the
corresponding table is unlocked.
* Spider accesses ha_spider::conns on the implicit UNLOCK TABLE
issued by BEGIN.
In the first place, when the connections have been freed, it means
that there are really no remote tables locked by Spider.
Thus, there is no need for Spider to access ha_spider::conns on the
implicit UNLOCK TABLE.
We can fix the bug by removing the above mentioned access to
ha_spider::conns. We also modified spider_free_trx_conn() so that it
frees the connections only when no table is locked to reduce the
chance of another heap-use-after-free on ha_spider::conns.
prepare_inplace_alter_table_dict(): If the table will not be rebuilt,
preserve all of the original ROW_FORMAT, including the compressed
page size flags related to ROW_FORMAT=COMPRESSED.
buf_page_print(): Dump the buffer page 32 bytes (64 hexadecimal digits)
per line. In this way, the limitation in mtr
("Data too long for column 'line'") will not be triggered.
Also, do not bother decoding the page contents, because everything
is present in the hexadecimal output.
dict_index_find_on_id_low(): Merge to dict_index_get_if_in_cache_low().
The direct call in buf_page_print() was prone to crashing, in case the
table definition was concurrently evicted or dropped from the
data dictionary cache.
Spider supports (or at least allows) INSERT DELAYED but the
documentation does not specify spider as a storage engine that supports
"INSERT DELAYED".
Also, although not mentioned in the documentation, "INSERT DELAYED" is
not intended to be executed inside a transaction, as can be seen from
the list of supported storage engines.
The current implementation allows executing a delayed insert on a
remote transactional table and this breaks the consistency ensured by
the transaction.
We also remove "internal_delayed", one of the Spider table parameters.
Documentation says,
> Whether to transmit existence of delay to remote servers when
> executing an INSERT DELAYED statement on local server.
This table parameter is only used for "INSERT DELAYED".
Reviewed by: Nayuta Yanagisawa
Take into account that, while a simple key cache is being prepared for resizing, no disk blocks might be assigned to it.
Reviewer: IgorBabaev <igor@mariadb.com>
Fixed tpool timer implementation on POSIX.
Prior to this patch, under some specific rare circumstances (concurrency
related), timer callback execution might be skipped.
This commit contains a workaround for a bug known as 'Red Hat issue 1870279'
(connection reset by peer issue in socat versions 1.7.3.3 to 1.7.4.0) which
further causes crashes during SST using mariabackup (when openssl is used).
Also fixed the broken logic of automatic generation of the Diffie-Hellman
parameters for socat versions older than 1.7.3 (which default to 512-bit
values instead of 2048-bit ones).
Recent adventures in liburing and btrfs have shown up some kernel
version dependent bugs. Having an accurate kernel version in a bug report
makes it possible to correlate these errors sooner.
On Linux, /proc/version contains the kernel version.
FreeBSD has kern.version (per man 8 sysctl), so include that too.
Example output:
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
Kernel version: Linux version 5.19.0-0.rc2.21.fc37.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1), GNU ld version 2.38-14.fc37) #1 SMP PREEMPT_DYNAMIC Mon Jun 13 15:27:24 UTC 2022
Segmentation fault (core dumped)
The test starts an EXPLAIN of a multi-table join and tries to KILL it.
There are no sync points.
Depending on how fast the hardware is and on optimizer development,
it might kill EXPLAIN at some random point in time (generally unrelated
to the Bug#28598 it was supposed to test), or EXPLAIN might finish
before the KILL and the test will fail.
We do not really care about the exact result; we only care that the
statistics will be accessed. The result could change depending on
when some statistics were updated in the background or when some
committed delete-marked rows were purged from other tables on
which persistent statistics are enabled.
ha_partition::set_auto_increment_if_higher expects
part_share->auto_inc_initialized to be true or can_use_for_auto_inc_init()
to be false (but, as the comment of this method says, it returns false
only if we use the Spider engine with a DROP TABLE or ALTER TABLE query).
However, part_share->auto_inc_initialized becomes true only after all
partitions are opened (since 6dce6aeceb).
Therefore, I added a conditional expression in order to read all
partitions when we execute REPLACE on a table that has an
AUTO_INCREMENT column.
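A hypothetical shape of the statement affected by this change, with
made-up names (the Spider/partitioning setup is omitted):
-- t: a partitioned table with an AUTO_INCREMENT column; an explicit id
-- value makes ha_partition check whether the counter needs to be raised.
REPLACE INTO t (id, val) VALUES (100, 'x');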
Reviewed by: Nayuta Yanagisawa
Reviewed by: Alexey Botchkov
The bug was that build_notnull_conds_for_range_scans() did not take into
account that join_tab is not yet sorted with constant tables first.
Fixed the bug by explicitly testing whether a table is a const table.
In commit 73fee39ea6 (MDEV-27985)
a regression was introduced that would cause bpage=nullptr to
be dereferenced.
buf_flush_LRU_list_batch(): Always terminate the loop upon
encountering a null pointer.
The current Debian package revision scheme, when using the
debian/autobake-deb.sh script, is:
'1:VERSION+maria~LSBNAME'
For example, if VERSION is 10.6.8 and LSBNAME is
buster, then the version and revision is:
'1:10.6.8+maria~buster'
This can lead to problems, as distro code names are not lexically ordered.
For example, Debian LSBNAMEs can be:
Codename Buster is Debian version 10
Codename Bookworm is Debian version 12
This happens because, in the ASCII table, the first two letters of
Buster, 'Bu', are 0x42 and 0x75, while the first two letters of
Bookworm, 'Bo', are 0x42 and 0x6F.
When apt is upgrading, this means that
1:10.6.8+maria~buster is greater than 1:10.6.8+maria~bookworm,
which leads to problems in the dist-upgrade process.
To solve the problem, the revision format is changed to:
'1:VERSION+maria~(deb|ubu)LSBVERSION'
Example for Debian 11 is now:
1:10.6.8+maria~deb11
and for Ubuntu 22.04 is now:
1:10.6.8+maria~ubu2204
The following new variables are added to debian/autobake-deb.sh:
* VERSION, which contains the whole version string
* LSBVERSION, which contains the LSB version of the distro
* LSBID, which contains the LSB ID (Debian or Ubuntu)
Also, CODENAME is changed to LSBNAME as it is more declarative.
During a rebuild of a partition, the partitioning engine calls
alter_close_table(), which does not unlock and close some table
instances of the target table.
Then, the engine fails to rename partitions because there are table
instances that are still locked.
Closing all the table instances of the target table fixes the bug.
The MariaDB codebase is huge and Lintian has lots of tests that
can fire false-positive warnings, which leads to a situation
where real problems can't be spotted.
Suspend obvious false-positive Lintian warnings and
let the Lintian problems that need some love stand
out.
Suspends in package mariadb-test-data
Supporting the BSD family requires using '/usr/bin/env perl' and not '/usr/bin/perl'.
The Perl scripts are for testing and not for production in the mariadb-test-data
package:
* incorrect-path-for-interpreter
There are several files with national encoding which are test files, so they
can't be in a Unicode charset:
* national-encoding
Several test paths are intentionally repeated:
* repeated-path-segment
Suspends in package mariadb-test
Supporting the BSD family requires using '/usr/bin/env perl' and not '/usr/bin/perl'.
The Perl scripts are for testing and not for production in the mariadb-test-data
package:
* incorrect-path-for-interpreter
Suspends in the source package
Remade some 'version-substvar-for-external-package' suspends to use
regex.
Mroonga is missing the source file 'jquery-ui-1.8.18.custom.js'; correct
the Lintian suspend with regex:
* source-is-missing
There are several files with very long line lengths. Add suspends
in several places for those that can't be corrected. Most
of them are test result files, SQL test files, or intentionally
long lines that can't be split:
* very-long-line-length-in-source-file
There are several autogenerated C++ files which probably should not
be there, but they should not do any harm:
* source-contains-autogenerated-visual-c++-file
Test prior to this change:
CURRENT_TEST: connect.mysql
mysqltest: At line 485: query 'INSERT IGNORE INTO t3 VALUES (5),(10),(30)' failed: ER_GET_ERRMSG (1296): Got error 122 '(1062) Duplicate entry '10' for key 'PRIMARY' [INSERT INTO `t1` (`a`) VALUES (10)]' from CONNECT
So the ignore table option wasn't getting passed to the remote server.
Closes #2008
The file '/usr/bin/mariadb_config' has been moved from the Debian package
libmariadbd-dev to libmariadb-dev since MariaDB version 10.2.
This leads to a situation where an upgrade will not succeed but fails
with this kind of error message:
* trying to overwrite '/usr/bin/mariadb_config', which is also in package libmariadbd-dev 1:10.2.44+maria~bionic
Adding libmariadbd-dev to the 'Breaks' field of the libmariadb-dev Debian
control file solves the situation, and upgrading won't error anymore.
PageConverter::update_header(): Remove an unnecessary write.
The field that was originally called FIL_PAGE_FILE_FLUSH_LSN only
made sense for the first page of the system tablespace
(initially, for the first page of each file of the system tablespace).
It never had any meaning for .ibd files, and it lost its original
meaning in MariaDB Server 10.8.1 when
commit b07920b634 (MDEV-27199)
removed the ability to start without ib_logfile0.
If the most significant 32 bits of the LSN are nonzero, this
unnecessary write would write the wrong encryption key identifier
to the page. The first page of any file is never encrypted,
so normally those bytes should be 0 for any .ibd file.
The zoneinfo directory is littered with non-timezone information files.
These frequently contain extensions not present in real timezone files.
Also, leapseconds is frequently there and is not a timezone file.
The else condition is meant to be here to define the functions
if the Red Hat include file isn't there.
Fixes: commit 467011bcac / MDEV-26614
RedHat -> Red Hat by Daniel Black
Continue the effort of a previous commit (PR#2114) which changed the man
pages titles from MariaDB to MySQL, to further update the man pages.
Update the man page NAME sections to use mariadb-* instead of mysql* for
MariaDB binaries that are drop-in replacements for MySQL equivalents,
indicating that the commands are actually of the MariaDB version.
Before:
NAME
mysql_upgrade - check tables for MariaDB upgrade
...
After:
NAME
mariadb-upgrade - check tables for MariaDB upgrade (mysql_upgrade
is now a symlink to mariadb-upgrade)
...
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
ha_innobase::build_template may initialize m_prebuilt->idx_cond
even if there is no valid pushed_idx_cond_keyno.
This potentially problematic piece of code was found while
working on MDEV-27366
MariaDB never supported this form of preemption via high-priority
transactions. This error code should not have been added in the
first place, in commit 2e814d4702.
Also fix the failing spider partition test.
With some small datatype changes to the Linux/Solaris my_gethwaddr
implementation, the hardware address of AIX can be returned. This is an
important aspect for Spider (and UUID).
Spider test change reviewed by Nayuta Yanagisawa.
my_gethwaddr review by Monty in #2081