Remove table_count from Query_tables_list (not used, moved to MYSQL_LOCK).
Rename table_count in LEX to avoid confusing it with other table counters.
1. For INSERT..SELECT statements: don't include the table/view the data
is inserted into in the list of leaf tables
2. Remove duplicated and dead code related to table_count
Problem:
========
When using sequences, the function
sequence_definition::write(TABLE *table, bool all_fields)
is used to save DML/DDL updates to sequence tables (e.g. nextval,
setval, and alter). Prior to this patch, the value all_fields was
always false when invoked via nextval and setval, which forced the
bitmap to only include changed columns.
Solution:
========
Change all_fields, when invoked via nextval and setval, to depend on
binlog_row_image, such that it is false when binlog_row_image is
MINIMAL, and true otherwise.
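A minimal sketch of the resulting call-site logic, assuming the enum value
for MINIMAL is BINLOG_ROW_IMAGE_MINIMAL (thd, seq and table stand for the
surrounding context and are illustrative):

  /* Sketch only: how all_fields is derived on the nextval/setval path. */
  bool all_fields=
    (thd->variables.binlog_row_image != BINLOG_ROW_IMAGE_MINIMAL);
  seq->write(table, all_fields);   /* sequence_definition::write() */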
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
This reverts the revert 4f62dfe676
and fixes the hang that was introduced when ctrl_mutex was removed.
The test mariabackup.compress_qpress covers this code, but the
test is skipped if a stand-alone qpress executable is not available.
It is not available in many software repositories, possibly because
the code base has not been updated since 2010.
This was tested with an executable that was compiled from the source
code at http://www.quicklz.com/qpress-11-source.zip (after adding
a missing #include <unistd.h> for the definition of isatty()).
Compared to the grandparent commit (before the revert), the changes
are as follows:
comp_thread_ctxt_t::done_cond: A separate condition for completed
compression, signaling that thd->to_len has been updated.
compress_write(): Replace some threads[i] with thd.
Reset thd->to_len = 0 after consuming the compressed data.
compress_worker_thread_func(): After consuming the uncompressed
data, set thd->data_avail = FALSE. After compressing, signal
thd->done_cond.
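As an illustration of the handshake described above (not the actual
mariabackup code; only to_len, data_avail and done_cond come from the text,
the mutex, data_cond and the helpers are assumptions):

  /* Worker thread (compress_worker_thread_func), sketch: */
  pthread_mutex_lock(&thd->mutex);
  while (!thd->data_avail)                     /* wait for uncompressed data */
    pthread_cond_wait(&thd->data_cond, &thd->mutex);
  thd->data_avail= false;                      /* uncompressed data consumed */
  thd->to_len= compress_chunk(thd);            /* hypothetical helper */
  pthread_cond_signal(&thd->done_cond);        /* to_len is now valid */
  pthread_mutex_unlock(&thd->mutex);

  /* Writer side (compress_write), sketch: */
  pthread_mutex_lock(&thd->mutex);
  while (thd->to_len == 0)                     /* wait for compressed data */
    pthread_cond_wait(&thd->done_cond, &thd->mutex);
  write_compressed(thd->to, thd->to_len);      /* hypothetical helper */
  thd->to_len= 0;                              /* consumed; ready for next block */
  pthread_mutex_unlock(&thd->mutex);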
Apparently newer libedit is readline-compatible enough
to be detected as readline, with USE_NEW_READLINE_INTERFACE defined
and USE_LIBEDIT_INTERFACE not defined.
Let's set the locale unconditionally, independently from the
readline/libedit variant. It's already happening anyway now,
unless one specifies --default-character-set explicitly.
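The unconditional setup amounts to a plain setlocale() call early in client
startup, roughly as in this sketch (not the exact patch):

  #include <locale.h>

  int main(int argc, char **argv)
  {
    /* Use the user's locale regardless of the readline/libedit variant. */
    setlocale(LC_ALL, "");
    /* ... rest of client startup ... */
    return 0;
  }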
optimize_semi_joins() calls update_sj_state() to update semi-join
optimization state in the JOIN class.
The greedy_search() algorithm considers different join prefixes
and then picks one table to put into the join prefix.
Most of the semi-join optimization state is kept in the table's entry
in join->positions[cur_prefix_size].
However, it also needs to call update_sj_state() to update the
semi-join optimization state in the JOIN class.
There is one exception, which is the cause of this bug: when we're
inside optimize_semi_join_nests() and are optimizing a subquery,
optimize_semi_joins() does nothing and does not call update_sj_state().
greedy_search() must not call it either.
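Conceptually, the fix gives greedy_search() the same guard, roughly as in
this sketch (JOIN::emb_sjm_nest is assumed here to be the flag that is set
while optimize_semi_join_nests() optimizes a subquery, and the argument
list of update_sj_state() is a guess, not the real signature):

  /* Sketch: update the JOIN-level semi-join state only when we are
     not inside optimize_semi_join_nests(). */
  if (!join->emb_sjm_nest)
    join->update_sj_state(remaining_tables, best_table, idx, &record_count);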
The partition storage engine ignores the return (error) values of
handler::info(). As a result, a query that should be aborted is
not aborted, and the server then fails an assertion.
Item_func_xor and Item_cond are both derived classes of Item_bool_func.
Spider casts an Item_func_xor pointer to an Item_cond pointer and then calls
the member function argument_list(), which Item_func_xor does not implement.
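A hedged sketch of the safer pattern, i.e. checking the item's type before
treating it as an Item_cond (only the class and member names mentioned above
come from the server code; process() is a hypothetical helper):

  static void process(Item *arg);               /* hypothetical helper */

  static void walk_condition(Item *item)
  {
    if (item->type() == Item::COND_ITEM)        /* AND / OR: safe to downcast */
    {
      Item_cond *cond= static_cast<Item_cond *>(item);
      List_iterator_fast<Item> it(*cond->argument_list());
      while (Item *arg= it++)
        process(arg);
    }
    else if (item->type() == Item::FUNC_ITEM)   /* e.g. Item_func_xor */
    {
      Item_func *func= static_cast<Item_func *>(item);
      for (unsigned i= 0; i < func->argument_count(); i++)
        process(func->arguments()[i]);
    }
  }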
- InnoDB mistakenly identifies the non-unique FTS_DOC_ID index as
FTS_DOC_ID_INDEX while loading the table. dict_load_indexes()
should check whether the index is unique before assigning
fts_doc_id_index
Before version 10, GCC would think that a right shift of an
unsigned char returns int. Let us explicitly cast that back,
to silence a bogus -Wconversion warning.
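For example (a sketch, not the line that was changed):

  unsigned char b= 0x80;
  /* Integer promotion makes (b >> 4) an int; cast it back so that
     GCC older than version 10 does not emit a bogus -Wconversion warning. */
  unsigned char high_nibble= static_cast<unsigned char>(b >> 4);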
- A ROW_FORMAT=REDUNDANT InnoDB table fails to set flags2 while loading
the table. This leads to an "Upgrade index name failure" during an alter
operation. InnoDB should set FTS_AUX_HEX_NAME in flags2 when
fts is being loaded
The test was reported to fail sporadically with this diff:
--- mysql-test/main/information_schema_tables.result
+++ mysql-test/main/information_schema_tables.reject
@@ -21,6 +21,8 @@
disconnect con1;
connection default;
DROP VIEW IF EXISTS vv;
+Warnings:
+Note 4092 Unknown VIEW: 'test.vv'
in the "The originally reported non-deterministic test" part.
Disabling warnings around the DROP VIEW statement.
The bug is caused by a mechanism similar to MDEV-21027.
The function check_insert_or_replace_autoincrement failed to open
all the partitions on INSERT ... SELECT statements, which results in
the assertion error.
prepare_inplace_alter_table_dict(): If the table will not be rebuilt,
preserve all of the original ROW_FORMAT, including the compressed
page size flags related to ROW_FORMAT=COMPRESSED.
buf_page_print(): Dump the buffer page 32 bytes (64 hexadecimal digits)
per line. In this way, the limitation in mtr
("Data too long for column 'line'") will not be triggered.
Also, do not bother decoding the page contents, because everything
is present in the hexadecimal output.
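The output layout is roughly as in this sketch (illustrative only, not the
actual buf_page_print() code):

  #include <cstdio>
  #include <cstddef>

  /* Dump a page as 32 bytes (64 hexadecimal digits) per line. */
  static void hex_dump_page(FILE *f, const unsigned char *page, size_t size)
  {
    for (size_t i= 0; i < size; i++)
    {
      fprintf(f, "%02x", page[i]);
      if ((i & 31) == 31)            /* 32 bytes written: end the line */
        fputc('\n', f);
    }
  }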
dict_index_find_on_id_low(): Merge to dict_index_get_if_in_cache_low().
The direct call in buf_page_print() was prone to crashing, in case the
table definition was concurrently evicted or dropped from the
data dictionary cache.
Spider supports (or at least allows) INSERT DELAYED, but the
documentation does not list Spider as a storage engine that supports
"INSERT DELAYED".
Also, although not mentioned in the documentation, "INSERT DELAYED" is
not intended to be executed inside a transaction, as can be seen from
the list of supported storage engines.
The current implementation allows executing a delayed insert on a
remote transactional table and this breaks the consistency ensured by
the transaction.
We also remove "internal_delayed", one of the Spider table parameters.
The documentation says:
> Whether to transmit existence of delay to remote servers when
> executing an INSERT DELAYED statement on local server.
This table parameter is only used for "INSERT DELAYED".
Reviewed by: Nayuta Yanagisawa
Take into account that, when a simple key cache is being prepared for resizing, no disk blocks might be assigned to it.
Reviewer: IgorBabaev <igor@mariadb.com>
This commit contains a workaround for a bug known as 'Red Hat issue 1870279'
(a connection-reset-by-peer issue in socat versions 1.7.3.3 to 1.7.4.0), which
causes crashes during SST using mariabackup (when openssl is used).
Also fixed the broken logic of automatic generation of the Diffie-Hellman
parameters for socat versions older than 1.7.3 (which default to 512-bit
values instead of 2048-bit ones).
Recent adventures in liburing and btrfs have revealed some
kernel-version-dependent bugs. Having an accurate kernel version in a
bug report helps correlate these errors sooner.
On Linux, /proc/version contains the kernel version.
FreeBSD has kern.version (per man 8 sysctl), so include that too.
Example output:
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
Kernel version: Linux version 5.19.0-0.rc2.21.fc37.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1), GNU ld version 2.38-14.fc37) #1 SMP PREEMPT_DYNAMIC Mon Jun 13 15:27:24 UTC 2022
Segmentation fault (core dumped)
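Reading the string on Linux is trivial, roughly as in this sketch (error
handling and the FreeBSD sysctl path are omitted; not the exact patch):

  #include <cstdio>

  /* Print the running kernel version as found in /proc/version. */
  static void print_kernel_version(FILE *out)
  {
    char buf[512];
    if (FILE *f= fopen("/proc/version", "r"))
    {
      if (fgets(buf, sizeof buf, f))
        fprintf(out, "Kernel version: %s", buf);
      fclose(f);
    }
  }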
The test starts an EXPLAIN of a multi-table join and tries to KILL it.
There are no sync points.
Depending on how fast the hardware is and on optimizer development,
it might kill the EXPLAIN at some random point in time (generally unrelated
to the Bug#28598 it was supposed to test), or the EXPLAIN might finish
before the KILL and the test will fail.
We do not really care about the exact result; we only care that the
statistics will be accessed. The result could change depending on
when some statistics were updated in the background or when some
committed delete-marked rows were purged from other tables on
which persistent statistics are enabled.
ha_partition::set_auto_increment_if_higher expects that
part_share->auto_inc_initialized is true or that can_use_for_auto_inc_init()
is false (but, as the comment of this method says, it returns false
only if we use the Spider engine with a DROP TABLE or ALTER TABLE query).
However, part_share->auto_inc_initialized becomes true only after all
partitions are opened (since 6dce6aeceb).
Therefore, I added a conditional expression in order to read all
partitions when we execute REPLACE on a table that has an
AUTO_INCREMENT column.
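The added condition is conceptually along these lines (a sketch; the exact
predicate, call site and the open_all_partitions flag are assumptions based
on the description above):

  /* Sketch: force opening all partitions for REPLACE on a table with an
     AUTO_INCREMENT column, so auto_inc initialization can complete. */
  if (!part_share->auto_inc_initialized &&
      thd->lex->sql_command == SQLCOM_REPLACE &&
      table->found_next_number_field)
    open_all_partitions= true;        /* hypothetical flag */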
Reviewed by: Nayuta Yanagisawa
Reviewed by: Alexey Botchkov
The current Debian package revision scheme when using the
debian/autobake-deb.sh script is:
'1:VERSION+maria~LSBNAME'
For example, if VERSION is 10.6.8 and LSBNAME is buster, then the
version and revision are:
'1:10.6.8+maria~buster'
This can lead to problems, as distro code names are not lexically ordered.
For example, Debian LSBNAMEs can be:
Codename Buster is Debian version 10
Codename Bookworm is Debian version 12
This happens because, in the ASCII table, the first two letters of
Buster, 'Bu', are 0x42 and 0x75, and the first two letters of
Bookworm, 'Bo', are 0x42 and 0x6F.
When apt is upgrading, this means that
1:10.6.8+maria~buster is considered bigger than 1:10.6.8+maria~bookworm,
which leads to problems in the dist-upgrade process.
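The ordering problem can be seen with a plain string comparison
(illustrative only):

  #include <cstdio>
  #include <cstring>

  int main()
  {
    /* 'u' (0x75) > 'o' (0x6F), so "buster" sorts after "bookworm",
       even though Debian 10 (buster) is older than Debian 12 (bookworm). */
    printf("%d\n", strcmp("buster", "bookworm") > 0);   /* prints 1 */
    return 0;
  }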
To solve the problem, the revision format is changed to:
'1:VERSION+maria~(deb|ubu)LSBVERSION'
For example, the revision for Debian 11 is now:
1:10.6.8+maria~deb11
and for Ubuntu 22.04 is now:
1:10.6.8+maria~ubu2204
The following new variables are added to debian/autobake-deb.sh:
* VERSION, which contains the whole version string
* LSBVERSION, which contains the LSB version of the distro
* LSBID, which contains the LSB ID (Debian or Ubuntu)
Also, CODENAME is changed to LSBNAME as it is more declarative.
During a rebuild of a partition, the partitioning engine calls
alter_close_table(), which does not unlock and close some table
instances of the target table.
Then, the engine fails to rename partitions because there are table
instances that are still locked.
Closing all the table instances of the target table fixes the bug.
Test prior to this change:
CURRENT_TEST: connect.mysql
mysqltest: At line 485: query 'INSERT IGNORE INTO t3 VALUES (5),(10),(30)' failed: ER_GET_ERRMSG (1296): Got error 122 '(1062) Duplicate entry '10' for key 'PRIMARY' [INSERT INTO `t1` (`a`) VALUES (10)]' from CONNECT
So the ignore table option wasn't getting passed to the remote server.
Closes #2008
The file '/usr/bin/mariadb_config' has been moved from the Debian package
libmariadbd-dev to libmariadb-dev since MariaDB version 10.2.
This leads to a situation where an upgrade will not succeed but fail
with this kind of error message:
* trying to overwrite '/usr/bin/mariadb_config', which is also in package libmariadbd-dev 1:10.2.44+maria~bionic
Adding libmariadbd-dev to the 'Breaks' field of libmariadb-dev in the
Debian control files solves the situation, and upgrading no longer errors.
PageConverter::update_header(): Remove an unnecessary write.
The field that was originally called FIL_PAGE_FILE_FLUSH_LSN only
made sense for the first page of the system tablespace
(initially, for the first page of each file of the system tablespace).
It never had any meaning for .ibd files, and it lost its original
meaning in MariaDB Server 10.8.1 when
commit b07920b634 (MDEV-27199)
removed the ability to start without ib_logfile0.
If the most significant 32 bits of the LSN are nonzero, this
unnecessary write would write the wrong encryption key identifier
to the page. The first page of any file is never encrypted,
so normally those bytes should be 0 for any .ibd file.
The zoneinfo directory is littered with non-timezone information files.
These frequently have extensions not present in real timezone files.
Also, leapseconds is frequently there and is not a timezone file.
The else condition is meant to be here to define the functions
if the Red Hat include file isn't there.
Fixes: commit 467011bcac / MDEV-26614
RedHat -> Red Hat by Daniel Black
MariaDB never supported this form of preemption via high-priority
transactions. This error code should not have been added in the
first place, in commit 2e814d4702.
(Try 2) (Cherry-pick back into 10.3)
The code that updates semi-join optimization state for a join order prefix
had several bugs. The visible effect was bad optimization for FirstMatch or
LooseScan strategies: they either weren't considered when they should have
been, or considered when they shouldn't have been.
In order to hit the bug, the optimizer needs to consider several different
join prefixes in a certain order. Queries with "obvious" query plans which
prune all join orders except one are not affected.
Internally, the bugs in updates of semi-join state were:
1. restore_prev_sj_state() assumed that
"we assume remaining_tables doesnt contain @tab"
which wasn't true.
2. Another bug in this function: it did remove bits from
join->cur_sj_inner_tables but never added them.
3. greedy_search() adds tables into the join prefix but neglects to update
the semi-join optimization state. (It does update the nested outer join
state; see the call to check_interleaving_with_nj(best_table). But there
is no matching call to update the semi-join state.
This wasn't visible because most of the state is in the POSITION
structure, which is updated. But there is also state in the JOIN.)
The patch:
- Fixes all of the above
- Adds JOIN::dbug_verify_sj_inner_tables() which is used to verify the
state is correct at every step.
- Renames advance_sj_state() to optimize_semi_joins().
- Introduces update_sj_state(), which ideally should have been called
"advance_sj_state", but I didn't reuse the name to avoid confusion.
DBUG_PUSH_EMPTY is used by thr_mutex.cc.
If there are 4G of DBUG_PUSH_EMPTY calls, then DBUG_POP_EMPTY will
cause a crash when DBUGCloseFile() tries to free an object that
was never allocated.