Passing $opt_parallel as $childs is wrong: a child can be killed before
it connects, and then $childs is never decremented for it.
Another problem (and the actual cause of this bug): a child can be
killed and never close the server socket. This can happen e.g. after
the unmaskable KILL signal. In such a case the socket is closed by
reaping the child, but that never happens inside the socket-reading
loop in run_test_server().
The proper design is a waitless (non-blocking) reap of children inside
the socket loop; when no children remain, we finish the socket loop
(see the sketch after the scenario lists below). Since there is a
Windows variant where we don't control the children via waitpid(), all
the clients must normally close the socket, and only that can finish
the socket loop. For the Unix variant we treat that case as "all
children closed the socket, but not all have died yet", and for that
we do a final blocking waitpid() (this was done before the patch as
well).
To be more complete, we now handle 3 end-of-game scenarios on Unix:
1. all children closed the socket and all children died: everything is
handled by the socket loop;
2. all children closed the socket, but not all died yet: we wait for
the surviving children to die after exiting the socket loop;
3. not all children closed the socket, but all children died:
everything is handled by the socket loop.
On Windows there is only one end-of-game scenario:
all children close the socket.
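A minimal C++ sketch of the Unix-side idea (illustrative only; the real
implementation is Perl code in run_test_server(), and the names below
are made up):

    #include <sys/wait.h>

    static int children_alive;   // incremented whenever a child is forked

    // WNOHANG makes waitpid() return immediately, so exited children
    // are reaped even if they never closed the server socket.
    static void reap_children_nonblocking() {
      while (children_alive > 0 &&
             waitpid(-1, nullptr, WNOHANG) > 0)
        children_alive--;
    }

    void socket_loop() {
      while (children_alive > 0) {
        reap_children_nonblocking();
        if (children_alive == 0)
          break;                   // scenario 3: all children died
        // ... poll/accept the server socket with a timeout; when a
        //     client closes its socket, account for it (scenario 1) ...
      }
      // Scenario 2: all sockets are closed, but some children are still
      // alive; finish with a blocking wait, as before the patch.
      while (children_alive > 0 && waitpid(-1, nullptr, 0) > 0)
        children_alive--;
    }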
66832e3a introduced a change that prints core dumps in a very detailed
format. That is not user-friendly at all, but it serves as a tool for
debugging hard-to-reproduce bugs.
The proper way to implement this:
1. it must be controlled by a command-line option and an environment
variable;
2. detailed traces must be the default for buildbots only; for user
invocations, normal stack traces should be printed.
The control options are MTR_PRINT_CORE and --print-core, which accept
the following values:

  no             Don't print core
  short          Print stack trace of failed thread
  medium         Print stack traces of all threads
  detailed       Print all stack traces with debug context
  custom:<code>  Use debugger commands <code> to print stack trace
The default setting is 'short' (see the env_or_default() call in
pre_setup()).
For the environment variable, wrong values are silently ignored and
fall back to the default setting (see env_or_default()).
The command-line option --print-core (or -C) overrides the environment
variable. Its default value is 'short' if not specified explicitly
(the same env_or_default() call in pre_setup()). Explicit values are
checked for validity.
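A hedged C++ sketch of that precedence (env_or_default() itself is Perl
code; the helper below is only an illustration): an invalid
MTR_PRINT_CORE silently falls back to the default, while an invalid
explicit --print-core value is rejected.

    #include <cstdlib>
    #include <set>
    #include <stdexcept>
    #include <string>

    static bool is_valid(const std::string &v) {
      static const std::set<std::string> fixed{"no", "short", "medium",
                                               "detailed"};
      return fixed.count(v) || v.rfind("custom:", 0) == 0;
    }

    std::string print_core_setting(const char *cmdline_value) {
      if (cmdline_value) {              // --print-core / -C wins
        if (!is_valid(cmdline_value))   // explicit values are validated
          throw std::invalid_argument("bad --print-core value");
        return cmdline_value;
      }
      const char *env = std::getenv("MTR_PRINT_CORE");
      if (env && is_valid(env))         // valid environment value
        return env;
      return "short";                   // wrong values silently fall back
    }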
The --print-method option specifies which debugger is used to print
cores. On Windows there is only one choice: cdb. On Unix the values
are: gdb, dbx, lldb, auto. The default value is auto, in which we try
all available debuggers until one succeeds.
setup_boot_args(), setup_client_args() and setup_args() traverse
data structures on each invocation. Even if performance is not
important in a Perl script (though it definitely saves some CO2), this
nonetheless provokes questions when reading the code. Reading and
debugging such code is not convenient.
The better way is to prepare all the data in advance in an easily
readable form, and to do the validation step before any further
processing.
Use mtr_report() instead of die() like the other code does.
TODO: do_args() does even more data-processing magic. If possible,
prepare that data in advance in pre_setup(), according to the above
strategy.
On all Unix platforms, link libexecinfo as a system library
if it contains the backtrace_symbols_fd() function and libc does not
contain it.
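For reference, a minimal (illustrative) use of the function in
question; backtrace_symbols_fd() writes the symbolized stack directly
to a file descriptor without calling malloc(), which is why it is
preferred in crash handlers:

    #include <execinfo.h>  // libexecinfo on *BSD, libc with glibc
    #include <unistd.h>

    void print_stack() {
      void *frames[64];
      int n = backtrace(frames, 64);
      // One symbolized frame per line, written straight to stderr.
      backtrace_symbols_fd(frames, n, STDERR_FILENO);
    }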
Also remove cmake/os/OpenBSD.cmake, as after the fix it serves no purpose.
Remove table_count from Query_tables_list (not used, moved to MYSQL_LOCK).
Rename table_count in LEX to avoid confusing it with other table
counters.
1. For INSERT..SELECT statements: don't include the table/view the
data is inserted into in the list of leaf tables
2. Remove duplicated and dead code related to table_count
Problem:
========
When using sequences, the function
sequence_definition::write(TABLE *table, bool all_fields)
is used to save DML/DDL updates to sequence tables (e.g. nextval,
setval, and alter). Prior to this patch, the value all_fields was
always false when invoked via nextval and setval, which forced the
bitmap to only include changed columns.
Solution:
========
Make all_fields, when invoked via nextval and setval, depend on
binlog_row_image, such that it is false when binlog_row_image is
MINIMAL, and true otherwise.
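A sketch of the resulting logic (illustrative C++, not the literal
MariaDB source; the real call sites are in the nextval/setval paths):

    enum binlog_row_image_t { BINLOG_ROW_IMAGE_MINIMAL,
                              BINLOG_ROW_IMAGE_FULL };

    struct TABLE;  // opaque here

    void write_sequence_update(TABLE *table,
                               binlog_row_image_t row_image) {
      // Before the patch this was a hard-coded 'false', so the bitmap
      // covered only the changed columns even for full row images.
      bool all_fields = (row_image != BINLOG_ROW_IMAGE_MINIMAL);
      // sequence_definition::write(table, all_fields);
    }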
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
This reverts the revert 4f62dfe676
and fixes the hang that was introduced when ctrl_mutex was removed.
The test mariabackup.compress_qpress covers this code, but the
test is skipped if a stand-alone qpress executable is not available.
It is not available in many software repositories, possibly because
the code base has not been updated since 2010.
This was tested with an executable that was compiled from the source
code at http://www.quicklz.com/qpress-11-source.zip (after adding
a missing #include <unistd.h> for the definition of isatty()).
Compared to the grandparent commit (before the revert), the changes
are as follows:
comp_thread_ctxt_t::done_cond: A separate condition for completed
compression, signaling that thd->to_len has been updated.
compress_write(): Replace some threads[i] with thd.
Reset thd->to_len = 0 after consuming the compressed data.
compress_worker_thread_func(): After consuming the uncompressed
data, set thd->data_avail = FALSE. After compressing, signal
thd->done_cond.
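A self-contained sketch of the handshake pattern (std::thread
primitives instead of the actual mariabackup types; the real layout of
comp_thread_ctxt_t differs): the worker waits for data_avail,
compresses, then signals a separate done_cond so the writer knows that
to_len is valid.

    #include <condition_variable>
    #include <mutex>

    struct thd_ctxt {
      std::mutex mtx;
      std::condition_variable data_cond; // producer->worker: data ready
      std::condition_variable done_cond; // worker->producer: to_len set
      bool data_avail = false;
      size_t to_len = 0;                 // size of the compressed output
    };

    void worker_iteration(thd_ctxt *thd) {
      std::unique_lock<std::mutex> lk(thd->mtx);
      thd->data_cond.wait(lk, [thd] { return thd->data_avail; });
      thd->data_avail = false;           // uncompressed data consumed
      thd->to_len = 42;                  // placeholder: publish result
      thd->done_cond.notify_one();       // distinct condition: no hang
    }

    void writer_iteration(thd_ctxt *thd) {
      std::unique_lock<std::mutex> lk(thd->mtx);
      thd->data_avail = true;
      thd->data_cond.notify_one();
      thd->done_cond.wait(lk, [thd] { return thd->to_len != 0; });
      // ... write out thd->to_len bytes of compressed data ...
      thd->to_len = 0;                   // reset after consuming
    }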
Apparently newer libedit is readline-compatible enough
to be detected as readline, with USE_NEW_READLINE_INTERFACE defined
and USE_LIBEDIT_INTERFACE not defined.
Let's set the locale unconditionally, independently of the
readline/libedit variant. It is already happening anyway now,
unless one specifies --default-character-set explicitly.
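A hedged illustration of "unconditionally" (not the actual client
code): pick up the locale from the environment before initializing
whichever line-editing library the binary was linked against.

    #include <clocale>

    int main(int argc, char **argv) {
      // Honor LC_* / LANG regardless of whether we were built with
      // readline or libedit.
      setlocale(LC_ALL, "");
      // ... initialize the line editor and run the client loop ...
      return 0;
    }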
optimize_semi_joins() calls update_sj_state() to update semi-join
optimization state in the JOIN class.
The greedy_search() algorithm considers different join prefixes,
and then picks one table to put into the join prefix.
Most of the semi-join optimization state is in the table's entry
in join->positions[cur_prefix_size].
However, greedy_search() also needs to call update_sj_state() to
update the semi-join optimization state in the JOIN class.
There is one exception, which is the cause of this bug: when we're
inside optimize_semi_join_nests() and are optimizing a subquery,
optimize_semi_joins() does nothing; it doesn't call update_sj_state().
greedy_search() must not do that either.
The partition storage engine ignores the return (error) values of
handler::info(). As a result, a query that should be aborted is not
aborted, and the server then violates an assertion.
Item_func_xor and Item_cond are both derived classes of Item_bool_func.
Spider converts an Item_func_xor * to an Item_cond * and then calls the
member function argument_list(), which Item_func_xor does not
implement.
- InnoDB mistakenly identifies the non-unique FTS_DOC_ID index as
FTS_DOC_ID_INDEX while loading the table. dict_load_indexes()
should check whether the index is unique before assigning it to
fts_doc_id_index.
Before version 10, GCC would think that a right shift of an
unsigned char returns int. Let us explicitly cast that back,
to silence a bogus -Wconversion warning.
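For example (illustrative only), code of this shape used to warn:

    typedef unsigned char uchar;

    uchar high_nibble(uchar b) {
      // Integer promotion turns (b >> 4) into an int; pre-10 GCC then
      // warns about the implicit int -> uchar narrowing under
      // -Wconversion. The explicit cast silences the bogus warning.
      return static_cast<uchar>(b >> 4);
    }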
- A Redundant row format InnoDB table fails to set flags2 while
loading the table. This leads to an "Upgrade index name failure"
during an ALTER operation. InnoDB should set FTS_AUX_HEX_NAME in
flags2 when FTS is being loaded.
The test was reported to fail sporadically with this diff:
--- mysql-test/main/information_schema_tables.result
+++ mysql-test/main/information_schema_tables.reject
@@ -21,6 +21,8 @@
disconnect con1;
connection default;
DROP VIEW IF EXISTS vv;
+Warnings:
+Note 4092 Unknown VIEW: 'test.vv'
in the "The originally reported non-deterministic test" part.
Disabling warnings around the DROP VIEW statement.
The bug is caused by a mechanism similar to MDEV-21027.
The function check_insert_or_replace_autoincrement() failed to open
all the partitions on INSERT SELECT statements, which resulted in the
assertion error.
prepare_inplace_alter_table_dict(): If the table will not be rebuilt,
preserve all of the original ROW_FORMAT, including the compressed
page size flags related to ROW_FORMAT=COMPRESSED.
buf_page_print(): Dump the buffer page contents 32 bytes
(64 hexadecimal digits) per line. In this way, the limitation in mtr
("Data too long for column 'line'") will not be triggered.
Also, do not bother decoding the page contents, because everything
is present in the hexadecimal output.
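A sketch of the resulting format (not the actual buf_page_print()
code): each output line covers 32 bytes, i.e. 64 hexadecimal digits,
which stays within mtr's 'line' column limit.

    #include <cstddef>
    #include <cstdio>

    void dump_page(const unsigned char *page, size_t size) {
      for (size_t i = 0; i < size; i += 32) {
        for (size_t j = i; j < i + 32 && j < size; j++)
          std::printf("%02x", page[j]); // 64 hex digits per full line
        std::putchar('\n');
      }
    }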
dict_index_find_on_id_low(): Merge to dict_index_get_if_in_cache_low().
The direct call in buf_page_print() was prone to crashing, in case the
table definition was concurrently evicted or dropped from the
data dictionary cache.
Spider supports (or at least allows) INSERT DELAYED, but the
documentation does not list Spider as a storage engine that supports
"INSERT DELAYED".
Also, although not mentioned in the documentation, "INSERT DELAYED" is
not intended to be executed inside a transaction, as can be seen from
the list of supported storage engines.
The current implementation allows executing a delayed insert on a
remote transactional table, and this breaks the consistency ensured
by the transaction.
We also remove "internal_delayed", one of the Spider table parameters.
The documentation says:
> Whether to transmit existence of delay to remote servers when
> executing an INSERT DELAYED statement on local server.
This table parameter is only used for "INSERT DELAYED".
Reviewed by: Nayuta Yanagisawa
Take into account that a simple key cache being prepared for resizing
might have no disk blocks assigned to it.
Reviewer: IgorBabaev <igor@mariadb.com>
This commit contains a workaround for a bug known as 'Red Hat issue
1870279' (a connection-reset-by-peer issue in socat versions 1.7.3.3
to 1.7.4.0), which causes crashes during SST using mariabackup (when
openssl is used).
Also fixed the broken logic of automatic generation of the
Diffie-Hellman parameters for socat versions less than 1.7.3 (which
default to 512-bit values instead of 2048-bit ones).
Recent adventures in liburing and btrfs have turned up some
kernel-version-dependent bugs. Having an accurate kernel version in a
bug report makes it possible to correlate these errors sooner.
On Linux, /proc/version contains the kernel version.
FreeBSD has kern.version (per man 8 sysctl), so include that too.
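A minimal illustration of where that line comes from on Linux (the
actual reporting is done by mtr; FreeBSD would query the kern.version
sysctl instead):

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
      // /proc/version holds the running kernel's version as one line.
      std::ifstream f("/proc/version");
      std::string line;
      if (std::getline(f, line))
        std::cout << "Kernel version: " << line << '\n';
      return 0;
    }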
Example output:
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
Kernel version: Linux version 5.19.0-0.rc2.21.fc37.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1), GNU ld version 2.38-14.fc37) #1 SMP PREEMPT_DYNAMIC Mon Jun 13 15:27:24 UTC 2022
Segmentation fault (core dumped)
The test starts an EXPLAIN of a multi-table join and tries to KILL it.
There are no sync points.
Depending on how fast the hardware is and on optimizer development,
it might kill the EXPLAIN at some random point in time (generally
unrelated to the Bug#28598 it was supposed to test), or the EXPLAIN
might finish before the KILL and the test will fail.
We do not really care about the exact result; we only care that the
statistics will be accessed. The result could change depending on
when some statistics were updated in the background or when some
committed delete-marked rows were purged from other tables on
which persistent statistics are enabled.
ha_partition::set_auto_increment_if_higher() expects that
part_share->auto_inc_initialized is true or that
can_use_for_auto_inc_init() is false (but, as the comment on this
method says, the latter returns false only if we use the Spider engine
with a DROP TABLE or ALTER TABLE query).
However, part_share->auto_inc_initialized becomes true only after all
partitions are opened (since 6dce6aeceb).
Therefore, I added a conditional expression in order to read all
partitions when we execute REPLACE on a table that has an
AUTO_INCREMENT column.
Reviewed by: Nayuta Yanagisawa
Reviewed by: Alexey Botchkov