The hang can happen when a connection issuing KILL CONNECTION targets a
victim which is in the committing phase.
A two-resource deadlock arises where the killer holds the victim's
LOCK_thd_data and requires the victim's trx mutex.
The victim, on the other hand, holds its own trx mutex but requires
LOCK_thd_data in wsrep_commit_ordered(). Hence a classic two-thread
deadlock happens.
The fix in this commit changes the InnoDB commit so that
wsrep_commit_ordered() is not called while holding the trx mutex. With this,
the wsrep patch's commit-time mutex locking no longer violates the locking
protocol of the KILL command (i.e. LOCK_thd_data -> trx mutex).
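A minimal sketch of the reordering (the function below and its call sites
are illustrative placeholders, not the exact InnoDB code):

  /* Drop the trx mutex before the wsrep hook, so a concurrent KILL
     (which takes LOCK_thd_data first, then the trx mutex) cannot
     deadlock against the committing thread. */
  static void commit_one_trx(THD *thd, trx_t *trx)
  {
    trx_mutex_enter(trx);
    /* ... commit bookkeeping that genuinely needs the trx mutex ... */
    trx_mutex_exit(trx);        /* released before the wsrep call */
    wsrep_commit_ordered(thd);  /* acquires LOCK_thd_data internally */
  }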
Also, a new test case has been added in galera.galera_bf_kill.test for the
scenario where a client connection is killed in the committing phase.
A temporary table is needed for window function computation, but if only a
NAMED WINDOW SPEC is used and there is no window function, then there is no
need to create a temporary table, as there is no stage that computes a
WINDOW FUNCTION. For example, SELECT a FROM t1 WINDOW w AS (ORDER BY a)
names a window but never evaluates a window function.
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
The major change is to replace all instances of
table->m_needs_reopen= true;
to
table->mark_table_for_reopen();
The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and that, when we
reopen the tables, we first close all of them before reopening
and locking them.
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
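A sketch of the guard from the last point (the flag accessor and the exact
reopen_tables() arguments are assumptions, not the actual API):

  /* Skip the relatively expensive reopen pass when nothing was marked. */
  if (thd->locked_tables_list.some_table_marked_for_reopen())
    thd->locked_tables_list.reopen_tables(thd, false);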
MDEV-22617 Galera node crashes when trying to log to slow_log table in
streaming replication mode
Other things:
- Changed the name of wsrep_after_row() (two arguments) to
  wsrep_after_row_internal() (one argument), to not depend on a function
  signature with unused arguments.
When my_vsnprintf() is patched, the code disabled with
'WAITING_FOR_BUGFIX_TO_VSPRINTF' should be enabled again. Also, all %b
formats in this patch should be reverted to %s.
MDEV-22531 Remove maria::implicit_commit()
MDEV-22607 Assertion `ha_info->ht() != binlog_hton' failed in
MYSQL_BIN_LOG::unlog_xa_prepare
From the handler point of view, Aria now looks like a transactional
engine. One effect of this is that we don't need to call
maria::implicit_commit() anymore.
This change also forces the server to call trans_commit_stmt() after doing
any read or writes to system tables. This work will also make it easier
to later allow users to have system tables in other engines than Aria.
To handle the case that Aria doesn't support rollback, a new
handlerton flag, HTON_NO_ROLLBACK, was added; it is set for engines that
have transactions without rollback (for the moment only the binlog and Aria).
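A sketch of how an engine declares the flag at plugin initialization (the
init function is illustrative, not the actual Aria or binlog code):

  static int example_engine_init(void *p)
  {
    handlerton *hton= (handlerton *) p;
    /* Transactional commit path, but no rollback support. */
    hton->flags|= HTON_NO_ROLLBACK;
    return 0;
  }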
Other things:
- Moved the freeing of MARIA_SHARE to a separate function, as the MARIA_SHARE
  can still be part of a transaction even if the table has been closed.
- Changed the Aria checkpoint to use the new MARIA_SHARE free function. This
  fixes a possible memory leak when using S3 tables.
- Changed testing of binlog_hton to instead test for HTON_NO_ROLLBACK
- Removed the check of has_transaction_manager() in handler.cc; since the
  transaction was started by the engine, we can assume it supports
  transactions.
- Added a new class 'start_new_trans' that can be used to start independent
  sub-transactions, for example while reading mysql.proc, or when using help
  or status tables (see the usage sketch after this list).
- open_system_tables...() and open_proc_table_for_read() no longer take an
  Open_tables_backup list. This is now handled by 'start_new_trans'.
- Split thd::has_transactions() into thd::has_transactions() and
  thd::has_transactions_and_rollback().
- Added handlerton code to free cached transactions objects.
Needed by InnoDB.
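A sketch of the intended 'start_new_trans' usage when reading a system
table (the restore method name is assumed from the description above):

  static void example_read_proc(THD *thd)
  {
    start_new_trans new_trans(thd);       /* independent sub-transaction */
    /* ... open and read mysql.proc here ... */
    new_trans.restore_old_transaction();  /* back to the caller's trx */
  }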
squash! 2ed35999f2a2d84f1c786a21ade5db716b6f1bbc
All changes (except one) are of the type
thd->transaction. -> thd->transaction->
thd->transaction points by default to 'thd->default_transaction'.
This allows us to 'easily' have multiple active transactions for a
THD object, for example when reading data from the mysql.proc table.
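A hypothetical sketch of the resulting THD layout (the type name
Transaction_state is illustrative only):

  class THD
  {
  public:
    Transaction_state default_transaction;  /* embedded default */
    Transaction_state *transaction;         /* currently active one */
    THD() : transaction(&default_transaction) {}
    /* Pointing 'transaction' at another object gives an independent
       active transaction, e.g. while reading mysql.proc. */
  };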
MDEV-22468 BACKUP STAGE BLOCK_COMMIT should block commit in the Aria engine
This is needed to ensure that mariabackup works properly with Aria tables
This code adds new calls to ha_maria::implicit_commit(). These will be
deleted by MDEV-22531 (Remove maria::implicit_commit()).
Item_null_result did not override type_handler() because of a wrong merge
of d8a9b524f2 (MDEV-14221) from 10.1.
Overriding type_handler().
Removing the old-style field_type() method; it is not relevant any more.
The code incorrectly assumed in multiple places that TYPELIB
values cannot have 0x00 bytes inside. In fact they can:
CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);
Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00.
So this fix is partial.
It fixes 0x00 bytes in many (but not all) places:
- In the middle or in the end of a value:
CREATE TABLE t1 (a ENUM(0x6100) ...);
CREATE TABLE t1 (a ENUM(0x610062) ...);
- In the beginning of the first value:
CREATE TABLE t1 (a ENUM(0x0061));
CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));
- In the beginning of the second (and following) value of the *last* ENUM/SET
in the table:
CREATE TABLE t1 (a ENUM('a',0x0061));
CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));
However, it does not fix 0x00 when:
- 0x00 byte is in the beginning of a value of a non-last ENUM/SET
causes an error:
CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'
This is an ambiguous case and will be fixed separately.
We need a new TYPELIB encoding to fix this.
Details:
- unireg.cc
The function pack_header() incorrectly used strlen() to detect
a TYPELIB value length. Adding a new function typelib_values_packed_length()
(sketched after these details), which uses TYPELIB::type_lengths[n] to
detect the n-th value length, and reusing the new function in pack_header()
and packed_fields_length().
- table.cc
fix_type_pointers() assumed in multiple places that values cannot have
0x00 inside and used strlen(TYPELIB::type_names[n]) to set
the corresponding TYPELIB::type_lengths[n].
Also, fix_type_pointers() did not check the encoded data for consistency.
Rewriting fix_type_pointers() code to populate TYPELIB::type_names[n] and
TYPELIB::type_lengths[n] at the same time, so no additional loop
with strlen() is needed any more.
Adding many data consistency tests.
Fixing the main loop in fix_type_pointers() to use memchr() instead of
strchr() to handle 0x00 properly.
Fixing create_key_infos() to return the result in a LEX_STRING rather
than in a char*.
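A sketch of the new length helper mentioned in the unireg.cc item (the
per-value separator byte is an assumption about the FRM packing, not a
confirmed detail):

  static size_t typelib_values_packed_length(const TYPELIB *t)
  {
    size_t length= 0;
    for (uint i= 0; i < t->count; i++)
      length+= t->type_lengths[i] + 1;  /* value bytes + separator byte */
    return length;
  }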
Executing CHECK TABLE with streaming replication enabled reports the
error "Streaming replication not supported with
binlog_format=STATEMENT".
Administrative commands such as CHECK TABLE are not replicated and
temporarily set the binlog format to statement.
To avoid the problem, report the error only for active transactions
for which streaming replication is enabled.
Analysis:
========
The RESET MASTER TO # command deletes all binary log files listed in the
index file, resets the binary log index file to be empty, and creates a new
binary log with number #. When the user-provided binary log number is
greater than the maximum allowed value '2147483647', the server fails to
generate a new binary log. The RESET MASTER statement marks the binlog
closure status as 'LOG_CLOSE_TO_BE_OPENED' and exits. Statements that follow
RESET MASTER and try to write to the binary log then find
log_state != LOG_CLOSED, proceed to write to the binary log cache, and this
results in a crash.
Fix:
===
During MYSQL_BIN_LOG open, if generation of the new binary log name fails,
the "log_state" needs to be marked as "LOG_CLOSED". With this, further
statements will find the binary log closed and will skip writing to it.
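A simplified sketch of where the fix applies inside MYSQL_BIN_LOG open
(surrounding logic elided; treat this as a shape, not the exact code):

  /* If the next binlog name cannot be generated (e.g. the index would
     exceed 2147483647), mark the log closed so later statements skip
     binlog writes instead of crashing. */
  if (generate_new_name(new_name, name))
  {
    log_state= LOG_CLOSED;
    return 1;
  }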
Problem:
When handling a query like this:
VALUES ('') UNION SELECT _utf16 0x0020 COLLATE utf16_bin;
Type_handler_string_result::Item_hybrid_func_fix_attributes()
tried to apply a character set conversion to Item_type_holder,
which caused a crash on DBUG_ASSERT(0) inside Item_type_holder::val_str().
Fix:
Overriding Item_type_holder's methods to avoid this, as follows:
bool const_item() const { return false; }
bool is_expensive() { return true; }
Removing a wrong DBUG_ASSERT:
When Item_param gets "unfixed" in cleanup(), its "fixed" gets assigned
to false, while item_type keeps its value. So the assert was wrong.
Perhaps, instead of removing the assert, it was possible to reset
item_type to NO_VALUE in cleanup. But this is not very important:
it's implemented in 10.4 in a better way:
Item_param::is_fixed() always returns true and it does not need to be "unfixed".
1. Code simplification:
Item_default_value handled all these values:
a. DEFAULT(field)
b. DEFAULT
c. IGNORE
and had various conditions to distinguish (a) from (b) and from (c).
Introducing a new abstract class Item_contextually_typed_value_specification,
to handle (b) and (c), so the hierarchy now looks as follows:
Item
  Item_result_field
    Item_ident
      Item_field
        Item_default_value                        - DEFAULT(field)
  Item_contextually_typed_value_specification
    Item_default_specification                    - DEFAULT
    Item_ignore_specification                     - IGNORE
2. Introducing a new virtual method is_evaluable_expression() to
determine if an Item is:
- a normal expression, so its val_xxx()/get_date() methods can be called
- or just an expression substitute, whose value methods cannot be called.
3. Disallowing Items that are not evaluable expressions in table value
constructors, e.g. VALUES (DEFAULT).
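A condensed sketch of the new virtual (defaults assumed for illustration):

  class Item
  {
  public:
    virtual bool is_evaluable_expression() const { return true; }
  };

  class Item_contextually_typed_value_specification :public Item
  {
  public:
    /* DEFAULT and IGNORE are placeholders, not evaluable values. */
    bool is_evaluable_expression() const override { return false; }
  };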
The TIME_ZONE_ID_UNKNOWN return code from GetDynamicTimeZoneInformation()
does not mean failure.
It only means that the daylight saving dates in the returned struct are not
valid. TIME_ZONE_ID_INVALID means failure; in this case "unknown" should be
returned.
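A sketch of the corrected handling (Win32 API; conversion of the returned
struct to a zone name is elided):

  DYNAMIC_TIME_ZONE_INFORMATION tzinfo;
  DWORD ret= GetDynamicTimeZoneInformation(&tzinfo);
  if (ret == TIME_ZONE_ID_INVALID)
    return "unknown";  /* real failure */
  /* TIME_ZONE_ID_UNKNOWN falls through: the zone data is usable, only
     the daylight saving dates in tzinfo are not valid. */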
In a multithreaded build (confirmed at least with Windows ninja and msbuild),
at the end of the "sql" target compilation only 2 processors are used,
compiling either sql_yacc.cc or sql_yacc_ora.cc.
Thus, linking of dependent executables or libraries is delayed while the
build underuses the CPU.
Rearrange the source list to improve parallelism.
The assert was caused by early cleanup of a user variable that participates
twice in BINLOG @var,@var; the base code did not expect its value to be
cleared prematurely.
Fixed by relocating the user variable destruction to after all operations
with its value are over.