Fixed a bug introduced in MDEV-11345: the server did not start if
non-English error messages were set in the startup parameters.
Added the lc_messages=de_DE option to an existing test case.
Problem:
=======
The problem is that InnoDB does not add the table to fts_slots if DROP TABLE fails, yet it marks the table as being in fts_slots while processing the sync message. A subsequent ALTER statement therefore assumes the table is in the queue and tries to remove it, but InnoDB cannot find the table in fts_slots.
Solution:
=========
i) Remove in_queue in fts_t while processing the fts sync message.
ii) Add the table to fts_slots when drop table fails.
[Variant 2 of the fix: collect the attached conditions]
Problem:
make_join_select() has a section of code which starts with
"We plan to scan all rows. Check again if we should use an index."
The code in that section will [unnecessarily] re-run the range
optimizer using this condition:
  condition_attached_to_current_table AND current_table's_ON_expr
Note that the original invocation of the range optimizer in
make_join_statistics() was done using the whole select's WHERE condition.
Taking the whole select's WHERE condition and using multiple-equalities
allowed the range optimizer to infer more range restrictions.
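To illustrate (a hypothetical example; the tables and columns below are invented, not taken from the actual test suite): the multiple equality between t1.col, t2.key_col and the constant 5 is only visible when the whole WHERE clause is considered, and only then can the range optimizer infer a range restriction on t2.key_col.
```
# Hypothetical schema and query. The condition attached to t2 alone
# (t2.key_col = t1.col) contains no constant, but combined with t1.col = 5
# it implies t2.key_col = 5, which enables a range access on t2's index.
CREATE TABLE t1 (col INT);
CREATE TABLE t2 (key_col INT, filler VARCHAR(100), KEY(key_col));
SELECT * FROM t1, t2 WHERE t1.col = 5 AND t2.key_col = t1.col;
```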
The fix:
- Do range optimization using a condition that is an AND of this table's
condition and all of the previous tables' conditions.
- Also, fix the range optimizer to prefer SEL_ARGs with type=KEY_RANGE
over SEL_ARGs with type=MAYBE_KEY, regardless of the key part.
Computing

  key_and(SEL_ARG(type=MAYBE_KEY, key_part=1),
          SEL_ARG(type=KEY_RANGE, key_part=2))

will now produce the SEL_ARG with type=KEY_RANGE.
Post-push fix: the aria_pack_mdev14183 test is unstable.
The fix is the following:
1. Disable the test for embedded server.
2. Create a non-"transactional" Aria table in the test (see the sketch below),
as aria_pack does not support "transactional" Aria tables.
Column definition order in st_maria_share::columndef can differ from the
order of fields in the record (see also st_maria_share::column_nr,
st_maria_columndef::column_nr, _ma_column_nr_write(),
_ma_column_nr_read()). This was not taken into account in the aria_pack
tool.
The fix is to initialize elements of HUFF_COUNTS array in the correct
order.
The only change is to the version number.
In MySQL 5.6.46, the copyright comments in a number of files were changed
in mysql/mysql-server@f1a006ece7
but there was no functional change to InnoDB code.
This was also reflected in XtraDB. We are not changing the copyright
comments in MariaDB Server for now.
Between MySQL 5.6.46 and 5.6.47, InnoDB was not changed at all.
Actually, we had forgotten to update the InnoDB version number to
5.6.46. With this change, we are updating InnoDB
from 5.6.45 to 5.6.47 and XtraDB from 5.6.45-86.1 to 5.6.46-86.2.
WL#6326 in MariaDB 10.2.2 introduced a potential hang on purge or rollback
when an index tree is being shrunk by multiple levels.
This fix is based on
mysql/mysql-server@f2c5852630
with the main difference that our version of the test case uses
DEBUG_SYNC instrumentation on ROLLBACK, not on purge.
btr_cur_will_modify_tree(): Simplify the check further.
This is the actual bug fix.
row_undo_mod_remove_clust_low(), row_undo_mod_clust(): Add DEBUG_SYNC
instrumentation for the test case.
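For reference, a hedged sketch of what DEBUG_SYNC coordination around ROLLBACK typically looks like in an MTR test; the sync point name rollback_undo_clust and the signal names are invented for illustration and are not the ones added by this patch.
```
--connect (con1,localhost,root,,test)
# Pause the ROLLBACK at a (hypothetical) sync point inside the undo code.
SET DEBUG_SYNC = 'rollback_undo_clust SIGNAL rollback_paused WAIT_FOR go';
--send ROLLBACK

--connection default
SET DEBUG_SYNC = 'now WAIT_FOR rollback_paused';
# ... run the concurrent operation that must observe the shrinking tree ...
SET DEBUG_SYNC = 'now SIGNAL go';

--connection con1
--reap
```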
Remove the offending test case. This sort of error is hard to test in
all possible corner cases, which makes the test less valuable. The
overflow error will be covered by warnings generated by the compiler,
which is much more reliable in the general case.
The long semaphore wait appeared to be caused by the following
pattern in the MTR test:
```
SET DEBUG_SYNC = "now SIGNAL wsrep_after_certification_continue";
SET DEBUG_SYNC = "now SIGNAL signal.wsrep_apply_cb;
```
Raising two signals, one right after the other, caused one signal to
overwrite the other before the first signal was consumed by the waiting
thread. This caused one thread to be stuck until the debug sync point
timed out.
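One general way to avoid losing a signal in such tests (a sketch of the pattern only, not necessarily the exact change applied here) is to wait for an acknowledgement from the consuming thread before raising the next signal:
```
# The consuming thread would have to SIGNAL back (here via a hypothetical
# "certification_done" signal) from its own sync point, so that the test
# knows the first signal was consumed before the second one is raised.
SET DEBUG_SYNC = "now SIGNAL wsrep_after_certification_continue";
SET DEBUG_SYNC = "now WAIT_FOR certification_done";
SET DEBUG_SYNC = "now SIGNAL signal.wsrep_apply_cb";
```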
In this scenario:
- There is a possible range access for table T
- And there is a ref access on the same index which uses fewer key parts
- The join optimizer picks the ref access (because it is cheaper)
- make_join_select() applies this heuristic to switch to range access:
/* Range uses longer key; Use this instead of ref on key */
The join buffer will be used without JOIN_TAB::make_scan_filter() having
been called. This means that conditions which should be checked when
reading table T will instead be checked after T is joined with the
contents of the join buffer.
Fixed this by adding a make_scan_filter() check.
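A hedged illustration of the shape of such a query (the schema and names are invented for this sketch):
```
# ref access on idx via t2.kp1 = t1.a uses one key part, while a range scan
# over (kp1, kp2) can use both; if make_join_select() switches from ref to
# range here, conditions attached to t2 must still be checked before t2's
# rows are placed into the join buffer.
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (kp1 INT, kp2 INT, filler VARCHAR(100), KEY idx (kp1, kp2));
SELECT * FROM t1 JOIN t2 ON t2.kp1 = t1.a
WHERE t2.kp1 BETWEEN 1 AND 10 AND t2.kp2 < 100;
```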
(updated patch after backport to 10.3)
(Fix testcase on Windows)
Analysis:
========
'max_binlog_cache_size' is configured and a huge transaction is executed. When
the size of the transaction's events exceeds 'max_binlog_cache_size', the events
cannot be written to the binary log cache and a cache write error is raised.
Upon a cache write error the statement is rolled back and the transaction cache
should be truncated to the position of the previous statement. The truncate
operation should reset the cache to the earlier valid position and flush the new
changes. Even though the flush is successful, the cache write error is still
marked. The truncate code interprets the cache write error as a cache flush
failure and returns abruptly without modifying the write cache parameters.
Hence the cache is in an invalid state. When a COMMIT statement is executed in
this session, it tries to flush the contents of the transaction cache to the
binary log. Since the cache has partial events, the cache write operation
reports the 'writer.remains' assert.
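A hedged sketch of the scenario (table, sizes and values are illustrative only):
```
# The second INSERT exceeds max_binlog_cache_size, fails and is rolled back;
# the transaction cache must then be truncated back to the end of the first
# INSERT. Before this fix the subsequent COMMIT could hit the
# 'writer.remains' assert because the cache was left in an invalid state.
SET GLOBAL max_binlog_cache_size = 65536;
# (reconnect so the session picks up the new cache size)
CREATE TABLE t1 (c LONGBLOB) ENGINE=InnoDB;
START TRANSACTION;
INSERT INTO t1 VALUES (REPEAT('a', 100));
INSERT INTO t1 VALUES (REPEAT('a', 1024 * 1024));
COMMIT;
```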
Fix:
===
The binlog truncate function resets the cache to a specified size. As a first
step of truncation, clear the cache write error flag that was raised during the
earlier execution. With this, new errors that surface during cache truncation
can be clearly identified.
MDEV-18046: Assortment of crashes, assertion failures and ASAN errors in mysql_show_binlog_events
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following assert when ASAN is enabled:

  uint32 binlog_get_uncompress_len(const char*):
  Assertion `(buf[0] & 0xe0) == 0x80' failed
Fix:
===
**Part11: Converted debug assert to error handler code**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> causes a variety of failures, some of which are
listed below. It is not a race condition issue, but there is some
non-determinism in it.
Analysis:
========
"show binlog events from <pos>" code considers the user given position as a
valid event start position. The code starts reading data from this event start
position onwards and tries to map it to a set of known events. Each event has
a specific event structure and asserts have been added to ensure that read
event data satisfies the event specific requirements. When a random position
is supplied to "show binlog events command" the event structure specific
checks will fail and they result in assert.
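For example (the offset below is arbitrary, chosen only to illustrate a position that does not coincide with an event boundary):
```
# An arbitrary offset can land in the middle of an event; the parsing code
# then reads garbage and, before these fixes, could hit event-specific
# asserts instead of returning an error.
SHOW BINLOG EVENTS FROM 365;
```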
Fix:
====
The fix is split into different parts. Each part addresses either an ASAN
issue or an assert/crash.
**Part1: Checksum based position validation when checksum is enabled**
Using the checksum, validate the very first event read at the user-specified
position. If there is a checksum mismatch, report an appropriate error for the
invalid event.