Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will inquire partitions
about how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (it actually
used uint for the buffer size calculation and would have had an
underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size.
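A minimal sketch of the sizing handshake described in the last item,
with invented names (mrr_buffer_size_limit, rowid_size, rows_in_range
are placeholders, not the server's actual variables or the real
DsMrr_impl code): a caller that passes *buffer_size=0 is asking how
much memory is wanted, and the answer is the needed amount capped at
the configured limit; the size_t arithmetic also avoids the underflow
mentioned above.

#include <cstddef>
#include <algorithm>

// Placeholder for @@mrr_buffer_size; illustration only.
static const size_t mrr_buffer_size_limit= 262144;

// A caller (such as ha_partition probing its partitions) passes
// *buffer_size == 0 to ask "how much do you need?"; a non-zero value
// means "this much buffer is available".
void dsmrr_choose_buffer_size(size_t rowid_size, size_t rows_in_range,
                              size_t *buffer_size)
{
  size_t needed= rowid_size * rows_in_range;      // size_t: no underflow
  size_t capped= std::min(needed, mrr_buffer_size_limit);
  if (*buffer_size == 0)                          // sizing inquiry
    *buffer_size= capped;
  else                                            // real buffer offered
    *buffer_size= std::min(*buffer_size, capped);
}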
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call
handler->clone with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() function of the affected storage engines ignore
the name argument and get the name elsewhere.
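A hedged sketch of the ::clone() pattern just described, with
illustrative types (example_handler and normalized_path are
placeholders, not the engines' real classes): the name argument is
ignored and the path is taken from the already-open handler.

#include <string>

// Illustrative handler type; not the server's real handler class.
struct example_handler
{
  std::string normalized_path;   // path this handler was opened with

  // The name argument may be the table name rather than the partition
  // name, so it is intentionally ignored; the clone reuses the path
  // that the open handler already knows.
  example_handler *clone(const char * /* name: ignored */) const
  {
    example_handler *copy= new example_handler;
    copy->normalized_path= normalized_path;
    return copy;
  }
};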
This can happen if one uses a backup where not all aria_log.* files
are copied or if the last one is too short. In this case the data
files will contain data that is not in the logs and recovery will fail.
Other things:
- Fixed tprint() to not print an extra newline to the debug trace
MDEV-18451 Server crashes in maria_create_trn_for_mysql
upon ALTER TABLE
The problem was that when a table was locked many times, not all
instances were removed from the transaction by
_ma_remove_table_from_trnman().
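A loose illustration of the idea behind the fix, with invented types
(table_instance and the vector are stand-ins, not the real trnman
structures): when the same table is locked several times, removal has
to drop every matching instance, not just the first one.

#include <vector>
#include <algorithm>

struct table_instance { const void *share; };   // invented stand-in

// Remove *all* instances of the given table from the transaction's
// list; erasing only the first match is the bug described above.
void remove_table_from_trn(std::vector<table_instance> &used_tables,
                           const void *share)
{
  used_tables.erase(std::remove_if(used_tables.begin(), used_tables.end(),
                                   [share](const table_instance &t)
                                   { return t.share == share; }),
                    used_tables.end());
}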
For CMAKE_BUILD_TYPE=Debug, the default MYSQL_MAINTAINER_MODE=AUTO
implies -Werror along with other flags in cmake/maintainer.cmake,
which would break the debug builds when CMAKE_CXX_FLAGS include -O2.
This fix includes a backport of 6dd3f24090
from MariaDB 10.3.
MDEV-5817 query cache bug (returning inconsistent/old result
set) with aria table parallel inserts, row format = page
The problem is that for transactional aria tables
(row_type=PAGE and transactional=1), maria_lock_database()
didn't flush the state or the query cache.
Not flushing the state is correct for transactional tables, as this
is done by checkpoint, but not flushing the query cache was wrong and
could cause the results of concurrent SELECT queries to not be
deleted from the cache.
Fixed by introducing a flush of the query cache as part of commit, if the table has changed.
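A hedged sketch of the commit-time flush, with invented names
(aria_table, query_cache_tables and commit_table are placeholders, not
the server's query cache API): cached results for the table are
dropped at commit only when the table was actually changed.

#include <set>
#include <string>

// Invented stand-ins for the query cache and a table handle.
static std::set<std::string> query_cache_tables;  // tables with cached results

struct aria_table
{
  std::string name;
  bool changed;        // set by INSERT/UPDATE/DELETE in this transaction
};

// Commit hook as described above: flush cached results for the table,
// but only if it was modified in this transaction.
void commit_table(aria_table &tab)
{
  if (tab.changed)
  {
    query_cache_tables.erase(tab.name);  // invalidate cached SELECT results
    tab.changed= false;
  }
}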
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
- pcretest.c could use a macro with a side effect
- maria_chk could access freed memory
- Initialized some variables that could be accessed uninitialized
- Fixed compiler warning in my_atomic-t.c
MDEV-19486 and one more similar bug appeared because the
handler::write_row() interface allows the storage engine to modify the
row buffer, but callers are not prepared for that, so similar bugs are
possible in the future.
handler::write_row(), handler::ha_write_row(): make the buffer argument
const
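A minimal sketch of the interface change, with a simplified class
(example_handler is illustrative, not the real handler class): making
the row buffer const lets the compiler reject engines that silently
modify the caller's buffer.

using uchar= unsigned char;

// Simplified handler interface illustrating the constification above.
class example_handler
{
public:
  int ha_write_row(const uchar *buf)    // wrapper: buffer is read-only
  {
    return write_row(buf);
  }
  virtual ~example_handler() {}
protected:
  // Engines may read the row image but can no longer modify it in
  // place; an engine that needs a modified copy must allocate its own.
  virtual int write_row(const uchar *buf)= 0;
};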
The bug was that when long item strings were converted to VARCHAR,
Type_handler::string_type_handler() didn't take the maximum VARCHAR
length into account. The resulting Aria temporary table was created
with a VARCHAR field of length 1 when it should have been 65537. This
caused MariaDB to send impossible records to ma_write() and Aria
eventually reported the table as crashed.
Fixed by updating Type_handler::string_type_handler() to not create too
long VARCHAR fields. To make things extra safe, I also added checks when
writing dynamic Aria records to ensure that a wrong record is detected
during write instead of during read.
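A hedged illustration of the length clamp (the constant and the
function are illustrative, not Type_handler::string_type_handler()
itself): a string longer than what VARCHAR can hold must get a longer
type instead of ending up as a length-1 VARCHAR.

#include <cstddef>

enum class field_type { VARCHAR, BLOB };        // simplified type set

static const size_t MAX_VARCHAR_BYTES= 65535;   // VARCHAR byte-length limit

// Pick a field type for a string of the given byte length; the missing
// check of this kind is effectively what caused the bug above.
field_type string_type_for_length(size_t byte_length)
{
  if (byte_length > MAX_VARCHAR_BYTES)
    return field_type::BLOB;                    // too long for VARCHAR
  return field_type::VARCHAR;
}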
MDEV-19585 Assertion with S3 table and flush_tables
The limit has to be increased so that MariaDB can create system tables.
It should not have any notable impact on performance.
There should not be any notable performance differences between 1K and 4K,
especially for temporary tables. In most cases using bigger blocks is also
faster (with the possible exception of doing key reads of
non-fixed-length keys).
The problem was that the code in maria_extra assumed that there could be
only one table open when doing maria_extra(MA_FORCE_REOPEN).
However, in the case of triggers, there can be multiple copies of
the table open.
Fixed by removing the assert.
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the
warnings, mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga and I gave up on it.
- When recovery failed, errors would not be printed on
new lines.
- Print more information if file lengths are changed
- Added logging of table name for entries INCOMPLETE_LOG and
REDO_REPAIR_TABLE
MDEV-18461 Aria crash recovery failures
This does not fix the bug reported in the MDEV, but
now we get an error message about the problem instead of
an assert.
The test cases for the MDEV found several independent bugs
in MariaDB server and Aria:
- If a temporary table was marked as crashed, it could never
be deleted.
- Opening of a crashed temporary table gave an error message
but the error was never forwarded to the caller, which caused
an assert() in my_ok().
- init_read_record() did mmap of all temporary tables, which is
probably not a good idea as this area can potentially be
very big. Changed code to only mmap internal temporary tables.
- mmap-ed tables were not unmapped in case of repair/optimize,
which caused bad data in the table and crashes if the original
table files were replaced with new ones (as the old mmap
was still in place). Fixed by removing the mmap in case
of repair (see the sketch after this list).
- Cleaned up usage of code that disabled mmap in Aria
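A small POSIX sketch of the unmap-before-repair pattern from the list
above (remap_after_repair, the path and the sizes are illustrative,
not the Aria code): the old mapping is dropped before the data file is
rebuilt, so nothing keeps reading pages of the replaced file, and the
new file is mapped afterwards.

#include <cstddef>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

// Illustration only: unmap before the data file is rewritten, remap
// the new file afterwards. Keeping the old mapping while repair
// replaces the file is the stale-data problem described above.
void *remap_after_repair(void *old_map, size_t old_size,
                         const char *path, size_t new_size)
{
  if (old_map)
    munmap(old_map, old_size);      // drop mapping of the old file

  // ... repair/optimize rewrites the data file here ...

  int fd= open(path, O_RDONLY);
  if (fd < 0)
    return nullptr;
  void *new_map= mmap(nullptr, new_size, PROT_READ, MAP_SHARED, fd, 0);
  close(fd);
  return new_map == MAP_FAILED ? nullptr : new_map;
}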
The problem was that in case of an implicit rollback for ALTER TABLE,
Aria tried to run commit twice.
The test case for this is tricky to do in 10.2, so it will
be added to 10.4 as part of BACKUP STAGE testing.
There were two separate problems:
- Aria pagecache didn't properly handle re-reading of blocks
that have given errors before (this triggered an assert)
- temporary tables that were opened several times were
not properly closed in ALTER, REPAIR or OPTIMIZE TABLE
Other things:
- Added a couple of asserts that will make it easier to
find problems like this in the future.
The issue is that two MARIA_HA instances share the same MARIA_STATUS_INFO
object during UNION execution, so the state pointer MARIA_HA::state of the
second MARIA_HA instance points to the MARIA_HA::state_save of the first
MARIA_HA instance.
This happens in
thr_multi_lock(...) {
  ...
  for (first_lock=data, pos= data+1 ; pos < end ; pos++)
  {
    ...
    if (pos[0]->lock == pos[-1]->lock && pos[0]->lock->copy_status)
      (pos[0]->lock->copy_status)((*pos)->status_param,
                                  (*first_lock)->status_param);
    ...
  }
  ...
}
Usually the state is restored from ha_maria::external_lock(...):
#0 _ma_update_status (param=0x6290000e6270) at ./storage/maria/ma_state.c:309
#1 0x00005555577ccb15 in _ma_update_status_with_lock (info=0x6290000e6270) at ./storage/maria/ma_state.c:361
#2 0x00005555577c7dcc in maria_lock_database (info=0x6290000e6270, lock_type=2) at ./storage/maria/ma_locking.c:66
#3 0x0000555557802ccd in ha_maria::external_lock (this=0x61d0001b1308, thd=0x62a000048270, lock_type=2) at ./storage/maria/ha_maria.cc:2727
But _ma_update_status() does not take into account the case when
MARIA_HA::state points to the MARIA_HA::state_save of the other MARIA_HA
instance.
The fix is to restore MARIA_HA::state in ha_maria::external_lock() after
the maria_lock_database() call for transactional tables.
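A hedged outline of the fix, with simplified stand-ins for
MARIA_HA/MARIA_STATUS_INFO (maria_handle, status_info and
restore_state_after_lock are illustrative, not the real ha_maria
code): after maria_lock_database() has run the copy_status callbacks,
the handler re-points state at its own save area for transactional
tables, so it no longer aliases another instance's state_save.

// Simplified stand-ins, illustration only.
struct status_info { /* row count, data file length, ... */ };

struct maria_handle
{
  status_info *state;        // may have been redirected by copy_status
  status_info  state_save;   // this handle's own saved state
  bool         transactional;
};

// Called on the external_lock path after maria_lock_database(): make
// sure this instance does not keep pointing into another instance's
// state_save.
void restore_state_after_lock(maria_handle &info)
{
  if (info.transactional)
    info.state= &info.state_save;
}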