Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
a DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will inquire partitions
about how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (it actually
used uint for the buffer size calculation and would underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size.
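As an illustration of the new protocol, here is a minimal,
self-contained C++ sketch; the function name, element sizes and the
mrr_buffer_size stand-in are assumptions for illustration, not the
actual DS-MRR code:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Stand-in for @@mrr_buffer_size.
    static const size_t mrr_buffer_size_limit= 262144;

    // If *buffer_size == 0, the caller (ha_partition) is only asking how
    // much memory this MRR implementation needs, so report the capped
    // requirement instead of treating 0 as "no memory available" (the
    // old code subtracted from it with an unsigned type and underflowed).
    static bool choose_mrr_impl_sketch(size_t rows, size_t elem_size,
                                       size_t *buffer_size)
    {
      size_t needed= rows * elem_size;          // space for all rowids
      if (*buffer_size == 0)                    // size inquiry
      {
        *buffer_size= std::min(needed, mrr_buffer_size_limit);
        return true;
      }
      if (*buffer_size < elem_size)             // can't fit one element
        return false;                           // fall back to default MRR
      *buffer_size= std::min(needed, *buffer_size);
      return true;
    }

    int main()
    {
      size_t size= 0;                           // inquiry mode
      choose_mrr_impl_sketch(1000, 16, &size);
      std::printf("DS-MRR asks for %zu bytes\n", size);
      return 0;
    }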
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call
handler->clone with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() functions of the affected storage engines ignore
the name argument and get the real name elsewhere.
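A self-contained mock of that clone() shape; Handler, Share and
open_file_name are stand-ins, the real engines take the path from
their already-open share:

    #include <cstdio>
    #include <string>

    struct Share { std::string open_file_name; };

    struct Handler
    {
      Share *share= nullptr;
      virtual ~Handler() {}
      virtual Handler *clone(const char *name)
      {
        std::printf("cloning using path: %s\n", name);
        Handler *h= new Handler();
        h->share= share;
        return h;
      }
    };

    struct PartitionSafeHandler : Handler
    {
      Handler *clone(const char *name) override
      {
        // Ignore 'name': ha_partition passes the TABLE name, which does
        // not identify this partition's files. Use the path remembered
        // when the partition was opened instead.
        (void) name;
        return Handler::clone(share->open_file_name.c_str());
      }
    };

    int main()
    {
      Share s{"./test/t1#P#p0"};
      PartitionSafeHandler h;
      h.share= &s;
      delete h.clone("./test/t1");  // TABLE name, wrong for this partition
      return 0;
    }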
This can happen if one uses a backup where not all aria_log.* files
are copied or if the last one is too short. In this case the data
files will contain data that is not in the logs and recovery will fail.
Other things:
- Fixed tprint() to not print an extra newline to the debug trace
MDEV-18451 Server crashes in maria_create_trn_for_mysql
upon ALTER TABLE
The problem was that when a table was locked multiple times, not all
instances were removed from the transaction by
_ma_remove_table_from_trnman().
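A minimal sketch of the required loop, using a plain list as a
stand-in for the transaction's table registrations (the real
_ma_remove_table_from_trnman() works on Aria's trnman structures):

    #include <cassert>
    #include <iterator>
    #include <list>

    struct TableInstance { const void *share; };

    // Remove *every* registration of 'share', not just the first one:
    // a table that was locked several times is registered several times.
    static void remove_table_from_trn(std::list<TableInstance> &trn_tables,
                                      const void *share)
    {
      for (auto it= trn_tables.begin(); it != trn_tables.end(); )
        it= (it->share == share) ? trn_tables.erase(it) : std::next(it);
    }

    int main()
    {
      int share1, share2;
      std::list<TableInstance> trn{{&share1}, {&share1}, {&share2}};
      remove_table_from_trn(trn, &share1);  // table was locked twice
      assert(trn.size() == 1);              // both instances must be gone
      return 0;
    }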
For CMAKE_BUILD_TYPE=Debug, the default MYSQL_MAINTAINER_MODE=AUTO
implies -Werror along with other flags in cmake/maintainer.cmake,
which would break the debug builds when CMAKE_CXX_FLAGS include -O2.
This fix includes a backport of 6dd3f24090
from MariaDB 10.3.
MDEV-5817 query cache bug (returning inconsistent/old result
set) with aria table parallel inserts, row format = page
The problem is that for transactional aria tables
(row_type=PAGE and transactional=1), maria_lock_database()
didn't flush the state or the query cache.
Not flushing the state is correct for transactional tables, as
this is done by checkpoint, but not flushing the query cache
was wrong and could cause the results of concurrent SELECT queries
to not be deleted from the cache.
Fixed by introducing a flush of the query cache as part of commit,
if the table has changed.
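The rule the fix enforces, reduced to a self-contained sketch; all
names below are stand-ins, not the server API:

    #include <cstdio>

    struct TableState { bool transactional; bool changed; };

    static void flush_state(TableState &)
    { std::printf("state flushed\n"); }

    static void invalidate_query_cache(TableState &)
    { std::printf("query cache invalidated\n"); }

    static void on_commit(TableState &t)
    {
      if (!t.transactional)
        flush_state(t);               // non-transactional: flush as before
      if (t.changed)                  // the missing piece: always drop
        invalidate_query_cache(t);    // stale cached results on change
    }

    int main()
    {
      TableState t{true, true};       // row_format=PAGE, transactional=1
      on_commit(t);
      return 0;
    }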
Changes:
- maria_create() now uses a bit in the parameter flags to check if the
table should be encrypted, instead of using maria_encrypted_tables.
- Don't encrypt tables that are to be converted to S3
- Added encrypted flag to ARIA_TABLE_CAPABILITIES
- maria_chk --description now prints whether the table is encrypted. Other
operations are not allowed on encrypted tables.
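An illustration of the first change, with a hypothetical flag name
(the real bit value and constants live in the Aria headers):

    #include <cstdint>
    #include <cstdio>

    // Hypothetical bit; not the real Aria constant.
    static const uint32_t CREATE_FLAG_ENCRYPTED= 1U << 5;

    static int maria_create_sketch(const char *name, uint32_t flags)
    {
      // The decision is now local to the call: a table headed for S3 is
      // simply not given the bit, instead of the create code consulting
      // the global maria_encrypted_tables.
      bool encrypt= (flags & CREATE_FLAG_ENCRYPTED) != 0;
      std::printf("%s: encrypted=%d\n", name, (int) encrypt);
      return 0;
    }

    int main()
    {
      maria_create_sketch("t1", CREATE_FLAG_ENCRYPTED); // encrypted table
      maria_create_sketch("t2", 0);                     // destined for S3
      return 0;
    }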
The limit was increased from 1000 to 2000.
Stack overflow is avoided by storing keys and pages on the stack in
recursive functions only if there is plenty of space left on it.
Other things:
- Use less stack space for b-tree operations as we now only allocate as
much space as needed instead of always allocating HA_MAX_KEY_LENGTH.
- Replaced most usage of my_safe_alloca() in Aria with the stack_alloc
interface.
- Moved my_setstacksize() to mysys/my_pthread.c
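A simplified stand-in for the stack_alloc pattern mentioned above; the
thresholds, names and the downward-growing-stack assumption are
illustrative only:

    #include <alloca.h>
    #include <cstdio>
    #include <cstdlib>

    static char  *stack_base;                    // set on thread entry
    static size_t stack_size= 8u * 1024 * 1024;  // assumed thread stack

    static bool stack_has_room(size_t want)
    {
      char probe;
      size_t used= (size_t) (stack_base - &probe); // stack grows down
      return used + want + 64 * 1024 < stack_size; // keep 64K headroom
    }

    static void walk_btree(int level, size_t key_length)
    {
      bool use_heap= !stack_has_room(key_length);
      // Allocate only key_length bytes (not HA_MAX_KEY_LENGTH) per level.
      char *key= use_heap ? (char *) malloc(key_length)
                          : (char *) alloca(key_length);
      if (key && level < 100)                    // pretend b-tree descent
        walk_btree(level + 1, key_length);
      if (use_heap)
        free(key);
    }

    int main()
    {
      char base;
      stack_base= &base;
      walk_btree(0, 2000);
      std::printf("done\n");
      return 0;
    }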
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
When using field_conv(), which is called in case of a field1=field2 copy in
fill_records(), the full varstring was copied, including uninitialized
bytes. This caused valgrind to complain about usage of uninitialized bytes
when using Aria static length records.
Fixed by not using memcpy when copying varstrings and instead copying
only the real bytes.
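The copy rule as a standalone sketch; MariaDB stores a VARCHAR in a
record as a 1- or 2-byte little-endian length prefix followed by the
data bytes:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static void copy_varstring(uint8_t *to, const uint8_t *from,
                               unsigned length_bytes)
    {
      size_t len= (length_bytes == 1)
        ? from[0]
        : from[0] | ((size_t) from[1] << 8);
      // Copy prefix + real data; the tail of the field stays untouched,
      // so no uninitialized bytes leak into the destination record.
      memcpy(to, from, length_bytes + len);
    }

    int main()
    {
      uint8_t from[1 + 10]= {3, 'a', 'b', 'c'}; // VARCHAR(10) with "abc"
      uint8_t to[1 + 10];
      memset(to, 0, sizeof(to));
      copy_varstring(to, from, 1);
      std::printf("%.*s\n", (int) to[0], to + 1); // prints: abc
      return 0;
    }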
- pcretest.c could use a macro with side effects
- maria_chk could access freed memory
- Initialized some variables that could be accessed uninitialized
- Fixed compiler warning in my_atomic-t.c
MDEV-19486 and one more similar bug appeared because the
handler::write_row() interface allows the storage engine to modify the
row buffer, but the callers are not prepared for that, so similar bugs
remain possible in the future.
handler::write_row():
handler::ha_write_row(): make the argument const
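The shape of the interface change, reduced to a toy handler; with a
const buffer, an engine that modifies the caller's record no longer
compiles:

    #include <cstdio>

    typedef unsigned char uchar;

    struct handler_sketch
    {
      int ha_write_row(const uchar *buf) { return write_row(buf); }

      // Engines now receive the row read-only; one that used to patch
      // the buffer in place must make its own copy first.
      virtual int write_row(const uchar *buf)
      {
        // buf[0]= 1;   // would no longer compile
        std::printf("wrote row at %p\n", (const void *) buf);
        return 0;
      }
      virtual ~handler_sketch() {}
    };

    int main()
    {
      uchar record[16]= {0};
      handler_sketch h;
      return h.ha_write_row(record);
    }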
The bug was that when long item-strings were converted to VARCHAR,
Type_handler::string_type_handler() didn't take into account the max
VARCHAR length. The resulting Aria temporary table was created with
a VARCHAR field of length 1 when it should have been 65537. This caused
MariaDB to send impossible records to ma_write() and Aria eventually
reported the table as crashed.
Fixed by updating Type_handler::string_type_handler() to not create too
long VARCHAR fields. To make things extra safe, I also added checks when
writing dynamic Aria records to ensure we find the wrong record during
write instead of during read.
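A sketch of the capping rule; the constant and the enum are stand-ins
for the server's type handler machinery, the point being that lengths
beyond what a VARCHAR can address must map to a blob type rather than
wrap around:

    #include <cstdio>

    enum FieldType { TYPE_VARCHAR, TYPE_BLOB };

    static const unsigned MAX_VARCHAR_BYTES= 65535; // 2-byte length prefix

    static FieldType string_type_for(unsigned octet_length)
    {
      if (octet_length > MAX_VARCHAR_BYTES)
        return TYPE_BLOB;       // instead of a wrapped-around VARCHAR(1)
      return TYPE_VARCHAR;
    }

    int main()
    {
      FieldType t= string_type_for(65537);
      std::printf("65537 -> %s\n", t == TYPE_BLOB ? "BLOB" : "VARCHAR");
      return 0;
    }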