MDEV-15418 changed InnoDB to use the READ UNCOMMITTED isolation level
when the transaction recovery is disabled by setting
innodb_force_recovery to 5 or 6. Alas, CHECK TABLE would still
internally use REPEATABLE READ. If the server was started before
purge had run to completion (and reset all DB_TRX_ID, as implemented
in MDEV-12288), this would cause errors about any DB_TRX_ID that
had not yet been reset to 0.
In commit 1dc78d35a0beb9620bae1f4841cc07389b425707, a wrong value was
passed as an argument to deallocate_large(dontdump=true).
To avoid accidentally calling the large-memory functions that take
DODUMP/DONTDUMP options with missing arguments, the functions
have been given distinct names.
MDEV-10814 introduced a bug where the size argument to
deallocate_large() was passed true, evaluating to 1, as the size.
When this was passed to munmap(), it resulted in EINVAL and the
page not being released. This only occurred in buf_pool_free_instance()
when called on shutdown, so there was no impact, as process termination
correctly frees the memory.
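A minimal sketch of the failure mode and of why distinct names help
(hypothetical function bodies, not the actual buffer pool code):

  /* Hypothetical sketch, not the actual MariaDB buffer pool code. */
  #include <cstddef>
  #include <cstdio>

  /* Old style: one function with an optional DONTDUMP flag. */
  static void deallocate_large(void *ptr, size_t size, bool dontdump = false)
  {
    (void) ptr;
    (void) dontdump;
    std::printf("unmapping %zu bytes\n", size);  /* would call munmap(ptr, size) */
  }

  /* New style: a distinct name for the DONTDUMP variant, so the flag can
     no longer be mistaken for the size, and a missing size argument does
     not compile. */
  static void deallocate_large_dontdump(void *ptr, size_t size)
  {
    deallocate_large(ptr, size, true);
  }

  int main()
  {
    char chunk[4096];
    /* Intended: deallocate_large(chunk, sizeof chunk, true);
       The bug pattern: the flag lands in the size position, and
       bool 'true' silently converts to size_t 1. */
    deallocate_large(chunk, true);                   /* "unmapping 1 bytes" */
    deallocate_large_dontdump(chunk, sizeof chunk);  /* unambiguous */
    return 0;
  }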
When there is a huge transaction in the undo log, the purge threads
may get stuck in trx_purge_attach_undo_recs() for a long time,
causing the server to hang on a normal shutdown (innodb_fast_shutdown>0).
Apparently the innodb_purge_batch_size does not work correctly, or the
n_pages_handled is not being incremented correctly. We do not fix that
for now, but we will instead check if shutdown has been initiated,
allowing the purge threads to shut down without delays.
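A minimal sketch of the shape of the workaround (hypothetical names, not
the actual trx_purge_attach_undo_recs() code): the batching loop re-checks
a shutdown flag so that it can stop collecting work early:

  /* Hypothetical sketch of the idea, not the actual purge coordinator code. */
  #include <atomic>
  #include <cstddef>

  static std::atomic<bool> shutdown_initiated{false};

  /* Stand-in for fetching the next undo record of the batch. */
  static bool fetch_next_undo_rec() { return true; }

  static size_t attach_undo_recs(size_t batch_size)
  {
    size_t n_pages_handled = 0;
    while (n_pages_handled < batch_size)
    {
      /* The fix: re-check for shutdown inside the batching loop, so a
         huge transaction cannot keep the purge threads busy after a
         normal shutdown has been initiated. */
      if (shutdown_initiated.load(std::memory_order_relaxed))
        break;
      if (!fetch_next_undo_rec())
        break;
      ++n_pages_handled;
    }
    return n_pages_handled;
  }

  int main()
  {
    return attach_undo_recs(8) == 8 ? 0 : 1;
  }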
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c
and ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of nontrivial structures (see the sketch after this list). Fixed as:
* the warning disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() which
does the bzero(), which is safe for these classes, but any other
bzero() will still cause a warning
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* remove constructor from READ_RECORD() to make it trivial
* replace some memcpy() with c++ copy assignments
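A minimal sketch of why GCC 8 warns and of the reset() pattern mentioned
above (simplified classes, not the actual sql/table.h definitions; the real
TABLE::reset() wraps a bzero() that is known to be safe for that class):

  /* Simplified sketch, not the actual sql/table.h classes. */
  #include <cstring>
  #include <string>

  struct trivial_part { int field_count; void *mem_root; };

  class TABLE_like
  {
  public:
    trivial_part t;        /* plain data: safe to clear with memset() */
    std::string alias;     /* non-trivial member */

    /* memset(&tbl, 0, sizeof tbl) from outside triggers -Wclass-memaccess
       on GCC 8 (and here it really would corrupt 'alias').  Confining the
       clearing to a reset() method keeps callers warning-free. */
    void reset()
    {
      std::memset(&t, 0, sizeof t);
      alias.clear();
    }
  };

  int main()
  {
    TABLE_like tbl;
    tbl.reset();           /* instead of bzero(&tbl, sizeof tbl) */
    return 0;
  }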
The symptom of the bug was that one got the following in
the aria recovery log:
"Table 'xxx', id 57, has create_rename_lsn (1,0x12dee) more recent than LOGREC_FILE_ID's LSN (1,0x12dc4), ignoring open request"
After this, all future redo entries were marked with
"For table of short id 57, table skipped, so skipping record"
Analysis:
When ending a batch insert, create_rename_lsn for the table
is updated to signal that earlier redo entries for the
table cannot be applied. The problem was that future redo
entries were also ignored, because the redo code assumed they
were for the old table.
Fixed by calling translog_dessign_id, which causes
future redo entries to be seen as belonging to the
updated table.
On startup, if the InnoDB doublewrite buffer can be used to
recover a corrupted page, raising an ERROR about a recoverable
error seems inappropriate. Issue a Note instead, and adjust
tests accordingly.
Also, correctly validate the tablespace ID in the files.
row_drop_tables_for_mysql_in_background(): Copy the table name
before closing the table handle, to avoid heap-use-after-free if
another thread succeeds in dropping the table before
row_drop_table_for_mysql_in_background() completes the table name lookup.
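A minimal sketch of the ordering (hypothetical helper names, not the actual
row0mysql.cc code): copy the name while the handle still keeps the object
alive, then release the handle before the slow name lookup:

  /* Hypothetical sketch of the ordering fix, not the actual InnoDB code. */
  #include <string>

  struct table_handle { std::string name; };

  /* After this call, another thread may drop the table and free the object. */
  static void close_table(table_handle *) {}
  static void lookup_and_drop_by_name(const std::string &) {}

  static void drop_in_background(table_handle *table)
  {
    /* Copy the name first: reading table->name after close_table() would be
       a heap-use-after-free if another thread drops the table in between. */
    const std::string name_copy(table->name);
    close_table(table);
    lookup_and_drop_by_name(name_copy);
  }

  int main()
  {
    table_handle t{"db/t1"};
    drop_in_background(&t);
    return 0;
  }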
dict_mem_create_temporary_tablename(): With innodb_safe_truncate=ON
(the default), generate a simple, unique, collision-free table name
using only the id, no pseudorandom component. This is safe, because
on startup, we will drop any #sql tables that might exist in InnoDB.
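A rough sketch of what a collision-free name derived only from the id can
look like (the exact prefix and format used by
dict_mem_create_temporary_tablename() are assumptions here):

  /* Rough sketch; the real function allocates from a mem_heap and may
     format the name differently. */
  #include <cinttypes>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  static void make_temp_name(char *buf, size_t size, uint64_t table_id)
  {
    /* The table id is already unique server-wide, so no pseudorandom
       component is needed; leftover #sql tables are dropped on startup. */
    std::snprintf(buf, size, "#sql-ib%" PRIu64, table_id);
  }

  int main()
  {
    char name[64];
    make_temp_name(name, sizeof name, 42);
    std::puts(name);                       /* #sql-ib42 */
    return 0;
  }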
This is a backport from 10.3. It should have been backported already
as part of backporting MDEV-14717 and MDEV-14585, which were prerequisites
for the MDEV-13564 backup-friendly TRUNCATE TABLE.
This seems to reduce the chance of table creation failures in
ha_innobase::truncate().
ha_innobase::truncate(): Do not invoke close(), but instead
mimic it, so that we can restore to the original table handle
in case opening the truncated copy of the table failed.
The problem was that we skipped background persistent statistics calculation
on applier nodes if the thread was marked as high priority (a.k.a. BF).
However, on applier nodes all replicated DDL is executed
as high priority, i.e. BF.
Fixed by allowing background persistent statistics calculation on
applier nodes even when the thread is marked as BF. This could lead to
BF lock waits, but queries on that node need those statistics.
Removed redundant plugin_thdvar_cleanup() from end_connection(): it is called
by THD::free_connection(), which always follows end_connection().
Saves at least one lock(LOCK_plugin) and one
rdlock(LOCK_system_variables_hash).
Benchmarked on a 2-socket/20-core/40-thread Broadwell system using the sysbench
connect benchmark at 40 threads (with SELECT 1 disabled).
10.2 shows a moderate improvement: 136219.93 -> 137766.31 CPS.
The improvement in 10.3 is somewhat better: 93018.29 -> 101379.77 CPS.
Also backported MyRocks memory leak fix from 10.4, which turned out to
be unrelated.
In 10.3, all records will be processed by purge due to MDEV-12288.
But, the insert undo records do not contain a transaction identifier.
row_purge_parse_undo_rec(): Use node->trx_id=TRX_ID_MAX for the
insert undo records. We cannot skip table lookups for these records
after DISCARD TABLESPACE other than by 'detaching' the table from
the undo logs by updating SYS_TABLES.ID on both DISCARD TABLESPACE
and IMPORT TABLESPACE.
Also, remove a redundant condition that was introduced
in the merge commit 814205f306.
recv_parse_log_recs(): Do not compare type if ptr==end_ptr
(we have reached the end of the redo log parsing buffer),
because it will not have been correctly initialized in that case.
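A minimal sketch of the guard (simplified; not the actual
recv_parse_log_recs() code):

  /* Simplified sketch of the parsing guard, not the actual redo log parser. */
  #include <cstdint>

  /* Returns the record type, or -1 when the parsing buffer is exhausted
     and the type byte must not be read or compared. */
  static int peek_record_type(const uint8_t *ptr, const uint8_t *end_ptr)
  {
    if (ptr == end_ptr)
      return -1;     /* end of the parsing buffer: nothing valid to compare */
    return *ptr;
  }

  int main()
  {
    const uint8_t buf[1] = {42};
    return peek_record_type(buf + 1, buf + 1) == -1 ? 0 : 1;
  }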
GCC 6 and later can optimize away the memset() that is part of
mem_heap_zalloc() in a placement new call. So, instead of relying
on that kind of initialization, explicitly initialize the necessary
fields in the constructors.
que_common_t::que_common_t(): Initialize more fields in the
default constructor.
purge_vcol_info_t::purge_vcol_info_t(): Initialize all fields in
the default constructor.
purge_node_t::purge_node_t(): Initialize all necessary fields.
Reference:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71388
https://gcc.gnu.org/ml/gcc/2016-02/msg00207.html
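A simplified sketch of the pattern being avoided (mem_heap_zalloc() replaced
by a calloc() stand-in): when the object is created with placement new,
GCC 6+ may treat members the constructor does not set as uninitialized and
drop the earlier zero-fill, so the constructors now initialize the fields
themselves:

  /* Simplified sketch of the issue, not the actual que0que/row0purge code. */
  #include <cstdlib>
  #include <new>

  static void *mem_zalloc(size_t n)     /* stand-in for mem_heap_zalloc() */
  {
    return std::calloc(1, n);
  }

  struct node_t
  {
    int type;
    void *parent;
    /* Relying on the caller's zero-fill is fragile: after placement new,
       members the constructor leaves untouched count as uninitialized and
       the preceding memset()/calloc() may be optimized away as a dead
       store.  Hence: initialize every field explicitly. */
    node_t() : type(0), parent(nullptr) {}
  };

  int main()
  {
    void *mem = mem_zalloc(sizeof(node_t));
    if (!mem)
      return 1;
    node_t *node = new(mem) node_t();   /* fields set by the constructor */
    node->~node_t();
    std::free(mem);
    return 0;
  }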
row_merge_create_fts_sort_index(): Initialize dict_col_t in
an unambiguous way. GCC 6 and later appear to be able to optimize
away the memset() that is part of mem_heap_zalloc() in the
placement new call. Let us avoid using placement new in order
to ensure that the objects will actually be initialized.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71388
https://gcc.gnu.org/ml/gcc/2016-02/msg00207.html
While the latter reference hints that the optimization is only
applicable to non-POD types (and dict_col_t does not define
any member functions before 10.2), it is most consistent to
use the same initialization across all versions.
This was caused by a combination of factors:
* MyISAM/Aria temporary tables historically never saved the state
to disk (MYI/MAI), because the state never needed to persist
* certain ALTER TABLE operations modify the original TABLE structure
and if they fail, the original table has to be reopened to
revert all changes (m_needs_reopen=1)
As a result, when ALTER fails and the MyISAM/Aria temp table gets reopened,
it reads the stale state from disk.
As a fix, MyISAM/Aria tables now *always* write the state to disk
on close, *unless* HA_EXTRA_PREPARE_FOR_DROP was done first. And
the server now always does HA_EXTRA_PREPARE_FOR_DROP before dropping
a temporary table.
purge_node_t::in_progress: Replaces purge_node_t::done.
Only present in debug builds.
purge_node_t::start(): Moved from the start of row_purge_step().
purge_node_t::end(): Replaces row_purge_end().
trx_purge_attach_undo_recs(): Omit a check from non-debug builds.
If a table has been dropped, rebuilt, or its tablespace has been
discarded or the table is corrupted, it does not make sense to
look up that table again while purging old undo log records.
purge_node_t::purge_node_t(): Replaces row_purge_node_create().
que_common_t::que_common_t(): Constructor.
row_import_update_index_root(): Remove the constant parameter
dict_locked=true, and update the table->def_trx_id in the cache.
purge_node_t::unavailable_table_id: The latest unavailable table ID,
to avoid future lookups.
purge_node_t::def_trx_id: The latest modification of the table
identified by unavailable_table_id, or TRX_ID_MAX.
purge_node_t::is_skipped(): Determine if a table should be skipped.
purge_node_t::skip(): Note that a table should be skipped.
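A rough sketch of the caching that these members provide (field types and
the exact skip condition are assumptions, not the real purge_node_t):

  /* Rough sketch only; the real purge_node_t also consults def_trx_id
     when deciding whether a record can be skipped. */
  #include <cstdint>

  struct purge_node_sketch
  {
    uint64_t unavailable_table_id = 0;  /* latest table known to be unavailable */
    uint64_t def_trx_id = 0;            /* its latest modification, or TRX_ID_MAX */

    /* Note that records for this table can be skipped from now on. */
    void skip(uint64_t table_id, uint64_t trx_id)
    {
      unavailable_table_id = table_id;
      def_trx_id = trx_id;
    }

    /* Avoid another dictionary lookup for a table that was already found
       to be dropped, rebuilt, discarded or corrupted. */
    bool is_skipped(uint64_t table_id) const
    {
      return table_id == unavailable_table_id;
    }
  };

  int main()
  {
    purge_node_sketch node;
    node.skip(100, 42);
    return node.is_skipped(100) ? 0 : 1;
  }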
In commit d30f17af49 the change of
the loop iteration broke another error handling path that did
"goto error_handling_drop_uncached". Cover this code path with
fault injection, and revert to the correct iteration.
There are two fault injection labels named innodb_OOM_prepare_inplace_alter.
Their order was swapped in MDEV-11369, so that the label that used
to be covered in an ADD INDEX code path would become unreachable
because the label that is executed for any ALTER TABLE was executed
first. Let us introduce the label innodb_OOM_prepare_add_index
for the more specific case.
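For context, a rough sketch of how such a fault injection label is used.
In the server, DBUG_EXECUTE_IF() comes from my_dbug.h and a test enables
the label with SET debug_dbug='+d,innodb_OOM_prepare_add_index'; here the
macro is stubbed out so the sketch compiles standalone:

  /* Standalone sketch.  In the server, DBUG_EXECUTE_IF() only runs its
     block when the named label is enabled; here it is stubbed as a no-op
     so the sketch compiles on its own. */
  #ifndef DBUG_EXECUTE_IF
  # define DBUG_EXECUTE_IF(keyword, code) do {} while (0)
  #endif

  enum dberr_t { DB_SUCCESS, DB_OUT_OF_MEMORY };

  static dberr_t prepare_add_index_sketch()
  {
    dberr_t error = DB_SUCCESS;
    /* A label specific to the ADD INDEX path stays reachable even though
       a more generic OOM label fires earlier for any ALTER TABLE. */
    DBUG_EXECUTE_IF("innodb_OOM_prepare_add_index",
                    error = DB_OUT_OF_MEMORY;);
    return error;
  }

  int main()
  {
    return prepare_add_index_sketch() == DB_SUCCESS ? 0 : 1;
  }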
If create_index_dict() fails, we need to free ctx->add_index[a] too.
This fixes innodb.alter_crash and innodb.instant_alter_debug
failures in ASAN_OPTIONS="abort_on_error=1" runs.
row_merge_create_index_graph(): Relay the internal state
from dict_create_index_step(). Our caller should free the index
only if it was not copied, added to the cache, and freed.
row_merge_create_index(): Free the index template if it was
not added to the cache. This is a safer variant of the logic
that was introduced in 65070beffd in 10.2.
prepare_inplace_alter_table_dict(): Add additional fault injection
to exercise a code path where we have already added an index
to the cache.
row_mysql_handle_errors(): Correct the wrong error handling for
the code DB_FOREIGN_EXCEED_MAX_CASCADE that was introduced in
c0923d396a
commit 35f5429eda
Author: Jimmy Yang <jimmy.yang@oracle.com>
Date: Wed Oct 6 06:55:34 2010 -0700
Manual port Bug #54582 "stack overflow when opening many tables
linked with foreign keys at once" from mysql-5.1-security to
mysql-5.5-security again.
rb://391 approved by Heikki
No known test case exists for repeating the bug before MariaDB 10.2.
The scenario should be that DB_FOREIGN_EXCEED_MAX_CASCADE is returned,
then InnoDB wrongly skips the rollback to the start of the current
row operation, and finally the SQL layer commits the transaction.
Normally the SQL layer would roll back either the entire transaction or
to the start of the statement. In the faulty scenario, InnoDB would
leave the transaction in an inconsistent state, and the SQL layer could
commit the transaction.
Disable inplace ALTER for adding stored generated columns.
This fixes mroonga/storage.column_generated_stored_add_column failures
in ASAN_OPTIONS="abort_on_error=1" runs.
Also, add a test case that shows the bug without ASAN.
I know of no test case for this bug in 10.1, so a test case will be
committed separately in 10.2.
fts_reset_get_doc(): properly initialize fts_get_doc_t::cache
fts_fetch_index_words(): Restore the initialization len=0.
The test innodb_fts.create in 10.2 would end up in an infinite loop
if this assignment is removed, because a following iteration of the
while() loop would assign zip->zp->avail_in=len with the original value
instead of the 0 that was reset in the previous iteration.
Fix the warnings issued by GCC 8 -Wstringop-truncation
and -Wstringop-overflow in InnoDB and XtraDB.
This work is motivated by Jan Lindström. The patch mainly differs
from his original one as follows:
(1) We remove explicit initialization of stack-allocated string buffers.
The minimum amount of initialization that is needed is a terminating
NUL character.
(2) GCC issues a warning for invoking strncpy(dest, src, sizeof dest),
because if strlen(src) >= sizeof dest, there would be no terminating
NUL byte in dest. We avoid this problem by invoking strncpy() with
a limit that is 1 less than the buffer size, and by always writing
NUL to the last byte of the buffer (see the example after this list).
(3) We replace strncpy() with memcpy() or strcpy() in those cases
when the result is functionally equivalent.
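A minimal example of the pattern described in (2): copy at most
sizeof dest - 1 bytes and NUL-terminate the last byte explicitly:

  /* Minimal example of the strncpy() pattern from point (2). */
  #include <cstdio>
  #include <cstring>

  int main()
  {
    const char *src = "a string that may be longer than the buffer";
    char dest[16];

    /* strncpy(dest, src, sizeof dest) would leave dest without a
       terminating NUL when strlen(src) >= sizeof dest, and GCC 8 warns
       about it.  Limit the copy to sizeof dest - 1 and always write the
       terminating NUL explicitly. */
    std::strncpy(dest, src, sizeof dest - 1);
    dest[sizeof dest - 1] = '\0';

    std::puts(dest);
    return 0;
  }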
Note: fts_fetch_index_words() never deals with len==UNIV_SQL_NULL.
This was enforced by an assertion that limits the maximum length
to FTS_MAX_WORD_LEN. Also, the encoding that InnoDB uses for
the compressed fulltext index is not byte-order agnostic, that is,
InnoDB data files that use FULLTEXT INDEX are not portable between
big-endian and little-endian systems.
row_merge_create_fts_sort_index(): Initialize dict_col_t.
This fixes an access to uninitialized dict_col_t::ind when a debug
assertion in MariaDB 10.4 invokes is_dropped() in
rec_get_converted_size_comp_prefix_low(). Older MariaDB versions
seem to be unaffected by the uninitialized values, but it should
not hurt to initialize everything.