In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a separate handler
if the handler in head->file cannot be reused. The flag free_file tells us whether we
have a separate handler or not.
There are cases where we create such a handler and then hit a failure (for example, a
concurrent ALTER), after which we have to revert the handler back to the original one.
The code does that, but it does not reset the flag 'free_file' in this case.
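Below is a minimal sketch of the intended error handling, written with invented stand-in types rather than the real QUICK_RANGE_SELECT/handler classes: when the separately created handler is reverted, the ownership flag has to be cleared together with the pointer.

    #include <cstdio>

    // Hypothetical stand-ins for the real handler and quick-select classes.
    struct handler {
        bool usable;
    };

    struct quick_select_sketch {
        handler *file;      // handler currently used by this quick select
        handler *org_file;  // the shared handler from head->file
        bool free_file;     // true if 'file' was created by us and must be freed

        // Sketch of the merged-scan initialisation: create a separate handler,
        // and on failure revert to the original one *and* clear free_file.
        bool init_ror_merged_scan_sketch(bool create_separate, bool simulate_failure) {
            if (create_separate) {
                file = new handler{true};
                free_file = true;
            }
            if (simulate_failure) {
                // Failure (e.g. a concurrent ALTER): revert to the shared handler.
                if (free_file)
                    delete file;
                file = org_file;
                free_file = false;   // the missing reset described above
                return true;         // report the error
            }
            return false;
        }
    };

    int main() {
        handler shared{true};
        quick_select_sketch q{&shared, &shared, false};
        bool err = q.init_ror_merged_scan_sketch(true, true);
        std::printf("error=%d free_file=%d\n", err, q.free_file);
        return 0;
    }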
Also backported f2c418079d.
row_drop_table_for_mysql(): Fix a regression introduced in MDEV-16515.
Similar to the follow-up fixes MDEV-16647 and MDEV-17470, we must make
the internal tables of FULLTEXT INDEX immune to kills, to avoid noise
and resource leakage on DROP TABLE or ALTER TABLE. (Orphan internal tables
would be dropped at the next InnoDB startup only.)
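The following is only an illustration of the general idea, with invented names (it is not InnoDB's actual code): the kill status is ignored for the duration of the auxiliary-table drops and restored afterwards, so an interrupted DROP TABLE or ALTER TABLE cannot leave orphan internal tables behind.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical session object with a kill flag.
    struct session {
        bool killed = false;
    };

    // Pretend "drop" that would normally bail out when the session is killed,
    // which is exactly what would leave an orphan table behind.
    static bool drop_one_table(session &s, const std::string &name) {
        if (s.killed)
            return false;
        std::printf("dropped %s\n", name.c_str());
        return true;
    }

    // Sketch: make the internal FULLTEXT tables immune to kills by ignoring
    // the kill flag while the auxiliary tables are dropped.
    static void drop_fts_aux_tables(session &s, const std::vector<std::string> &aux) {
        bool saved_killed = s.killed;
        s.killed = false;                    // ignore KILL for internal tables
        for (const std::string &name : aux)
            drop_one_table(s, name);
        s.killed = saved_killed;             // restore the original status
    }

    int main() {
        session s;
        s.killed = true;                     // the DROP/ALTER was interrupted
        drop_fts_aux_tables(s, {"FTS_0000_CONFIG", "FTS_0000_INDEX_1"});
        return 0;
    }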
Problem:
========
The MLOG_FILE_WRITE_CRYPT_DATA redo log record fails to apply the type to the
crypt_data present in the tablespace. While processing the doublewrite buffer
pages, the pages then fail to decrypt, which leads to a warning message.
Fix:
====
Set the type while parsing the MLOG_FILE_WRITE_CRYPT_DATA redo log record.
If the type or length is invalid, mark it as corrupted.
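A hedged sketch of the parsing idea follows; the field layout, names and constants are invented for illustration and do not match the real fil_crypt parsing code:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical crypt metadata, loosely modelled on the description above.
    enum crypt_type : uint8_t { CRYPT_SCHEME_UNENCRYPTED = 0, CRYPT_SCHEME_1 = 1 };
    constexpr size_t CRYPT_IV_LEN = 16;

    struct crypt_data_sketch {
        uint8_t type = CRYPT_SCHEME_UNENCRYPTED;
        uint8_t iv[CRYPT_IV_LEN] = {};
    };

    // Parse one record: set the type on the in-memory crypt_data, and treat an
    // unrecognised type or length as corruption instead of silently ignoring it.
    static bool parse_write_crypt_data(const uint8_t *rec, size_t rec_len,
                                       crypt_data_sketch &out, bool &corrupted) {
        corrupted = false;
        if (rec_len < 2) { corrupted = true; return false; }
        uint8_t type = rec[0];
        uint8_t len  = rec[1];
        if ((type != CRYPT_SCHEME_UNENCRYPTED && type != CRYPT_SCHEME_1)
            || len != CRYPT_IV_LEN || rec_len < 2 + size_t(len)) {
            corrupted = true;                // invalid type or length
            return false;
        }
        out.type = type;                     // the assignment the fix adds
        for (size_t i = 0; i < len; i++)
            out.iv[i] = rec[2 + i];
        return true;
    }

    int main() {
        uint8_t rec[2 + CRYPT_IV_LEN] = { CRYPT_SCHEME_1, CRYPT_IV_LEN };
        crypt_data_sketch d;
        bool corrupted;
        bool ok = parse_write_crypt_data(rec, sizeof rec, d, corrupted);
        std::printf("ok=%d corrupted=%d type=%u\n", ok, corrupted, (unsigned) d.type);
        return 0;
    }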
Fix the off-by-one overflow which was introduced with commit
b0fd06a6f2 (MDEV-15670 - unit.my_atomic failed in buildbot with
Signal 11 thrown)
Closes #1098.
If galera.galera_gtid_slave_sst_rsync is repeated more than once, it will fail due to an incorrect GTID position. After stopping the SLAVE node, also reset the GTID_SLAVE_POS variable.
The LOCK_prepared_stmt_count mutex can be freed when the server shuts down (when
thread_count goes down to 0), but it is still used inside THD::~THD() when the
Statement_map is destroyed.
The fix is to call Statement_map::reset() at a point where thread_count
is still positive, and to avoid locking LOCK_prepared_stmt_count in the THD
destructor.
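A simplified sketch of the ordering problem and the fix, using stand-in types instead of the real THD/Statement_map classes:

    #include <map>
    #include <mutex>
    #include <string>

    // Stand-ins for the global counter and its protecting mutex; in the server
    // these are prepared_stmt_count and LOCK_prepared_stmt_count.
    static std::mutex lock_prepared_stmt_count;
    static unsigned prepared_stmt_count = 0;

    struct Statement_map_sketch {
        std::map<unsigned long, std::string> stmts;

        // reset(): drop all statements and adjust the global counter under the
        // mutex.  This is safe only while the mutex is known to be alive.
        void reset() {
            std::lock_guard<std::mutex> guard(lock_prepared_stmt_count);
            prepared_stmt_count -= (unsigned) stmts.size();
            stmts.clear();
        }

        // The destructor must not touch the global mutex, because during
        // server shutdown it may already have been destroyed.
        ~Statement_map_sketch() { stmts.clear(); }
    };

    struct THD_sketch {
        Statement_map_sketch stmt_map;
        // Called while thread_count is still positive, before ~THD_sketch().
        void cleanup() { stmt_map.reset(); }
    };

    int main() {
        THD_sketch thd;
        thd.stmt_map.stmts[1] = "SELECT 1";
        prepared_stmt_count = 1;
        thd.cleanup();   // reset while the mutex is still valid
        return 0;
    }                    // ~THD_sketch() no longer locks the global mutex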
When performing a hash search via HASH_SEARCH we first look at a node's key
and then at its pointer to the next node in the chain. If we have those in one
cache line instead of two, we reduce memory reads.
I found dict_table_t, fil_space_t and buf_page_t suitable for such an improvement.
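An illustrative sketch of the layout idea with a made-up node type (the real changes are in dict_table_t, fil_space_t and buf_page_t): keeping the hash key and the next-node pointer adjacent makes it likely that one cache-line fetch serves both the key comparison and the chain traversal.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical hash node.  Placing 'key' and 'hash_next' next to each
    // other means HASH_SEARCH-style probing (compare the key, then follow the
    // chain) tends to touch one cache line per node instead of two.
    struct hash_node {
        uint64_t   key;        // compared first
        hash_node *hash_next;  // followed if the key does not match
        // ... colder fields of the object follow ...
        char payload[48];
    };

    static const hash_node *hash_search(const hash_node *bucket, uint64_t key) {
        for (const hash_node *n = bucket; n != nullptr; n = n->hash_next)
            if (n->key == key)      // key and hash_next share a cache line
                return n;
        return nullptr;
    }

    int main() {
        hash_node b = {2, nullptr, {}};
        hash_node a = {1, &b, {}};
        std::printf("found=%d offsetof(hash_next)=%zu\n",
                    hash_search(&a, 2) != nullptr,
                    offsetof(hash_node, hash_next));
        return 0;
    }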
During database recovery, a transaction with wsrep XID is
recovered from InnoDB in prepared state. However, when the
transaction is looked up with trx_get_trx_by_xid() in
innobase_commit_by_xid(), trx->xid gets cleared in
trx_get_trx_by_xid_low() and commit time serialization history
write does not update the wsrep XID in trx sys header for
that recovered trx. As a result the transaction gets
committed during recovery but the wsrep position does not
get updated appropriately.
As a fix, we preserve trx->xid for Galera over transaction
commit in recovery phase.
Fix authored by: Teemu Ollakka (GaleraCluster) and Marko Mäkelä.
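A hedged sketch of the idea with heavily simplified types (it is not the actual trx_t/XID code): instead of unconditionally clearing the XID when the prepared transaction is looked up, the recovery path keeps it, so the commit-time serialization history write can still update the wsrep position.

    #include <cstdio>
    #include <cstring>

    // Simplified stand-ins for XID and transaction objects.
    struct xid_sketch {
        long formatID = -1;                 // -1 means "null XID"
        char data[64] = {};
        bool is_null() const { return formatID == -1; }
        void set_null() { formatID = -1; }
    };

    struct trx_sketch {
        xid_sketch xid;
        bool wsrep_recovery = false;        // recovering a wsrep transaction?
    };

    // Sketch of the lookup: clearing the XID prevents a second commit or
    // rollback by XID, but during wsrep recovery the XID must survive until
    // commit writes the position to the trx sys header.
    static void on_xid_lookup(trx_sketch &trx) {
        if (!trx.wsrep_recovery)
            trx.xid.set_null();             // original behaviour
        // else: preserve trx.xid for the serialization history write
    }

    int main() {
        trx_sketch trx;
        trx.xid.formatID = 1;
        std::strcpy(trx.xid.data, "wsrep-gtid");
        trx.wsrep_recovery = true;
        on_xid_lookup(trx);
        std::printf("xid preserved=%d\n", !trx.xid.is_null());
        return 0;
    }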
modified: mysql-test/suite/galera/disabled.def
modified: mysql-test/suite/galera/r/galera_gcache_recover_full_gcache.result
modified: mysql-test/suite/galera/r/galera_gcache_recover_manytrx.result
modified: mysql-test/suite/galera/t/galera_gcache_recover_full_gcache.test
modified: mysql-test/suite/galera/t/galera_gcache_recover_manytrx.test
modified: storage/innobase/trx/trx0trx.cc
modified: storage/xtradb/trx/trx0trx.cc
When we have a nested subquery, a subquery that was dependent
may change to an independent one when we optimize the inner subqueries.
This is handled in st_select_lex::optimize_unflattened_subqueries.
Currently a subquery that changed from dependent to independent during the
optimization phase is still incorrectly shown as DEPENDENT in the EXPLAIN output.
This happens because we don't update used_tables for the WHERE clause, ON clause,
etc. after the optimization phase.
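A schematic sketch of the missing step, using an invented miniature condition node (the real server walks Item trees): after the inner subqueries are optimized, the cached table-dependency bitmaps of the WHERE/ON conditions have to be recomputed, otherwise EXPLAIN keeps reporting the subquery as DEPENDENT.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using table_map = uint64_t;
    constexpr table_map OUTER_REF_TABLE_BIT = 1ULL << 63;

    // Invented condition node with a cached used-tables bitmap.
    struct cond_sketch {
        table_map cached_used_tables;
        std::vector<cond_sketch> args;

        // Recompute the cached bitmap from the children, in the spirit of
        // Item::update_used_tables(): an outer reference that disappeared when
        // an inner subquery became independent is dropped from the cache.
        void update_used_tables() {
            table_map map = 0;
            for (cond_sketch &a : args) {
                a.update_used_tables();
                map |= a.cached_used_tables;
            }
            if (!args.empty())
                cached_used_tables = map;
        }

        bool is_dependent() const {
            return (cached_used_tables & OUTER_REF_TABLE_BIT) != 0;
        }
    };

    int main() {
        // A WHERE condition whose only outer reference was optimized away.
        cond_sketch where = {OUTER_REF_TABLE_BIT, {{1 /* inner table t1 */, {}}}};
        std::printf("before: dependent=%d\n", where.is_dependent());
        where.update_used_tables();              // the step the fix adds
        std::printf("after:  dependent=%d\n", where.is_dependent());
        return 0;
    }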
If an encrypted table is created during backup, then
mariabackup --backup could wrongly fail.
This caused a failure of the test mariabackup.huge_lsn once on buildbot.
This is due to the way InnoDB creates .ibd files: it would first
write a dummy page 0 with no encryption information. Due to this,
xb_fil_cur_open() could wrongly interpret that the table is not encrypted.
Subsequently, page_is_corrupted() would compare the computed page
checksum to the wrong checksum. (There are both "before" and "after"
checksums for encrypted pages.)
To work around this problem, we introduce a Boolean option
--backup-encrypted that is enabled by default. With this option,
Mariabackup will assume that a nonzero key_version implies that the
page is encrypted. We need this option in order to be able to copy
encrypted tables from MariaDB 10.1 or 10.2, because unencrypted pages
that were originally created before MySQL 5.1.48 could contain nonzero
garbage in the fields that were repurposed for encryption.
Later, MDEV-18128 would clean up the way .ibd files are created,
removing the need for this option.
page_is_corrupted(): Add missing const qualifiers, and do not check
space->crypt_data unless --skip-backup-encrypted has been specified.
xb_fil_cur_read(): After a failed page read, output a page dump.
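The following sketch shows only the decision that the option controls; the offset and helpers below are simplified placeholders, not the real fil0fil.h definitions: with --backup-encrypted (the default), a nonzero key_version in the page header is taken to mean that the page is encrypted, while --skip-backup-encrypted additionally requires crypt_data for the tablespace.

    #include <cstdint>
    #include <cstdio>

    // Assumed offset of the key_version field in the page header.
    constexpr unsigned FIL_PAGE_KEY_VERSION_OFFSET = 26;

    static uint32_t read_u32(const uint8_t *p) {
        return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
             | uint32_t(p[2]) << 8  | uint32_t(p[3]);
    }

    // Decide whether to verify the page with the "after encryption" checksum.
    // Without the option, a nonzero key_version that is merely pre-5.1.48
    // garbage must not be trusted, so crypt_data is consulted as well.
    static bool treat_as_encrypted(const uint8_t *page, bool backup_encrypted,
                                   bool space_has_crypt_data) {
        uint32_t key_version = read_u32(page + FIL_PAGE_KEY_VERSION_OFFSET);
        if (backup_encrypted)
            return key_version != 0;      // trust the page header alone
        return space_has_crypt_data && key_version != 0;
    }

    int main() {
        uint8_t page[16384] = {};
        page[FIL_PAGE_KEY_VERSION_OFFSET + 3] = 1;   // key_version = 1
        std::printf("--backup-encrypted: %d, --skip-backup-encrypted: %d\n",
                    treat_as_encrypted(page, true, false),
                    treat_as_encrypted(page, false, false));
        return 0;
    }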
This is a regression after MDEV-13671.
The bug is related to key part prefix lengths which are stored in SYS_FIELDS.
The storage format is not obvious and was handled incorrectly, which led to data
dictionary corruption.
SYS_FIELDS.POS actually contains the prefix length too, in case any key part
has a prefix length.
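For illustration, here is a hedged sketch of that encoding as described by the fix (check dict0load.cc/dict0crea.cc before relying on the exact layout): when any key part of the index has a prefix, POS stores the field position in the high 16 bits and the prefix length in the low 16 bits; otherwise it stores the plain position.

    #include <cstdint>
    #include <cstdio>

    // Encode SYS_FIELDS.POS for one key part.  'index_has_prefix' is true when
    // at least one field of the index has a prefix length.
    static uint32_t sys_fields_pos(uint32_t field_no, uint32_t prefix_len,
                                   bool index_has_prefix) {
        return index_has_prefix ? (field_no << 16) | prefix_len : field_no;
    }

    // Decode it again; the reader must know whether the wide format is in use.
    static void decode_pos(uint32_t pos, bool index_has_prefix,
                           uint32_t &field_no, uint32_t &prefix_len) {
        if (index_has_prefix) {
            field_no   = pos >> 16;
            prefix_len = pos & 0xFFFF;
        } else {
            field_no   = pos;
            prefix_len = 0;
        }
    }

    int main() {
        // Second field of an index, prefixed to 10 characters.
        uint32_t pos = sys_fields_pos(1, 10, true);
        uint32_t f, p;
        decode_pos(pos, true, f, p);
        std::printf("POS=0x%x field=%u prefix=%u\n",
                    (unsigned) pos, (unsigned) f, (unsigned) p);
        return 0;
    }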
innobase_rename_column_try(): fixed prefixes handling
Tests for prefixed indexes added too.
Closes #1063
would not hide more interesting information, like invalid memory accesses.
some "leaks" are expected
- partly this is due to weird options parsing, that runs twice, and
does not free memory after the first run.
- also we do not mind to exit() whenever it makes sense, without full
cleanup.
Orphan #sql* tables may remain after ALTER TABLE
was interrupted by timeout or KILL or client disconnect.
This is a regression caused by MDEV-16515.
Similar to temporary tables (MDEV-16647), we had better ignore the
KILL when dropping the original table in the final part of ALTER TABLE.
Closes #1020
This fixes a regression that was introduced in MySQL 5.6.6
in an error handling code path, in the following change:
commit 024f363d6b5f09b20d1bba411af55be95c7398d3
Author: kevin.lewis@oracle.com <>
Date: Fri Jun 15 09:01:42 2012 -0500
Bug #14169459 INNODB; DROP TABLE DOES NOT DELETE THE IBD FILE
FOR A TEMPORARY TABLE.
- Refactor code to isolate page validation in page_is_corrupted() function.
- Introduce an --extended-validation parameter (default OFF) for mariabackup
--backup to enable decryption of encrypted uncompressed pages during
backup.
- mariabackup still always checks the checksum on encrypted data;
this is needed to detect partially written pages.
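A schematic sketch of the resulting behaviour (the helper names are invented): the checksum over the bytes as stored on disk is always verified, which is what catches torn writes; decrypting and validating the page contents happens only when --extended-validation is given.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Invented stand-ins for the real checksum and decryption routines.
    static bool stored_checksum_ok(const uint8_t *, size_t)   { return true; }
    static bool decrypt_and_validate(const uint8_t *, size_t) { return false; }

    // Sketch of the decision for an encrypted, uncompressed page during --backup.
    static bool page_is_corrupted_sketch(const uint8_t *page, size_t size,
                                         bool extended_validation) {
        // The checksum of the encrypted bytes is always verified; this is what
        // detects partially written pages.
        if (!stored_checksum_ok(page, size))
            return true;
        // Only with --extended-validation is the page also decrypted and its
        // decrypted contents validated.
        if (extended_validation && !decrypt_and_validate(page, size))
            return true;
        return false;
    }

    int main() {
        uint8_t page[16384] = {};
        std::printf("default: %d  --extended-validation: %d\n",
                    page_is_corrupted_sketch(page, sizeof page, false),
                    page_is_corrupted_sketch(page, sizeof page, true));
        return 0;
    }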
In MDEV-13103, I made a mistake in the error handling of
page_compressed=1 decryption when the default
innodb_compression_algorithm=zlib is used.
Due to this mistake, with certain versions of zlib,
MariaDB would fail to detect a corrupted page.
The problem was uncovered by the following tests:
mariabackup.unencrypted_page_compressed
mariabackup.encrypted_page_compressed
Write a test case that computes valid crc32 checksums for
an encrypted page, but zeroes out the payload area, so
that the checksum after decryption fails.
xb_fil_cur_read(): Validate the page number before trying
any checksum calculation or decrypting or decompression.
Also, skip zero-filled pages. For page_compressed pages,
ensure that the FIL_PAGE_TYPE was changed. Also, reject
FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED if no decryption was attempted.
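A hedged outline of that ordering follows; the names and page-type constants are placeholders, not the real xtrabackup/fil0fil definitions:

    #include <cstdint>
    #include <cstdio>

    // Placeholder page-type tags standing in for FIL_PAGE_PAGE_COMPRESSED and
    // FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED.
    enum class page_type { plain, compressed, compressed_encrypted };

    struct page_view {
        uint32_t  page_no;
        page_type type;
        bool      all_zero;
    };

    enum class verdict { ok, skip, corrupted };

    // Sketch of the per-page validation order in the read loop.
    static verdict validate_page(const page_view &p, uint32_t expected_page_no,
                                 bool decryption_attempted, bool decompressed) {
        // 1. The page number must match before any checksum, decryption or
        //    decompression work is attempted.
        if (!p.all_zero && p.page_no != expected_page_no)
            return verdict::corrupted;
        // 2. Zero-filled pages are skipped, not treated as corruption.
        if (p.all_zero)
            return verdict::skip;
        // 3. A page_compressed+encrypted page that nobody decrypted is rejected.
        if (p.type == page_type::compressed_encrypted && !decryption_attempted)
            return verdict::corrupted;
        // 4. After decompression the page type must have changed away from the
        //    "page compressed" marker.
        if (decompressed && p.type == page_type::compressed)
            return verdict::corrupted;
        return verdict::ok;
    }

    int main() {
        page_view zero = {0, page_type::plain, true};
        std::printf("zero page skipped=%d\n",
                    validate_page(zero, 5, false, false) == verdict::skip);
        return 0;
    }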