Tablespaces created with MySQL 5.6 do not work with MariaDB 10.1
Analysis: The problem is that the DATA_DIR bit in the tablespace flags
is in a different position in MySQL 5.6 than in MariaDB 10.1.
Fix: If we detect a difference between the dictionary flags and the
tablespace flags, we clear the DATA_DIR flag and compare again. The
remote tablespace is also searched for even when the DATA_DIR flag is
not set.
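A minimal sketch of the retry comparison, assuming an illustrative mask
name and bit position (the real flag layout lives in fsp0fsp.h):

#include <cstdint>

// Hypothetical mask; the real DATA_DIR bit position differs between
// MySQL 5.6 and MariaDB 10.1, which is the root cause described above.
constexpr uint32_t FLAGS_MASK_DATA_DIR = 1u << 6;

// Compare dictionary flags against tablespace flags; on mismatch, clear
// the DATA_DIR bit on both sides and compare again.
bool flags_compatible(uint32_t dict_flags, uint32_t fsp_flags) {
    if (dict_flags == fsp_flags) {
        return true;
    }
    return (dict_flags & ~FLAGS_MASK_DATA_DIR)
        == (fsp_flags & ~FLAGS_MASK_DATA_DIR);
}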
Analysis: When pages in the doublewrite buffer are analyzed, compressed
pages do not have a correct checksum.
Fix: Decompress the page before the checksum is compared. If
decompression fails, we still check the checksum, so corrupted pages
are found. If decompression succeeds, the page again contains the
original checksum.
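A sketch of that order of operations, with placeholder declarations
standing in for the real InnoDB routines:

#include <cstddef>
#include <cstdint>

// Placeholder helpers; the real InnoDB routines and signatures differ.
bool page_is_compressed(const uint8_t* page);
bool decompress_page(uint8_t* page, size_t physical_size);  // true on success
bool checksum_matches(const uint8_t* page, size_t physical_size);

// Validate a page found in the doublewrite buffer. A compressed page
// carries the checksum of its uncompressed image, so decompress first;
// if decompression fails, the checksum check below still flags the page
// as corrupted.
bool doublewrite_page_ok(uint8_t* page, size_t physical_size) {
    if (page_is_compressed(page)) {
        (void) decompress_page(page, physical_size);
    }
    return checksum_matches(page, physical_size);
}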
The problem was that the link file (.isl) is also opened in O_DIRECT
mode, and if this fails the whole CREATE TABLE fails with an internal
error.
Fixed by not using O_DIRECT on link files, as they are used only at
CREATE TABLE and startup and do not contain real data. O_DIRECT
failures are silently ignored for data files if O_DIRECT is not
supported by the file system of the data directory.
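A minimal POSIX sketch of the difference in how the two kinds of files
are opened (function names are illustrative; the real open logic is in
the os layer):

#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // O_DIRECT is a Linux/glibc extension
#endif
#include <fcntl.h>

// Data files may try O_DIRECT and silently fall back to a buffered open
// if the filesystem rejects it.
int open_data_file(const char* path) {
    int fd = open(path, O_RDWR | O_DIRECT);
    if (fd < 0) {
        fd = open(path, O_RDWR);   // O_DIRECT unsupported: fall back
    }
    return fd;
}

// .isl link files are small metadata files read only at CREATE TABLE
// and startup, so they are opened without O_DIRECT in the first place.
int open_link_file(const char* path) {
    return open(path, O_RDWR);
}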
Make sure that on decrypt we do not dereference a NULL pointer, and
that if a page contains an undefined FIL_PAGE_FILE_FLUSH_LSN field,
i.e. when the page is not the first page or is not in the system
tablespace, we clear it.
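A sketch of the clearing step, assuming the standard FIL header offset;
the NULL check on crypt_data is only noted in a comment:

#include <cstdint>
#include <cstring>

// Offset of the 8-byte FIL_PAGE_FILE_FLUSH_LSN field in the FIL header;
// it is only meaningful on page 0 of the system tablespace (space 0).
constexpr unsigned FIL_PAGE_FILE_FLUSH_LSN = 26;

// After decrypting, clear the flush-LSN field on any page where it is
// undefined so stale bytes cannot be misinterpreted later. (The
// crypt_data pointer, not shown here, must be NULL-checked before use.)
void clear_undefined_flush_lsn(uint8_t* page, uint32_t space_id,
                               uint32_t page_no) {
    if (space_id != 0 || page_no != 0) {
        std::memset(page + FIL_PAGE_FILE_FLUSH_LSN, 0, 8);
    }
}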
There were two problems. Firstly, if a page in the change buffer (ibuf)
is encrypted but decryption failed, we should not allow InnoDB to
start, because this means that the system tablespace is encrypted and
not usable. Secondly, if a page decryption failure is detected, we
should return false from buf_page_decrypt_after_read.
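A sketch of the return-value contract, with a placeholder decryption
routine and an illustrative out-parameter for the startup decision:

#include <cstdint>

// Placeholder for the real decryption routine (declaration only).
bool decrypt_page(uint8_t* page);   // true on success

// A failed decrypt is reported as false so the caller treats the page
// as unreadable, and for the system tablespace (which holds the change
// buffer) startup is refused instead of continuing with unusable data.
bool page_decrypt_after_read(uint8_t* page, bool in_system_tablespace,
                             bool* refuse_startup) {
    if (!decrypt_page(page)) {
        *refuse_startup = in_system_tablespace;
        return false;
    }
    *refuse_startup = false;
    return true;
}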
Analysis: Lengths which are not UNIV_SQL_NULL, but bigger than the
following number, indicate that a field contains a reference to an
externally stored part of the field in the tablespace. The length
field then contains the sum of the following flag and the locally
stored length. This was incorrectly defined as
#define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_MAX)
when it should be
#define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)
Additionally, we need to disable support for > 16K page sizes for
row-compressed tables, because a compressed page directory entry
reserves 14 bits for the start offset and 2 bits for flags. This
limits the uncompressed page size to 16K; to support larger pages, the
page directory entry would need to be larger.
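The 16K limit follows directly from the 14-bit offset field; a compact
illustration (the constant names here are not the real InnoDB
identifiers):

// A compressed page directory entry is 16 bits: 14 bits for the record
// start offset and 2 bits for flags. 2^14 = 16384, so the uncompressed
// page size of ROW_FORMAT=COMPRESSED tables cannot exceed 16K.
constexpr unsigned PAGE_ZIP_DIR_SLOT_OFFSET_BITS = 14;
constexpr unsigned MAX_UNCOMPRESSED_PAGE_SIZE =
    1u << PAGE_ZIP_DIR_SLOT_OFFSET_BITS;

static_assert(MAX_UNCOMPRESSED_PAGE_SIZE == 16384,
              "14-bit start offsets limit row-compressed tables to 16K pages");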
Analysis: The problem is that punch hole does not know the actual page
size of the page, or whether the page belongs to a data file or a log
file.
Fix: Pass the file type and page size down to the OS layer to be used
when trim is called. Also fix an unsafe NULL pointer access to the
actual write_size.
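A sketch of the extended interface, under the assumption that the trim
helper takes the file type and page size explicitly; the punch-hole
system call itself is omitted:

#include <cstddef>

enum class FileType { DATA, LOG };

// Illustrative trim helper: it now receives the file type and the
// actual page size, and tolerates a NULL write_size. It returns the
// unused tail that the real code would punch a hole over; the
// fallocate() call itself is left out of this sketch.
std::size_t trim_unused_tail(FileType type, std::size_t page_size,
                             const std::size_t* write_size) {
    if (type != FileType::DATA) {
        return 0;                          // never trim log files
    }
    std::size_t used = write_size ? *write_size : page_size;
    return page_size > used ? page_size - used : 0;
}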
Analysis: The problem seems to be that we allow creating or altering a
table to use an encryption_key_id that does not exist when the
original table is not currently encrypted. Secondly, we should not do
key rotation on tables that are not encrypted or on tablespaces that
cannot be found in the tablespace cache.
Fix: Do not allow creating an unencrypted table with a non-default
encryption key, and do not rotate tablespaces that are not encrypted
(FIL_SPACE_ENCRYPTION_OFF) or cannot be found in the tablespace cache.
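The two checks, sketched with illustrative names (the default key id
value and the option handling differ in the real code):

#include <cstdint>

constexpr uint32_t DEFAULT_ENCRYPTION_KEY_ID = 1;
enum class Encryption { OFF, ON, DEFAULT };

// Reject CREATE/ALTER combinations where the table is not encrypted but
// a non-default encryption_key_id was given.
bool create_options_valid(Encryption mode, uint32_t key_id) {
    return !(mode == Encryption::OFF && key_id != DEFAULT_ENCRYPTION_KEY_ID);
}

// Key rotation skips spaces that are not encrypted or that are no
// longer present in the tablespace cache.
bool should_rotate(bool found_in_cache, Encryption mode) {
    return found_in_cache && mode != Encryption::OFF;
}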
Added encryption support for online ALTER TABLE where InnoDB temporary
files are used. Added similar support also for tables containing
full-text indexes.
Made sure that a table remains encrypted during DISCARD and IMPORT
TABLESPACE.
Analysis: The problem was that in fil_read_first_page we do detect that
the table has encryption information and that the encryption service
or the used key_id is not available, but then we just printed a fatal
error message, which causes the above assertion.
Fix: When we open a single-table tablespace, if it has encryption
information (crypt_data), store this crypt data in the table
structure. When we open a table and find out that the tablespace is
not available, check whether the table has encryption information and,
from that, whether the encryption service or the used key_id is
unavailable. If so, add an additional warning for the SQL layer.
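A sketch of the warning path, using simplified stand-ins for the table
and crypt-data structures:

#include <cstdint>
#include <cstdio>

// Simplified stand-ins for the real dictionary/crypt structures.
struct CryptData { uint32_t key_id; bool key_available; };
struct Table     { const char* name; const CryptData* crypt_data; };

// When the tablespace cannot be opened, consult the crypt data saved on
// the table when the first page was read and emit a warning for the SQL
// layer instead of a fatal error.
void report_missing_tablespace(const Table& t) {
    if (t.crypt_data != nullptr && !t.crypt_data->key_available) {
        std::fprintf(stderr,
            "Warning: table %s is encrypted but the encryption service "
            "or key id %u is not available\n",
            t.name, static_cast<unsigned>(t.crypt_data->key_id));
    }
}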
Analysis: The problem was that when a new tablespace is created, a
default encryption info is also created and stored in the tablespace.
Later, new encryption information was created with the correct key_id,
but that did not affect the IV.
Fix: Push the encryption mode and key_id down to the lower levels and
create the correct encryption info when a new tablespace is created.
This fix does not contain a test case because currently an incorrect
encryption key causes page corruption and a lot of error messages in
the error log, causing mtr to fail.
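A sketch of the idea of passing the options down to tablespace
creation (types and names are illustrative):

#include <cstdint>

enum class Encryption { OFF, ON, DEFAULT };

// Simplified stand-in for the per-tablespace encryption metadata.
struct CryptData {
    Encryption mode;
    uint32_t   key_id;
    // The IV and key version would live here as well.
};

// The mode and key_id chosen at the SQL layer are handed down to
// tablespace creation, so the crypt data (including its IV) is created
// once with the right key instead of being created with defaults and
// replaced afterwards.
CryptData make_crypt_data_for_new_space(Encryption mode, uint32_t key_id) {
    return CryptData{mode, key_id};
}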
Analysis: In fil_crypt_space_needs_rotation we first make sure that the
tablespace is found, and then separately that it is a normal
tablespace. Thus, the tablespace could be dropped between these two
function calls.
Fix: If the space is not found in fil_system, return tablespace type
ULINT_UNDEFINED and naturally do not continue rotating the space.
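A sketch of the fixed check, assuming a single lookup that reports
"not found" via ULINT_UNDEFINED:

#include <cstdint>

// InnoDB-style ulint; ULINT_UNDEFINED is "all bits set".
using ulint = unsigned long;
constexpr ulint ULINT_UNDEFINED = static_cast<ulint>(-1);

// Placeholder lookup (declaration only): returns the tablespace purpose,
// or ULINT_UNDEFINED when the space is no longer in fil_system.
ulint get_space_type(uint32_t space_id);

// One lookup that also reports "not found", so a tablespace dropped
// concurrently simply stops being rotated.
bool space_needs_rotation(uint32_t space_id, ulint normal_type) {
    ulint type = get_space_type(space_id);
    return type != ULINT_UNDEFINED && type == normal_type;
}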
Analysis: There is a race between DROP TABLE and the encryption threads
that could cause an encryption thread to enter a mutex that has
already been released.
Fix: When destroying crypt_data, first enter the mutex and mark the
crypt data unavailable, then release the memory and clean up the data.
This should make the race less probable. Additionally, added big_test
for create_or_replace, as it could hit the testcase timeout if you
have slow I/O (tested that the testcase passes with --mem).
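A sketch of the ordering, with a simplified stand-in for the crypt-data
structure:

#include <mutex>

// Simplified stand-in for fil_space_crypt_t.
struct CryptData {
    std::mutex mutex;
    bool       closing = false;
    // key material, rotation state, ...
};

// Mark the crypt data unavailable while holding its mutex before
// freeing it, so a rotation thread that checks the flag under the mutex
// backs off. As noted above, this narrows the race window rather than
// eliminating it.
void destroy_crypt_data(CryptData*& crypt) {
    {
        std::lock_guard<std::mutex> guard(crypt->mutex);
        crypt->closing = true;
    }
    delete crypt;
    crypt = nullptr;
}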
Analysis: The problem was that the actual payload size (page size)
after compression was handled incorrectly during encryption.
Additionally, some of the variables were not initialized.
Fixed by encrypting/decrypting only the actual compressed page size.
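A sketch of bounding the encrypted payload to the compressed length
(the cipher call is a placeholder declaration):

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Placeholder for the real cipher call (declaration only).
void encrypt_bytes(uint8_t* dst, const uint8_t* src, std::size_t len);

// For a page-compressed page, only the actual payload (the compressed
// length, never more than the physical page size) is encrypted, instead
// of the full uncompressed page size.
void encrypt_compressed_page(uint8_t* dst, const uint8_t* src,
                             std::size_t compressed_len,
                             std::size_t physical_size) {
    std::size_t payload = std::min(compressed_len, physical_size);
    encrypt_bytes(dst, src, payload);
}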
Analysis: The problem is that both encrypted tables and page-compressed
tables use the FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION
to store required metadata. Furthermore, for compressed-only tables the
current code skips compression.
Fixes:
- Only encrypted pages store key_version at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
no need to fix.
- Only compressed pages store the compression algorithm at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
no need to fix, as they have a different page type, FIL_PAGE_PAGE_COMPRESSED.
- Compressed and encrypted pages now use a new page type, FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED;
key_version is stored at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION, and the compression
method is stored after the FIL header in the same way as the compressed size, so that
FIL_PAGE_COMPRESSED_SIZE is stored first, followed by FIL_PAGE_COMPRESSION_METHOD (see the sketch after this list).
- Fix buf_page_encrypt_before_write function to really compress pages if compression is enabled
- Fix buf_page_decrypt_after_read function to really decompress pages if compression is used
- Small style fixes
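A sketch of the resulting header layout for compressed and encrypted
pages. The FIL header offsets are the standard InnoDB ones; the
page-type value and the 2-byte field widths are assumptions for
illustration, and the real constants live in fil0fil.h:

#include <cstdint>

constexpr unsigned FIL_PAGE_TYPE = 24;
constexpr unsigned FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION = 26;
constexpr unsigned FIL_PAGE_DATA = 38;              // end of FIL header
constexpr uint16_t PAGE_COMPRESSED_ENCRYPTED_TYPE = 37401;  // illustrative

// Big-endian stores, as used in InnoDB page headers.
static void store2(uint8_t* p, uint16_t v) {
    p[0] = uint8_t(v >> 8); p[1] = uint8_t(v);
}
static void store4(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}

// The page type marks the compressed+encrypted combination, key_version
// reuses the flush-LSN slot in the FIL header, and the compressed size
// plus the compression method follow the FIL header, in that order.
void write_compressed_encrypted_header(uint8_t* page, uint32_t key_version,
                                       uint16_t compressed_size,
                                       uint16_t compression_method) {
    store2(page + FIL_PAGE_TYPE, PAGE_COMPRESSED_ENCRYPTED_TYPE);
    store4(page + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION, key_version);
    store2(page + FIL_PAGE_DATA, compressed_size);        // FIL_PAGE_COMPRESSED_SIZE
    store2(page + FIL_PAGE_DATA + 2, compression_method); // FIL_PAGE_COMPRESSION_METHOD
}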