Analysis: When a page is read from an encrypted table and the page cannot be
decrypted because of a bad key (or an incorrect encryption algorithm or
method), the page was incorrectly left in the buffer pool.
Fix: Remove the page from the buffer pool and from pending IO.
Analysis: The problem was that in fil_read_first_page we do detect that the
table has encryption information and that the encryption service
or the used key_id is not available. But we then just printed a
fatal error message, which caused the assertion above.
Fix: When we open a single-table tablespace and it has encryption
information (crypt_data), store this crypt data in the table
structure. When we open a table and find out that the tablespace
is not available, check whether the table has encryption information
and whether the encryption service or the used key_id is unavailable.
If so, add an additional warning for the SQL layer.
MDEV-8409: Changing file-key-management-encryption-algorithm causes crash and no real info why
Analysis: Both bugs cover two different error cases. Firstly, at startup
the server reads the latest checkpoint, but the requested key_version,
key management plugin, or encryption algorithm or method is not found,
leading to a corrupted log entry. Secondly, and similarly, when reading the
system tablespace, the requested key_version, key management plugin, or
encryption algorithm or method is not found, leading to buffer pool page
corruption.
Fix: Firstly, when reading the checkpoint at startup, check whether the log
record may be encrypted, and if we find that it could be, print an error
message and do not start the server. Secondly, if a page in the buffer pool
seems corrupted but we find that there is crypt_info, print an additional
error message before asserting.
Added a new dynamic configuration variable innodb_buf_dump_status_frequency
to configure how often the buffer pool dump status is printed in the logs.
It is a number in the range [0, 100] that tells at which percentage steps
the buffer pool dump status should be printed. E.g. 10 means that the buffer
pool dump status is printed every time another 10% of the buffer pool pages
have been dumped. Default is 0 (only the start and end status is printed).
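As a hedged illustration (not the actual buf0dump.cc code; the names below
are made up for the example), the percentage step can be thought of as:

    #include <cstdio>
    #include <cstdint>

    /* Illustrative sketch only: print a progress line roughly every
       frequency_pct percent of the dumped pages; 0 disables the
       intermediate progress output. */
    static void report_dump_progress(uint64_t dumped, uint64_t total,
                                     unsigned frequency_pct)
    {
        if (frequency_pct == 0 || total == 0) {
            return;                 /* only start/end status is printed */
        }

        uint64_t step = (total * frequency_pct) / 100;
        if (step == 0) {
            step = 1;
        }

        if (dumped % step == 0) {
            std::printf("Buffer pool dump: %llu/%llu pages dumped\n",
                        (unsigned long long) dumped,
                        (unsigned long long) total);
        }
    }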
Analysis: The problem was that the actual payload size (page size) after
compression was handled incorrectly during encryption. Additionally, some of
the variables were not initialized.
Fixed by encrypting/decrypting only the actual compressed page size.
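A minimal sketch of the idea, with an illustrative placeholder cipher (this
is not the real encryption interface): only the compressed payload length
goes through the cipher, the rest of the physical page is carried over
unchanged.

    #include <cstdint>
    #include <cstring>
    #include <algorithm>

    /* Stand-in for the real cipher; a plain XOR keeps the sketch
       self-contained. */
    static void cipher_bytes(uint8_t* dst, const uint8_t* src, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            dst[i] = src[i] ^ 0xAA;
        }
    }

    /* Sketch: payload_size is the actual page size after compression;
       only those bytes are encrypted/decrypted, never the full
       physical_size of the block. */
    static void crypt_compressed_page(uint8_t* dst, const uint8_t* src,
                                      size_t physical_size,
                                      size_t payload_size)
    {
        size_t len = std::min(payload_size, physical_size);
        cipher_bytes(dst, src, len);
        if (len < physical_size) {
            /* bytes beyond the compressed payload are copied as-is */
            std::memcpy(dst + len, src + len, physical_size - len);
        }
    }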
Analysis: The problem is that there are not enough temporary buffer slots
for pending IO requests.
Fixed by allocating the same number of temporary buffer slots as there
are maximum pending IO requests.
Analysis: The problem is that both encrypted tables and compressed tables use
the FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION to store the
required metadata. Furthermore, for tables that are only compressed, the
current code skips compression.
Fixes:
- Pages that are only encrypted store key_version at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
no need to fix.
- Pages that are only compressed store the compression algorithm at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
no need to fix, as they have a different page type, FIL_PAGE_PAGE_COMPRESSED.
- Compressed and encrypted pages now use a new page type, FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED;
key_version is stored at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION and the compression
method is stored after the FIL header, in the same way as the compressed size, so that
FIL_PAGE_COMPRESSED_SIZE is stored first, followed by FIL_PAGE_COMPRESSION_METHOD
(see the layout sketch after this list).
- Fix the buf_page_encrypt_before_write function to really compress pages if compression is enabled.
- Fix the buf_page_decrypt_after_read function to really decompress pages if compression is used.
- Small style fixes.
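As a rough layout sketch, assuming the standard FIL header offsets (the
numeric constants and helper functions below are illustrative, not copied
from fil0fil.h):

    #include <cstdint>

    /* Illustrative offsets; 24/26/38 follow the usual InnoDB FIL header
       (FIL_PAGE_TYPE, FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION,
        FIL_PAGE_DATA). */
    enum {
        PAGE_TYPE_OFS   = 24,
        KEY_VERSION_OFS = 26,
        PAGE_DATA_OFS   = 38
    };

    static void write_be2(uint8_t* p, uint16_t v)
    {
        p[0] = uint8_t(v >> 8); p[1] = uint8_t(v);
    }

    static void write_be4(uint8_t* p, uint32_t v)
    {
        p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
        p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
    }

    /* Sketch: mark a page as compressed+encrypted; key_version goes into
       the FIL header, compressed size and compression method right after
       it. page_type stands in for the real
       FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED value. */
    static void mark_compressed_encrypted(uint8_t* page, uint16_t page_type,
                                          uint32_t key_version,
                                          uint16_t compressed_size,
                                          uint16_t compression_method)
    {
        write_be2(page + PAGE_TYPE_OFS, page_type);
        write_be4(page + KEY_VERSION_OFS, key_version);
        write_be2(page + PAGE_DATA_OFS, compressed_size);        /* FIL_PAGE_COMPRESSED_SIZE   */
        write_be2(page + PAGE_DATA_OFS + 2, compression_method); /* FIL_PAGE_COMPRESSION_METHOD */
    }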
Make sure that when we publish the crypt_data we access the
memory cache of the tablespace crypt_data. Make sure that
crypt_data is stored whenever it is really needed.
All this is not yet enough in my opinion, because:
sql/encryption.cc has DBUG_ASSERT(scheme->type == 1), i.e.
crypt_data->type == CRYPT_SCHEME_1.
However, from the InnoDB point of view we have a global crypt_data
for every tablespace. When we change variables in crypt_data
we take a mutex. However, when we use crypt_data for
encryption/decryption we use a pointer to this global
structure and no mutex to protect against changes to
crypt_data.
Tablespace encryption starts in fil_crypt_start_encrypting_space
from crypt_data that has crypt_data->type = CRYPT_SCHEME_UNENCRYPTED,
and later we write CRYPT_SCHEME_1 to page 0 and finally we publish
that to the memory cache.
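The concern can be pictured with a small sketch (generic C++ primitives,
not the actual fil0crypt.cc code): the writer path below takes the mutex,
while the encryption/decryption path reads the shared structure without it.

    #include <mutex>

    enum crypt_scheme_t { CRYPT_SCHEME_UNENCRYPTED = 0, CRYPT_SCHEME_1 = 1 };

    struct crypt_data_t {
        std::mutex     mutex;
        crypt_scheme_t type = CRYPT_SCHEME_UNENCRYPTED;
    };

    /* Writer: state changes happen under the mutex ... */
    void publish_encrypted(crypt_data_t& cd)
    {
        std::lock_guard<std::mutex> guard(cd.mutex);
        cd.type = CRYPT_SCHEME_1;
    }

    /* ... but the encryption/decryption path reads the same field through
       a plain pointer with no lock, so it can observe the transition
       mid-way. This is the unprotected access described above. */
    bool page_should_be_encrypted(const crypt_data_t* cd)
    {
        return cd->type == CRYPT_SCHEME_1;   /* unsynchronized read */
    }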
Analysis: The problem was that tablespaces that are not encrypted might not
have crypt_data stored on disk.
Fixed by always creating crypt_data in the memory cache of the tablespace.
MDEV-8138: strange results from encrypt-and-grep test
Analysis: crypt_data->type was not updated correctly in the memory
cache. This caused a problem with the state transfer
encrypted => unencrypted => encrypted.
Fixed by updating the memory cache of crypt_data->type correctly, based on
the current srv_encrypt_tables value, to either CRYPT_SCHEME_1 or
CRYPT_SCHEME_UNENCRYPTED.
Analysis: The problem was that we did create crypt data for an encrypted
table, but this new crypt data was not written to page 0. Instead, a default
crypt data was written to page 0 at table creation.
Fixed by explicitly writing the new crypt data to page 0 after successful
table creation.
With changes:
* update tests to pass (new encryption/encryption_key_id syntax).
* did not merge the code that makes the engine aware of the encryption mode
(CRYPT_SCHEME_1_CBC, CRYPT_SCHEME_1_CTR, storing it on disk, etc.),
because the encryption plugin is handling it now.
* compression+encryption did not work in either branch before the
merge, and it does not work after the merge. It might be more
broken after the merge though; some of that code was not merged.
* page checksumming code was not moved (moving of page checksumming
from fil_space_encrypt() to fil_space_decrypt() was not merged).
* restored deleted lines in buf_page_get_frame(), otherwise the
innodb_scrub test failed.
Step 3:
-- Make encryption_algorithm changeable by SUPER
-- Remove AES_ECB method from encryption_algorithms
-- Support AES method change by storing used method on InnoDB/XtraDB objects
-- Store used AES method to crypt_data as different crypt types
-- Store used AES method to redo/undo logs and checkpoint
-- Store used AES method on every encrypted page after key_version
-- Add test
Step 2:
-- Introduce a temporary memory array in the buffer pool from which to
allocate temporary memory for encryption/compression
-- Rename PAGE_ENCRYPTION -> ENCRYPTION
-- Rename PAGE_ENCRYPTION_KEY -> ENCRYPTION_KEY
-- Rename innodb_default_page_encryption_key -> innodb_default_encryption_key
-- Allow enabling/disabling encryption for tables by changing
ENCRYPTION to an enum having the values DEFAULT, ON, OFF
-- In create table store crypt_data if ENCRYPTION is ON or OFF
-- Do not crypt tablespaces having ENCRYPTION=OFF
-- Store encryption mode to crypt_data and redo-log
Step 1:
-- Remove page encryption from dictionary (per table
encryption will be handled by storing crypt_data to page 0)
-- Remove encryption/compression from os0file and all functions
before that (compression will be added to buf0buf.cc)
-- Use same CRYPT_SCHEME_1 for all encryption methods
-- Do some code cleanups to conform to the InnoDB coding style
This patch ports the work that Facebook has performed
to make innochecksum handle compressed tables.
The basic idea is to use actual InnoDB code to perform
checksum verification rather than duplicating it in innochecksum.cc.
To make this work, InnoDB code has been annotated with
lots of #ifndef UNIV_INNOCHECKSUM so that it can be
compiled outside of storage/innobase.
A new testcase is also added that verifies that innochecksum
works on compressed/non-compressed tables.
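Schematically, the annotation pattern looks like the sketch below (a
placeholder function, not a copy of the real code): the shared checksum
logic compiles both inside the server and inside the standalone tool, with
server-only pieces compiled out when UNIV_INNOCHECKSUM is defined.

    /* Placeholder illustration of the #ifndef UNIV_INNOCHECKSUM pattern. */
    #ifndef UNIV_INNOCHECKSUM
    /* server-only headers and declarations would be pulled in here */
    #endif

    unsigned long page_checksum(const unsigned char* page, unsigned long len)
    {
        unsigned long sum = 0;
        for (unsigned long i = 0; i < len; i++) {
            sum = sum * 31 + page[i];   /* stand-in for the real checksum */
        }

    #ifndef UNIV_INNOCHECKSUM
        /* server-only bookkeeping (monitoring, stats) compiled out of
           the standalone innochecksum build */
    #endif

        return sum;
    }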
Merged from commit fabc79d2ea976c4ff5b79bfe913e6bc03ef69d42
from https://code.google.com/p/google-mysql/
The actual steps to produce this patch are:
take innochecksum from 5.6.14
apply changes in innodb from facebook patches needed to make innochecksum compile
apply changes in innochecksum from facebook patches
add handcrafted testcase
The referenced facebook patches used are:
91e25120e7847fe76ea51135628a5a4dbf7c240c
in file buf0mtflu.cc line 439
Analysis: At shutdown, multi-threaded flush sends exit work
items to all mtflush threads. We wait until the work queue is
empty. However, as we did not hold the mutex, some other thread
could also put work items into the work queue.
Fix: Take the mutex before adding the exit work items to the work queue and
wait until all work items are really processed. Release the
mutex after we have marked that multi-threaded flush is no
longer active.
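A minimal sketch of that shutdown ordering, using generic C++ primitives
instead of the InnoDB work queue and ib mutexes (all names here are
illustrative):

    #include <mutex>
    #include <condition_variable>
    #include <deque>

    struct work_queue_t {
        std::mutex              mtx;
        std::condition_variable cv;
        std::deque<int>         items;          /* -1 = exit work item */
        bool                    mtflush_active = true;
    };

    /* Sketch: post the exit items while holding the mutex, wait until the
       workers have drained the queue, and mark mtflush inactive before the
       mutex is released, so nobody can queue new work in between. */
    void shutdown_mtflush(work_queue_t& q, int n_threads)
    {
        std::unique_lock<std::mutex> lock(q.mtx);

        for (int i = 0; i < n_threads; i++) {
            q.items.push_back(-1);
        }
        q.cv.notify_all();

        /* workers pop items under the same mutex and notify when empty */
        q.cv.wait(lock, [&q] { return q.items.empty(); });

        q.mtflush_active = false;
    }   /* mutex released here */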
Fix test failure in innodb_bug12902967 caused by unnecessary
info output in xtradb/buf/buf0mtflu.cc.
Do not try to enable atomic writes if the file type
is not OS_DATA_FILE. Atomic writes are unnecessary
for log files. If we try to enable atomic writes
for log files that are stored on media supporting
atomic writes, we will run into problems later.
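The guard itself is simple; as a hedged sketch (the enum and function below
are stand-ins, only the OS_DATA_FILE distinction matters):

    /* Illustrative only: atomic writes are attempted solely for data
       files, never for log files. */
    enum os_file_type_t { OS_DATA_FILE, OS_LOG_FILE };

    static bool should_try_atomic_writes(os_file_type_t type, bool requested)
    {
        return requested && type == OS_DATA_FILE;
    }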
Analysis: The InnoDB error monitor is responsible for calling
sync_arr_wake_threads_if_sema_free() every second to wake up possibly
hanging threads if they were missed in mutex_signal_object. This is not
possible if the error monitor itself is on a mutex/semaphore wait. We
should avoid all unnecessary mutex/semaphore waits in the error monitor.
Currently the error monitor calls the function buf_flush_stat_update(),
which calls the log_get_lsn() function, and there we will try to acquire the
log_sys mutex. A better solution for the error monitor is that in
buf_flush_stat_update() we try to get the lsn with
mutex_enter_nowait(), and if we did not get the mutex, we do not update
the stats.
Fix: Use the log_get_lsn_nowait() function in buf_flush_stat_update().
If the returned lsn is 0, we do not update the flush stats.
log_get_lsn_nowait() uses mutex_enter_nowait(); if we get the mutex
we return the correct lsn, otherwise we return 0.
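The described behaviour maps roughly to this sketch, with a standard
try-lock standing in for mutex_enter_nowait() and a plain struct standing in
for log_sys:

    #include <mutex>
    #include <cstdint>

    typedef uint64_t lsn_t;

    struct log_sys_t {
        std::mutex mutex;       /* stands in for the log_sys mutex */
        lsn_t      lsn = 0;
    };

    /* Sketch: return the current lsn only if the mutex is free right now;
       return 0 otherwise so buf_flush_stat_update() skips the update. */
    lsn_t log_get_lsn_nowait(log_sys_t& log)
    {
        if (!log.mutex.try_lock()) {
            return 0;
        }
        lsn_t lsn = log.lsn;
        log.mutex.unlock();
        return lsn;
    }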
Merged Facebook commit 617aef9f911d825e9053f3d611d0389e02031225
authored by Inaam Rana, to the InnoDB storage engine (not XtraDB),
from https://github.com/facebook/mysql-5.6
WL#7047 - Optimize buffer pool list scans and related batch processing
Reduce excessive scanning of pages when doing flush list batches. The
fix is to introduce the concept of a "Hazard Pointer"; this reduces the
time complexity of the scan from O(n*n) to O(n).
The concept of the hazard pointer is reversed in this work. Academically,
a hazard pointer is a pointer that the thread working on it declares as
such, and as long as that thread is not done, no other thread is allowed
to do anything with it.
In this WL we declare the pointer as a hazard pointer, and then if any
thread attempts to work on it, it is allowed to do so, but it has to adjust
the hazard pointer to the next valid value. We use hazard pointers solely
for reverse traversal of lists within a buffer pool instance.
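A simplified sketch of that reversed hazard-pointer idea (a std::list stands
in for the buffer pool flush/LRU lists; this is not the buf0flu.cc
implementation):

    #include <list>

    struct buf_page_t { int id; };

    /* The scanning thread publishes the element it will visit next; any
       thread that removes exactly that element (under the list mutex) must
       first move the hazard pointer to the next valid element, so the
       scanner can resume without rescanning the whole list. */
    struct flush_hp_t {
        std::list<buf_page_t*>*          list = nullptr;
        std::list<buf_page_t*>::iterator hp;

        void set(std::list<buf_page_t*>& l,
                 std::list<buf_page_t*>::iterator it)
        {
            list = &l;
            hp   = it;
        }

        /* Called before erasing 'it' from the list. */
        void adjust(std::list<buf_page_t*>::iterator it)
        {
            if (list != nullptr && hp != list->end() && it == hp) {
                ++hp;               /* advance to the next valid element */
            }
        }
    };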
Add an event to control the background flush thread. The background flush
thread wait has been converted to an os event timed wait so that it can be
signalled by threads that want to kick-start a background flush when the
buffer pool is running low on free/dirty pages.
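A generic sketch of the converted wait (a condition variable standing in for
the InnoDB os_event; names are illustrative):

    #include <mutex>
    #include <condition_variable>
    #include <chrono>

    struct flush_event_t {
        std::mutex              mtx;
        std::condition_variable cv;
        bool                    signalled = false;
    };

    /* Background flush thread: sleep up to one second, but wake up early
       if somebody signals the event to kick-start a flush batch. */
    void background_flush_wait(flush_event_t& ev)
    {
        std::unique_lock<std::mutex> lock(ev.mtx);
        ev.cv.wait_for(lock, std::chrono::seconds(1),
                       [&ev] { return ev.signalled; });
        ev.signalled = false;       /* consume the signal */
    }

    /* Any thread that wants to kick-start a background flush. */
    void kick_background_flush(flush_event_t& ev)
    {
        std::lock_guard<std::mutex> lock(ev.mtx);
        ev.signalled = true;
        ev.cv.notify_one();
    }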
Merge Facebook commit 4f3e0343fd2ac3fc7311d0ec9739a8f668274f0d
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
Adds innodb_idle_flush_pct to enable tuning of the page flushing rate
when the system is relatively idle. We care about this, since doing
extra unnecessary flash writes shortens the lifespan of the flash.
Merged Facebook commit ecff018632c6db49bad73d9233c3cdc9f41430e9
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
This change is to fix: http://bugs.mysql.com/62534
This makes innodb_max_dirty_pages_pct a double with min, default, and max
values of 0.001, 75, and 99.999.
This also makes innodb_max_dirty_pages_pct_lwm and adaptive_flushing_lwm
doubles, as these sysvars are inter-dependent.
Added more to the BUFFER POOL AND MEMORY section of SHOW INNODB STATUS:
  Percent pages dirty: X.X      (n_dirty_pages / used_pages)
  Percent all pages dirty: X.X  (n_dirty_pages / all pages)
  Max dirty pages percent: X.X  (innodb_max_dirty_pages_pct)
Also changed all of the buf output from 2 to 3 digits of precision
(%.2f -> %.3f).
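The three new lines boil down to the ratios below, expressed as percentages
(a hedged sketch; variable names follow the description above rather than
the real buf0buf.cc fields):

    #include <cstdio>

    void print_dirty_page_stats(double n_dirty_pages, double used_pages,
                                double all_pages, double max_dirty_pct)
    {
        std::printf("Percent pages dirty: %.3f\n",
                    used_pages > 0 ? 100.0 * n_dirty_pages / used_pages
                                   : 0.0);
        std::printf("Percent all pages dirty: %.3f\n",
                    all_pages > 0 ? 100.0 * n_dirty_pages / all_pages
                                  : 0.0);
        std::printf("Max dirty pages percent: %.3f\n", max_dirty_pct);
    }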