* no --encryption-algorithm option anymore
* encrypt/decrypt methods in the encryption plugin
* encrypt/decrypt methods in the encryption_km service
* file_km plugin has --file-key-management-encryption-algorithm
* debug_km always uses aes_cbc
* example_km changes between aes_cbc and aes_ecb for different key versions
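As a rough illustration of that last point (the selection rule and names are assumptions, not the real plugin code), a key-management plugin can pick the AES mode per key version like this:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical cipher-mode identifiers; the real plugin interface
// exposes its own constants for the CBC and ECB modes.
enum class aes_mode { AES_CBC, AES_ECB };

// example_km-style selection: even key versions use CBC, odd ones ECB.
// The exact rule is an assumption made for this sketch.
static aes_mode mode_for_key_version(uint32_t key_version)
{
    return (key_version % 2 == 0) ? aes_mode::AES_CBC : aes_mode::AES_ECB;
}

int main()
{
    for (uint32_t v = 1; v <= 4; v++)
        std::printf("key_version %u -> %s\n", (unsigned) v,
                    mode_for_key_version(v) == aes_mode::AES_CBC
                        ? "AES_CBC" : "AES_ECB");
    return 0;
}
```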
if XA PREPARE transactions hold explicit locks.
innobase_shutdown_for_mysql(): Call trx_sys_close() before lock_sys_close()
(and dict_close()) so that trx_free_prepared() will see all locks intact.
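A schematic view of the required ordering (stubbed functions standing in for the real InnoDB shutdown routines):

```cpp
#include <cstdio>

// Stubs standing in for the InnoDB subsystem shutdown routines.
static void trx_sys_close()  { std::puts("trx_sys_close: free prepared trx while locks exist"); }
static void lock_sys_close() { std::puts("lock_sys_close"); }
static void dict_close()     { std::puts("dict_close"); }

// Order matters: the transaction system must be closed while the lock
// system (and dictionary) are still alive, so trx_free_prepared() can
// release the explicit locks held by XA PREPARE transactions.
int main()
{
    trx_sys_close();
    lock_sys_close();
    dict_close();
    return 0;
}
```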
RB: 8561
Reviewed-by: Vasil Dimov <vasil.dimov@oracle.com>
Step 3:
-- Make encryption_algorithm changeable by SUPER
-- Remove AES_ECB method from encryption_algorithms
-- Support AES method change by storing the used method on InnoDB/XtraDB objects
-- Store the used AES method in crypt_data as different crypt types
-- Store the used AES method in the redo/undo logs and the checkpoint
-- Store the used AES method on every encrypted page after the key_version (see the sketch after this list)
-- Add test
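A rough sketch of recording the method next to the key version in a page's encryption header (the offsets, field widths, and type values are assumptions, not the real on-disk layout):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Assumed layout: 4-byte key version followed by a 1-byte crypt type
// that encodes which AES method was used. Byte order is left native
// here purely for illustration.
enum crypt_type : uint8_t { CRYPT_AES_CBC = 1, CRYPT_AES_CTR = 2 };

static void write_crypt_info(uint8_t* hdr, uint32_t key_version, crypt_type t)
{
    std::memcpy(hdr, &key_version, 4); // key version first...
    hdr[4] = t;                        // ...then the method actually used
}

int main()
{
    uint8_t hdr[5];
    write_crypt_info(hdr, 42, CRYPT_AES_CBC);
    assert(hdr[4] == CRYPT_AES_CBC);
    return 0;
}
```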
Step 2:
-- Introduce a temporary memory array in the buffer pool from which to allocate
   temporary memory for encryption/compression
-- Rename PAGE_ENCRYPTION -> ENCRYPTION
-- Rename PAGE_ENCRYPTION_KEY -> ENCRYPTION_KEY
-- Rename innodb_default_page_encryption_key -> innodb_default_encryption_key
-- Allow enabling/disabling encryption for tables by changing
   ENCRYPTION to an enum with the values DEFAULT, ON, OFF
-- In CREATE TABLE, store crypt_data if ENCRYPTION is ON or OFF
-- Do not encrypt tablespaces having ENCRYPTION=OFF (see the sketch after this list)
-- Store the encryption mode in crypt_data and the redo log
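A minimal sketch of the resulting decision rule (the type and parameter names are illustrative, not the server's actual symbols):

```cpp
#include <cstdio>

// Table option values; DEFAULT means "follow the server-wide setting".
enum class fil_encryption { DEFAULT, ON, OFF };

// Decide whether a tablespace should be encrypted: ENCRYPTION=OFF never
// encrypts, ENCRYPTION=ON always encrypts, DEFAULT follows the global
// default (the global flag's name here is an assumption).
static bool should_encrypt(fil_encryption table_opt, bool srv_encrypt_default)
{
    switch (table_opt) {
    case fil_encryption::ON:  return true;
    case fil_encryption::OFF: return false;
    default:                  return srv_encrypt_default;
    }
}

int main()
{
    std::printf("DEFAULT with global on: %d\n",
                should_encrypt(fil_encryption::DEFAULT, true));
    std::printf("OFF with global on    : %d\n",
                should_encrypt(fil_encryption::OFF, true));
    return 0;
}
```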
Step 1:
-- Remove page encryption from the dictionary (per-table
   encryption will be handled by storing crypt_data on page 0)
-- Remove encryption/compression from os0file and all functions
before that (compression will be added to buf0buf.cc)
-- Use the same CRYPT_SCHEME_1 for all encryption methods
-- Do some code cleanups to conform to the InnoDB coding style
PROBLEM
Create time is calculated as the last status change time of the .frm file.
The first problem was that InnoDB was passing the file name as
"table_name#P#p0.frm" to the stat() call which calculates the create time.
Since there is no .frm file with this name, create_time will be stored as NULL.
The second problem is that ha_partition::info() updates the create time statistic
when the HA_STATUS_CONST flag is set, whereas InnoDB calculates this statistic
when HA_STATUS_TIME is set, which causes create_time to be set to NULL.
Fix
Pass the proper .frm name to the stat() call and calculate the create time when
the HA_STATUS_CONST flag is set.
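A minimal sketch of the fixed behaviour, using POSIX stat(), an illustrative HA_STATUS_CONST value, and a hypothetical helper rather than the actual ha_innobase/ha_partition code:

```cpp
#include <sys/stat.h>
#include <cstdio>
#include <ctime>
#include <string>

// Illustrative flag value; the real HA_STATUS_CONST comes from the
// handler API headers.
static const unsigned HA_STATUS_CONST = 0x08;

// Return the .frm status-change time (used as create_time), or 0 if the
// file cannot be stat()ed. The caller must pass the base table path
// ("db/table"), not a partition name like "db/table#P#p0".
static time_t frm_create_time(const std::string& base_path, unsigned flag)
{
    if (!(flag & HA_STATUS_CONST))   // refresh only on HA_STATUS_CONST
        return 0;
    struct stat st;
    std::string frm = base_path + ".frm";
    return stat(frm.c_str(), &st) == 0 ? st.st_ctime : 0;
}

int main()
{
    std::printf("%ld\n", (long) frm_create_time("./no_such_table", HA_STATUS_CONST));
    return 0;
}
```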
Analysis: The MySQL table definition also contains virtual columns. Similarly,
the index fieldnr references MySQL table fields. However, the InnoDB table definition
does not contain virtual columns. Therefore, when matching a MySQL key fieldnr
we need to use the actual column name to find the referenced InnoDB dictionary
column.
Fix: Add new function to match MySQL index key columns to InnoDB dictionary.
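A simplified sketch of such a matching function (the structs are stand-ins for the real Field and dict_col_t types):

```cpp
#include <cstdio>
#include <cstring>

// Simplified stand-ins for the MySQL field array and the InnoDB
// dictionary column array.
struct mysql_field { const char* name; bool is_virtual; };
struct innodb_col  { const char* name; };

// Find the InnoDB column position matching a MySQL key field. Because the
// InnoDB table definition has no virtual columns, the MySQL fieldnr cannot
// be used directly; match by column name instead (sketch only).
static int innodb_col_no(const mysql_field& key_field,
                         const innodb_col* cols, int n_cols)
{
    for (int i = 0; i < n_cols; i++)
        if (std::strcmp(cols[i].name, key_field.name) == 0)
            return i;
    return -1; // not found
}

int main()
{
    mysql_field key_field = { "b", false };
    innodb_col cols[] = { { "a" }, { "b" } }; // virtual columns omitted
    std::printf("InnoDB column no: %d\n", innodb_col_no(key_field, cols, 2));
    return 0;
}
```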
TO USE A SECOND WATCH PAGE PER INSTANCE
Description:
BUF_POOL_WATCH_SIZE is also initialized to the number of purge threads,
so BUF_POOL_WATCH_SIZE will never be less than the number of purge threads.
From the code, there is no scope for a purge thread to skip buf_pool_watch_unset(),
so there can be at most one buffer pool watch active per purge thread.
In other words, there is no chance for a purge thread instance to hold a watch
when setting another watch.
Solution:
Adding code comments to clarify the issue.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Approved via Bug page.
Crypt data is currently always written to the file space. Use
that to obtain a random IV for every object (file).
Beautify the code to conform to the InnoDB coding style.
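A minimal sketch of the per-file IV idea (the struct mirrors but is not the real fil_space_crypt_t, and std::random_device stands in for the server's CSPRNG):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <random>

// Per-tablespace crypt data carrying its own random IV (size assumed).
struct crypt_data_sketch {
    std::array<uint8_t, 16> iv;
};

// Generate a fresh random IV for each file/tablespace.
static crypt_data_sketch create_crypt_data()
{
    crypt_data_sketch crypt;
    std::random_device rd;
    for (auto& b : crypt.iv) b = static_cast<uint8_t>(rd());
    return crypt;
}

int main()
{
    crypt_data_sketch c = create_crypt_data();
    std::printf("first IV byte: %u\n", (unsigned) c.iv[0]);
    return 0;
}
```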
Conflicts:
storage/innobase/fil/fil0crypt.cc
storage/xtradb/fil/fil0crypt.cc
There is a bug in Visual Studio 2010.
Visual Studio has a feature called "Checked Iterators": in a debug build, every
iterator operation is checked at runtime for errors, e.g. out of range.
Disable this "Checked Iterators" feature for Windows Debug builds if it is defined.
Problem was that a static array was used for storing thread mutex sync levels.
Fixed by using std::vector instead.
Does not contain a test case, to avoid excessive memory/disk space usage
on buildbot VMs.
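The shape of the change, in miniature (types and names are illustrative, not the real sync-debug code):

```cpp
#include <cstdio>
#include <vector>

// Before: a fixed-size static array capped how many sync levels a thread
// could record. After: grow on demand with std::vector.
struct sync_level_t { const void* latch; int level; };

int main()
{
    std::vector<sync_level_t> levels;      // replaces the static array
    levels.push_back({ nullptr, 1 });      // no fixed upper bound any more
    levels.push_back({ nullptr, 2 });
    std::printf("%zu levels recorded\n", levels.size());
    return 0;
}
```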
Parallel replication (in 10.0 / "conservative" mode) relies on binlog group
commits to group transactions that can be safely run in parallel on the
slave. The --binlog-commit-wait-count and --binlog-commit-wait-usec options
exist to increase the number of commits per group. But in case of conflicts
between transactions, this can cause unnecessary delay and reduced throughput,
especially on a slave where the commit order is fixed.
This patch adds a heuristic to reduce this problem. When transaction T1 goes
to commit, it will first wait for N transactions to queue up for a group
commit. However, if we detect that another transaction T2 is waiting for a row
lock held by T1, then we will skip the wait and let T1 commit immediately,
releasing its locks and letting T2 continue.
On a slave, this avoids the unfortunate situation where T1 is waiting for T2
to join the group commit, but T2 is waiting for T1 to release locks, causing
no work to be done for the duration of the --binlog-commit-wait-usec timeout.
(The heuristic seems reasonable on the master as well, so it is enabled for
all transactions, not just replication transactions).
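A sketch of the decision in isolation (all names are illustrative; the real logic lives inside the binlog group commit and lock wait code):

```cpp
#include <cstdio>

// Inputs to the commit-wait decision for a transaction T1.
struct commit_wait_ctx {
    unsigned queued;           // transactions already queued for group commit
    unsigned wait_count;       // --binlog-commit-wait-count
    bool     blocks_other_trx; // some T2 waits on a row lock held by T1
};

// Heuristic: normally wait until wait_count transactions have queued, but
// if some T2 is blocked on a lock held by T1, skip the wait so T1 can
// commit and release its locks immediately.
static bool should_wait_for_group(const commit_wait_ctx& c)
{
    if (c.blocks_other_trx)
        return false;               // let T1 commit now, unblocking T2
    return c.queued < c.wait_count; // otherwise wait for a bigger group
}

int main()
{
    commit_wait_ctx c = { 1, 10, true };
    std::printf("wait for group? %d\n", should_wait_for_group(c));
    return 0;
}
```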
RESERVATION AND SIGNAL COUNT
Problem:
The reservation and signal count values show negative values in the
SHOW ENGINE INNODB statement output.
Solution:
This is happening due to a counter overflow error. The reservation and signal
count values are defined as unsigned long, but these variables are converted to
long while printing them. Print the reservation and signal count values as
unsigned long instead.
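The underlying printf issue in isolation (the counter value is made up for illustration):

```cpp
#include <cstdio>

int main()
{
    // A counter near ULONG_MAX prints as negative with %ld; %lu is correct.
    unsigned long signal_count = (unsigned long) -10; // illustrative value
    std::printf("wrong:   %ld\n", (long) signal_count);
    std::printf("correct: %lu\n", signal_count);
    return 0;
}
```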
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Approved in bug page.
available space on disk
Add error handling for when a disk full situation happens, and
intentionally bring the server down with a stacktrace, because
in all cases InnoDB cannot continue anyway.
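A sketch of the intent only, not the actual InnoDB I/O layer: on ENOSPC there is nothing useful left to do, so fail loudly instead of continuing after a partial write.

```cpp
#include <cerrno>
#include <cstdio>
#include <cstdlib>

static void handle_write_error(int err)
{
    if (err == ENOSPC) {
        std::fprintf(stderr, "InnoDB: no space left on device, aborting\n");
        std::abort();   // brings the server down with a stacktrace/core
    }
}

int main()
{
    handle_write_error(0); // no error: continue normally
    return 0;
}
```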
when created FK
Analysis: The table name is in the filename charset but foreign key
identifiers are not. This led to an incorrect foreign key
identifier number being used.
Fix: Convert the foreign key identifier to the filename charset before
comparing it to the table name when the largest foreign key identifier
number is resolved.
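A sketch of that flow, assuming the generated "<table>_ibfk_<n>" naming pattern and using a stub in place of the server's charset conversion API:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Stub: the real code converts the identifier to the filename charset
// with the server's charset API before any comparison is made.
static std::string to_filename_charset(const std::string& id)
{
    return id;
}

// Extract the numeric suffix of an auto-generated FK id ("<table>_ibfk_<n>")
// only after both sides are in the same (filename) charset.
static long fk_number(const std::string& fk_id, const std::string& table_name)
{
    std::string converted = to_filename_charset(fk_id);
    std::string prefix = table_name + "_ibfk_";
    if (converted.compare(0, prefix.size(), prefix) != 0)
        return 0;
    return std::atol(converted.c_str() + prefix.size());
}

int main()
{
    std::printf("%ld\n", fk_number("t1_ibfk_7", "t1"));
    return 0;
}
```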
Analysis: After a red-black tree lookup we used the node without
checking whether the lookup succeeded. This led to a situation
where a NULL pointer was used.
Fix: Add an additional check that the node found in the red-black tree
is valid.
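The pattern in miniature (std::map stands in for the InnoDB red-black tree; the point is only to check the lookup result before dereferencing it):

```cpp
#include <cstdio>
#include <map>

int main()
{
    std::map<int, const char*> tree = { { 1, "one" } };

    auto it = tree.find(2);
    if (it == tree.end()) {          // the missing check in the original bug
        std::puts("not found, do not dereference");
        return 0;
    }
    std::printf("%s\n", it->second);
    return 0;
}
```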
Re-applied revision lost in the merge:
commit ed313e8a92
Author: Sergey Vojtovich <svoj@mariadb.org>
Date: Mon Dec 1 14:58:29 2014 +0400
MDEV-7148 - Recurring: InnoDB: Failing assertion: !lock->recursive
On PPC64 a highly loaded server may crash due to an assertion failure in the
InnoDB rwlocks code.
This happened because the load order between "recursive" and "writer_thread"
wasn't properly enforced.
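An illustration of the general fix pattern on weakly ordered CPUs, not the actual rw_lock_t code: the two related fields must be read and written in an explicitly enforced order, e.g. with acquire/release semantics or barriers.

```cpp
#include <atomic>
#include <cstdio>

struct rw_lock_like {
    std::atomic<bool> recursive{false};
    std::atomic<unsigned long> writer_thread{0};
};

static void set_writer(rw_lock_like& l, unsigned long thread_id)
{
    l.writer_thread.store(thread_id, std::memory_order_relaxed);
    l.recursive.store(true, std::memory_order_release); // publish after writer_thread
}

static bool is_our_lock(const rw_lock_like& l, unsigned long thread_id)
{
    // The acquire load pairs with the release store above, so writer_thread
    // is only read after recursive has been observed as true.
    return l.recursive.load(std::memory_order_acquire)
        && l.writer_thread.load(std::memory_order_relaxed) == thread_id;
}

int main()
{
    rw_lock_like l;
    set_writer(l, 42);
    std::printf("%d\n", is_our_lock(l, 42));
    return 0;
}
```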
Analysis: On the master, when executing (single/multi) row INSERTs/REPLACEs,
InnoDB falls back to old-style autoinc locks (table locks)
only if another transaction has already acquired the AUTOINC lock.
On the slave, however, as we are executing log events and sql_command
is not correctly set, InnoDB does not use new-style autoinc
locks when it could.
Fix: Use new-style autoinc locks also when
thd_sql_command(user_thd) == SQLCOM_END, i.e. this is an RBR event.
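A sketch of the adjusted condition (the enum values are illustrative stand-ins for the server's enum_sql_command):

```cpp
#include <cstdio>

enum sql_command { SQLCOM_INSERT, SQLCOM_REPLACE, SQLCOM_END };

// Treat SQLCOM_END (a row-based replication event, where sql_command is
// not set) like a simple INSERT so that the new-style (lightweight)
// autoinc locking can be used.
static bool simple_insert_allows_new_style(sql_command cmd)
{
    return cmd == SQLCOM_INSERT || cmd == SQLCOM_REPLACE
        || cmd == SQLCOM_END /* RBR event on the slave */;
}

int main()
{
    std::printf("RBR event -> new style: %d\n",
                simple_insert_allows_new_style(SQLCOM_END));
    return 0;
}
```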
Merged 615dd07d90 from https://github.com/facebook/mysql-5.6/
authored by rongrong. Removed C++11 requirement by using
std::map instead of std::unordered_set.
Add analysis of leaf pages to estimate how fragmented an index is
and how much benefit we can get out of defragmentation.
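The commit does not spell out the formula; as a purely hypothetical sketch of the kind of estimate such an analysis could produce (page size and fill factor are assumed):

```cpp
#include <cstdio>

// Given the leaf page count and the payload bytes they hold, estimate
// what fraction of those pages an ideally packed index would not need.
static double defrag_benefit(unsigned long leaf_pages,
                             unsigned long long payload_bytes,
                             unsigned page_size = 16384,
                             double fill_factor = 0.9)
{
    double ideal_pages = payload_bytes / (page_size * fill_factor);
    if (ideal_pages < 1) ideal_pages = 1;
    return 1.0 - ideal_pages / leaf_pages;  // fraction of pages reclaimable
}

int main()
{
    std::printf("estimated benefit: %.2f\n",
                defrag_benefit(1000, 8ULL * 1024 * 1024));
    return 0;
}
```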
Problem:
This is a coding mistake during error handling. When the specified foreign
key constraint is wrong because of a data type mismatch, the resulting
foreign key object will not have a valid foreign->id (it will be NULL).
Solution:
While removing the foreign key object from the dictionary cache during error
handling, ensure that foreign->id is not NULL before using it.
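A minimal sketch of that guard (the struct is a stand-in for the real dict_foreign_t and the cache removal is elided):

```cpp
#include <cstdio>

struct dict_foreign_t { const char* id; };

// When the constraint failed type checking, foreign->id may still be NULL;
// guard before using it to remove the object from the cache.
static void remove_from_cache(dict_foreign_t* foreign)
{
    if (foreign->id != nullptr) {
        std::printf("removing %s from dictionary cache\n", foreign->id);
        // ... hash lookup/removal keyed by foreign->id would go here ...
    }
    delete foreign;
}

int main()
{
    remove_from_cache(new dict_foreign_t{ nullptr });  // must not crash
    remove_from_cache(new dict_foreign_t{ "db/fk_1" });
    return 0;
}
```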
rb#8204 approved by Sunny.