Added encryption support for online alter table where InnoDB temporary
files are used. Added similar support for tables containing
full-text indexes.
Made sure that a table remains encrypted during DISCARD and IMPORT
TABLESPACE.
Remove IMPOSSIBLE_RESULT from Item_result, as it is not
needed any more. The fact that an Item is not in a comparison
context is now always designated by IDENTITY_SUBST in Subst_constraint.
Previously IMPOSSIBLE_RESULT and IDENTITY_SUBST co-existed but
actually meant the same thing.
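As a rough illustration, the two enums now split the roles like this. This is a simplified sketch; apart from IMPOSSIBLE_RESULT and IDENTITY_SUBST, which are named above, the exact enumerator lists are assumptions and may differ from item.h:

  // Simplified sketch, not the exact definitions from item.h.
  enum Item_result
  {
    STRING_RESULT, REAL_RESULT, INT_RESULT, ROW_RESULT, DECIMAL_RESULT,
    TIME_RESULT
    // IMPOSSIBLE_RESULT removed: "not in a comparison context" is no longer
    // expressed as a comparison type
  };

  enum Subst_constraint
  {
    ANY_SUBST,      // an equal Item may be substituted in any context
    IDENTITY_SUBST  // substitution only for identical values, i.e. the Item
                    // is not in a comparison context
  };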
Analysis: The server tried to continue reading the tablespace using a cursor
after we had already determined that pages in the tablespace can't be decrypted.
Fixed by adding a check whether the tablespace is still encrypted.
Analysis: The problem was that in fil_read_first_page we do detect that the
table has encryption information and that the encryption service
or the used key_id is not available, but then we just printed a
fatal error message, which caused the above assertion.
Fix: When we open a single-table tablespace and it has encryption
information (crypt_data), store this crypt data in the table
structure. When we open a table and find out that the tablespace
is not available, check whether the table has encryption information
and whether the encryption service or the used key_id is unavailable.
If so, add an additional warning for the SQL layer.
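A hypothetical sketch of the flow described in the fix; all identifiers here are stand-ins for the real InnoDB structures and functions:

  // Hypothetical sketch of the fix's flow -- identifiers are illustrative.
  struct fil_space_crypt_sketch { unsigned int key_id; };

  struct table_sketch
  {
    bool tablespace_available;            // false if the .ibd could not be read
    fil_space_crypt_sketch *crypt_data;   // saved when the first page is read
  };

  // On open of a single-table tablespace: keep the encryption info around.
  void remember_crypt_data(table_sketch *t, fil_space_crypt_sketch *c)
  { t->crypt_data= c; }

  // On table open, if the tablespace is unavailable, decide whether to push
  // an extra "encryption key/service not available" warning to the SQL layer.
  bool warn_encryption_unavailable(const table_sketch *t,
                                   bool key_or_service_available)
  {
    return !t->tablespace_available &&
           t->crypt_data != nullptr &&
           !key_or_service_available;
  }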
Instead of encrypt(src, dst, key, iv), which encrypts all
data in one go, we now have encrypt_init(key, iv),
encrypt_update(src, dst), and encrypt_finish(dst).
This also causes collateral changes in the internal my_crypt.cc
encryption functions and in the encryption service.
There are wrappers to provide the old all-at-once encryption
functionality, but binlog events are often written piecewise,
so they need the new API.
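A runnable sketch of the init/update/finish pattern described above, using a trivial XOR stand-in cipher; the function names follow this text, but the signatures and the cipher itself are illustrative assumptions, not the actual my_crypt.cc code:

  #include <cstddef>
  #include <cstdio>

  // Streaming-encryption sketch with a trivial XOR stand-in cipher.
  struct crypt_ctx
  {
    unsigned char key[16];
    unsigned char iv;
    size_t pos;
  };

  static int encrypt_init(crypt_ctx *ctx, const unsigned char *key,
                          const unsigned char *iv)
  {
    for (int i= 0; i < 16; i++)
      ctx->key[i]= key[i];
    ctx->iv= iv[0];
    ctx->pos= 0;
    return 0;
  }

  // Each call consumes one piece of input, so a binlog event can be encrypted
  // header first and body later, without buffering the whole event.
  static int encrypt_update(crypt_ctx *ctx, const unsigned char *src,
                            size_t slen, unsigned char *dst)
  {
    for (size_t i= 0; i < slen; i++, ctx->pos++)
      dst[i]= (unsigned char) (src[i] ^ ctx->key[ctx->pos % 16] ^ ctx->iv);
    return 0;
  }

  static int encrypt_finish(crypt_ctx *ctx, unsigned char *dst, size_t *dlen)
  {
    (void) ctx; (void) dst;
    *dlen= 0;                   // a real block cipher would emit padding here
    return 0;
  }

  int main()
  {
    const unsigned char key[16]= {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
    const unsigned char iv[1]= {42};
    unsigned char out[32];
    size_t tail;
    crypt_ctx ctx;

    encrypt_init(&ctx, key, iv);
    encrypt_update(&ctx, (const unsigned char *) "event-header", 12, out);
    encrypt_update(&ctx, (const unsigned char *) "event-body", 10, out + 12);
    encrypt_finish(&ctx, out + 22, &tail);
    printf("encrypted %zu bytes\n", 22 + tail);
    return 0;
  }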
Make repeated cmake runs less verbose:
* remove a few not very useful MESSAGE() calls
* only run pkg_check_modules() if there's no cached result
* only print QQGraph messages on the first run
Fix all cmake tests (including plugin) to use
MY_CHECK_AND_SET_COMPILER_FLAG. And fix that function
to be compatible with cmake 3.0. This way flag checks
are correctly cached (even in cmake 3.0) and properly reused.
- foreign_keys: adjusted according to code changes;
- type_spatial: adjusted according to code changes;
- type_spatial_indexes (for MyISAM): disabled till MDEV-8675 is fixed
- Added mem_root to all calls to new Item
- Added a private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves one call to current_thd per Item creation.
Added a mandatory thd parameter to the Item constructor (and to all derived classes).
Added a thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
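The resulting calling pattern looks roughly like the self-contained sketch below; THD, MEM_ROOT and the Item classes here are simplified stand-ins for the real ones, shown only to illustrate placement-new on a mem_root plus the explicit thd argument:

  #include <cstddef>
  #include <vector>

  // Simplified stand-ins for MEM_ROOT and THD; the real ones live in mysys/sql.
  struct MEM_ROOT
  {
    std::vector<void*> blocks;
    ~MEM_ROOT() { for (void *p : blocks) ::operator delete(p); }  // bulk free
  };

  inline void *alloc_root(MEM_ROOT *root, size_t size)
  {
    void *p= ::operator new(size);
    root->blocks.push_back(p);
    return p;
  }

  struct THD
  {
    MEM_ROOT main_mem_root;
    MEM_ROOT *mem_root= &main_mem_root;
  };

  class Item
  {
  public:
    // Allocation always goes through a mem_root ...
    static void *operator new(size_t size, MEM_ROOT *mem_root)
    { return alloc_root(mem_root, size); }
    static void operator delete(void *, MEM_ROOT *) {}
    static void operator delete(void *, size_t) {}  // memory is owned by the root

    // ... and the constructor takes thd explicitly instead of calling
    // current_thd (one pthread_getspecific() saved per created Item).
    explicit Item(THD *thd) { (void) thd; }
    virtual ~Item() {}

  private:
    // Plain operator new is private, so `new Item_xxx(...)` without a
    // mem_root does not compile.
    static void *operator new(size_t size);
  };

  class Item_int : public Item
  {
  public:
    Item_int(THD *thd, long long v) : Item(thd), value(v) {}
    long long value;
  };

  int main()
  {
    THD thd;
    Item *i= new (thd.mem_root) Item_int(&thd, 42);  // the new calling pattern
    return i ? 0 : 1;
  }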
When wsrep is enabled, for any update on InnoDB tables, the
corresponding keys are appended to galera's transaction writeset
(wsrep_append_keys()). However, for LOAD DATA this was skipped
if binary logging was disabled or was not ROW based.
As a result, while updates from LOAD DATA on non-partitioned
tables replicated fine, since wsrep implicitly enables binary logging
(if it is not enabled explicitly), the same did not work on partitioned
tables, because for partitioned tables binary logging gets disabled
temporarily (ha_partition::write_row()).
Fixed by removing the unwanted conditions from the check.
Also backported some changes from 10.0-galera to make sure
wsrep_load_data_splitting affects LOAD DATA commands only.
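Expressed as a condition, the change amounts to something like the sketch below; the predicate names are paraphrased from the description above, not taken from the patch:

  // Paraphrased sketch of the check guarding wsrep_append_keys() for
  // LOAD DATA; names are illustrative, not the actual ha_innodb.cc code.
  bool append_wsrep_keys_before(bool wsrep_enabled,
                                bool binlog_enabled,
                                bool binlog_format_is_row)
  {
    // Old check: skipped for LOAD DATA when binary logging was off or not
    // ROW based -- exactly the situation inside ha_partition::write_row(),
    // where binlogging is temporarily disabled.
    return wsrep_enabled && binlog_enabled && binlog_format_is_row;
  }

  bool append_wsrep_keys_after(bool wsrep_enabled)
  {
    // New check: the binlog conditions are gone; keys are appended for any
    // InnoDB update as long as wsrep is enabled.
    return wsrep_enabled;
  }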
In the original code one sometimes got an automatic DEFAULT value and sometimes not.
For example:
create table t1 (a int primary key) - No default
create table t2 (a int, primary key(a)) - DEFAULT 0
create table t1 SELECT .... - Default for all fields, even if they were defined as NOT NULL
ALTER TABLE ... MODIFY could sometimes add an unexpected DEFAULT value.
The patch is quite big because we had so many test cases that used
CREATE ... SELECT or CREATE ... (...PRIMARY KEY(xxx)), which no longer get an automatic DEFAULT.
Other things:
- Removed warnings from InnoDB when waiting for a semaphore (got these when testing things with --big)
The issue was twofold (both in MyISAM and Aria):
- optimize and repair failed if there was an old .TMM file around. As optimize and repair are protected against multiple execution, I decided to change this so that we just truncate the file if it exists (see the sketch below).
- I had missed checking the error condition when creation of the temporary index file failed. This caused the strange behaviour that it looked as if optimize had worked once.
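To illustrate both points, a small sketch in plain POSIX terms; this is an assumption about the shape of the change, not the actual mysys/MyISAM/Aria code:

  #include <fcntl.h>
  #include <stdio.h>

  // Open (or create) the temporary index file used by optimize/repair.
  // O_TRUNC instead of O_EXCL: a stale .TMM left over from an aborted run is
  // simply emptied instead of making the whole operation fail.
  int open_tmp_index_file(const char *tmm_path)
  {
    int fd= open(tmm_path, O_RDWR | O_CREAT | O_TRUNC, 0660);
    if (fd < 0)                                  // previously this error was
      perror("create temporary index file");     // not checked at all
    return fd;
  }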
Analysis: The problem was that when a new tablespace is created, default
encryption info is also created and stored in the tablespace. Later,
new encryption information was created with the correct key_id, but that
did not affect the IV.
Fix: Push the encryption mode and key_id to the lower levels and create
the correct encryption info when a new tablespace is created.
This fix does not contain a test case because an incorrect
encryption key currently causes page corruption and a lot of error messages
in the error log, causing mtr to fail.
The failure happened with a key cache of more than 45G and a key_cache_block_size of 1024 or less.
The problem was that some of the arguments to my_multi_malloc() got to be
more than 4G.
Fix:
- Introduced my_multi_malloc_large() that can handle big regions.
- Changed the MyISAM and Aria key caches to use my_multi_malloc_large().
I didn't change the default my_multi_malloc() as this would have been too big
a patch and we don't allocate 4G blocks anywhere else.
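For illustration, a sketch of the multi-malloc pattern with size_t lengths, which is what the description above suggests my_multi_malloc_large() changes (presumably the old interface takes 32-bit lengths, hence the 4G limit). This is an illustrative reimplementation, not the mysys code, and it ignores the per-region alignment a real implementation would do:

  #include <cstdarg>
  #include <cstddef>
  #include <cstdio>
  #include <cstdlib>

  // Illustrative: allocate several regions in one malloc, lengths as size_t.
  static void *multi_malloc_large(void *dummy, ...)
  {
    va_list args;
    size_t total= 0;

    // First pass: sum all requested lengths using size_t, so a 45G key cache
    // cannot wrap around at 4G.
    va_start(args, dummy);
    while (void **ptr= va_arg(args, void **))
    {
      (void) ptr;                      // only the lengths matter in this pass
      total+= va_arg(args, size_t);
    }
    va_end(args);

    char *block= static_cast<char *>(malloc(total));
    if (!block)
      return nullptr;

    // Second pass: hand out consecutive slices of the single allocation.
    char *pos= block;
    va_start(args, dummy);
    while (void **ptr= va_arg(args, void **))
    {
      size_t len= va_arg(args, size_t);
      *ptr= pos;
      pos+= len;
    }
    va_end(args);
    return block;
  }

  int main()
  {
    void *cache_blocks, *hash_links;
    // Lengths are passed as size_t; with a uint interface a value such as
    // 5ULL * 1024 * 1024 * 1024 would silently be truncated.
    void *base= multi_malloc_large(nullptr,
                                   &cache_blocks, (size_t) 64 * 1024,
                                   &hash_links,   (size_t) 16 * 1024,
                                   (void **) nullptr);
    if (base)
    {
      printf("allocated both regions in one block\n");
      free(base);
    }
    return 0;
  }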
Analysis: The handler used the table flag HA_REQUIRE_PRIMARY_KEY, but a bug in
the sql_table.cc function mysql_prepare_create_table internally marked a
secondary key with NOT NULL columns as a unique key and then did not
fail on the requirement that the table should have a primary key or unique key.
Analysis: The handler table flag HA_REQUIRE_PRIMARY_KEY alone is not enough
to force a primary or unique key if the table has at least one NOT NULL
column and a secondary key on that column.
Fix: Add an additional check that the table really has a primary key or
unique key in InnoDB terms.
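The kind of check the fix adds can be sketched roughly like this; the structs are simplified stand-ins for the server's key metadata, and "primary or unique key in InnoDB terms" is read here as "a PRIMARY KEY, or a UNIQUE key whose columns are all NOT NULL":

  #include <cstddef>

  // Simplified stand-ins for the server's key metadata.
  struct KeyPartSketch { bool column_is_nullable; };
  struct KeySketch
  {
    bool is_primary;
    bool is_unique;
    const KeyPartSketch *parts;
    size_t part_count;
  };

  // Usable as a clustered index: an explicit PRIMARY KEY, or a UNIQUE key
  // none of whose columns can be NULL.
  static bool has_usable_primary_or_unique_key(const KeySketch *keys, size_t n)
  {
    for (size_t i= 0; i < n; i++)
    {
      if (keys[i].is_primary)
        return true;
      if (keys[i].is_unique)
      {
        bool all_not_null= true;
        for (size_t j= 0; j < keys[i].part_count; j++)
          if (keys[i].parts[j].column_is_nullable)
            all_not_null= false;
        if (all_not_null)
          return true;
      }
    }
    return false;   // with HA_REQUIRE_PRIMARY_KEY set, CREATE TABLE should fail
  }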
MDEV-8409: Changing file-key-management-encryption-algorithm causes crash and no real info why
Analysis: Both bugs have two different error cases. Firstly, at startup,
when the server reads the latest checkpoint but the requested key_version,
key management plugin, or encryption algorithm or method is not found,
this leads to a corrupted log entry. Secondly, and similarly, when reading
the system tablespace, if the requested key_version, key management plugin,
or encryption algorithm or method is not found, this leads to buffer pool
page corruption.
Fix: Firstly, when reading the checkpoint at startup, check if the log record
may be encrypted, and if we find that it could be encrypted, print an error
message and do not start the server. Secondly, if a page in the buffer pool
seems corrupted but we find that there is crypt_info, print an additional
error message before asserting.