Test crash recovery from an encrypted redo log with innodb_encrypt_log=0.
Previously, we did a clean shutdown, so only the log checkpoint
information would have been read from the redo log. With this change,
we will be reading and applying encrypted redo log records.
include/start_mysqld.inc: Observe $restart_parameters.
encryption.innodb-log-encrypt: Remove some unnecessary statements,
and instead of restarting the server and concurrently accessing
the files while the server is running, kill the server, check the
files, and finally start up the server.
innodb.log_data_file_size: Use start_mysqld.inc with $restart_parameters.
- Changed the error handler interface so that handlers can change the error
level inside the handler (see the sketch after this list)
- Give warnings and errors when calculating virtual columns
- On INSERT/UPDATE an error is fatal in strict mode.
- SELECT and DELETE will only give a warning if a virtual field generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to be able to
easily detect in update_virtual_fields() if we should use an error
handler to mask errors or not.
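A minimal illustrative sketch of the idea; the class and enum names below are simplified stand-ins, not the actual MariaDB interface. The point is that the handler receives the error level by pointer, so it can downgrade an error raised while computing a virtual column to a warning:

    // Simplified stand-in for the error-handler interface (illustrative only).
    enum Sql_level { LEVEL_WARNING, LEVEL_ERROR };

    struct Error_handler_sketch
    {
      // Return true if the condition was fully consumed by the handler.
      virtual bool handle(unsigned code, Sql_level *level)= 0;
      virtual ~Error_handler_sketch() {}
    };

    // Installed around virtual-column evaluation for SELECT/DELETE:
    // errors become warnings.  For INSERT/UPDATE in strict mode the
    // handler is not installed, so the error stays fatal.
    struct Vcol_mask_errors_sketch : Error_handler_sketch
    {
      bool handle(unsigned /*code*/, Sql_level *level) override
      {
        if (*level == LEVEL_ERROR)
          *level= LEVEL_WARNING;   // change the level inside the handler
        return false;              // let normal reporting continue
      }
    };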
Problem was that one internal record buffer in MyISAM was not big enough to handle virtual fields.
Fixed by extending the buffer.
Fixed test case to 79 characters
Found and fixed 2 problems:
- Filesort addon fields didn't mark virtual columns properly
- multi-range-read calculated the vcol bitmap but was not using it.
This caused the wrong vcol field to be calculated on read, which caused the assert.
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table, it has no virtual blob values
- update_virtual_fields() is run, vcol blob gets its value into the
record. But only a pointer to the value is in the table->record[0],
the value is in the Field_blob::value String (but it doesn't have to be!
it can be in the record, if the column is just a copy of another
column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]); the old record is now in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To fix this I have introduced a new String object 'read_value' in
Field_blob. When updating virtual columns after a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row are stored in 'value' as before.
I also made, as a safety precaution, the INSERT DELAYED handling of
blobs more general by using 'value' to store strings instead of the
record. This ensures that virtual functions on delayed insert work
as in the case of a normal insert.
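A greatly simplified sketch of the idea; this is not the real Field_blob class, only an illustration of keeping the read row's blob bytes and the new row's blob bytes in separately owned buffers:

    #include <string>

    struct Field_blob_sketch
    {
      std::string value;       // owns the blob bytes of the row being built
      std::string read_value;  // owns the blob bytes of the row that was read

      // Called when virtual columns are evaluated for a row just read:
      // the old record keeps pointing into read_value.
      const char *store_on_read(const std::string &computed)
      {
        read_value= computed;
        return read_value.data();
      }

      // Called when the new row's virtual columns are evaluated:
      // this never disturbs read_value, so record[1] stays valid.
      const char *store_on_update(const std::string &computed)
      {
        value= computed;
        return value.data();
      }
    };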
Triggers are now properly updating the read, write and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rules so that one can use local (@) variables in DEFAULT and
non-persistent virtual field expressions.
- MDEV-11621 rpl.rpl_gtid_stop_start fails sporadically in buildbot
- MDEV-11620 rpl.rpl_upgrade_master_info fails sporadically in buildbot
The issue above was probably that the build machine was overworked and the
shutdown took longer than 30 and 10 seconds respectively, which caused MyISAM
tables to be marked as crashed.
Fixed by flushing MyISAM tables before doing a forced shutdown/kill.
I also increased the timeout for forced shutdown from 10 seconds to 60 seconds
to fix other possible issues on slow machines.
Also fixed some compiler warnings.
- privilege_table_io.test didn't properly reset roles_mapping
- Fixed a memory allocation problem with CHECK CONSTRAINT, found when
running --valgrind main.check_constraint
- Atomic writes are enabled by default
- Automatically detect if the device supports atomic writes and use them if
atomic writes are enabled
- Remove ATOMIC WRITE options from CREATE TABLE
- Atomic write is a device option, not a table option, as the table may
crash if the media changes
- Add support for SHANNON SSD cards
Perform a slow shutdown at the start of the test, and create all
InnoDB tables with STATS_PERSISTENT=0, so that any I/O related to
background tasks (change buffer merge, purge, persistent statistics)
should be eliminated.
This should be a non-functional change. I was unable to repeat
MDEV-11626 innodb.innodb-change-buffer-recovery fails for xtradb
and cannot determine the reason for the failure without having access
to the files.
The repeatability of MDEV-11626 should not be affected by these changes.
Problem was with deleting a non-existing .frm file for a storage engine that
doesn't have .frm files (yet).
Fixed by not giving an error for non-existing .frm files for storage engines
that are using discovery.
Also fixed a valgrind suppression related to the given test case.
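A small sketch of the fix's logic, with hypothetical helper and parameter names (the real code checks the handlerton's discovery support):

    #include <cerrno>
    #include <cstdio>

    static int delete_frm_if_any(const char *frm_path, bool engine_uses_discovery)
    {
      if (std::remove(frm_path) == 0)
        return 0;                        // the file existed and was removed
      if (errno == ENOENT && engine_uses_discovery)
        return 0;                        // no .frm is normal here: not an error
      return errno;                      // a real failure: report it
    }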
Sometimes innodb_data_file_size_debug was reported as INT UNSIGNED
instead of BIGINT UNSIGNED. Make it uint instead of ulong to get
a more deterministic result.
encryption.innodb_scrub: Clean up. Make it also cover ROW_FORMAT=COMPRESSED,
removing the need for encryption.innodb_scrub_compressed.
Add a FIXME comment saying that we should create a secondary index, to
demonstrate that undo log pages also get scrubbed. Currently that is
not working!
Also clean up encryption.innodb_scrub_background, but keep it disabled,
because the background scrubbing does not work reliably.
Fix both tests so that if something is not scrubbed, the test will be
aborted, so that the data files will be preserved. Allow the tests to
run on Windows as well.
10.1 is merged into 10.2 now. Two issues are left to fix:
(1) encryption.innochecksum test
(2) read_page0 vs page_0_crypt_read
(1) The innochecksum tool did not compile after the merge because
buf_page_is_corrupted() uses fil_crypt_t, which has been changed.
extra/CMakeLists.txt: Added fil/fil0crypt.cc as a dependency,
as we need to use fil_crypt_verify_checksum() for encrypted pages.
innochecksum.cc: If we think the page is encrypted, i.e.
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION != 0, we call the
fil_crypt_verify_checksum() function to compare the calculated
checksum to the stored checksum that was calculated after encryption
(this is stored at a different offset, i.e.
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION + 4).
If that checksum does not match, we call the normal buf_page_is_corrupted()
to compare the calculated checksum to the stored checksum
(see the sketch at the end of this entry).
fil0crypt.cc: add #ifdef UNIV_INNOCHECKSUM to be able to compile
this file for innochecksum tool.
(2) read_page0 is not needed and thus removed.
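A sketch of the check described in (1) above; the signatures are simplified, the stub bodies stand in for the real fil_crypt_verify_checksum() and buf_page_is_corrupted(), and the offset constant is an assumption about the page-header layout:

    #include <cstdint>
    #include <cstring>

    // Assumed page-header offset of the key-version field
    // (FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION).
    static const unsigned KEY_VERSION_OFFSET= 26;

    // Placeholder stubs: the real functions recompute and compare checksums.
    static bool fil_crypt_verify_checksum_stub(const uint8_t*, size_t) { return false; }
    static bool buf_page_is_corrupted_stub(const uint8_t*, size_t) { return false; }

    static bool page_is_corrupted_sketch(const uint8_t *page, size_t page_size)
    {
      uint32_t key_version;
      // Endianness does not matter here: we only test for zero.
      std::memcpy(&key_version, page + KEY_VERSION_OFFSET, 4);

      if (key_version != 0)                       // the page looks encrypted
      {
        // the post-encryption checksum is stored 4 bytes after the key version
        if (fil_crypt_verify_checksum_stub(page, page_size))
          return false;                           // that checksum matches
      }
      // fall back to the normal (unencrypted) checksum comparison
      return buf_page_is_corrupted_stub(page, page_size);
    }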
Problem was that the log_scrub() function did not take the required log_sys mutex.
Background: Unused space in log blocks is padded with MLOG_DUMMY_RECORD if innodb-scrub-log
is enabled. As log files are written in a circular fashion, old log blocks can be reused
later for new redo-log entries. Scrubbing pads the unused space in log blocks to avoid visibility
of possible old redo-log contents.
log_scrub(): Take the log_sys mutex
log_pad_current_log_block(): Increase srv_stats.n_log_scrubs if padding is done.
srv0srv.cc: Export srv_stats.n_log_scrubs as the status variable innodb_scrub_log
ha_innodb.cc: Export innodb_scrub_log as a global status variable.
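A much-simplified sketch of the fix; std::mutex stands in for the log_sys mutex, the block size and DUMMY_RECORD value are placeholders, and the structure is illustrative only. Padding must happen under the mutex because it writes into the current log block:

    #include <atomic>
    #include <cstddef>
    #include <cstring>
    #include <mutex>

    static const unsigned char DUMMY_RECORD= 0x01;   // placeholder, not the real value

    struct Log_sketch
    {
      std::mutex mutex;                      // stands in for log_sys->mutex
      unsigned char block[512];              // the current, partially filled log block
      std::size_t used= 0;                   // bytes already written to it
      std::atomic<unsigned long> n_log_scrubs{0};

      // Pad the unused tail of the current block so that contents of a
      // reused old block can never become visible in the redo log.
      void pad_current_block()
      {
        if (used < sizeof block)
        {
          std::memset(block + used, DUMMY_RECORD, sizeof block - used);
          ++n_log_scrubs;                    // exported as innodb_scrub_log
        }
      }

      // The bug: this used to run without holding the mutex.
      void scrub()
      {
        std::lock_guard<std::mutex> guard(mutex);
        pad_current_block();
      }
    };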
The C preprocessor symbol WITH_NUMA is never defined. Instead, the symbol
HAVE_LIBNUMA is used for checking if the feature is to be used.
If cmake -DWITH_NUMA=OFF is specified, HAVE_LIBNUMA will not be defined
at compilation time even if the library is available.
If cmake -DWITH_NUMA=ON is specified but the library is not available
at configuration time, the compilation will be aborted.
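A minimal sketch of code guarded by the correct symbol; the libnuma calls are standard libnuma v2 API, while the wrapper name is made up for illustration:

    /* HAVE_LIBNUMA is the symbol the build defines when NUMA support is
       both requested and libnuma is found; WITH_NUMA exists only as a
       CMake option and is never seen by the C preprocessor. */
    #ifdef HAVE_LIBNUMA
    # include <numa.h>
    static void interleave_memory_sketch()
    {
      if (numa_available() != -1)
        numa_set_interleave_mask(numa_all_nodes_ptr);
    }
    #else
    static void interleave_memory_sketch() {}
    #endif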
Problem:-
The condition that checks for node readiness is too strict, as it does
not allow SELECTs even if these selects do not access any tables.
For example, statements such as
SELECT 1;
or
SELECT @@max_allowed_packet;
were rejected.
Solution:-
We need not report this error when all_tables (lex->query_tables)
is NULL.
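A self-contained sketch of the relaxed check; the structs model only the one field the check needs, and the names are stand-ins for the real THD/LEX used in the wsrep dispatch path:

    // Minimal stand-ins for THD and LEX; only the fields used by the check exist.
    struct Lex_sketch { const void *query_tables; };
    struct Thd_sketch { Lex_sketch *lex; bool node_ready; };

    static bool deny_statement_sketch(const Thd_sketch *thd)
    {
      if (thd->node_ready)
        return false;                  // the node is synced: allow everything
      if (thd->lex->query_tables == nullptr)
        return false;                  // e.g. SELECT 1: touches no tables
      return true;                     // otherwise report the not-ready error
    }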
Problem:- In replication, if the slave has extra persistent columns, these
columns are not computed while applying write-sets from the master.
Solution:- While applying row events from the master, we will generate values
for the extra persistent columns.
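A conceptual sketch only, with hypothetical types; the real applier evaluates the slave table's generation expressions after unpacking the master's row image:

    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Row_sketch { std::vector<long> fields; };

    struct Extra_vcol_sketch
    {
      std::size_t field_index;                        // a slave-only stored column
      std::function<long(const Row_sketch&)> expr;    // its generation expression
    };

    // After unpacking a row event that carries only the master's columns,
    // compute the slave-only persistent columns before writing the row.
    static void fill_extra_persistent_columns_sketch(
        Row_sketch &row, const std::vector<Extra_vcol_sketch> &extra)
    {
      for (const Extra_vcol_sketch &col : extra)
        row.fields[col.field_index]= col.expr(row);
    }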
The configuration parameter innodb_use_fallocate, which is mapped to
the variable srv_use_posix_fallocate, has no effect in MariaDB 10.2.2
or MariaDB 10.2.3.
Thus the configuration parameter and the variable should be removed.
fil_space_t::recv_size: New member: recovered tablespace size in pages;
0 if no size change was read from the redo log,
or if the size change was implemented.
fil_space_set_recv_size(): New function for setting space->recv_size
(see the sketch at the end of this entry).
innodb_data_file_size_debug: A debug parameter for setting the system
tablespace size in recovery even when the redo log does not contain
any size changes. It is hard to write a small test case that would
cause the system tablespace to be extended at the critical moment.
recv_parse_log_rec(): Note those tablespaces whose size is being changed
by the redo log, by invoking fil_space_set_recv_size().
innobase_init(): Correct an error message, and do not require a larger
innodb_buffer_pool_size when starting up with a smaller innodb_page_size.
innobase_start_or_create_for_mysql(): Allow startup with any initial
size of the ibdata1 file if the autoextend attribute is set. Require
the minimum size of fixed-size system tablespaces to be 640 pages,
not 10 megabytes. Implement innodb_data_file_size_debug.
open_or_create_data_files(): Round the system tablespace size down
to pages, not to full megabytes. (Our test truncates the system
tablespace to more than 800 pages with innodb_page_size=4k.
InnoDB should not imagine that it was truncated to 768 pages
and then overwrite good pages in the tablespace.)
fil_flush_low(): Refactored from fil_flush().
fil_space_extend_must_retry(): Refactored from
fil_extend_space_to_desired_size().
fil_mutex_enter_and_prepare_for_io(): Extend the tablespace if
fil_space_set_recv_size() was called.
The test case has been successfully run with all the
innodb_page_size values 4k, 8k, 16k, 32k, 64k.
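A simplified sketch of the recv_size mechanism described above; the struct is hypothetical, standing in for the relevant fil_space_t members, and the file extension itself is omitted:

    #include <algorithm>
    #include <cstdint>

    struct Space_sketch
    {
      uint32_t size;        // current tablespace size in pages
      uint32_t recv_size;   // size read from the redo log; 0 = nothing pending
    };

    // Called from redo-log parsing (recv_parse_log_rec) when a size change
    // for the tablespace is seen.
    static void fil_space_set_recv_size_sketch(Space_sketch &space, uint32_t pages)
    {
      space.recv_size= pages;
    }

    // Called when the file is prepared for I/O
    // (fil_mutex_enter_and_prepare_for_io): implement the pending size
    // change, then clear it.
    static void prepare_for_io_sketch(Space_sketch &space)
    {
      if (space.recv_size != 0)
      {
        // extend the data file to at least recv_size pages (file I/O omitted)
        space.size= std::max(space.size, space.recv_size);
        space.recv_size= 0;
      }
    }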
* some of these tests run just fine with InnoDB:
-> s/have_xtradb/have_innodb/
* sys_var tests did basic tests for xtradb only variables
-> remove them, they're useless anyway (sysvar_innodb does it better)
* multi_update had innodb specific tests
-> move to multi_update_innodb.test
Problem was that for encryption we use a temporary scratch area for
reading and writing tablespace pages. But if the page was not actually
decrypted, the correctly updated page was not moved to the scratch area
that was then written. This can happen e.g. for page 0, as it is
never encrypted even if encryption is enabled, and since we wrote
the contents of the old page 0 to the tablespace, it naturally contained
an incorrect space_id, which was later noticed and an error message
was written. The updated page with the correct space_id was lost.
If the tablespace is encrypted, we use an additional
temporary scratch area where pages are read
for decryption: readptr == crypt_io_buffer != io_buffer.
The destination for decryption is a buffer pool block,
block->frame == dst == io_buffer, which is updated.
Pages that did not require decryption, even when the
tablespace is marked as encrypted, are not copied;
instead block->frame is set to src == readptr.
If the tablespace is encrypted, we copy the updated page to
writeptr != io_buffer. This fixes the above bug.
For encryption we again use a temporary scratch area,
writeptr != io_buffer == dst,
that is then written to the tablespace:
(1) For normal tables src == dst == writeptr

    ut_ad(!encrypted && !page_compressed ?
          src == dst && dst == writeptr + (i * size) : 1);

(2) For page compressed tables src == dst == writeptr

    ut_ad(page_compressed && !encrypted ?
          src == dst && dst == writeptr + (i * size) : 1);

(3) For encrypted tables src != dst != writeptr

    ut_ad(encrypted ?
          src != dst && dst != writeptr + (i * size) : 1);