Apparently, in older versions of Perl, unpack does not have the logic
to use $_ as a default value for the second argument. Fixed by
specifying it explicitly.
InnoDB data file compatibility with MariaDB 10.0/MySQL 5.6 using
innodb-page-size!=16K
The storage format of FSP_SPACE_FLAGS was accidentally broken
already in MariaDB 10.1.0. This fix brings the format in
line with other MySQL and MariaDB release series.
Please refer to the comments that were added to fsp0fsp.h
for details.
This is an INCOMPATIBLE CHANGE that affects users of
page_compression and non-default innodb_page_size. Upgrading
to this release will correct the flags in the data files.
If you want to downgrade to an earlier MariaDB 10.1.x, please refer
to the test innodb.101_compatibility for how to reset the
FSP_SPACE_FLAGS in the files.
NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret
uncompressed data files with innodb_page_size=4k or 64k as
compressed innodb_page_size=16k files, and then probably fail
when trying to access the pages. See the comments in the
function fsp_flags_convert_from_101() for detailed analysis.
Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16.
In this way, compressed innodb_page_size=16k tablespaces will not
be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20.
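
A minimal sketch of the new flag test. The bit position 16 is from
this change, but the macro name here is a simplified stand-in for the
fsp0fsp.h definitions:

    #include <cstdint>

    // Bit position 16, per this change; name simplified.
    constexpr unsigned POS_PAGE_COMPRESSION = 16;

    // MariaDB 10.1.0 to 10.1.20 never set bit 16, so a compressed
    // innodb_page_size=16k tablespace can no longer be mistaken for
    // an uncompressed one.
    inline bool flags_have_page_compression(uint32_t flags) {
      return (flags >> POS_PAGE_COMPRESSION) & 1;
    }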
Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the
dict_table_t::flags when the table is available, in
fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace().
During crash recovery, fil_load_single_table_tablespace() will use
innodb_compression_level for the PAGE_COMPRESSION_LEVEL.
FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags
that are not to be written to FSP_SPACE_FLAGS. Currently, these
include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR; see the
sketch after the next paragraph.
Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support
one innodb_page_size for the whole instance.
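
A minimal sketch of the two concepts above. The bit positions and the
page-size encoding are assumptions for illustration; the
authoritative values are in fsp0fsp.h:

    #include <cstdint>

    // Assumed bit positions of the memory-only flags (illustrative).
    constexpr unsigned POS_COMPRESSION_LEVEL = 17;  // 4 bits, assumed
    constexpr unsigned POS_ATOMIC_WRITES     = 21;  // 2 bits, assumed
    constexpr unsigned POS_DATA_DIR          = 23;  // 1 bit,  assumed

    // FSP_FLAGS_MEM_MASK: bits that exist only in fil_space_t::flags
    // and are stripped before writing FSP_SPACE_FLAGS to page 0.
    constexpr uint32_t FSP_FLAGS_MEM_MASK =
        (0xfU << POS_COMPRESSION_LEVEL)
      | (0x3U << POS_ATOMIC_WRITES)
      | (0x1U << POS_DATA_DIR);

    inline uint32_t flags_for_page0(uint32_t mem_flags) {
      return mem_flags & ~FSP_FLAGS_MEM_MASK;
    }

    // FSP_FLAGS_PAGE_SSIZE(): the instance-wide page size, assumed
    // encoded as 0 for the original 16KiB default, else
    // log2(page_size) - 9 (4k -> 3, 8k -> 4, 32k -> 6, 64k -> 7).
    constexpr uint32_t page_ssize(unsigned page_size_shift) {
      return page_size_shift == 14 ? 0 : page_size_shift - 9;
    }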
When creating a dummy tablespace for the redo log, use
fil_space_t::flags=0. The flags are never written to the redo log files.
Remove many FSP_FLAGS_SET_ macros.
dict_tf_verify_flags(): Remove. This is basically only duplicating
the logic of dict_tf_to_fsp_flags(), used in a debug assertion.
fil_space_t::mark: Remove. This flag was not used for anything.
fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter
mark_space, and add a parameter for table flags. Check that
fil_space_t::flags match the table flags, and adjust the (memory-only)
flags based on the table flags.
fil_node_open_file(): Remove some redundant or unreachable conditions,
do not use stderr for output, and avoid unnecessary server aborts.
fil_user_tablespace_restore_page(): Convert the flags, so that the
correct page_size will be used when restoring a page from the
doublewrite buffer.
fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove.
It suffices to have fil_space_is_page_compressed().
FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL,
FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not
exist in the FSP_SPACE_FLAGS but only in memory.
fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS
in page 0. Called by fil_open_single_table_tablespace(),
fil_space_for_table_exists_in_mem() and
innobase_start_or_create_for_mysql(), except when --innodb-read-only
is active.
fsp_flags_is_valid(ulint): Reimplement from scratch, with
accurate comments. Do not display any details of detected
inconsistencies, because the output could be confusing when
dealing with MariaDB 10.1.x data files.
fsp_flags_convert_from_101(ulint): Convert flags from buggy
MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags
cannot be in MariaDB 10.1.x format.
fsp_flags_match(): Check the flags when probing files.
Implemented based on fsp_flags_is_valid()
and fsp_flags_convert_from_101().
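
A minimal sketch of how fsp_flags_match() can combine the two
helpers. Signatures are simplified, with ulint narrowed to uint32_t:

    #include <cstdint>

    bool fsp_flags_is_valid(uint32_t flags);              // see above
    uint32_t fsp_flags_convert_from_101(uint32_t flags);  // see above

    // Probing a file: accept the stored flags if they equal the
    // expected ones, either directly or after conversion from the
    // buggy MariaDB 10.1.x format.
    bool fsp_flags_match(uint32_t expected, uint32_t stored) {
      if (fsp_flags_is_valid(stored))
        return stored == expected;
      // Not valid in the current format: maybe written by 10.1.x.
      return fsp_flags_convert_from_101(stored) == expected;
    }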
dict_check_tablespaces_and_store_max_id(): Do not access the
page after committing the mini-transaction.
IMPORT TABLESPACE fixes:
AbstractCallback::init(): Convert the flags.
FetchIndexRootPages::operator(): Check that the tablespace flags match the
table flags. Do not attempt to convert tablespace flags to table flags,
because the conversion would necessarily be lossy.
PageConverter::update_header(): Write back the correct flags.
This takes care of the flags in IMPORT TABLESPACE.
A doublewrite buffer page may contain both a bad and a good copy.
Clean up the InnoDB doublewrite buffer code.
buf_dblwr_init_or_load_pages(): Do not add empty pages to the buffer.
buf_dblwr_process(): Do consider changes to pages that are all zero.
Do not abort when finding a corrupted copy of a page in the doublewrite
buffer, because there could be multiple copies in the doublewrite buffer,
and only one of them needs to be good.
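
A minimal sketch of that recovery policy; the types and names are
illustrative, not the buf0dblwr.cc interfaces:

    #include <vector>

    struct Page_copy_sketch { const unsigned char* frame; };

    bool copy_is_corrupted(const unsigned char* frame);  // checksums

    // Among all doublewrite copies of one page, any single good copy
    // is enough; a corrupted copy alone is no reason to abort.
    const unsigned char*
    pick_good_copy(const std::vector<Page_copy_sketch>& copies) {
      for (const Page_copy_sketch& c : copies)
        if (!copy_is_corrupted(c.frame))
          return c.frame;
      return nullptr;  // no usable copy; the caller decides
    }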
The thd argument was used for get_datetime_value() and for
thd->is_error(). But in fact, get_datetime_value() never used the thd
argument, because the cache ptr argument was NULL, and the
thd->is_error() check was not needed at that place at all.
The Item_cache was allocated with current_thd->alloc(), that is, on
the thd's execution arena, not on table->expr_arena.
Remove THD::arena_for_cached_items, which was temporarily set in
update_virtual_fields() and replaced the THD arena in
get_datetime_value(). Instead, set the THD arena to table->expr_arena
for the whole duration of update_virtual_fields().
Item_func_le included Arg_comparator. Arg_comparator remembered
the current_thd during fix_fields and used that value during
execution to allocate Item_cache in get_datetime_value().
But for vcols, fix_fields() and val_int() can happen in different
threads.
The same bug affected Item_func_in using in_datetime or
cmp_item_datetime; both also remembered current_thd at fix_fields()
time to use it later for get_datetime_value().
As a fix, these objects no longer remember the current_thd,
and get_datetime_value() uses current_thd at run time. This
should not increase the number of current_thd calls much, as the
Item_cache is created only once anyway.
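
A minimal sketch of the before/after pattern, with stand-ins for the
server's THD and thread-local current_thd:

    // Stand-ins for the server's THD and thread-local current_thd.
    struct THD { /* per-session state */ };
    thread_local THD* current_thd = nullptr;

    class Comparator_sketch {
      // Old, buggy pattern: a THD* member captured during
      // fix_fields() may belong to a different thread by the time a
      // vcol is evaluated.
      // THD* thd;  // removed by the fix
    public:
      void compare_as_datetime() {
        // New pattern: fetch the executing thread's THD at run time,
        // when the Item_cache is created (which happens only once).
        THD* thd = current_thd;
        (void)thd;  // ... allocate the Item_cache for this thd ...
      }
    };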
Test crash recovery from an encrypted redo log with innodb_encrypt_log=0.
Previously, we did a clean shutdown, so only the log checkpoint
information would have been read from the redo log. With this change,
we will be reading and applying encrypted redo log records.
include/start_mysqld.inc: Observe $restart_parameters.
encryption.innodb-log-encrypt: Remove some unnecessary statements,
and instead of restarting the server and concurrently accessing
the files while the server is running, kill the server, check the
files, and finally start up the server.
innodb.log_data_file_size: Use start_mysqld.inc with $restart_parameters.
- Changed the error handler interface so that the handler can change
the error level (see the sketch after this list)
- Give warnings and errors when calculating virtual columns
- On INSERT/UPDATE an error is fatal in strict mode
- SELECT and DELETE will only give a warning if a virtual field
generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to
be able to easily detect in update_virtual_fields() whether we should
use an error handler to mask errors or not
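
A sketch of the interface idea only: passing the level by pointer
lets a handler downgrade an error to a warning while virtual columns
are calculated. The names are simplified stand-ins, not the real
Internal_error_handler signature:

    enum class Err_level { NOTE, WARN, ERROR_LEVEL };

    struct Vcol_error_handler_sketch {
      // Returning false lets normal error processing continue, but
      // with the (possibly changed) level.
      bool handle_condition(unsigned sql_errno, Err_level* level) {
        (void)sql_errno;
        if (*level == Err_level::ERROR_LEVEL)
          *level = Err_level::WARN;  // SELECT/DELETE: warn, not fail
        return false;
      }
    };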
The problem was that one internal record buffer in MyISAM was not big
enough to handle virtual fields.
Fixed by extending the buffer.
Also wrapped the test case lines at 79 characters.
Found and fixed two problems:
- Filesort addon fields didn't mark virtual columns properly.
- Multi-range read calculated the vcol bitmap but was not using it.
This caused the wrong vcol field to be calculated on read, which
triggered the assertion.
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table, it has no virtual blob values
- update_virtual_fields() is run, and the vcol blob gets its value
into the record. But only a pointer to the value is in
table->record[0]; the value itself is in the Field_blob::value String
(but it doesn't have to be! it can be in the record, if the column is
just a copy of another column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table,record[1]), old record now is in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To fix this I have introduced a new String object 'read_value' in
Field_blob. When updating virtual columns after a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row are stored in 'value' as before.
As a safety precaution, I also made the INSERT DELAYED handling of
blobs more general by using 'value' to store strings instead of the
record. This ensures that virtual functions on delayed insert work as
in the case of a normal insert.
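
A minimal sketch of the two-buffer idea, with std::string standing in
for the server's String class; the real code is in Field_blob:

    #include <string>
    #include <utility>

    class Blob_field_sketch {
      std::string value;       // blobs computed for the new row
      std::string read_value;  // vcol blobs for the row just read
    public:
      // Used by update_virtual_fields() after a row has been read:
      // keeping the blob in read_value means a later fill_record(),
      // which reallocates 'value', cannot turn the pointer saved
      // into record[1] by store_record() into a dangling pointer.
      const char* store_read_blob(std::string blob) {
        read_value = std::move(blob);
        return read_value.data();
      }
      const char* store_new_blob(std::string blob) {
        value = std::move(blob);
        return value.data();
      }
    };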
Triggers are now properly updating the read, write and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rules so that one can use local (@) variables in DEFAULT
and non-persistent virtual field expressions.
- MDEV-11621 rpl.rpl_gtid_stop_start fails sporadically in buildbot
- MDEV-11620 rpl.rpl_upgrade_master_info fails sporadically in buildbot
The issue above was probably that the build machine was overworked
and the shutdown took longer than 30 and 10 seconds respectively,
which caused MyISAM tables to be marked as crashed.
Fixed by flushing MyISAM tables before doing a forced shutdown/kill.
I also increased the timeout for forced shutdown from 10 seconds to
60 seconds to fix other possible issues on slow machines.
Also fixed some compiler warnings.
- privilege_table_io.test didn't properly reset roles_mapping
- Fixed a memory allocation problem with CHECK CONSTRAINT, found when
running main.check_constraint with --valgrind
- Atomic writes are enabled by default
- Automatically detect if the device supports atomic writes and use
them if atomic writes are enabled
- Remove ATOMIC WRITE options from CREATE TABLE
- Atomic write is a device option, not a table option, as the table
may crash if the media changes
- Add support for SHANNON SSD cards
Perform a slow shutdown at the start of the test, and create all
InnoDB tables with STATS_PERSISTENT=0, so that any I/O related to
background tasks (change buffer merge, purge, persistent statistics)
should be eliminated.
This should be a non-functional change. I was unable to repeat
MDEV-11626 innodb.innodb-change-buffer-recovery fails for xtradb
and cannot determine the reason for the failure without having access
to the files.
The repeatability of MDEV-11626 should not be affected by these changes.
The problem was with deleting a non-existing .frm file for a storage
engine that doesn't have .frm files (yet).
Fixed by not giving an error for non-existing .frm files for storage
engines that are using discovery.
Also fixed a valgrind suppression related to the given test case.
Sometimes innodb_data_file_size_debug was reported as INT UNSIGNED
instead of BIGINT UNSIGNED. Make it uint instead of ulong to get
a more deterministic result.
When the test is run as a part of the suite with valgrind,
only allow it to be executed if --big-test is set.
If the test is run by specifying its name explicitly, it will still
be executed, even with valgrind and without --big-test; MTR has
special logic for that.
encryption.innodb_scrub: Clean up. Make it also cover ROW_FORMAT=COMPRESSED,
removing the need for encryption.innodb_scrub_compressed.
Add a FIXME comment saying that we should create a secondary index,
to demonstrate that undo log pages also get scrubbed. Currently that
is not working!
Also clean up encryption.innodb_scrub_background, but keep it disabled,
because the background scrubbing does not work reliably.
Fix both tests so that if something is not scrubbed, the test will be
aborted, so that the data files will be preserved. Allow the tests to
run on Windows as well.
10.1 is merged into 10.2 now. Two issues are left to fix:
(1) encryption.innochecksum test
(2) read_page0 vs page_0_crypt_read
(1) The innochecksum tool did not compile after the merge, because
buf_page_is_corrupted() uses fil_crypt_t, which has been changed.
extra/CMakeLists.txt: Added fil/fil0crypt.cc as a dependency, as we
need to use fil_crypt_verify_checksum() for encrypted pages.
innochecksum.cc: If we think the page is encrypted, i.e. the field at
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION is nonzero, we call the
fil_crypt_verify_checksum() function to compare the calculated
checksum to the stored checksum that was calculated after encryption
(this is stored at a different offset, i.e.
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION + 4).
If the checksum does not match, we call the normal
buf_page_is_corrupted() to compare the calculated checksum to the
stored checksum. See the sketch at the end of this item.
fil0crypt.cc: Add #ifdef UNIV_INNOCHECKSUM to be able to compile
this file for the innochecksum tool.
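
A sketch of the described check. Offset 26 for
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION matches the standard InnoDB
page header; the helper signatures are simplified:

    #include <cstdint>

    constexpr unsigned FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION = 26;

    static uint32_t read_u32(const unsigned char* p) {
      return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
           | uint32_t(p[2]) << 8 | uint32_t(p[3]);
    }

    bool fil_crypt_verify_checksum(const unsigned char* page,
                                   unsigned long page_size);
    bool buf_page_is_corrupted(const unsigned char* page,
                               unsigned long page_size);

    bool page_checks_out(const unsigned char* page,
                         unsigned long page_size) {
      if (read_u32(page + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION)) {
        // The page looks encrypted: verify the checksum that was
        // stored after encryption (at key_version offset + 4).
        if (fil_crypt_verify_checksum(page, page_size))
          return true;
      }
      // Fall back to the normal stored-vs-calculated comparison.
      return !buf_page_is_corrupted(page, page_size);
    }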
(2) read_page0 is not needed and thus removed.
The problem was that the log_scrub() function did not take the
required log_sys mutex.
Background: Unused space in log blocks is padded with
MLOG_DUMMY_RECORD if innodb-scrub-log is enabled. As log files are
written in a circular fashion, old log blocks can be reused later for
new redo log entries. Scrubbing pads the unused space in log blocks
so that possible old redo log contents are not visible.
log_scrub(): Take the log_sys mutex (see the sketch below).
log_pad_current_log_block(): Increase srv_stats.n_log_scrubs if
padding is done.
srv0srv.cc: Expose srv_stats.n_log_scrubs in the export vars as
innodb_scrub_log.
ha_innodb.cc: Export innodb_scrub_log to the global status.
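
A minimal sketch of the fixed locking, with std::mutex standing in
for the InnoDB log_sys mutex:

    #include <mutex>

    struct Log_sys_sketch {
      std::mutex mutex;  // stand-in for the log_sys mutex
    };
    Log_sys_sketch log_sys_sketch;

    // Pads the rest of the current log block with MLOG_DUMMY_RECORD
    // and increases srv_stats.n_log_scrubs when padding is done.
    void log_pad_current_log_block();

    void log_scrub() {
      // The fix: never touch the current log block without the mutex.
      std::lock_guard<std::mutex> guard(log_sys_sketch.mutex);
      log_pad_current_log_block();
    }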
The C preprocessor symbol WITH_NUMA is never defined. Instead, the symbol
HAVE_LIBNUMA is used for checking if the feature is to be used.
If cmake -DWITH_NUMA=OFF is specified, HAVE_LIBNUMA will not be defined
at compilation time even if the library is available.
If cmake -DWITH_NUMA=ON is specified but the library is not available
at configuration time, the compilation will be aborted.
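
For illustration, this is the guard pattern the description implies
on the C++ side. The numa_* calls are the standard libnuma v2 API;
the helper function itself is made up:

    #ifdef HAVE_LIBNUMA
    #include <numa.h>
    #endif

    // Hypothetical helper: interleave a large allocation across NUMA
    // nodes only when cmake defined HAVE_LIBNUMA (-DWITH_NUMA=ON plus
    // an available libnuma). WITH_NUMA itself is never tested here.
    static void numa_interleave_sketch() {
    #ifdef HAVE_LIBNUMA
      if (numa_available() != -1)
        numa_set_interleave_mask(numa_all_nodes_ptr);
    #endif
    }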
Problem:
The condition that checks for node readiness is too strict, as it
does not allow SELECTs even if these SELECTs do not access any
tables. For example, if we run
SELECT 1;
or
SELECT @@max_allowed_packet;
Solution:
We need not report this error when all_tables (lex->query_tables)
is NULL.
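
A minimal sketch of the relaxed condition; LEX and the readiness
predicate are simplified stand-ins for the server structures:

    struct Table_ref_sketch { Table_ref_sketch* next; };
    struct Lex_sketch { Table_ref_sketch* query_tables; };

    bool wsrep_node_is_ready();  // the existing readiness check

    // Reject only statements that actually reference tables. For
    // SELECT 1 or SELECT @@max_allowed_packet, lex->query_tables is
    // NULL, so they are allowed even before the node is ready.
    bool must_reject_sketch(const Lex_sketch* lex) {
      return !wsrep_node_is_ready() && lex->query_tables != nullptr;
    }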
Problem: In replication, if the slave has extra persistent columns,
these columns are not computed while applying write-sets from the
master.
Solution: While applying row events from the master, we will generate
values for the extra persistent columns.