This is basically a port of WL#6045 (Improve Innochecksum) with some
code refactoring of innochecksum.
Added the page0size.h include from 10.2 to make the 10.1 vs. 10.2
innochecksum as identical as possible.
Added page 0 checksum checking; if that fails, the whole test fails.
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently at almost any time.
Also fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that decides
whether clear_error() is called. By default it is called, but no longer
from dispatch_command(), which was the original problem
(see the sketch after this list).
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before, clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against a kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup).
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources.)
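For illustration, a minimal sketch of how the two new optional
parameters could fit together; Session, is_error and the bodies are
simplified stand-ins, not the server's actual THD code:

    /* Simplified stand-in for THD, mirroring only the parts described
       above. */
    class Session
    {
    public:
      Session() : is_error(false) {}

      /* Reset per-statement state. Callers that must not lose a
         pending error (the dispatch_command() case above) pass false. */
      void reset_for_next_command(bool do_clear_error= true)
      {
        if (do_clear_error)
          clear_error();
        /* ... other per-statement resets ... */
      }

      /* Previously reset_diagnostics_area() ran only when no error was
         set; force_reset makes it run unconditionally, removing the
         duplicated explicit call at query start. */
      void clear_error(bool force_reset= false)
      {
        if (!is_error || force_reset)
          reset_diagnostics_area();
        is_error= false;
      }

    private:
      bool is_error;
      void reset_diagnostics_area() { /* ... */ }
    };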
Always read the full page 0 to determine whether the tablespace
contains encryption metadata. Tablespaces that are page compressed,
or page compressed and encrypted, have no stored checksum to compare,
so the comparison is skipped. Encrypted tables use the checksum
verification written for encrypted tables; normal tables use the
normal method.
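For illustration, a minimal sketch of the resulting verification
dispatch; the two checker functions are simplified stand-ins for
fil_space_verify_crypt_checksum() and buf_page_is_corrupted():

    #include <cstdint>

    /* Stand-in: checksum method written for encrypted tables. */
    static bool crypt_checksum_ok(const uint8_t*)  { return true; }
    /* Stand-in: normal checksum method. */
    static bool normal_checksum_ok(const uint8_t*) { return true; }

    bool page_checksum_ok(const uint8_t *page,
                          bool page_compressed, bool encrypted)
    {
      if (page_compressed)              /* also: compressed and encrypted */
        return true;                    /* no stored checksum to compare */
      if (encrypted)
        return crypt_checksum_ok(page); /* encrypted-table verification */
      return normal_checksum_ok(page);  /* normal method */
    }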
buf_page_is_checksum_valid_crc32, buf_page_is_checksum_valid_innodb,
buf_page_is_checksum_valid_none:
  Add innochecksum logging to file.
buf_page_is_corrupted:
  Remove ib_logf and page_warn_strict_checksum calls in the
  innochecksum compilation. Add innochecksum logging to file.
fil0crypt.cc, fil0crypt.h:
  Modify to be usable in the innochecksum compilation and move
  fil_space_verify_crypt_checksum to the end of the file. Add
  innochecksum logging to file.
univ.i:
  Add the innochecksum strict_verify, log_file and cur_page_num
  variables as extern.
page_zip_verify_checksum:
  Add innochecksum logging to file.
innochecksum.cc:
  Many changes; most notably, the ability to read encryption
  metadata from page 0 of the tablespace.
Added a test case where we intentionally corrupt:
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION (encryption key version)
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION+4 (post encryption checksum)
FIL_DATA+10 (data)
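For illustration, a minimal sketch of the corruption step, assuming
the standard InnoDB header offset 26 for
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION and an illustrative file name;
the actual test drives the equivalent from the test framework:

    #include <cstdio>

    /* Flip one byte at the given offset within page 0 of an .ibd file. */
    static bool corrupt_byte(const char *path, long offset)
    {
      FILE *f= fopen(path, "r+b");
      if (!f)
        return false;
      int c= fseek(f, offset, SEEK_SET) ? EOF : fgetc(f);
      bool ok= c != EOF &&
               !fseek(f, offset, SEEK_SET) &&
               fputc(c ^ 0xFF, f) != EOF;   /* invert the byte */
      fclose(f);
      return ok;
    }

    int main()
    {
      const long KEY_VERSION= 26;  /* FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION */
      corrupt_byte("t1.ibd", KEY_VERSION);      /* encryption key version */
      corrupt_byte("t1.ibd", KEY_VERSION + 4);  /* post encryption checksum */
      return 0;
    }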
Backport of 7e29f2d64f from 10.1.
Create_field does not set BINARY_FLAG, so the check didn't work at all.
Also, character sets were already compared, so this check would have
been redundant (had it worked).
1. Use Regexp_processor_pcre::set_recursion_limit() to set the
recursion limit depending on the currently available stack size
(see the sketch after this list).
2. Make the pcre stack frame estimate no less than 500 bytes.
Sometimes pcre estimates it too low, even though the manual
says 500+16 bytes (it was estimated at only 188 for me, while the
actual frame size was 512).
3. Do this for embedded too.
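For illustration, a minimal sketch of deriving the limit; the
function name and available_stack parameter are illustrative, and the
500-byte floor follows the pcre manual as noted above:

    #include <cstddef>

    static const size_t PCRE_FRAME_MIN= 500;  /* never trust a lower estimate */

    long pcre_recursion_limit(size_t available_stack, size_t estimated_frame)
    {
      size_t frame= estimated_frame < PCRE_FRAME_MIN ? PCRE_FRAME_MIN
                                                     : estimated_frame;
      return (long)(available_stack / frame);
    }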
Crashes with innodb_page_size=64K. Does not crash at <= 32K.
Problem was that when a blob record that was earlier < 16K is
enlarged on update so that its length > 16K, it should be stored
externally. However, that was not enforced when page size = 64K
(note that 16K+1 < 64K/2, i.e. half of the btree leaf page).
btr_cur_optimistic_update(): limit the maximum record size to 16K,
or in REDUNDANT row format to 16K-1.
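A minimal sketch of the enforced limit; the function and its
parameters are illustrative, not the btr_cur_optimistic_update() code:

    /* Even on 64K pages, cap the inline record size at 16K
       (16K-1 in REDUNDANT row format); larger BLOBs go external. */
    bool update_fits_inline(unsigned long rec_size, bool redundant)
    {
      const unsigned long limit= redundant ? 16384UL - 1 : 16384UL;
      return rec_size <= limit;
    }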
Problem:
rpl.rpl_mdev-11092 fails in buildbot because after starting the slave
in wait_for_slave_sql_error_and_skip.inc, there is a chance that we
have not yet skipped the last error, so Last_SQL_Errno is still
non-zero by the end of rpl_end.inc, which compares Last_SQL_Errno
to 0. In that case rpl.rpl_mdev-11092 fails.
Solution:
After starting the slave in wait_for_slave_sql_error_and_skip.inc,
wait for Last_SQL_Errno to become 0.
A few tests assume that the CYCLE timer is always available,
which is not true on some platforms (e.g. ARM).
Fixing the tests not to rely on CYCLE availability.
Same MDEV, second bug.
The merge buffer must fit at least MERGEBUFF2 (that is, 15) key
values, because merge_index() can merge that many buffers and
merge_many_buff() leaves that many buffers unmerged.
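For illustration, a minimal sketch of the constraint; MERGEBUFF2 has
the value given above, the function itself is illustrative:

    #include <cstddef>

    static const size_t MERGEBUFF2= 15;

    /* merge_many_buff() can leave up to MERGEBUFF2 runs for the final
       merge_index() pass, so the merge buffer must hold at least that
       many key values. */
    size_t merge_buffer_keys(size_t requested_keys)
    {
      return requested_keys < MERGEBUFF2 ? MERGEBUFF2 : requested_keys;
    }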
In all InnoDB row formats, the pointers or lengths stored in the record
header can be at most 14 bits, that is, count up to 16383.
In ROW_FORMAT=REDUNDANT, this limits the maximum possible record
length to 16383 bytes. In the other row formats, it would merely limit
the maximum length of variable-length fields.
When MySQL 5.7 introduced innodb_page_size=32k and 64k, the maximum
record length was limited to 16383 bytes (I hope 16383, not 16384,
to be able to distinguish from a record whose length is 0 bytes).
This change is present in MariaDB Server 10.2.
btr_cur_optimistic_update(): Restrict maximum record size to 16K-1
for REDUNDANT and 64K page size.
dict_index_too_big_for_tree(): The maximum allowed record size
is half a B-tree page or 16K(-1 for REDUNDANT) for 64K page size.
convert_error_code_to_mysql(): Fix error message to print
correct limits.
my_error_innodb(): Fix error message to print correct limits.
page_zip_rec_needs_ext(): The record size was already restricted
to 16K. Restrict REDUNDANT to 16K-1.
rem0rec.h: Introduce REDUNDANT_REC_MAX_DATA_SIZE (16K-1)
and COMPRESSED_REC_MAX_DATA_SIZE (16K).
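For illustration, a minimal sketch of the two new constants and how
they combine with the half-page rule from dict_index_too_big_for_tree();
the check function is illustrative:

    static const unsigned long REDUNDANT_REC_MAX_DATA_SIZE=  16384 - 1;
    static const unsigned long COMPRESSED_REC_MAX_DATA_SIZE= 16384;

    bool rec_size_allowed(unsigned long rec_size,
                          unsigned long page_size, bool redundant)
    {
      unsigned long max= redundant ? REDUNDANT_REC_MAX_DATA_SIZE
                                   : COMPRESSED_REC_MAX_DATA_SIZE;
      if (page_size / 2 < max)    /* half a B-tree page, if smaller */
        max= page_size / 2;
      return rec_size <= max;
    }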
Item_in_subselect::pushed_cond_guards[] array is allocated only when
left_expr->maybe_null. And it is used (for row expressions) when
left_expr->element_index(i)->maybe_null.
For left_expr being a multi-column subquery, its maybe_null is
always false when the subquery doesn't use tables (see
Item_singlerow_subselect::fix_length_and_dec()
and subselect_single_select_engine::fix_length_and_dec()),
otherwise it's always true.
But row elements can be NULL regardless, so let's always allocate
pushed_cond_guards for multi-column subqueries, no matter whether
maybe_null was forced to true or false.
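For illustration, a minimal sketch of the changed allocation
condition; the struct is a simplified stand-in for the Item classes:

    #include <cstddef>

    struct LeftExpr { size_t cols; bool maybe_null; };

    /* Old condition: allocate only if left_expr.maybe_null.
       New: also allocate for every multi-column subquery. */
    bool *alloc_pushed_cond_guards(const LeftExpr &left_expr)
    {
      if (left_expr.cols > 1 || left_expr.maybe_null)
        return new bool[left_expr.cols]();  /* one guard per column */
      return NULL;
    }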
In get_mm_tree() we have to change Field_geom::geom_type to
GEOMETRY, as we have to allow storing all types of spatial features
in the field. So we now restore the original geom_type once that is
done.
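For illustration, a minimal sketch of the save/restore; the types are
simplified stand-ins for Field_geom and the range-tree build:

    enum geometry_type { GEOM_GEOMETRY, GEOM_POINT, GEOM_POLYGON };

    struct GeomField { geometry_type geom_type; };

    void build_range_tree_for(GeomField *field)
    {
      const geometry_type saved= field->geom_type;
      field->geom_type= GEOM_GEOMETRY;  /* allow any spatial feature */
      /* ... get_mm_tree() work ... */
      field->geom_type= saved;          /* restore the original type */
    }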
The option innodb_log_compressed_pages was contributed by
Facebook to MySQL 5.6. It was disabled in the 5.6.10 GA release
due to problems that were fixed in 5.6.11, which is when the
option was enabled.
The option was set to innodb_log_compressed_pages=ON by default
(disabling the feature), because safety was considered more
important than speed. The option innodb_log_compressed_pages=OFF
can *CORRUPT* ROW_FORMAT=COMPRESSED tables on crash recovery
if the zlib deflate function is behaving differently (producing
a different amount of compressed data) from how it behaved
when the redo log records were written (prior to the crash recovery).
In MDEV-6935, the default value was changed to
innodb_log_compressed_pages=OFF. This is inherently unsafe, because
there are very many different environments where MariaDB can be
running, using different zlib versions. While zlib can decompress
data just fine, there are no guarantees that different versions will
always compress the same data to exactly the same size. To avoid
problems related to zlib upgrades or version mismatches, we must
use a safe default setting.
This will reduce the write performance for users of
ROW_FORMAT=COMPRESSED tables. If you configure
innodb_log_compressed_pages=ON, please make sure that you will
always cleanly shut down InnoDB before upgrading the server
or zlib.
The test did not correctly handle a possible difference in the system
timezone. The fix is to remove the non-functional setting of the local
time_zone and instead allow the timestamp replacement to work with
any date/time.
When using innodb_page_size=16k, InnoDB tables
that were created in MariaDB 10.1.0 to 10.1.20 with
PAGE_COMPRESSED=1 and
PAGE_COMPRESSION_LEVEL=2 or PAGE_COMPRESSION_LEVEL=3
would fail to load.
fsp_flags_is_valid(): When using innodb_page_size=16k, use a
stricter check for .ibd files, on the assumption that
nobody would try to use files with a different page size.
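For illustration, a minimal sketch of the stricter rule; the bit
positions are illustrative, not the exact FSP flag layout:

    #include <cstdint>

    /* At innodb_page_size=16k, reject .ibd flags whose page-size bits
       denote anything but the default 16k size (illustrative layout). */
    bool flags_valid_at_16k(uint32_t flags)
    {
      const uint32_t page_ssize= (flags >> 6) & 0xF;  /* assumed position */
      return page_ssize == 0;   /* 0 = default (16k) page size */
    }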
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds the ddl mark for the missing cases.
The difference from Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the
unsafe_rollback_flags over the temporary commit.
Problem was that in a circular replication setup the master remembers
the position of events it has generated itself when reading from a
slave. If there are no new events in the queue from the slave, a
Gtid_list_log_event is generated to remember the last skipped event.
The problem happens if there is a network delay and we generate a
Gtid_list_log_event in the middle of a transaction, in which case
there will be an implicit commit and a new transaction with
server_id=0 will be logged.
The fix was to not generate any Gtid_list_log_events in the middle of
a transaction.