Basic idea of the patch: disallow creating tables whose rows could become too big
to insert. In other words, once a user has created a table, they should never see
an error like 'cannot insert row as it is too big for the current page size'.
SET innodb_strict_mode=OFF; still allows creating such overly long tables; only a
warning is issued.
dict_table_t::get_overflow_field_local_len(): this function returns the maximum
local field length for overflow fields for every file and row format.
innobase_check_column_length(): rename to too_big_key_part_length()
and reuse it in another part of the code.
create_table_info_t::prepare_create_table(): add a check for the maximum allowed
key part length to keep ALGORITHM=COPY behavior similar to ALGORITHM=INPLACE
behavior. The affected test is innodb.strict_mode.
Rename dict_index_too_big_for_tree() to
dict_index_t::rec_potentially_too_big(): copy the overflow-related size computation
from dtuple_convert_big_rec(). A lot of tests were changed because of that.
I wonder whether users will complain about it?
The test innodb.max_record_size exercises dict_index_t::rec_potentially_too_big()
for different row formats and page sizes.
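A rough standalone sketch of the kind of size estimate involved (the constants, the page-overhead value and the function name here are illustrative, not the actual InnoDB definitions):

  #include <cstddef>

  // Illustrative constants: older row formats (REDUNDANT/COMPACT) keep a
  // 768-byte prefix of an overflow column in the record, newer ones
  // (DYNAMIC/COMPRESSED) keep only a 20-byte pointer.
  static const size_t LOCAL_PREFIX_OLD = 768;
  static const size_t LOCAL_PREFIX_NEW = 20;

  // Rough check: even after every overflow-capable column has been moved
  // off-page, the locally stored part of the record must still fit in
  // roughly half of an empty page, so that a page split always succeeds.
  static bool rec_potentially_too_big_sketch(const size_t *field_max_len,
                                             size_t n_fields,
                                             size_t page_size,
                                             bool new_row_format)
  {
    const size_t local_prefix =
        new_row_format ? LOCAL_PREFIX_NEW : LOCAL_PREFIX_OLD;
    const size_t page_overhead = 200;          // stand-in for the real overhead
    const size_t limit = page_size / 2 - page_overhead;

    size_t local_size = 0;
    for (size_t i = 0; i < n_fields; i++)
    {
      size_t len = field_max_len[i];
      if (len > local_prefix)                  // column can be moved off-page,
        len = local_prefix;                    // but a prefix/pointer stays local
      local_size += len;
    }
    return local_size > limit;
  }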
MDEV-19486 and another similar bug appeared because the handler::write_row()
interface allows the storage engine to modify the row buffer, but callers are not
prepared for that, so further bugs are possible.
handler::write_row():
handler::ha_write_row(): make the argument const.
Use the const argument in every virtual function override (see the sketch below).
ha_innobase: mark the class as final
ha_innobase::bas_ext(): remove as unused
ha_innobase::get_cascade_foreign_key_table_list: remove as unused
ha_innobase::end_stmt(): merge into ha_innobase::reset()
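A minimal sketch of the resulting interface shape after the write_row() constification and the final marking (only the parts named above; everything else in the classes is omitted):

  typedef unsigned char uchar;

  class handler
  {
  public:
    int ha_write_row(const uchar *buf);          // argument is now const
    virtual ~handler() {}
  protected:
    // Storage engines must not modify the row buffer, so every virtual
    // override takes a const argument as well.
    virtual int write_row(const uchar *buf) = 0;
  };

  class ha_innobase final : public handler       // marked final
  {
  protected:
    int write_row(const uchar *buf) override;
  };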
According to the code, this was a Windows-specific workaround for "simulated AIO".
Simulated AIO is not supported on Windows anymore.
Thus, remove the dead code.
Many InnoDB internal variables and counters were only exposed
in an unstructured fashion via SHOW ENGINE INNODB STATUS.
Expose more variables via SHOW STATUS. Many of these were
exported in XtraDB.
Also, introduce SHOW_SIZE_T and use the proper size for
exporting the InnoDB variables.
Remove some unnecessary indirection via export_vars, and
bind some variables directly.
dict_sys_t::rough_size(): Replaces dict_sys_get_size()
and includes the hash table sizes.
This is based on a contribution by Tony Liu from ServiceNow.
Shorten some VARCHAR attributes to a more reasonable length.
INNODB_METRICS: Rename the column STATUS to ENABLED, and make it Boolean.
Replace with INT(1) many Boolean attributes that were declared as VARCHAR
containing 'NO','YES','disabled','enabled','Uninitialized','Initialized'.
Replace some VARCHAR attributes with ENUM.
Replace some BIGINT with INT when 32 bits are sufficient.
Remove INNODB_SYS_TABLESPACES.SPACE_TYPE. The type of a tablespace
can be derived from the tablespace ID. A fixed number is used for
the system tablespace and the temporary tablespace. All other tablespaces
are single-table or single-partition tablespaces.
i_s_locks_row_t::lock_type, lock_get_type_str(): Remove.
This is a redundant field. Table and record locks can be
distinguished by whether i_s_locks_row_t::lock_index is NULL.
fill_trx_row(): Do not unnecessarily copy the constant strings that
trx->op_info is pointing to.
i_s_locks_row_t::lock_mode: Replace string with integer.
lock_get_mode_str(), lock_get_trx_id(), lock_get_trx(): Remove.
field_store_ulint(): Remove.
In row_ins_foreign_check_on_constraint(), the clustered index record is passed to
wsrep_append_foreign_key() after releasing the latch. If the record has been changed
by another thread in the meantime, this could lead to a crash when
wsrep_rec_get_foreign_key() tries to access the record.
row_ins_foreign_check_on_constraint(): Use cascade->pcur->old_rec instead of clust_rec.
row_ins_check_foreign_constraint(): Add missing error printout.
The test innodb.leaf_page_corrupted_during_recovery
fails on buildbot with
Warning 1406 Data too long for column 'line' at row 10
line
len 16384; hex ...
because of the page dumps that InnoDB generates for a corrupted page.
Since this test is using debug instrumentation, we will solve the
issue by disabling page dumps in debug builds altogether. Users of
debug builds will likely know how to extract page dumps by other means.
Page dump output could sometimes be useful when diagnosing problems
that users are facing. Hence we will keep the page dump output in
non-debug (release) builds.
- Ported the MySQL Bug#20597981 test case to mariadb-10.2
- InnoDB never used fts_doc_id_in_read_set. Basically it tells
InnoDB to read the fts_doc_id from the index record itself.
Add the following parameter.
- spider_sync_sql_mode
  Whether to synchronize the local sql_mode with the remote servers.
  0 : Do not synchronize.
  1 : Synchronize.
  The default value is 1.
Error code 12701 is already included in the default value, but other plugin-specific error codes were ignored because of a check against ER_ERROR_LAST, which does not include plugin-specific error codes. So I just removed that check to fix this issue.
- Introduce a new variable called innodb_encrypt_temporary_tables which is
a boolean variable. It decides whether to encrypt the temporary tablespace.
- Encrypts the temporary tablespace based on full checksum format.
- Introduced a new counter to track encrypted and decrypted temporary
tablespace pages.
- Warnings are issued if temporary table creation options conflict with
innodb_encrypt_temporary_tables.
- Added a new test case which reads and writes the pages from/to temporary
tablespace.
Added a check to the innochecksum tool to detect page id mismatches.
This can catch write corruption caused by InnoDB.
Added debug instrumentation inside fil_io() to check whether it writes
the page to a wrong offset.
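A hedged sketch of the new check (FIL_PAGE_OFFSET is the standard 4-byte big-endian page-number field at byte 4 of every InnoDB page; the function name is illustrative):

  #include <cstdint>
  #include <cstdio>

  static const unsigned FIL_PAGE_OFFSET = 4;   // page number in the FIL header

  static uint32_t read_be32(const unsigned char *p)
  {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16)
         | (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
  }

  // Returns true if the page number stored in the page header does not match
  // the page's position in the file, which indicates a misplaced write.
  // (A real tool would skip never-written, all-zero pages.)
  static bool page_id_mismatch(const unsigned char *page,
                               uint32_t expected_page_no)
  {
    const uint32_t stored = read_be32(page + FIL_PAGE_OFFSET);
    if (stored != expected_page_no)
    {
      fprintf(stderr, "page id mismatch: stored %u, expected %u\n",
              stored, expected_page_no);
      return true;
    }
    return false;
  }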
The bug was that when long item strings were converted to VARCHAR,
Type_handler::string_type_handler() didn't take into account the maximum
VARCHAR length. The resulting Aria temporary table was created with
a VARCHAR field of length 1 when it should have been 65537. This caused
MariaDB to send impossible records to ma_write() and Aria eventually
reported the table as crashed.
Fixed by updating Type_handler::string_type_handler() to not create too long
VARCHAR fields. To make things extra safe, I also added checks when
writing dynamic Aria records to ensure we catch a wrong record during write
instead of during read.
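A hedged sketch of the core idea of the fix (the limits are the usual MariaDB string-type boundaries in bytes; the enum and function names are illustrative, the real logic lives in Type_handler::string_type_handler()):

  #include <cstdint>

  static const uint32_t MAX_TINY_STRING = 255;
  static const uint32_t MAX_VARCHAR_LEN = 65535;

  enum string_field_kind { FIELD_STRING, FIELD_VARSTRING, FIELD_BLOB };

  // The bug: a length above MAX_VARCHAR_LEN was still mapped to VARCHAR and
  // the stored length wrapped around (65537 became 1). The fix is to fall
  // through to a blob type once VARCHAR can no longer hold the value.
  string_field_kind string_type_for_length(uint32_t max_octet_length)
  {
    if (max_octet_length <= MAX_TINY_STRING)
      return FIELD_STRING;
    if (max_octet_length <= MAX_VARCHAR_LEN)
      return FIELD_VARSTRING;
    return FIELD_BLOB;               // e.g. 65537 bytes must not become VARCHAR(1)
  }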
Starting with the Intel Skylake microarchitecture, the PAUSE
instruction latency is about 140 clock cycles, instead of the earlier 10.
On AMD processors, the latency could be 10 or 50 clock cycles,
depending on microarchitecture.
Because of this big range of latency, let us scale the loops around
the PAUSE instruction based on timing results at server startup.
my_cpu_relax_multiplier: New variable: How many times to invoke PAUSE
in a loop. Only defined for IA-32 and AMD64.
my_cpu_init(): Determine with RDTSC the time to run 16 PAUSE instructions
in two unrolled loops, and based on the quicker of the two runs,
initialize my_cpu_relax_multiplier (see the sketch below). This form of
calibration was suggested by Mikhail Sinyavin from Intel.
LF_BACKOFF(), ut_delay(): Use my_cpu_relax_multiplier when available.
ut_delay(): Define inline in my_cpu.h.
UT_COMPILER_BARRIER(): Remove. This does not seem to have any effect,
because in our ut_delay() implementation, no computations are being
performed inside the loop. The purpose of UT_COMPILER_BARRIER() was to
prohibit the compiler from reordering computations. It was not
emitting any code.
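A hedged sketch of the calibration and delay-loop idea (using compiler intrinsics; the scaling constants and helper names are illustrative, the real code is in my_cpu.h / my_cpu_init()):

  #if defined(__x86_64__) || defined(__i386__)
  #include <x86intrin.h>                   // __rdtsc(), _mm_pause()

  unsigned my_cpu_relax_multiplier = 200;  // fallback for fast PAUSE

  static unsigned long long time_16_pauses()
  {
    unsigned long long t0 = __rdtsc();
    for (int i = 0; i < 16; i++)
      _mm_pause();
    return __rdtsc() - t0;
  }

  // Time 16 PAUSE instructions twice and keep the quicker run; a slower
  // PAUSE (e.g. ~140 cycles on Skylake instead of ~10) then results in
  // proportionally fewer loop iterations, keeping the total delay similar.
  void my_cpu_init_sketch()
  {
    unsigned long long a = time_16_pauses();
    unsigned long long b = time_16_pauses();
    unsigned long long best = a < b ? a : b;
    if (best > 320)                        // more than ~20 cycles per PAUSE
    {
      unsigned long long scaled = 200ULL * 320 / best;
      my_cpu_relax_multiplier = scaled ? (unsigned) scaled : 1;
    }
  }

  // ut_delay()/LF_BACKOFF()-style loop using the calibrated multiplier.
  void cpu_relax_loop(unsigned delay)
  {
    for (unsigned i = 0; i < delay * my_cpu_relax_multiplier; i++)
      _mm_pause();
  }
  #endif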
There was a bug in the page cache: it didn't take into account that
another thread could be waiting for a page to be read by read_big_block().
Fixed by releasing all waiters in read_big_block().
MDEV-19585 Assertion with S3 table and flush_tables
The limit has to be increased so that MariaDB can create system tables.
It should not have any notable impact on performance.
There should not be any notable performance differences between 1K and 4K,
especially for temporary tables. In most cases using bigger blocks is also
faster (with the possible exception of doing key reads of not fixed length
keys).
dict_table_t::stats_latch_created: remove along with related stuff.
dict_table_t::stats_latch: make it a value member, not a pointer. Always lock it
for simplicity, even for a stats-cloned table.
based on the work of Sergey Vojtovich
The error occurred because the aria_copy_to_s3() function tried to copy the .frm file
of a partition, but a partition does not have its own .frm file. The same is true
for aria_rename_s3().
To fix this issue a new parameter was added to those two functions to specify
whether the .frm file must be copied or not. The parameter is set to 'false' for
partitions.
There was also another issue with EXCHANGE PARTITION. Briefly, there is the
following sequence of operations (see exchange_name_with_ddl_log() for details):
1) rename the swap table to a temporary table,
2) rename the partition to the swap table,
3) rename the temporary table to the partition.
On step (1) the .frm file is renamed too. On step (2) the swap table does not
have a .frm file, as the partition does not have one. On step (3) the partition
will have a .frm file, because it will be renamed from the temporary table. All
of this causes errors at different stages of table access. To fix it, the .frm
file is not touched at all for S3 during the EXCHANGE PARTITION operation. This
is implemented in ha_s3::rename_table() by additionally checking
current_thd->lex->alter_info.partition_flags (see also ALTER_PARTITION_EXCHANGE
in sql_yacc.yy).
The bug occurred when the optimizer decided to use a rowid filter built
by a range index scan to access an InnoDB table with a generated clustered
index.
When a table is accessed by a secondary index Idx employing a rowid filter, the
value of the pk contained in the found index tuple is checked against the
filter. A call of the handler function position() is supposed to put the
pk value into the handler::ref buffer. However, for generated clustered
primary keys this did not happen. The patch fixes this problem.
The problem was that the code in maria_extra() assumed that there could be
only one table open when doing maria_extra(MA_FORCE_REOPEN).
However, in the case of triggers, there can be multiple copies of
the table open.
Fixed by removing the assert.
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings, mostly about
the deprecated `register` keyword.
Too many warnings came from Mroonga, so I gave up on it.
- When recovery failed, errors would not be printed on
new lines.
- Print more information if file lengths are changed
- Added logging of table name for entries INCOMPLETE_LOG and
REDO_REPAIR_TABLE
MDEV-18461 Aria crash recovery failures
This does not fix the bug reported in the MDEV, but
now we get an error message of the problem instead of
an assert.
Make Field::is_equal() const and return bool, as that is a naturally fitting
type for it. Also its argument was narrowed to Column_definition.
InnoDB can change the type of some columns by itself. InnoDB-specific code used to
reside in the Field_xxx::is_equal() methods. Now the engine-specific logic has been
moved to the virtual methods handler::can_convert_{string,varstring,blob,geom}.
These methods are called by Field::can_be_converted_by_engine(), which is a
double dispatch pattern.
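A minimal sketch of the double-dispatch shape (heavily simplified signatures; the real methods take a Column_definition plus more context and are reached through the table's handler):

  class Column_definition;                       // the proposed new column type

  class handler
  {
  public:
    // Engine-specific conversion rules live in the handler.
    virtual bool can_convert_string(const class Field_string *,
                                    const Column_definition &) const
    { return false; }
    virtual bool can_convert_varstring(const class Field_varstring *,
                                       const Column_definition &) const
    { return false; }
    virtual ~handler() {}
  };

  class Field
  {
  public:
    // First dispatch: virtual on the concrete Field subclass.
    virtual bool can_be_converted_by_engine(const handler &,
                                            const Column_definition &) const
    { return false; }
    virtual ~Field() {}
  };

  class Field_string : public Field
  {
  public:
    // Second dispatch: hand "this", now with its concrete type, to the engine.
    bool can_be_converted_by_engine(const handler &h,
                                    const Column_definition &to) const override
    { return h.can_convert_string(this, to); }
  };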
Some InnoDB-specific code still resides in compare_keys_but_name(). It should
be moved from here someday to handler::compare_key_parts(...) or similar.
IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET,
IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET_BUT_COLLATE: both were removed.
IS_EQUAL_NO and IS_EQUAL_YES are not needed now and should be removed
along with the deprecated handler::check_if_incompatible_data().
HA_EXTENDED_TYPES_CONVERSION: removed, as such logic is no longer needed by
the server code.
ALTER_COLUMN_EQUAL_PACK_LENGTH: renamed to the more generic
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
The patch covers two cases:
1) On some collation changes it's possible to rebuild only the secondary indexes.
2) For non-indexed columns the collation can be changed INSTANTly.
Implemented mostly in Field_{string,varstring,blob}::is_equal().
Make this method return how exactly the collations differ.
This information is later used by fill_alter_inplace_info() to pass
the correct info to the engine.
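A minimal sketch of the kind of graded answer is_equal() now has to produce (the enum and function names are purely illustrative; the server actually communicates this through the alter-inplace flags filled in by fill_alter_inplace_info()):

  enum collation_change_t
  {
    COLLATION_INCOMPATIBLE,       // data representation changes: copy the table
    COLLATION_REBUILD_SECONDARY,  // same bytes, different ordering: rebuild indexes
    COLLATION_INSTANT             // same bytes, column not indexed: metadata only
  };

  collation_change_t classify_collation_change(bool same_binary_representation,
                                               bool column_is_indexed)
  {
    if (!same_binary_representation)
      return COLLATION_INCOMPATIBLE;
    return column_is_indexed ? COLLATION_REBUILD_SECONDARY : COLLATION_INSTANT;
  }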
MariaDB doesn't support Read-Free replication, so showing them only causes
confusion.
Removed variables:
- @@rocksdb_read_free_rpl
- @@rocksdb_read_free_rpl_tables
The test cases for the MDEV found several independent bugs
in MariaDB server and Aria:
- If a temporary table was marked as crashed, it could never
be deleted.
- Opening a crashed temporary table gave an error message,
but the error was never forwarded to the caller, which caused
an assert() in my_ok().
- init_read_record() did mmap of all temporary tables, which is
probably not a good idea as this area can potentially be
very big. Changed the code to only mmap internal temporary tables.
- mmap-ed tables were not unmapped in case of repair/optimize,
which caused bad data in the table and crashes if the original
table files were replaced with new ones (as the old mmap
was still in place). Fixed by removing the mmap in case
of repair.
- Cleaned up usage of the code that disabled mmap in Aria.
The problem was that in case of an implicit rollback for ALTER TABLE,
Aria tried to run commit twice.
The test case for this is tricky to do in 10.2, so it will
be added to 10.4 as part of BACKUP STAGE testing.
There were two separate problems:
- The Aria pagecache didn't properly handle re-reading of blocks
that had given errors before (this triggered an assert)
- Temporary tables that were opened several times were
not properly closed in ALTER, REPAIR or OPTIMIZE TABLE
Other things
- Added a couple of asserts that will make it easier to
find problems like this in the future.
- Don't add DZSTD_STATIC_LINKING_ONLY
- Don't use upstream's way of linking with Jemalloc (MyRocks/MariaDB has
its own in build_rocksdb.cmake)
- Don't depend on libunwind
Copy of
commit dcd9379eb5707bc7514a2ff4d9127790356505cb
Author: Manuel Ung <mung@fb.com>
Date: Fri Jun 14 10:38:17 2019 -0700
Skip valgrind for rocksdb.force_shutdown
Summary:
This test does unclean shutdown, and leaks memory.
Squash with: D15749084
Reviewed By: hermanlee
Differential Revision: D15828957
fbshipit-source-id: 30541455d74
If the allocation of spider_table_sts_threads failed,
we would DBUG_RETURN(error_num) without having initialized
it earlier.
Pre-initialize error_num to HA_ERR_OUT_OF_MEM and remove
a lot of assignments that thus became redundant.
This error was introduced in 207594afac
(Spider 3.3.13).
Problem:
=========
One of the purge threads accesses a corrupted page and tries to remove it from
the LRU list. In the meantime, other purge threads are waiting for the same page
in buf_wait_for_read(). The assertion (buf_fix_count == 0) fails for the
purge thread that tries to remove the page from the LRU list.
Solution:
========
- Set the page id to FIL_NULL to indicate that the page is corrupted before
removing the block from the LRU list. Acquire the hash lock for the particular
page id and wait for the other threads to release the buf_fix_count
for the block.
- Added the error check for btr_cur_open() in row_search_on_row_ref().
Explicitly mention every option in .clang-format to protect us from possible
future changes.
Remove separate InnoDB style.
Change style to look more like this script:
for x in $@
do
indent -kr -bl -bli0 -l79 -i2 -nut -c48 -dj -cp0 $x
sed -ri -e 's/ = /= /g'\
-e '/switch.*\)$/{N;s/\n[ ]+/ /}' $x
done
A significant difference is that 'switch' and '{' are put on different lines
because it's impossible in clang-format to set formatting rules just for
the 'switch' statement.
page_zip_compress(), page_zip_compress_write_log(),
page_zip_copy_recs(): Replace the parameters page,page_zip with block,
and set buf_page_t::init_on_flush on success
if innodb_log_optimize_ddl=OFF.
page_zip_parse_compress_no_data(): Merge with the only caller
recv_parse_or_apply_log_rec_body().
Thanks to MDEV-12699, the doublewrite buffer will only be needed in
those cases when a page is being updated in the data file. If the page
had never been written to the data file since it was initialized,
then recovery will be able to reconstruct the page based solely on
the contents of the redo log files.
The doublewrite buffer is only really needed when recovery needs to read
the page in order to apply redo log.
Note: As noted in MDEV-19739, we cannot safely disable the doublewrite
buffer if any MLOG_INDEX_LOAD records were written in the past or will
be written in the future. These records denote that redo logging was
disabled for some pages in a tablespace. Ideally, we would have
the setting innodb_log_optimize_ddl=OFF by default, and would not allow
it to be set while the server is running. If we wanted to make this
safe, assignments with SET GLOBAL innodb_log_optimize_ddl=...
should not only issue a redo log checkpoint (including a write of all
dirty pages from the entire buffer pool), but it should also wait for
all pending ALTER TABLE activity to complete. We elect not to do this.
Avoiding unnecessary use of the doublewrite buffer should improve the
write performance of InnoDB.
buf_page_t::init_on_flush: A new flag to indicate whether it is safe to
skip doublewrite buffering when writing the page.
fsp_init_file_page(): When writing a MLOG_INIT_FILE_PAGE2 record,
set the init_on_flush flag if innodb_log_optimize_ddl=OFF.
This is the only function that writes that log record.
buf_flush_write_block_low(): Skip doublewrite if init_on_flush is set.
fil_aio_wait(): Clear init_on_flush.
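A hedged sketch of the flush-time decision (types reduced to the single flag involved; the real check is in buf_flush_write_block_low() and the flag is cleared again in fil_aio_wait()):

  // Reduced stand-in for the buffer pool page descriptor.
  struct buf_page_t
  {
    bool init_on_flush;   // page was (re)initialized while fully redo-logged
  };

  // If the page was freshly initialized and is fully covered by the redo log
  // (MLOG_INIT_FILE_PAGE2 written with innodb_log_optimize_ddl=OFF), a torn
  // write can be repaired from the log alone, so doublewrite can be skipped.
  static bool use_doublewrite(const buf_page_t &bpage, bool doublewrite_enabled)
  {
    return doublewrite_enabled && !bpage.init_on_flush;
  }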
GCC 9.1.1 noticed that sd_notifyf() was always being invoked with
str=NULL argument for "%s". This code was added in
commit 2e814d4702
but not mentioned in the commit comment.
The STATUS messages for systemd matter during startup and shutdown,
and should not be emitted during normal operation.
ib_senderrf(): Remove the potentially harmful sd_notifyf() calls.
- If InnoDB encounters a garbage or incompletely written log block during
recovery then don't throw an error. Treat it as the end of the log.
- This kind of incomplete or empty block can be the result of killing
InnoDB while it is writing the redo log.
followup for be5c432a42
ha_partition::calculate_checksum() has to invoke calculate_checksum()
for partitions unconditionally, not under (HA_HAS_OLD_CHECKSUM | HA_HAS_NEW_CHECKSUM).
Because the server uses ::info() to ask for a live checksum, while
calculate_checksum() must, precisely, calculate it the slow way,
also for tables that don't have the live checksum at all.
Also, fix the compilation on Windows (ha_checksum/ulonglong type mix).
InnoDB crash recovery buffers redo log records in a hash table.
The function recv_read_in_area() would pick a random hash bucket
and then try to submit read requests for a few nearby pages.
Let us replace the recv_sys.addr_hash with a std::map, which will
automatically be iterated in sorted order.
recv_sys_t::pages: Replaces recv_sys_t::addr_hash, recv_sys_t::n_addrs.
recv_sys_t::recs: Replaces most of recv_addr_t.
recv_t: Encapsulate a raw singly-linked list of records. This reduces
overhead compared to std::forward_list: storage and cache overhead,
because the next-element pointer also points to the data payload, and
processing overhead, because recv_sys_t::recs_t::last will point to
the last record, so that recv_sys_t::add() can append straight to the
end of the list.
RECV_PROCESSED, RECV_DISCARDED: Remove. When a page is fully processed,
it will be deleted from recv_sys.pages.
recv_sys_t::trim(): Replaces recv_addr_trim().
recv_sys_t::add(): Use page_id_t for identifying pages.
recv_fold(), recv_hash(), recv_get_fil_addr_struct(): Remove.
recv_read_in_area(): Simplify the iteration.
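A hedged sketch of the new data-structure shape (types heavily simplified; the ordered iteration of std::map is what lets recv_read_in_area() simply walk neighbouring keys):

  #include <cstddef>
  #include <cstdint>
  #include <map>
  #include <vector>

  struct page_id_t
  {
    uint32_t space, page_no;
    bool operator<(const page_id_t &o) const
    { return space != o.space ? space < o.space : page_no < o.page_no; }
  };

  struct log_rec_t { /* type, LSN, payload ... */ };

  // Raw singly-linked list of records for one page, with a tail pointer so
  // that add() appends in O(1); this is the role of recv_sys_t::recs_t::last.
  struct page_recs_t
  {
    struct node_t { node_t *next; log_rec_t rec; };
    node_t *head = nullptr;
    node_t *last = nullptr;

    void add(node_t *n)
    {
      n->next = nullptr;
      if (last) last->next = n; else head = n;
      last = n;
    }
  };

  // recv_sys.pages: iterating a std::map visits pages in (space, page_no)
  // order, so nearby pages for read-ahead are simply the next keys.
  std::map<page_id_t, page_recs_t> pages;

  std::vector<page_id_t> pages_to_read_near(const page_id_t &start, size_t max)
  {
    std::vector<page_id_t> out;
    for (auto it = pages.lower_bound(start);
         it != pages.end() && it->first.space == start.space && out.size() < max;
         ++it)
      out.push_back(it->first);
    return out;
  }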
Some I/O functions and macros that are declared in os0file.h used to
return a Boolean status code (nonzero on success). In MySQL 5.7, they
were changed to return dberr_t instead. Alas, in MariaDB Server 10.2,
some uses of functions were not adjusted to the changed return value.
Until MDEV-19231, the valid values of dberr_t were always nonzero.
This means that some code that was incorrectly checking for a zero
return value from the functions would never detect a failure.
After MDEV-19231, some tests for ALTER ONLINE TABLE would fail with
cmake -DPLUGIN_PERFSCHEMA=NO. It turned out that the wrappers
pfs_os_file_read_no_error_handling_int_fd_func() and
pfs_os_file_write_int_fd_func() were wrongly returning
bool instead of dberr_t. Also the callers of these functions were
wrongly expecting bool (nonzero on success) instead of dberr_t.
This mistake had been made when the addition of these functions was
merged from MySQL 5.6.36 and 5.7.18 into MariaDB Server 10.2.7.
This fix also reverts commit 40becbc3c7
which attempted to work around the problem.
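A small self-contained illustration of the mismatch pattern (the enum values here are made up, but the point stands: before MDEV-19231 every dberr_t value, including success, was nonzero):

  #include <cstdio>

  enum dberr_t { DB_SUCCESS = 10, DB_ERROR = 11, DB_IO_ERROR = 12 };

  static dberr_t os_file_read_sketch(bool simulate_failure)
  {
    return simulate_failure ? DB_IO_ERROR : DB_SUCCESS;
  }

  int main()
  {
    // Buggy caller, written for the old Boolean convention (nonzero == success):
    // this branch can never be taken, because every dberr_t value is truthy.
    if (!os_file_read_sketch(true))
      fprintf(stderr, "read failed\n");            // unreachable

    // Correct caller: compare against DB_SUCCESS explicitly.
    if (os_file_read_sketch(true) != DB_SUCCESS)
      fprintf(stderr, "read failed\n");            // reached
    return 0;
  }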
Problem:
=======
fil_iterate() writes the imported tablespace's page0 as-is to the discarded
tablespace; not even the space id was changed. When the tablespace is later
opened, it fails with a space id mismatch error.
Fix:
====
fil_iterate() now copies page0 with the discarded space id to the imported
tablespace.
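A hedged sketch of what stamping page0 involves (FIL_PAGE_SPACE_ID is the 4-byte big-endian space-id field at byte offset 34 of the FIL page header; the function and parameter names are illustrative, and checksum handling is omitted although the real code must recompute it):

  #include <cstdint>

  static const unsigned FIL_PAGE_SPACE_ID = 34;  // space id in the FIL header

  static void write_be32(unsigned char *p, uint32_t v)
  {
    p[0] = (unsigned char) (v >> 24);
    p[1] = (unsigned char) (v >> 16);
    p[2] = (unsigned char) (v >> 8);
    p[3] = (unsigned char) (v);
  }

  // When importing, the copied page 0 must carry the space id of the
  // tablespace that was discarded (the one being imported into), not the id
  // from the source .ibd file; otherwise opening the tablespace fails with
  // a space id mismatch.
  static void stamp_space_id(unsigned char *page0, uint32_t target_space_id)
  {
    write_be32(page0 + FIL_PAGE_SPACE_ID, target_space_id);
    // Page checksums must be recomputed after this in the real code.
  }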