Commit graph

567 commits

Author SHA1 Message Date
Marko Mäkelä
42a4ae54c2 MDEV-21225 Remove ut_align() and use aligned_malloc()
Before commit 90c52e5291 introduced
aligned_malloc(), InnoDB always used a pattern of over-allocating
memory and invoking ut_align() to guarantee the desired alignment.

It is cleaner to invoke aligned_malloc() and aligned_free() directly.

ut_align(): Remove. In assertions, ut_align_down() can be used instead.
2019-12-05 06:42:31 +02:00
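
A minimal sketch of the two allocation patterns being contrasted; std::aligned_alloc stands in for InnoDB's aligned_malloc()/aligned_free(), and the helper names are illustrative only.

    #include <cstdint>
    #include <cstdlib>

    // ~ ut_align(): round a pointer up to the next multiple of `alignment`.
    static void *align_up(void *ptr, std::size_t alignment)
    {
      return reinterpret_cast<void*>(
        (reinterpret_cast<std::uintptr_t>(ptr) + alignment - 1) &
        ~(std::uintptr_t(alignment) - 1));
    }

    // Old pattern: over-allocate, align by hand, and remember both pointers.
    void old_pattern(std::size_t size, std::size_t alignment)
    {
      void *unaligned = std::malloc(size + alignment - 1);
      void *aligned = align_up(unaligned, alignment);
      /* ... use `aligned` ... */
      (void) aligned;
      std::free(unaligned);           // must free the original pointer
    }

    // New pattern: the allocator guarantees the alignment directly.
    void new_pattern(std::size_t size, std::size_t alignment)
    {
      // std::aligned_alloc requires `size` to be a multiple of `alignment`
      void *aligned = std::aligned_alloc(alignment, size);
      /* ... use `aligned` ... */
      std::free(aligned);
    }
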
Marko Mäkelä
6b5cdd4ff7 MDEV-19514: Update stale comments 2019-12-04 15:35:58 +02:00
Marko Mäkelä
95e903261e MDEV-21216 InnoDB does dirty read of TRX_SYS page before recovery
InnoDB startup was discovering undo tablespaces in a dirty way.
It was reading a possibly stale copy of the TRX_SYS page before
processing any redo log records.

srv_start(): Do not call buf_pool_invalidate(). Invoke
trx_rseg_get_n_undo_tablespaces() after the recovery has been initiated.

recv_recovery_from_checkpoint_start(): Assert that the buffer pool is
empty. This used to be guaranteed by the buf_pool_invalidate() call.

trx_rseg_get_n_undo_tablespaces(): Move to the calling compilation unit,
and reimplement in a simpler way.

srv_undo_tablespace_create(): Remove the constant parameter
size=SRV_UNDO_TABLESPACE_SIZE_IN_PAGES.

srv_undo_tablespace_open(): Reimplement in a cleaner way, with
more robust error handling.

srv_all_undo_tablespaces_open(): Split from srv_undo_tablespaces_init().

srv_undo_tablespaces_init(): Read all "undo001", "undo002" tablespace
files directly, without consulting the TRX_SYS page by calling
trx_rseg_get_n_undo_tablespaces().

This is joint work with Thirunarayanan Balathandayuthapani.
2019-12-04 15:34:28 +02:00
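
A hedged sketch of discovering undo tablespaces by probing the numbered file names directly instead of trusting the TRX_SYS page; the directory handling and the upper bound are placeholders, not the actual srv_undo_tablespaces_init() logic.

    #include <cstdio>
    #include <string>

    // Count consecutively numbered undo tablespace files: undo001, undo002, ...
    unsigned count_undo_files(const std::string &dir, unsigned max_undo = 127)
    {
      unsigned n = 0;
      for (unsigned i = 1; i <= max_undo; i++)
      {
        char name[16];
        std::snprintf(name, sizeof name, "undo%03u", i);
        std::FILE *f = std::fopen((dir + "/" + name).c_str(), "rb");
        if (!f)
          break;                      // the numbering has no gaps
        std::fclose(f);
        n++;
      }
      return n;
    }
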
Marko Mäkelä
56f6dab1d0 MDEV-21174: Replace mlog_write_ulint() with mtr_t::write()
mtr_t::write(): Replaces mlog_write_ulint(), mlog_write_ull().
Optimize away writes if the page contents do not change,
except when a dummy write has been explicitly requested.

Because the member function template takes a block descriptor as a
parameter, it is possible to introduce better consistency checks.
Due to this, the code for handling file-based lists, undo logs
and user transactions was refactored to pass around buf_block_t.
2019-12-03 11:05:18 +02:00
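
A hedged sketch of the "skip unchanged writes" idea behind mtr_t::write(); this is only the core check, not the actual member function template.

    #include <cstring>

    // Returns true if the bytes were modified (and therefore must be redo-logged).
    bool write_if_changed(unsigned char *field, const void *value, std::size_t len,
                          bool force_dummy_write= false)
    {
      if (!force_dummy_write && !std::memcmp(field, value, len))
        return false;                 // page contents unchanged: nothing to log
      std::memcpy(field, value, len);
      return true;                    // caller emits the redo log record
    }
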
Marko Mäkelä
cd92c6c83d MDEV-12353 preparation: Do not write MLOG_REC_MIN_MARK
btr_set_min_rec_mark(): Write MLOG_1BYTE instead of
MLOG_REC_MIN_MARK or MLOG_COMP_REC_MIN_MARK.

On ROW_FORMAT=COMPRESSED pages, the minimum record flag is not stored
at all. The flag is computed for the uncompressed page by
page_zip_decompress(). Hence, nothing needs to be logged for
ROW_FORMAT=COMPRESSED tables for this operation.

To facilitate crash-upgrade and hot backup from older versions,
we will retain the code to parse and apply the old log record types
MLOG_REC_MIN_MARK and MLOG_COMP_REC_MIN_MARK.
2019-12-03 11:05:18 +02:00
Marko Mäkelä
8ebd91c1a9 MDEV-12353 preparation: Do not write MLOG_FILE_WRITE_CRYPT_DATA
The MLOG_FILE_WRITE_CRYPT_DATA record was completely redundant.
It can be replaced with a single MLOG_WRITE_STRING record.

To facilitate upgrade from older versions, we will retain
fil_parse_write_crypt_data().

fil_crypt_parse(): Recover fil_space_crypt_t::write_page0().

fil_space_crypt_t::write_page0(): Write everything in a single
MLOG_WRITE_STRING for easy parsing.

fil_space_crypt_t::page0_offset: Remove.
2019-12-03 11:05:18 +02:00
Marko Mäkelä
25e2a556de MDEV-21133 Optimize access to InnoDB page header fields
Introduce memcpy_aligned<N>(), memcmp_aligned<N>(), memset_aligned<N>()
and use them for accessing InnoDB page header fields that are known
to be aligned.

MY_ASSUME_ALIGNED(): Wrapper for the GCC/clang __builtin_assume_aligned().
Nothing similar seems to exist in Microsoft Visual Studio, and the
C++20 std::assume_aligned is not available to us yet.

Explicitly specified alignment guarantees allow compilers to generate
faster code on platforms with strict alignment rules, instead of
emitting calls to potentially unaligned memcpy(), memcmp(), or memset().
2019-11-26 10:15:03 +02:00
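
A hedged sketch of the aligned-access helpers; the real definitions in the InnoDB sources differ in detail, this only illustrates how an alignment guarantee is passed to the compiler.

    #include <cstring>

    #if defined __GNUC__ || defined __clang__
    # define MY_ASSUME_ALIGNED(p, n) __builtin_assume_aligned(p, n)
    #else
    # define MY_ASSUME_ALIGNED(p, n) (p)     /* no MSVC equivalent */
    #endif

    template<std::size_t Align>
    inline void *memcpy_aligned(void *dest, const void *src, std::size_t n)
    {
      static_assert(Align && !(Align & (Align - 1)), "power of 2 expected");
      return std::memcpy(MY_ASSUME_ALIGNED(dest, Align),
                         MY_ASSUME_ALIGNED(src, Align), n);
    }

With the alignment promise, the compiler may inline the copy as a few aligned loads and stores instead of emitting a call to memcpy().
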
Marko Mäkelä
312569e2fd MDEV-21132 Remove buf_page_t::newest_modification
At each mini-transaction commit, the log sequence number of the
mini-transaction must be written to each modified page, so that
it will be available in the FIL_PAGE_LSN field when the page is
being read in crash recovery.

InnoDB was unnecessarily allocating redundant storage for the
field, in buf_page_t::newest_modification. Let us access
FIL_PAGE_LSN directly.

Furthermore, on ALTER TABLE...IMPORT TABLESPACE, let us write
0 to FIL_PAGE_LSN instead of using log_sys.lsn.

buf_flush_init_for_writing(), buf_flush_update_zip_checksum(),
fil_encrypt_buf_for_full_crc32(), fil_encrypt_buf(),
fil_space_encrypt(): Remove the parameter lsn.

buf_page_get_newest_modification(): Merge with the only caller.

buf_tmp_reserve_compression_buf(), buf_tmp_page_encrypt(),
buf_page_encrypt(): Define static in the same compilation unit
with the only caller.

PageConverter::m_current_lsn: Remove. Write 0 to FIL_PAGE_LSN
on ALTER TABLE...IMPORT TABLESPACE.
2019-11-25 09:39:51 +02:00
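
A hedged sketch of stamping the commit LSN straight into the page frame; FIL_PAGE_LSN is the standard offset of the LSN field in an InnoDB page, and the function name is illustrative.

    #include <cstdint>

    constexpr unsigned FIL_PAGE_LSN = 16;   // byte offset of the LSN field in a page

    // At mini-transaction commit, write the LSN into each modified frame
    // (big-endian, as InnoDB stores on-page integers).
    inline void stamp_page_lsn(unsigned char *frame, std::uint64_t lsn)
    {
      for (int i = 7; i >= 0; i--, lsn >>= 8)
        frame[FIL_PAGE_LSN + i] = static_cast<unsigned char>(lsn);
    }
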
Vladislav Vaintroub
a808c18b73 Make .clang-format work with clang-8
Remove keywords that are too new.
2019-11-15 18:09:30 +01:00
Vladislav Vaintroub
5e62b6a5e0 MDEV-16264 Use threadpool for Innodb background work.
Almost all threads are gone:
- The "ticking" threads that sleep a while and then do some work
(srv_monitor_thread, srv_error_monitor_thread, srv_master_thread)
were replaced with timers. Some timers are periodic,
e.g. the "master" timer.

- The btr_defragment_thread is also replaced by a timer, which
reschedules itself when the current defragment "item" needs throttling.

- The buf_resize_thread and buf_dump_thread are replaced with tasks;
ditto with the page cleaner workers.

- The purge worker threads are now tasks as well, and the purge
coordinator is a combination of a task and a timer.

- All AIO is outsourced to tpool; InnoDB just calls thread_pool::submit_io()
and provides the callback.

- srv_slot_t was removed. innodb_debug_sync, which is used in purge,
is currently not working and needs reimplementation.
2019-11-15 18:09:30 +01:00
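
A generic, self-contained illustration of replacing a "ticking" background thread with a periodic timer; the actual code uses the tpool library, whose interface differs.

    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <thread>

    // Fires the callback every `period` until destroyed (conceptually what a
    // periodic timer does for the former srv_master_thread and friends).
    class periodic_timer
    {
      std::function<void()> callback;
      std::chrono::milliseconds period;
      std::mutex mtx;
      std::condition_variable cv;
      bool stopped= false;
      std::thread worker;
    public:
      periodic_timer(std::function<void()> cb, std::chrono::milliseconds p) :
        callback(std::move(cb)), period(p), worker([this]
        {
          std::unique_lock<std::mutex> lk(mtx);
          while (!cv.wait_for(lk, period, [this] { return stopped; }))
            callback();               // timed out without being stopped: tick
        }) {}
      ~periodic_timer()
      {
        { std::lock_guard<std::mutex> lk(mtx); stopped= true; }
        cv.notify_one();
        worker.join();
      }
    };
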
Marko Mäkelä
1fe6e5a1a5 Merge 10.4 into 10.5 2019-11-12 17:21:37 +02:00
Marko Mäkelä
33cb10d4e9 Merge 10.3 into 10.4 2019-11-12 16:55:44 +02:00
Marko Mäkelä
5098d708a0 Merge 10.2 into 10.3 2019-11-12 16:42:58 +02:00
Marko Mäkelä
dc8380b65d MDEV-14602: Cleanup recv_dblwr_t::find_page()
Avoid creating a std::vector, and use a single traversal instead of a double one.
2019-11-12 14:41:24 +02:00
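
A hedged sketch of the single-pass pattern: keep the best (most recent) doublewrite copy while traversing once, instead of first collecting candidates into a std::vector. The types and the LSN-based choice are illustrative, not the exact recv_dblwr_t::find_page() code.

    #include <cstdint>
    #include <vector>

    struct dblwr_copy
    {
      std::uint32_t page_no;
      std::uint64_t lsn;
      const unsigned char *frame;
    };

    const unsigned char *find_page(const std::vector<dblwr_copy> &copies,
                                   std::uint32_t page_no)
    {
      const unsigned char *best= nullptr;
      std::uint64_t best_lsn= 0;
      for (const dblwr_copy &c : copies)        // single traversal
        if (c.page_no == page_no && c.lsn >= best_lsn)
        {
          best= c.frame;
          best_lsn= c.lsn;
        }
      return best;                              // the most recent copy, if any
    }
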
Marko Mäkelä
b5ef7ffa59 Use uint16_t for FIL_PAGE_TYPE
Since commit 5d596064d6
fil_page_type_is_index() expects uint16_t, not ulint.
2019-11-08 13:43:42 +02:00
Marko Mäkelä
51fb39bf17 MDEV-19586: Restore pointer indirection for recv_t::data
Something was wrong with the removal of pointer indirection,
because on 32-bit Windows we got crash recovery failures.
Curiously, WITH_ASAN of a 32-bit debug build did not notice anything.

This reverts a part of commit b7fc2c899e.

We have a 2MiB recv_sys.buf for the initial buffering. The minimum size
of log_sys.buf would be 16MiB, and that buffer should be practically
unused during recovery. If the buffer pool size is measured in gigabytes,
it would indeed make sense to use the buffer pool for the recovered log
records, perhaps after we have run out of log_sys.buf.

FIXME: allow the entire buf_block_t::frame to be used for buffered
log records, and remove MEM_HEAP_FOR_RECV_SYS. For example, use
buf_page_t::list or buf_page_t::LRU for keeping track of memory that
was allocated for recovery? Most members of buf_block_t
(except buf_block_t::frame) are unused when
block->page.state == BUF_BLOCK_MEMORY.
2019-11-04 16:13:43 +02:00
Marko Mäkelä
541b00a3ba MDEV-19586: Clean up recv_t a little
recv_t, recv_t::data_t: Define constructors that copy the log records

Ideally, we should remove recv_sys.buf, RECV_DATA_BLOCK_SIZE,
and recv_sys_justify_left_parsing_buf(), and let the
recv_sys.pages point directly to a buffer of parsed redo log
records.

The RECV_PARSING_BUF_SIZE (size of recv_sys.buf) is only 2MiB,
while the minimum innodb_log_buffer_size (size of log_sys.buf) is 16MiB,
and log_sys.buf is unused during redo log apply!
2019-11-04 09:30:53 +02:00
Marko Mäkelä
64a02e4fa2 MDEV-19586: Add const qualifiers
Except for fil_name_process(), which invokes os_normalize_path(),
the redo log record parser will not modify the redo log records.
Add const qualifiers accordingly.
2019-11-04 09:25:26 +02:00
Sergei Golubchik
3ef01d1118 compilation failure, gcc 8.3.0
storage/innobase/log/log0recv.cc|1760 col 35 error| 'void* memcpy(void*, const void*, size_t)' writing to an object of type 'struct recv_t' with no trivial copy-assignment [-Werror=class-memaccess]
2019-11-03 12:11:23 +01:00
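
A hedged illustration of the -Wclass-memaccess diagnostic: a raw memcpy() into an object whose type has a non-trivial copy assignment triggers the warning, while constructing the object in place does not. recv_t itself is more involved than this stub.

    #include <cstring>
    #include <new>

    struct Rec
    {
      Rec()= default;
      Rec(const Rec &)= default;
      Rec &operator=(const Rec &);   // user-provided: Rec is not trivially copyable
    };

    void copy_rec(void *dst, const Rec &src)
    {
      // std::memcpy(dst, &src, sizeof src);  // -Werror=class-memaccess
      new (dst) Rec(src);                     // placement-new copy is fine
    }
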
Marko Mäkelä
e5fed3b94d MDEV-19586: Avoid std::map::emplace()
GCC 4.7 only knows about std::map::insert(), not emplace().

Also, reformat the function in the common style.
2019-11-01 16:56:44 +02:00
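
A hedged sketch of the portability workaround; the key and value types are placeholders.

    #include <map>
    #include <utility>

    typedef std::map<unsigned long, int> page_map;

    void add_entry(page_map &m, unsigned long page_no, int value)
    {
      // m.emplace(page_no, value);              // needs GCC >= 4.8
      m.insert(std::make_pair(page_no, value));  // also works on GCC 4.7
    }
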
Marko Mäkelä
b7fc2c899e Refactor recv_sys_t::recs_t into page_recv_t
page_recv_t: Replaces recv_sys_t::recs_t.
page_recv_t::state is not private, even though some accessors exist.

page_recv_t::log: A singly-linked list of log_rec_t* with STL decoration
and the custom operations trim() and append(). The list members are private.

recv_t::data_t: Replaces recv_data_t.

recv_t::data: Remove the pointer indirection for the first log chunk,
and copy the first chunk directly after the record. Adjust the
definition of RECV_DATA_BLOCK_SIZE accordingly.
2019-11-01 15:06:31 +02:00
Marko Mäkelä
2aa1f77ef1 MDEV-19586: Rename recv_sys.empty() to recv_sys.clear()
In the collections of Standard Template Library,
empty() is a predicate and clear() empties a collection.
Let us rename recv_sys.empty() to recv_sys.clear() to avoid confusion.
2019-11-01 14:12:55 +02:00
Marko Mäkelä
0ccfdc8eff Remove InnoDB wrappers of <string.h> functions
ut_strcmp(), ut_strcpy(), ut_strlen(), ut_memcpy(), ut_memcmp(),
ut_memmove(): Remove. Invoke the standard library functions directly.
2019-10-30 07:31:39 +02:00
Eugene Kosov
7440e61a64 MDEV-18115: Remove the only async write (of redo log)
TODO: do not use fil_* functions for redo log files.

log_t::checkpoint_lock: remove this lock which was used to wait for
async I/O completion.

checkpoint_lock_key
checkpoint_lock: remove now unneeded globals

log_write_checkpoint_info(): remove sync argument because all checkpoint
writes are synchronous now

log_group_checkpoint(): merge with the only caller

log_complete_checkpoint(): merge with the only caller

log_t::complete_checkpoint(): remove by merging with the only caller.
2019-10-29 22:03:02 +03:00
Marko Mäkelä
2b710090aa Merge 10.4 into 10.5 2019-10-29 16:31:40 +02:00
Marko Mäkelä
d4e1fa39f0 Use C++11 "range for" in crash recovery code 2019-10-29 11:58:11 +02:00
Marko Mäkelä
3043f38436 Merge 10.4 into 10.5 2019-10-28 17:10:34 +02:00
Marko Mäkelä
d67ea8151f Rename LOG_HEADER_FORMAT_ to log_t::FORMAT_ 2019-10-28 16:16:57 +02:00
Marko Mäkelä
b42294bc64 MDEV-19514 Defer change buffer merge until pages are requested
We will remove the InnoDB background operation of merging buffered
changes to secondary index leaf pages. Changes will only be merged as a
result of an operation that accesses a secondary index leaf page,
such as a SQL statement that performs a lookup via that index,
or is modifying the index. Also ROLLBACK and some background operations,
such as purging the history of committed transactions, or computing
index cardinality statistics, can cause change buffer merge.
Encryption key rotation will not perform change buffer merge.

The motivation of this change is to simplify the I/O logic and to
allow crash recovery to happen in the background (MDEV-14481).
We also hope that this will reduce the number of "mystery" crashes
due to corrupted data. Because change buffer merge will typically
take place as a result of executing SQL statements, there should be
a clearer connection between the crash and the SQL statements that
were executed when the server crashed.

In many cases, a slight performance improvement was observed.

This is joint work with Thirunarayanan Balathandayuthapani
and was tested by Axel Schwenke and Matthias Leich.

The InnoDB monitor counter innodb_ibuf_merge_usec will be removed.

On slow shutdown (innodb_fast_shutdown=0), we will continue to
merge all buffered changes (and purge all undo log history).

Two InnoDB configuration parameters will be changed as follows:

innodb_disable_background_merge: Removed.
This parameter existed only in debug builds.
All change buffer merges will use synchronous reads.

innodb_force_recovery will be changed as follows:
* innodb_force_recovery=4 will be the same as innodb_force_recovery=3
(the change buffer merge cannot be disabled; it can only happen as
a result of an operation that accesses a secondary index leaf page).
The option used to be capable of corrupting secondary index leaf pages.
Now that capability is removed, and innodb_force_recovery=4 becomes 'safe'.
* innodb_force_recovery=5 (which essentially hard-wires
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED)
becomes safe to use. Bogus data can be returned to SQL, but
persistent InnoDB data files will not be corrupted further.
* innodb_force_recovery=6 (ignore the redo log files)
will be the only option that can potentially cause
persistent corruption of InnoDB data files.

Code changes:

buf_page_t::ibuf_exist: New flag, to indicate whether buffered
changes exist for a buffer pool page. Pages with pending changes
can be returned by buf_page_get_gen(). Previously, the changes
were always merged inside buf_page_get_gen() if needed.

ibuf_page_exists(const buf_page_t&): Check if buffered changes
exist for an X-latched or read-fixed page.

buf_page_get_gen(): Add the parameter allow_ibuf_merge=false.
All callers that know that they may be accessing a secondary index
leaf page must pass this parameter as allow_ibuf_merge=true,
unless it does not matter for that caller whether all buffered
changes have been applied. Assert that whenever allow_ibuf_merge
holds, the page actually is a leaf page. Attempt change buffer
merge only to secondary B-tree index leaf pages.

btr_block_get(): Add parameter 'bool merge'.
All callers of btr_block_get() should know whether the page could be
a secondary index leaf page. If it is not, we should avoid consulting
the change buffer bitmap to even consider a merge. This is the main
interface to requesting index pages from the buffer pool.

ibuf_merge_or_delete_for_page(), recv_recover_page(): Replace
buf_page_get_known_nowait() with much simpler logic, because
it is now guaranteed that the block is x-latched or read-fixed.

mlog_init_t::mark_ibuf_exist(): Renamed from mlog_init_t::ibuf_merge().
On crash recovery, we will no longer merge any buffered changes
for the pages that we read into the buffer pool during the last batch
of applying log records.

buf_page_get_gen_known_nowait(), BUF_MAKE_YOUNG, BUF_KEEP_OLD: Remove.

btr_search_guess_on_hash(): Merge buf_page_get_gen_known_nowait()
to its only remaining caller.

buf_page_make_young_if_needed(): Define as an inline function.
Add the parameter buf_pool.

buf_page_peek_if_young(), buf_page_peek_if_too_old(): Add the
parameter buf_pool.

fil_space_validate_for_mtr_commit(): Remove a bogus comment
about background merge of the change buffer.

btr_cur_open_at_rnd_pos_func(), btr_cur_search_to_nth_level_func(),
btr_cur_open_at_index_side_func(): Use narrower data types and scopes.

ibuf_read_merge_pages(): Replaces buf_read_ibuf_merge_pages().
Merge the change buffer by invoking buf_page_get_gen().
2019-10-11 17:28:15 +03:00
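
A hedged sketch of the deferred-merge decision only; apart from allow_ibuf_merge and ibuf_exist, the names are placeholders for the real buf_page_get_gen()/ibuf interfaces.

    struct page_stub
    {
      bool is_leaf;
      bool ibuf_exist;   // buffered changes pending for this page?
    };

    // Stand-in for applying the buffered changes to the page.
    static void merge_buffered_changes(page_stub &page) { page.ibuf_exist= false; }

    // Called when a page is handed out by the buffer pool.
    void on_page_access(page_stub &page, bool allow_ibuf_merge)
    {
      if (allow_ibuf_merge && page.ibuf_exist)
      {
        // Only secondary index leaf pages can carry buffered changes;
        // the real code asserts that page.is_leaf holds here.
        merge_buffered_changes(page);
      }
    }
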
Marko Mäkelä
d04f2de80a Merge 10.4 into 10.5 2019-10-11 08:41:36 +03:00
Marko Mäkelä
09afd3da1a Merge 10.3 into 10.4 2019-10-10 21:30:40 +03:00
Marko Mäkelä
7f84e3ad75 Merge 10.2 into 10.3 2019-10-10 20:38:44 +03:00
Marko Mäkelä
6fde0073bf Rename log_make_checkpoint_at() to log_make_checkpoint()
The function was always called with lsn=LSN_MAX.
Remove that redundant parameter.

Spotted by Thirunarayanan Balathandayuthapani.
2019-10-09 18:47:14 +03:00
Marko Mäkelä
9b5cdeeb0f Merge 10.3 into 10.4 2019-09-27 16:26:53 +03:00
Marko Mäkelä
2911a9a693 Merge 10.2 into 10.3 2019-09-27 15:56:15 +03:00
Thirunarayanan Balathandayuthapani
c76873f23d MDEV-20688 Recovery crashes after unnecessarily reading a corrupted page
The test encryption.innodb-redo-badkey was accidentally disabled
until commit 23657a2101 enabled
it recently. Once it was enabled, it started failing randomly.

recv_recover_corrupt_page(): Do not assume that any redo log exists
for the page. A page may be unnecessarily read by read-ahead.
When noting the corruption, reset recv_addr->state to RECV_PROCESSED,
so that even if the same page is re-read again, we will only
decrement recv_sys->n_addrs once.
2019-09-27 17:46:10 +05:30
Marko Mäkelä
67ddb6507d Merge 10.4 into 10.5 2019-08-16 14:35:32 +03:00
Marko Mäkelä
1d15a28e52 Merge 10.3 into 10.4 2019-08-14 18:06:51 +03:00
Alexander Barkov
c1599821a5 Merge remote-tracking branch 'origin/10.4' into 10.5 2019-08-13 23:49:10 +04:00
Marko Mäkelä
65d48b4a7b Merge 10.2 to 10.3 2019-08-13 19:28:51 +03:00
Marko Mäkelä
624dd71b94 Merge 10.4 into 10.5 2019-08-13 18:57:00 +03:00
Vlad Lesin
d39d5dd2bc MDEV-20060: Failing assertion: srv_log_file_size <= 512ULL << 30 while preparing backup
The general reason why the InnoDB redo log file size is limited to 512G is that
log_block_convert_lsn_to_no() returns a value limited to 1G. But there is no
need to have unique log block numbers in a log group. The fix removes the 512G
limit and limits the log group size to
(uint32_t maximum value) * (minimum page size), which, in turn, can be
removed if fil_io() is no longer used for InnoDB redo log I/O.
2019-08-07 17:26:44 +03:00
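
A hedged arithmetic sketch of the two caps mentioned above; the constants are the usual InnoDB values (512-byte log blocks, 4096-byte minimum page size) and are stated here only for illustration.

    #include <cstdint>

    constexpr std::uint64_t OS_FILE_LOG_BLOCK_SIZE= 512;   // redo log block size
    constexpr std::uint64_t MAX_LOG_BLOCK_NO= 1ULL << 30;  // log_block_convert_lsn_to_no() range
    constexpr std::uint64_t UNIV_PAGE_SIZE_MIN= 4096;      // minimum page size

    // Old cap: 2^30 block numbers * 512 bytes = 512 GiB
    constexpr std::uint64_t old_limit= MAX_LOG_BLOCK_NO * OS_FILE_LOG_BLOCK_SIZE;
    static_assert(old_limit == 512ULL << 30, "512 GiB");

    // New cap: UINT32_MAX * minimum page size (a fil_io() addressing limit)
    constexpr std::uint64_t new_limit= std::uint64_t{UINT32_MAX} * UNIV_PAGE_SIZE_MIN;
    static_assert(new_limit > old_limit, "the new cap is much larger");
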
Vladislav Vaintroub
f61a980686 Update WolfSSL, remove older workarounds. 2019-07-28 13:45:15 +02:00
Marko Mäkelä
e9c1701e11 Merge 10.3 into 10.4 2019-07-25 18:42:06 +03:00
Marko Mäkelä
fdef9f9b89 Merge 10.2 into 10.3 2019-07-25 15:31:11 +03:00
Marko Mäkelä
b6ac67389d Merge 10.1 into 10.2 2019-07-25 12:14:27 +03:00
Marko Mäkelä
0c7c61019d Remove the wrappers ut_time(), ut_difftime(), ib_time_t 2019-07-24 21:59:26 +03:00
Marko Mäkelä
70b226d966 Merge 10.2 into 10.3 2019-07-22 17:37:04 +03:00
Eugene Kosov
53a3594b90 MDEV-19471 Add ASAN-poisoned redzones for mem_heap_t
Store REDZONE_SIZE poisoned bytes before every allocated chunk of memory.
2019-07-19 13:28:03 +03:00
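
A hedged sketch of the redzone idea using the public AddressSanitizer interface; the real mem_heap_t chunk layout and REDZONE_SIZE value differ.

    #ifdef __SANITIZE_ADDRESS__
    # include <sanitizer/asan_interface.h>
    #else
    # define ASAN_POISON_MEMORY_REGION(addr, size) ((void) 0)
    #endif
    #include <cstdlib>

    constexpr std::size_t REDZONE_SIZE= 8;     // illustrative value

    // Allocate a chunk preceded by a poisoned redzone: any access that
    // underflows into the redzone is reported by AddressSanitizer.
    void *alloc_with_redzone(std::size_t size)
    {
      unsigned char *raw=
        static_cast<unsigned char*>(std::malloc(REDZONE_SIZE + size));
      if (!raw)
        return nullptr;
      ASAN_POISON_MEMORY_REGION(raw, REDZONE_SIZE);
      return raw + REDZONE_SIZE;      // the caller only sees the usable bytes
    }
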
Marko Mäkelä
7a3d34d645 Merge 10.3 into 10.4 2019-07-02 21:44:58 +03:00