mariadb/storage/innobase/include/btr0bulk.h
Marko Mäkelä 0b47c126e3 MDEV-13542: Crashing on corrupted page is unhelpful
The approach to handling corruption that was chosen by Oracle in
commit 177d8b0c12
is not really useful. Not only did it fail to prevent InnoDB
from crashing, but it also makes things worse by blocking attempts to
rescue data from, or to rebuild, a partially readable table.

We will try to prevent crashes in a different way: by propagating
errors up the call stack. We will never mark the clustered index
persistently corrupted, so that data recovery may be attempted by
reading from the table, or by rebuilding the table.

This should also fix MDEV-13680 (crash on btr_page_alloc() failure);
it was extensively tested with innodb_file_per_table=0 and a
non-autoextend system tablespace.

We should now avoid crashes in many cases, such as when a page
cannot be read or allocated, or an inconsistency is detected when
attempting to update multiple pages. We will not crash on a double-free,
such as during the recovery of DDL in the system tablespace in case
something was corrupted.

Crashes on corrupted data are still possible. The fault injection mechanism
that is introduced in the subsequent commit may help catch more of them.

buf_page_import_corrupt_failure: Remove the fault injection, and instead
corrupt some pages using Perl code in the tests.

btr_cur_pessimistic_insert(): Always reserve extents (except for the
change buffer), in order to prevent a subsequent allocation failure.

btr_pcur_open_at_rnd_pos(): Merged into its only caller, ibuf_merge_pages().

btr_assert_not_corrupted(), btr_corruption_report(): Remove.
Similar checks are already part of btr_block_get().

FSEG_MAGIC_N_BYTES: Replaces FSEG_MAGIC_N_VALUE.

dict_hdr_get(), trx_rsegf_get_new(), trx_undo_page_get(),
trx_undo_page_get_s_latched(): Replaced with error-checking calls.

trx_rseg_t::get(mtr_t*): Replaces trx_rsegf_get().

trx_rseg_header_create(): Let the caller update the TRX_SYS page if needed.

trx_sys_create_sys_pages(): Merged with trx_sysf_create().

dict_check_tablespaces_and_store_max_id(): Do not access
DICT_HDR_MAX_SPACE_ID, because it was already recovered in dict_boot().
Merge dict_check_sys_tables() with this function.

dir_pathname(): Replaces os_file_make_new_pathname().

row_undo_ins_remove_sec(): Do not modify the undo page by adding
a terminating NUL byte to the record.

btr_decryption_failed(): Report decryption failures.

dict_set_corrupted_by_space(), dict_set_encrypted_by_space(),
dict_set_corrupted_index_cache_only(): Remove.

dict_set_corrupted(): Remove the constant parameter dict_locked=false.
Never flag the clustered index corrupted in SYS_INDEXES, because
that would deny further access to the table. It might be possible to
repair the table by executing ALTER TABLE or OPTIMIZE TABLE, in case
no B-tree leaf page is corrupted.

dict_table_skip_corrupt_index(), dict_table_next_uncorrupted_index(),
row_purge_skip_uncommitted_virtual_index(): Remove, and refactor
the callers to read dict_index_t::type only once.

dict_table_is_corrupted(): Remove.

dict_index_t::is_btree(): Determine if the index is a valid B-tree.

BUF_GET_NO_LATCH, BUF_EVICT_IF_IN_POOL: Remove.

UNIV_BTR_DEBUG: Remove. Any inconsistency will no longer trigger an
assertion failure; instead, an error code will be returned.

buf_corrupt_page_release(): Replaced with a direct call to
buf_pool.corrupted_evict().

fil_invalid_page_access_msg(): Never crash on an invalid read;
let the caller of buf_page_get_gen() decide.

btr_pcur_t::restore_position(): Propagate failure status to the caller
by returning CORRUPTED.
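
For illustration, a caller might now handle this as in the following hedged
sketch; the cursor and mtr variable names and the exact restore_position()
parameters are assumptions of this example, not part of the commit:

  if (pcur->restore_position(BTR_MODIFY_LEAF, &mtr) == btr_pcur_t::CORRUPTED)
    return DB_CORRUPTION;  /* propagate the failure instead of asserting */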

opt_search_plan_for_table(): Simplify the code.

row_purge_del_mark(), row_purge_upd_exist_or_extern_func(),
row_undo_ins_remove_sec_rec(), row_undo_mod_upd_del_sec(),
row_undo_mod_del_mark_sec(): Avoid mem_heap_create()/mem_heap_free()
when no secondary indexes exist.

row_undo_mod_upd_exist_sec(): Simplify the code.

row_upd_clust_step(), dict_load_table_one(): Return DB_TABLE_CORRUPT
if the clustered index (and therefore the table) is corrupted, similar
to what we do in row_insert_for_mysql().

fut_get_ptr(): Replace with buf_page_get_gen() calls.

buf_page_get_gen(): Return nullptr and *err=DB_CORRUPTION
if the page is marked as freed. For other modes than
BUF_GET_POSSIBLY_FREED or BUF_PEEK_IF_IN_POOL this will
trigger a debug assertion failure. For BUF_GET_POSSIBLY_FREED,
we will return nullptr for freed pages, so that the callers
can be simplified. The purge of transaction history will be
a new user of BUF_GET_POSSIBLY_FREED, to avoid crashes on
corrupted data.
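
As an illustration, a hedged sketch of the caller pattern this enables; the
exact buf_page_get_gen() parameter list and the surrounding identifiers are
assumptions of this example:

  dberr_t err = DB_SUCCESS;
  buf_block_t *block = buf_page_get_gen(page_id, zip_size, RW_S_LATCH,
                                         nullptr, BUF_GET_POSSIBLY_FREED,
                                         &mtr, &err);
  if (!block)
    return err;  /* e.g. DB_CORRUPTION for a freed or corrupted page */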

buf_page_get_low(): Never crash on a corrupted page, but simply
return nullptr.

fseg_page_is_allocated(): Replaces fseg_page_is_free().

fts_drop_common_tables(): Return an error if the transaction
was rolled back.

fil_space_t::set_corrupted(): Report a tablespace as corrupted if
it was not reported already.

fil_space_t::io(): Invoke fil_space_t::set_corrupted() to report
out-of-bounds page access or other errors.

Clean up mtr_t::page_lock()

buf_page_get_low(): Validate the page identifier (to check for
recently read corrupted pages) after acquiring the page latch.

buf_page_t::read_complete(): Flag uninitialized (all-zero) pages
with DB_FAIL. Return DB_PAGE_CORRUPTED on page number mismatch.

mtr_t::defer_drop_ahi(): Renamed from mtr_defer_drop_ahi().

recv_sys_t::free_corrupted_page(): Only set_corrupt_fs()
if any log records exist for the page. We do not mind if read-ahead
produces corrupted (or all-zero) pages that were not actually needed
during recovery.

recv_recover_page(): Return whether the operation succeeded.

recv_sys_t::recover_low(): Simplify the logic. Check for recovery error.

Thanks to Matthias Leich for testing this extensively and to the
authors of https://rr-project.org for making it easy to diagnose
and fix any failures that were found during the testing.
2022-06-06 14:03:22 +03:00


/*****************************************************************************
Copyright (c) 2014, 2015, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2019, 2022, MariaDB Corporation.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
*****************************************************************************/
/********************************************************************//**
@file include/btr0bulk.h
The B-tree bulk load
Created 03/11/2014 Shaohua Wang
*************************************************************************/
#ifndef btr0bulk_h
#define btr0bulk_h
#include "dict0dict.h"
#include "rem0types.h"
#include "page0cur.h"
#include <vector>
/** InnoDB B-tree index fill factor for bulk load. */
extern uint innobase_fill_factor;
/*
The proper function call sequence of PageBulk is as below:
-- PageBulk::init
-- PageBulk::insert
-- PageBulk::finish
-- PageBulk::compress(COMPRESSED table only)
-- PageBulk::pageSplit(COMPRESSED table only)
-- PageBulk::commit
*/
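/*
For illustration only, a minimal sketch of the sequence above for an
uncompressed table; "index", "trx_id", "page_no", "level", "rec" and
"offsets" are assumed to be provided by the caller, and error handling is
abbreviated (a compressed table would additionally go through compress()
and, on failure, a page split driven by BtrBulk):

  PageBulk page_bulk(index, trx_id, page_no, level);
  dberr_t err = page_bulk.init();    // allocate and latch the page, start the mtr
  if (err == DB_SUCCESS) {
    page_bulk.insert(rec, offsets);  // repeated for each record, in key order
    page_bulk.finish();              // build the page directory and header
    page_bulk.commit(true);          // commit the mini-transaction
  }
*/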
class PageBulk
{
public:
/** Constructor
@param[in] index B-tree index
@param[in] trx_id transaction id
@param[in] page_no page number
@param[in] level page level */
PageBulk(
dict_index_t* index,
trx_id_t trx_id,
uint32_t page_no,
ulint level)
:
m_heap(NULL),
m_index(index),
m_mtr(),
m_trx_id(trx_id),
m_block(NULL),
m_page(NULL),
m_page_zip(NULL),
m_cur_rec(NULL),
m_page_no(page_no),
m_level(level),
m_is_comp(dict_table_is_comp(index->table)),
m_heap_top(NULL),
m_rec_no(0),
m_free_space(0),
m_reserved_space(0),
#ifdef UNIV_DEBUG
m_total_data(0),
#endif /* UNIV_DEBUG */
m_modify_clock(0),
m_err(DB_SUCCESS)
{
ut_ad(!dict_index_is_spatial(m_index));
ut_ad(!m_index->table->is_temporary());
}
/** Destructor */
~PageBulk()
{
mem_heap_free(m_heap);
}
/** Initialize members, allocate the page if needed, and start the mini-transaction.
Note: must be called once, and only once, right after the constructor.
@return error code */
dberr_t init();
/** Insert a record in the page.
@param[in] rec record
@param[in] offsets record offsets */
inline void insert(const rec_t* rec, rec_offs* offsets);
private:
/** Page format */
enum format { REDUNDANT, DYNAMIC, COMPRESSED };
/** Mark end of insertion to the page. Scan all records to set page
dirs, and set page header members.
@tparam format the page format */
template<format> inline void finishPage();
/** Insert a record in the page.
@tparam format the page format
@param[in,out] rec record
@param[in] offsets record offsets */
template<format> inline void insertPage(rec_t* rec, rec_offs* offsets);
public:
/** Mark end of insertion to the page. Scan all records to set page
dirs, and set page header members. */
inline void finish();
/** @return whether finish() actually needs to do something */
inline bool needs_finish() const;
/** Commit mtr for a page
@param[in] success Flag whether all inserts succeed. */
void commit(bool success);
/** Compress the page if it belongs to a compressed table.
@return true if compression succeeded or was not needed
@return false if compression failed. */
bool compress();
/** Check whether the record needs to be stored externally.
@return true if the record must be stored externally
@return false otherwise */
bool needExt(const dtuple_t* tuple, ulint rec_size);
/** Store external record
@param[in] big_rec external record
@param[in] offsets record offsets
@return error code */
dberr_t storeExt(const big_rec_t* big_rec, rec_offs* offsets);
/** Get node pointer
@return node pointer */
dtuple_t* getNodePtr();
/** Get split rec in the page. We split a page in half when compression
fails, and the split rec should be copied to the new page.
@return split rec */
rec_t* getSplitRec();
/** Copy all records after split rec including itself.
@param[in] split_rec split rec */
void copyIn(rec_t* split_rec);
/** Remove all records after split rec including itself.
@param[in] split_rec split rec */
void copyOut(rec_t* split_rec);
/** Set next page
@param[in] next_page_no next page no */
inline void setNext(ulint next_page_no);
/** Set previous page
@param[in] prev_page_no previous page no */
inline void setPrev(ulint prev_page_no);
/** Release block by committing mtr */
inline void release();
/** Start mtr and latch block */
inline void latch();
/** Check if required space is available in the page for the rec
to be inserted. We check fill factor & padding here.
@param[in] rec_size required record size
@return true if space is available */
inline bool isSpaceAvailable(ulint rec_size);
/** Get page no */
uint32_t getPageNo() const { return m_page_no; }
/** Get page level */
ulint getLevel()
{
return(m_level);
}
/** Get record no */
ulint getRecNo()
{
return(m_rec_no);
}
/** Get page */
page_t* getPage()
{
return(m_page);
}
/** Get page zip */
page_zip_des_t* getPageZip()
{
return(m_page_zip);
}
dberr_t getError()
{
return(m_err);
}
void set_modified() { m_mtr.set_modified(*m_block); }
/* Memory heap for internal allocation */
mem_heap_t* m_heap;
private:
/** The index B-tree */
dict_index_t* m_index;
/** The mini-transaction */
mtr_t m_mtr;
/** The transaction id */
trx_id_t m_trx_id;
/** The buffer block */
buf_block_t* m_block;
/** The page */
page_t* m_page;
/** The page zip descriptor */
page_zip_des_t* m_page_zip;
/** The current rec, just before the next insert rec */
rec_t* m_cur_rec;
/** The page no */
uint32_t m_page_no;
/** The page level in B-tree */
ulint m_level;
/** Flag: is page in compact format */
const bool m_is_comp;
/** The heap top in page for next insert */
byte* m_heap_top;
/** User record no */
ulint m_rec_no;
/** The free space left in the page */
ulint m_free_space;
/** The reserved space for fill factor */
ulint m_reserved_space;
/** The padding space for compressed page */
ulint m_padding_space;
#ifdef UNIV_DEBUG
/** Total data in the page */
ulint m_total_data;
#endif /* UNIV_DEBUG */
/** The modify clock value of the buffer block
when the block is re-pinned */
ib_uint64_t m_modify_clock;
/** Operation result DB_SUCCESS or error code */
dberr_t m_err;
};
typedef std::vector<PageBulk*, ut_allocator<PageBulk*> >
page_bulk_vector;
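/*
Typical usage of BtrBulk, sketched for illustration only; "index", "trx",
"more_tuples_remain" and "next_tuple" are assumptions of this example, and
error handling is abbreviated:

  BtrBulk bulk(index, trx);
  dberr_t err = DB_SUCCESS;
  while (err == DB_SUCCESS && more_tuples_remain())
    err = bulk.insert(next_tuple());
  err = bulk.finish(err);  // commits the last page of each level and, on
                           // success, copies the top level to the root page
*/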
class BtrBulk
{
public:
/** Constructor
@param[in] index B-tree index
@param[in] trx transaction */
BtrBulk(
dict_index_t* index,
const trx_t* trx)
:
m_index(index),
m_trx(trx)
{
ut_ad(!dict_index_is_spatial(index));
}
/** Insert a tuple
@param[in] tuple tuple to insert.
@return error code */
dberr_t insert(dtuple_t* tuple)
{
return(insert(tuple, 0));
}
/** Finish the B-tree bulk load. We commit the last page in each level
and copy the last page in the top level to the root page of the index
if no error occurs.
@param[in] err error status of the bulk load so far
@return error code */
dberr_t finish(dberr_t err);
/** Release all latches */
void release();
/** Re-latch all latches */
void latch();
table_name_t table_name() { return m_index->table->name; }
private:
/** Insert a tuple to a page in a level
@param[in] tuple tuple to insert
@param[in] level B-tree level
@return error code */
dberr_t insert(dtuple_t* tuple, ulint level);
/** Split a page
@param[in] page_bulk page to split
@param[in] next_page_bulk next page
@return error code */
dberr_t pageSplit(PageBulk* page_bulk,
PageBulk* next_page_bulk);
/** Commit (finish) a page. We set the next/prev page numbers, compress the
page of a compressed table (splitting it if compression fails), insert a
node pointer into the parent page if needed, and commit the mini-transaction.
@param[in] page_bulk page to commit
@param[in] next_page_bulk next page
@param[in] insert_father whether a node pointer needs to be inserted
@return error code */
dberr_t pageCommit(PageBulk* page_bulk,
PageBulk* next_page_bulk,
bool insert_father);
/** Abort a page when an error occurs.
@param[in] page_bulk page bulk object
Note: pageAbort() should be called only for a PageBulk object that is no
longer in m_page_bulks after pageCommit(); PageBulk objects that remain in
m_page_bulks are committed or aborted in finish(). */
void pageAbort(PageBulk* page_bulk)
{
page_bulk->commit(false);
}
/** Log free check */
inline void logFreeCheck();
private:
/** B-tree index */
dict_index_t*const m_index;
/** Transaction */
const trx_t*const m_trx;
/** Root page level */
ulint m_root_level;
/** Page cursor vector for all levels */
page_bulk_vector m_page_bulks;
};
#endif