mariadb/storage/innobase/include/page0page.ic
Marko Mäkelä a4948dafcd MDEV-11369 Instant ADD COLUMN for InnoDB
For InnoDB tables, adding, dropping and reordering columns have
required a rebuild of the table and all its indexes. Since MySQL 5.6
(and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing
concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT
and ROW_FORMAT=DYNAMIC record formats so that columns can be appended
instantaneously, with only minor changes to the table structure. The counter
innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS
is incremented whenever a table rebuild operation is converted into
an instant ADD COLUMN operation.

ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.

Some usability limitations will be addressed in subsequent work:

MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY
and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT,
and a new field PAGE_INSTANT will contain the original number of fields
in the clustered index ('core' fields).
If instant ADD COLUMN has not been used, the table becomes empty,
or the very first instant ADD COLUMN operation is rolled back,
the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset
to 0 and FIL_PAGE_INDEX, respectively.
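
As a minimal sketch of what a reader of the root page sees (using the
page_get_instant() accessor defined later in this file; the helper name
root_core_fields below is only illustrative):

    /* Illustration: read the number of 'core' clustered index fields
    from a root page, or 0 if instant ADD COLUMN is not in effect. */
    inline unsigned root_core_fields(const page_t* root)
    {
        return fil_page_get_type(root) == FIL_PAGE_TYPE_INSTANT
            ? page_get_instant(root)
            : 0;
    }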

(2) A special 'default row' record is inserted into the leftmost leaf,
between the page infimum and the first user record. This record is
distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the
same format as records that contain values for the instantly added
columns. This 'default row' always has the same number of fields as
the clustered index according to the table definition. The values of
'core' fields are to be ignored. For other fields, the 'default row'
will contain the default values as they were during the ALTER TABLE
statement. (If the column default values are changed later, those
values will only be stored in the .frm file. The 'default row' will
contain the original evaluated values, which must be the same for
every row.) The 'default row' must be completely hidden from
higher-level access routines. Assertions have been added to ensure
that no 'default row' is ever present in the adaptive hash index
or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must
be between the number of 'core' fields (dict_index_t::n_core_fields,
introduced in this work) and dict_index_t::n_fields. If the number
of fields is less than dict_index_t::n_fields, the missing fields
are replaced with the corresponding column values of the 'default row'.
Note: The number of fields in the record may shrink if some of the
last instantly added columns are updated to the value that is
in the 'default row'. The function btr_cur_trim() implements this
'compression' on update and rollback; dtuple::trim() implements it
on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new
status value REC_STATUS_COLUMNS_ADDED will indicate the presence of
a new record header that will encode n_fields-n_core_fields-1 in
1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header
always explicitly encodes the number of fields.)
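
(The exact header layout is defined in the rem0rec code; purely as an
illustration of how a small count can occupy 1 or 2 bytes, a hypothetical
encoding of n = n_fields - n_core_fields - 1 could use the high bit of the
first byte as a continuation flag:)

    /* Hypothetical sketch only; not necessarily the on-disk format. */
    inline unsigned encode_n_add_fields(byte* b, unsigned n)
    {
        if (n < 0x80) {
            b[0] = byte(n);                 /* 1 byte */
            return 1;
        }
        b[0] = byte(0x80 | (n & 0x7f));     /* 2 bytes */
        b[1] = byte(n >> 7);
        return 2;
    }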

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for
covering the insert of the 'default row' record when instant ADD COLUMN
is used for the first time. Subsequent instant ADD COLUMN can use
TRX_UNDO_UPD_EXIST_REC.

This is joint work with Vin Chen (陈福荣) from Tencent. The design
that was discussed in April 2017 would not have allowed import or
export of data files, because instead of the 'default row' it would
have introduced a data dictionary table. The test
rpl.rpl_alter_instant is exactly as contributed in pull request #408.
The test innodb.instant_alter is based on a contributed test.

The redo log record format changes for ROW_FORMAT=DYNAMIC and
ROW_FORMAT=COMPACT are as contributed. (With this change present,
crash recovery from MariaDB 10.3.1 will fail in spectacular ways!)
The semantics of higher-level redo log records that modify the
PAGE_INSTANT field are also changed. The redo log format version identifier
was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1.

Everything else has been rewritten by me. Thanks to Elena Stepanova,
the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the
PAGE_FREE list after deleting or shortening the 'default row' record,
by calling either btr_page_empty() or btr_page_reorganize(). We must
know the size of each entry in the PAGE_FREE list. If rollback left a
freed copy of the 'default row' in the PAGE_FREE list, we would be
unable to determine its size (if it is in ROW_FORMAT=COMPACT or
ROW_FORMAT=DYNAMIC) because it would contain more fields than the
rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly
added column that is not present in the clustered index record.

len_is_stored(): Check if a length is an actual length. There are
two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.
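
In essence (a sketch implied by the description above):

    /* A length is an actual stored length unless it is one of the
    two magic values. */
    inline bool len_is_stored(ulint len)
    {
        return len != UNIV_SQL_NULL && len != UNIV_SQL_DEFAULT;
    }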

dict_col_t::def_val: The 'default row' value of the column.  If the
column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.

dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(),
instant_value().
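
Given the def_val convention above, the column-level check amounts to
something like this (sketch):

    /* A column was added by instant ADD COLUMN iff it carries a
    'default row' value. */
    inline bool dict_col_t::is_instant() const
    {
        return def_val.len != UNIV_SQL_DEFAULT;
    }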

dict_col_t::remove_instant(): Remove the 'instant ADD' status of
a column.

dict_col_t::name(const dict_table_t& table): Replaces
dict_table_get_col_name().

dict_index_t::n_core_fields: The original number of fields.
For secondary indexes, and for clustered indexes where instant
ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.

dict_index_t::n_core_null_bytes: Number of bytes needed to
represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).

dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that
n_core_null_bytes was not initialized yet from the clustered index
root page.

dict_index_t: Add the accessors is_instant(), is_clust(),
get_n_nullable(), instant_field_value().
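
With n_core_fields defined as above, the index-level check reduces to
(sketch):

    /* Fields have been instantly added iff the index has grown beyond
    its original ('core') field count. */
    inline bool dict_index_t::is_instant() const
    {
        return n_core_fields != n_fields;
    }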

dict_index_t::instant_add_field(): Adjust clustered index metadata
for instant ADD COLUMN.

dict_index_t::remove_instant(): Remove the 'instant ADD' status
of a clustered index when the table becomes empty, or the very first
instant ADD COLUMN operation is rolled back.

dict_table_t: Add the accessors is_instant(), is_temporary(),
supports_instant().

dict_table_t::instant_add_column(): Adjust metadata for
instant ADD COLUMN.

dict_table_t::rollback_instant(): Adjust metadata on the rollback
of instant ADD COLUMN.

prepare_inplace_alter_table_dict(): First create the ctx->new_table,
and only then decide if the table really needs to be rebuilt.
We must split the creation of table or index metadata from the
creation of the dictionary table records and the creation of
the data. In this way, we can transform a table-rebuilding operation
into an instant ADD COLUMN operation. Dictionary objects will only
be added to cache when table rebuilding or index creation is needed.
The ctx->instant_table will never be added to cache.

dict_table_t::add_to_cache(): Modified and renamed from
dict_table_add_to_cache(). Do not modify the table metadata.
Let the callers invoke dict_table_add_system_columns() and if needed,
set can_be_evicted.

dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the
system columns (which will now exist in the dict_table_t object
already at this point).

dict_create_table_step(): Expect the callers to invoke
dict_table_add_system_columns().

pars_create_table(): Before creating the table creation execution
graph, invoke dict_table_add_system_columns().

row_create_table_for_mysql(): Expect all callers to invoke
dict_table_add_system_columns().

create_index_dict(): Replaces row_merge_create_index_graph().

innodb_update_n_cols(): Renamed from innobase_update_n_virtual().
Call my_error() if an error occurs.

btr_cur_instant_init(), btr_cur_instant_init_low(),
btr_cur_instant_root_init():
Load additional metadata from the clustered index and set
dict_index_t::n_core_null_bytes. This is invoked
when table metadata is first loaded into the data dictionary.

dict_boot(): Initialize n_core_null_bytes for the four hard-coded
dictionary tables.

dict_create_index_step(): Initialize n_core_null_bytes. This is
executed as part of CREATE TABLE.

dict_index_build_internal_clust(): Initialize n_core_null_bytes to
NO_CORE_NULL_BYTES if table->supports_instant().

row_create_index_for_mysql(): Initialize n_core_null_bytes for
CREATE TEMPORARY TABLE.

commit_cache_norebuild(): Call the code to rename or enlarge columns
in the cache only if instant ADD COLUMN is not being used.
(Instant ADD COLUMN would copy all column metadata from
instant_table to old_table, including the names and lengths.)

PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields.
This is repurposing the 16-bit field PAGE_DIRECTION, of which only the
least significant 3 bits were used. The original byte containing
PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.
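
The resulting bit layout of that 16-bit header word, as read by
page_get_instant() and page_ptr_get_direction() later in this file
(the helper names here are only illustrative):

    /* 16-bit word at PAGE_HEADER + PAGE_INSTANT (formerly PAGE_DIRECTION):
    bits 15..3 hold dict_index_t::n_core_fields (0 unless instant ADD
    COLUMN is in effect); bits 2..0, within the byte PAGE_DIRECTION_B,
    hold PAGE_DIRECTION. */
    inline unsigned instant_of(uint16_t w)   { return w >> 3; }
    inline unsigned direction_of(uint16_t w) { return w & 7; }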

page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.

page_ptr_get_direction(), page_get_direction(),
page_ptr_set_direction(): Accessors for PAGE_DIRECTION.

page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.

page_direction_increment(): Increment PAGE_N_DIRECTION
and set PAGE_DIRECTION.

rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes,
and assume that heap_no is always set.
Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records,
even if the record contains fewer fields.

rec_offs_make_valid(): Add the parameter 'leaf'.

rec_copy_prefix_to_dtuple(): Assert that the tuple is only built
on the core fields. Instant ADD COLUMN only applies to the
clustered index, and we should never build a search key that has
more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR.
All these columns are always present.

dict_index_build_data_tuple(): Remove assertions that would be
duplicated in rec_copy_prefix_to_dtuple().

rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose
number of fields is between n_core_fields and n_fields.

cmp_rec_rec_with_match(): Implement the comparison between two
MIN_REC_FLAG records.

trx_t::in_rollback: Make the field available in non-debug builds.

trx_start_for_ddl_low(): Remove dangerous error-tolerance.
A dictionary transaction must be flagged as such before it has generated
any undo log records. This is because trx_undo_assign_undo() will mark
the transaction as a dictionary transaction in the undo log header
right before the very first undo log record is being written.

btr_index_rec_validate(): Account for instant ADD COLUMN.

row_undo_ins_remove_clust_rec(): On the rollback of an insert into
SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the
last column from the table and the clustered index.

row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(),
trx_undo_update_rec_get_update(): Handle the 'default row'
as a special case.

dtuple_t::trim(index): Omit a redundant suffix of an index tuple right
before insert or update. After instant ADD COLUMN, if the last fields
of a clustered index tuple match the 'default row', there is no
need to store them. While trimming the entry, we must hold a page latch,
so that the table cannot be emptied and the 'default row' be deleted.
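
Conceptually, the trimming amounts to the following sketch (not the actual
dtuple_t::trim() code; default_value_of() is a hypothetical stand-in for
looking up an instantly added column's value in the 'default row', and
NULL or externally stored fields are ignored here):

    /* Sketch: shrink entry->n_fields while the last field beyond the
    'core' fields equals the corresponding instant ADD COLUMN default. */
    inline void trim_entry(dtuple_t* entry, const dict_index_t* index)
    {
        ulint n = entry->n_fields;
        while (n > index->n_core_fields) {
            const dfield_t* f = dtuple_get_nth_field(entry, n - 1);
            ulint           def_len;
            const byte*     def = default_value_of(index, n - 1, &def_len);
            if (dfield_get_len(f) != def_len
                || memcmp(dfield_get_data(f), def, def_len)) {
                break;
            }
            n--;
        }
        entry->n_fields = n;
    }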

btr_cur_optimistic_update(), btr_cur_pessimistic_update(),
row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low():
Invoke dtuple_t::trim() if needed.

row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling
row_ins_clust_index_entry_low().

rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number
of fields to be between n_core_fields and n_fields. Do not support the
infimum and supremum records; they are never supposed to be stored in
dtuple_t, because page creation nowadays uses a lower-level method for
initializing them.

rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the
number of fields.

btr_cur_trim(): In an update, trim the index entry as needed. For the
'default row', handle rollback specially. For user records, omit
fields that match the 'default row'.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
Skip locking and adaptive hash index for the 'default row'.

row_log_table_apply_convert_mrec(): Replace 'default row' values if needed.
In the temporary file that is applied by row_log_table_apply(),
we must identify whether the records contain the extra header for
instantly added columns. For now, we will allocate an additional byte
for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table
has been subject to instant ADD COLUMN. The ROW_T_DELETE records are
fine, as they will be converted and will only contain 'core' columns
(PRIMARY KEY and some system columns) that are converted from dtuple_t.

rec_get_converted_size_temp(), rec_init_offsets_temp(),
rec_convert_dtuple_to_temp(): Add the parameter 'status'.

REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED:
An info_bits constant for distinguishing the 'default row' record.

rec_comp_status_t: An enum of the status bit values.

rec_leaf_format: An enum that replaces the bool parameter of
rec_init_offsets_comp_ordinary().
2017-10-06 09:50:10 +03:00

/*****************************************************************************
Copyright (c) 1994, 2015, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2016, 2017, MariaDB Corporation.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Suite 500, Boston, MA 02110-1335 USA
*****************************************************************************/
/**************************************************//**
@file include/page0page.ic
Index page routines
Created 2/2/1994 Heikki Tuuri
*******************************************************/
#ifndef page0page_ic
#define page0page_ic
#ifndef UNIV_INNOCHECKSUM
#include "mach0data.h"
#ifdef UNIV_DEBUG
# include "log0recv.h"
#endif /* UNIV_DEBUG */
#include "rem0cmp.h"
#include "mtr0log.h"
#include "page0zip.h"
#ifdef UNIV_MATERIALIZE
#undef UNIV_INLINE
#define UNIV_INLINE
#endif
/*************************************************************//**
Returns the max trx id field value. */
UNIV_INLINE
trx_id_t
page_get_max_trx_id(
/*================*/
const page_t* page) /*!< in: page */
{
ut_ad(page);
return(mach_read_from_8(page + PAGE_HEADER + PAGE_MAX_TRX_ID));
}
/*************************************************************//**
Sets the max trx id field value if trx_id is bigger than the previous
value. */
UNIV_INLINE
void
page_update_max_trx_id(
/*===================*/
buf_block_t* block, /*!< in/out: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
trx_id_t trx_id, /*!< in: transaction id */
mtr_t* mtr) /*!< in/out: mini-transaction */
{
ut_ad(block);
ut_ad(mtr_memo_contains(mtr, block, MTR_MEMO_PAGE_X_FIX));
/* During crash recovery, this function may be called on
something else than a leaf page of a secondary index or the
insert buffer index tree (dict_index_is_sec_or_ibuf() returns
TRUE for the dummy indexes constructed during redo log
application). In that case, PAGE_MAX_TRX_ID is unused,
and trx_id is usually zero. */
ut_ad(trx_id || recv_recovery_is_on());
ut_ad(page_is_leaf(buf_block_get_frame(block)));
if (page_get_max_trx_id(buf_block_get_frame(block)) < trx_id) {
page_set_max_trx_id(block, page_zip, trx_id, mtr);
}
}
/** Read the AUTO_INCREMENT value from a clustered index root page.
@param[in] page clustered index root page
@return the persisted AUTO_INCREMENT value */
UNIV_INLINE
ib_uint64_t
page_get_autoinc(const page_t* page)
{
ut_ad(page_is_root(page));
return(mach_read_from_8(PAGE_HEADER + PAGE_ROOT_AUTO_INC + page));
}
/*************************************************************//**
Returns the RTREE SPLIT SEQUENCE NUMBER (FIL_RTREE_SPLIT_SEQ_NUM).
@return SPLIT SEQUENCE NUMBER */
UNIV_INLINE
node_seq_t
page_get_ssn_id(
/*============*/
const page_t* page) /*!< in: page */
{
ut_ad(page);
return(static_cast<node_seq_t>(
mach_read_from_8(page + FIL_RTREE_SPLIT_SEQ_NUM)));
}
/*************************************************************//**
Sets the RTREE SPLIT SEQUENCE NUMBER field value */
UNIV_INLINE
void
page_set_ssn_id(
/*============*/
buf_block_t* block, /*!< in/out: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
node_seq_t ssn_id, /*!< in: split sequence number */
mtr_t* mtr) /*!< in/out: mini-transaction */
{
page_t* page = buf_block_get_frame(block);
ut_ad(!mtr || mtr_memo_contains_flagged(mtr, block,
MTR_MEMO_PAGE_SX_FIX
| MTR_MEMO_PAGE_X_FIX));
if (page_zip) {
mach_write_to_8(page + FIL_RTREE_SPLIT_SEQ_NUM, ssn_id);
page_zip_write_header(page_zip,
page + FIL_RTREE_SPLIT_SEQ_NUM,
8, mtr);
} else if (mtr) {
mlog_write_ull(page + FIL_RTREE_SPLIT_SEQ_NUM, ssn_id, mtr);
} else {
mach_write_to_8(page + FIL_RTREE_SPLIT_SEQ_NUM, ssn_id);
}
}
#endif /* !UNIV_INNOCHECKSUM */
/*************************************************************//**
Reads the given header field. */
UNIV_INLINE
uint16_t
page_header_get_field(
/*==================*/
const page_t* page, /*!< in: page */
ulint field) /*!< in: PAGE_LEVEL, ... */
{
ut_ad(page);
ut_ad(field <= PAGE_INDEX_ID);
return(mach_read_from_2(page + PAGE_HEADER + field));
}
#ifndef UNIV_INNOCHECKSUM
/*************************************************************//**
Sets the given header field. */
UNIV_INLINE
void
page_header_set_field(
/*==================*/
page_t* page, /*!< in/out: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
ulint field, /*!< in: PAGE_N_DIR_SLOTS, ... */
ulint val) /*!< in: value */
{
ut_ad(page);
ut_ad(field <= PAGE_N_RECS);
ut_ad(field == PAGE_N_HEAP || val < UNIV_PAGE_SIZE);
ut_ad(field != PAGE_N_HEAP || (val & 0x7fff) < UNIV_PAGE_SIZE);
mach_write_to_2(page + PAGE_HEADER + field, val);
if (page_zip) {
page_zip_write_header(page_zip,
page + PAGE_HEADER + field, 2, NULL);
}
}
/*************************************************************//**
Returns the offset stored in the given header field.
@return offset from the start of the page, or 0 */
UNIV_INLINE
uint16_t
page_header_get_offs(
/*=================*/
const page_t* page, /*!< in: page */
ulint field) /*!< in: PAGE_FREE, ... */
{
ulint offs;
ut_ad((field == PAGE_FREE)
|| (field == PAGE_LAST_INSERT)
|| (field == PAGE_HEAP_TOP));
offs = page_header_get_field(page, field);
ut_ad((field != PAGE_HEAP_TOP) || offs);
return(offs);
}
/*************************************************************//**
Sets the pointer stored in the given header field. */
UNIV_INLINE
void
page_header_set_ptr(
/*================*/
page_t* page, /*!< in: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
ulint field, /*!< in: PAGE_FREE, ... */
const byte* ptr) /*!< in: pointer or NULL*/
{
ulint offs;
ut_ad(page);
ut_ad((field == PAGE_FREE)
|| (field == PAGE_LAST_INSERT)
|| (field == PAGE_HEAP_TOP));
if (ptr == NULL) {
offs = 0;
} else {
offs = ulint(ptr - page);
}
ut_ad((field != PAGE_HEAP_TOP) || offs);
page_header_set_field(page, page_zip, field, offs);
}
/*************************************************************//**
Resets the last insert info field in the page header. Writes to mlog
about this operation. */
UNIV_INLINE
void
page_header_reset_last_insert(
/*==========================*/
page_t* page, /*!< in/out: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
mtr_t* mtr) /*!< in: mtr */
{
ut_ad(page != NULL);
ut_ad(mtr != NULL);
if (page_zip) {
mach_write_to_2(page + (PAGE_HEADER + PAGE_LAST_INSERT), 0);
page_zip_write_header(page_zip,
page + (PAGE_HEADER + PAGE_LAST_INSERT),
2, mtr);
} else {
mlog_write_ulint(page + (PAGE_HEADER + PAGE_LAST_INSERT), 0,
MLOG_2BYTES, mtr);
}
}
/***************************************************************//**
Returns the heap number of a record.
@return heap number */
UNIV_INLINE
ulint
page_rec_get_heap_no(
/*=================*/
const rec_t* rec) /*!< in: the physical record */
{
if (page_rec_is_comp(rec)) {
return(rec_get_heap_no_new(rec));
} else {
return(rec_get_heap_no_old(rec));
}
}
/** Determine whether a page is an index root page.
@param[in] page page frame
@return true if the page is a root page of an index */
UNIV_INLINE
bool
page_is_root(
const page_t* page)
{
#if FIL_PAGE_PREV % 8
# error FIL_PAGE_PREV must be 64-bit aligned
#endif
#if FIL_PAGE_NEXT != FIL_PAGE_PREV + 4
# error FIL_PAGE_NEXT must be adjacent to FIL_PAGE_PREV
#endif
#if FIL_NULL != 0xffffffff
# error FIL_NULL != 0xffffffff
#endif
/* Check that this is an index page and both the PREV and NEXT
pointers are FIL_NULL, because the root page does not have any
siblings. */
return(fil_page_index_page_check(page)
&& *reinterpret_cast<const ib_uint64_t*>(page + FIL_PAGE_PREV)
== IB_UINT64_MAX);
}
/** Determine whether an index page record is a user record.
@param[in] rec record in an index page
@return true if a user record */
inline
bool
page_rec_is_user_rec(const rec_t* rec)
{
ut_ad(page_rec_check(rec));
return(page_rec_is_user_rec_low(page_offset(rec)));
}
/** Determine whether an index page record is the supremum record.
@param[in] rec record in an index page
@return true if the supremum record */
inline
bool
page_rec_is_supremum(const rec_t* rec)
{
ut_ad(page_rec_check(rec));
return(page_rec_is_supremum_low(page_offset(rec)));
}
/** Determine whether an index page record is the infimum record.
@param[in] rec record in an index page
@return true if the infimum record */
inline
bool
page_rec_is_infimum(const rec_t* rec)
{
ut_ad(page_rec_check(rec));
return(page_rec_is_infimum_low(page_offset(rec)));
}
/************************************************************//**
true if the record is the first user record on a page.
@return true if the first user record */
UNIV_INLINE
bool
page_rec_is_first(
/*==============*/
const rec_t* rec, /*!< in: record */
const page_t* page) /*!< in: page */
{
ut_ad(page_get_n_recs(page) > 0);
return(page_rec_get_next_const(page_get_infimum_rec(page)) == rec);
}
/************************************************************//**
true if the record is the second user record on a page.
@return true if the second user record */
UNIV_INLINE
bool
page_rec_is_second(
/*===============*/
const rec_t* rec, /*!< in: record */
const page_t* page) /*!< in: page */
{
ut_ad(page_get_n_recs(page) > 1);
return(page_rec_get_next_const(
page_rec_get_next_const(page_get_infimum_rec(page))) == rec);
}
/************************************************************//**
true if the record is the last user record on a page.
@return true if the last user record */
UNIV_INLINE
bool
page_rec_is_last(
/*=============*/
const rec_t* rec, /*!< in: record */
const page_t* page) /*!< in: page */
{
ut_ad(page_get_n_recs(page) > 0);
return(page_rec_get_next_const(rec) == page_get_supremum_rec(page));
}
/************************************************************//**
true if the record is the second last user record on a page.
@return true if the second last user record */
UNIV_INLINE
bool
page_rec_is_second_last(
/*====================*/
const rec_t* rec, /*!< in: record */
const page_t* page) /*!< in: page */
{
ut_ad(page_get_n_recs(page) > 1);
ut_ad(!page_rec_is_last(rec, page));
return(page_rec_get_next_const(
page_rec_get_next_const(rec)) == page_get_supremum_rec(page));
}
/************************************************************//**
Returns the nth record of the record list.
This is the inverse function of page_rec_get_n_recs_before().
@return nth record */
UNIV_INLINE
rec_t*
page_rec_get_nth(
/*=============*/
page_t* page, /*!< in: page */
ulint nth) /*!< in: nth record */
{
return((rec_t*) page_rec_get_nth_const(page, nth));
}
/************************************************************//**
Returns the middle record of the records on the page. If there is an
even number of records in the list, returns the first record of the
upper half-list.
@return middle record */
UNIV_INLINE
rec_t*
page_get_middle_rec(
/*================*/
page_t* page) /*!< in: page */
{
ulint middle = (page_get_n_recs(page) + PAGE_HEAP_NO_USER_LOW) / 2;
return(page_rec_get_nth(page, middle));
}
#endif /* !UNIV_INNOCHECKSUM */
/*************************************************************//**
Gets the page number.
@return page number */
UNIV_INLINE
ulint
page_get_page_no(
/*=============*/
const page_t* page) /*!< in: page */
{
ut_ad(page == page_align((page_t*) page));
return(mach_read_from_4(page + FIL_PAGE_OFFSET));
}
#ifndef UNIV_INNOCHECKSUM
/*************************************************************//**
Gets the tablespace identifier.
@return space id */
UNIV_INLINE
ulint
page_get_space_id(
/*==============*/
const page_t* page) /*!< in: page */
{
ut_ad(page == page_align((page_t*) page));
return(mach_read_from_4(page + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID));
}
#endif /* !UNIV_INNOCHECKSUM */
/*************************************************************//**
Gets the number of user records on page (infimum and supremum records
are not user records).
@return number of user records */
UNIV_INLINE
uint16_t
page_get_n_recs(
/*============*/
const page_t* page) /*!< in: index page */
{
return(page_header_get_field(page, PAGE_N_RECS));
}
#ifndef UNIV_INNOCHECKSUM
/*************************************************************//**
Gets the number of dir slots in directory.
@return number of slots */
UNIV_INLINE
uint16_t
page_dir_get_n_slots(
/*=================*/
const page_t* page) /*!< in: index page */
{
return(page_header_get_field(page, PAGE_N_DIR_SLOTS));
}
/*************************************************************//**
Sets the number of dir slots in directory. */
UNIV_INLINE
void
page_dir_set_n_slots(
/*=================*/
page_t* page, /*!< in/out: page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL */
ulint n_slots)/*!< in: number of slots */
{
page_header_set_field(page, page_zip, PAGE_N_DIR_SLOTS, n_slots);
}
/*************************************************************//**
Gets the number of records in the heap.
@return number of records in the heap */
UNIV_INLINE
uint16_t
page_dir_get_n_heap(
/*================*/
const page_t* page) /*!< in: index page */
{
return(page_header_get_field(page, PAGE_N_HEAP) & 0x7fff);
}
/*************************************************************//**
Sets the number of records in the heap. */
UNIV_INLINE
void
page_dir_set_n_heap(
/*================*/
page_t* page, /*!< in/out: index page */
page_zip_des_t* page_zip,/*!< in/out: compressed page whose
uncompressed part will be updated, or NULL.
Note that the size of the dense page directory
in the compressed page trailer is
n_heap * PAGE_ZIP_DIR_SLOT_SIZE. */
ulint n_heap) /*!< in: number of records */
{
ut_ad(n_heap < 0x8000);
ut_ad(!page_zip || uint16_t(n_heap)
== (page_header_get_field(page, PAGE_N_HEAP) & 0x7fff) + 1);
page_header_set_field(page, page_zip, PAGE_N_HEAP, n_heap
| (0x8000
& page_header_get_field(page, PAGE_N_HEAP)));
}
#ifdef UNIV_DEBUG
/*************************************************************//**
Gets pointer to nth directory slot.
@return pointer to dir slot */
UNIV_INLINE
page_dir_slot_t*
page_dir_get_nth_slot(
/*==================*/
const page_t* page, /*!< in: index page */
ulint n) /*!< in: position */
{
ut_ad(page_dir_get_n_slots(page) > n);
return((page_dir_slot_t*)
page + UNIV_PAGE_SIZE - PAGE_DIR
- (n + 1) * PAGE_DIR_SLOT_SIZE);
}
#endif /* UNIV_DEBUG */
/**************************************************************//**
Used to check the consistency of a record on a page.
@return TRUE if the check succeeds */
UNIV_INLINE
ibool
page_rec_check(
/*===========*/
const rec_t* rec) /*!< in: record */
{
const page_t* page = page_align(rec);
ut_a(rec);
ut_a(page_offset(rec) <= page_header_get_field(page, PAGE_HEAP_TOP));
ut_a(page_offset(rec) >= PAGE_DATA);
return(TRUE);
}
/***************************************************************//**
Gets the record pointed to by a directory slot.
@return pointer to record */
UNIV_INLINE
const rec_t*
page_dir_slot_get_rec(
/*==================*/
const page_dir_slot_t* slot) /*!< in: directory slot */
{
return(page_align(slot) + mach_read_from_2(slot));
}
/***************************************************************//**
This is used to set the record offset in a directory slot. */
UNIV_INLINE
void
page_dir_slot_set_rec(
/*==================*/
page_dir_slot_t* slot, /*!< in: directory slot */
rec_t* rec) /*!< in: record on the page */
{
ut_ad(page_rec_check(rec));
mach_write_to_2(slot, page_offset(rec));
}
/***************************************************************//**
Gets the number of records owned by a directory slot.
@return number of records */
UNIV_INLINE
ulint
page_dir_slot_get_n_owned(
/*======================*/
const page_dir_slot_t* slot) /*!< in: page directory slot */
{
const rec_t* rec = page_dir_slot_get_rec(slot);
if (page_rec_is_comp(slot)) {
return(rec_get_n_owned_new(rec));
} else {
return(rec_get_n_owned_old(rec));
}
}
/***************************************************************//**
This is used to set the owned records field of a directory slot. */
UNIV_INLINE
void
page_dir_slot_set_n_owned(
/*======================*/
page_dir_slot_t*slot, /*!< in/out: directory slot */
page_zip_des_t* page_zip,/*!< in/out: compressed page, or NULL */
ulint n) /*!< in: number of records owned by the slot */
{
rec_t* rec = (rec_t*) page_dir_slot_get_rec(slot);
if (page_rec_is_comp(slot)) {
rec_set_n_owned_new(rec, page_zip, n);
} else {
ut_ad(!page_zip);
rec_set_n_owned_old(rec, n);
}
}
/************************************************************//**
Calculates the space reserved for directory slots of a given number of
records. The exact value is n_recs * PAGE_DIR_SLOT_SIZE /
PAGE_DIR_SLOT_MIN_N_OWNED, rounded upwards to an integer. */
UNIV_INLINE
ulint
page_dir_calc_reserved_space(
/*=========================*/
ulint n_recs) /*!< in: number of records */
{
return((PAGE_DIR_SLOT_SIZE * n_recs + PAGE_DIR_SLOT_MIN_N_OWNED - 1)
/ PAGE_DIR_SLOT_MIN_N_OWNED);
}
/************************************************************//**
Gets the pointer to the next record on the page.
@return pointer to next record */
UNIV_INLINE
const rec_t*
page_rec_get_next_low(
/*==================*/
const rec_t* rec, /*!< in: pointer to record */
ulint comp) /*!< in: nonzero=compact page layout */
{
ulint offs;
const page_t* page;
ut_ad(page_rec_check(rec));
page = page_align(rec);
offs = rec_get_next_offs(rec, comp);
if (offs >= UNIV_PAGE_SIZE) {
fprintf(stderr,
"InnoDB: Next record offset is nonsensical %lu"
" in record at offset %lu\n"
"InnoDB: rec address %p, space id %lu, page %lu\n",
(ulong) offs, (ulong) page_offset(rec),
(void*) rec,
(ulong) page_get_space_id(page),
(ulong) page_get_page_no(page));
ut_error;
} else if (offs == 0) {
return(NULL);
}
return(page + offs);
}
/************************************************************//**
Gets the pointer to the next record on the page.
@return pointer to next record */
UNIV_INLINE
rec_t*
page_rec_get_next(
/*==============*/
rec_t* rec) /*!< in: pointer to record */
{
return((rec_t*) page_rec_get_next_low(rec, page_rec_is_comp(rec)));
}
/************************************************************//**
Gets the pointer to the next record on the page.
@return pointer to next record */
UNIV_INLINE
const rec_t*
page_rec_get_next_const(
/*====================*/
const rec_t* rec) /*!< in: pointer to record */
{
return(page_rec_get_next_low(rec, page_rec_is_comp(rec)));
}
/************************************************************//**
Gets the pointer to the next non delete-marked record on the page.
If all subsequent records are delete-marked, then this function
will return the supremum record.
@return pointer to next non delete-marked record or pointer to supremum */
UNIV_INLINE
const rec_t*
page_rec_get_next_non_del_marked(
/*=============================*/
const rec_t* rec) /*!< in: pointer to record */
{
const rec_t* r;
ulint page_is_compact = page_rec_is_comp(rec);
for (r = page_rec_get_next_const(rec);
!page_rec_is_supremum(r)
&& rec_get_deleted_flag(r, page_is_compact);
r = page_rec_get_next_const(r)) {
/* noop */
}
return(r);
}
/************************************************************//**
Sets the pointer to the next record on the page. */
UNIV_INLINE
void
page_rec_set_next(
/*==============*/
rec_t* rec, /*!< in: pointer to record,
must not be page supremum */
const rec_t* next) /*!< in: pointer to next record,
must not be page infimum */
{
ulint offs;
ut_ad(page_rec_check(rec));
ut_ad(!page_rec_is_supremum(rec));
ut_ad(rec != next);
ut_ad(!next || !page_rec_is_infimum(next));
ut_ad(!next || page_align(rec) == page_align(next));
offs = next != NULL ? page_offset(next) : 0;
if (page_rec_is_comp(rec)) {
rec_set_next_offs_new(rec, offs);
} else {
rec_set_next_offs_old(rec, offs);
}
}
/************************************************************//**
Gets the pointer to the previous record.
@return pointer to previous record */
UNIV_INLINE
const rec_t*
page_rec_get_prev_const(
/*====================*/
const rec_t* rec) /*!< in: pointer to record, must not be page
infimum */
{
const page_dir_slot_t* slot;
ulint slot_no;
const rec_t* rec2;
const rec_t* prev_rec = NULL;
const page_t* page;
ut_ad(page_rec_check(rec));
page = page_align(rec);
ut_ad(!page_rec_is_infimum(rec));
slot_no = page_dir_find_owner_slot(rec);
ut_a(slot_no != 0);
slot = page_dir_get_nth_slot(page, slot_no - 1);
rec2 = page_dir_slot_get_rec(slot);
if (page_is_comp(page)) {
while (rec != rec2) {
prev_rec = rec2;
rec2 = page_rec_get_next_low(rec2, TRUE);
}
} else {
while (rec != rec2) {
prev_rec = rec2;
rec2 = page_rec_get_next_low(rec2, FALSE);
}
}
ut_a(prev_rec);
return(prev_rec);
}
/************************************************************//**
Gets the pointer to the previous record.
@return pointer to previous record */
UNIV_INLINE
rec_t*
page_rec_get_prev(
/*==============*/
rec_t* rec) /*!< in: pointer to record, must not be page
infimum */
{
return((rec_t*) page_rec_get_prev_const(rec));
}
/***************************************************************//**
Looks for the record which owns the given record.
@return the owner record */
UNIV_INLINE
rec_t*
page_rec_find_owner_rec(
/*====================*/
rec_t* rec) /*!< in: the physical record */
{
ut_ad(page_rec_check(rec));
if (page_rec_is_comp(rec)) {
while (rec_get_n_owned_new(rec) == 0) {
rec = page_rec_get_next(rec);
}
} else {
while (rec_get_n_owned_old(rec) == 0) {
rec = page_rec_get_next(rec);
}
}
return(rec);
}
/**********************************************************//**
Returns the base extra size of a physical record. This is the
size of the fixed header, independent of the record size.
@return REC_N_NEW_EXTRA_BYTES or REC_N_OLD_EXTRA_BYTES */
UNIV_INLINE
ulint
page_rec_get_base_extra_size(
/*=========================*/
const rec_t* rec) /*!< in: physical record */
{
#if REC_N_NEW_EXTRA_BYTES + 1 != REC_N_OLD_EXTRA_BYTES
# error "REC_N_NEW_EXTRA_BYTES + 1 != REC_N_OLD_EXTRA_BYTES"
#endif
return(REC_N_NEW_EXTRA_BYTES + (ulint) !page_rec_is_comp(rec));
}
#endif /* !UNIV_INNOCHECKSUM */
/************************************************************//**
Returns the sum of the sizes of the records in the record list, excluding
the infimum and supremum records.
@return data in bytes */
UNIV_INLINE
uint16_t
page_get_data_size(
/*===============*/
const page_t* page) /*!< in: index page */
{
uint16_t ret = page_header_get_field(page, PAGE_HEAP_TOP)
- (page_is_comp(page)
? PAGE_NEW_SUPREMUM_END
: PAGE_OLD_SUPREMUM_END)
- page_header_get_field(page, PAGE_GARBAGE);
ut_ad(ret < UNIV_PAGE_SIZE);
return(ret);
}
#ifndef UNIV_INNOCHECKSUM
/************************************************************//**
Allocates a block of memory from the free list of an index page. */
UNIV_INLINE
void
page_mem_alloc_free(
/*================*/
page_t* page, /*!< in/out: index page */
page_zip_des_t* page_zip,/*!< in/out: compressed page with enough
space available for inserting the record,
or NULL */
rec_t* next_rec,/*!< in: pointer to the new head of the
free record list */
ulint need) /*!< in: number of bytes allocated */
{
ulint garbage;
#ifdef UNIV_DEBUG
const rec_t* old_rec = page_header_get_ptr(page, PAGE_FREE);
ulint next_offs;
ut_ad(old_rec);
next_offs = rec_get_next_offs(old_rec, page_is_comp(page));
ut_ad(next_rec == (next_offs ? page + next_offs : NULL));
#endif
page_header_set_ptr(page, page_zip, PAGE_FREE, next_rec);
garbage = page_header_get_field(page, PAGE_GARBAGE);
ut_ad(garbage >= need);
page_header_set_field(page, page_zip, PAGE_GARBAGE, garbage - need);
}
/*************************************************************//**
Calculates free space if a page is emptied.
@return free space */
UNIV_INLINE
ulint
page_get_free_space_of_empty(
/*=========================*/
ulint comp) /*!< in: nonzero=compact page layout */
{
if (comp) {
return((ulint)(UNIV_PAGE_SIZE
- PAGE_NEW_SUPREMUM_END
- PAGE_DIR
- 2 * PAGE_DIR_SLOT_SIZE));
}
return((ulint)(UNIV_PAGE_SIZE
- PAGE_OLD_SUPREMUM_END
- PAGE_DIR
- 2 * PAGE_DIR_SLOT_SIZE));
}
/***********************************************************************//**
Write a 32-bit field in a data dictionary record. */
UNIV_INLINE
void
page_rec_write_field(
/*=================*/
rec_t* rec, /*!< in/out: record to update */
ulint i, /*!< in: index of the field to update */
ulint val, /*!< in: value to write */
mtr_t* mtr) /*!< in/out: mini-transaction */
{
byte* data;
ulint len;
data = rec_get_nth_field_old(rec, i, &len);
ut_ad(len == 4);
mlog_write_ulint(data, val, MLOG_4BYTES, mtr);
}
/************************************************************//**
Each user record on a page, and also each deleted user record in the heap,
takes its size plus PAGE_DIR_SLOT_SIZE / PAGE_DIR_SLOT_MIN_N_OWNED bytes
of page directory space. If the sum of these exceeds the
value of page_get_free_space_of_empty, the insert is impossible, otherwise
it is allowed. This function returns the maximum combined size of records
which can be inserted on top of the record heap.
@return maximum combined size for inserted records */
UNIV_INLINE
ulint
page_get_max_insert_size(
/*=====================*/
const page_t* page, /*!< in: index page */
ulint n_recs) /*!< in: number of records */
{
ulint occupied;
ulint free_space;
if (page_is_comp(page)) {
occupied = page_header_get_field(page, PAGE_HEAP_TOP)
- PAGE_NEW_SUPREMUM_END
+ page_dir_calc_reserved_space(
n_recs + page_dir_get_n_heap(page) - 2);
free_space = page_get_free_space_of_empty(TRUE);
} else {
occupied = page_header_get_field(page, PAGE_HEAP_TOP)
- PAGE_OLD_SUPREMUM_END
+ page_dir_calc_reserved_space(
n_recs + page_dir_get_n_heap(page) - 2);
free_space = page_get_free_space_of_empty(FALSE);
}
/* Above the 'n_recs +' part reserves directory space for the new
inserted records; the '- 2' excludes page infimum and supremum
records */
if (occupied > free_space) {
return(0);
}
return(free_space - occupied);
}
/************************************************************//**
Returns the maximum combined size of records which can be inserted on top
of the record heap if a page is first reorganized.
@return maximum combined size for inserted records */
UNIV_INLINE
ulint
page_get_max_insert_size_after_reorganize(
/*======================================*/
const page_t* page, /*!< in: index page */
ulint n_recs) /*!< in: number of records */
{
ulint occupied;
ulint free_space;
occupied = page_get_data_size(page)
+ page_dir_calc_reserved_space(n_recs + page_get_n_recs(page));
free_space = page_get_free_space_of_empty(page_is_comp(page));
if (occupied > free_space) {
return(0);
}
return(free_space - occupied);
}
/************************************************************//**
Puts a record to free list. */
UNIV_INLINE
void
page_mem_free(
/*==========*/
page_t* page, /*!< in/out: index page */
page_zip_des_t* page_zip, /*!< in/out: compressed page,
or NULL */
rec_t* rec, /*!< in: pointer to the
(origin of) record */
const dict_index_t* index, /*!< in: index of rec */
const ulint* offsets) /*!< in: array returned by
rec_get_offsets() */
{
rec_t* free;
ulint garbage;
ut_ad(rec_offs_validate(rec, index, offsets));
free = page_header_get_ptr(page, PAGE_FREE);
if (srv_immediate_scrub_data_uncompressed) {
/* scrub record */
memset(rec, 0, rec_offs_data_size(offsets));
}
page_rec_set_next(rec, free);
page_header_set_ptr(page, page_zip, PAGE_FREE, rec);
garbage = page_header_get_field(page, PAGE_GARBAGE);
page_header_set_field(page, page_zip, PAGE_GARBAGE,
garbage + rec_offs_size(offsets));
if (page_zip) {
page_zip_dir_delete(page_zip, rec, index, offsets, free);
} else {
page_header_set_field(page, page_zip, PAGE_N_RECS,
page_get_n_recs(page) - 1);
}
}
/** Read the PAGE_DIRECTION field from a byte.
@param[in] ptr pointer to PAGE_DIRECTION_B
@return the value of the PAGE_DIRECTION field */
inline
byte
page_ptr_get_direction(const byte* ptr)
{
ut_ad(page_offset(ptr) == PAGE_HEADER + PAGE_DIRECTION_B);
return *ptr & ((1U << 3) - 1);
}
/** Set the PAGE_DIRECTION field.
@param[in] ptr pointer to PAGE_DIRECTION_B
@param[in] dir the value of the PAGE_DIRECTION field */
inline
void
page_ptr_set_direction(byte* ptr, byte dir)
{
ut_ad(page_offset(ptr) == PAGE_HEADER + PAGE_DIRECTION_B);
ut_ad(dir >= PAGE_LEFT);
ut_ad(dir <= PAGE_NO_DIRECTION);
*ptr = (*ptr & ~((1U << 3) - 1)) | dir;
}
/** Read the PAGE_INSTANT field.
@param[in] page index page
@return the value of the PAGE_INSTANT field */
inline
uint16_t
page_get_instant(const page_t* page)
{
uint16_t i = page_header_get_field(page, PAGE_INSTANT);
#ifdef UNIV_DEBUG
switch (fil_page_get_type(page)) {
case FIL_PAGE_TYPE_INSTANT:
ut_ad(page_get_direction(page) <= PAGE_NO_DIRECTION);
ut_ad(i >> 3);
break;
case FIL_PAGE_INDEX:
ut_ad(i <= PAGE_NO_DIRECTION || !page_is_comp(page));
break;
case FIL_PAGE_RTREE:
ut_ad(i == PAGE_NO_DIRECTION || i == 0);
break;
default:
ut_ad(!"invalid page type");
break;
}
#endif /* UNIV_DEBUG */
return(i >> 3);
}
/** Assign the PAGE_INSTANT field.
@param[in,out] page clustered index root page
@param[in] n original number of clustered index fields
@param[in,out] mtr mini-transaction */
inline
void
page_set_instant(page_t* page, unsigned n, mtr_t* mtr)
{
ut_ad(fil_page_get_type(page) == FIL_PAGE_TYPE_INSTANT);
ut_ad(n > 0);
ut_ad(n < REC_MAX_N_FIELDS);
uint16_t i = page_header_get_field(page, PAGE_INSTANT);
ut_ad(i <= PAGE_NO_DIRECTION);
i |= n << 3;
mlog_write_ulint(PAGE_HEADER + PAGE_INSTANT + page, i,
MLOG_2BYTES, mtr);
}
#endif /* !UNIV_INNOCHECKSUM */
#ifdef UNIV_MATERIALIZE
#undef UNIV_INLINE
#define UNIV_INLINE UNIV_INLINE_ORIGINAL
#endif
#endif