mariadb/include/row0merge.h

183 lines
6.3 KiB
C
Raw Normal View History

/******************************************************
Index build routines using a merge sort
(c) 2005 Innobase Oy
Created 13/06/2005 Jan Lindstrom
*******************************************************/
#ifndef row0merge_h
#define row0merge_h
#include "univ.i"
#include "data0data.h"
#include "dict0types.h"
#include "trx0types.h"
#include "que0types.h"
#include "mtr0mtr.h"
#include "rem0types.h"
#include "rem0rec.h"
#include "read0types.h"
#include "btr0types.h"
#include "row0mysql.h"
branches/zip: Fast index creation: Remove the ROW_PREBUILT_OBSOLETE nonsense. Active transactions must not switch table or index definitions on the fly, for several reasons, including the following: * copied indexes do not carry any history or locking information; that is, rollbacks, read views, and record locking would be broken * huge potential for race conditions, inconsistent reads and writes, loss of data, and corruption Instead of trying to track down if the table was changed during a transaction, acquire appropriate locks that protect the creation and dropping of indexes. innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test that consistent reads work across dropped indexes. lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion. When inserting a record into an index, the table must be at least IX-locked. However, when an index is being created, an IS-lock on the table is sufficient. row_merge_lock_table(): Add the parameter enum lock_mode mode, which must be LOCK_X or LOCK_S. row_merge_drop_table(): Assert that n_mysql_handles_opened == 0. Unconditionally drop the table. ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate. After acquiring an X lock, assert that n_mysql_handles_opened == 1. Remove the comments about dropping tables in the background. ha_innobase::final_drop_index(): Acquire an X lock on the table. dict_table_t: Remove version_number, to_be_dropped, and prebuilts. ins_node_t: Remove table_version_number. enum lock_mode: Move the definition from lock0lock.h to lock0types.h. ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete(): Remove. row_prebuilt_t: Remove the declaration from row0types.h. row_drop_table_for_mysql_no_commit(): Always print a warning if a table was added to the background drop queue.
2007-12-17 15:49:59 +00:00
#include "lock0types.h"
/* This structure holds index field definitions */
struct merge_index_field_struct {
ulint prefix_len; /* Prefix len */
const char* field_name; /* Field name */
};
typedef struct merge_index_field_struct merge_index_field_t;
/* This structure holds index definitions */
struct merge_index_def_struct {
const char* name; /* Index name */
ulint ind_type; /* 0, DICT_UNIQUE,
or DICT_CLUSTERED */
ulint n_fields; /* Number of fields in index */
merge_index_field_t* fields; /* Field definitions */
};
typedef struct merge_index_def_struct merge_index_def_t;
/*************************************************************************
Sets an exclusive lock on a table, for the duration of creating indexes. */
UNIV_INTERN
ulint
row_merge_lock_table(
/*=================*/
/* out: error code or DB_SUCCESS */
trx_t* trx, /* in/out: transaction */
branches/zip: Fast index creation: Remove the ROW_PREBUILT_OBSOLETE nonsense. Active transactions must not switch table or index definitions on the fly, for several reasons, including the following: * copied indexes do not carry any history or locking information; that is, rollbacks, read views, and record locking would be broken * huge potential for race conditions, inconsistent reads and writes, loss of data, and corruption Instead of trying to track down if the table was changed during a transaction, acquire appropriate locks that protect the creation and dropping of indexes. innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test that consistent reads work across dropped indexes. lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion. When inserting a record into an index, the table must be at least IX-locked. However, when an index is being created, an IS-lock on the table is sufficient. row_merge_lock_table(): Add the parameter enum lock_mode mode, which must be LOCK_X or LOCK_S. row_merge_drop_table(): Assert that n_mysql_handles_opened == 0. Unconditionally drop the table. ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate. After acquiring an X lock, assert that n_mysql_handles_opened == 1. Remove the comments about dropping tables in the background. ha_innobase::final_drop_index(): Acquire an X lock on the table. dict_table_t: Remove version_number, to_be_dropped, and prebuilts. ins_node_t: Remove table_version_number. enum lock_mode: Move the definition from lock0lock.h to lock0types.h. ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete(): Remove. row_prebuilt_t: Remove the declaration from row0types.h. row_drop_table_for_mysql_no_commit(): Always print a warning if a table was added to the background drop queue.
2007-12-17 15:49:59 +00:00
dict_table_t* table, /* in: table to lock */
enum lock_mode mode); /* in: LOCK_X or LOCK_S */
/*************************************************************************
Drop an index from the InnoDB system tables. The data dictionary must
have been locked exclusively by the caller, because the transaction
will not be committed. */
UNIV_INTERN
void
row_merge_drop_index(
/*=================*/
dict_index_t* index, /* in: index to be removed */
dict_table_t* table, /* in: table */
trx_t* trx); /* in: transaction handle */
/*************************************************************************
Drop those indexes which were created before an error occurred when
building an index. The data dictionary must have been locked
exclusively by the caller, because the transaction will not be
committed. */
UNIV_INTERN
void
row_merge_drop_indexes(
/*===================*/
trx_t* trx, /* in: transaction */
dict_table_t* table, /* in: table containing the indexes */
dict_index_t** index, /* in: indexes to drop */
ulint num_created); /* in: number of elements in index[] */
/*************************************************************************
Drop all partially created indexes during crash recovery. */
UNIV_INTERN
void
row_merge_drop_temp_indexes(void);
/*=============================*/
/*************************************************************************
Rename the tables in the data dictionary. The data dictionary must
have been locked exclusively by the caller, because the transaction
will not be committed. */
UNIV_INTERN
ulint
row_merge_rename_tables(
/*====================*/
/* out: error code or DB_SUCCESS */
dict_table_t* old_table, /* in/out: old table, renamed to
tmp_name */
dict_table_t* new_table, /* in/out: new table, renamed to
old_table->name */
const char* tmp_name, /* in: new name for old_table */
trx_t* trx); /* in: transaction handle */
/*************************************************************************
Create a temporary table for creating a primary key, using the definition
of an existing table. */
UNIV_INTERN
dict_table_t*
row_merge_create_temporary_table(
/*=============================*/
/* out: table,
or NULL on error */
const char* table_name, /* in: new table name */
const merge_index_def_t*index_def, /* in: the index definition
of the primary key */
const dict_table_t* table, /* in: old table definition */
trx_t* trx); /* in/out: transaction
(sets error_state) */
/*************************************************************************
Rename the temporary indexes in the dictionary to permanent ones. The
data dictionary must have been locked exclusively by the caller,
because the transaction will not be committed. */
UNIV_INTERN
ulint
row_merge_rename_indexes(
/*=====================*/
/* out: DB_SUCCESS if all OK */
trx_t* trx, /* in/out: transaction */
dict_table_t* table); /* in/out: table with new indexes */
/*************************************************************************
Create the index and load in to the dictionary. */
UNIV_INTERN
dict_index_t*
row_merge_create_index(
/*===================*/
/* out: index, or NULL on error */
trx_t* trx, /* in/out: trx (sets error_state) */
dict_table_t* table, /* in: the index is on this table */
const merge_index_def_t* /* in: the index definition */
index_def);
#ifdef ROW_MERGE_IS_INDEX_USABLE
/*************************************************************************
Check if a transaction can use an index. */
UNIV_INTERN
ibool
row_merge_is_index_usable(
/*======================*/
/* out: TRUE if index can be used by
the transaction else FALSE*/
const trx_t* trx, /* in: transaction */
const dict_index_t* index); /* in: index to check */
#endif /* ROW_MERGE_IS_INDEX_USABLE */
/*************************************************************************
If there are views that refer to the old table name then we "attach" to
the new instance of the table else we drop it immediately. */
UNIV_INTERN
ulint
row_merge_drop_table(
/*=================*/
/* out: DB_SUCCESS or error code */
trx_t* trx, /* in: transaction */
dict_table_t* table); /* in: table instance to drop */
/*************************************************************************
Build indexes on a table by reading a clustered index,
creating a temporary file containing index entries, merge sorting
these index entries and inserting sorted index entries to indexes. */
UNIV_INTERN
ulint
row_merge_build_indexes(
/*====================*/
/* out: DB_SUCCESS or error code */
trx_t* trx, /* in: transaction */
dict_table_t* old_table, /* in: table where rows are
read from */
dict_table_t* new_table, /* in: table where indexes are
created; identical to old_table
unless creating a PRIMARY KEY */
dict_index_t** indexes, /* in: indexes to be created */
branches/zip: Implement the reporting of duplicate key values to MySQL. innobase_rec_to_mysql(): New function, for converting an InnoDB clustered index record to MySQL table->record[0]. TODO: convert integer fields. Currently, integer fields are in big-endian byte order instead of host byte order, and signed integer fields are offset by 0x80000000. innobase_rec_reset(): New function, for resetting table->record[0]. row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table handle) for reporting duplicate key values. dtuple_from_fields(): New function, to convert an array of dfield_t* to dtuple_t. dtuple_get_n_ext(): New function, to compute the number of externally stored fields. row_merge_dup_t: Structure for counting and reporting duplicate records. row_merge_dup_report(): Function for counting and reporting duplicate records. row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup parameter with row_merge_dup_t* dup. row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is NULL when sorting a non-unique index. row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(), row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(), row_merge(), row_merge_sort(): Add const qualifiers. row_merge_read_clustered_index(): Use a common error handling branch err_exit. Invoke row_merge_buf_sort() differently on unique indexes. row_merge_blocks(): note TODO: We could invoke innobase_rec_to_mysql() to report duplicate key values when creating a clustered index.
2007-09-26 11:56:26 +00:00
ulint n_indexes, /* in: size of indexes[] */
TABLE* table); /* in/out: MySQL table, for
reporting erroneous key value
if applicable */
#endif /* row0merge.h */