Lock the data dictionary only after acquiring the table lock. The data
dictionary should not be locked for long periods. Before this change, in
the worst case, the dictionary would remain locked until
innodb_lock_wait_timeout expired.
In effect, transaction-level locks (locks on database objects such as
records and tables) have a latching order level of SYNC_USER_TRX_LOCK,
which is above any InnoDB rw-lock or mutex. However, the latching order
of SYNC_USER_TRX_LOCK is never checked, not even by UNIV_SYNC_DEBUG.
ha_innobase::add_index(), ha_innobase::final_drop_index(): Invoke
row_mysql_lock_data_dictionary(trx) only after row_merge_lock_table().
ha_innobase::final_drop_index(): Set the dictionary operation mode to
TRX_DICT_OP_INDEX_MAY_WAIT for the duration of the row_merge_lock_table()
call.
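
For illustration only, a minimal standalone sketch (C++) of the new
acquisition order; the row_* functions are stand-ins for the InnoDB calls
named above, with all real work omitted:

    struct trx_t { /* transaction state omitted */ };

    /* Stand-in: may block for up to innodb_lock_wait_timeout. */
    static int row_merge_lock_table(trx_t*) { return 0; }

    /* Stand-ins for latching and releasing the data dictionary. */
    static void row_mysql_lock_data_dictionary(trx_t*) {}
    static void row_mysql_unlock_data_dictionary(trx_t*) {}

    /* Both add_index() and final_drop_index() now take the potentially
    long table lock wait first, so the dictionary latch is never held
    across that wait. */
    static void create_or_drop_index_sketch(trx_t* trx)
    {
        if (row_merge_lock_table(trx) == 0) {       /* 1: table lock */
            row_mysql_lock_data_dictionary(trx);    /* 2: dictionary */
            /* ... create or drop the index trees ... */
            row_mysql_unlock_data_dictionary(trx);
        }
    }

    int main()
    {
        trx_t trx;
        create_or_drop_index_sketch(&trx);
        return 0;
    }
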
Active transactions must not switch table or index definitions on the fly,
for several reasons, including the following:
* copied indexes do not carry any history or locking information;
that is, rollbacks, read views, and record locking would be broken
* huge potential for race conditions, inconsistent reads and writes,
loss of data, and corruption
Instead of trying to detect whether the table was changed during a
transaction, acquire appropriate locks that protect the creation and
dropping of indexes.
innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test
that consistent reads work across dropped indexes.
lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion.
When inserting a record into an index, the table must be at least IX-locked.
However, when an index is being created, an IS-lock on the table is
sufficient.
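
A simplified standalone model (C++) of the relaxed check; lock_table_has()
and the lock bookkeeping are reduced to stand-ins, and only the IX-or-IS
condition is meant to mirror the assertion:

    #include <cassert>

    enum lock_mode { LOCK_IS, LOCK_IX, LOCK_S, LOCK_X }; /* assumed names */

    struct table_locks_t { bool is_locked; bool ix_locked; };

    /* Stand-in for lock_table_has(trx, table, mode). */
    static bool lock_table_has(const table_locks_t& t, lock_mode mode)
    {
        switch (mode) {
        case LOCK_IS: return t.is_locked;
        case LOCK_IX: return t.ix_locked;
        default:      return false;
        }
    }

    /* Old assertion: IX required.  Relaxed assertion: IS is accepted as
    well, because an IS-lock on the table is sufficient while an index
    is being created. */
    static void rec_insert_check_sketch(const table_locks_t& t)
    {
        assert(lock_table_has(t, LOCK_IX) || lock_table_has(t, LOCK_IS));
    }

    int main()
    {
        table_locks_t t = {true, false};   /* index creation: IS only */
        rec_insert_check_sketch(t);
        return 0;
    }
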
row_merge_lock_table(): Add the parameter enum lock_mode mode, which must
be LOCK_X or LOCK_S.
row_merge_drop_table(): Assert that n_mysql_handles_opened == 0.
Unconditionally drop the table.
ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate.
After acquiring an X lock, assert that n_mysql_handles_opened == 1.
Remove the comments about dropping tables in the background.
ha_innobase::final_drop_index(): Acquire an X lock on the table.
dict_table_t: Remove version_number, to_be_dropped, and prebuilts.
ins_node_t: Remove table_version_number.
enum lock_mode: Move the definition from lock0lock.h to lock0types.h.
ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete():
Remove.
row_prebuilt_t: Remove the declaration from row0types.h.
row_drop_table_for_mysql_no_commit(): Always print a warning if a table
was added to the background drop queue.
for dropping the index trees, and set the dictionary operation flag, similar
to what ha_innobase::add_index() does. This should ensure correct crash
recovery.
Redefine the InnoDB symbols so that the dynamic plugin can replace the
builtin InnoDB in MySQL 5.1.
ha_innodb.cc, handler0alter.cc: #include "univ.i" before any other InnoDB
header files and before defining any symbols.
innodb_redefine.h: New file, to contain a mapping of symbols. The idea
is that this file will be replaced in the build process; because this
is a large file that can be generated automatically, it does not make sense
to keep it under version control.
univ.i: #include "innodb_redefine.h" and #define ha_innobase ha_innodb
Makefile.am (ha_innodb_la_CXXFLAGS): Remove -Dha_innobase=ha_innodb
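
To show only the shape of the symbol mapping, a hypothetical excerpt and a
tiny self-contained example (C++); the macro entry and the symbol name are
invented for illustration and are not taken from the generated file:

    #include <cstdio>

    /* Hypothetical innodb_redefine.h entry: map a builtin symbol name to
    a plugin-private name before any InnoDB header declares it. */
    #define example_innodb_func  plugin_example_innodb_func

    /* Because the mapping is seen first (univ.i includes it before any
    other InnoDB header), this definition and every later use of
    "example_innodb_func" refer to the plugin's own symbol, not the
    builtin server's copy. */
    static void plugin_example_innodb_func(void)
    {
        std::puts("plugin copy called");
    }

    int main()
    {
        example_innodb_func();  /* expands to plugin_example_innodb_func() */
        return 0;
    }
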
NOTE: there are still some issues in the source code. One known issue is
the #undef mutex_free in sync0sync.h, which will cause the plugin to call the
function mutex_free in the builtin InnoDB. The preprocessor symbols defined
in innodb_redefine.h must not be undefined or redefined anywhere in the code.
enum trx_dict_op: dictionary operation modes
trx_get_dict_operation(), trx_set_dict_operation(): Accessors for
trx->dict_operation.
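
A sketch (C++) of the new modes and accessors; the enumerator list is
assembled from the modes named in this log plus an assumed _NONE default,
and the accessor bodies are assumed to be plain get/set:

    enum trx_dict_op {
        TRX_DICT_OP_NONE,           /* not a dictionary operation */
        TRX_DICT_OP_TABLE,          /* a table and its indexes are
                                    created or dropped */
        TRX_DICT_OP_INDEX,          /* only indexes are created/dropped */
        TRX_DICT_OP_INDEX_MAY_WAIT  /* like _INDEX, but a lock wait on
                                    the table is expected */
    };

    struct trx_t {
        trx_dict_op dict_operation;
        /* ... other transaction state ... */
    };

    static trx_dict_op trx_get_dict_operation(const trx_t* trx)
    {
        return trx->dict_operation;
    }

    static void trx_set_dict_operation(trx_t* trx, trx_dict_op op)
    {
        trx->dict_operation = op;
    }

    int main()
    {
        trx_t trx = {TRX_DICT_OP_NONE};
        trx_set_dict_operation(&trx, TRX_DICT_OP_INDEX_MAY_WAIT);
        return trx_get_dict_operation(&trx) == TRX_DICT_OP_INDEX_MAY_WAIT
            ? 0 : 1;
    }
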
lock_table_enqueue_waiting(), lock_rec_enqueue_waiting(): Do not complain
about lock waits if the dictionary mode is TRX_DICT_OP_INDEX_MAY_WAIT.
row_merge_lock_table(): Remove the work-around for avoiding the warning
in lock_table_enqueue_waiting().
trx_undo_mark_as_dict_operation(): Do not write trx->table_id to the
undo log unless the dict_operation is TRX_DICT_OP_TABLE.
ha_innobase::add_index(): Set the dict_operation mode initially to
TRX_DICT_OP_INDEX_MAY_WAIT, then lock the table exclusively and set the
mode to TRX_DICT_OP_INDEX, or to TRX_DICT_OP_TABLE when a temporary
table is created.
ha_innobase::update_thd(void): New function, to call the inline function
ha_innobase::update_thd(THD*).
check_trx_exists(): Make static. handler0alter.cc does not need to call
this function.
innobase_col_to_mysql(): New function, adapted from
row_sel_field_store_in_mysql_format().
innobase_rec_to_mysql(): Correct the function comment, which still said
"clustered index record" even though any record can be converted.
Make use of innobase_col_to_mysql(). Always call field->reset(),
so that innobase_col_to_mysql() will not have to pad anything.
innobase_rec_to_mysql(): New function, for converting an InnoDB clustered
index record to MySQL table->record[0]. TODO: convert integer fields.
Currently, integer fields are in big-endian byte order instead of
host byte order, and signed integer fields are offset by 0x80000000.
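
To make the byte format in the TODO concrete, a standalone decoder (C++)
for a 4-byte signed column; this only illustrates the arithmetic the
conversion would have to perform, it is not the eventual conversion code:

    #include <cstdint>
    #include <cstdio>

    /* InnoDB stores the column big-endian with the sign bit inverted
    (the 0x80000000 offset mentioned above). */
    static std::int32_t innodb_int4_to_host(const unsigned char* b)
    {
        std::uint32_t v = (std::uint32_t(b[0]) << 24)
                        | (std::uint32_t(b[1]) << 16)
                        | (std::uint32_t(b[2]) << 8)
                        |  std::uint32_t(b[3]);
        v ^= 0x80000000U;                   /* undo the sign-bit offset */
        return static_cast<std::int32_t>(v);
    }

    int main()
    {
        /* -1 as stored by InnoDB: 0xFFFFFFFF ^ 0x80000000, big-endian */
        const unsigned char stored[4] = {0x7F, 0xFF, 0xFF, 0xFF};
        std::printf("%d\n", innodb_int4_to_host(stored));  /* prints -1 */
        return 0;
    }
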
innobase_rec_reset(): New function, for resetting table->record[0].
row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table
handle) for reporting duplicate key values.
dtuple_from_fields(): New function, to convert an array of dfield_t* to
dtuple_t.
dtuple_get_n_ext(): New function, to compute the number of externally stored
fields.
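
A minimal sketch (C++) of the counting that dtuple_get_n_ext() performs;
dfield_t and dtuple_t are reduced to stand-in types, and only the idea of
counting the fields flagged as externally stored is taken from the log:

    #include <cstddef>
    #include <vector>

    struct dfield_t { bool ext; /* value stored externally (off-page) */ };
    struct dtuple_t { std::vector<dfield_t> fields; };

    /* Count the externally stored fields of a data tuple. */
    static std::size_t dtuple_get_n_ext_sketch(const dtuple_t& tuple)
    {
        std::size_t n_ext = 0;

        for (std::size_t i = 0; i < tuple.fields.size(); i++) {
            if (tuple.fields[i].ext) {
                n_ext++;
            }
        }

        return n_ext;
    }

    int main()
    {
        dtuple_t t;
        t.fields.push_back(dfield_t{false});
        t.fields.push_back(dfield_t{true}); /* e.g. a BLOB stored off-page */
        return dtuple_get_n_ext_sketch(t) == 1 ? 0 : 1;
    }
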
row_merge_dup_t: Structure for counting and reporting duplicate records.
row_merge_dup_report(): Function for counting and reporting duplicate records.
row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup
parameter with row_merge_dup_t* dup.
row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is
NULL when sorting a non-unique index.
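
To make the reporting path concrete, a standalone sketch (C++); the field
names, the printf, and the sort body are stand-ins, and only the behaviour
"dup is NULL for non-unique indexes, otherwise duplicates are counted and
reported" is taken from the log:

    #include <cstddef>
    #include <cstdio>

    /* Stand-in for row_merge_dup_t: which index is being built and how
    many duplicate key values have been seen. */
    struct row_merge_dup_t {
        const char*  index_name;
        std::size_t  n_dup;
    };

    /* Stand-in for row_merge_dup_report(): called when two adjacent
    tuples compare equal.  The real code would hand the record to MySQL
    (via innobase_rec_to_mysql()) so the duplicate key value is shown. */
    static void row_merge_dup_report(row_merge_dup_t* dup)
    {
        if (dup->n_dup++ == 0) {
            std::printf("duplicate key value in index %s\n",
                        dup->index_name);
        }
    }

    /* Stand-in for row_merge_buf_sort(): dup == NULL for non-unique
    indexes, so duplicates are simply kept there. */
    static void row_merge_buf_sort_sketch(row_merge_dup_t* dup,
                                          bool found_equal_keys)
    {
        /* ... merge sort the buffer; when two keys compare equal: */
        if (found_equal_keys && dup != NULL) {
            row_merge_dup_report(dup);
        }
    }

    int main()
    {
        row_merge_dup_t dup = {"uniq_idx", 0};

        row_merge_buf_sort_sketch(&dup, true);  /* unique index: reported */
        row_merge_buf_sort_sketch(NULL, true);  /* non-unique index */

        return dup.n_dup == 1 ? 0 : 1;
    }
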
row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(),
row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(),
row_merge(), row_merge_sort(): Add const qualifiers.
row_merge_read_clustered_index(): Use a common error handling branch err_exit.
Invoke row_merge_buf_sort() differently on unique indexes.
row_merge_blocks(): Note a TODO: we could invoke innobase_rec_to_mysql()
to report duplicate key values when creating a clustered index.