row_create_index_graph_for_mysql(): Move from row0mysql.c to row0merge.c
and rename to row_merge_create_index_graph(). Also change the function
comment to say that the function will create and execute the query graph
for creating the index.
row_merge_create_index(): Remove redundant assignment to trx->error_state.
Lock the data dictionary only after acquiring the table lock. The data
dictionary should not be locked for long periods. Before this change, in
the worst case, the dictionary would be locked until the expiration of
innodb_lock_wait_timeout.
Conceptually, transaction-level locks (locks on database objects, such
as records and tables) have a latching order level of SYNC_USER_TRX_LOCK,
which is above any InnoDB rw-locks or mutexes. However, the latching
order of SYNC_USER_TRX_LOCK is never checked, not even by UNIV_SYNC_DEBUG.
ha_innobase::add_index(), ha_innobase::final_drop_index(): Invoke
row_mysql_lock_data_dictionary(trx) only after row_merge_lock_table().
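A rough sketch of the resulting order of operations (error handling and
the exact prototypes are simplified assumptions, not the actual code in
ha_innobase::add_index()):

        err = row_merge_lock_table(trx, table, LOCK_X); /* may wait long */

        if (err != DB_SUCCESS) {

                return(err);
        }

        /* Latch the dictionary only after the table lock has been
        granted, and keep the critical section short. */
        row_mysql_lock_data_dictionary(trx);

        /* ... create or drop the index ... */

        row_mysql_unlock_data_dictionary(trx);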
innodb-index.test: Add a test with a large number of externally stored
columns. Check that creating prefix indexes on too many columns is rejected.
dict_index_too_big_for_undo(): New function: Check if the undo log may
overflow.
dict_index_add_to_cache(): Return DB_SUCCESS or DB_TOO_BIG_RECORD.
Postpone the creation and linking of some data structures, so that
when dict_index_too_big_for_undo() holds, it will be easier to clean up.
Check the return status in all callers.
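Callers then follow roughly this pattern (a sketch; the parameter list of
dict_index_add_to_cache() shown here is an approximation):

        err = dict_index_add_to_cache(table, index, page_no);

        if (err != DB_SUCCESS) {
                /* DB_TOO_BIG_RECORD: a prefix-indexed column could make
                an undo log record overflow.  Nothing has been linked
                into the cache yet, so cleanup is simple. */
                return(err);
        }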
dict_index_copy(): Remove the prototype, because this static function
will be defined before its first use. Add const qualifier to "table".
dict_index_build_internal_clust(), dict_index_build_internal_non_clust():
Add const qualifier to "table". Correct the comment about setting indexed[].
row_merge_lock_table().
ha_innobase::final_drop_index(): Set the dictionary operation mode to
TRX_DICT_OP_INDEX_MAY_WAIT for the duration of the row_merge_lock_table()
call.
Active transactions must not switch table or index definitions on the fly,
for several reasons, including the following:
* copied indexes do not carry any history or locking information;
that is, rollbacks, read views, and record locking would be broken
* huge potential for race conditions, inconsistent reads and writes,
loss of data, and corruption
Instead of trying to detect whether the table definition was changed during
a transaction,
acquire appropriate locks that protect the creation and dropping of indexes.
innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test
that consistent reads work across dropped indexes.
lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion.
When inserting a record into an index, the table must be at least IX-locked.
However, when an index is being created, an IS-lock on the table is
sufficient.
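The relaxed check can be pictured roughly as follows (a sketch, not the
literal source line):

        /* An insert normally requires at least an IX lock on the table,
        but the thread that builds a new index inserts into it while the
        table is only IS- or S-locked. */
        ut_ad(lock_table_has(trx, index->table, LOCK_IX)
              || lock_table_has(trx, index->table, LOCK_IS));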
row_merge_lock_table(): Add the parameter enum lock_mode mode, which must
be LOCK_X or LOCK_S.
row_merge_drop_table(): Assert that n_mysql_handles_opened == 0.
Unconditionally drop the table.
ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate.
After acquiring an X lock, assert that n_mysql_handles_opened == 1.
Remove the comments about dropping tables in the background.
ha_innobase::final_drop_index(): Acquire an X lock on the table.
dict_table_t: Remove version_number, to_be_dropped, and prebuilts.
ins_node_t: Remove table_version_number.
enum lock_mode: Move the definition from lock0lock.h to lock0types.h.
ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete():
Remove.
row_prebuilt_t: Remove the declaration from row0types.h.
row_drop_table_for_mysql_no_commit(): Always print a warning if a table
was added to the background drop queue.
kernel_mutex must be released before calling this function.
innobase_mysql_end_print_arbitrary_thd(),
innobase_mysql_prepare_print_arbitrary_thd(): Assert that the
kernel_mutex is not being held by the current thread.
Bugfix: Lock the MySQL mutex LOCK_thread_count before accessing
trx->mysql_query_str to avoid race conditions where MySQL sets it to
NULL after we have checked that it is not NULL and before we access it.
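Schematically, the access is now bracketed by the prepare/end calls, which
acquire and release LOCK_thread_count (a sketch; the exact type of
trx->mysql_query_str and the surrounding code are assumptions):

        innobase_mysql_prepare_print_arbitrary_thd();

        if (trx->mysql_query_str && *trx->mysql_query_str) {
                /* Safe: MySQL cannot reset the query string to NULL
                while LOCK_thread_count is held. */
                fputs(*trx->mysql_query_str, file);
        }

        innobase_mysql_end_print_arbitrary_thd();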
Approved by: Marko
Non-functional change:
Move the prototypes of
innobase_mysql_prepare_print_arbitrary_thd() and
innobase_mysql_end_print_arbitrary_thd() from lock0lock.c to
ha_prototypes.h
Suggested by: Marko
Approved by: Marko
is an overlap between BLOB pointers and the modification log or the
zlib stream.
page_zip_decompress_clust_ext(): Remove the improper check. The
d_stream->avail_in cannot be decremented here, because we do not know
at this point if the record is deleted. No space is reserved for the
BLOB pointers in deleted records.
page_zip_decompress_clust(): Check for the overlap here, right before
copying the BLOB pointers.
page_zip_decompress_clust(): Also check that the target column is long
enough, and return FALSE instead of ut_ad() failure.
some decompression functions.
page_zip_apply_log_ext(), page_zip_apply_log(): Call page_zip_fail()
with appropriate diagnostics before returning NULL.
page_zip_decompress_node_ptrs(), page_zip_decompress_sec(),
page_zip_decompress_clust(): When detecting that the zlib stream
followed by the modification log overlaps the trailer, do not
let an assertion fail, but invoke page_zip_fail() and return FALSE.
Corrupt data should never lead to assertion failures in decompression
functions.
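The pattern for handling corrupt input is roughly the following (the
variable names are made up for illustration):

        if (UNIV_UNLIKELY(mod_log_end > trailer_begin)) {
                /* The modification log would overlap the page trailer:
                the page is corrupt.  Diagnose and refuse instead of
                letting a debug assertion fail. */
                page_zip_fail(("page_zip_decompress_clust:"
                               " mod log overlaps trailer\n"));
                return(FALSE);
        }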
allocating compressed page frames or their control blocks. Also note
that if buf_buddy_alloc() is used for allocating a control block,
it must be initialized before releasing buf_pool->mutex.
buf_page_init_for_read(): When the page hash check fails after
buf_buddy_alloc(), free the uninitialized control block before freeing
the compressed page frame. This fixes a potential error in
buf_buddy_relocate_block().
mutex is temporarily released.
buf_LRU_free_block(), buf_buddy_alloc_clean(): Add an output parameter that
will be assigned TRUE when the buffer pool mutex is released.
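Callers can then tell whether the mutex was dropped behind their back
(a sketch; the parameter list of buf_buddy_alloc_clean() is an
approximation):

        ibool   released = FALSE;
        void*   buf = buf_buddy_alloc_clean(size, &released);

        if (released) {
                /* buf_pool->mutex was released and re-acquired inside
                the call: any block pointers fetched earlier may be
                stale and must be re-validated before use. */
        }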
This bug was spotted by Sunny, who also provided the fix.
columns to be up to REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE
bytes in a debug assertion. This assertion could fail since r2159 in
trx_undo_prev_version_build(), because the undo log records for updates
and deletes would contain longer prefixes of externally stored columns.
The assertion failure was reported by Sunny.
dict_table_copy_types(): Initialize the fields to the SQL NULL value.
Document this change in behaviour, and make all callers invoke the
function right after dtuple_create().
dict_create_sys_fields_tuple(): Add a missing "break" statement to the loop
that checks if there are any column prefixes in the index.
row_get_prebuilt_insert_row(): Do not set the fields to the SQL NULL value,
now that dict_table_copy_types() takes care of it.
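That is, callers are expected to follow this order (sketch):

        row = dtuple_create(heap, dict_table_get_n_cols(table));

        /* dict_table_copy_types() now also sets every field to the SQL
        NULL value, so it must be called right after dtuple_create(),
        before any field is assigned. */
        dict_table_copy_types(row, table);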
Write to the undo log long enough prefixes of externally stored columns,
so that purge will not have
to dereference any BLOB pointers, which may be invalid. This will not be
necessary for logging inserts, because inserts are no-ops in purge, and
the record will remain locked during transaction rollback.
TODO: in dict_build_table_def_step() or dict_build_index_def_step(),
prevent the creation of tables with too many columns for which a
prefix index is defined. This is because there is a size limit on undo
log records, and for each prefix-indexed column, the log must store
REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE bytes.
trx_undo_page_report_insert(): Assert that the index is clustered.
trx_undo_page_fetch_ext(): New function, for fetching the BLOB prefix
in trx_undo_page_report_modify().
trx_undo_page_report_modify(): Write long enough prefixes of the externally
stored columns to the undo log.
trx_undo_rec_get_partial_row(): Remove the parameter "ext". Assert that
the undo log contains long enough prefixes of the externally stored columns.
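The assertion amounts to roughly the following (a sketch; the variable
names are illustrative):

        /* For an externally stored column, the update undo record must
        carry at least the longest indexable prefix plus the external
        field reference (BLOB pointer). */
        ut_a(len >= REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE);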
purge_node_t: Remove the field "ext".
innodb.result, innodb.test: Revert the changes in r2145.
The tests that were removed by MySQL
ChangeSet@1.2598.2.6 2007-11-06 15:42:58-07:00 tsmith@hindu.god
were moved to a new test, innodb_autoinc_lock_mode_zero, which is
kept in the MySQL BitKeeper tree.
when row_build() was changed to prefetch all externally stored column
prefixes that occur in ordering fields of an index.
row_build(): Add the parameter col_table for determining which
externally stored columns need to be fetched.
row_merge_read_clustered_index(): Pass new_table as the said parameter,
so that newly added indexes containing column prefix indexes of externally
stored columns will work.
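In row_merge_read_clustered_index() the call then looks roughly like this
(the argument list of row_build() shown here is an approximation):

        /* Prefetch prefixes only of those externally stored columns
        that some index of new_table needs. */
        row = row_build(ROW_COPY_POINTERS, clust_index, rec, offsets,
                        new_table, &ext, row_heap);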
* Change terminology:
wait lock -> requested lock
waited lock -> blocking lock
new: requesting transaction (the trx that owns the requested lock)
new: blocking transaction (the trx that owns the blocking lock)
* Add transaction ids to INFORMATION_SCHEMA.INNODB_LOCK_WAITS. This is
somewhat redundant because transaction ids can be found in INNODB_LOCKS
(which can be joined with INNODB_LOCK_WAITS) but would help users to
write shorter joins (one table less) in some cases where they want to
find which transaction is blocking which.
Suggested by: Ken
Approved by: Heikki
Only add indexed BLOBs to row_ext.
trx_undo_rec_get_partial_row(): Move the BLOB fetching to row_ext_create().
row_build(): Pass only those BLOBs to row_ext_create() that are referenced by
ordering columns of some indexes, similar to trx_undo_rec_get_partial_row().
row_ext_create(): Add the parameter "tuple". Move the implementation
from row0ext.ic to row0ext.c.
row_ext_lookup_ith(), row_ext_lookup(): Return a const pointer. Remove
the parameters "field" and "f_len". Make the row_ext_t* parameter const.
row_ext_t: Remove the field zip_size.
field_ref_zero[]: Declare in btr0types.h instead of btr0cur.h.
row_ext_lookup_low(): Rename to row_ext_cache_fill() and change the
signature.
univ.i: Do not define UNIV_DEBUG, UNIV_ZIP_DEBUG.
btr_cur_del_unmark_for_ibuf(): Use the same comment in both btr0cur.c and
btr0cur.h. Wrap long lines.
Fix a bug where the compressed and the uncompressed page contents end up
with conflicting versions of a record's state. The record in the compressed
page was not being marked as deleted (or undeleted), because we were not
passing the compressed page to the delete-marking function. That function
first marks the uncompressed record and then, if page_zip is not NULL,
also marks the record in the compressed page.
for dropping the index trees, and set the dictionary operation flag, similar
to what ha_innobase::add_index() does. This should ensure correct crash
recovery.
only for those externally stored columns that occur in the ordering columns
of indexes. Prefetch the prefixes of those columns, because the clustered
index record and the BLOBs may have been deleted by the time when the
purge thread needs to read the BLOB prefixes.
row_ext_create(): Add the debug assertion ut_ad(ut_is_2pow(zip_size)).
Fix compiler warnings about differences in signedness that were
introduced in r2114.
row_upd_index_replace_new_col_vals_index_pos(),
row_upd_index_replace_new_col_vals(): Declare "data" as const byte*
instead of const char*, and add casts to the dtype_get_at_most_n_mbchars()
calls.
buf_block_is_uncompressed(): Check that the pointer is aligned. Use the
C modulus operator % instead of ut_align_offset(), because sizeof(buf_block_t)
is not guaranteed to be a power of 2.
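The distinction matters because the mask-based helper assumes a
power-of-2 size (a sketch; chunk->blocks stands for the start of the
chunk's array of block descriptors):

        /* ut_align_offset(ptr, size) computes ((ulint) ptr) & (size - 1),
        which is only meaningful when size is a power of 2.
        sizeof(buf_block_t) need not be, hence the C modulus operator. */
        if (((ulint) block - (ulint) chunk->blocks) % sizeof(buf_block_t)) {

                return(FALSE);  /* not on a block descriptor boundary */
        }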
point to a buffer pool chunk that has been released when resizing the
buffer pool.
buf_block_is_uncompressed(): Check that the pointer is aligned. Thanks
to this check, it is safe to pass an arbitrary pointer as a guess
to buf_page_get_gen().
buf_page_get_release_on_io(): Remove this unused function.
ibuf_build_entry_from_ibuf_rec(): Justify why it is not necessary to
add system columns to the dummy table pointed to by the dummy secondary index.
page_zip_rec_set_deleted(): Add a page_zip_validate() assertion.
inserted, uncommitted clustered index records when determining if a
secondary index record that contains a column prefix of an externally
stored column is referencing the clustered index record.
field_ref_zero[]: A BLOB pointer full of zero, for use in comparisons.
btr_copy_externally_stored_field_prefix(): Assert that the BLOB pointer is set.
row_ext_lookup_ith(), row_ext_lookup(), row_ext_lookup_low(): Document
that field_ref_zero is returned when the BLOB cannot be fetched.
row_ext_lookup_low(): Return field_ref_zero and *len = 0 when the
BLOB pointer is unset.
row_build_index_entry(): Return NULL when a needed BLOB pointer cannot
be dereferenced (row_ext_lookup returns field_ref_zero). Check the
return value for NULL in callers.
row_vers_impl_x_locked_off_kernel(): Avoid comparisons when
row_build_index_entry() returns NULL.
row_vers_old_has_index_entry(): Ignore records for which
row_build_index_entry() returns NULL. The entry should never be NULL
in rollback, but it may be NULL in purge.
row_merge_buf_add(): Assert that row_ext_lookup() does not return
field_ref_zero. The table will be locked during index creation.
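The resulting caller pattern is roughly the following (a sketch of the
check described for row_build_index_entry(); the signature of
row_ext_lookup() is an approximation):

        len = 0;
        field = row_ext_lookup(ext, col_no, &len);

        if (UNIV_UNLIKELY(field == field_ref_zero)) {
                /* The BLOB pointer is unset and the column prefix cannot
                be built.  Purge may skip such a record; rollback should
                never see this case. */
                return(NULL);
        }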