FROM BUFFER POOL
rb://975
approved by: Marko Makela
There is a race in lock_validate() where we try to access a page
without ensuring that the tablespace stays valid during the operation,
i.e. that it is not deleted. This patch fixes that by using an
existing flag (the flag is renamed to make its name more generic,
in line with its new use).
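As an illustration only, a minimal sketch of this kind of flag-based
protection, with hypothetical names (space_t, space_pin, space_unpin)
that are not the actual InnoDB code:

    /* Hypothetical sketch: a tablespace must not be deleted while a
       page of it is being accessed, so accessors pin it first. */
    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
            pthread_mutex_t mutex;
            bool            stop_new_ops;  /* set while the space is being deleted */
            int             n_pending_ops; /* > 0 while pages are being accessed */
    } space_t;

    /* Returns true if the caller may access pages of the space; the
       space then stays valid until space_unpin() is called. */
    static bool space_pin(space_t *space)
    {
            bool ok;

            pthread_mutex_lock(&space->mutex);
            ok = !space->stop_new_ops;
            if (ok) {
                    space->n_pending_ops++;
            }
            pthread_mutex_unlock(&space->mutex);

            return ok;
    }

    static void space_unpin(space_t *space)
    {
            pthread_mutex_lock(&space->mutex);
            space->n_pending_ops--;
            pthread_mutex_unlock(&space->mutex);
    }

The delete path would set stop_new_ops under the mutex and wait for
n_pending_ops to drop to zero before removing the tablespace.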
IN OS_THREAD_EQ
rb://977
approved by: Marko Makela
rw_lock::writer_thread field contains the thread id of the current
x-holder or wait-x thread. This field is uninitialized at lock creation
and is written to for the first time when an attempt is made to x-lock.
Current code considers ::writer_thread a valid memory region only when
the lock is held in x-mode (or there is an x-waiter). This is overkill
and it generates Valgrind warnings.
The fix is to consider ::writer_thread a valid memory region once it
has been written to.
Reasoning:
==========
The ::writer_thread can be safely considered valid because:
* We only ever compare it with the calling thread's own id.
* We only ever do the comparison when the ::recursive flag is set.
* We always unset the ::recursive flag in x-unlock.
* The same thread cannot be unlocking and attempting to lock at the
same time.
* thread_id recycling is not an issue, because before an id is recycled
the thread must leave InnoDB, meaning it must release all locks and
therefore unset the ::recursive flag.
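A simplified, self-contained model of that reasoning (illustrative
names only; the real code uses InnoDB's rw_lock_t with atomics and the
appropriate memory barriers):

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
            bool      recursive;      /* set only after writer_thread is written */
            pthread_t writer_thread;  /* valid once it has been written to */
    } rw_lock_model_t;

    /* x-lock path: the id is written first, then the flag is set, so a
       reader that sees recursive == true never reads an unwritten id. */
    static void set_writer(rw_lock_model_t *lock)
    {
            lock->writer_thread = pthread_self();
            lock->recursive = true;
    }

    /* x-unlock path: only the flag is cleared; the stale id left
       behind is never compared against. */
    static void clear_writer(rw_lock_model_t *lock)
    {
            lock->recursive = false;
    }

    /* The only comparison ever made: against the calling thread's own
       id, and only while the recursive flag is set. */
    static bool caller_is_writer(const rw_lock_model_t *lock)
    {
            return lock->recursive
                    && pthread_equal(lock->writer_thread, pthread_self());
    }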
IN LOCK_VALIDATE()
rb://917
approved by: Marko Makela
In lock_validate() the limit is used to release the kernel_mutex during
the validation, to obey the latching order.
If we just do limit++ then we recheck the same lock on most iterations,
because limit is incremented by one while <space, page_no> will nearly
always be > limit. If we instead set the limit to <space, page_no + 1>
then we actually make progress on each iteration.
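A self-contained sketch of the difference (the lock list here is just a
sorted array; the real code scans the lock hash table under
kernel_mutex and must restart from the limit after releasing it):

    #include <stdio.h>
    #include <stdint.h>

    typedef struct { uint32_t space; uint32_t page_no; } page_id_t;

    static const page_id_t locks[] = {
            {0, 3}, {0, 3}, {0, 7}, {1, 2}, {1, 9}
    };
    #define N_LOCKS (sizeof(locks) / sizeof(locks[0]))

    static int page_id_ge(page_id_t a, page_id_t b)
    {
            return a.space > b.space
                    || (a.space == b.space && a.page_no >= b.page_no);
    }

    /* Find the first lock at or after 'limit'; return 0 when done. */
    static int next_lock_at_or_after(page_id_t limit, page_id_t *found)
    {
            size_t i;

            for (i = 0; i < N_LOCKS; i++) {
                    if (page_id_ge(locks[i], limit)) {
                            *found = locks[i];
                            return 1;
                    }
            }
            return 0;
    }

    int main(void)
    {
            page_id_t limit = {0, 0};
            page_id_t page;

            while (next_lock_at_or_after(limit, &page)) {
                    /* validate all locks on this page; kernel_mutex is
                       released and re-acquired around this step */
                    printf("validating <%u, %u>\n",
                           (unsigned) page.space, (unsigned) page.page_no);

                    /* Old code advanced the limit by one, so the same
                       page was usually revisited.  Resume strictly
                       after the page just validated instead. */
                    limit.space = page.space;
                    limit.page_no = page.page_no + 1;
            }
            return 0;
    }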
truncating, inserting the same set of rows. When a table is
re-created with the same set of rows, the data file size must
not grow.
rb:968
Approved by Marko.
This bug has been there at least since MySQL 4.0.9. (Before 4.0.9, the
code probably was even more severely broken.)
btr_pcur_restore_position(): When cursor restoration fails, move to the
previous or next record before invoking btr_pcur_store_position(),
unless cursor->rel_pos == BTR_PCUR_ON or the record was not a user
record.
This bug can cause skipped records when btr_pcur_store_position() is
called on the last record of a page. A symptom would be record count
mismatch in CHECK TABLE, or failure to find a record to delete-mark or
update or purge. The following operations should be affected by the
bug:
* row_search_for_mysql(): SELECT, UPDATE, REPLACE, CHECK TABLE
(almost anything other than INSERT)
* foreign key CASCADE operations
* row_merge_read_clustered_index(): index creation (since MySQL 5.1
InnoDB Plugin)
* multi-threaded purge (after MySQL 5.5): not sure, but it might fail
to purge some records
Not all callers of btr_pcur_restore_position() are affected.
Anything that asserts or checks that restoration succeeds is
unaffected. For example, cursor restoration on the change buffer tree
should always succeed, because access is protected by additional
latches. Likewise, rollback, or any code that accesses data dictionary
tables while holding dict_sys->mutex, should be safe.
rb:967 approved by Jimmy Yang
storage/innobase/include/sync0rw.ic:
Prerequisite for compiling with gcc4 on solaris: ignore result from
os_compare_and_swap_ulint
storage/myisam/mi_dynrec.c:
Prerequisite for compiling with gcc4 on solaris: cast to void*
also filed as Bug#13146269, Bug#13713178
btr_get_size(): Add mtr_t parameter. Require that the caller S-latches
index->lock. If index->page==FIL_NULL or the index is to be dropped,
return ULINT_UNDEFINED to indicate that the statistics are
unavailable.
dict_update_statistics(): If btr_get_size() returns ULINT_UNDEFINED,
fake the index cardinality statistics.
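A hypothetical sketch of the resulting caller pattern (simplified
stand-ins, not the actual dict0dict code): the size is read while the
index lock is S-latched, and ULINT_UNDEFINED triggers faked statistics:

    #define ULINT_UNDEFINED ((unsigned long) -1)

    struct index_stats {
            unsigned long size_pages;       /* B-tree size in pages */
            unsigned long n_diff_key_vals;  /* cardinality estimate */
    };

    /* Stand-in for btr_get_size(); assumed to be called with
       index->lock S-latched and to return ULINT_UNDEFINED when the
       index is being dropped or index->page == FIL_NULL. */
    static unsigned long get_index_size(const void *index)
    {
            (void) index;
            return ULINT_UNDEFINED;  /* placeholder */
    }

    static void update_statistics(const void *index,
                                  struct index_stats *stats)
    {
            unsigned long size = get_index_size(index);

            if (size == ULINT_UNDEFINED) {
                    /* statistics unavailable: fake them instead of
                       touching a B-tree that may be half dropped */
                    stats->size_pages = 1;
                    stats->n_diff_key_vals = 1;
                    return;
            }

            stats->size_pages = size;
            /* ... otherwise sample the tree for n_diff_key_vals ... */
    }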
dict_index_set_page(): Unused function, remove.
row_drop_table_for_mysql(): Before starting to drop the table, mark
the indexes unavailable in the data dictionary cache while holding
index->lock X-latch.
ha_innobase::prepare_drop_index(), ha_innobase::final_drop_index():
When setting index->to_be_dropped, acquire the index->lock X-latch.
rb:960 approved by Jimmy Yang
This bug was originally filed and fixed as Bug#12612184. The original
fix was buggy, and it was patched by Bug#12704861. Also that patch was
buggy (potentially breaking crash recovery), and both fixes were
reverted.
This fix was not ported to the built-in InnoDB of MySQL 5.1, because
the function signatures of many core functions are different from
InnoDB Plugin and later versions. The block allocation routines and
their callers would have to be changed so that they handle block
descriptors instead of page frames.
When a record is updated so that its size grows, non-updated columns
can be selected for external (off-page) storage. The bug is that the
initially inserted updated record contains an all-zero BLOB pointer to
the field that was not updated. Only after the BLOB pages have been
allocated and written can the valid pointer be written to the record.
Between the release of the page latch in mtr_commit(mtr) after
btr_cur_pessimistic_update() and the re-latching of the page in
btr_pcur_restore_position(), other threads can see the invalid BLOB
pointer consisting of 20 zero bytes. Moreover, if the system crashes
at this point, the situation could persist after crash recovery, and
the contents of the non-updated column would be permanently lost.
The problem is amplified by ROW_FORMAT=DYNAMIC and
ROW_FORMAT=COMPRESSED, which were introduced with
innodb_file_format=barracuda in the InnoDB Plugin, but the bug exists
in all InnoDB versions.
The fix is as follows. After a pessimistic B-tree operation that needs
to write out off-page columns, allocate the pages for these columns in
the mini-transaction that performed the B-tree operation (btr_mtr),
but write the pages in a separate mini-transaction (blob_mtr). Do
mtr_commit(blob_mtr) before mtr_commit(btr_mtr). A quirk: Do not reuse
pages that were previously freed in btr_mtr. Only write the off-page
columns to 'fresh' pages.
In this way, crash recovery will see redo log entries for blob_mtr
before any redo log entry for btr_mtr. It will apply the BLOB page
writes to pages that were marked free at that point. If crash recovery
fails to see all of the btr_mtr redo log, there will be some
unreachable BLOB data in free pages, but the B-tree will be in a
consistent state.
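A schematic, self-contained model of that commit ordering (the
mini-transaction and redo log here are toy stand-ins, not the real
mtr_t; only the ordering matters):

    #include <stdio.h>

    #define MAX_RECS 16

    typedef struct {
            const char *recs[MAX_RECS];
            int         n_recs;
    } mtr_model_t;

    static void mtr_model_log(mtr_model_t *mtr, const char *rec)
    {
            mtr->recs[mtr->n_recs++] = rec;
    }

    /* "Commit": flush the mini-transaction's records to the redo log. */
    static void mtr_model_commit(mtr_model_t *mtr)
    {
            int i;

            for (i = 0; i < mtr->n_recs; i++) {
                    printf("redo: %s\n", mtr->recs[i]);
            }
            mtr->n_recs = 0;
    }

    int main(void)
    {
            mtr_model_t btr_mtr  = { {0}, 0 };  /* B-tree operation */
            mtr_model_t blob_mtr = { {0}, 0 };  /* off-page column writes */

            /* The BLOB pages are allocated in btr_mtr but written in
               blob_mtr, and only to pages that btr_mtr did not free. */
            mtr_model_log(&btr_mtr, "update clustered record, BLOB pointer");
            mtr_model_log(&blob_mtr, "write BLOB page contents");

            /* Commit blob_mtr first: recovery then sees the BLOB pages
               before any redo for the record that points to them. */
            mtr_model_commit(&blob_mtr);
            mtr_model_commit(&btr_mtr);

            return 0;
    }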
btr_page_alloc_low(): Renamed from btr_page_alloc(). Add the parameter
init_mtr. Return an allocated block, or NULL. If init_mtr!=mtr but
the page was already X-latched in mtr, do not initialize the page.
btr_page_alloc(): Wrapper for btr_page_alloc_for_ibuf() and
btr_page_alloc_low().
btr_page_free(): Add a debug assertion that the page was a B-tree page.
btr_lift_page_up(): Return the father block.
btr_compress(), btr_cur_compress_if_useful(): Add the parameter ibool
adjust, for adjusting the cursor position.
btr_cur_pessimistic_update(): Preserve the cursor position when
big_rec will be written and the new flag BTR_KEEP_POS_FLAG is defined.
Remove a duplicate rec_get_offsets() call. Keep the X-latch on
index->lock when big_rec is needed.
btr_store_big_rec_extern_fields(): Replace update_inplace with
an operation code, and local_mtr with btr_mtr. When not doing a
fresh insert and btr_mtr has freed pages, put aside any pages that
were previously X-latched in btr_mtr, and free the pages after
writing out all data. The data must be written to 'fresh' pages,
because btr_mtr will be committed and written to the redo log after
the BLOB writes have been written to the redo log.
btr_blob_op_is_update(): Check if an operation passed to
btr_store_big_rec_extern_fields() is an update or insert-by-update.
fseg_alloc_free_page_low(), fsp_alloc_free_page(),
fseg_alloc_free_extent(), fseg_alloc_free_page_general(): Add the
parameter init_mtr. Return an allocated block, or NULL. If
init_mtr!=mtr but the page was already X-latched in mtr, do not
initialize the page.
xdes_get_descriptor_with_space_hdr(): Assert that the file space
header is being X-latched.
fsp_alloc_from_free_frag(): Refactored from fsp_alloc_free_page().
fsp_page_create(): New function, for allocating, X-latching and
potentially initializing a page. If init_mtr!=mtr but the page was
already X-latched in mtr, do not initialize the page.
fsp_free_page(): Add ut_ad(0) to the error outcomes.
fsp_free_page(), fseg_free_page_low(): Increment mtr->n_freed_pages.
fsp_alloc_seg_inode_page(), fseg_create_general(): Assert that the
page was not previously X-latched in the mini-transaction. A file
segment or inode page should never be allocated in the middle of a
mini-transaction that frees pages, such as btr_cur_pessimistic_delete().
fseg_alloc_free_page_low(): If the hinted page was allocated, skip the
check if the tablespace should be extended. Return NULL instead of
FIL_NULL on failure. Remove the flag frag_page_allocated. Instead,
return directly, because the page would already have been initialized.
fseg_find_free_frag_page_slot() would return ULINT_UNDEFINED on error,
not FIL_NULL. Correct a bogus assertion.
fseg_alloc_free_page(): Redefine as a wrapper macro around
fseg_alloc_free_page_general().
buf_block_buf_fix_inc(): Move the definition from buf0buf.ic to
buf0buf.h, so that it can be called from other modules.
mtr_t: Add n_freed_pages (number of pages that have been freed).
page_rec_get_nth_const(), page_rec_get_nth(): The inverse function of
page_rec_get_n_recs_before(), get the nth record of the record
list. This is faster than iterating the linked list. Refactored from
page_get_middle_rec().
trx_undo_rec_copy(): Add a debug assertion for the length.
trx_undo_add_page(): Return a block descriptor or NULL instead of a
page number or FIL_NULL.
trx_undo_report_row_operation(): Add debug assertions.
trx_sys_create_doublewrite_buf(): Assert that each page was not
previously X-latched.
page_cur_insert_rec_zip_reorg(): Make use of page_rec_get_nth().
row_ins_clust_index_entry_by_modify(): Pass BTR_KEEP_POS_FLAG, so that
the repositioning of the cursor can be avoided.
row_ins_index_entry_low(): Add DEBUG_SYNC points before and after
writing off-page columns. If inserting by updating a delete-marked
record, do not reposition the cursor or commit the mini-transaction
before writing the off-page columns.
row_build(): Tighten a debug assertion about null BLOB pointers.
row_upd_clust_rec(): Add DEBUG_SYNC points before and after writing
off-page columns. Do not reposition the cursor or commit the
mini-transaction before writing the off-page columns.
rb:939 approved by Jimmy Yang
The problem was introduced in 5.5.20 by the fix for Bug 13116225,
which tried to protect against downgrading from a version 5.6.4
database that was created with a page size other than 16k. Version
5.6.4 supports page sizes of 4k and 8k and stamps that page size into
the header page of each tablespace file. Version 5.5.20 attempts to
read that page size from the file header.
But it turns out that only the first system tablespace file has a
reliable flags field in its header. So only ibdata1 can be, or needs
to be, tested for another page size. Extra system tablespace files
like ibdata2, ibdata3, etc. need not and should not be tested, since
their flags field is unreliable.
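A hypothetical sketch of the corrected check (illustrative names, not
the actual startup code): only file 0 of the system tablespace is
examined for a non-default page size:

    #include <stdbool.h>

    #define DEFAULT_PAGE_SIZE 16384UL

    /* Stand-in: read the page size encoded in a data file's header page. */
    static unsigned long read_header_page_size(int file_index)
    {
            (void) file_index;
            return DEFAULT_PAGE_SIZE;  /* placeholder */
    }

    static bool system_tablespace_page_size_ok(int n_data_files)
    {
            /* Only ibdata1 (file 0) has a reliable flags field;
               ibdata2, ibdata3, ... are skipped on purpose. */
            if (n_data_files > 0
                && read_header_page_size(0) != DEFAULT_PAGE_SIZE) {
                    return false;  /* created by 5.6.4+ with 4k/8k pages */
            }
            return true;
    }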
OF WIDE RECORDS
row_ins_index_entry_low(), row_upd_clust_rec(): Make a redo log
checkpoint if a DEBUG flag is set. Add DEBUG_SYNC around
btr_store_big_rec_extern_fields().
rb:946 approved by Jimmy Yang
During fast index creation (FIC) error handling, trx->error_state was
not being reset to DB_SUCCESS after a failure, before attempting the
next DDL SQL operation. This reset to DB_SUCCESS is effectively a
requirement, though it is not explicitly stated anywhere.
The fix is to reset it to DB_SUCCESS in row0merge.cc if the
row_merge_rename_indexes or row_merge_drop_index functions fail, and
also to reset it to DB_SUCCESS at transaction commit.
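An illustrative sketch of the reset pattern (simplified stand-in
types; the real code uses trx_t and lives in row0merge.cc):

    typedef enum { DB_SUCCESS = 0, DB_ERROR } db_err_model;

    typedef struct {
            db_err_model error_state;
    } trx_model_t;

    /* Stand-in for a DDL step such as row_merge_rename_indexes() or
       row_merge_drop_index(). */
    static db_err_model merge_ddl_step(trx_model_t *trx)
    {
            (void) trx;
            return DB_ERROR;  /* pretend the step failed */
    }

    static void run_ddl_step(trx_model_t *trx)
    {
            db_err_model err = merge_ddl_step(trx);

            if (err != DB_SUCCESS) {
                    /* ... report the error to the caller ... */

                    /* then clear the state so that the next DDL SQL
                       operation does not start with a stale error */
                    trx->error_state = DB_SUCCESS;
            }
    }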
rb://935 Approved by Jimmy Yang.
The actual Bug#11754376 does not exist in MySQL 5.5, because at startup
we drop entries for temporary tables from the InnoDB dictionary cache
(only if ROW_FORMAT is not REDUNDANT). Nevertheless, the bug in
normalize_table_name_low() is present, so we fix it.
GRACEFUL SHUTDOWN
During startup MySQL picks up .frm files from the tmpdir directory and
tries to drop those tables in the storage engine.
The problem is that when tmpdir ends in / then ha_innobase::delete_table()
is passed a string like "/var/tmp//#sql123", which it wrongly normalizes
to "/#sql123" before calling row_drop_table_for_mysql(), which of course
fails to delete the table entry from the InnoDB dictionary cache.
ha_innobase::delete_table() returns an error, but MySQL nevertheless wipes
away the .frm file and the entry in the InnoDB dictionary cache remains
orphaned, with no easy way to remove it.
The "no easy" way to remove it is to create a similar temporary table
again, copy its .frm file to tmpdir under "#sql123.frm" and restart mysqld
with tmpdir=/var/tmp (no trailing slash). This way MySQL will pick up the
.frm file after restart and will try to issue DROP TABLE for
"/var/tmp/#sql123" (notice: no double slash), ha_innobase::delete_table()
will normalize it to "tmp/#sql123" and row_drop_table_for_mysql() will
successfully remove the table entry from the dictionary cache.
The solution is to fix normalize_table_name_low() to normalize strings
like "/var/tmp//table" correctly to "tmp/table".
This patch also adds a test function which invokes
normalize_table_name_low() with various inputs to make sure it works
correctly, and an mtr test that calls this test function.
Reviewed by: Marko (http://bur03.no.oracle.com/rb/r/929/)
CORRUPTED WHEN RUN CONCURRENTLY WITH
ISSUE: Table corruption due to concurrent queries.
Different threads run CHECK and REPAIR queries
along with INSERT. Locks are not properly acquired
in the REPAIR query, so rows can be inserted in the
middle of the repair.
SOLUTION: A mutex lock is acquired before the
repair call, so concurrent queries cannot
affect the call to repair.
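A schematic illustration of the fix (simplified; the actual code
synchronizes on the MyISAM share): the repair path takes the same lock
as writers, so an INSERT cannot run in the middle of a repair:

    #include <pthread.h>

    static pthread_mutex_t share_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void insert_row(void)
    {
            pthread_mutex_lock(&share_mutex);
            /* ... write the row and update the index ... */
            pthread_mutex_unlock(&share_mutex);
    }

    static void repair_table(void)
    {
            /* acquire the mutex before the repair call, so concurrent
               insert_row() calls cannot interleave with the rebuild */
            pthread_mutex_lock(&share_mutex);
            /* ... rebuild the data file and indexes ... */
            pthread_mutex_unlock(&share_mutex);
    }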
ON 64 BIT MACHINES
PROBLEM: When sorting the index during repair of
MyISAM tables, the value of myisam_sort_buffer_size
could not be set greater than 4GB, due to improper
casting of the buffer size variable.
SOLUTION: Proper casting of the buffer size variable.
myisam_sort_buffer_size was changed to unsigned
long long to handle sizes > 4GB on
Linux as well as Windows.
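A small self-contained demonstration of the truncation (this is the
general C behaviour, not the MyISAM code itself): on builds where
unsigned long is 32 bits, such as 64-bit Windows, casting the buffer
size to it caps the value below 4GB:

    #include <stdio.h>

    int main(void)
    {
            unsigned long long requested = 8ULL * 1024 * 1024 * 1024; /* 8GB */

            unsigned long      narrow = (unsigned long) requested; /* may truncate */
            unsigned long long wide   = requested;                 /* keeps 8GB */

            printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
            printf("narrow = %lu\n", narrow);   /* 0 if unsigned long is 32-bit */
            printf("wide   = %llu\n", wide);

            return 0;
    }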