is not indexed.
btr_search_update_hash_ref(), btr_search_drop_page_hash_index(),
btr_search_build_page_hash_index(), btr_search_update_hash_on_delete(),
btr_search_update_hash_node_on_insert(), btr_search_update_hash_on_insert(),
btr_search_validate():
Assert that hashed blocks do not belong to the insert buffer tree.
btr_search_move_or_delete_hash_entries():
When invoked on the insert buffer tree, assert that neither block is hashed.
buf_pool->mutex: Rename to buf_pool_mutex, so that the wrappers will have
to be used when changes are merged from other source trees.
buf_pool->zip_mutex: Rename to buf_pool_zip_mutex.
buf_pool_mutex_own(), buf_pool_mutex_enter(), buf_pool_mutex_exit():
Wrappers for buf_pool_mutex.
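A minimal sketch of such wrappers, assuming they map directly onto the
generic mutex_own()/mutex_enter()/mutex_exit() primitives; the exact
definitions are not spelled out in this log:
    /* Illustrative sketch only: possible wrappers around the
    renamed buf_pool_mutex. */
    #define buf_pool_mutex_own()    mutex_own(&buf_pool_mutex)
    #define buf_pool_mutex_enter()  mutex_enter(&buf_pool_mutex)
    #define buf_pool_mutex_exit()   mutex_exit(&buf_pool_mutex)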
For some reason, GCC 4.2.1 ignores casts (for removing constness)
in calls to inline functions.
page_align(), ut_align_down(): Make the parameter const void*, but still
return a non-const pointer. This is ugly, but these functions cannot be
replaced with a const-preserving macro in a portable way, given that
the pointer argument is not always pointing to bytes.
buf_block_get_page_zip(): Implement as a const-preserving macro.
buf_frame_get_page_zip(), buf_block_align(): Add const qualifiers.
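A sketch of the const-preserving macro idea behind buf_block_get_page_zip():
because the macro body only mentions its argument, a const buf_block_t*
yields a pointer to const, while a non-const pointer stays non-const. The
member name page_zip below is an assumption:
    /* Sketch only; the real field layout may differ. */
    #define buf_block_get_page_zip(block)                           \
            ((block)->page_zip.data ? &(block)->page_zip : NULL)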
lock_rec_get_prev(): Silence GCC 4.2.1 warnings.
mlog_write_initial_log_record(), mlog_write_initial_log_record_fast(),
mtr_memo_contains(): Add const qualifier to the pointer.
page_header_get_ptr(): Rewrite as page_header_get_offs(), and implement
page_header_get_ptr() as a macro that calls this function.
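A sketch of the split, assuming page_header_get_offs() returns the byte
offset stored in the given page header field and the macro merely turns a
non-zero offset back into a pointer; the exact body is an assumption:
    #define page_header_get_ptr(page, field)                        \
            (page_header_get_offs(page, field)                      \
             ? page + page_header_get_offs(page, field)             \
             : NULL)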
offsets_[] arrays, as suggested by Vasil.
rec_offs_set_n_alloc(): Declare as a public function. Assert that
n_alloc > REC_OFFS_HEADER_SIZE.
rec_offs_get_n_alloc(): Assert that n_alloc > REC_OFFS_HEADER_SIZE.
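A sketch of the added assertions, assuming n_alloc is kept in the first
slot of the offsets array as before; the exact function bodies are
assumptions:
    UNIV_INLINE
    void
    rec_offs_set_n_alloc(ulint* offsets, ulint n_alloc)
    {
            ut_ad(offsets);
            ut_ad(n_alloc > REC_OFFS_HEADER_SIZE);
            offsets[0] = n_alloc;
    }

    UNIV_INLINE
    ulint
    rec_offs_get_n_alloc(const ulint* offsets)
    {
            ulint   n_alloc = offsets[0];
            ut_ad(n_alloc > REC_OFFS_HEADER_SIZE);
            return(n_alloc);
    }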
buf_page_get_gen(). This saves one mutex operation per block request.
buf_page_get_gen(), various macros and functions: Add parameter zip_size.
btr_node_ptr_get_child(): Add parameter index.
fil_space_get_latch(): Add optional output parameter zip_size.
fil_space_get_zip_size(): Return 0 for space id==0, because the
system tablespace is never compressed.
fsp_header_init(): Remove the parameter zip_size.
ibuf_free_excess_pages(): Remove the parameter zip_size.
trx_rseg_t, trx_undo_t: Add field zip_size.
xdes_lst_get_next(): Remove, unused.
buf_block_t. Move the fields "hash" and "file_page_was_freed" from
buf_block_t to buf_page_t.
buf_page_in_file(): New function, for checking block state in assertions.
and block->space with buf_block_get_state(block), buf_block_get_page_no(block),
and buf_block_get_space(block).
enum buf_block_state: Replaces the #define'd buf_block_t.state values.
buf_block_get_state(): New function.
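A sketch of the new state handling. The enumerator list mirrors the old
#define'd names as an assumption, and the real buf_page_in_file() may accept
buf_page_t and recognize additional states:
    enum buf_block_state {
            BUF_BLOCK_NOT_USED,      /* block is on the free list */
            BUF_BLOCK_READY_FOR_USE, /* just allocated */
            BUF_BLOCK_FILE_PAGE,     /* contains a buffered file page */
            BUF_BLOCK_MEMORY,        /* auxiliary memory block */
            BUF_BLOCK_REMOVE_HASH    /* hash index to be dropped before
                                     relinking to the free list */
    };

    UNIV_INLINE
    enum buf_block_state
    buf_block_get_state(const buf_block_t* block)
    {
            return(block->state);
    }

    UNIV_INLINE
    ibool
    buf_page_in_file(const buf_block_t* block)
    {
            return(buf_block_get_state(block) == BUF_BLOCK_FILE_PAGE);
    }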
buf_block_get_frame(): Add __attribute__((const)).
hash index, because it might occupy the chunk we would like to free.
TODO: In btr_search_check_free_space_in_heap(), release the block if
btr_search_latch is not immediately available.
buf_pool_shrink(): Split from buf_pool_resize().
btr_search_disabled: New variable, similar to srv_use_adaptive_hash_indexes,
which was removed earlier.
btr_search_disable(): New function: disable and purge the adaptive hash index.
btr_search_enable(): New function: enable the adaptive hash index.
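A sketch of how the new switch might be consulted by the hash index
maintenance code; the guard placement and the function name
btr_search_example_update() are purely hypothetical:
    extern ibool    btr_search_disabled;    /* assumed to be an ibool flag */

    /* Hypothetical caller-side check. */
    void
    btr_search_example_update(void)
    {
            if (UNIV_UNLIKELY(btr_search_disabled)) {

                    return; /* the adaptive hash index is switched off */
            }

            /* ... normal adaptive hash index maintenance ... */
    }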
ha_clear(): New function: Empty a hash table and free the memory heaps.
when assigning node->data.
ha_delete(), ha_search_and_delete_if_found(), ha_remove_all_nodes_to_page():
Remove the parameter buf_block_t* block, now that it is stored within the
hash data structure in debug builds.
an assertion failure in debug builds when a context switch occurred in
buf_LRU_search_and_free_block() before the call to
btr_search_drop_page_hash_index() managed to acquire the mutexes again.
ha_node_t: Add the field buf_block_t* block.
ha_search_and_update_if_found(): Rename to ha_search_and_update_if_found_func()
with added buf_block_t* parameter in debug builds. Define the wrapper macro
ha_search_and_update_if_found() that always takes the buf_block_t* parameter.
ha_insert_for_fold(): Rename to ha_insert_for_fold_func()
with added buf_block_t* parameter in debug builds. Define the wrapper macro
ha_insert_for_fold() that always takes the buf_block_t* parameter.
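A sketch of the wrapper macro pattern, assuming a plain #ifdef UNIV_DEBUG
guard; the parameter order is an assumption:
    #ifdef UNIV_DEBUG
    # define ha_insert_for_fold(t, f, b, d)                         \
            ha_insert_for_fold_func(t, f, b, d)
    # define ha_search_and_update_if_found(t, f, d, b, n)           \
            ha_search_and_update_if_found_func(t, f, d, b, n)
    #else /* UNIV_DEBUG */
    # define ha_insert_for_fold(t, f, b, d)                         \
            ha_insert_for_fold_func(t, f, d)
    # define ha_search_and_update_if_found(t, f, d, b, n)           \
            ha_search_and_update_if_found_func(t, f, d, n)
    #endif /* UNIV_DEBUG */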
Instead, get buf_block_t* as a parameter.
Without this patch, buf_page_hash_get() would return NULL in
buf_block_align(). The function buf_LRU_search_and_free_block()
invokes buf_LRU_block_remove_hashed_page(), which removes the
hash mapping needed by buf_page_hash_get().
btr_cur_t: Move page_block to page_cur_t::block.
page_cur_get_block(), page_cur_get_page_zip(): New functions.
page_cur_position(): Add parameter block.
Remove many page_zip parameters, now that there is page_cur_get_page_zip().
Replace some page, page_zip parameters with block.
Add some const qualifiers to function parameters and remove casts.
PAGE_HEAP_NO_INFIMUM, PAGE_HEAP_NO_SUPREMUM, PAGE_HEAP_NO_USER_LOW:
New constants.
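A sketch of the new constants, assuming the conventional heap numbering in
which the infimum and supremum records occupy the first two heap slots:
    #define PAGE_HEAP_NO_INFIMUM    0       /* page infimum */
    #define PAGE_HEAP_NO_SUPREMUM   1       /* page supremum */
    #define PAGE_HEAP_NO_USER_LOW   2       /* first user record */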
Replace some cursor code in low-level diagnostic functions with direct
manipulation of rec, because buf_block_t::buf_fix_count may be 0 when these
functions are called, and debug assertions would fail.
accessors returning pointers with macros that preserve const qualifiers.
In UNIV_DEBUG builds, retain the accessors and cast away constness there.
dfield_get_type(), dfield_get_data(), dtuple_get_nth_field(),
dict_table_get_nth_col(), dict_table_get_sys_col(): Implement as macros
unless UNIV_DEBUG is defined.
rec_get_nth_field(): Replace with rec_get_nth_field_offs(), which does not
do pointer arithmetic. Implement rec_get_nth_field() as a macro.
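A sketch of the two patterns described above. The debug variant keeps a
function so assertions can fire and constness is cast away there; the
non-debug variant is a const-preserving macro. Member names and assertion
bodies are assumptions:
    #ifdef UNIV_DEBUG
    UNIV_INLINE
    dtype_t*
    dfield_get_type(const dfield_t* field)
    {
            ut_ad(field);
            return((dtype_t*) &field->type);
    }
    #else /* UNIV_DEBUG */
    # define dfield_get_type(field) (&(field)->type)
    #endif /* UNIV_DEBUG */

    /* rec_get_nth_field() becomes plain pointer arithmetic on top of
    rec_get_nth_field_offs(), which returns the field offset and sets
    the length. */
    #define rec_get_nth_field(rec, offsets, n, len)                 \
            ((rec) + rec_get_nth_field_offs(offsets, n, len))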
passed as TRUE.
Enclose hash_table_t::adaptive and buf_block_t::n_pointers in
#ifdef UNIV_DEBUG.
btr_search_drop_page_hash_index(): Enclose the corruption check
(which depends on buf_block_t::n_pointers) in #ifdef UNIV_DEBUG.
ibuf_reset_free_bits(): Remove, as there already is a similar function
ibuf_reset_free_bits_with_type().
ibuf_reset_free_bits_with_type(), ibuf_set_free_bits(),
ibuf_update_free_bits_if_full(), btr_leaf_page_release(),
buf_page_make_young(): Replace page_t with buf_block_t.
btr_compress(): Replace btr_page_get() with btr_block_get().
with page_get_page_no() and page_get_space_id(). We want to avoid
buf_block_align() calls, and the page_no and space_id are now stamped
on the pages early on.
Replace ut_ad(mtr_memo_contains(mtr, buf_block_align(ptr), ...))
with ut_ad(mtr_memo_contains_page(mtr, ptr, ...)) in order to reduce the
number of buf_block_align() calls.
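A sketch of how the replacement macro could keep the remaining
buf_block_align() call in one place; the exact definition is an assumption:
    /* Debug-only helper: callers pass the page frame pointer and the
    alignment to the block happens inside the macro. */
    #define mtr_memo_contains_page(mtr, ptr, type)                  \
            mtr_memo_contains(mtr, buf_block_align(ptr), type)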
buf_page_print(): Also print compressed pages. Add parameter zip_size.
buf_flush_init_for_writing(): Stamp the fields on a compressed B-tree index
page.
Add the header field FIL_PAGE_ZBLOB_SPACE_ID as an alias of FIL_PAGE_PREV.
page_zip_calc_checksum(): New function.
page_zip_compress(): Avoid copying the fields that are written in
buf_flush_init_for_writing().
page_zip_header_cmp(): New function for comparing those fields of the
page header that will not be written in buf_flush_init_for_writing().