------------------------------------------------------------------------
r3607 | marko | 2008-12-30 22:33:31 +0200 (Tue, 30 Dec 2008) | 20 lines

branches/zip: Remove the dependency on the MySQL HASH table
implementation. Use the InnoDB hash table for keeping track of
INNOBASE_SHARE objects.

struct st_innobase_share: Make table_name const uchar*. Add the member
table_name_hash.

innobase_open_tables: Change the type from HASH to hash_table_t*.

innobase_get_key(): Remove.

innobase_fold_name(): New function, for computing the fold value for
the InnoDB hash table.

get_share(), free_share(): Use the InnoDB hash functions.

innobase_end(): Free innobase_open_tables before shutting down InnoDB.
Shutting down InnoDB will invalidate all memory allocated via InnoDB.

rb://65 approved by Heikki Tuuri. This addresses Issue #104.
------------------------------------------------------------------------
r3608 | marko | 2008-12-30 22:45:04 +0200 (Tue, 30 Dec 2008) | 22 lines

branches/zip: When setting the PAGE_LEVEL of a compressed B-tree page
from or to 0, compress the page at the same time. This is necessary,
because the column information stored on the compressed page will
differ between leaf and non-leaf pages. Leaf pages are identified by
PAGE_LEVEL=0. This bug was reported as Issue #150.

Document the similarity between btr_page_create() and
btr_page_empty(). Make the function signature of btr_page_empty()
identical with btr_page_create(). (This will add the parameter
"level".)

btr_root_raise_and_insert(): Replace some code with a call to
btr_page_empty().

btr_attach_half_pages(): Assert that the page level has already been
set on both block and new_block. Do not set it again.

btr_discard_only_page_on_level(): Document that this function is
probably never called. Make it work on any height tree. (Tested on a
2-high tree by disabling btr_lift_page_up().)

rb://68
------------------------------------------------------------------------
r3612 | marko | 2009-01-02 11:02:44 +0200 (Fri, 02 Jan 2009) | 14 lines

branches/zip: Merge c2998 from branches/6.0, so that the same InnoDB
Plugin source tree will work both under 5.1 and 6.0. Do not add the
test case innodb_ctype_ldml.test, because it would not work under
MySQL 5.1.

Refuse to create tables whose columns contain collation IDs above 255.
This removes an assertion failure that was introduced in WL#4164
(Two-byte collation IDs).

create_table_def(): Do not fail an assertion if a column contains a
charset-collation ID greater than 256. Instead, issue an error and
refuse to create the table.

The original change (branches/6.0 r2998) was rb://51 approved by
Calvin Sun.
------------------------------------------------------------------------
r3613 | inaam | 2009-01-02 15:10:50 +0200 (Fri, 02 Jan 2009) | 6 lines

branches/zip: Implement the parameter innodb_use_sys_malloc (false by
default), for disabling InnoDB's internal memory allocator and using
system malloc/free instead.

rb://62 approved by Marko
------------------------------------------------------------------------
r3614 | marko | 2009-01-02 15:55:12 +0200 (Fri, 02 Jan 2009) | 1 line

branches/zip: ChangeLog: Document r3608 and r3613.
------------------------------------------------------------------------
r3615 | marko | 2009-01-02 15:57:51 +0200 (Fri, 02 Jan 2009) | 1 line

branches/zip: ChangeLog: Clarify the impact of r3608.
------------------------------------------------------------------------
r3616 | marko | 2009-01-03 00:23:30 +0200 (Sat, 03 Jan 2009) | 1 line

branches/zip: srv_suspend_mysql_thread(): Add some clarifying comments.
------------------------------------------------------------------------
r3618 | marko | 2009-01-05 12:54:53 +0200 (Mon, 05 Jan 2009) | 15 lines

branches/zip: Merge revisions 3598:3601 from branches/5.1:

------------------------------------------------------------------------
r3601 | marko | 2008-12-22 16:05:19 +0200 (Mon, 22 Dec 2008) | 9 lines

branches/5.1: Make
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
a true replacement of
SET GLOBAL INNODB_LOCKS_UNSAFE_FOR_BINLOG=1.
This fixes an error that was introduced in r370, causing
semi-consistent read not to unlock rows in READ COMMITTED mode.
(Bug #41671, Issue #146)

rb://67 approved by Heikki Tuuri
------------------------------------------------------------------------
------------------------------------------------------------------------
r3623 | vasil | 2009-01-06 09:56:32 +0200 (Tue, 06 Jan 2009) | 7 lines

branches/zip: Add a patch to fix the failing main.variables mysql-test.
It started failing after the variable innodb_use_sys_malloc was added,
because it matches '%alloc%' and the test is badly written: it expects
that no new variables like that will ever be added.
------------------------------------------------------------------------
r3795 | marko | 2009-01-07 16:17:47 +0200 (Wed, 07 Jan 2009) | 7 lines

branches/zip: row_merge_tuple_cmp(): Do not report a duplicate key
value if any of the fields are NULL. While the tuples are equal in the
sorting order, SQL NULL is defined to be logically inequal to anything
else. (Bug #41904)

rb://70 approved by Heikki Tuuri
------------------------------------------------------------------------
r3796 | marko | 2009-01-07 16:19:32 +0200 (Wed, 07 Jan 2009) | 1 line

branches/zip: Add the tests that were forgotten from r3795.
------------------------------------------------------------------------
r3797 | marko | 2009-01-07 16:22:18 +0200 (Wed, 07 Jan 2009) | 22 lines

branches/zip: Do not call trx_allocate_for_mysql() directly, but use
helper functions that initialize some members of the transaction
struct. (Bug #41680)

innobase_trx_init(): New function: initialize some fields of a
transaction struct from a MySQL THD object.

innobase_trx_allocate(): New function: allocate and initialize a
transaction struct.

check_trx_exists(): Use the above two functions.

ha_innobase::delete_table(), ha_innobase::rename_table(),
ha_innobase::add_index(), ha_innobase::final_drop_index():
Use innobase_trx_allocate().

innobase_drop_database(): In the Windows plugin, initialize the trx_t
specially, because the THD is not available. Otherwise, use
innobase_trx_allocate().

rb://69 accepted by Heikki Tuuri
------------------------------------------------------------------------
r3798 | marko | 2009-01-07 16:42:42 +0200 (Wed, 07 Jan 2009) | 8 lines

branches/zip: row_merge_drop_temp_indexes(): Do not lock the rows of
SYS_INDEXES when looking for partially created indexes. Use the
transaction isolation level READ UNCOMMITTED to avoid interfering with
locks held by incomplete transactions that will be rolled back in a
subsequent step in the recovery. (Issue #152)

Approved by Heikki Tuuri
------------------------------------------------------------------------
r3852 | vasil | 2009-01-08 22:10:10 +0200 (Thu, 08 Jan 2009) | 4 lines

branches/zip: Add ChangeLog entries for r3795 r3796 r3797 r3798.
------------------------------------------------------------------------
r3866 | marko | 2009-01-09 15:09:51 +0200 (Fri, 09 Jan 2009) | 2 lines

branches/zip: buf_flush_try_page(): Move some common code from each
switch case before the switch block.
------------------------------------------------------------------------
r3867 | marko | 2009-01-09 15:13:14 +0200 (Fri, 09 Jan 2009) | 2 lines

branches/zip: buf_flush_try_page(): Introduce the variable
is_compressed for caching the result of
buf_page_get_state(bpage) == BUF_BLOCK_FILE_PAGE.
------------------------------------------------------------------------
r3868 | marko | 2009-01-09 15:40:11 +0200 (Fri, 09 Jan 2009) | 4 lines

branches/zip: buf_flush_insert_into_flush_list(),
buf_flush_insert_sorted_into_flush_list(): Remove unused code. Change
the parameter to buf_block_t* block and assert that
block->state == BUF_BLOCK_FILE_PAGE. This is part of Issue #155.
------------------------------------------------------------------------
r3873 | marko | 2009-01-09 22:27:40 +0200 (Fri, 09 Jan 2009) | 17 lines

branches/zip: Some non-functional changes related to Issue #155.

buf_page_struct: Note that space and offset are also protected by
buf_pool_mutex. They are only assigned to by
buf_block_set_file_page(). Thus, it suffices for buf_flush_batch() to
hold just buf_pool_mutex when checking these fields.

buf_flush_try_page(): Rename "locked" to "is_s_latched", per Heikki's
request.

buf_flush_batch(): Move the common statement mutex_exit(block_mutex)
from all if-else if-else branches before the if block. Remove the
redundant test (buf_pool->init_flush[flush_type] == FALSE) that was
apparently copied from buf_flush_write_complete().

buf_flush_write_block_low(): Note why it is safe not to hold
buf_pool_mutex or block_mutex. Enumerate the assumptions in debug
assertions.
------------------------------------------------------------------------
r3874 | marko | 2009-01-09 23:09:06 +0200 (Fri, 09 Jan 2009) | 4 lines

branches/zip: Add comments related to Issue #155.

buf_flush_try_page(): Note why it is safe to access bpage without
holding buf_pool_mutex or block_mutex.
------------------------------------------------------------------------
r3875 | marko | 2009-01-09 23:15:12 +0200 (Fri, 09 Jan 2009) | 11 lines

branches/zip: Non-functional change: Tighten debug assertions and
remove dead code.

buf_flush_ready_for_flush(), buf_flush_try_page(): Assert that
flush_type is one of BUF_FLUSH_LRU or BUF_FLUSH_LIST. The flush_type
comes from buf_flush_batch(), which already asserts this. The
assertion holds for all calls in the source code.

buf_flush_try_page(): Remove the dead case BUF_FLUSH_SINGLE_PAGE of
switch (flush_type).
------------------------------------------------------------------------
r3879 | marko | 2009-01-12 12:46:44 +0200 (Mon, 12 Jan 2009) | 14 lines

branches/zip: Simplify the flushing of dirty pages from the buffer
pool.

buf_flush_try_page(): Rename to buf_flush_page(), and change the
return type to void. Replace the parameters space, offset with bpage,
and remove the second page hash lookup. Note and assert that both
buf_pool_mutex and block_mutex must now be held upon entering the
function. They will still be released by this function.

buf_flush_try_neighbors(): Replace buf_flush_try_page() with
buf_flush_page(). Make the logic easier to follow by not negating the
precondition of buf_flush_page().

rb://73 approved by Sunny Bains. This is related to Issue #157.
------------------------------------------------------------------------
r3880 | marko | 2009-01-12 13:24:37 +0200 (Mon, 12 Jan 2009) | 2 lines

branches/zip: buf_flush_page(): Fix a comment that should have been
fixed in r3879. Spotted by Sunny.
------------------------------------------------------------------------
r3881 | marko | 2009-01-12 14:25:22 +0200 (Mon, 12 Jan 2009) | 2 lines

branches/zip: buf_page_get_newest_modification(): Use the block mutex
instead of the buffer pool mutex. This is related to Issue #157.
------------------------------------------------------------------------
r3882 | marko | 2009-01-12 14:40:08 +0200 (Mon, 12 Jan 2009) | 3 lines

branches/zip: struct mtr_struct: Remove the unused field magic_n
unless UNIV_DEBUG is defined. mtr->magic_n is only assigned to and
checked in UNIV_DEBUG builds.
------------------------------------------------------------------------
r3883 | marko | 2009-01-12 14:48:59 +0200 (Mon, 12 Jan 2009) | 1 line

branches/zip: Non-functional change: Use ut_d when assigning to
mtr->state.
------------------------------------------------------------------------
r3884 | marko | 2009-01-12 18:56:11 +0200 (Mon, 12 Jan 2009) | 16 lines

branches/zip: Non-functional change: Add some debug assertions and
comments.

buf_page_t: Note that the LRU fields are protected by buf_pool_mutex
only, not block->mutex or buf_pool_zip_mutex.

buf_page_get_freed_page_clock(): Note that this is sometimes invoked
without mutex protection.

buf_pool_get_oldest_modification(): Note that the result may be out of
date.

buf_page_get_LRU_position(), buf_page_is_old(): Assert that the buffer
pool mutex is being held.

buf_page_release(): Assert that dirty blocks are in the flush list.
------------------------------------------------------------------------
r3896 | marko | 2009-01-13 09:30:26 +0200 (Tue, 13 Jan 2009) | 2 lines

branches/zip: buf_flush_try_neighbors(): Fix a bug that was introduced
in r3879 (rb://73).
------------------------------------------------------------------------
r3900 | marko | 2009-01-13 10:32:24 +0200 (Tue, 13 Jan 2009) | 1 line

branches/zip: Fix some comments to say buf_pool_mutex.
------------------------------------------------------------------------
r3907 | marko | 2009-01-13 11:54:01 +0200 (Tue, 13 Jan 2009) | 3 lines

branches/zip: row_merge_create_temporary_table(): On error,
row_create_table_for_mysql() already frees new_table. Do not attempt
to free it again.
------------------------------------------------------------------------
r3908 | marko | 2009-01-13 12:34:32 +0200 (Tue, 13 Jan 2009) | 1 line

branches/zip: Enable HASH_ASSERT_OWNED independently of
UNIV_SYNC_DEBUG.
------------------------------------------------------------------------
r3914 | marko | 2009-01-13 21:46:22 +0200 (Tue, 13 Jan 2009) | 37 lines

branches/zip: In hash table lookups, assert that the traversed items
satisfy some conditions when UNIV_DEBUG is defined.

HASH_SEARCH(): New parameter: ASSERTION. All users will pass an
appropriate ut_ad() or nothing.

dict_table_add_to_columns(): Assert that the table being added to the
data dictionary cache is not already being pointed to by the name_hash
and id_hash tables.

HASH_SEARCH_ALL(): New macro, for use in dict_table_add_to_columns().

dict_mem_table_free(): Set ut_d(table->cached = FALSE), so that we can
check ut_ad(table->cached) when traversing the hash tables, as in
HASH_SEARCH(name_hash, dict_sys->table_hash, ...) and
HASH_SEARCH(id_hash, dict_sys->table_id_hash, ...).

dict_table_get_low(), dict_table_get_on_id_low(): Assert
ut_ad(!table || table->cached).

fil_space_get_by_id(): Check
ut_ad(space->magic_n == FIL_SPACE_MAGIC_N) in
HASH_SEARCH(hash, fil_system->spaces, ...).

fil_space_get_by_name(): Check
ut_ad(space->magic_n == FIL_SPACE_MAGIC_N) in
HASH_SEARCH(name_hash, fil_system->name_hash, ...).

buf_buddy_block_free(): Check that the blocks are in valid state in
HASH_SEARCH(hash, buf_pool->zip_hash, ...).

buf_page_hash_get(): Check that the blocks are in valid state in
HASH_SEARCH(hash, buf_pool->page_hash, ...).

get_share(), free_share(): Check ut_ad(share->use_count > 0) in
HASH_SEARCH(table_name_hash, innobase_open_tables, ...).

This was posted as rb://75 for tracking down errors similar to
Issue #153.
------------------------------------------------------------------------
r3931 | marko | 2009-01-14 16:06:22 +0200 (Wed, 14 Jan 2009) | 26 lines

branches/zip: Merge revisions 3601:3930 from branches/5.1:

------------------------------------------------------------------------
r3911 | sunny | 2009-01-13 14:15:24 +0200 (Tue, 13 Jan 2009) | 13 lines

branches/5.1: Fix Bug#38187 Error 153 when creating savepoints.
InnoDB previously treated savepoints as a stack, e.g.,
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- This would delete b and c.

This fix changes the behavior to:
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- Does not delete savepoint c
------------------------------------------------------------------------
r3930 | marko | 2009-01-14 15:51:30 +0200 (Wed, 14 Jan 2009) | 4 lines

branches/5.1: dict_load_table(): If dict_load_indexes() fails, invoke
dict_table_remove_from_cache() instead of dict_mem_table_free(), so
that the data dictionary will not point to freed data.
(Bug #42075, Issue #153, rb://76 approved by Heikki Tuuri)
------------------------------------------------------------------------
------------------------------------------------------------------------

/******************************************************
Binary buddy allocator for compressed pages

(c) 2006 Innobase Oy

Created December 2006 by Marko Makela
*******************************************************/

#define THIS_MODULE
#include "buf0buddy.h"
#ifdef UNIV_NONINL
# include "buf0buddy.ic"
#endif
#undef THIS_MODULE
#include "buf0buf.h"
#include "buf0lru.h"
#include "buf0flu.h"
#include "page0zip.h"

/* Statistic counters */

#ifdef UNIV_DEBUG
/** Number of frames allocated from the buffer pool to the buddy system.
Protected by buf_pool_mutex. */
static ulint buf_buddy_n_frames;
#endif /* UNIV_DEBUG */
/** Statistics of the buddy system, indexed by block size.
Protected by buf_pool_mutex. */
UNIV_INTERN buf_buddy_stat_t buf_buddy_stat[BUF_BUDDY_SIZES + 1];

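/* Overview (informal summary, not part of the original interface
documentation): the buddy system carves UNIV_PAGE_SIZE frames taken
from the buffer pool into power-of-two blocks between BUF_BUDDY_LOW
and BUF_BUDDY_HIGH bytes.  Free blocks of size BUF_BUDDY_LOW << i are
kept in the list buf_pool->zip_free[i].  Allocation splits a larger
free block when necessary; freeing recombines a block with its
"buddy" (the neighbouring block of the same size) whenever that buddy
is also free. */
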
/**************************************************************************
Get the offset of the buddy of a compressed page frame. */
UNIV_INLINE
byte*
buf_buddy_get(
/*==========*/
			/* out: the buddy relative of page */
	byte*	page,	/* in: compressed page */
	ulint	size)	/* in: page size in bytes */
{
	ut_ad(ut_is_2pow(size));
	ut_ad(size >= BUF_BUDDY_LOW);
	ut_ad(size < BUF_BUDDY_HIGH);
	ut_ad(!ut_align_offset(page, size));

	if (((ulint) page) & size) {
		return(page - size);
	} else {
		return(page + size);
	}
}

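/* Worked example for buf_buddy_get() (illustrative): with
size == 0x100, a block at an offset whose 0x100 bit is clear, say
0x400 within its frame, has its buddy one block above, at 0x500;
conversely, the block at 0x500 has the 0x100 bit set, so its buddy is
one block below, at 0x400.  The net effect is toggling the size bit
of the address: buddy == page XOR size. */
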
/**************************************************************************
Add a block to the head of the appropriate buddy free list. */
UNIV_INLINE
void
buf_buddy_add_to_free(
/*==================*/
	buf_page_t*	bpage,	/* in,own: block to be freed */
	ulint		i)	/* in: index of buf_pool->zip_free[] */
{
#ifdef UNIV_DEBUG_VALGRIND
	buf_page_t*	b = UT_LIST_GET_FIRST(buf_pool->zip_free[i]);

	if (b) UNIV_MEM_VALID(b, BUF_BUDDY_LOW << i);
#endif /* UNIV_DEBUG_VALGRIND */

	ut_ad(buf_pool->zip_free[i].start != bpage);
	UT_LIST_ADD_FIRST(list, buf_pool->zip_free[i], bpage);

#ifdef UNIV_DEBUG_VALGRIND
	if (b) UNIV_MEM_FREE(b, BUF_BUDDY_LOW << i);
	UNIV_MEM_ASSERT_AND_FREE(bpage, BUF_BUDDY_LOW << i);
#endif /* UNIV_DEBUG_VALGRIND */
}

/**************************************************************************
Remove a block from the appropriate buddy free list. */
UNIV_INLINE
void
buf_buddy_remove_from_free(
/*=======================*/
	buf_page_t*	bpage,	/* in: block to be removed */
	ulint		i)	/* in: index of buf_pool->zip_free[] */
{
#ifdef UNIV_DEBUG_VALGRIND
	buf_page_t*	prev = UT_LIST_GET_PREV(list, bpage);
	buf_page_t*	next = UT_LIST_GET_NEXT(list, bpage);

	if (prev) UNIV_MEM_VALID(prev, BUF_BUDDY_LOW << i);
	if (next) UNIV_MEM_VALID(next, BUF_BUDDY_LOW << i);

	ut_ad(!prev || buf_page_get_state(prev) == BUF_BLOCK_ZIP_FREE);
	ut_ad(!next || buf_page_get_state(next) == BUF_BLOCK_ZIP_FREE);
#endif /* UNIV_DEBUG_VALGRIND */

	ut_ad(buf_page_get_state(bpage) == BUF_BLOCK_ZIP_FREE);
	UT_LIST_REMOVE(list, buf_pool->zip_free[i], bpage);

#ifdef UNIV_DEBUG_VALGRIND
	if (prev) UNIV_MEM_FREE(prev, BUF_BUDDY_LOW << i);
	if (next) UNIV_MEM_FREE(next, BUF_BUDDY_LOW << i);
#endif /* UNIV_DEBUG_VALGRIND */
}

/**************************************************************************
Try to allocate a block from buf_pool->zip_free[]. */
static
void*
buf_buddy_alloc_zip(
/*================*/
			/* out: allocated block, or NULL
			if buf_pool->zip_free[] was empty */
	ulint	i)	/* in: index of buf_pool->zip_free[] */
{
	buf_page_t*	bpage;

	ut_ad(buf_pool_mutex_own());
	ut_a(i < BUF_BUDDY_SIZES);

#if defined UNIV_DEBUG && !defined UNIV_DEBUG_VALGRIND
	/* Valgrind would complain about accessing free memory. */
	UT_LIST_VALIDATE(list, buf_page_t, buf_pool->zip_free[i]);
#endif /* UNIV_DEBUG && !UNIV_DEBUG_VALGRIND */
	bpage = UT_LIST_GET_FIRST(buf_pool->zip_free[i]);

	if (bpage) {
		UNIV_MEM_VALID(bpage, BUF_BUDDY_LOW << i);
		ut_a(buf_page_get_state(bpage) == BUF_BLOCK_ZIP_FREE);

		buf_buddy_remove_from_free(bpage, i);
	} else if (i + 1 < BUF_BUDDY_SIZES) {
		/* Attempt to split. */
		bpage = buf_buddy_alloc_zip(i + 1);

		if (bpage) {
			buf_page_t*	buddy = (buf_page_t*)
				(((char*) bpage) + (BUF_BUDDY_LOW << i));

			ut_ad(!buf_pool_contains_zip(buddy));
			ut_d(memset(buddy, i, BUF_BUDDY_LOW << i));
			buddy->state = BUF_BLOCK_ZIP_FREE;
			buf_buddy_add_to_free(buddy, i);
		}
	}

#ifdef UNIV_DEBUG
	if (bpage) {
		memset(bpage, ~i, BUF_BUDDY_LOW << i);
	}
#endif /* UNIV_DEBUG */

	UNIV_MEM_ALLOC(bpage, BUF_BUDDY_LOW << i);

	return(bpage);
}

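/* Example (illustrative): if zip_free[i] is empty but zip_free[i + 1]
holds a free block, buf_buddy_alloc_zip(i) recursively takes the
larger block, returns its lower half to the caller and adds the upper
half to zip_free[i] as a free buddy.  The recursion ascends at most to
index BUF_BUDDY_SIZES - 1; whole page frames are handled by the
caller, buf_buddy_alloc_low(). */
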
/**************************************************************************
Deallocate a buffer frame of UNIV_PAGE_SIZE. */
static
void
buf_buddy_block_free(
/*=================*/
	void*	buf)	/* in: buffer frame to deallocate */
{
	const ulint	fold	= BUF_POOL_ZIP_FOLD_PTR(buf);
	buf_page_t*	bpage;
	buf_block_t*	block;

	ut_ad(buf_pool_mutex_own());
	ut_ad(!mutex_own(&buf_pool_zip_mutex));
	ut_a(!ut_align_offset(buf, UNIV_PAGE_SIZE));

	HASH_SEARCH(hash, buf_pool->zip_hash, fold, buf_page_t*, bpage,
		    ut_ad(buf_page_get_state(bpage) == BUF_BLOCK_MEMORY
			  && bpage->in_zip_hash && !bpage->in_page_hash),
		    ((buf_block_t*) bpage)->frame == buf);
	ut_a(bpage);
	ut_a(buf_page_get_state(bpage) == BUF_BLOCK_MEMORY);
	ut_ad(!bpage->in_page_hash);
	ut_ad(bpage->in_zip_hash);
	ut_d(bpage->in_zip_hash = FALSE);
	HASH_DELETE(buf_page_t, hash, buf_pool->zip_hash, fold, bpage);

	ut_d(memset(buf, 0, UNIV_PAGE_SIZE));
	UNIV_MEM_INVALID(buf, UNIV_PAGE_SIZE);

	block = (buf_block_t*) bpage;
	mutex_enter(&block->mutex);
	buf_LRU_block_free_non_file_page(block);
	mutex_exit(&block->mutex);

	ut_ad(buf_buddy_n_frames > 0);
	ut_d(buf_buddy_n_frames--);
}

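/* Note: the ut_ad() expression passed as the ASSERTION argument to
HASH_SEARCH() above is evaluated for every node traversed in the hash
chain, not only for the match (cf. r3914 in the log above); like all
ut_ad() checks, it compiles away unless UNIV_DEBUG is defined. */
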
/**************************************************************************
Allocate a buffer block to the buddy allocator. */
static
void
buf_buddy_block_register(
/*=====================*/
	buf_block_t*	block)	/* in: buffer frame to allocate */
{
	const ulint	fold = BUF_POOL_ZIP_FOLD(block);
	ut_ad(buf_pool_mutex_own());
	ut_ad(!mutex_own(&buf_pool_zip_mutex));

	buf_block_set_state(block, BUF_BLOCK_MEMORY);

	ut_a(block->frame);
	ut_a(!ut_align_offset(block->frame, UNIV_PAGE_SIZE));

	ut_ad(!block->page.in_page_hash);
	ut_ad(!block->page.in_zip_hash);
	ut_d(block->page.in_zip_hash = TRUE);
	HASH_INSERT(buf_page_t, hash, buf_pool->zip_hash, fold, &block->page);

	ut_d(buf_buddy_n_frames++);
}

/**************************************************************************
Allocate a block from a bigger object. */
static
void*
buf_buddy_alloc_from(
/*=================*/
			/* out: allocated block */
	void*	buf,	/* in: a block that is free to use */
	ulint	i,	/* in: index of buf_pool->zip_free[] */
	ulint	j)	/* in: size of buf as an index
			of buf_pool->zip_free[] */
{
	ulint	offs	= BUF_BUDDY_LOW << j;
	ut_ad(j <= BUF_BUDDY_SIZES);
	ut_ad(j >= i);
	ut_ad(!ut_align_offset(buf, offs));

	/* Add the unused parts of the block to the free lists. */
	while (j > i) {
		buf_page_t*	bpage;

		offs >>= 1;
		j--;

		bpage = (buf_page_t*) ((byte*) buf + offs);
		ut_d(memset(bpage, j, BUF_BUDDY_LOW << j));
		bpage->state = BUF_BLOCK_ZIP_FREE;
#if defined UNIV_DEBUG && !defined UNIV_DEBUG_VALGRIND
		/* Valgrind would complain about accessing free memory. */
		UT_LIST_VALIDATE(list, buf_page_t, buf_pool->zip_free[j]);
#endif /* UNIV_DEBUG && !UNIV_DEBUG_VALGRIND */
		buf_buddy_add_to_free(bpage, j);
	}

	return(buf);
}

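/* Worked example for buf_buddy_alloc_from() (illustrative): suppose a
whole frame (j == BUF_BUDDY_SIZES) is carved down to an allocation of
size BUF_BUDDY_LOW << i.  Each iteration halves offs and frees the
upper half of the remaining block: the upper half of the frame goes to
zip_free[BUF_BUDDY_SIZES - 1], the next quarter to
zip_free[BUF_BUDDY_SIZES - 2], and so on, until only the lowest
BUF_BUDDY_LOW << i bytes remain; those are returned to the caller. */
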
/**************************************************************************
Allocate a block. The thread calling this function must hold
buf_pool_mutex and must not hold buf_pool_zip_mutex or any block->mutex.
The buf_pool_mutex may only be released and reacquired if lru != NULL. */
UNIV_INTERN
void*
buf_buddy_alloc_low(
/*================*/
			/* out: allocated block,
			possibly NULL if lru==NULL */
	ulint	i,	/* in: index of buf_pool->zip_free[],
			or BUF_BUDDY_SIZES */
	ibool*	lru)	/* in: pointer to a variable that will be assigned
			TRUE if storage was allocated from the LRU list
			and buf_pool_mutex was temporarily released,
			or NULL if the LRU list should not be used */
{
	buf_block_t*	block;

	ut_ad(buf_pool_mutex_own());
	ut_ad(!mutex_own(&buf_pool_zip_mutex));

	if (i < BUF_BUDDY_SIZES) {
		/* Try to allocate from the buddy system. */
		block = buf_buddy_alloc_zip(i);

		if (block) {

			goto func_exit;
		}
	}

	/* Try allocating from the buf_pool->free list. */
	block = buf_LRU_get_free_only();

	if (block) {

		goto alloc_big;
	}

	if (!lru) {

		return(NULL);
	}

	/* Try replacing an uncompressed page in the buffer pool. */
	buf_pool_mutex_exit();
	block = buf_LRU_get_free_block(0);
	*lru = TRUE;
	buf_pool_mutex_enter();

alloc_big:
	buf_buddy_block_register(block);

	block = buf_buddy_alloc_from(block->frame, i, BUF_BUDDY_SIZES);

func_exit:
	buf_buddy_stat[i].used++;
	return(block);
}

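/* Usage sketch (illustrative only, not verbatim; cf. the inline
wrapper buf_buddy_alloc() in buf0buddy.ic):

	ibool	lru	= FALSE;
	void*	blk;

	buf_pool_mutex_enter();
	blk = buf_buddy_alloc_low(buf_buddy_get_slot(size), &lru);
	buf_pool_mutex_exit();

buf_buddy_get_slot(size) maps a byte count to the zip_free[] index i
such that BUF_BUDDY_LOW << i >= size.  If lru was set to TRUE, then
buf_pool_mutex was released and reacquired inside the call, so any
pointers into the buffer pool computed beforehand may be stale. */
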
/**************************************************************************
Try to relocate the control block of a compressed page. */
static
ibool
buf_buddy_relocate_block(
/*=====================*/
				/* out: TRUE if relocated */
	buf_page_t*	bpage,	/* in: block to relocate */
	buf_page_t*	dpage)	/* in: free block to relocate to */
{
	buf_page_t*	b;

	ut_ad(buf_pool_mutex_own());

	switch (buf_page_get_state(bpage)) {
	case BUF_BLOCK_ZIP_FREE:
	case BUF_BLOCK_NOT_USED:
	case BUF_BLOCK_READY_FOR_USE:
	case BUF_BLOCK_FILE_PAGE:
	case BUF_BLOCK_MEMORY:
	case BUF_BLOCK_REMOVE_HASH:
		ut_error;
	case BUF_BLOCK_ZIP_DIRTY:
		/* Cannot relocate dirty pages. */
		return(FALSE);

	case BUF_BLOCK_ZIP_PAGE:
		break;
	}

	mutex_enter(&buf_pool_zip_mutex);

	if (!buf_page_can_relocate(bpage)) {
		mutex_exit(&buf_pool_zip_mutex);
		return(FALSE);
	}

	buf_relocate(bpage, dpage);
	ut_d(bpage->state = BUF_BLOCK_ZIP_FREE);

	/* relocate buf_pool->zip_clean */
	b = UT_LIST_GET_PREV(list, dpage);
	UT_LIST_REMOVE(list, buf_pool->zip_clean, dpage);

	if (b) {
		UT_LIST_INSERT_AFTER(list, buf_pool->zip_clean, b, dpage);
	} else {
		UT_LIST_ADD_FIRST(list, buf_pool->zip_clean, dpage);
	}

	UNIV_MEM_INVALID(bpage, sizeof *bpage);

	mutex_exit(&buf_pool_zip_mutex);
	return(TRUE);
}

/**************************************************************************
Try to relocate a block. */
static
ibool
buf_buddy_relocate(
/*===============*/
			/* out: TRUE if relocated */
	void*	src,	/* in: block to relocate */
	void*	dst,	/* in: free block to relocate to */
	ulint	i)	/* in: index of buf_pool->zip_free[] */
{
	buf_page_t*	bpage;
	const ulint	size	= BUF_BUDDY_LOW << i;
	ullint		usec	= ut_time_us(NULL);

	ut_ad(buf_pool_mutex_own());
	ut_ad(!mutex_own(&buf_pool_zip_mutex));
	ut_ad(!ut_align_offset(src, size));
	ut_ad(!ut_align_offset(dst, size));
	UNIV_MEM_ASSERT_W(dst, size);

	/* We assume that all memory from buf_buddy_alloc()
	is used for either compressed pages or buf_page_t
	objects covering compressed pages. */

	/* We look inside the allocated objects returned by
	buf_buddy_alloc() and assume that anything of
	PAGE_ZIP_MIN_SIZE or larger is a compressed page that contains
	a valid space_id and page_no in the page header. Should the
	fields be invalid, we will be unable to relocate the block.
	We also assume that anything that fits sizeof(buf_page_t)
	actually is a properly initialized buf_page_t object. */

	if (size >= PAGE_ZIP_MIN_SIZE) {
		/* This is a compressed page. */
		mutex_t*	mutex;

		/* The src block may be split into smaller blocks,
		some of which may be free. Thus, the
		mach_read_from_4() calls below may attempt to read
		from free memory. The memory is "owned" by the buddy
		allocator (and it has been allocated from the buffer
		pool), so there is nothing wrong about this. The
		mach_read_from_4() calls here will only trigger bogus
		Valgrind memcheck warnings in UNIV_DEBUG_VALGRIND builds. */
		bpage = buf_page_hash_get(
			mach_read_from_4((const byte*) src
					 + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID),
			mach_read_from_4((const byte*) src
					 + FIL_PAGE_OFFSET));

		if (!bpage || bpage->zip.data != src) {
			/* The block has probably been freshly
			allocated by buf_LRU_get_free_block() but not
			added to buf_pool->page_hash yet. Obviously,
			it cannot be relocated. */

			return(FALSE);
		}

		if (page_zip_get_size(&bpage->zip) != size) {
			/* The block is of different size. We would
			have to relocate all blocks covered by src.
			For the sake of simplicity, give up. */
			ut_ad(page_zip_get_size(&bpage->zip) < size);

			return(FALSE);
		}

		/* The block must have been allocated, but it may
		contain uninitialized data. */
		UNIV_MEM_ASSERT_W(src, size);

		mutex = buf_page_get_mutex(bpage);

		mutex_enter(mutex);

		if (buf_page_can_relocate(bpage)) {
			/* Relocate the compressed page. */
			ut_a(bpage->zip.data == src);
			memcpy(dst, src, size);
			bpage->zip.data = dst;
			mutex_exit(mutex);
success:
			UNIV_MEM_INVALID(src, size);
			{
				buf_buddy_stat_t*	buddy_stat
					= &buf_buddy_stat[i];
				buddy_stat->relocated++;
				buddy_stat->relocated_usec
					+= ut_time_us(NULL) - usec;
			}
			return(TRUE);
		}

		mutex_exit(mutex);
	} else if (i == buf_buddy_get_slot(sizeof(buf_page_t))) {
		/* This must be a buf_page_t object. */
		UNIV_MEM_ASSERT_RW(src, size);
		if (buf_buddy_relocate_block(src, dst)) {

			goto success;
		}
	}

	return(FALSE);
}

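/* Summary note: buf_buddy_relocate() distinguishes the two kinds of
objects managed by the buddy system purely by block size.  A block of
PAGE_ZIP_MIN_SIZE bytes or more must be a compressed page, whose
owning buf_page_t is looked up via the space id and page number stored
in the frame itself; a block in the size class of sizeof(buf_page_t)
must be a block descriptor and is handled by
buf_buddy_relocate_block().  buf_buddy_free_low() below relies on
relocation to empty a buddy block so that free buddies can be
recombined. */
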
/**************************************************************************
Deallocate a block. */
UNIV_INTERN
void
buf_buddy_free_low(
/*===============*/
	void*	buf,	/* in: block to be freed, must not be
			pointed to by the buffer pool */
	ulint	i)	/* in: index of buf_pool->zip_free[] */
{
	buf_page_t*	bpage;
	buf_page_t*	buddy;

	ut_ad(buf_pool_mutex_own());
	ut_ad(!mutex_own(&buf_pool_zip_mutex));
	ut_ad(i <= BUF_BUDDY_SIZES);
	ut_ad(buf_buddy_stat[i].used > 0);

	buf_buddy_stat[i].used--;
recombine:
	UNIV_MEM_ASSERT_AND_ALLOC(buf, BUF_BUDDY_LOW << i);
	ut_d(((buf_page_t*) buf)->state = BUF_BLOCK_ZIP_FREE);

	if (i == BUF_BUDDY_SIZES) {
		buf_buddy_block_free(buf);
		return;
	}

	ut_ad(i < BUF_BUDDY_SIZES);
	ut_ad(buf == ut_align_down(buf, BUF_BUDDY_LOW << i));
	ut_ad(!buf_pool_contains_zip(buf));

	/* Try to combine adjacent blocks. */

	buddy = (buf_page_t*) buf_buddy_get(((byte*) buf), BUF_BUDDY_LOW << i);

#ifndef UNIV_DEBUG_VALGRIND
	/* Valgrind would complain about accessing free memory. */

	if (buddy->state != BUF_BLOCK_ZIP_FREE) {

		goto buddy_nonfree;
	}

	/* The field buddy->state can only be trusted for free blocks.
	If buddy->state == BUF_BLOCK_ZIP_FREE, the block is free if
	it is in the free list. */
#endif /* !UNIV_DEBUG_VALGRIND */

	for (bpage = UT_LIST_GET_FIRST(buf_pool->zip_free[i]); bpage; ) {
		UNIV_MEM_VALID(bpage, BUF_BUDDY_LOW << i);
		ut_ad(buf_page_get_state(bpage) == BUF_BLOCK_ZIP_FREE);

		if (bpage == buddy) {
buddy_free:
			/* The buddy is free: recombine */
			buf_buddy_remove_from_free(bpage, i);
buddy_free2:
			ut_ad(buf_page_get_state(buddy) == BUF_BLOCK_ZIP_FREE);
			ut_ad(!buf_pool_contains_zip(buddy));
			i++;
			buf = ut_align_down(buf, BUF_BUDDY_LOW << i);

			goto recombine;
		}

		ut_a(bpage != buf);

		{
			buf_page_t*	next = UT_LIST_GET_NEXT(list, bpage);
			UNIV_MEM_ASSERT_AND_FREE(bpage, BUF_BUDDY_LOW << i);
			bpage = next;
		}
	}

#ifndef UNIV_DEBUG_VALGRIND
buddy_nonfree:
	/* Valgrind would complain about accessing free memory. */
	ut_d(UT_LIST_VALIDATE(list, buf_page_t, buf_pool->zip_free[i]));
#endif /* !UNIV_DEBUG_VALGRIND */

	/* The buddy is not free. Is there a free block of this size? */
	bpage = UT_LIST_GET_FIRST(buf_pool->zip_free[i]);

	if (bpage) {
		/* Remove the block from the free list, because a successful
		buf_buddy_relocate() will overwrite bpage->list. */

		UNIV_MEM_VALID(bpage, BUF_BUDDY_LOW << i);
		buf_buddy_remove_from_free(bpage, i);

		/* Try to relocate the buddy of buf to the free block. */
		if (buf_buddy_relocate(buddy, bpage, i)) {

			ut_d(buddy->state = BUF_BLOCK_ZIP_FREE);
			goto buddy_free2;
		}

		buf_buddy_add_to_free(bpage, i);

		/* Try to relocate the buddy of the free block to buf. */
		buddy = (buf_page_t*) buf_buddy_get(((byte*) bpage),
						    BUF_BUDDY_LOW << i);

#if defined UNIV_DEBUG && !defined UNIV_DEBUG_VALGRIND
		{
			const buf_page_t*	b;

			/* The buddy must not be (completely) free, because
			we always recombine adjacent free blocks.
			(Parts of the buddy can be free in
			buf_pool->zip_free[j] with j < i.) */
			for (b = UT_LIST_GET_FIRST(buf_pool->zip_free[i]);
			     b; b = UT_LIST_GET_NEXT(list, b)) {

				ut_a(b != buddy);
			}
		}
#endif /* UNIV_DEBUG && !UNIV_DEBUG_VALGRIND */

		if (buf_buddy_relocate(buddy, buf, i)) {

			buf = bpage;
			UNIV_MEM_VALID(bpage, BUF_BUDDY_LOW << i);
			ut_d(buddy->state = BUF_BLOCK_ZIP_FREE);
			goto buddy_free;
		}
	}

	/* Free the block to the buddy list. */
	bpage = buf;
#ifdef UNIV_DEBUG
	if (i < buf_buddy_get_slot(PAGE_ZIP_MIN_SIZE)) {
		/* This area has most likely been allocated for at
		least one compressed-only block descriptor. Check
		that there are no live objects in the area. This is
		not a complete check: it may yield false positives as
		well as false negatives. Also, due to buddy blocks
		being recombined, it is possible (although unlikely)
		that this branch is never reached. */

		char*	c;

# ifndef UNIV_DEBUG_VALGRIND
		/* Valgrind would complain about accessing
		uninitialized memory. Besides, Valgrind performs a
		more exhaustive check, at every memory access. */
		const buf_page_t*	b	= buf;
		const buf_page_t* const	b_end	= (buf_page_t*)
			((char*) b + (BUF_BUDDY_LOW << i));

		for (; b < b_end; b++) {
			/* Avoid false positives (and cause false
			negatives) by checking for b->space < 1000. */

			if ((b->state == BUF_BLOCK_ZIP_PAGE
			     || b->state == BUF_BLOCK_ZIP_DIRTY)
			    && b->space > 0 && b->space < 1000) {
				fprintf(stderr,
					"buddy dirty %p %u (%u,%u) %p,%lu\n",
					(void*) b,
					b->state, b->space, b->offset,
					buf, i);
			}
		}
# endif /* !UNIV_DEBUG_VALGRIND */

		/* Scramble the block. This should make any pointers
		invalid and trigger a segmentation violation. Because
		the scrambling can be reversed, it may be possible to
		track down the object pointing to the freed data by
		dereferencing the unscrambled bpage->LRU or
		bpage->list pointers. */
		for (c = (char*) buf + (BUF_BUDDY_LOW << i);
		     c-- > (char*) buf; ) {
			*c = ~*c ^ i;
		}
	} else {
		/* Fill large blocks with a constant pattern. */
		memset(bpage, i, BUF_BUDDY_LOW << i);
	}
#endif /* UNIV_DEBUG */
	bpage->state = BUF_BLOCK_ZIP_FREE;
	buf_buddy_add_to_free(bpage, i);
}