------------------------------------------------------------------------
r3607 | marko | 2008-12-30 22:33:31 +0200 (Tue, 30 Dec 2008) | 20 lines

branches/zip: Remove the dependency on the MySQL HASH table implementation.
Use the InnoDB hash table for keeping track of INNOBASE_SHARE objects.
struct st_innobase_share: Make table_name const uchar*. Add the member
table_name_hash. innobase_open_tables: Change the type from HASH to
hash_table_t*. innobase_get_key(): Remove. innobase_fold_name(): New
function, for computing the fold value for the InnoDB hash table.
get_share(), free_share(): Use the InnoDB hash functions. innobase_end():
Free innobase_open_tables before shutting down InnoDB. Shutting down InnoDB
will invalidate all memory allocated via InnoDB.
rb://65 approved by Heikki Tuuri. This addresses Issue #104.
------------------------------------------------------------------------
r3608 | marko | 2008-12-30 22:45:04 +0200 (Tue, 30 Dec 2008) | 22 lines

branches/zip: When setting the PAGE_LEVEL of a compressed B-tree page from
or to 0, compress the page at the same time. This is necessary, because the
column information stored on the compressed page will differ between leaf
and non-leaf pages. Leaf pages are identified by PAGE_LEVEL=0. This bug was
reported as Issue #150.
Document the similarity between btr_page_create() and btr_page_empty().
Make the function signature of btr_page_empty() identical with
btr_page_create(). (This will add the parameter "level".)
btr_root_raise_and_insert(): Replace some code with a call to
btr_page_empty(). btr_attach_half_pages(): Assert that the page level has
already been set on both block and new_block. Do not set it again.
btr_discard_only_page_on_level(): Document that this function is probably
never called. Make it work on any height tree. (Tested on 2-high tree by
disabling btr_lift_page_up().)
rb://68
------------------------------------------------------------------------
r3612 | marko | 2009-01-02 11:02:44 +0200 (Fri, 02 Jan 2009) | 14 lines

branches/zip: Merge c2998 from branches/6.0, so that the same InnoDB Plugin
source tree will work both under 5.1 and 6.0. Do not add the test case
innodb_ctype_ldml.test, because it would not work under MySQL 5.1.
Refuse to create tables whose columns contain collation IDs above 255.
This removes an assertion failure that was introduced in WL#4164
(Two-byte collation IDs).
create_table_def(): Do not fail an assertion if a column contains a
charset-collation ID greater than 256. Instead, issue an error and refuse
to create the table.
The original change (branches/6.0 r2998) was rb://51 approved by Calvin Sun.
------------------------------------------------------------------------
r3613 | inaam | 2009-01-02 15:10:50 +0200 (Fri, 02 Jan 2009) | 6 lines

branches/zip: Implement the parameter innodb_use_sys_malloc (false by
default), for disabling InnoDB's internal memory allocator and using system
malloc/free instead.
rb://62 approved by Marko
------------------------------------------------------------------------
r3614 | marko | 2009-01-02 15:55:12 +0200 (Fri, 02 Jan 2009) | 1 line

branches/zip: ChangeLog: Document r3608 and r3613.
------------------------------------------------------------------------
r3615 | marko | 2009-01-02 15:57:51 +0200 (Fri, 02 Jan 2009) | 1 line

branches/zip: ChangeLog: Clarify the impact of r3608.
------------------------------------------------------------------------
r3616 | marko | 2009-01-03 00:23:30 +0200 (Sat, 03 Jan 2009) | 1 line

branches/zip: srv_suspend_mysql_thread(): Add some clarifying comments.
------------------------------------------------------------------------
r3618 | marko | 2009-01-05 12:54:53 +0200 (Mon, 05 Jan 2009) | 15 lines

branches/zip: Merge revisions 3598:3601 from branches/5.1:
------------------------------------------------------------------------
r3601 | marko | 2008-12-22 16:05:19 +0200 (Mon, 22 Dec 2008) | 9 lines

branches/5.1: Make SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
a true replacement of SET GLOBAL INNODB_LOCKS_UNSAFE_FOR_BINLOG=1. This
fixes an error that was introduced in r370, causing semi-consistent read
not to unlock rows in READ COMMITTED mode. (Bug #41671, Issue #146)
rb://67 approved by Heikki Tuuri
------------------------------------------------------------------------
------------------------------------------------------------------------
r3623 | vasil | 2009-01-06 09:56:32 +0200 (Tue, 06 Jan 2009) | 7 lines

branches/zip: Add patch to fix the failing main.variables mysql-test. It
started failing after the variable innodb_use_sys_malloc was added because
it matches '%alloc%' and the test is badly written and expects that no new
variables like that will ever be added.
------------------------------------------------------------------------
r3795 | marko | 2009-01-07 16:17:47 +0200 (Wed, 07 Jan 2009) | 7 lines

branches/zip: row_merge_tuple_cmp(): Do not report a duplicate key value
if any of the fields are NULL. While the tuples are equal in the sorting
order, SQL NULL is defined to be logically inequal to anything else.
(Bug #41904)
rb://70 approved by Heikki Tuuri
------------------------------------------------------------------------
r3796 | marko | 2009-01-07 16:19:32 +0200 (Wed, 07 Jan 2009) | 1 line

branches/zip: Add the tests that were forgotten from r3795.
------------------------------------------------------------------------
r3797 | marko | 2009-01-07 16:22:18 +0200 (Wed, 07 Jan 2009) | 22 lines

branches/zip: Do not call trx_allocate_for_mysql() directly, but use helper
functions that initialize some members of the transaction struct.
(Bug #41680)
innobase_trx_init(): New function: initialize some fields of a transaction
struct from a MySQL THD object. innobase_trx_allocate(): New function:
allocate and initialize a transaction struct. check_trx_exists(): Use the
above two functions. ha_innobase::delete_table(),
ha_innobase::rename_table(), ha_innobase::add_index(),
ha_innobase::final_drop_index(): Use innobase_trx_allocate().
innobase_drop_database(): In the Windows plugin, initialize the trx_t
specially, because the THD is not available. Otherwise, use
innobase_trx_allocate().
rb://69 accepted by Heikki Tuuri
------------------------------------------------------------------------
r3798 | marko | 2009-01-07 16:42:42 +0200 (Wed, 07 Jan 2009) | 8 lines

branches/zip: row_merge_drop_temp_indexes(): Do not lock the rows of
SYS_INDEXES when looking for partially created indexes. Use the transaction
isolation level READ UNCOMMITTED to avoid interfering with locks held by
incomplete transactions that will be rolled back in a subsequent step in
the recovery. (Issue #152)
Approved by Heikki Tuuri
------------------------------------------------------------------------
r3852 | vasil | 2009-01-08 22:10:10 +0200 (Thu, 08 Jan 2009) | 4 lines

branches/zip: Add ChangeLog entries for r3795 r3796 r3797 r3798.
------------------------------------------------------------------------
r3866 | marko | 2009-01-09 15:09:51 +0200 (Fri, 09 Jan 2009) | 2 lines

branches/zip: buf_flush_try_page(): Move some common code from each switch
case before the switch block.
------------------------------------------------------------------------
r3867 | marko | 2009-01-09 15:13:14 +0200 (Fri, 09 Jan 2009) | 2 lines

branches/zip: buf_flush_try_page(): Introduce the variable is_compressed
for caching the result of buf_page_get_state(bpage) == BUF_BLOCK_FILE_PAGE.
------------------------------------------------------------------------
r3868 | marko | 2009-01-09 15:40:11 +0200 (Fri, 09 Jan 2009) | 4 lines

branches/zip: buf_flush_insert_into_flush_list(),
buf_flush_insert_sorted_into_flush_list(): Remove unused code. Change the
parameter to buf_block_t* block and assert that
block->state == BUF_BLOCK_FILE_PAGE. This is part of Issue #155.
------------------------------------------------------------------------
r3873 | marko | 2009-01-09 22:27:40 +0200 (Fri, 09 Jan 2009) | 17 lines

branches/zip: Some non-functional changes related to Issue #155.
buf_page_struct: Note that space and offset are also protected by
buf_pool_mutex. They are only assigned to by buf_block_set_file_page().
Thus, it suffices for buf_flush_batch() to hold just buf_pool_mutex when
checking these fields.
buf_flush_try_page(): Rename "locked" to "is_s_latched", per Heikki's
request.
buf_flush_batch(): Move the common statement mutex_exit(block_mutex) from
all if-else if-else branches before the if block. Remove the redundant test
(buf_pool->init_flush[flush_type] == FALSE) that was apparently copied from
buf_flush_write_complete().
buf_flush_write_block_low(): Note why it is safe not to hold buf_pool_mutex
or block_mutex. Enumerate the assumptions in debug assertions.
------------------------------------------------------------------------
r3874 | marko | 2009-01-09 23:09:06 +0200 (Fri, 09 Jan 2009) | 4 lines

branches/zip: Add comments related to Issue #155.
buf_flush_try_page(): Note why it is safe to access bpage without holding
buf_pool_mutex or block_mutex.
------------------------------------------------------------------------
r3875 | marko | 2009-01-09 23:15:12 +0200 (Fri, 09 Jan 2009) | 11 lines

branches/zip: Non-functional change: Tighten debug assertions and remove
dead code.
buf_flush_ready_for_flush(), buf_flush_try_page(): Assert that flush_type
is one of BUF_FLUSH_LRU or BUF_FLUSH_LIST. The flush_type comes from
buf_flush_batch(), which already asserts this. The assertion holds for all
calls in the source code.
buf_flush_try_page(): Remove the dead case BUF_FLUSH_SINGLE_PAGE of
switch (flush_type).
------------------------------------------------------------------------
r3879 | marko | 2009-01-12 12:46:44 +0200 (Mon, 12 Jan 2009) | 14 lines

branches/zip: Simplify the flushing of dirty pages from the buffer pool.
buf_flush_try_page(): Rename to buf_flush_page(), and change the return
type to void. Replace the parameters space, offset with bpage, and remove
the second page hash lookup. Note and assert that both buf_pool_mutex and
block_mutex must now be held upon entering the function. They will still
be released by this function.
buf_flush_try_neighbors(): Replace buf_flush_try_page() with
buf_flush_page(). Make the logic easier to follow by not negating the
precondition of buf_flush_page().
rb://73 approved by Sunny Bains. This is related to Issue #157.
------------------------------------------------------------------------
r3880 | marko | 2009-01-12 13:24:37 +0200 (Mon, 12 Jan 2009) | 2 lines

branches/zip: buf_flush_page(): Fix a comment that should have been fixed
in r3879. Spotted by Sunny.
------------------------------------------------------------------------
r3881 | marko | 2009-01-12 14:25:22 +0200 (Mon, 12 Jan 2009) | 2 lines

branches/zip: buf_page_get_newest_modification(): Use the block mutex
instead of the buffer pool mutex. This is related to Issue #157.
------------------------------------------------------------------------
r3882 | marko | 2009-01-12 14:40:08 +0200 (Mon, 12 Jan 2009) | 3 lines

branches/zip: struct mtr_struct: Remove the unused field magic_n unless
UNIV_DEBUG is defined. mtr->magic_n is only assigned to and checked in
UNIV_DEBUG builds.
------------------------------------------------------------------------
r3883 | marko | 2009-01-12 14:48:59 +0200 (Mon, 12 Jan 2009) | 1 line

branches/zip: Non-functional change: Use ut_d when assigning to mtr->state.
------------------------------------------------------------------------
r3884 | marko | 2009-01-12 18:56:11 +0200 (Mon, 12 Jan 2009) | 16 lines

branches/zip: Non-functional change: Add some debug assertions and comments.
buf_page_t: Note that the LRU fields are protected by buf_pool_mutex only,
not block->mutex or buf_pool_zip_mutex.
buf_page_get_freed_page_clock(): Note that this is sometimes invoked
without mutex protection.
buf_pool_get_oldest_modification(): Note that the result may be out of date.
buf_page_get_LRU_position(), buf_page_is_old(): Assert that the buffer
pool mutex is being held.
buf_page_release(): Assert that dirty blocks are in the flush list.
------------------------------------------------------------------------
r3896 | marko | 2009-01-13 09:30:26 +0200 (Tue, 13 Jan 2009) | 2 lines

branches/zip: buf_flush_try_neighbors(): Fix a bug that was introduced in
r3879 (rb://73).
------------------------------------------------------------------------
r3900 | marko | 2009-01-13 10:32:24 +0200 (Tue, 13 Jan 2009) | 1 line

branches/zip: Fix some comments to say buf_pool_mutex.
------------------------------------------------------------------------
r3907 | marko | 2009-01-13 11:54:01 +0200 (Tue, 13 Jan 2009) | 3 lines

branches/zip: row_merge_create_temporary_table(): On error,
row_create_table_for_mysql() already frees new_table. Do not attempt to
free it again.
------------------------------------------------------------------------
r3908 | marko | 2009-01-13 12:34:32 +0200 (Tue, 13 Jan 2009) | 1 line

branches/zip: Enable HASH_ASSERT_OWNED independently of UNIV_SYNC_DEBUG.
------------------------------------------------------------------------
r3914 | marko | 2009-01-13 21:46:22 +0200 (Tue, 13 Jan 2009) | 37 lines

branches/zip: In hash table lookups, assert that the traversed items
satisfy some conditions when UNIV_DEBUG is defined.
HASH_SEARCH(): New parameter: ASSERTION. All users will pass an appropriate
ut_ad() or nothing.
dict_table_add_to_columns(): Assert that the table being added to the data
dictionary cache is not already being pointed to by the name_hash and
id_hash tables.
HASH_SEARCH_ALL(): New macro, for use in dict_table_add_to_columns().
dict_mem_table_free(): Set ut_d(table->cached = FALSE), so that we can
check ut_ad(table->cached) when traversing the hash tables, as in
HASH_SEARCH(name_hash, dict_sys->table_hash, ...) and
HASH_SEARCH(id_hash, dict_sys->table_id_hash, ...).
dict_table_get_low(), dict_table_get_on_id_low(): Assert
ut_ad(!table || table->cached).
fil_space_get_by_id(): Check ut_ad(space->magic_n == FIL_SPACE_MAGIC_N) in
HASH_SEARCH(hash, fil_system->spaces, ...).
fil_space_get_by_name(): Check ut_ad(space->magic_n == FIL_SPACE_MAGIC_N)
in HASH_SEARCH(name_hash, fil_system->name_hash, ...).
buf_buddy_block_free(): Check that the blocks are in valid state in
HASH_SEARCH(hash, buf_pool->zip_hash, ...).
buf_page_hash_get(): Check that the blocks are in valid state in
HASH_SEARCH(hash, buf_pool->page_hash, ...).
get_share(), free_share(): Check ut_ad(share->use_count > 0) in
HASH_SEARCH(table_name_hash, innobase_open_tables, ...).
This was posted as rb://75 for tracking down errors similar to Issue #153.
------------------------------------------------------------------------
r3931 | marko | 2009-01-14 16:06:22 +0200 (Wed, 14 Jan 2009) | 26 lines

branches/zip: Merge revisions 3601:3930 from branches/5.1:
------------------------------------------------------------------------
r3911 | sunny | 2009-01-13 14:15:24 +0200 (Tue, 13 Jan 2009) | 13 lines

branches/5.1: Fix Bug#38187 Error 153 when creating savepoints
InnoDB previously treated savepoints as a stack e.g.,
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- This would delete b and c.
This fix changes the behavior to:
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- Does not delete savepoint c
------------------------------------------------------------------------
r3930 | marko | 2009-01-14 15:51:30 +0200 (Wed, 14 Jan 2009) | 4 lines

branches/5.1: dict_load_table(): If dict_load_indexes() fails, invoke
dict_table_remove_from_cache() instead of dict_mem_table_free(), so that
the data dictionary will not point to freed data.
(Bug #42075, Issue #153, rb://76 approved by Heikki Tuuri)
------------------------------------------------------------------------
------------------------------------------------------------------------
------------------------------------------------------------------------
/******************************************************
Loads database object definitions from the dictionary tables
to the memory cache

(c) 1996 Innobase Oy

Created 4/24/1996 Heikki Tuuri
*******************************************************/

#include "dict0load.h"

#ifndef UNIV_HOTBACKUP
#include "mysql_version.h"
#endif /* !UNIV_HOTBACKUP */

#ifdef UNIV_NONINL
#include "dict0load.ic"
#endif

#include "btr0pcur.h"
#include "btr0btr.h"
#include "page0page.h"
#include "mach0data.h"
#include "dict0dict.h"
#include "dict0boot.h"
#include "rem0cmp.h"
#include "srv0start.h"
#include "srv0srv.h"

/********************************************************************
Returns TRUE if index's i'th column's name is 'name'. */
static
ibool
name_of_col_is(
/*===========*/
				/* out: TRUE if the column of the i'th
				field of index is named 'name' */
	dict_table_t*	table,	/* in: table */
	dict_index_t*	index,	/* in: index */
	ulint		i,	/* in: index field position */
	const char*	name)	/* in: name to compare to */
{
	ulint	tmp = dict_col_get_no(dict_field_get_col(
				      dict_index_get_nth_field(
					      index, i)));

	return(strcmp(name, dict_table_get_col_name(table, tmp)) == 0);
}
/************************************************************************
Finds the first table name in the given database. */
UNIV_INTERN
char*
dict_get_first_table_name_in_db(
/*============================*/
				/* out, own: table name, NULL if
				does not exist; the caller must
				free the memory in the string! */
	const char*	name)	/* in: database name which ends in '/' */
{
	dict_table_t*	sys_tables;
	btr_pcur_t	pcur;
	dict_index_t*	sys_index;
	dtuple_t*	tuple;
	mem_heap_t*	heap;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	heap = mem_heap_create(1000);

	mtr_start(&mtr);

	sys_tables = dict_table_get_low("SYS_TABLES");
	sys_index = UT_LIST_GET_FIRST(sys_tables->indexes);
	ut_a(!dict_table_is_comp(sys_tables));

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	dfield_set_data(dfield, name, ut_strlen(name));
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
loop:
	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)) {
		/* Not found */

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(NULL);
	}

	field = rec_get_nth_field_old(rec, 0, &len);

	if (len < strlen(name)
	    || ut_memcmp(name, field, strlen(name)) != 0) {
		/* Not found */

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(NULL);
	}

	if (!rec_get_deleted_flag(rec, 0)) {

		/* We found one */

		char*	table_name = mem_strdupl((char*) field, len);

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(table_name);
	}

	btr_pcur_move_to_next_user_rec(&pcur, &mtr);

	goto loop;
}
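/* Usage sketch (not part of the original source): "name" is a database
name with a trailing '/', and the returned string is allocated with
mem_strdupl(), so the caller is expected to release it with mem_free():

	char*	first = dict_get_first_table_name_in_db("test/");

	if (first != NULL) {
		...
		mem_free(first);
	}
*/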
/************************************************************************
Prints to the standard error stream information on all tables found in the
data dictionary system table. */
UNIV_INTERN
void
dict_print(void)
/*============*/
{
	dict_table_t*	sys_tables;
	dict_index_t*	sys_index;
	dict_table_t*	table;
	btr_pcur_t	pcur;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	mtr_t		mtr;

	/* Enlarge the fatal semaphore wait timeout during the InnoDB table
	monitor printout */

	mutex_enter(&kernel_mutex);
	srv_fatal_semaphore_wait_threshold += 7200; /* 2 hours */
	mutex_exit(&kernel_mutex);

	mutex_enter(&(dict_sys->mutex));

	mtr_start(&mtr);

	sys_tables = dict_table_get_low("SYS_TABLES");
	sys_index = UT_LIST_GET_FIRST(sys_tables->indexes);

	btr_pcur_open_at_index_side(TRUE, sys_index, BTR_SEARCH_LEAF, &pcur,
				    TRUE, &mtr);
loop:
	btr_pcur_move_to_next_user_rec(&pcur, &mtr);

	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)) {
		/* end of index */

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);

		mutex_exit(&(dict_sys->mutex));

		/* Restore the fatal semaphore wait timeout */

		mutex_enter(&kernel_mutex);
		srv_fatal_semaphore_wait_threshold -= 7200; /* 2 hours */
		mutex_exit(&kernel_mutex);

		return;
	}

	field = rec_get_nth_field_old(rec, 0, &len);

	if (!rec_get_deleted_flag(rec, 0)) {

		/* We found one */

		char*	table_name = mem_strdupl((char*) field, len);

		btr_pcur_store_position(&pcur, &mtr);

		mtr_commit(&mtr);

		table = dict_table_get_low(table_name);
		mem_free(table_name);

		if (table == NULL) {
			fputs("InnoDB: Failed to load table ", stderr);
			ut_print_namel(stderr, NULL, TRUE, (char*) field, len);
			putc('\n', stderr);
		} else {
			/* The table definition was corrupt if there
			is no index */

			if (dict_table_get_first_index(table)) {
				dict_update_statistics_low(table, TRUE);
			}

			dict_table_print_low(table);
		}

		mtr_start(&mtr);

		btr_pcur_restore_position(BTR_SEARCH_LEAF, &pcur, &mtr);
	}

	goto loop;
}
/************************************************************************
Determine the flags of a table described in SYS_TABLES. */
static
ulint
dict_sys_tables_get_flags(
/*======================*/
			/* out: the table flags: 0 for a table in the
			original format, the flags stored in
			SYS_TABLES.TYPE for a table in a newer format,
			or ULINT_UNDEFINED on error */
	const rec_t*	rec)	/* in: a record of SYS_TABLES */
{
	const byte*	field;
	ulint		len;
	ulint		n_cols;
	ulint		flags;

	field = rec_get_nth_field_old(rec, 5, &len);
	ut_a(len == 4);

	flags = mach_read_from_4(field);

	if (UNIV_LIKELY(flags == DICT_TABLE_ORDINARY)) {
		return(0);
	}

	field = rec_get_nth_field_old(rec, 4, &len);
	n_cols = mach_read_from_4(field);

	if (UNIV_UNLIKELY(!(n_cols & 0x80000000UL))) {
		/* New file formats require ROW_FORMAT=COMPACT. */
		return(ULINT_UNDEFINED);
	}

	switch (flags & (DICT_TF_FORMAT_MASK | DICT_TF_COMPACT)) {
	default:
	case DICT_TF_FORMAT_51 << DICT_TF_FORMAT_SHIFT:
	case DICT_TF_FORMAT_51 << DICT_TF_FORMAT_SHIFT | DICT_TF_COMPACT:
		/* flags should be DICT_TABLE_ORDINARY,
		or DICT_TF_FORMAT_MASK should be nonzero. */
		return(ULINT_UNDEFINED);

	case DICT_TF_FORMAT_ZIP << DICT_TF_FORMAT_SHIFT | DICT_TF_COMPACT:
#if DICT_TF_FORMAT_MAX > DICT_TF_FORMAT_ZIP
# error "missing case labels for DICT_TF_FORMAT_ZIP .. DICT_TF_FORMAT_MAX"
#endif
		/* We support this format. */
		break;
	}

	if (UNIV_UNLIKELY((flags & DICT_TF_ZSSIZE_MASK)
			  > (DICT_TF_ZSSIZE_MAX << DICT_TF_ZSSIZE_SHIFT))) {
		/* Unsupported compressed page size. */
		return(ULINT_UNDEFINED);
	}

	if (UNIV_UNLIKELY(flags & (~0 << DICT_TF_BITS))) {
		/* Some unused bits are set. */
		return(ULINT_UNDEFINED);
	}

	return(flags);
}
/************************************************************************
In a crash recovery we already have all the tablespace objects created.
This function compares the space id information in the InnoDB data dictionary
to what we already read with fil_load_single_table_tablespaces().

In a normal startup, we create the tablespace objects for every table in
InnoDB's data dictionary, if the corresponding .ibd file exists.
We also scan the biggest space id, and store it to fil_system. */
UNIV_INTERN
void
dict_check_tablespaces_and_store_max_id(
/*====================================*/
	ibool	in_crash_recovery)	/* in: are we doing a crash recovery */
{
	dict_table_t*	sys_tables;
	dict_index_t*	sys_index;
	btr_pcur_t	pcur;
	const rec_t*	rec;
	ulint		max_space_id	= 0;
	mtr_t		mtr;

	mutex_enter(&(dict_sys->mutex));

	mtr_start(&mtr);

	sys_tables = dict_table_get_low("SYS_TABLES");
	sys_index = UT_LIST_GET_FIRST(sys_tables->indexes);
	ut_a(!dict_table_is_comp(sys_tables));

	btr_pcur_open_at_index_side(TRUE, sys_index, BTR_SEARCH_LEAF, &pcur,
				    TRUE, &mtr);
loop:
	btr_pcur_move_to_next_user_rec(&pcur, &mtr);

	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)) {
		/* end of index */

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);

		/* We must make the tablespace cache aware of the biggest
		known space id */

		/* printf("Biggest space id in data dictionary %lu\n",
		max_space_id); */
		fil_set_max_space_id_if_bigger(max_space_id);

		mutex_exit(&(dict_sys->mutex));

		return;
	}

	if (!rec_get_deleted_flag(rec, 0)) {

		/* We found one */
		const byte*	field;
		ulint		len;
		ulint		space_id;
		ulint		flags;
		char*		name;

		field = rec_get_nth_field_old(rec, 0, &len);
		name = mem_strdupl((char*) field, len);

		flags = dict_sys_tables_get_flags(rec);
		if (UNIV_UNLIKELY(flags == ULINT_UNDEFINED)) {

			field = rec_get_nth_field_old(rec, 5, &len);
			flags = mach_read_from_4(field);

			ut_print_timestamp(stderr);
			fputs("  InnoDB: Error: table ", stderr);
			ut_print_filename(stderr, name);
			fprintf(stderr, "\n"
				"InnoDB: in InnoDB data dictionary"
				" has unknown type %lx.\n",
				(ulong) flags);

			goto loop;
		}

		field = rec_get_nth_field_old(rec, 9, &len);
		ut_a(len == 4);

		space_id = mach_read_from_4(field);

		btr_pcur_store_position(&pcur, &mtr);

		mtr_commit(&mtr);

		if (space_id != 0 && in_crash_recovery) {
			/* Check that the tablespace (the .ibd file) really
			exists; print a warning to the .err log if not */

			fil_space_for_table_exists_in_mem(space_id, name,
							  FALSE, TRUE, TRUE);
		}

		if (space_id != 0 && !in_crash_recovery) {
			/* It is a normal database startup: create the space
			object and check that the .ibd file exists. */

			fil_open_single_table_tablespace(FALSE, space_id,
							 flags, name);
		}

		mem_free(name);

		if (space_id > max_space_id) {
			max_space_id = space_id;
		}

		mtr_start(&mtr);

		btr_pcur_restore_position(BTR_SEARCH_LEAF, &pcur, &mtr);
	}

	goto loop;
}
/************************************************************************
Loads definitions for table columns. */
static
void
dict_load_columns(
/*==============*/
	dict_table_t*	table,	/* in: table */
	mem_heap_t*	heap)	/* in: memory heap for temporary storage */
{
	dict_table_t*	sys_columns;
	dict_index_t*	sys_index;
	btr_pcur_t	pcur;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	byte*		buf;
	char*		name;
	ulint		mtype;
	ulint		prtype;
	ulint		col_len;
	ulint		i;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	mtr_start(&mtr);

	sys_columns = dict_table_get_low("SYS_COLUMNS");
	sys_index = UT_LIST_GET_FIRST(sys_columns->indexes);
	ut_a(!dict_table_is_comp(sys_columns));

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	buf = mem_heap_alloc(heap, 8);
	mach_write_to_8(buf, table->id);

	dfield_set_data(dfield, buf, 8);
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	for (i = 0; i + DATA_N_SYS_COLS < (ulint) table->n_cols; i++) {

		rec = btr_pcur_get_rec(&pcur);

		ut_a(btr_pcur_is_on_user_rec(&pcur));

		ut_a(!rec_get_deleted_flag(rec, 0));

		field = rec_get_nth_field_old(rec, 0, &len);
		ut_ad(len == 8);
		ut_a(ut_dulint_cmp(table->id, mach_read_from_8(field)) == 0);

		field = rec_get_nth_field_old(rec, 1, &len);
		ut_ad(len == 4);
		ut_a(i == mach_read_from_4(field));

		ut_a(name_of_col_is(sys_columns, sys_index, 4, "NAME"));

		field = rec_get_nth_field_old(rec, 4, &len);
		name = mem_heap_strdupl(heap, (char*) field, len);

		field = rec_get_nth_field_old(rec, 5, &len);
		mtype = mach_read_from_4(field);

		field = rec_get_nth_field_old(rec, 6, &len);
		prtype = mach_read_from_4(field);

		if (dtype_get_charset_coll(prtype) == 0
		    && dtype_is_string_type(mtype)) {
			/* The table was created with < 4.1.2. */

			if (dtype_is_binary_string_type(mtype, prtype)) {
				/* Use the binary collation for
				string columns of binary type. */

				prtype = dtype_form_prtype(
					prtype,
					DATA_MYSQL_BINARY_CHARSET_COLL);
			} else {
				/* Use the default charset for
				other than binary columns. */

				prtype = dtype_form_prtype(
					prtype,
					data_mysql_default_charset_coll);
			}
		}

		field = rec_get_nth_field_old(rec, 7, &len);
		col_len = mach_read_from_4(field);

		ut_a(name_of_col_is(sys_columns, sys_index, 8, "PREC"));

		dict_mem_table_add_col(table, heap, name,
				       mtype, prtype, col_len);
		btr_pcur_move_to_next_user_rec(&pcur, &mtr);
	}

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);
}
/************************************************************************
Loads definitions for index fields. */
static
void
dict_load_fields(
/*=============*/
	dict_index_t*	index,	/* in: index whose fields to load */
	mem_heap_t*	heap)	/* in: memory heap for temporary storage */
{
	dict_table_t*	sys_fields;
	dict_index_t*	sys_index;
	btr_pcur_t	pcur;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	ulint		pos_and_prefix_len;
	ulint		prefix_len;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	byte*		buf;
	ulint		i;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	mtr_start(&mtr);

	sys_fields = dict_table_get_low("SYS_FIELDS");
	sys_index = UT_LIST_GET_FIRST(sys_fields->indexes);
	ut_a(!dict_table_is_comp(sys_fields));

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	buf = mem_heap_alloc(heap, 8);
	mach_write_to_8(buf, index->id);

	dfield_set_data(dfield, buf, 8);
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	for (i = 0; i < index->n_fields; i++) {

		rec = btr_pcur_get_rec(&pcur);

		ut_a(btr_pcur_is_on_user_rec(&pcur));

		/* There could be delete marked records in SYS_FIELDS
		because SYS_FIELDS.INDEX_ID can be updated
		by ALTER TABLE ADD INDEX. */

		if (rec_get_deleted_flag(rec, 0)) {

			goto next_rec;
		}

		field = rec_get_nth_field_old(rec, 0, &len);
		ut_ad(len == 8);

		field = rec_get_nth_field_old(rec, 1, &len);
		ut_a(len == 4);

		/* The next field stores the field position in the index
		and a possible column prefix length if the index field
		does not contain the whole column. The storage format is
		like this: if there is at least one prefix field in the index,
		then the HIGH 2 bytes contain the field number (== i) and the
		low 2 bytes the prefix length for the field. Otherwise the
		field number (== i) is contained in the 2 LOW bytes. */
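		/* Illustrative example (not in the original source):
		with at least one prefix field in the index, a field at
		position 2 with a 10-byte prefix is stored as 0x0002000A;
		in an index without prefix fields the same field is
		stored simply as 2. */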

		pos_and_prefix_len = mach_read_from_4(field);

		ut_a((pos_and_prefix_len & 0xFFFFUL) == i
		     || (pos_and_prefix_len & 0xFFFF0000UL) == (i << 16));

		if ((i == 0 && pos_and_prefix_len > 0)
		    || (pos_and_prefix_len & 0xFFFF0000UL) > 0) {

			prefix_len = pos_and_prefix_len & 0xFFFFUL;
		} else {
			prefix_len = 0;
		}

		ut_a(name_of_col_is(sys_fields, sys_index, 4, "COL_NAME"));

		field = rec_get_nth_field_old(rec, 4, &len);

		dict_mem_index_add_field(index,
					 mem_heap_strdupl(heap,
							  (char*) field, len),
					 prefix_len);

next_rec:
		btr_pcur_move_to_next_user_rec(&pcur, &mtr);
	}

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);
}
/************************************************************************
Loads definitions for table indexes. Adds them to the data dictionary
cache. */
static
ulint
dict_load_indexes(
/*==============*/
				/* out: DB_SUCCESS if ok, DB_CORRUPTION
				if corruption of dictionary table or
				DB_UNSUPPORTED if table has unknown index
				type */
	dict_table_t*	table,	/* in: table */
	mem_heap_t*	heap)	/* in: memory heap for temporary storage */
{
	dict_table_t*	sys_indexes;
	dict_index_t*	sys_index;
	dict_index_t*	index;
	btr_pcur_t	pcur;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	ulint		name_len;
	char*		name_buf;
	ulint		type;
	ulint		space;
	ulint		page_no;
	ulint		n_fields;
	byte*		buf;
	ibool		is_sys_table;
	dulint		id;
	mtr_t		mtr;
	ulint		error = DB_SUCCESS;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	if ((ut_dulint_get_high(table->id) == 0)
	    && (ut_dulint_get_low(table->id) < DICT_HDR_FIRST_ID)) {
		is_sys_table = TRUE;
	} else {
		is_sys_table = FALSE;
	}

	mtr_start(&mtr);

	sys_indexes = dict_table_get_low("SYS_INDEXES");
	sys_index = UT_LIST_GET_FIRST(sys_indexes->indexes);
	ut_a(!dict_table_is_comp(sys_indexes));

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	buf = mem_heap_alloc(heap, 8);
	mach_write_to_8(buf, table->id);

	dfield_set_data(dfield, buf, 8);
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	for (;;) {
		if (!btr_pcur_is_on_user_rec(&pcur)) {

			break;
		}

		rec = btr_pcur_get_rec(&pcur);

		field = rec_get_nth_field_old(rec, 0, &len);
		ut_ad(len == 8);

		if (ut_memcmp(buf, field, len) != 0) {
			break;
		} else if (rec_get_deleted_flag(rec, 0)) {
			/* Skip delete marked records */
			goto next_rec;
		}

		field = rec_get_nth_field_old(rec, 1, &len);
		ut_ad(len == 8);
		id = mach_read_from_8(field);

		ut_a(name_of_col_is(sys_indexes, sys_index, 4, "NAME"));

		field = rec_get_nth_field_old(rec, 4, &name_len);
		name_buf = mem_heap_strdupl(heap, (char*) field, name_len);

		field = rec_get_nth_field_old(rec, 5, &len);
		n_fields = mach_read_from_4(field);

		field = rec_get_nth_field_old(rec, 6, &len);
		type = mach_read_from_4(field);

		field = rec_get_nth_field_old(rec, 7, &len);
		space = mach_read_from_4(field);

		ut_a(name_of_col_is(sys_indexes, sys_index, 8, "PAGE_NO"));

		field = rec_get_nth_field_old(rec, 8, &len);
		page_no = mach_read_from_4(field);

		/* We check for unsupported types first, so that the
		subsequent checks are relevant for the supported types. */
		if (type & ~(DICT_CLUSTERED | DICT_UNIQUE)) {

			fprintf(stderr,
				"InnoDB: Error: unknown type %lu"
				" of index %s of table %s\n",
				(ulong) type, name_buf, table->name);

			error = DB_UNSUPPORTED;
			goto func_exit;
		} else if (page_no == FIL_NULL) {

			fprintf(stderr,
				"InnoDB: Error: trying to load index %s"
				" for table %s\n"
				"InnoDB: but the index tree has been freed!\n",
				name_buf, table->name);

			error = DB_CORRUPTION;
			goto func_exit;
		} else if ((type & DICT_CLUSTERED) == 0
			   && NULL == dict_table_get_first_index(table)) {

			fputs("InnoDB: Error: trying to load index ",
			      stderr);
			ut_print_name(stderr, NULL, FALSE, name_buf);
			fputs(" for table ", stderr);
			ut_print_name(stderr, NULL, TRUE, table->name);
			fputs("\nInnoDB: but the first index"
			      " is not clustered!\n", stderr);

			error = DB_CORRUPTION;
			goto func_exit;
		} else if (is_sys_table
			   && ((type & DICT_CLUSTERED)
			       || ((table == dict_sys->sys_tables)
				   && (name_len == (sizeof "ID_IND") - 1)
				   && (0 == ut_memcmp(name_buf,
						      "ID_IND", name_len))))) {

			/* The index was created in memory already at booting
			of the database server */
		} else {
			index = dict_mem_index_create(table->name, name_buf,
						      space, type, n_fields);
			index->id = id;

			dict_load_fields(index, heap);
			error = dict_index_add_to_cache(table, index, page_no,
							FALSE);
			/* The data dictionary tables should never contain
			invalid index definitions. If we ignored this error
			and simply did not load this index definition, the
			.frm file would disagree with the index definitions
			inside InnoDB. */
			if (UNIV_UNLIKELY(error != DB_SUCCESS)) {

				goto func_exit;
			}
		}

next_rec:
		btr_pcur_move_to_next_user_rec(&pcur, &mtr);
	}

func_exit:
	btr_pcur_close(&pcur);
	mtr_commit(&mtr);

	return(error);
}
/************************************************************************
Loads a table definition and also all its index definitions, and also
the cluster definition if the table is a member in a cluster. Also loads
all foreign key constraints where the foreign key is in the table or where
a foreign key references columns in this table. Adds all these to the data
dictionary cache. */
UNIV_INTERN
dict_table_t*
dict_load_table(
/*============*/
				/* out: table, NULL if does not exist;
				if the table is stored in an .ibd file,
				but the file does not exist,
				then we set the ibd_file_missing flag TRUE
				in the table object we return */
	const char*	name)	/* in: table name in the
				databasename/tablename format */
{
	ibool		ibd_file_missing	= FALSE;
	dict_table_t*	table;
	dict_table_t*	sys_tables;
	btr_pcur_t	pcur;
	dict_index_t*	sys_index;
	dtuple_t*	tuple;
	mem_heap_t*	heap;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	ulint		space;
	ulint		n_cols;
	ulint		flags;
	ulint		err;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	heap = mem_heap_create(32000);

	mtr_start(&mtr);

	sys_tables = dict_table_get_low("SYS_TABLES");
	sys_index = UT_LIST_GET_FIRST(sys_tables->indexes);
	ut_a(!dict_table_is_comp(sys_tables));

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	dfield_set_data(dfield, name, ut_strlen(name));
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)
	    || rec_get_deleted_flag(rec, 0)) {
		/* Not found */
err_exit:
		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(NULL);
	}

	field = rec_get_nth_field_old(rec, 0, &len);

	/* Check if the table name in record is the searched one */
	if (len != ut_strlen(name) || ut_memcmp(name, field, len) != 0) {

		goto err_exit;
	}

	ut_a(name_of_col_is(sys_tables, sys_index, 9, "SPACE"));

	field = rec_get_nth_field_old(rec, 9, &len);
	space = mach_read_from_4(field);

	/* Check if the tablespace exists and has the right name */
	if (space != 0) {
		flags = dict_sys_tables_get_flags(rec);

		if (UNIV_UNLIKELY(flags == ULINT_UNDEFINED)) {
			field = rec_get_nth_field_old(rec, 5, &len);
			flags = mach_read_from_4(field);

			ut_print_timestamp(stderr);
			fputs("  InnoDB: Error: table ", stderr);
			ut_print_filename(stderr, name);
			fprintf(stderr, "\n"
				"InnoDB: in InnoDB data dictionary"
				" has unknown type %lx.\n",
				(ulong) flags);
			goto err_exit;
		}

		if (fil_space_for_table_exists_in_mem(space, name, FALSE,
						      FALSE, FALSE)) {
			/* Ok; (if we did a crash recovery then the tablespace
			can already be in the memory cache) */
		} else {
			/* In >= 4.1.9, InnoDB scans the data dictionary also
			at a normal mysqld startup. It is an error if the
			space object does not exist in memory. */

			ut_print_timestamp(stderr);
			fprintf(stderr,
				"  InnoDB: error: space object of table %s,\n"
				"InnoDB: space id %lu did not exist in memory."
				" Retrying an open.\n",
				name, (ulong) space);
			/* Try to open the tablespace */
			if (!fil_open_single_table_tablespace(
				    TRUE, space, flags, name)) {
				/* We failed to find a sensible tablespace
				file */

				ibd_file_missing = TRUE;
			}
		}
	} else {
		flags = 0;
	}

	ut_a(name_of_col_is(sys_tables, sys_index, 4, "N_COLS"));

	field = rec_get_nth_field_old(rec, 4, &len);
	n_cols = mach_read_from_4(field);

	/* The high-order bit of N_COLS is the "compact format" flag. */
	if (n_cols & 0x80000000UL) {
		flags |= DICT_TF_COMPACT;
	}
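	/* Illustrative example (not in the original source): if
	SYS_TABLES.N_COLS contains 0x80000005, the high-order bit marks
	the compact row format and n_cols & ~0x80000000UL == 5 is the
	column count passed to dict_mem_table_create() below. */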

	table = dict_mem_table_create(name, space, n_cols & ~0x80000000UL,
				      flags);

	table->ibd_file_missing = (unsigned int) ibd_file_missing;

	ut_a(name_of_col_is(sys_tables, sys_index, 3, "ID"));

	field = rec_get_nth_field_old(rec, 3, &len);
	table->id = mach_read_from_8(field);

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);

	dict_load_columns(table, heap);

	dict_table_add_to_cache(table, heap);

	mem_heap_empty(heap);

	err = dict_load_indexes(table, heap);
#ifndef UNIV_HOTBACKUP
	/* If the force recovery flag is set, we open the table irrespective
	of the error condition, since the user may want to dump data from the
	clustered index. However we load the foreign key information only if
	all indexes were loaded. */
	if (err == DB_SUCCESS) {
		err = dict_load_foreigns(table->name, TRUE);
	} else if (!srv_force_recovery) {
		dict_table_remove_from_cache(table);
		table = NULL;
	}
# if 0
	if (err != DB_SUCCESS && table != NULL) {

		mutex_enter(&dict_foreign_err_mutex);

		ut_print_timestamp(stderr);

		fprintf(stderr,
			"  InnoDB: Error: could not make a foreign key"
			" definition to match\n"
			"InnoDB: the foreign key table"
			" or the referenced table!\n"
			"InnoDB: The data dictionary of InnoDB is corrupt."
			" You may need to drop\n"
			"InnoDB: and recreate the foreign key table"
			" or the referenced table.\n"
			"InnoDB: Submit a detailed bug report"
			" to http://bugs.mysql.com\n"
			"InnoDB: Latest foreign key error printout:\n%s\n",
			dict_foreign_err_buf);

		mutex_exit(&dict_foreign_err_mutex);
	}
# endif /* 0 */
#endif /* !UNIV_HOTBACKUP */
	mem_heap_free(heap);

	return(table);
}
/***************************************************************************
Loads a table object based on the table id. */
UNIV_INTERN
dict_table_t*
dict_load_table_on_id(
/*==================*/
				/* out: table; NULL if table does not exist */
	dulint	table_id)	/* in: table id */
{
	byte		id_buf[8];
	btr_pcur_t	pcur;
	mem_heap_t*	heap;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	dict_index_t*	sys_table_ids;
	dict_table_t*	sys_tables;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	dict_table_t*	table;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	/* NOTE that the operation of this function is protected by
	the dictionary mutex, and therefore no deadlocks can occur
	with other dictionary operations. */

	mtr_start(&mtr);
	/*---------------------------------------------------*/
	/* Get the secondary index based on ID for table SYS_TABLES */
	sys_tables = dict_sys->sys_tables;
	sys_table_ids = dict_table_get_next_index(
		dict_table_get_first_index(sys_tables));
	ut_a(!dict_table_is_comp(sys_tables));
	heap = mem_heap_create(256);

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	/* Write the table id in byte format to id_buf */
	mach_write_to_8(id_buf, table_id);

	dfield_set_data(dfield, id_buf, 8);
	dict_index_copy_types(tuple, sys_table_ids, 1);

	btr_pcur_open_on_user_rec(sys_table_ids, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)
	    || rec_get_deleted_flag(rec, 0)) {
		/* Not found */

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(NULL);
	}

	/*---------------------------------------------------*/
	/* Now we have the record in the secondary index containing the
	table ID and NAME */

	rec = btr_pcur_get_rec(&pcur);
	field = rec_get_nth_field_old(rec, 0, &len);
	ut_ad(len == 8);

	/* Check if the table id in record is the one searched for */
	if (ut_dulint_cmp(table_id, mach_read_from_8(field)) != 0) {

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap);

		return(NULL);
	}

	/* Now we get the table name from the record */
	field = rec_get_nth_field_old(rec, 1, &len);
	/* Load the table definition to memory */
	table = dict_load_table(mem_heap_strdupl(heap, (char*) field, len));

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);
	mem_heap_free(heap);

	return(table);
}
/************************************************************************
This function is called when the database is booted. Loads system table
index definitions except for the clustered index which is added to the
dictionary cache at booting before calling this function. */
UNIV_INTERN
void
dict_load_sys_table(
/*================*/
	dict_table_t*	table)	/* in: system table */
{
	mem_heap_t*	heap;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	heap = mem_heap_create(1000);

	dict_load_indexes(table, heap);

	mem_heap_free(heap);
}
#ifndef UNIV_HOTBACKUP
/************************************************************************
Loads foreign key constraint col names (also for the referenced table). */
static
void
dict_load_foreign_cols(
/*===================*/
	const char*	id,	/* in: foreign constraint id as a
				null-terminated string */
	dict_foreign_t*	foreign)/* in: foreign constraint object */
{
	dict_table_t*	sys_foreign_cols;
	dict_index_t*	sys_index;
	btr_pcur_t	pcur;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	ulint		i;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	foreign->foreign_col_names = mem_heap_alloc(
		foreign->heap, foreign->n_fields * sizeof(void*));

	foreign->referenced_col_names = mem_heap_alloc(
		foreign->heap, foreign->n_fields * sizeof(void*));
	mtr_start(&mtr);

	sys_foreign_cols = dict_table_get_low("SYS_FOREIGN_COLS");
	sys_index = UT_LIST_GET_FIRST(sys_foreign_cols->indexes);
	ut_a(!dict_table_is_comp(sys_foreign_cols));

	tuple = dtuple_create(foreign->heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	dfield_set_data(dfield, id, ut_strlen(id));
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	for (i = 0; i < foreign->n_fields; i++) {

		rec = btr_pcur_get_rec(&pcur);

		ut_a(btr_pcur_is_on_user_rec(&pcur));
		ut_a(!rec_get_deleted_flag(rec, 0));

		field = rec_get_nth_field_old(rec, 0, &len);
		ut_a(len == ut_strlen(id));
		ut_a(ut_memcmp(id, field, len) == 0);

		field = rec_get_nth_field_old(rec, 1, &len);
		ut_a(len == 4);
		ut_a(i == mach_read_from_4(field));

		field = rec_get_nth_field_old(rec, 4, &len);
		foreign->foreign_col_names[i] = mem_heap_strdupl(
			foreign->heap, (char*) field, len);

		field = rec_get_nth_field_old(rec, 5, &len);
		foreign->referenced_col_names[i] = mem_heap_strdupl(
			foreign->heap, (char*) field, len);

		btr_pcur_move_to_next_user_rec(&pcur, &mtr);
	}

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);
}
/***************************************************************************
Loads a foreign key constraint to the dictionary cache. */
static
ulint
dict_load_foreign(
/*==============*/
				/* out: DB_SUCCESS or error code */
	const char*	id,	/* in: foreign constraint id as a
				null-terminated string */
	ibool		check_charsets)
				/* in: TRUE=check charset compatibility */
{
	dict_foreign_t*	foreign;
	dict_table_t*	sys_foreign;
	btr_pcur_t	pcur;
	dict_index_t*	sys_index;
	dtuple_t*	tuple;
	mem_heap_t*	heap2;
	dfield_t*	dfield;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	ulint		n_fields_and_type;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	heap2 = mem_heap_create(1000);

	mtr_start(&mtr);

	sys_foreign = dict_table_get_low("SYS_FOREIGN");
	sys_index = UT_LIST_GET_FIRST(sys_foreign->indexes);
	ut_a(!dict_table_is_comp(sys_foreign));

	tuple = dtuple_create(heap2, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	dfield_set_data(dfield, id, ut_strlen(id));
	dict_index_copy_types(tuple, sys_index, 1);

	btr_pcur_open_on_user_rec(sys_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)
	    || rec_get_deleted_flag(rec, 0)) {
		/* Not found */

		fprintf(stderr,
			"InnoDB: Error A: cannot load foreign constraint %s\n",
			id);

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap2);

		return(DB_ERROR);
	}

	field = rec_get_nth_field_old(rec, 0, &len);

	/* Check if the id in record is the searched one */
	if (len != ut_strlen(id) || ut_memcmp(id, field, len) != 0) {

		fprintf(stderr,
			"InnoDB: Error B: cannot load foreign constraint %s\n",
			id);

		btr_pcur_close(&pcur);
		mtr_commit(&mtr);
		mem_heap_free(heap2);

		return(DB_ERROR);
	}

	/* Read the table names and the number of columns associated
	with the constraint */

	mem_heap_free(heap2);

	foreign = dict_mem_foreign_create();

	n_fields_and_type = mach_read_from_4(
		rec_get_nth_field_old(rec, 5, &len));

	ut_a(len == 4);

	/* We store the type in the bits 24..29 of n_fields_and_type. */

	foreign->type = (unsigned int) (n_fields_and_type >> 24);
	foreign->n_fields = (unsigned int) (n_fields_and_type & 0x3FFUL);
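	/* Illustrative example (not in the original source): a value of
	0x01000002 in this field would describe a constraint with type
	bits 0x01 and two column pairs, since the type is taken from
	bits 24..29 and the field count from the low bits. */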

	foreign->id = mem_heap_strdup(foreign->heap, id);

	field = rec_get_nth_field_old(rec, 3, &len);
	foreign->foreign_table_name = mem_heap_strdupl(
		foreign->heap, (char*) field, len);

	field = rec_get_nth_field_old(rec, 4, &len);
	foreign->referenced_table_name = mem_heap_strdupl(
		foreign->heap, (char*) field, len);

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);

	dict_load_foreign_cols(id, foreign);

	/* If the foreign table is not yet in the dictionary cache, we
	have to load it so that we are able to make type comparisons
	in the next function call. */

	dict_table_get_low(foreign->foreign_table_name);

	/* Note that there may already be a foreign constraint object in
	the dictionary cache for this constraint: then the following
	call only sets the pointers in it to point to the appropriate table
	and index objects and frees the newly created object foreign.
	Adding to the cache should always succeed since we are not creating
	a new foreign key constraint but loading one from the data
	dictionary. */

	return(dict_foreign_add_to_cache(foreign, check_charsets));
}
/***************************************************************************
Loads foreign key constraints where the table is either the foreign key
holder or where the table is referenced by a foreign key. Adds these
constraints to the data dictionary. Note that we know that the dictionary
cache already contains all constraints where the other relevant table is
already in the dictionary cache. */
UNIV_INTERN
ulint
dict_load_foreigns(
/*===============*/
					/* out: DB_SUCCESS or error code */
	const char*	table_name,	/* in: table name */
	ibool		check_charsets)	/* in: TRUE=check charset
					compatibility */
{
	btr_pcur_t	pcur;
	mem_heap_t*	heap;
	dtuple_t*	tuple;
	dfield_t*	dfield;
	dict_index_t*	sec_index;
	dict_table_t*	sys_foreign;
	const rec_t*	rec;
	const byte*	field;
	ulint		len;
	char*		id;
	ulint		err;
	mtr_t		mtr;

	ut_ad(mutex_own(&(dict_sys->mutex)));

	sys_foreign = dict_table_get_low("SYS_FOREIGN");

	if (sys_foreign == NULL) {
		/* No foreign keys defined yet in this database */

		fprintf(stderr,
			"InnoDB: Error: no foreign key system tables"
			" in the database\n");

		return(DB_ERROR);
	}

	ut_a(!dict_table_is_comp(sys_foreign));
	mtr_start(&mtr);

	/* Get the secondary index based on FOR_NAME from table
	SYS_FOREIGN */

	sec_index = dict_table_get_next_index(
		dict_table_get_first_index(sys_foreign));
start_load:
	heap = mem_heap_create(256);

	tuple = dtuple_create(heap, 1);
	dfield = dtuple_get_nth_field(tuple, 0);

	dfield_set_data(dfield, table_name, ut_strlen(table_name));
	dict_index_copy_types(tuple, sec_index, 1);

	btr_pcur_open_on_user_rec(sec_index, tuple, PAGE_CUR_GE,
				  BTR_SEARCH_LEAF, &pcur, &mtr);
loop:
	rec = btr_pcur_get_rec(&pcur);

	if (!btr_pcur_is_on_user_rec(&pcur)) {
		/* End of index */

		goto load_next_index;
	}

	/* Now we have the record in the secondary index containing a table
	name and a foreign constraint ID */

	rec = btr_pcur_get_rec(&pcur);
	field = rec_get_nth_field_old(rec, 0, &len);

	/* Check if the table name in the record is the one searched for; the
	following call does the comparison in the latin1_swedish_ci
	charset-collation, in a case-insensitive way. */

	if (0 != cmp_data_data(dfield_get_type(dfield)->mtype,
			       dfield_get_type(dfield)->prtype,
			       dfield_get_data(dfield), dfield_get_len(dfield),
			       field, len)) {

		goto load_next_index;
	}

	/* Since table names in SYS_FOREIGN are stored in a case-insensitive
	order, we have to check that the table name matches also in a binary
	string comparison. On Unix, MySQL allows table names that only differ
	in character case. */

	if (0 != ut_memcmp(field, table_name, len)) {

		goto next_rec;
	}

	if (rec_get_deleted_flag(rec, 0)) {

		goto next_rec;
	}

	/* Now we get a foreign key constraint id */
	field = rec_get_nth_field_old(rec, 1, &len);
	id = mem_heap_strdupl(heap, (char*) field, len);

	btr_pcur_store_position(&pcur, &mtr);

	mtr_commit(&mtr);

	/* Load the foreign constraint definition to the dictionary cache */

	err = dict_load_foreign(id, check_charsets);

	if (err != DB_SUCCESS) {
		btr_pcur_close(&pcur);
		mem_heap_free(heap);

		return(err);
	}

	mtr_start(&mtr);

	btr_pcur_restore_position(BTR_SEARCH_LEAF, &pcur, &mtr);
next_rec:
	btr_pcur_move_to_next_user_rec(&pcur, &mtr);

	goto loop;

load_next_index:
	btr_pcur_close(&pcur);
	mtr_commit(&mtr);
	mem_heap_free(heap);

	sec_index = dict_table_get_next_index(sec_index);

	if (sec_index != NULL) {

		mtr_start(&mtr);

		goto start_load;
	}

	return(DB_SUCCESS);
}
#endif /* !UNIV_HOTBACKUP */