mariadb/sql/ha_heap.cc
unknown f631b361b6 Table definition cache, part 2
The table opening process now works the following way:
- Create common TABLE_SHARE object
- Read the .frm file and unpack it into the TABLE_SHARE object
- Create a TABLE object based on the information in the TABLE_SHARE
  object and open a handler to the table object (see the sketch below)
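
A minimal sketch of the new flow (illustrative only: the signatures are
simplified and error handling is omitted; the real get_table_share() and
open_table_from_share() take more arguments):

  TABLE_SHARE *share;
  TABLE table;
  /* 1) Find or create the shared definition (cached in the TDC) */
  if (!(share= get_table_share(thd, db_name, table_name)))
    return 1;
  /* 2) The share has been filled by reading and unpacking the .frm file */
  /* 3) Create a TABLE instance from the share and open its handler */
  if (open_table_from_share(thd, share, alias, &table))
  {
    release_table_share(share);
    return 1;
  }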

Other noteworthy changes:
- In TABLE_SHARE the most common strings are now LEX_STRING's
- Better error message when table is not found
- The variable 'table_cache' is now renamed to 'table_open_cache'
- New variable 'table_definition_cache' that holds the number of table definitions that will be cached
- strxnmov() calls are now fixed to avoid overflows
- strxnmov() will now always add a terminating \0 to the result
- engine objects are now created with a TABLE_SHARE object instead of a TABLE object.
- After creating a field object one must call field->init(table) before using it

- For a busy system this change will give you:
 - Less memory usage for table objects
 - Faster opening of tables (if the table has been in use or is in the table definition cache)
 - The ability to cache many table definition objects
 - Faster dropping of tables


mysql-test/mysql-test-run.sh:
  Fixed some problems with the --gdb option
  Test with both the socket and the TCP/IP port that all old servers are killed
mysql-test/r/flush_table.result:
  More tests with lock table with 2 threads + flush table
mysql-test/r/information_schema.result:
  Removed old (now wrong) result
mysql-test/r/innodb.result:
  Better error messages (thanks to TDC patch)
mysql-test/r/merge.result:
  Extra flush table test
mysql-test/r/ndb_bitfield.result:
  Better error messages (thanks to TDC patch)
mysql-test/r/ndb_partition_error.result:
  Better error messages (thanks to TDC patch)
mysql-test/r/query_cache.result:
  Remove tables left from old tests
mysql-test/r/temp_table.result:
  Test truncate with temporary tables
mysql-test/r/variables.result:
  Table_cache -> Table_open_cache
mysql-test/t/flush_table.test:
  More tests with lock table with 2 threads + flush table
mysql-test/t/merge.test:
  Extra flush table test
mysql-test/t/multi_update.test:
  Added 'sleep' to make test predictable
mysql-test/t/query_cache.test:
  Remove tables left from old tests
mysql-test/t/temp_table.test:
  Test truncate with temporary tables
mysql-test/t/variables.test:
  Table_cache -> Table_open_cache
mysql-test/valgrind.supp:
  Removed a warning that may happen because threads die in a different order
mysys/hash.c:
  Fixed wrong DBUG_PRINT
mysys/mf_dirname.c:
  More DBUG
mysys/mf_pack.c:
  Better comment
mysys/mf_tempdir.c:
  More DBUG
  Ensure that we call cleanup_dirname() on all temporary directory paths.
  
  If we don't do this, we will get a failure when comparing temporary table
  names, as in some cases the temporary table name is run through convert_dirname()
mysys/my_alloc.c:
  Indentation fix
sql/examples/ha_example.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/examples/ha_example.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/examples/ha_tina.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/examples/ha_tina.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/field.cc:
  Update for the table definition cache:
  - Field creation now takes TABLE_SHARE instead of TABLE as argument
    (This is because field definitions are now cached in TABLE_SHARE)
    When a field is created, one must now call field->init(TABLE) before using it (see the sketch below)
  - Use s->db instead of s->table_cache_key
  - Added Field::clone() to create a field in TABLE from a field in TABLE_SHARE
  - make_field() takes TABLE_SHARE as argument instead of TABLE
  - move_field() -> move_field_offset()
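  A minimal sketch of the resulting pattern (hypothetical, simplified
  arguments; the real make_field()/clone() calls take more parameters):

    /* The field is created against the shared definition ... */
    Field *field= make_field(share, /* type, length, ... */);
    /* ... and must be bound to a TABLE instance before use */
    field->init(table);
    /* A TABLE gets its own copy of a share's field via clone() */
    Field *copy= field->clone(mem_root, table);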
sql/field.h:
  Update for the table definition cache:
  - Field creation now takes TABLE_SHARE instead of TABLE as argument
    (This is because field definitions are now cached in TABLE_SHARE)
    When a field is created, one must now call field->init(TABLE) before using it
  - Added Field::clone() to create a field in TABLE from a field in TABLE_SHARE
  - make_field() takes TABLE_SHARE as argument instead of TABLE
  - move_field() -> move_field_offset()
sql/ha_archive.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_archive.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_berkeley.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Changed the name of an argument to create() so that it does not hide the internal 'table' variable.
  table->s -> table_share
sql/ha_berkeley.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_blackhole.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_blackhole.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_federated.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Fixed comments
  Removed an index variable and replaced it with pointers (simple optimization)
  move_field() -> move_field_offset()
  Removed some strlen() calls
sql/ha_federated.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_heap.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Simplified delete_table() and create() as the given file names now come without an extension
sql/ha_heap.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_innodb.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_innodb.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_myisam.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Removed an unneeded fn_format() call
  Fixed for the new table->s structure
sql/ha_myisam.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_myisammrg.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Don't set 'is_view' for MERGE tables
  Use the new interface to find_temporary_table()
sql/ha_myisammrg.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Added flag HA_NO_COPY_ON_ALTER
sql/ha_ndbcluster.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Fixed wrong calls to strxnmov()
  Give error HA_ERR_TABLE_DEF_CHANGED if the table definition has changed
  drop_table -> intern_drop_table()
  table->s -> table_share
  Moved part_info to TABLE
  Fixed comments & DBUG prints
  New arguments to print_error()
sql/ha_ndbcluster.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
sql/ha_partition.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  We can't set up or use part_info when creating the handler, as there is not yet any table object
  New ha_intialise() to work with TDC (Done by Mikael)
sql/ha_partition.h:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  Got set_part_info() from Mikael
sql/handler.cc:
  We now use TABLE_SHARE instead of TABLE when creating engine handlers
  ha_delete_table() now also takes the database as an argument
  handler::ha_open() now takes TABLE as argument
  ha_open() now calls ha_allocate_read_write_set()
  Simplified ha_allocate_read_write_set()
  Removed ha_deallocate_read_write_set()
  Use table_share (cached by the table definition cache)
sql/handler.h:
  New table flag: HA_NO_COPY_ON_ALTER (used by MERGE tables)
  Removed ha_deallocate_read_write_set()
  get_new_handler() now takes TABLE_SHARE as argument
  ha_delete_table() now gets the database as an argument
sql/item.cc:
  table_name and db are now LEX_STRING objects
  When creating fields, we now have to call field->init(table)
  move_field -> move_field_offset()
sql/item.h:
  tmp_table_field_from_field_type() now takes an extra parameter 'fixed_length' that allows one to force use of CHAR instead of BLOB
sql/item_cmpfunc.cc:
  Fixed call to tmp_table_field_from_field_type()
sql/item_create.cc:
  Assert on a new, unhandled cast type
sql/item_func.cc:
  When creating fields, we now have to call field->init(table)
  The dummy_table used by 'sp' now needs a TABLE_SHARE object
sql/item_subselect.cc:
  Trivial code cleanups
sql/item_sum.cc:
  When creating fields, we now have to call field->init(table)
sql/item_timefunc.cc:
  Item_func_str_to_date::tmp_table_field() is now replaced by a call to
   tmp_table_field_from_field_type() (see item_timefunc.h)
sql/item_timefunc.h:
  Simplified tmp_table_field()
sql/item_uniq.cc:
  When creating fields, we now have to call field->init(table)
sql/key.cc:
  Added 'KEY' argument to 'find_ref_key' to simplify code
sql/lock.cc:
  More debugging
  Use create_table_def_key() to create key for table cache
  Allocate TABLE_SHARE properly when creating name lock
  Fixed locked_table_name so that it doesn't test the same table twice
sql/mysql_priv.h:
  New functions for table definition cache
  New interfaces to a lot of functions.
  New faster interface to find_temporary_table() and close_temporary_table()
sql/mysqld.cc:
  Added support for a table definition cache of size 'table_def_size'
  Fixed some calls to strnmov()
  Changed the name of 'table_cache' to 'table_open_cache'
sql/opt_range.cc:
  Use new interfaces
  Fixed warnings from valgrind
sql/parse_file.cc:
  Safer calls to strxnmov()
  Fixed typo
sql/set_var.cc:
  Added variable 'table_definition_cache'
  Variable table_cache renamed to 'table_open_cache'
sql/slave.cc:
  Use new interface
sql/sp.cc:
  Proper use of TABLE_SHARE
sql/sp_head.cc:
  Remove compiler warnings
  We have now to call field->init(table)
sql/sp_head.h:
  Pointers to parsed strings are now const
sql/sql_acl.cc:
  table_name is now a LEX_STRING
sql/sql_base.cc:
  Main implementation of the table definition cache
  (The #ifdef's are there for the future, when the table definition cache will replace the open table cache)
  Table definitions are now cached independently of open tables, which speeds things up when a table is in use from several places at once (see the sketch below)
  Views are not yet cached; for the moment we only cache whether a table is a view or not.
  
  Faster implementation of find_temporary_table()
  Replaced 'wait_for_refresh()' with the more general function 'wait_for_condition()'
  Dropping a table is slightly faster, as we can use the table definition cache to know the type of the table
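  Conceptually, the cache is a hash of TABLE_SHARE objects keyed with the
  same db\0table\0 key as the open table cache. A simplified, hypothetical
  sketch of a lookup (the real code also handles locking, versioning and
  eviction):

    key_length= create_table_def_key(thd, key, table_list, 0);
    if (!(share= (TABLE_SHARE*) hash_search(&table_def_cache,
                                            (byte*) key, key_length)))
    {
      /* Not cached: allocate a share, then read and unpack the .frm file */
      share= alloc_table_share(table_list, key, key_length);
      open_table_def(thd, share, 0);
      my_hash_insert(&table_def_cache, (byte*) share);
    }
    share->ref_count++;             /* one more user of this definition */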
sql/sql_cache.cc:
  table_cache_key and table_name are now LEX_STRING
  DBUG print fixes
sql/sql_class.cc:
  table_cache_key is now a LEX_STRING
  safer strxnmov()
sql/sql_class.h:
  Added number of open table shares (table definitions)
sql/sql_db.cc:
  safer strxnmov()
sql/sql_delete.cc:
  Use new interface to find_temporary_table()
sql/sql_derived.cc:
  table_name is now a LEX_STRING
sql/sql_handler.cc:
  TABLE_SHARE->db and TABLE_SHARE->table_name are now LEX_STRING's
sql/sql_insert.cc:
  TABLE_SHARE->db and TABLE_SHARE->table_name are now LEX_STRING's
sql/sql_lex.cc:
  Make parsed string a const (to quickly find out if anything is trying to change the query string)
sql/sql_lex.h:
  Make parsed string a const (to quickly find out if anything is trying to change the query string)
sql/sql_load.cc:
  Safer strxnmov()
sql/sql_parse.cc:
  Better error if wrong DB name
sql/sql_partition.cc:
  part_info moved to TABLE from TABLE_SHARE
  Indentation changes
sql/sql_select.cc:
  Indentation fixes
  Call field->init(TABLE) for newly created fields
  Update create_tmp_table() to use TABLE_SHARE properly
sql/sql_select.h:
  Call field->init(TABLE) for newly created fields
sql/sql_show.cc:
  table_name is now a LEX_STRING
  part_info moved to TABLE
sql/sql_table.cc:
  Use table definition cache to speed up delete of tables
  Fixed calls to functions with new interfaces
  Don't use 'share_not_to_be_used'
  Instead of doing openfrm() when doing repair, we now have to call
  get_table_share() followed by open_table_from_share().
  Replace some fn_format() with faster unpack_filename().
  Safer strxnmov()
  part_info is now in TABLE
  Added Mikael's patch for partitioning and ALTER TABLE
  Instead of using 'TABLE_SHARE->is_view', use 'table_flags() & HA_NO_COPY_ON_ALTER'
sql/sql_test.cc:
  table_name and table_cache_key are now LEX_STRING's
sql/sql_trigger.cc:
  TABLE_SHARE->db and TABLE_SHARE->table_name are now LEX_STRING's
  safer strxnmov()
  Removed compiler warnings
sql/sql_update.cc:
  Call field->init(TABLE) after field is created
sql/sql_view.cc:
  safer strxnmov()
  Create common TABLE_SHARE object for views to allow us to cache if table is a view
sql/structs.h:
  Added SHOW_TABLE_DEFINITIONS
sql/table.cc:
  Creation and destruction of TABLE_SHARE objects that are common for many TABLE objects
  
  The table opening process now works the following way:
  - Create common TABLE_SHARE object
  - Read the .frm file and unpack it into the TABLE_SHARE object
  - Create a TABLE object based on the information in the TABLE_SHARE
    object and open a handler to the table object
  
  open_table_def() is written in such a way that it should be trivial to add parsing of .frm files in new formats
sql/table.h:
  TABLE objects for the same database table now share a common TABLE_SHARE object
  In TABLE_SHARE the most common strings are now LEX_STRING's
sql/unireg.cc:
  Changed arguments to rea_create_table() to have same order as other functions
  Call field->init(table) for newly created fields
sql/unireg.h:
  Added OPEN_VIEW
strings/strxnmov.c:
  Changed strxnmov() to always add a terminating \0
  This makes usage of strxnmov() safer, as most of the MySQL code assumes that strxnmov() will create a null-terminated string (see the example below)
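  Example of the resulting contract (a sketch; the buffer must leave room
  for the added terminator, hence the sizeof(path) - 1):

    char path[FN_REFLEN];
    /* Copies at most sizeof(path) - 1 characters and always appends
       '\0', so 'path' is guaranteed to be a null-terminated string. */
    strxnmov(path, sizeof(path) - 1, mysql_data_home, "/", db, "/", name,
             NullS);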
2005-11-23 22:45:02 +02:00


/* Copyright (C) 2000,2004 MySQL AB & MySQL Finland AB & TCX DataKonsult AB

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */

#ifdef USE_PRAGMA_IMPLEMENTATION
#pragma implementation // gcc: Class implementation
#endif
#include "mysql_priv.h"
#include <myisampack.h>
#include "ha_heap.h"
static handler *heap_create_handler(TABLE_SHARE *table);

handlerton heap_hton= {
  "MEMORY",
  SHOW_OPTION_YES,
  "Hash based, stored in memory, useful for temporary tables",
  DB_TYPE_HEAP,
  NULL,
  0,                    /* slot */
  0,                    /* savepoint size. */
  NULL,                 /* close_connection */
  NULL,                 /* savepoint */
  NULL,                 /* rollback to savepoint */
  NULL,                 /* release savepoint */
  NULL,                 /* commit */
  NULL,                 /* rollback */
  NULL,                 /* prepare */
  NULL,                 /* recover */
  NULL,                 /* commit_by_xid */
  NULL,                 /* rollback_by_xid */
  NULL,                 /* create_cursor_read_view */
  NULL,                 /* set_cursor_read_view */
  NULL,                 /* close_cursor_read_view */
  heap_create_handler,  /* Create a new handler */
  NULL,                 /* Drop a database */
  heap_panic,           /* Panic call */
  NULL,                 /* Release temporary latches */
  NULL,                 /* Update Statistics */
  NULL,                 /* Start Consistent Snapshot */
  NULL,                 /* Flush logs */
  NULL,                 /* Show status */
  NULL,                 /* Replication Report Sent Binlog */
  HTON_CAN_RECREATE
};

static handler *heap_create_handler(TABLE_SHARE *table)
{
  return new ha_heap(table);
}

/*****************************************************************************
** HEAP tables
*****************************************************************************/

ha_heap::ha_heap(TABLE_SHARE *table_arg)
  :handler(&heap_hton, table_arg), file(0), records_changed(0),
   key_stats_ok(0)
{}

static const char *ha_heap_exts[] = {
  NullS
};

const char **ha_heap::bas_ext() const
{
  return ha_heap_exts;
}

/*
  Hash index statistics are updated (copied from HP_KEYDEF::hash_buckets to
  rec_per_key) after a 1/HEAP_STATS_UPDATE_THRESHOLD fraction of the table
  records have been inserted/updated/deleted. delete_all_rows() and table
  flushes cause an immediate update.

  NOTE
    Hash index statistics must be updated when the number of table records
    changes from 0 to a non-zero value and vice versa. Otherwise
    records_in_range may erroneously return 0 and 'range' may miss records.
*/

#define HEAP_STATS_UPDATE_THRESHOLD 10
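
/*
  Example: with HEAP_STATS_UPDATE_THRESHOLD == 10, a table holding 1000
  rows has its key statistics invalidated (key_stats_ok= FALSE) once
  records_changed * 10 exceeds 1000, i.e. after roughly 100 changed rows;
  the actual refresh happens later, in ha_heap::info().
*/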

int ha_heap::open(const char *name, int mode, uint test_if_locked)
{
  if (!(file= heap_open(name, mode)) && my_errno == ENOENT)
  {
    HA_CREATE_INFO create_info;
    bzero(&create_info, sizeof(create_info));
    if (!create(name, table, &create_info))
    {
      file= heap_open(name, mode);
      implicit_emptied= 1;
    }
  }
  ref_length= sizeof(HEAP_PTR);
  if (file)
  {
    /* Initialize variables for the opened table */
    set_keys_for_scanning();
    /*
      We cannot run update_key_stats() here because we do not have a
      lock on the table. The 'records' count might just be changed
      temporarily at this moment and we might get wrong statistics (Bug
      #10178). Instead we request for update. This will be done in
      ha_heap::info(), which is always called before key statistics are
      used.
    */
    key_stats_ok= FALSE;
  }
  return (file ? 0 : 1);
}

int ha_heap::close(void)
{
  return heap_close(file);
}

/*
  Compute which keys to use for scanning

  SYNOPSIS
    set_keys_for_scanning()
    no parameter

  DESCRIPTION
    Set the bitmap btree_keys, which is used when the upper layers ask
    which keys to use for scanning. For each btree index the
    corresponding bit is set.

  RETURN
    void
*/

void ha_heap::set_keys_for_scanning(void)
{
  btree_keys.clear_all();
  for (uint i= 0 ; i < table->s->keys ; i++)
  {
    if (table->key_info[i].algorithm == HA_KEY_ALG_BTREE)
      btree_keys.set_bit(i);
  }
}
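
/*
  Refresh rec_per_key for all hash keys from the current number of hash
  buckets and mark the statistics as valid. Called when key_stats_ok is
  FALSE, via ha_heap::info().
*/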
void ha_heap::update_key_stats()
{
  for (uint i= 0; i < table->s->keys; i++)
  {
    KEY *key=table->key_info+i;
    if (!key->rec_per_key)
      continue;
    if (key->algorithm != HA_KEY_ALG_BTREE)
    {
      ha_rows hash_buckets= file->s->keydef[i].hash_buckets;
      key->rec_per_key[key->key_parts-1]=
        hash_buckets ? file->s->records/hash_buckets : 0;
    }
  }
  records_changed= 0;
  /* At the end of update_key_stats() we can proudly claim they are OK. */
  key_stats_ok= TRUE;
}

int ha_heap::write_row(byte * buf)
{
  int res;
  statistic_increment(table->in_use->status_var.ha_write_count,&LOCK_status);
  if (table->timestamp_field_type & TIMESTAMP_AUTO_SET_ON_INSERT)
    table->timestamp_field->set_time();
  if (table->next_number_field && buf == table->record[0])
    update_auto_increment();
  res= heap_write(file,buf);
  if (!res && (++records_changed*HEAP_STATS_UPDATE_THRESHOLD >
               file->s->records))
    key_stats_ok= FALSE;
  return res;
}

int ha_heap::update_row(const byte * old_data, byte * new_data)
{
  int res;
  statistic_increment(table->in_use->status_var.ha_update_count,&LOCK_status);
  if (table->timestamp_field_type & TIMESTAMP_AUTO_SET_ON_UPDATE)
    table->timestamp_field->set_time();
  res= heap_update(file,old_data,new_data);
  if (!res && ++records_changed*HEAP_STATS_UPDATE_THRESHOLD >
              file->s->records)
    key_stats_ok= FALSE;
  return res;
}

int ha_heap::delete_row(const byte * buf)
{
  int res;
  statistic_increment(table->in_use->status_var.ha_delete_count,&LOCK_status);
  res= heap_delete(file,buf);
  if (!res && table->s->tmp_table == NO_TMP_TABLE &&
      ++records_changed*HEAP_STATS_UPDATE_THRESHOLD > file->s->records)
    key_stats_ok= FALSE;
  return res;
}

int ha_heap::index_read(byte * buf, const byte * key, uint key_len,
                        enum ha_rkey_function find_flag)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_key_count,
                      &LOCK_status);
  int error = heap_rkey(file,buf,active_index, key, key_len, find_flag);
  table->status = error ? STATUS_NOT_FOUND : 0;
  return error;
}

int ha_heap::index_read_last(byte *buf, const byte *key, uint key_len)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_key_count,
                      &LOCK_status);
  int error= heap_rkey(file, buf, active_index, key, key_len,
                       HA_READ_PREFIX_LAST);
  table->status= error ? STATUS_NOT_FOUND : 0;
  return error;
}

int ha_heap::index_read_idx(byte * buf, uint index, const byte * key,
                            uint key_len, enum ha_rkey_function find_flag)
{
  statistic_increment(table->in_use->status_var.ha_read_key_count,
                      &LOCK_status);
  int error = heap_rkey(file, buf, index, key, key_len, find_flag);
  table->status = error ? STATUS_NOT_FOUND : 0;
  return error;
}

int ha_heap::index_next(byte * buf)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_next_count,
                      &LOCK_status);
  int error=heap_rnext(file,buf);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

int ha_heap::index_prev(byte * buf)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_prev_count,
                      &LOCK_status);
  int error=heap_rprev(file,buf);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

int ha_heap::index_first(byte * buf)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_first_count,
                      &LOCK_status);
  int error=heap_rfirst(file, buf, active_index);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

int ha_heap::index_last(byte * buf)
{
  DBUG_ASSERT(inited==INDEX);
  statistic_increment(table->in_use->status_var.ha_read_last_count,
                      &LOCK_status);
  int error=heap_rlast(file, buf, active_index);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

int ha_heap::rnd_init(bool scan)
{
  return scan ? heap_scan_init(file) : 0;
}

int ha_heap::rnd_next(byte *buf)
{
  statistic_increment(table->in_use->status_var.ha_read_rnd_next_count,
                      &LOCK_status);
  int error=heap_scan(file, buf);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

int ha_heap::rnd_pos(byte * buf, byte *pos)
{
  int error;
  HEAP_PTR position;
  statistic_increment(table->in_use->status_var.ha_read_rnd_count,
                      &LOCK_status);
  memcpy_fixed((char*) &position,pos,sizeof(HEAP_PTR));
  error=heap_rrnd(file, buf, position);
  table->status=error ? STATUS_NOT_FOUND: 0;
  return error;
}

void ha_heap::position(const byte *record)
{
  *(HEAP_PTR*) ref= heap_position(file);        // Ref is aligned
}

void ha_heap::info(uint flag)
{
  HEAPINFO info;
  (void) heap_info(file,&info,flag);

  records = info.records;
  deleted = info.deleted;
  errkey = info.errkey;
  mean_rec_length=info.reclength;
  data_file_length=info.data_length;
  index_file_length=info.index_length;
  max_data_file_length= info.max_records* info.reclength;
  delete_length= info.deleted * info.reclength;
  if (flag & HA_STATUS_AUTO)
    auto_increment_value= info.auto_increment;
  /*
    If info() is called for the first time after open(), we will still
    have to update the key statistics. Hoping that a table lock is now
    in place.
  */
  if (! key_stats_ok)
    update_key_stats();
}

int ha_heap::extra(enum ha_extra_function operation)
{
  return heap_extra(file,operation);
}

int ha_heap::delete_all_rows()
{
  heap_clear(file);
  if (table->s->tmp_table == NO_TMP_TABLE)
    key_stats_ok= FALSE;
  return 0;
}

int ha_heap::external_lock(THD *thd, int lock_type)
{
  return 0;                                     // No external locking
}

/*
  Disable indexes.

  SYNOPSIS
    disable_indexes()
    mode        mode of operation:
                HA_KEY_SWITCH_NONUNIQ        disable all non-unique keys
                HA_KEY_SWITCH_ALL            disable all keys
                HA_KEY_SWITCH_NONUNIQ_SAVE   dis. non-uni. and make persistent
                HA_KEY_SWITCH_ALL_SAVE       dis. all keys and make persistent

  DESCRIPTION
    Disable indexes and clear keys to use for scanning.

  IMPLEMENTATION
    HA_KEY_SWITCH_NONUNIQ is not implemented.
    HA_KEY_SWITCH_NONUNIQ_SAVE is not implemented with HEAP.
    HA_KEY_SWITCH_ALL_SAVE is not implemented with HEAP.

  RETURN
    0                     ok
    HA_ERR_WRONG_COMMAND  mode not implemented.
*/

int ha_heap::disable_indexes(uint mode)
{
  int error;

  if (mode == HA_KEY_SWITCH_ALL)
  {
    if (!(error= heap_disable_indexes(file)))
      set_keys_for_scanning();
  }
  else
  {
    /* mode not implemented */
    error= HA_ERR_WRONG_COMMAND;
  }
  return error;
}

/*
  Enable indexes.

  SYNOPSIS
    enable_indexes()
    mode        mode of operation:
                HA_KEY_SWITCH_NONUNIQ        enable all non-unique keys
                HA_KEY_SWITCH_ALL            enable all keys
                HA_KEY_SWITCH_NONUNIQ_SAVE   en. non-uni. and make persistent
                HA_KEY_SWITCH_ALL_SAVE       en. all keys and make persistent

  DESCRIPTION
    Enable indexes and set keys to use for scanning.
    The indexes might have been disabled by disable_indexes() before.
    The function works only if both data and indexes are empty,
    since the heap storage engine cannot repair the indexes.
    To be sure, call handler::delete_all_rows() before.

  IMPLEMENTATION
    HA_KEY_SWITCH_NONUNIQ is not implemented.
    HA_KEY_SWITCH_NONUNIQ_SAVE is not implemented with HEAP.
    HA_KEY_SWITCH_ALL_SAVE is not implemented with HEAP.

  RETURN
    0                     ok
    HA_ERR_CRASHED        data or index is non-empty. Delete all rows and retry.
    HA_ERR_WRONG_COMMAND  mode not implemented.
*/

int ha_heap::enable_indexes(uint mode)
{
  int error;

  if (mode == HA_KEY_SWITCH_ALL)
  {
    if (!(error= heap_enable_indexes(file)))
      set_keys_for_scanning();
  }
  else
  {
    /* mode not implemented */
    error= HA_ERR_WRONG_COMMAND;
  }
  return error;
}

/*
  Test if indexes are disabled.

  SYNOPSIS
    indexes_are_disabled()
    no parameters

  RETURN
    0  indexes are not disabled
    1  all indexes are disabled
    [2  non-unique indexes are disabled - NOT YET IMPLEMENTED]
*/

int ha_heap::indexes_are_disabled(void)
{
  return heap_indexes_are_disabled(file);
}

THR_LOCK_DATA **ha_heap::store_lock(THD *thd,
                                    THR_LOCK_DATA **to,
                                    enum thr_lock_type lock_type)
{
  if (lock_type != TL_IGNORE && file->lock.type == TL_UNLOCK)
    file->lock.type=lock_type;
  *to++= &file->lock;
  return to;
}

/*
  We have to ignore ENOENT entries as the HEAP table is created on open and
  not when doing a CREATE on the table.
*/

int ha_heap::delete_table(const char *name)
{
  int error= heap_delete_table(name);
  return error == ENOENT ? 0 : error;
}
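
/*
  Drop an open table: free the table's shared in-memory data and then
  close this handler instance.
*/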
void ha_heap::drop_table(const char *name)
{
  heap_drop_table(file);
  close();
}

int ha_heap::rename_table(const char * from, const char * to)
{
  return heap_rename(from,to);
}

ha_rows ha_heap::records_in_range(uint inx, key_range *min_key,
                                  key_range *max_key)
{
  KEY *key=table->key_info+inx;
  if (key->algorithm == HA_KEY_ALG_BTREE)
    return hp_rb_records_in_range(file, inx, min_key, max_key);

  if (!min_key || !max_key ||
      min_key->length != max_key->length ||
      min_key->length != key->key_length ||
      min_key->flag != HA_READ_KEY_EXACT ||
      max_key->flag != HA_READ_AFTER_KEY)
    return HA_POS_ERROR;                        // Can only use exact keys

  /* Assert that info() did run. We need current statistics here. */
  DBUG_ASSERT(key_stats_ok);
  return key->rec_per_key[key->key_parts-1];
}

int ha_heap::create(const char *name, TABLE *table_arg,
                    HA_CREATE_INFO *create_info)
{
  uint key, parts, mem_per_row= 0, keys= table_arg->s->keys;
  uint auto_key= 0, auto_key_type= 0;
  ha_rows max_rows;
  HP_KEYDEF *keydef;
  HA_KEYSEG *seg;
  int error;
  TABLE_SHARE *share= table_arg->s;
  bool found_real_auto_increment= 0;

  for (key= parts= 0; key < keys; key++)
    parts+= table_arg->key_info[key].key_parts;

  if (!(keydef= (HP_KEYDEF*) my_malloc(keys * sizeof(HP_KEYDEF) +
                                       parts * sizeof(HA_KEYSEG),
                                       MYF(MY_WME))))
    return my_errno;
  seg= my_reinterpret_cast(HA_KEYSEG*) (keydef + keys);
  for (key= 0; key < keys; key++)
  {
    KEY *pos= table_arg->key_info+key;
    KEY_PART_INFO *key_part= pos->key_part;
    KEY_PART_INFO *key_part_end= key_part + pos->key_parts;

    keydef[key].keysegs= (uint) pos->key_parts;
    keydef[key].flag= (pos->flags & (HA_NOSAME | HA_NULL_ARE_EQUAL));
    keydef[key].seg= seg;

    switch (pos->algorithm) {
    case HA_KEY_ALG_UNDEF:
    case HA_KEY_ALG_HASH:
      keydef[key].algorithm= HA_KEY_ALG_HASH;
      mem_per_row+= sizeof(char*) * 2; // = sizeof(HASH_INFO)
      break;
    case HA_KEY_ALG_BTREE:
      keydef[key].algorithm= HA_KEY_ALG_BTREE;
      mem_per_row+=sizeof(TREE_ELEMENT)+pos->key_length+sizeof(char*);
      break;
    default:
      DBUG_ASSERT(0); // cannot happen
    }

    for (; key_part != key_part_end; key_part++, seg++)
    {
      Field *field= key_part->field;

      if (pos->algorithm == HA_KEY_ALG_BTREE)
        seg->type= field->key_type();
      else
      {
        if ((seg->type = field->key_type()) != (int) HA_KEYTYPE_TEXT &&
            seg->type != HA_KEYTYPE_VARTEXT1 &&
            seg->type != HA_KEYTYPE_VARTEXT2 &&
            seg->type != HA_KEYTYPE_VARBINARY1 &&
            seg->type != HA_KEYTYPE_VARBINARY2)
          seg->type= HA_KEYTYPE_BINARY;
      }
      seg->start= (uint) key_part->offset;
      seg->length= (uint) key_part->length;
      seg->flag= key_part->key_part_flag;
      seg->charset= field->charset();
      if (field->null_ptr)
      {
        seg->null_bit= field->null_bit;
        seg->null_pos= (uint) (field->null_ptr - (uchar*) table_arg->record[0]);
      }
      else
      {
        seg->null_bit= 0;
        seg->null_pos= 0;
      }
      if (field->flags & AUTO_INCREMENT_FLAG &&
          table_arg->found_next_number_field &&
          key == share->next_number_index)
      {
        /*
          Store key number and type for found auto_increment key
          We have to store type as seg->type can differ from it
        */
        auto_key= key + 1;
        auto_key_type= field->key_type();
      }
    }
  }
  mem_per_row+= MY_ALIGN(share->reclength + 1, sizeof(char*));
  if (table_arg->found_next_number_field)
  {
    keydef[share->next_number_index].flag|= HA_AUTO_KEY;
    found_real_auto_increment= share->next_number_key_offset == 0;
  }
  HP_CREATE_INFO hp_create_info;
  hp_create_info.auto_key= auto_key;
  hp_create_info.auto_key_type= auto_key_type;
  hp_create_info.auto_increment= (create_info->auto_increment_value ?
                                  create_info->auto_increment_value - 1 : 0);
  hp_create_info.max_table_size= current_thd->variables.max_heap_table_size;
  hp_create_info.with_auto_increment= found_real_auto_increment;
  max_rows= (ha_rows) (hp_create_info.max_table_size / mem_per_row);
  error= heap_create(name,
                     keys, keydef, share->reclength,
                     (ulong) ((share->max_rows < max_rows &&
                               share->max_rows) ?
                              share->max_rows : max_rows),
                     (ulong) share->min_rows, &hp_create_info);
  my_free((gptr) keydef, MYF(0));
  if (file)
    info(HA_STATUS_NO_LOCK | HA_STATUS_CONST | HA_STATUS_VARIABLE);
  return (error);
}

void ha_heap::update_create_info(HA_CREATE_INFO *create_info)
{
  table->file->info(HA_STATUS_AUTO);
  if (!(create_info->used_fields & HA_CREATE_USED_AUTO))
    create_info->auto_increment_value= auto_increment_value;
}

ulonglong ha_heap::get_auto_increment()
{
  ha_heap::info(HA_STATUS_AUTO);
  return auto_increment_value;
}

bool ha_heap::check_if_incompatible_data(HA_CREATE_INFO *info,
                                         uint table_changes)
{
  /* Check that auto_increment value was not changed */
  if ((table_changes != IS_EQUAL_YES &&
       info->used_fields & HA_CREATE_USED_AUTO) &&
      info->auto_increment_value != 0)
    return COMPATIBLE_DATA_NO;
  return COMPATIBLE_DATA_YES;
}