/* Copyright 2005-2008 MySQL AB, 2008 Sun Microsystems, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; version 2 of the License.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA */

/*
  This handler was developed by Mikael Ronstrom for version 5.1 of MySQL.
  It is an abstraction layer on top of other handlers such as MyISAM,
  InnoDB, Federated, Berkeley DB and so forth. Partitioned tables can also
  be handled natively by a storage engine. The current example of this is
  NDB Cluster, which handles partitioning internally; this has the benefit
  that many loops needed in the partition handler can be avoided.

  Partitioning has an inherent property that is positive in some cases and
  negative in others: it splits the data into chunks. This makes the data
  more manageable, queries can easily be parallelised across the parts,
  and indexes are split such that there are fewer levels in the index
  trees. The inherent disadvantage is that to use a split index one has to
  scan all index parts, which is fine for large queries but can be a
  drawback for small ones.

  Partitioning lays the foundation for more manageable databases that are
  extremely large. It also lays the foundation for more parallelism in the
  execution of queries. This functionality will grow with later versions
  of MySQL.

  You can enable it by doing the following during your build process:
  ./configure --with-partition

  The partition handler is set up to use table locks. It implements a
  partition "SHARE" that is inserted into a hash by table name. You can
  use it to store state information that any partition handler object
  using the same table will be able to see.

  Please read the object definition in ha_partition.h before reading the
  rest of this file.
*/

#ifdef __GNUC__
#pragma implementation                          // gcc: Class implementation
#endif

#include "mysql_priv.h"

#ifdef WITH_PARTITION_STORAGE_ENGINE
#include "ha_partition.h"

#include <mysql/plugin.h>

static const char *ha_par_ext= ".par";
#ifdef NOT_USED
static int free_share(PARTITION_SHARE * share);
static PARTITION_SHARE *get_share(const char *table_name, TABLE * table);
#endif
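
/*
  For illustration of the "SHARE" described in the header comment: the
  PARTITION_SHARE used by get_share()/free_share() above is, in outline,
  a structure along the following lines. This is a sketch modelled on the
  example storage engine; the authoritative definition lives in
  ha_partition.h and its exact field set may differ.

    typedef struct st_partition_share
    {
      char *table_name;                  // hash key: normalized table name
      uint table_name_length, use_count; // key length and reference count
      pthread_mutex_t mutex;             // guards use_count
      THR_LOCK lock;                     // table lock shared by all handlers
    } PARTITION_SHARE;

  get_share() would look table_name up in a hash, allocating and
  registering a new share on first open; free_share() would decrement
  use_count and destroy the share when it reaches zero.
*/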


/****************************************************************************
                MODULE create/delete handler object
****************************************************************************/

static handler *partition_create_handler(handlerton *hton,
                                         TABLE_SHARE *share,
                                         MEM_ROOT *mem_root);

static uint partition_flags();
static uint alter_table_flags(uint flags);


static int partition_initialize(void *p)
{
  handlerton *partition_hton;
  partition_hton= (handlerton *)p;

  partition_hton->state= SHOW_OPTION_YES;
  partition_hton->db_type= DB_TYPE_PARTITION_DB;
  partition_hton->create= partition_create_handler;
  partition_hton->partition_flags= partition_flags;
  partition_hton->alter_table_flags= alter_table_flags;
  partition_hton->flags= HTON_NOT_USER_SELECTABLE | HTON_HIDDEN;

  return 0;
}
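
/*
  For context: partition_initialize() is the plugin init hook; it receives
  the engine's handlerton as the void *p argument. A sketch of how such a
  hook is typically registered through the plugin interface follows; the
  descriptor name, author/description strings and version number below are
  assumptions for the sketch, not necessarily this file's actual
  declaration.

    mysql_declare_plugin(partition)
    {
      MYSQL_STORAGE_ENGINE_PLUGIN,
      &partition_storage_engine,        // st_mysql_storage_engine descriptor
      "partition",
      "Mikael Ronstrom, MySQL AB",
      "Partition Storage Engine Helper",
      PLUGIN_LICENSE_GPL,
      partition_initialize,             // plugin init
      NULL,                             // plugin deinit
      0x0100,                           // version 1.0
      NULL,                             // status variables
      NULL,                             // system variables
      NULL                              // config options
    }
    mysql_declare_plugin_end;
*/
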
/*
  Create new partition handler

  SYNOPSIS
    partition_create_handler()
    hton                        Handlerton of the partition engine
    share                       Table share object
    mem_root                    MEM_ROOT to allocate the new handler from

  RETURN VALUE
    New partition object
*/

static handler *partition_create_handler(handlerton *hton,
                                         TABLE_SHARE *share,
                                         MEM_ROOT *mem_root)
{
  ha_partition *file= new (mem_root) ha_partition(hton, share);
  if (file && file->initialize_partition(mem_root))
  {
    delete file;
    file= 0;
  }
  return file;
}
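
/*
  A minimal sketch of how this factory is reached at runtime (a
  hypothetical direct call shown for illustration; in the server the call
  normally goes through get_new_handler() in handler.cc):

    handler *h= hton->create(hton, share, mem_root);
    if (h)
      h->init();    // let the handler cache table flags and other constants
*/
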
/*
  HA_CAN_PARTITION:
  Used by storage engines that can handle partitioning without this
  partition handler.
  (Partition, NDB)

  HA_CAN_UPDATE_PARTITION_KEY:
  Set if the handler can update fields that are part of the partition
  function.

  HA_CAN_PARTITION_UNIQUE:
  Set if the handler can handle unique indexes where the fields of the
  unique key are not part of the fields of the partition function. Thus
  a unique key can be set on all fields.

  HA_USE_AUTO_PARTITION:
  Set if the handler sets all tables to be partitioned by default.
*/
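
/*
  For example, an engine with native partitioning support (NDB Cluster is
  the case mentioned above) could advertise the full capability set
  roughly as below. This is a sketch assuming an engine that supports all
  four flags; it is not a copy of NDB's actual implementation.

    static uint ndbcluster_partition_flags()
    {
      return (HA_CAN_PARTITION | HA_CAN_UPDATE_PARTITION_KEY |
              HA_CAN_PARTITION_UNIQUE | HA_USE_AUTO_PARTITION);
    }
*/
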
static uint partition_flags()
{
  return HA_CAN_PARTITION;
}


static uint alter_table_flags(uint flags __attribute__((unused)))
{
  return (HA_PARTITION_FUNCTION_SUPPORTED |
          HA_FAST_CHANGE_PARTITION);
}


const uint ha_partition::NO_CURRENT_PART_ID= 0xFFFFFFFF;


/*
  Constructor method

  SYNOPSIS
    ha_partition()
    hton                        Handlerton of the partition engine
    share                       Table share object

  RETURN VALUE
    NONE
*/

ha_partition::ha_partition(handlerton *hton, TABLE_SHARE *share)
  :handler(hton, share), m_part_info(NULL), m_create_handler(FALSE),
   m_is_sub_partitioned(0)
{
  DBUG_ENTER("ha_partition::ha_partition(table)");
  init_handler_variables();
  DBUG_VOID_RETURN;
}


/*
  Constructor method

  SYNOPSIS
    ha_partition()
    part_info                   Partition info

  RETURN VALUE
    NONE
*/

ha_partition::ha_partition(handlerton *hton, partition_info *part_info)
  :handler(hton, NULL), m_part_info(part_info), m_create_handler(TRUE),
   m_is_sub_partitioned(m_part_info->is_sub_partitioned())
{
  DBUG_ENTER("ha_partition::ha_partition(part_info)");
  init_handler_variables();
  DBUG_ASSERT(m_part_info);
  DBUG_VOID_RETURN;
}


/*
|
2008-10-29 21:20:04 +01:00
|
|
|
Initialize handler object
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
init_handler_variables()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::init_handler_variables()
|
|
|
|
{
|
|
|
|
active_index= MAX_KEY;
|
2006-01-17 08:40:00 +01:00
|
|
|
m_mode= 0;
|
|
|
|
m_open_test_lock= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
m_file_buffer= NULL;
|
|
|
|
m_name_buffer_ptr= NULL;
|
|
|
|
m_engine_array= NULL;
|
|
|
|
m_file= NULL;
|
2006-07-10 23:08:42 +02:00
|
|
|
m_file_tot_parts= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
m_reorged_file= NULL;
|
2006-03-04 21:21:27 +01:00
|
|
|
m_new_file= NULL;
|
2006-01-17 08:40:00 +01:00
|
|
|
m_reorged_parts= 0;
|
|
|
|
m_added_file= NULL;
|
2005-07-18 13:31:02 +02:00
|
|
|
m_tot_parts= 0;
|
|
|
|
m_pkey_is_clustered= 0;
|
|
|
|
m_lock_type= F_UNLCK;
|
|
|
|
m_part_spec.start_part= NO_CURRENT_PART_ID;
|
|
|
|
m_scan_value= 2;
|
|
|
|
m_ref_length= 0;
|
|
|
|
m_part_spec.end_part= NO_CURRENT_PART_ID;
|
|
|
|
m_index_scan_type= partition_no_index_scan;
|
|
|
|
m_start_key.key= NULL;
|
|
|
|
m_start_key.length= 0;
|
|
|
|
m_myisam= FALSE;
|
|
|
|
m_innodb= FALSE;
|
|
|
|
m_extra_cache= FALSE;
|
|
|
|
m_extra_cache_size= 0;
|
2008-10-29 21:20:04 +01:00
|
|
|
m_handler_status= handler_not_initialized;
|
2005-07-18 13:31:02 +02:00
|
|
|
m_low_byte_first= 1;
|
|
|
|
m_part_field_array= NULL;
|
|
|
|
m_ordered_rec_buffer= NULL;
|
|
|
|
m_top_entry= NO_CURRENT_PART_ID;
|
|
|
|
m_rec_length= 0;
|
|
|
|
m_last_part= 0;
|
|
|
|
m_rec0= 0;
|
2008-10-10 12:01:01 +02:00
|
|
|
m_curr_key_info[0]= NULL;
|
|
|
|
m_curr_key_info[1]= NULL;
|
2008-10-29 21:20:04 +01:00
|
|
|
is_clone= FALSE;
|
|
|
|
auto_increment_lock= FALSE;
|
|
|
|
auto_increment_safe_stmt_log_lock= FALSE;
|
2005-11-19 02:02:27 +01:00
|
|
|
/*
|
|
|
|
this allows blackhole to work properly
|
|
|
|
*/
|
|
|
|
m_no_locks= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
#ifdef DONT_HAVE_TO_BE_INITIALIZED
|
|
|
|
m_start_key.flag= 0;
|
|
|
|
m_ordered= TRUE;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-03-31 09:02:44 +02:00
|
|
|
const char *ha_partition::table_type() const
|
|
|
|
{
|
|
|
|
// we can do this since we only support a single engine type
|
|
|
|
return m_file[0]->table_type();
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Destructor method
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
~ha_partition()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
ha_partition::~ha_partition()
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::~ha_partition()");
|
|
|
|
if (m_file != NULL)
|
|
|
|
{
|
|
|
|
uint i;
|
|
|
|
for (i= 0; i < m_tot_parts; i++)
|
|
|
|
delete m_file[i];
|
|
|
|
}
|
|
|
|
my_free((char*) m_ordered_rec_buffer, MYF(MY_ALLOW_ZERO_PTR));
|
|
|
|
|
|
|
|
clear_handler_file();
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2008-10-29 21:20:04 +01:00
|
|
|
Initialize partition handler object
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
SYNOPSIS
|
2008-10-29 21:20:04 +01:00
|
|
|
initialize_partition()
|
2006-06-04 17:52:22 +02:00
|
|
|
mem_root Allocate memory through this
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
1 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
The partition handler is only a layer on top of other engines. Thus it
|
|
|
|
can't really perform anything without the underlying handlers, so we
|
|
|
|
add this method as part of the allocation of a handler object.
|
|
|
|
|
|
|
|
1) Allocation of underlying handlers
|
|
|
|
If we have access to the partition info we will allocate one handler
|
|
|
|
instance for each partition.
|
|
|
|
2) Allocation without partition info
|
|
|
|
The cases where we don't have access to this information are when called
|
|
|
|
in preparation for delete_table and rename_table, in which case we
|
|
|
|
only need to set HA_FILE_BASED. We will then use the .par file
|
|
|
|
that contains information about the partitions and their engines and
|
|
|
|
the names of each partition.
|
|
|
|
3) Table flags initialisation
|
|
|
|
We also need to set table flags for the partition handler. This is not
|
|
|
|
static since it depends on what storage engines are used as underlying
|
|
|
|
handlers.
|
|
|
|
The table flags are set in this routine to simulate the behaviour of a
|
|
|
|
normal storage engine.
|
|
|
|
The flag HA_FILE_BASED will be set independently of the underlying handlers.
|
|
|
|
4) Index flags initialisation
|
2008-10-29 21:20:04 +01:00
|
|
|
When knowledge about the indexes exists, it is also possible to initialize the
|
|
|
|
index flags. Again, the index flags must be initialized using the underlying
|
2005-07-18 13:31:02 +02:00
|
|
|
handlers since this is storage engine dependent.
|
|
|
|
The flag HA_READ_ORDER will be reset for the time being to indicate no
|
|
|
|
ordered output is available from partition handler indexes. Later a merge
|
|
|
|
sort will be performed using the underlying handlers.
|
|
|
|
5) primary_key_is_clustered, has_transactions and low_byte_first are
|
|
|
|
calculated here.
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
2008-10-29 21:20:04 +01:00
|
|
|
bool ha_partition::initialize_partition(MEM_ROOT *mem_root)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
handler **file_array, *file;
|
2008-10-29 21:20:04 +01:00
|
|
|
ulonglong check_table_flags;
|
|
|
|
DBUG_ENTER("ha_partition::initialize_partition");
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2005-11-23 21:45:02 +01:00
|
|
|
if (m_create_handler)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-02-16 17:38:33 +01:00
|
|
|
m_tot_parts= m_part_info->get_tot_partitions();
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ASSERT(m_tot_parts > 0);
|
2006-06-04 17:52:22 +02:00
|
|
|
if (new_handlers_from_part_info(mem_root))
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(1);
|
2005-11-23 21:45:02 +01:00
|
|
|
}
|
|
|
|
else if (!table_share || !table_share->normalized_path.str)
|
|
|
|
{
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
2008-10-29 21:20:04 +01:00
|
|
|
Called with dummy table share (delete, rename and alter table).
|
|
|
|
Don't need to set up anything.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
2005-11-23 21:45:02 +01:00
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
2006-06-04 17:52:22 +02:00
|
|
|
else if (get_from_handler_file(table_share->normalized_path.str, mem_root))
|
2005-11-23 21:45:02 +01:00
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
mem_alloc_error(2);
|
2005-11-23 21:45:02 +01:00
|
|
|
DBUG_RETURN(1);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
2005-11-23 21:45:02 +01:00
|
|
|
/*
|
|
|
|
We create all underlying table handlers here. We do it in this special
|
|
|
|
method to be able to report allocation errors.
|
|
|
|
|
2008-10-29 21:20:04 +01:00
|
|
|
Set up low_byte_first, primary_key_is_clustered and
|
2005-11-23 21:45:02 +01:00
|
|
|
has_transactions since they are called often in all kinds of places;
|
|
|
|
other parameters are calculated on demand.
|
2008-10-29 21:20:04 +01:00
|
|
|
Verify that all partitions have the same table_flags.
|
2005-11-23 21:45:02 +01:00
|
|
|
*/
|
2008-10-29 21:20:04 +01:00
|
|
|
check_table_flags= m_file[0]->ha_table_flags();
|
2005-11-23 21:45:02 +01:00
|
|
|
m_low_byte_first= m_file[0]->low_byte_first();
|
|
|
|
m_pkey_is_clustered= TRUE;
|
|
|
|
file_array= m_file;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
file= *file_array;
|
|
|
|
if (m_low_byte_first != file->low_byte_first())
|
|
|
|
{
|
|
|
|
// Cannot mix handlers with different endianness
|
|
|
|
my_error(ER_MIX_HANDLER_ERROR, MYF(0));
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
}
|
|
|
|
if (!file->primary_key_is_clustered())
|
|
|
|
m_pkey_is_clustered= FALSE;
|
2008-10-29 21:20:04 +01:00
|
|
|
if (check_table_flags != file->ha_table_flags())
|
|
|
|
{
|
|
|
|
my_error(ER_MIX_HANDLER_ERROR, MYF(0));
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
}
|
2005-11-23 21:45:02 +01:00
|
|
|
} while (*(++file_array));
|
2008-10-29 21:20:04 +01:00
|
|
|
m_handler_status= handler_initialized;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE meta data changes
|
|
|
|
****************************************************************************/
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Delete a table
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
delete_table()
|
|
|
|
name Full path of table name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Used to delete a table. By the time delete_table() has been called all
|
|
|
|
opened references to this table will have been closed (and your globally
|
|
|
|
    shared references released). The variable 'name' will just be the name of
|
|
|
|
the table. You will need to remove any files you have created at this
|
|
|
|
point.
|
|
|
|
|
|
|
|
If you do not implement this, the default delete_table() is called from
|
|
|
|
    handler.cc and it will delete all files with the file extensions returned
|
|
|
|
by bas_ext().
|
|
|
|
|
|
|
|
Called from handler.cc by delete_table and ha_create_table(). Only used
|
|
|
|
during create if the table_flag HA_DROP_BEFORE_CREATE was specified for
|
|
|
|
the storage engine.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::delete_table(const char *name)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
DBUG_ENTER("ha_partition::delete_table");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if ((error= del_ren_cre_table(name, NULL, NULL, NULL)))
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
DBUG_RETURN(handler::delete_table(name));
|
|
|
|
}
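
/*
  Hedged sketch, not code from this file: the "default delete_table()"
  that the comment above refers to lives in handler.cc and conceptually
  removes one file per extension returned by bas_ext(), roughly:

    int handler::delete_table(const char *name)
    {
      int saved_error= 0;
      char buff[FN_REFLEN];
      for (const char **ext= bas_ext(); *ext; ext++)
      {
        // Build "name" + extension and remove that file
        fn_format(buff, name, "", *ext, MY_UNPACK_FILENAME | MY_APPEND_EXT);
        if (my_delete_with_symlink(buff, MYF(0)))
          saved_error= my_errno;
      }
      return saved_error;
    }

  The real implementation differs in its error handling; the point is
  only the bas_ext() loop, which is what cleans up the .par file when
  handler::delete_table() is called above.
*/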
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Rename a table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
rename_table()
|
|
|
|
from Full path of old table name
|
|
|
|
to Full path of new table name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
DESCRIPTION
|
|
|
|
    Renames a table from one name to another as part of an ALTER TABLE call.
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
If you do not implement this, the default rename_table() is called from
|
|
|
|
    handler.cc and it will rename all files with the file extensions returned
|
|
|
|
by bas_ext().
|
|
|
|
|
|
|
|
Called from sql_table.cc by mysql_rename_table().
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::rename_table(const char *from, const char *to)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
DBUG_ENTER("ha_partition::rename_table");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if ((error= del_ren_cre_table(from, to, NULL, NULL)))
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
DBUG_RETURN(handler::rename_table(from, to));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Create the handler file (.par-file)
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
create_handler_files()
|
|
|
|
name Full path of table name
|
2006-04-11 14:06:32 +02:00
|
|
|
create_info Create info generated for CREATE TABLE
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
    create_handler_files is called to create any handler-specific files
|
|
|
|
before opening the file with openfrm to later call ::create on the
|
|
|
|
file object.
|
|
|
|
In the partition handler this is used to store the names of partitions
|
|
|
|
and types of engines in the partitions.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
2006-02-01 10:06:07 +01:00
|
|
|
int ha_partition::create_handler_files(const char *path,
|
|
|
|
const char *old_path,
|
2006-04-20 03:22:35 +02:00
|
|
|
int action_flag,
|
2006-04-11 14:06:32 +02:00
|
|
|
HA_CREATE_INFO *create_info)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::create_handler_files()");
|
2005-08-19 16:26:05 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
We need to update total number of parts since we might write the handler
|
|
|
|
file as part of a partition management command
|
|
|
|
*/
|
2006-04-20 03:43:30 +02:00
|
|
|
if (action_flag == CHF_DELETE_FLAG ||
|
|
|
|
action_flag == CHF_RENAME_FLAG)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-02-01 10:06:07 +01:00
|
|
|
char name[FN_REFLEN];
|
|
|
|
char old_name[FN_REFLEN];
|
|
|
|
|
2006-02-11 06:41:52 +01:00
|
|
|
strxmov(name, path, ha_par_ext, NullS);
|
|
|
|
strxmov(old_name, old_path, ha_par_ext, NullS);
|
2006-04-16 03:49:13 +02:00
|
|
|
if ((action_flag == CHF_DELETE_FLAG &&
|
|
|
|
my_delete(name, MYF(MY_WME))) ||
|
|
|
|
(action_flag == CHF_RENAME_FLAG &&
|
|
|
|
my_rename(old_name, name, MYF(MY_WME))))
|
2006-02-01 10:06:07 +01:00
|
|
|
{
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
}
|
|
|
|
}
|
2006-04-20 03:43:30 +02:00
|
|
|
else if (action_flag == CHF_CREATE_FLAG)
|
2006-02-01 10:06:07 +01:00
|
|
|
{
|
|
|
|
if (create_handler_file(path))
|
|
|
|
{
|
|
|
|
my_error(ER_CANT_CREATE_HANDLER_FILE, MYF(0));
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
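
/*
  Hedged usage sketch (the paths below are invented; the flags are the
  ones handled above): for a table at path "./test/t1" the three action
  flags map onto the .par file as follows:

    create_handler_files("./test/t1", NULL, CHF_CREATE_FLAG, info);
      -> writes ./test/t1.par via create_handler_file()

    create_handler_files("./test/t1", "./test/t1_old", CHF_RENAME_FLAG, info);
      -> renames ./test/t1_old.par to ./test/t1.par

    create_handler_files("./test/t1", NULL, CHF_DELETE_FLAG, info);
      -> deletes ./test/t1.par
*/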
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Create a partitioned table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
create()
|
|
|
|
name Full path of table name
|
|
|
|
table_arg Table object
|
|
|
|
create_info Create info generated for CREATE TABLE
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
    create() is called to create a table. The variable 'name' will have the name
|
|
|
|
of the table. When create() is called you do not need to worry about
|
|
|
|
opening the table. Also, the FRM file will have already been created so
|
|
|
|
adjusting create_info will not do you any good. You can overwrite the frm
|
|
|
|
file at this point if you wish to change the table definition, but there
|
|
|
|
are no methods currently provided for doing that.
|
|
|
|
|
|
|
|
Called from handler.cc by ha_create_table().
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::create(const char *name, TABLE *table_arg,
|
|
|
|
HA_CREATE_INFO *create_info)
|
|
|
|
{
|
|
|
|
char t_name[FN_REFLEN];
|
|
|
|
DBUG_ENTER("ha_partition::create");
|
|
|
|
|
|
|
|
strmov(t_name, name);
|
|
|
|
DBUG_ASSERT(*fn_rext((char*)name) == '\0');
|
|
|
|
if (del_ren_cre_table(t_name, NULL, table_arg, create_info))
|
|
|
|
{
|
|
|
|
handler::delete_table(t_name);
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
}
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Drop partitions as part of ALTER TABLE of partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
drop_partitions()
|
|
|
|
path Complete path of db and table name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Failure
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Use part_info object on handler object to deduce which partitions to
|
|
|
|
drop (each partition has a state attached to it)
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::drop_partitions(const char *path)
|
|
|
|
{
|
|
|
|
List_iterator<partition_element> part_it(m_part_info->partitions);
|
|
|
|
char part_name_buff[FN_REFLEN];
|
|
|
|
uint no_parts= m_part_info->partitions.elements;
|
|
|
|
uint no_subparts= m_part_info->no_subparts;
|
|
|
|
uint i= 0;
|
|
|
|
uint name_variant;
|
2006-02-14 14:22:21 +01:00
|
|
|
int ret_error;
|
2006-02-09 20:25:10 +01:00
|
|
|
int error= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_ENTER("ha_partition::drop_partitions");
|
|
|
|
|
2008-07-11 01:14:13 +02:00
|
|
|
/*
|
|
|
|
Assert that it works without HA_FILE_BASED and lower_case_table_name = 2.
|
|
|
|
We use m_file[0] as long as all partitions have the same storage engine.
|
|
|
|
*/
|
|
|
|
DBUG_ASSERT(!strcmp(path, get_canonical_filename(m_file[0], path,
|
|
|
|
part_name_buff)));
|
2006-01-17 08:40:00 +01:00
|
|
|
do
|
|
|
|
{
|
2006-02-10 22:36:01 +01:00
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_TO_BE_DROPPED)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
handler *file;
|
|
|
|
/*
|
|
|
|
This part is to be dropped, meaning the part or all its subparts.
|
|
|
|
*/
|
|
|
|
name_variant= NORMAL_PART_NAME;
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
|
|
|
List_iterator<partition_element> sub_it(part_elem->subpartitions);
|
|
|
|
uint j= 0, part;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *sub_elem= sub_it++;
|
|
|
|
part= i * no_subparts + j;
|
|
|
|
create_subpartition_name(part_name_buff, path,
|
|
|
|
part_elem->partition_name,
|
|
|
|
sub_elem->partition_name, name_variant);
|
2006-02-10 22:36:01 +01:00
|
|
|
file= m_file[part];
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_PRINT("info", ("Drop subpartition %s", part_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(part_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-05-27 22:34:13 +02:00
|
|
|
if (deactivate_ddl_log_entry(sub_elem->log_entry->entry_pos))
|
|
|
|
error= 1;
|
2006-01-17 08:40:00 +01:00
|
|
|
} while (++j < no_subparts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
create_partition_name(part_name_buff, path,
|
|
|
|
part_elem->partition_name, name_variant,
|
|
|
|
TRUE);
|
2006-02-10 22:36:01 +01:00
|
|
|
file= m_file[i];
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_PRINT("info", ("Drop partition %s", part_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(part_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-05-27 22:34:13 +02:00
|
|
|
if (deactivate_ddl_log_entry(part_elem->log_entry->entry_pos))
|
|
|
|
error= 1;
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
if (part_elem->part_state == PART_IS_CHANGED)
|
|
|
|
part_elem->part_state= PART_NORMAL;
|
|
|
|
else
|
|
|
|
part_elem->part_state= PART_IS_DROPPED;
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
2006-05-27 22:34:13 +02:00
|
|
|
VOID(sync_ddl_log());
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
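
/*
  Illustrative example (table path and partition name are assumptions):
  for a table at "./test/t1" where partition p0 has state
  PART_TO_BE_DROPPED, one iteration of the loop above roughly does:

    create_partition_name(part_name_buff, "./test/t1", "p0",
                          NORMAL_PART_NAME, TRUE);
    // part_name_buff is now something like "./test/t1#P#p0"
    m_file[0]->ha_delete_table(part_name_buff);
    deactivate_ddl_log_entry(part_elem->log_entry->entry_pos);

  For subpartitioned tables the name also carries the subpartition part,
  e.g. "./test/t1#P#p0#SP#p0sp0".
*/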
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Rename partitions as part of ALTER TABLE of partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
rename_partitions()
|
|
|
|
path Complete path of db and table name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
TRUE Failure
|
|
|
|
FALSE Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
When reorganising partitions, adding hash partitions and coalescing
|
|
|
|
partitions it can be necessary to rename partitions while holding
|
|
|
|
an exclusive lock on the table.
|
|
|
|
    Which partitions to rename is given by the state of the partitions found in
|
|
|
|
    the partition info struct referenced from the handler object.
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::rename_partitions(const char *path)
|
|
|
|
{
|
|
|
|
List_iterator<partition_element> part_it(m_part_info->partitions);
|
|
|
|
List_iterator<partition_element> temp_it(m_part_info->temp_partitions);
|
|
|
|
char part_name_buff[FN_REFLEN];
|
|
|
|
char norm_name_buff[FN_REFLEN];
|
|
|
|
uint no_parts= m_part_info->partitions.elements;
|
|
|
|
uint part_count= 0;
|
|
|
|
uint no_subparts= m_part_info->no_subparts;
|
|
|
|
uint i= 0;
|
|
|
|
uint j= 0;
|
2006-02-09 20:25:10 +01:00
|
|
|
int error= 0;
|
2006-02-14 14:22:21 +01:00
|
|
|
int ret_error;
|
2006-01-17 08:40:00 +01:00
|
|
|
uint temp_partitions= m_part_info->temp_partitions.elements;
|
|
|
|
handler *file;
|
|
|
|
partition_element *part_elem, *sub_elem;
|
|
|
|
DBUG_ENTER("ha_partition::rename_partitions");
|
|
|
|
|
2008-07-11 01:14:13 +02:00
|
|
|
/*
|
|
|
|
Assert that it works without HA_FILE_BASED and lower_case_table_name = 2.
|
|
|
|
We use m_file[0] as long as all partitions have the same storage engine.
|
|
|
|
*/
|
|
|
|
DBUG_ASSERT(!strcmp(path, get_canonical_filename(m_file[0], path,
|
|
|
|
norm_name_buff)));
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
if (temp_partitions)
|
|
|
|
{
|
2006-02-10 22:36:01 +01:00
|
|
|
/*
|
|
|
|
These are the reorganised partitions that have already been copied.
|
|
|
|
      We delete the partitions and log the delete by deactivating the
|
|
|
|
delete log entry in the table log. We only need to synchronise
|
|
|
|
these writes before moving to the next loop since there is no
|
|
|
|
interaction among reorganised partitions, they cannot have the
|
|
|
|
same name.
|
|
|
|
*/
|
2006-01-17 08:40:00 +01:00
|
|
|
do
|
|
|
|
{
|
|
|
|
part_elem= temp_it++;
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
|
|
|
List_iterator<partition_element> sub_it(part_elem->subpartitions);
|
|
|
|
do
|
|
|
|
{
|
|
|
|
sub_elem= sub_it++;
|
|
|
|
file= m_reorged_file[part_count++];
|
|
|
|
create_subpartition_name(norm_name_buff, path,
|
|
|
|
part_elem->partition_name,
|
|
|
|
sub_elem->partition_name,
|
|
|
|
NORMAL_PART_NAME);
|
2006-02-10 22:36:01 +01:00
|
|
|
DBUG_PRINT("info", ("Delete subpartition %s", norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(sub_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
|
|
|
else
|
|
|
|
sub_elem->log_entry= NULL; /* Indicate success */
|
2006-01-17 08:40:00 +01:00
|
|
|
} while (++j < no_subparts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
file= m_reorged_file[part_count++];
|
|
|
|
create_partition_name(norm_name_buff, path,
|
|
|
|
part_elem->partition_name, NORMAL_PART_NAME,
|
|
|
|
TRUE);
|
2006-02-10 22:36:01 +01:00
|
|
|
DBUG_PRINT("info", ("Delete partition %s", norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(part_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
|
|
|
else
|
2006-02-20 22:22:19 +01:00
|
|
|
part_elem->log_entry= NULL; /* Indicate success */
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
} while (++i < temp_partitions);
|
2006-03-25 00:19:13 +01:00
|
|
|
VOID(sync_ddl_log());
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
i= 0;
|
|
|
|
do
|
|
|
|
{
|
2006-02-10 22:36:01 +01:00
|
|
|
/*
|
|
|
|
When state is PART_IS_CHANGED it means that we have created a new
|
|
|
|
TEMP partition that is to be renamed to normal partition name and
|
|
|
|
      we are to delete the old partition that currently has the normal name.
|
|
|
|
|
|
|
|
We perform this operation by
|
|
|
|
1) Delete old partition with normal partition name
|
|
|
|
2) Signal this in table log entry
|
|
|
|
3) Synch table log to ensure we have consistency in crashes
|
|
|
|
4) Rename temporary partition name to normal partition name
|
|
|
|
5) Signal this to table log entry
|
|
|
|
It is not necessary to synch the last state since a new rename
|
|
|
|
should not corrupt things if there was no temporary partition.
|
|
|
|
|
|
|
|
The only other parts we need to cater for are new parts that
|
|
|
|
replace reorganised parts. The reorganised parts were deleted
|
|
|
|
by the code above that goes through the temp_partitions list.
|
|
|
|
Thus the synch above makes it safe to simply perform step 4 and 5
|
|
|
|
for those entries.
|
|
|
|
*/
|
2006-01-17 08:40:00 +01:00
|
|
|
part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_IS_CHANGED ||
|
2006-05-27 22:34:13 +02:00
|
|
|
part_elem->part_state == PART_TO_BE_DROPPED ||
|
2006-01-17 08:40:00 +01:00
|
|
|
(part_elem->part_state == PART_IS_ADDED && temp_partitions))
|
|
|
|
{
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
|
|
|
List_iterator<partition_element> sub_it(part_elem->subpartitions);
|
|
|
|
uint part;
|
|
|
|
|
|
|
|
j= 0;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
sub_elem= sub_it++;
|
|
|
|
part= i * no_subparts + j;
|
|
|
|
create_subpartition_name(norm_name_buff, path,
|
|
|
|
part_elem->partition_name,
|
|
|
|
sub_elem->partition_name,
|
|
|
|
NORMAL_PART_NAME);
|
|
|
|
if (part_elem->part_state == PART_IS_CHANGED)
|
|
|
|
{
|
|
|
|
file= m_reorged_file[part_count++];
|
2006-02-10 22:36:01 +01:00
|
|
|
DBUG_PRINT("info", ("Delete subpartition %s", norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(sub_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
2006-03-25 00:19:13 +01:00
|
|
|
VOID(sync_ddl_log());
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
file= m_new_file[part];
|
|
|
|
create_subpartition_name(part_name_buff, path,
|
|
|
|
part_elem->partition_name,
|
|
|
|
sub_elem->partition_name,
|
|
|
|
TEMP_PART_NAME);
|
|
|
|
DBUG_PRINT("info", ("Rename subpartition from %s to %s",
|
|
|
|
part_name_buff, norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_rename_table(part_name_buff,
|
|
|
|
norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(sub_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
|
|
|
else
|
|
|
|
sub_elem->log_entry= NULL;
|
2006-01-17 08:40:00 +01:00
|
|
|
} while (++j < no_subparts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
create_partition_name(norm_name_buff, path,
|
|
|
|
part_elem->partition_name, NORMAL_PART_NAME,
|
|
|
|
TRUE);
|
|
|
|
if (part_elem->part_state == PART_IS_CHANGED)
|
|
|
|
{
|
|
|
|
file= m_reorged_file[part_count++];
|
2006-02-20 22:22:19 +01:00
|
|
|
DBUG_PRINT("info", ("Delete partition %s", norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_delete_table(norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(part_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
2006-03-25 00:19:13 +01:00
|
|
|
VOID(sync_ddl_log());
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
file= m_new_file[i];
|
|
|
|
create_partition_name(part_name_buff, path,
|
|
|
|
part_elem->partition_name, TEMP_PART_NAME,
|
|
|
|
TRUE);
|
|
|
|
DBUG_PRINT("info", ("Rename partition from %s to %s",
|
|
|
|
part_name_buff, norm_name_buff));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((ret_error= file->ha_rename_table(part_name_buff,
|
|
|
|
norm_name_buff)))
|
2006-02-14 14:22:21 +01:00
|
|
|
error= ret_error;
|
2006-03-25 00:19:13 +01:00
|
|
|
else if (deactivate_ddl_log_entry(part_elem->log_entry->entry_pos))
|
2006-02-10 22:36:01 +01:00
|
|
|
error= 1;
|
|
|
|
else
|
|
|
|
part_elem->log_entry= NULL;
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
2006-03-25 00:19:13 +01:00
|
|
|
VOID(sync_ddl_log());
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
#define OPTIMIZE_PARTS 1
|
|
|
|
#define ANALYZE_PARTS 2
|
|
|
|
#define CHECK_PARTS 3
|
|
|
|
#define REPAIR_PARTS 4
|
|
|
|
|
2008-08-11 20:02:03 +02:00
|
|
|
static const char *opt_op_name[]= {NULL,
|
|
|
|
"optimize", "analyze", "check", "repair" };
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Optimize table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
optimize()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Check/analyze/repair/optimize options
|
|
|
|
|
|
|
|
RETURN VALUES
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::optimize(THD *thd, HA_CHECK_OPT *check_opt)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::optimize");
|
|
|
|
|
2008-10-10 20:12:38 +02:00
|
|
|
DBUG_RETURN(handle_opt_partitions(thd, check_opt, OPTIMIZE_PARTS));
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Analyze table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
analyze()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Check/analyze/repair/optimize options
|
|
|
|
|
|
|
|
RETURN VALUES
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::analyze(THD *thd, HA_CHECK_OPT *check_opt)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::analyze");
|
|
|
|
|
2008-10-10 20:12:38 +02:00
|
|
|
DBUG_RETURN(handle_opt_partitions(thd, check_opt, ANALYZE_PARTS));
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Check table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
check()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Check/analyze/repair/optimize options
|
|
|
|
|
|
|
|
RETURN VALUES
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::check(THD *thd, HA_CHECK_OPT *check_opt)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::check");
|
|
|
|
|
2008-10-10 20:12:38 +02:00
|
|
|
DBUG_RETURN(handle_opt_partitions(thd, check_opt, CHECK_PARTS));
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Repair table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
repair()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Check/analyze/repair/optimize options
|
|
|
|
|
|
|
|
RETURN VALUES
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::repair(THD *thd, HA_CHECK_OPT *check_opt)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::repair");
|
|
|
|
|
2008-10-10 20:12:38 +02:00
|
|
|
DBUG_RETURN(handle_opt_partitions(thd, check_opt, REPAIR_PARTS));
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
|
2008-10-10 20:12:38 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Handle optimize/analyze/check/repair of one partition
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
handle_opt_part()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Options
|
|
|
|
file Handler object of partition
|
|
|
|
flag Optimize/Analyze/Check/Repair flag
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Failure
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int handle_opt_part(THD *thd, HA_CHECK_OPT *check_opt,
|
|
|
|
handler *file, uint flag)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
DBUG_ENTER("handle_opt_part");
|
|
|
|
DBUG_PRINT("enter", ("flag = %u", flag));
|
|
|
|
|
|
|
|
if (flag == OPTIMIZE_PARTS)
|
2007-12-20 19:16:55 +01:00
|
|
|
error= file->ha_optimize(thd, check_opt);
|
2006-01-17 08:40:00 +01:00
|
|
|
else if (flag == ANALYZE_PARTS)
|
2007-12-20 19:16:55 +01:00
|
|
|
error= file->ha_analyze(thd, check_opt);
|
2006-01-17 08:40:00 +01:00
|
|
|
else if (flag == CHECK_PARTS)
|
2006-02-17 10:37:37 +01:00
|
|
|
error= file->ha_check(thd, check_opt);
|
2006-01-17 08:40:00 +01:00
|
|
|
else if (flag == REPAIR_PARTS)
|
2006-02-17 10:37:37 +01:00
|
|
|
error= file->ha_repair(thd, check_opt);
|
2006-01-17 08:40:00 +01:00
|
|
|
else
|
|
|
|
{
|
|
|
|
DBUG_ASSERT(FALSE);
|
|
|
|
error= 1;
|
|
|
|
}
|
|
|
|
if (error == HA_ADMIN_ALREADY_DONE)
|
|
|
|
error= 0;
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
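
/*
  Hedged usage sketch (mirrors the callers further down in this file):
  each partition's handler is dispatched on the same flag, e.g.

    error= handle_opt_part(thd, check_opt, m_file[part], CHECK_PARTS);

  which ends up calling m_file[part]->ha_check(thd, check_opt), with
  HA_ADMIN_ALREADY_DONE normalised to success (0).
*/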
|
2008-08-11 20:02:03 +02:00
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
  Print a message row formatted for ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE
|
|
|
|
(modelled after mi_check_print_msg)
|
|
|
|
TODO: move this into the handler, or rewrite mysql_admin_table.
|
|
|
|
*/
|
|
|
|
static bool print_admin_msg(THD* thd, const char* msg_type,
|
|
|
|
const char* db_name, const char* table_name,
|
|
|
|
const char* op_name, const char *fmt, ...)
|
|
|
|
{
|
|
|
|
va_list args;
|
|
|
|
Protocol *protocol= thd->protocol;
|
|
|
|
uint length, msg_length;
|
|
|
|
char msgbuf[MI_MAX_MSG_BUF];
|
|
|
|
char name[NAME_LEN*2+2];
|
|
|
|
|
|
|
|
va_start(args, fmt);
|
|
|
|
msg_length= my_vsnprintf(msgbuf, sizeof(msgbuf), fmt, args);
|
|
|
|
va_end(args);
|
|
|
|
msgbuf[sizeof(msgbuf) - 1] = 0; // healthy paranoia
|
|
|
|
|
|
|
|
|
|
|
|
if (!thd->vio_ok())
|
|
|
|
{
|
|
|
|
sql_print_error(msgbuf);
|
|
|
|
return TRUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
length=(uint) (strxmov(name, db_name, ".", table_name,NullS) - name);
|
|
|
|
/*
|
|
|
|
    TODO: switch from protocol to push_warning here. The main reason we haven't
|
|
|
|
    done it yet is parallel repair, due to the following trace:
|
|
|
|
mi_check_print_msg/push_warning/sql_alloc/my_pthread_getspecific_ptr.
|
|
|
|
|
|
|
|
    Also we likely need to lock a mutex here (in both cases, with protocol and
|
|
|
|
push_warning).
|
|
|
|
*/
|
|
|
|
DBUG_PRINT("info",("print_admin_msg: %s, %s, %s, %s", name, op_name,
|
|
|
|
msg_type, msgbuf));
|
|
|
|
protocol->prepare_for_resend();
|
|
|
|
protocol->store(name, length, system_charset_info);
|
|
|
|
protocol->store(op_name, system_charset_info);
|
|
|
|
protocol->store(msg_type, system_charset_info);
|
|
|
|
protocol->store(msgbuf, msg_length, system_charset_info);
|
|
|
|
if (protocol->write())
|
|
|
|
{
|
|
|
|
sql_print_error("Failed on my_net_write, writing to stderr instead: %s\n",
|
|
|
|
msgbuf);
|
|
|
|
return TRUE;
|
|
|
|
}
|
|
|
|
return FALSE;
|
|
|
|
}
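
/*
  Hedged usage sketch (the partition name "p0" is invented): a failed
  partition-level operation is reported as one extra result-set row in
  the CHECK/REPAIR TABLE output, e.g.

    print_admin_msg(thd, "error", table_share->db.str, table->alias,
                    opt_op_name[flag], "Partition %s returned error", "p0");

  which the client sees as a (Table, Op, Msg_type, Msg_text) row such as
  "test.t1 | check | error | Partition p0 returned error".
*/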
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Handle optimize/analyze/check/repair of partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
handle_opt_partitions()
|
|
|
|
thd Thread object
|
|
|
|
check_opt Options
|
|
|
|
flag Optimize/Analyze/Check/Repair flag
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Failure
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::handle_opt_partitions(THD *thd, HA_CHECK_OPT *check_opt,
|
2008-10-10 20:12:38 +02:00
|
|
|
uint flag)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
List_iterator<partition_element> part_it(m_part_info->partitions);
|
|
|
|
uint no_parts= m_part_info->no_parts;
|
|
|
|
uint no_subparts= m_part_info->no_subparts;
|
|
|
|
uint i= 0;
|
|
|
|
int error;
|
|
|
|
DBUG_ENTER("ha_partition::handle_opt_partitions");
|
2008-10-10 20:12:38 +02:00
|
|
|
DBUG_PRINT("enter", ("flag= %u", flag));
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
2008-08-11 20:02:03 +02:00
|
|
|
/*
|
|
|
|
      When running ALTER TABLE <CMD> PARTITION ...
|
|
|
|
      it should only operate on the named partitions, otherwise on all partitions
|
|
|
|
*/
|
2008-10-10 20:12:38 +02:00
|
|
|
if (!(thd->lex->alter_info.flags & ALTER_ADMIN_PARTITION) ||
|
2008-08-11 20:02:03 +02:00
|
|
|
part_elem->part_state == PART_CHANGED)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
2008-08-11 20:02:03 +02:00
|
|
|
List_iterator<partition_element> subpart_it(part_elem->subpartitions);
|
|
|
|
partition_element *sub_elem;
|
2006-01-17 08:40:00 +01:00
|
|
|
uint j= 0, part;
|
|
|
|
do
|
|
|
|
{
|
2008-08-11 20:02:03 +02:00
|
|
|
sub_elem= subpart_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
part= i * no_subparts + j;
|
2008-08-11 20:02:03 +02:00
|
|
|
DBUG_PRINT("info", ("Optimize subpartition %u (%s)",
|
|
|
|
part, sub_elem->partition_name));
|
|
|
|
#ifdef NOT_USED
|
|
|
|
if (print_admin_msg(thd, "note", table_share->db.str, table->alias,
|
|
|
|
opt_op_name[flag],
|
|
|
|
"Start to operate on subpartition %s",
|
|
|
|
sub_elem->partition_name))
|
|
|
|
DBUG_RETURN(HA_ADMIN_INTERNAL_ERROR);
|
|
|
|
#endif
|
2006-01-17 08:40:00 +01:00
|
|
|
if ((error= handle_opt_part(thd, check_opt, m_file[part], flag)))
|
|
|
|
{
|
2008-08-11 20:02:03 +02:00
|
|
|
            /* print a line stating which partition the error belongs to */
|
|
|
|
if (error != HA_ADMIN_NOT_IMPLEMENTED &&
|
|
|
|
error != HA_ADMIN_ALREADY_DONE &&
|
|
|
|
error != HA_ADMIN_TRY_ALTER)
|
|
|
|
{
|
|
|
|
print_admin_msg(thd, "error", table_share->db.str, table->alias,
|
|
|
|
opt_op_name[flag],
|
|
|
|
"Subpartition %s returned error",
|
|
|
|
sub_elem->partition_name);
|
|
|
|
}
|
2006-05-23 13:39:35 +02:00
|
|
|
DBUG_RETURN(error);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
} while (++j < no_subparts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2008-08-11 20:02:03 +02:00
|
|
|
DBUG_PRINT("info", ("Optimize partition %u (%s)", i,
|
|
|
|
part_elem->partition_name));
|
|
|
|
#ifdef NOT_USED
|
|
|
|
if (print_admin_msg(thd, "note", table_share->db.str, table->alias,
|
|
|
|
opt_op_name[flag],
|
|
|
|
"Start to operate on partition %s",
|
|
|
|
part_elem->partition_name))
|
|
|
|
DBUG_RETURN(HA_ADMIN_INTERNAL_ERROR);
|
|
|
|
#endif
|
2006-01-17 08:40:00 +01:00
|
|
|
if ((error= handle_opt_part(thd, check_opt, m_file[i], flag)))
|
|
|
|
{
|
2008-08-11 20:02:03 +02:00
|
|
|
          /* print a line stating which partition the error belongs to */
|
|
|
|
if (error != HA_ADMIN_NOT_IMPLEMENTED &&
|
|
|
|
error != HA_ADMIN_ALREADY_DONE &&
|
|
|
|
error != HA_ADMIN_TRY_ALTER)
|
|
|
|
{
|
|
|
|
print_admin_msg(thd, "error", table_share->db.str, table->alias,
|
|
|
|
opt_op_name[flag], "Partition %s returned error",
|
|
|
|
part_elem->partition_name);
|
|
|
|
}
|
2006-05-23 13:39:35 +02:00
|
|
|
DBUG_RETURN(error);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
}
|
|
|
|
|
2008-07-07 17:54:42 +02:00
|
|
|
|
|
|
|
/**
|
|
|
|
  @brief Check and repair the table if necessary
|
|
|
|
|
|
|
|
@param thd Thread object
|
|
|
|
|
|
|
|
@retval TRUE Error/Not supported
|
|
|
|
@retval FALSE Success
|
|
|
|
*/
|
|
|
|
|
|
|
|
bool ha_partition::check_and_repair(THD *thd)
|
|
|
|
{
|
|
|
|
handler **file= m_file;
|
|
|
|
DBUG_ENTER("ha_partition::check_and_repair");
|
|
|
|
|
|
|
|
do
|
|
|
|
{
|
|
|
|
if ((*file)->ha_check_and_repair(thd))
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
  @brief Check if the table can be automatically repaired
|
|
|
|
|
|
|
|
@retval TRUE Can be auto repaired
|
|
|
|
@retval FALSE Cannot be auto repaired
|
|
|
|
*/
|
|
|
|
|
|
|
|
bool ha_partition::auto_repair() const
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::auto_repair");
|
|
|
|
|
|
|
|
/*
|
|
|
|
As long as we only support one storage engine per table,
|
|
|
|
we can use the first partition for this function.
|
|
|
|
*/
|
|
|
|
DBUG_RETURN(m_file[0]->auto_repair());
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
  @brief Check if the table is crashed
|
|
|
|
|
|
|
|
@retval TRUE Crashed
|
|
|
|
@retval FALSE Not crashed
|
|
|
|
*/
|
|
|
|
|
|
|
|
bool ha_partition::is_crashed() const
|
|
|
|
{
|
|
|
|
handler **file= m_file;
|
|
|
|
DBUG_ENTER("ha_partition::is_crashed");
|
|
|
|
|
|
|
|
do
|
|
|
|
{
|
|
|
|
if ((*file)->is_crashed())
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Prepare by creating a new partition
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
prepare_new_partition()
|
|
|
|
table Table object
|
|
|
|
create_info Create info from CREATE TABLE
|
|
|
|
file Handler object of new partition
|
|
|
|
part_name partition name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
2007-08-13 15:11:25 +02:00
|
|
|
int ha_partition::prepare_new_partition(TABLE *tbl,
|
2006-01-17 08:40:00 +01:00
|
|
|
HA_CREATE_INFO *create_info,
|
2006-07-05 18:57:23 +02:00
|
|
|
handler *file, const char *part_name,
|
|
|
|
partition_element *p_elem)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
int error;
|
|
|
|
bool create_flag= FALSE;
|
|
|
|
DBUG_ENTER("prepare_new_partition");
|
|
|
|
|
2007-08-13 15:11:25 +02:00
|
|
|
if ((error= set_up_table_before_create(tbl, part_name, create_info,
|
2006-08-07 12:22:08 +02:00
|
|
|
0, p_elem)))
|
|
|
|
goto error;
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((error= file->ha_create(part_name, tbl, create_info)))
|
2006-01-17 08:40:00 +01:00
|
|
|
goto error;
|
|
|
|
create_flag= TRUE;
|
2007-08-13 15:11:25 +02:00
|
|
|
if ((error= file->ha_open(tbl, part_name, m_mode, m_open_test_lock)))
|
2006-01-17 08:40:00 +01:00
|
|
|
goto error;
|
2007-07-25 16:56:17 +02:00
|
|
|
/*
|
|
|
|
Note: if you plan to add another call that may return failure,
|
|
|
|
better to do it before external_lock() as cleanup_new_partition()
|
|
|
|
    assumes that external_lock() is the last call that may fail here.
|
|
|
|
Otherwise see description for cleanup_new_partition().
|
|
|
|
*/
|
Bug#38804: Query deadlock causes all tables to be inaccessible.
The problem was a mutex added in bug #27405 for solving a problem
with auto_increment in partitioned InnoDB tables
(in ha_partition::write_row over partitions file->ha_write_row).
The solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, not handling
multi-row INSERTs properly, etc.)
Changed the auto_increment handling for partitioning:
Added a ha_data variable in table_share for storage engine specific data
such as auto_increment value handling in partitioning, also see WL 4305
and using the ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in
the TABLE_SHARE and use a mutex to lock it for reading, updating
and unlocking it in one block, only accessing all partitions
when it is not yet initialized.
Also allow reservations of ranges, and if no one has done a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL 3146).
The lock is kept from the first reservation if it is statement based
replication and a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (like INSERT SELECT,
LOAD DATA; unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need to have mutex
protection around write_row in all cases)
and work with any local storage engine.
2008-09-08 15:30:01 +02:00
|
|
|
if ((error= file->ha_external_lock(ha_thd(), m_lock_type)))
|
2006-01-17 08:40:00 +01:00
|
|
|
goto error;
|
|
|
|
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
error:
|
|
|
|
if (create_flag)
|
2007-12-20 19:16:55 +01:00
|
|
|
VOID(file->ha_delete_table(part_name));
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Cleanup by removing all created partitions after error
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
cleanup_new_partition()
|
|
|
|
part_count Number of partitions to remove
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
|
|
|
|
DESCRIPTION
|
2007-07-25 16:56:17 +02:00
|
|
|
This function is called immediately after prepare_new_partition() in
|
|
|
|
case the latter fails.
|
|
|
|
|
|
|
|
In prepare_new_partition() last call that may return failure is
|
|
|
|
external_lock(). That means if prepare_new_partition() fails,
|
|
|
|
partition does not have external lock. Thus no need to call
|
|
|
|
external_lock(F_UNLCK) here.
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
TODO:
|
|
|
|
    We must ensure that, in the case that we get an error during the process,
|
|
|
|
    we call external_lock with F_UNLCK, close the table and delete the
|
|
|
|
table in the case where we have been successful with prepare_handler.
|
|
|
|
We solve this by keeping an array of successful calls to prepare_handler
|
|
|
|
which can then be used to undo the call.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
void ha_partition::cleanup_new_partition(uint part_count)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
handler **save_m_file= m_file;
|
|
|
|
DBUG_ENTER("ha_partition::cleanup_new_partition");
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
if (m_added_file && m_added_file[0])
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
m_file= m_added_file;
|
|
|
|
m_added_file= NULL;
|
|
|
|
|
|
|
|
/* delete_table also needed, a bit more complex */
|
|
|
|
close();
|
|
|
|
|
|
|
|
m_added_file= m_file;
|
|
|
|
m_file= save_m_file;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_VOID_RETURN;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
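
/*
  Note on the pointer juggling above (a description, not new code):
  close() iterates over whatever m_file points to, so the function
  temporarily aliases m_file to the partially created partitions,
  closes them, and then restores the original array:

    m_file= m_added_file;   // let close() see only the new partitions
    close();
    m_file= save_m_file;    // restore the table's real handler array
*/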
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Implement the partition changes defined by ALTER TABLE of partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
change_partitions()
|
|
|
|
create_info HA_CREATE_INFO object describing all
|
|
|
|
fields and indexes in table
|
|
|
|
path Complete path of db and table name
|
|
|
|
out: copied Output parameter where number of copied
|
|
|
|
records are added
|
|
|
|
out: deleted Output parameter where number of deleted
|
|
|
|
records are added
|
|
|
|
pack_frm_data Reference to packed frm file
|
|
|
|
pack_frm_len Length of packed frm file
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Failure
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
    Add, and copy if needed, a number of partitions; during this operation
|
|
|
|
no other operation is ongoing in the server. This is used by
|
|
|
|
    all types of ADD PARTITION as well as REORGANIZE PARTITION. For
|
|
|
|
    one-phased implementations it is also used by DROP and COALESCE
|
|
|
|
PARTITIONs.
|
|
|
|
    One-phased implementations need the new frm file; other handlers will
|
|
|
|
get zero length and a NULL reference here.
|
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::change_partitions(HA_CREATE_INFO *create_info,
|
|
|
|
const char *path,
|
2008-09-08 15:30:01 +02:00
|
|
|
ulonglong * const copied,
|
|
|
|
ulonglong * const deleted,
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Change my_string to char *
- Change my_size_t to size_t
- Change size_s to size_t
Removed declaration of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings was changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint
- Functions that used a pointer to a string length was changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover and handler::discover()
the type for argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some cast to my_multi_malloc() arguments for safety (as string lengths
needs to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitly as this conflict was often hidden by casting the function to
hash_get_key).
- Changed some buffers to memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not be depending on uint size
(portability fix)
- Removed Windows-specific code to restore cursor position as this
causes slowdown on windows and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated function comment to
reflect this. Changed function that depended on original behavior of
my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00
|
|
|
const uchar *pack_frm_data
|
2006-01-17 08:40:00 +01:00
|
|
|
__attribute__((unused)),
|
2007-05-10 11:59:39 +02:00
|
|
|
size_t pack_frm_len
|
2006-01-17 08:40:00 +01:00
|
|
|
__attribute__((unused)))
|
2005-08-19 16:26:05 +02:00
|
|
|
{
|
|
|
|
List_iterator<partition_element> part_it(m_part_info->partitions);
|
2006-01-17 08:40:00 +01:00
|
|
|
List_iterator <partition_element> t_it(m_part_info->temp_partitions);
|
2005-08-19 16:26:05 +02:00
|
|
|
char part_name_buff[FN_REFLEN];
|
2006-01-17 08:40:00 +01:00
|
|
|
uint no_parts= m_part_info->partitions.elements;
|
|
|
|
uint no_subparts= m_part_info->no_subparts;
|
|
|
|
uint i= 0;
|
2006-07-10 23:08:42 +02:00
|
|
|
uint no_remain_partitions, part_count, orig_count;
|
2006-01-17 08:40:00 +01:00
|
|
|
handler **new_file_array;
|
2005-08-19 16:26:05 +02:00
|
|
|
int error= 1;
|
2006-01-17 08:40:00 +01:00
|
|
|
bool first;
|
|
|
|
uint temp_partitions= m_part_info->temp_partitions.elements;
|
2008-09-08 15:30:01 +02:00
|
|
|
THD *thd= ha_thd();
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_ENTER("ha_partition::change_partitions");
|
|
|
|
|
2008-07-11 01:14:13 +02:00
|
|
|
/*
|
|
|
|
Assert that it works without HA_FILE_BASED and lower_case_table_name = 2.
|
|
|
|
We use m_file[0] as long as all partitions have the same storage engine.
|
|
|
|
*/
|
|
|
|
DBUG_ASSERT(!strcmp(path, get_canonical_filename(m_file[0], path,
|
|
|
|
part_name_buff)));
|
2006-01-17 08:40:00 +01:00
|
|
|
m_reorged_parts= 0;
|
2006-02-16 17:38:33 +01:00
|
|
|
if (!m_part_info->is_sub_partitioned())
|
2006-01-17 08:40:00 +01:00
|
|
|
no_subparts= 1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
Step 1:
|
|
|
|
Calculate number of reorganised partitions and allocate space for
|
|
|
|
their handler references.
|
|
|
|
*/
|
|
|
|
if (temp_partitions)
|
|
|
|
{
|
|
|
|
m_reorged_parts= temp_partitions * no_subparts;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_CHANGED ||
|
|
|
|
part_elem->part_state == PART_REORGED_DROPPED)
|
|
|
|
{
|
|
|
|
m_reorged_parts+= no_subparts;
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
|
|
|
}
|
|
|
|
if (m_reorged_parts &&
|
2006-07-10 23:08:42 +02:00
|
|
|
!(m_reorged_file= (handler**)sql_calloc(sizeof(handler*)*
|
2006-01-17 08:40:00 +01:00
|
|
|
(m_reorged_parts + 1))))
|
|
|
|
{
|
2006-07-10 23:08:42 +02:00
|
|
|
mem_alloc_error(sizeof(handler*)*(m_reorged_parts+1));
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(ER_OUTOFMEMORY);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
Step 2:
|
|
|
|
Calculate number of partitions after change and allocate space for
|
|
|
|
their handler references.
|
|
|
|
*/
|
|
|
|
no_remain_partitions= 0;
|
|
|
|
if (temp_partitions)
|
|
|
|
{
|
|
|
|
no_remain_partitions= no_parts * no_subparts;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
part_it.rewind();
|
|
|
|
i= 0;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_NORMAL ||
|
|
|
|
part_elem->part_state == PART_TO_BE_ADDED ||
|
|
|
|
part_elem->part_state == PART_CHANGED)
|
|
|
|
{
|
|
|
|
no_remain_partitions+= no_subparts;
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
|
|
|
}
|
|
|
|
if (!(new_file_array= (handler**)sql_calloc(sizeof(handler*)*
|
|
|
|
(2*(no_remain_partitions + 1)))))
|
|
|
|
{
|
|
|
|
mem_alloc_error(sizeof(handler*)*2*(no_remain_partitions+1));
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(ER_OUTOFMEMORY);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
m_added_file= &new_file_array[no_remain_partitions + 1];
|
|
|
|
|
|
|
|
/*
|
|
|
|
Step 3:
|
|
|
|
Fill m_reorged_file with handler references and NULL at the end
|
|
|
|
*/
|
|
|
|
if (m_reorged_parts)
|
|
|
|
{
|
|
|
|
i= 0;
|
|
|
|
part_count= 0;
|
|
|
|
first= TRUE;
|
|
|
|
part_it.rewind();
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_CHANGED ||
|
|
|
|
part_elem->part_state == PART_REORGED_DROPPED)
|
|
|
|
{
|
|
|
|
memcpy((void*)&m_reorged_file[part_count],
|
|
|
|
(void*)&m_file[i*no_subparts],
|
|
|
|
sizeof(handler*)*no_subparts);
|
|
|
|
part_count+= no_subparts;
|
|
|
|
}
|
|
|
|
else if (first && temp_partitions &&
|
|
|
|
part_elem->part_state == PART_TO_BE_ADDED)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
When doing an ALTER TABLE REORGANIZE PARTITION a number of
|
|
|
|
partitions is to be reorganised into a set of new partitions.
|
|
|
|
The reorganised partitions are in this case in the temp_partitions
|
|
|
|
list. We copy all of them in one batch and thus we only do this
|
|
|
|
until we find the first partition with state PART_TO_BE_ADDED
|
|
|
|
since this is where the new partitions go in and where the old
|
|
|
|
ones used to be.
|
|
|
|
*/
|
|
|
|
first= FALSE;
|
2006-08-08 00:33:12 +02:00
|
|
|
DBUG_ASSERT(((i*no_subparts) + m_reorged_parts) <= m_file_tot_parts);
|
2006-01-17 08:40:00 +01:00
|
|
|
memcpy((void*)m_reorged_file, &m_file[i*no_subparts],
|
2006-08-08 00:33:12 +02:00
|
|
|
sizeof(handler*)*m_reorged_parts);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
Step 4:
|
|
|
|
Fill new_array_file with handler references. Create the handlers if
|
|
|
|
needed.
|
|
|
|
*/
|
|
|
|
i= 0;
|
|
|
|
part_count= 0;
|
2006-07-10 23:08:42 +02:00
|
|
|
orig_count= 0;
|
2006-09-04 17:18:34 +02:00
|
|
|
first= TRUE;
|
2006-01-17 08:40:00 +01:00
|
|
|
part_it.rewind();
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_NORMAL)
|
|
|
|
{
|
2006-07-10 23:08:42 +02:00
|
|
|
DBUG_ASSERT(orig_count + no_subparts <= m_file_tot_parts);
|
|
|
|
memcpy((void*)&new_file_array[part_count], (void*)&m_file[orig_count],
|
2006-01-17 08:40:00 +01:00
|
|
|
sizeof(handler*)*no_subparts);
|
|
|
|
part_count+= no_subparts;
|
2006-07-10 23:08:42 +02:00
|
|
|
orig_count+= no_subparts;
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
else if (part_elem->part_state == PART_CHANGED ||
|
|
|
|
part_elem->part_state == PART_TO_BE_ADDED)
|
|
|
|
{
|
|
|
|
uint j= 0;
|
|
|
|
do
|
|
|
|
{
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes
Changes that require code changes in other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite),
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
do statement-specific cleanups.
(The only case where it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all context
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmap after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. as the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For not DBUG binaries, the dbug_tmp_restore_column_map() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away be the compiler).
- If one needs to temporary set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This requires some trivial variable names in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true.
(stats.records is not supposed to be an exact value. It's only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
but we don't have a primary key. This allows the handler to take precisions
in remembering any hidden primary key to able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when when reading rows as part of a DELETE
statement. In case of an update we will mark
all keys for read for which key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed not used variable handler::sortkey
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is updated on engine creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bit maps. (This found a LOT of bugs in current
column marking code).
- In 5.1 before, all used columns was marked in read_set and only updated
columns was marked in write_set. Now we only mark columns for which we
need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage)
- thd->set_query_id is renamed to thd->mark_used_columns and instead
of setting this to an integer value, this has now the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
Changed also all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
doing the sort and choosing the rows.
- The TABLE_SHARE object has a 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The table object has also two pointer to bitmaps read_set and write_set
that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added extra argument to Item::walk() to indicate if we should also
traverse sub queries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for some few cases where we need
to setup new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the momement
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as use in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, but not signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_delete() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in column usage maps if handler so requires.
(The handler indicates what it neads in handler->table_flags())
- table->prepare_for_position() to allow us to tell handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function)
- table->mark_auto_increment_column() to tell handler are going to update
columns part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that is part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow
it to quickly know that it only needs to read colums that are part
of the key. (The handler can also use the column map for detecting this,
but simpler/faster handler can just monitor the extra() call).
- table->mark_columns_used_by_index_no_reset() to in addition to other columns,
also mark all columns that is used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
in table->read_map. Used by filesort() to mark all used columns
- Maintain in TABLE->merge_keys set of all keys that are used in query.
(Simplices some optimization loops)
- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
but the field in the clustered key is not assumed to be part of all index.
(used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map()
tmp_use_all_columns() and tmp_restore_column_map() functions to temporally
mark all columns as usable. The 'dbug_' version is primarily intended
inside a handler when it wants to just call Field:store() & Field::val()
functions, but don't need the column maps set for any other usage.
(ie:: bitmap_is_set() is never called)
- We can't use compare_records() to skip updates for handlers that returns
a partial column set and the read_set doesn't cover all columns in the
write set. The reason for this is that if we have a column marked only for
write we can't in the MySQL level know if the value changed or not.
The reason this worked before was that MySQL marked all to be written
columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
bug'.
- open_table_from_share() does not anymore setup temporary MEM_ROOT
object as a thread specific variable for the handler. Instead we
send the to-be-used MEMROOT to get_new_handler().
(Simpler, faster code)
Bugs fixed:
- Column marking was not done correctly in a lot of cases.
(ALTER TABLE, when using triggers, auto_increment fields etc)
(Could potentially result in wrong values inserted in table handlers
relying on that the old column maps or field->set_query_id was correct)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose
some warnings about
"Some non-transactional changed tables couldn't be rolled back")
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
crash)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in slow log.
- If we get an duplicate row on insert, change column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handler that doesn't support NULL in keys, we would give an error
when creating a primary key with NULL fields, even after the fields has been
automaticly converted to NOT NULL.
- Creating a primary key on a SPATIAL key, would fail if field was not
declared as NOT NULL.
Cleanups:
- Removed not used condition argument to setup_tables
- Removed not needed item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
row pointers (not used columns). This allowed me to remove some references
to sql_command in filesort and should also enable us to return column
results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up found too long lines
- Moved some variable declaration at start of function for better code
readability.
- Removed some not used arguments from functions.
(setup_fields(), mysql_prepare_insert_check_table())
- setup_fields() now takes an enum instead of an int for marking columns
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enum's and define's.
- Using separate column read and write sets allows for easier checking
of timestamp field was set by statement.
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset()
- Don't build table->normalized_path as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
do comparision with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this throughly).
Lars has promosed to do this.
2006-06-04 17:52:22 +02:00
|
|
|
if (!(new_file_array[part_count++]=
|
|
|
|
get_new_handler(table->s,
|
|
|
|
thd->mem_root,
|
|
|
|
part_elem->engine_type)))
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
mem_alloc_error(sizeof(handler));
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(ER_OUTOFMEMORY);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
} while (++j < no_subparts);
|
2006-09-04 17:18:34 +02:00
|
|
|
if (part_elem->part_state == PART_CHANGED)
|
|
|
|
orig_count+= no_subparts;
|
|
|
|
else if (temp_partitions && first)
|
|
|
|
{
|
|
|
|
orig_count+= (no_subparts * temp_partitions);
|
|
|
|
first= FALSE;
|
|
|
|
}
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
2006-09-04 17:18:34 +02:00
|
|
|
first= FALSE;
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Step 5:
|
|
|
|
Create the new partitions and also open, lock and call external_lock
|
|
|
|
on them to prepare them for the copy phase and for later close
|
|
|
|
calls
|
|
|
|
*/
|
|
|
|
i= 0;
|
|
|
|
part_count= 0;
|
|
|
|
part_it.rewind();
|
2005-08-19 16:26:05 +02:00
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
if (part_elem->part_state == PART_TO_BE_ADDED ||
|
|
|
|
part_elem->part_state == PART_CHANGED)
|
2005-08-19 16:26:05 +02:00
|
|
|
{
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
A new partition needs to be created. PART_TO_BE_ADDED means an
|
|
|
|
entirely new partition and PART_CHANGED means a changed partition
|
|
|
|
that will still exist with either more or less data in it.
|
2005-08-19 16:26:05 +02:00
|
|
|
*/
|
2006-01-17 08:40:00 +01:00
|
|
|
uint name_variant= NORMAL_PART_NAME;
|
|
|
|
if (part_elem->part_state == PART_CHANGED ||
|
|
|
|
(part_elem->part_state == PART_TO_BE_ADDED && temp_partitions))
|
|
|
|
name_variant= TEMP_PART_NAME;
|
2006-02-16 17:38:33 +01:00
|
|
|
if (m_part_info->is_sub_partitioned())
|
2005-08-19 16:26:05 +02:00
|
|
|
{
|
|
|
|
List_iterator<partition_element> sub_it(part_elem->subpartitions);
|
|
|
|
uint j= 0, part;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *sub_elem= sub_it++;
|
|
|
|
create_subpartition_name(part_name_buff, path,
|
|
|
|
part_elem->partition_name,
|
2006-01-17 08:40:00 +01:00
|
|
|
sub_elem->partition_name,
|
|
|
|
name_variant);
|
2005-08-19 16:26:05 +02:00
|
|
|
part= i * no_subparts + j;
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_PRINT("info", ("Add subpartition %s", part_name_buff));
|
|
|
|
if ((error= prepare_new_partition(table, create_info,
|
|
|
|
new_file_array[part],
|
2006-07-05 18:57:23 +02:00
|
|
|
(const char *)part_name_buff,
|
|
|
|
sub_elem)))
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
cleanup_new_partition(part_count);
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(error);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
m_added_file[part_count++]= new_file_array[part];
|
2005-08-19 16:26:05 +02:00
|
|
|
} while (++j < no_subparts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
create_partition_name(part_name_buff, path,
|
2006-01-17 08:40:00 +01:00
|
|
|
part_elem->partition_name, name_variant,
|
|
|
|
TRUE);
|
|
|
|
DBUG_PRINT("info", ("Add partition %s", part_name_buff));
|
|
|
|
if ((error= prepare_new_partition(table, create_info,
|
|
|
|
new_file_array[i],
|
2006-07-05 18:57:23 +02:00
|
|
|
(const char *)part_name_buff,
|
|
|
|
part_elem)))
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
cleanup_new_partition(part_count);
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(error);
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
m_added_file[part_count++]= new_file_array[i];
|
2005-08-19 16:26:05 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
} while (++i < no_parts);
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
Step 6:
|
|
|
|
State update to prepare for next write of the frm file.
|
|
|
|
*/
|
|
|
|
i= 0;
|
|
|
|
part_it.rewind();
|
|
|
|
do
|
|
|
|
{
|
|
|
|
partition_element *part_elem= part_it++;
|
|
|
|
if (part_elem->part_state == PART_TO_BE_ADDED)
|
|
|
|
part_elem->part_state= PART_IS_ADDED;
|
|
|
|
else if (part_elem->part_state == PART_CHANGED)
|
|
|
|
part_elem->part_state= PART_IS_CHANGED;
|
|
|
|
else if (part_elem->part_state == PART_REORGED_DROPPED)
|
|
|
|
part_elem->part_state= PART_TO_BE_DROPPED;
|
|
|
|
} while (++i < no_parts);
|
|
|
|
for (i= 0; i < temp_partitions; i++)
|
|
|
|
{
|
|
|
|
partition_element *part_elem= t_it++;
|
|
|
|
DBUG_ASSERT(part_elem->part_state == PART_TO_BE_REORGED);
|
|
|
|
part_elem->part_state= PART_TO_BE_DROPPED;
|
|
|
|
}
|
|
|
|
m_new_file= new_file_array;
|
|
|
|
DBUG_RETURN(copy_partitions(copied, deleted));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Copy partitions as part of ALTER TABLE of partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
copy_partitions()
|
|
|
|
out:copied Number of records copied
|
|
|
|
out:deleted Number of records deleted
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
change_partitions() has done all the preparations; now it is time to
|
|
|
|
actually copy the data from the reorganised partitions to the new
|
|
|
|
partitions.
|
|
|
|
*/
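/*
  Worked example (scenario assumed for illustration): after an
  ALTER TABLE ... REORGANIZE PARTITION that narrows a RANGE
  partitioning, every row read back from the reorganised partitions
  either maps to one of the new partitions through get_partition_id()
  and is counted in *copied, or no longer fits any partition and is
  counted in *deleted.
*/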
|
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
int ha_partition::copy_partitions(ulonglong * const copied,
|
|
|
|
ulonglong * const deleted)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
uint reorg_part= 0;
|
|
|
|
int result= 0;
|
2006-01-17 09:25:12 +01:00
|
|
|
longlong func_value;
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_ENTER("ha_partition::copy_partitions");
|
|
|
|
|
2007-11-20 11:21:00 +01:00
|
|
|
if (m_part_info->linear_hash_ind)
|
|
|
|
{
|
|
|
|
if (m_part_info->part_type == HASH_PARTITION)
|
|
|
|
set_linear_hash_mask(m_part_info, m_part_info->no_parts);
|
|
|
|
else
|
|
|
|
set_linear_hash_mask(m_part_info, m_part_info->no_subparts);
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
while (reorg_part < m_reorged_parts)
|
|
|
|
{
|
|
|
|
handler *file= m_reorged_file[reorg_part];
|
|
|
|
uint32 new_part;
|
|
|
|
|
|
|
|
late_extra_cache(reorg_part);
|
|
|
|
if ((result= file->ha_rnd_init(1)))
|
|
|
|
goto error;
|
|
|
|
while (TRUE)
|
|
|
|
{
|
|
|
|
if ((result= file->rnd_next(m_rec0)))
|
|
|
|
{
|
|
|
|
if (result == HA_ERR_RECORD_DELETED)
|
|
|
|
continue; // Probably MyISAM
|
|
|
|
if (result != HA_ERR_END_OF_FILE)
|
|
|
|
goto error;
|
|
|
|
/*
|
|
|
|
End-of-file reached, break out to continue with next partition or
|
|
|
|
end the copy process.
|
|
|
|
*/
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
/* Found record to insert into new handler */
|
2006-01-17 09:25:12 +01:00
|
|
|
if (m_part_info->get_partition_id(m_part_info, &new_part,
|
|
|
|
&func_value))
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
This record is in the original table but will not be in the new
|
|
|
|
table since it doesn't fit into any partition any longer due to
|
|
|
|
changed partitioning ranges or list values.
|
|
|
|
*/
|
2008-09-08 15:30:01 +02:00
|
|
|
(*deleted)++;
|
2006-01-17 08:40:00 +01:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2007-12-19 20:15:02 +01:00
|
|
|
THD *thd= ha_thd();
|
2006-01-17 08:40:00 +01:00
|
|
|
/* Copy record to new handler */
|
2008-09-08 15:30:01 +02:00
|
|
|
(*copied)++;
|
2007-12-19 20:15:02 +01:00
|
|
|
tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
|
|
|
|
result= m_new_file[new_part]->ha_write_row(m_rec0);
|
|
|
|
reenable_binlog(thd);
|
|
|
|
if (result)
|
2006-01-17 08:40:00 +01:00
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
late_extra_no_cache(reorg_part);
|
2007-12-20 19:16:55 +01:00
|
|
|
file->ha_rnd_end();
|
2006-01-17 08:40:00 +01:00
|
|
|
reorg_part++;
|
|
|
|
}
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
error:
|
2006-04-16 03:49:13 +02:00
|
|
|
DBUG_RETURN(result);
|
2005-08-19 16:26:05 +02:00
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
Update create info as part of ALTER TABLE
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
update_create_info()
|
|
|
|
create_info Create info from ALTER TABLE
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Refreshes statistics through info() and fills in create_info's
auto_increment value unless the user supplied one
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::update_create_info(HA_CREATE_INFO *create_info)
|
|
|
|
{
|
2008-10-03 11:30:54 +02:00
|
|
|
/*
|
|
|
|
Fix for bug#38751: some engines need info() calls during ALTER TABLE.
|
|
|
|
Archive needs this since it flushes in ::info().
|
|
|
|
HA_STATUS_AUTO is optimized so it will not always be forwarded
|
|
|
|
to all partitions, but HA_STATUS_VARIABLE will.
|
|
|
|
*/
|
|
|
|
info(HA_STATUS_VARIABLE);
|
|
|
|
|
2007-12-06 13:39:42 +01:00
|
|
|
info(HA_STATUS_AUTO);
|
|
|
|
|
|
|
|
if (!(create_info->used_fields & HA_CREATE_USED_AUTO))
|
|
|
|
create_info->auto_increment_value= stats.auto_increment_value;
|
|
|
|
|
2007-11-09 23:22:00 +01:00
|
|
|
create_info->data_file_name= create_info->index_file_name= NULL;
|
2005-07-18 13:31:02 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-08-23 13:58:36 +02:00
|
|
|
void ha_partition::change_table_ptr(TABLE *table_arg, TABLE_SHARE *share)
|
|
|
|
{
|
|
|
|
handler **file_array= m_file;
|
|
|
|
table= table_arg;
|
|
|
|
table_share= share;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
(*file_array)->change_table_ptr(table_arg, share);
|
|
|
|
} while (*(++file_array));
|
2008-03-17 15:56:53 +01:00
|
|
|
if (m_added_file && m_added_file[0])
|
|
|
|
{
|
|
|
|
/* if in middle of a drop/rename etc */
|
|
|
|
file_array= m_added_file;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
(*file_array)->change_table_ptr(table_arg, share);
|
|
|
|
} while (*(++file_array));
|
|
|
|
}
|
2006-08-23 13:58:36 +02:00
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Change comments specific to handler
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
update_table_comment()
|
|
|
|
comment Original comment
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
new comment
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
No comment changes so far
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
char *ha_partition::update_table_comment(const char *comment)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
return (char*) comment; /* Nothing to change */
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Handle delete, rename and create table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
del_ren_cre_table()
|
|
|
|
from Full path of old table
|
|
|
|
to Full path of new table
|
|
|
|
table_arg Table object
|
|
|
|
create_info Create info
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Common routine to handle delete_table and rename_table.
|
|
|
|
The routine uses the partition handler file to get the
|
|
|
|
names of the partition instances. Both these routines
|
|
|
|
are called after creating the handler without a table
|
|
|
|
object and thus the file is needed to discover the
|
|
|
|
names of the partitions and the underlying storage engines.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
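/*
  The operation is selected by the arguments, as the body below shows:
  to != NULL means rename, to == NULL with table_arg == NULL means
  delete, and to == NULL with table_arg != NULL means create.
*/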
|
|
|
|
|
|
|
|
uint ha_partition::del_ren_cre_table(const char *from,
|
|
|
|
const char *to,
|
|
|
|
TABLE *table_arg,
|
|
|
|
HA_CREATE_INFO *create_info)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
int save_error= 0;
|
|
|
|
int error;
|
2008-07-11 01:14:13 +02:00
|
|
|
char from_buff[FN_REFLEN], to_buff[FN_REFLEN], from_lc_buff[FN_REFLEN],
|
|
|
|
to_lc_buff[FN_REFLEN];
|
2005-07-18 13:31:02 +02:00
|
|
|
char *name_buffer_ptr;
|
2008-08-13 10:47:24 +02:00
|
|
|
const char *from_path;
|
|
|
|
const char *to_path= NULL;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint i;
|
2006-08-07 12:22:08 +02:00
|
|
|
handler **file, **abort_file;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ENTER("del_ren_cre_table()");
|
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
if (get_from_handler_file(from, ha_thd()->mem_root))
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
DBUG_ASSERT(m_file_buffer);
|
2008-07-11 01:14:13 +02:00
|
|
|
DBUG_PRINT("enter", ("from: (%s) to: (%s)", from, to));
|
2005-07-18 13:31:02 +02:00
|
|
|
name_buffer_ptr= m_name_buffer_ptr;
|
|
|
|
file= m_file;
|
2008-07-11 01:14:13 +02:00
|
|
|
/*
|
|
|
|
Since ha_partition has HA_FILE_BASED, it must alter underlying table names
|
|
|
|
if they do not have HA_FILE_BASED and lower_case_table_names == 2.
|
|
|
|
See Bug#37402, for Mac OS X.
|
|
|
|
The appended #P#<partname>[#SP#<subpartname>] will remain in current case.
|
|
|
|
Using the first partition's handler, since mixing handlers is not allowed.
|
|
|
|
*/
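/*
  Illustrative example (file name assumed): a MyISAM data file for
  table t1, partition p0, subpartition sp0 is named
  t1#P#p0#SP#sp0.MYD; with lower_case_table_names == 2 only the "t1"
  part is converted to lower case by get_canonical_filename().
*/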
|
|
|
|
from_path= get_canonical_filename(*file, from, from_lc_buff);
|
|
|
|
if (to != NULL)
|
|
|
|
to_path= get_canonical_filename(*file, to, to_lc_buff);
|
2005-07-18 13:31:02 +02:00
|
|
|
i= 0;
|
|
|
|
do
|
|
|
|
{
|
2008-07-11 01:14:13 +02:00
|
|
|
create_partition_name(from_buff, from_path, name_buffer_ptr,
|
|
|
|
NORMAL_PART_NAME, FALSE);
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if (to != NULL)
|
|
|
|
{ // Rename branch
|
2008-07-11 01:14:13 +02:00
|
|
|
create_partition_name(to_buff, to_path, name_buffer_ptr,
|
|
|
|
NORMAL_PART_NAME, FALSE);
|
2007-12-20 19:16:55 +01:00
|
|
|
error= (*file)->ha_rename_table(from_buff, to_buff);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
else if (table_arg == NULL) // delete branch
|
2007-12-20 19:16:55 +01:00
|
|
|
error= (*file)->ha_delete_table(from_buff);
|
2005-07-18 13:31:02 +02:00
|
|
|
else
|
|
|
|
{
|
2006-08-07 12:22:08 +02:00
|
|
|
if ((error= set_up_table_before_create(table_arg, from_buff,
|
|
|
|
create_info, i, NULL)) ||
|
2007-12-20 19:16:55 +01:00
|
|
|
((error= (*file)->ha_create(from_buff, table_arg, create_info))))
|
2006-08-07 12:22:08 +02:00
|
|
|
goto create_error;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
name_buffer_ptr= strend(name_buffer_ptr) + 1;
|
|
|
|
if (error)
|
|
|
|
save_error= error;
|
|
|
|
i++;
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(save_error);
|
2006-08-07 12:22:08 +02:00
|
|
|
create_error:
|
|
|
|
name_buffer_ptr= m_name_buffer_ptr;
|
|
|
|
for (abort_file= file, file= m_file; file < abort_file; file++)
|
|
|
|
{
|
2008-07-11 01:14:13 +02:00
|
|
|
create_partition_name(from_buff, from_path, name_buffer_ptr, NORMAL_PART_NAME,
|
2006-08-07 12:22:08 +02:00
|
|
|
FALSE);
|
2007-12-20 19:16:55 +01:00
|
|
|
VOID((*file)->ha_delete_table((const char*) from_buff));
|
2006-08-07 12:22:08 +02:00
|
|
|
name_buffer_ptr= strend(name_buffer_ptr) + 1;
|
|
|
|
}
|
|
|
|
DBUG_RETURN(error);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Find partition based on partition id
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
find_partition_element()
|
|
|
|
part_id Partition id of partition looked for
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Reference to partition_element
|
|
|
|
0 Partition not found
|
|
|
|
*/
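/*
  Worked example (numbers assumed): for a table with no_parts = 4 and
  no_subparts = 2, part_id = i * no_subparts + j, so part_id 5 denotes
  subpartition j = 1 of partition i = 2.
*/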
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
partition_element *ha_partition::find_partition_element(uint part_id)
|
|
|
|
{
|
|
|
|
uint i;
|
|
|
|
uint curr_part_id= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
List_iterator_fast <partition_element> part_it(m_part_info->partitions);
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
for (i= 0; i < m_part_info->no_parts; i++)
|
|
|
|
{
|
|
|
|
partition_element *part_elem;
|
|
|
|
part_elem= part_it++;
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
|
|
|
uint j;
|
|
|
|
List_iterator_fast <partition_element> sub_it(part_elem->subpartitions);
|
|
|
|
for (j= 0; j < m_part_info->no_subparts; j++)
|
|
|
|
{
|
|
|
|
part_elem= sub_it++;
|
|
|
|
if (part_id == curr_part_id++)
|
|
|
|
return part_elem;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else if (part_id == curr_part_id++)
|
|
|
|
return part_elem;
|
|
|
|
}
|
|
|
|
DBUG_ASSERT(0);
|
2007-12-12 16:21:01 +01:00
|
|
|
my_error(ER_OUT_OF_RESOURCES, MYF(0));
|
2005-07-18 13:31:02 +02:00
|
|
|
current_thd->fatal_error(); // Abort
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Set up table share object before calling create on underlying handler
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
set_up_table_before_create()
|
|
|
|
table Table object
|
|
|
|
info Create info
|
|
|
|
part_id Partition id of partition to set-up
|
|
|
|
|
|
|
|
RETURN VALUE
|
2006-08-07 12:22:08 +02:00
|
|
|
>0 Error
|
|
|
|
0 Success
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Set up
|
|
|
|
1) Comment on partition
|
|
|
|
2) MAX_ROWS, MIN_ROWS on partition
|
|
|
|
3) Index file name on partition
|
|
|
|
4) Data file name on partition
|
|
|
|
*/
|
|
|
|
|
2007-08-13 15:11:25 +02:00
|
|
|
int ha_partition::set_up_table_before_create(TABLE *tbl,
|
2006-08-07 12:22:08 +02:00
|
|
|
const char *partition_name_with_path,
|
|
|
|
HA_CREATE_INFO *info,
|
|
|
|
uint part_id,
|
|
|
|
partition_element *part_elem)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-08-07 12:22:08 +02:00
|
|
|
int error= 0;
|
2006-08-17 15:05:15 +02:00
|
|
|
const char *partition_name;
|
2008-09-08 15:30:01 +02:00
|
|
|
THD *thd= ha_thd();
|
2006-08-07 12:22:08 +02:00
|
|
|
DBUG_ENTER("set_up_table_before_create");
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!part_elem)
|
2006-07-05 18:57:23 +02:00
|
|
|
{
|
|
|
|
part_elem= find_partition_element(part_id);
|
|
|
|
if (!part_elem)
|
2006-08-07 12:22:08 +02:00
|
|
|
DBUG_RETURN(1); // Fatal error
|
2006-07-05 18:57:23 +02:00
|
|
|
}
|
2007-08-13 15:11:25 +02:00
|
|
|
tbl->s->max_rows= part_elem->part_max_rows;
|
|
|
|
tbl->s->min_rows= part_elem->part_min_rows;
|
2007-04-24 22:07:52 +02:00
|
|
|
partition_name= strrchr(partition_name_with_path, FN_LIBCHAR);
|
2006-08-07 12:22:08 +02:00
|
|
|
if ((part_elem->index_file_name &&
|
2006-08-17 15:05:15 +02:00
|
|
|
(error= append_file_to_dir(thd,
|
2006-08-07 12:22:08 +02:00
|
|
|
(const char**)&part_elem->index_file_name,
|
|
|
|
partition_name+1))) ||
|
|
|
|
(part_elem->data_file_name &&
|
2006-08-17 15:05:15 +02:00
|
|
|
(error= append_file_to_dir(thd,
|
2006-08-07 12:22:08 +02:00
|
|
|
(const char**)&part_elem->data_file_name,
|
|
|
|
partition_name+1))))
|
|
|
|
{
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
info->index_file_name= part_elem->index_file_name;
|
|
|
|
info->data_file_name= part_elem->data_file_name;
|
2006-08-07 12:22:08 +02:00
|
|
|
DBUG_RETURN(0);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Add two names together
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
name_add()
|
|
|
|
out:dest Destination string
|
|
|
|
first_name First name
|
|
|
|
sec_name Second name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Routine used to add two names with '#SP#' in between them. Service routine
|
|
|
|
to create_handler_file
|
|
|
|
Include the terminating NUL byte in the count of characters since it is needed as separator
|
|
|
|
between the partition names.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
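/*
  Example: name_add(buf, "p0", "sp0") writes "p0#SP#sp0" into buf and
  returns 10, i.e. the 9 characters plus the terminating NUL.
*/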
|
|
|
|
|
|
|
|
static uint name_add(char *dest, const char *first_name, const char *sec_name)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
return (uint) (strxmov(dest, first_name, "#SP#", sec_name, NullS) - dest) + 1;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Create the special .par file
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
create_handler_file()
|
|
|
|
name Full path of table name
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Method used to create handler file with names of partitions, their
|
|
|
|
engine types and the number of partitions.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
bool ha_partition::create_handler_file(const char *name)
|
|
|
|
{
|
|
|
|
partition_element *part_elem, *subpart_elem;
|
|
|
|
uint i, j, part_name_len, subpart_name_len;
|
2006-01-17 08:40:00 +01:00
|
|
|
uint tot_partition_words, tot_name_len, no_parts;
|
|
|
|
uint tot_parts= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint tot_len_words, tot_len_byte, chksum, tot_name_words;
|
|
|
|
char *name_buffer_ptr;
|
|
|
|
uchar *file_buffer, *engine_array;
|
|
|
|
bool result= TRUE;
|
|
|
|
char file_name[FN_REFLEN];
|
2006-01-17 08:40:00 +01:00
|
|
|
char part_name[FN_REFLEN];
|
|
|
|
char subpart_name[FN_REFLEN];
|
2005-07-18 13:31:02 +02:00
|
|
|
File file;
|
2006-01-17 08:40:00 +01:00
|
|
|
List_iterator_fast <partition_element> part_it(m_part_info->partitions);
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ENTER("create_handler_file");
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
no_parts= m_part_info->partitions.elements;
|
|
|
|
DBUG_PRINT("info", ("table name = %s, no_parts = %u", name,
|
|
|
|
no_parts));
|
2005-07-18 13:31:02 +02:00
|
|
|
tot_name_len= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
for (i= 0; i < no_parts; i++)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
part_elem= part_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
if (part_elem->part_state != PART_NORMAL &&
|
2006-02-03 18:05:29 +01:00
|
|
|
part_elem->part_state != PART_TO_BE_ADDED &&
|
|
|
|
part_elem->part_state != PART_CHANGED)
|
2006-01-17 08:40:00 +01:00
|
|
|
continue;
|
|
|
|
tablename_to_filename(part_elem->partition_name, part_name,
|
|
|
|
FN_REFLEN);
|
|
|
|
part_name_len= strlen(part_name);
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!m_is_sub_partitioned)
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
2005-07-18 13:31:02 +02:00
|
|
|
tot_name_len+= part_name_len + 1;
|
2006-01-17 08:40:00 +01:00
|
|
|
tot_parts++;
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
else
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
List_iterator_fast <partition_element> sub_it(part_elem->subpartitions);
|
2005-07-18 13:31:02 +02:00
|
|
|
for (j= 0; j < m_part_info->no_subparts; j++)
|
|
|
|
{
|
|
|
|
subpart_elem= sub_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
tablename_to_filename(subpart_elem->partition_name,
|
|
|
|
subpart_name,
|
|
|
|
FN_REFLEN);
|
|
|
|
subpart_name_len= strlen(subpart_name);
|
|
|
|
tot_name_len+= part_name_len + subpart_name_len + 5;
|
|
|
|
tot_parts++;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
File format:
|
|
|
|
Length in words 4 bytes
|
|
|
|
Checksum 4 bytes
|
|
|
|
Total number of partitions 4 bytes
|
|
|
|
Array of engine types n * 4 bytes where
|
|
|
|
n = (m_tot_parts + 3)/4
|
|
|
|
Length of name part in bytes 4 bytes
|
|
|
|
Name part m * 4 bytes where
|
|
|
|
m = (length_name_part + 3)/4
|
|
|
|
|
|
|
|
All padding bytes are zeroed
|
|
|
|
*/
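/*
  Worked example (names and engine assumed): two MyISAM partitions
  "p0" and "p1" give tot_parts = 2 and tot_name_len = 6 ("p0\0p1\0"),
  hence tot_partition_words = (2 + 3) / 4 = 1,
  tot_name_words = (6 + 3) / 4 = 2,
  tot_len_words = 4 + 1 + 2 = 7 and tot_len_byte = 28.
  A reader sketch of this format follows after this function.
*/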
|
2006-01-17 08:40:00 +01:00
|
|
|
tot_partition_words= (tot_parts + 3) / 4;
|
2005-07-18 13:31:02 +02:00
|
|
|
tot_name_words= (tot_name_len + 3) / 4;
|
|
|
|
tot_len_words= 4 + tot_partition_words + tot_name_words;
|
|
|
|
tot_len_byte= 4 * tot_len_words;
|
|
|
|
if (!(file_buffer= (uchar *) my_malloc(tot_len_byte, MYF(MY_ZEROFILL))))
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
engine_array= (file_buffer + 12);
|
|
|
|
name_buffer_ptr= (char*) (file_buffer + ((4 + tot_partition_words) * 4));
|
|
|
|
part_it.rewind();
|
2006-01-17 08:40:00 +01:00
|
|
|
for (i= 0; i < no_parts; i++)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
part_elem= part_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
if (part_elem->part_state != PART_NORMAL &&
|
2006-02-03 18:05:29 +01:00
|
|
|
part_elem->part_state != PART_TO_BE_ADDED &&
|
|
|
|
part_elem->part_state != PART_CHANGED)
|
2006-01-17 08:40:00 +01:00
|
|
|
continue;
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!m_is_sub_partitioned)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
tablename_to_filename(part_elem->partition_name, part_name, FN_REFLEN);
|
|
|
|
name_buffer_ptr= strmov(name_buffer_ptr, part_name)+1;
|
2005-12-22 06:39:02 +01:00
|
|
|
*engine_array= (uchar) ha_legacy_type(part_elem->engine_type);
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_PRINT("info", ("engine: %u", *engine_array));
|
|
|
|
engine_array++;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
List_iterator_fast <partition_element> sub_it(part_elem->subpartitions);
|
2005-07-18 13:31:02 +02:00
|
|
|
for (j= 0; j < m_part_info->no_subparts; j++)
|
|
|
|
{
|
|
|
|
subpart_elem= sub_it++;
|
2006-01-17 08:40:00 +01:00
|
|
|
tablename_to_filename(part_elem->partition_name, part_name,
|
|
|
|
FN_REFLEN);
|
|
|
|
tablename_to_filename(subpart_elem->partition_name, subpart_name,
|
|
|
|
FN_REFLEN);
|
2005-07-18 13:31:02 +02:00
|
|
|
name_buffer_ptr+= name_add(name_buffer_ptr,
|
2006-01-17 08:40:00 +01:00
|
|
|
part_name,
|
|
|
|
subpart_name);
|
2006-02-13 19:15:42 +01:00
|
|
|
*engine_array= (uchar) ha_legacy_type(subpart_elem->engine_type);
|
|
|
|
DBUG_PRINT("info", ("engine: %u", *engine_array));
|
2005-07-18 13:31:02 +02:00
|
|
|
engine_array++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
chksum= 0;
|
|
|
|
int4store(file_buffer, tot_len_words);
|
2006-01-17 08:40:00 +01:00
|
|
|
int4store(file_buffer + 8, tot_parts);
|
2005-07-18 13:31:02 +02:00
|
|
|
int4store(file_buffer + 12 + (tot_partition_words * 4), tot_name_len);
|
|
|
|
for (i= 0; i < tot_len_words; i++)
|
|
|
|
chksum^= uint4korr(file_buffer + 4 * i);
|
|
|
|
int4store(file_buffer + 4, chksum);
|
|
|
|
/*
|
|
|
|
Remove .frm extension and replace with .par
|
|
|
|
Create, write and close the file
|
|
|
|
to be used at open, delete_table and rename_table
|
|
|
|
*/
|
2006-02-13 19:15:42 +01:00
|
|
|
fn_format(file_name, name, "", ha_par_ext, MY_APPEND_EXT);
|
2005-07-18 13:31:02 +02:00
|
|
|
if ((file= my_create(file_name, CREATE_MODE, O_RDWR | O_TRUNC,
|
|
|
|
MYF(MY_WME))) >= 0)
|
|
|
|
{
|
2007-05-10 11:59:39 +02:00
|
|
|
result= my_write(file, (uchar *) file_buffer, tot_len_byte,
|
2007-05-31 16:45:22 +02:00
|
|
|
MYF(MY_WME | MY_NABP)) != 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
VOID(my_close(file, MYF(0)));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
result= TRUE;
|
|
|
|
my_free((char*) file_buffer, MYF(0));
|
|
|
|
DBUG_RETURN(result);
|
|
|
|
}
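/*
  Illustrative sketch, not part of the handler: one way a reader of the
  .par format documented in create_handler_file() could verify the
  fixed header. The function name is hypothetical and the block is kept
  inside #ifdef NOT_USED so it is not compiled.
*/
#ifdef NOT_USED
static bool check_par_header_example(const uchar *file_buffer,
                                     size_t file_length)
{
  uint32 tot_len_words= uint4korr(file_buffer);  /* Word 0: length in words */
  uint32 chksum= 0;
  uint32 i;

  if (file_length != 4 * (size_t) tot_len_words)
    return TRUE;                                 /* Wrong file length */
  /*
    The checksum word was zero while the XOR was computed and the result
    was then stored at offset 4, so a valid file XORs to 0 over all words.
  */
  for (i= 0; i < tot_len_words; i++)
    chksum^= uint4korr(file_buffer + 4 * i);
  return chksum != 0;                            /* TRUE means corrupt */
}
#endif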
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Clear handler variables and free some memory
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
clear_handler_file()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
void ha_partition::clear_handler_file()
|
|
|
|
{
|
2007-03-02 17:43:45 +01:00
|
|
|
if (m_engine_array)
|
|
|
|
plugin_unlock_list(NULL, m_engine_array, m_tot_parts);
|
2005-07-18 13:31:02 +02:00
|
|
|
my_free((char*) m_file_buffer, MYF(MY_ALLOW_ZERO_PTR));
|
2005-12-21 19:18:40 +01:00
|
|
|
my_free((char*) m_engine_array, MYF(MY_ALLOW_ZERO_PTR));
|
2005-07-18 13:31:02 +02:00
|
|
|
m_file_buffer= NULL;
|
|
|
|
m_engine_array= NULL;
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Create underlying handler objects
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
create_handlers()
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes
Changes that requires code changes in other code of other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite),
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
statement specific cleanups.
(The only case it's not called is if force the file to be closed)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all context
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmap after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. as the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For not DBUG binaries, the dbug_tmp_restore_column_map() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away be the compiler).
- If one needs to temporary set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This requires some trivial variable names in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true.
(stats.records is not supposed to be an exact value. It's only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set
but we don't have a primary key. This allows the handler to take precautions
so it remembers any hidden primary key and is able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags() (see the sketch after these lists):
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when reading rows as part of a DELETE
statement. In case of an update we will mark
all keys for read for which a key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags():
- HA_NOT_EXACT_COUNT One should now instead use HA_HAS_RECORDS if
handler::records() gives an exact count and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one).
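A hedged sketch of how a hypothetical non-transactional engine might
combine the new flags (which flags apply depends entirely on the engine's
capabilities):

  ulonglong table_flags() const
  {
    return (HA_NO_TRANSACTIONS |        // no transaction support
            HA_PARTIAL_COLUMN_READ |    // reads only columns in table->read_set
            HA_STATS_RECORDS_IS_EXACT | // stats.records is an exact count
            HA_HAS_RECORDS);            // ::records() is supported
  }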
- Removed the no longer needed functions ha_retrieve_all_cols() and
ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is set up at handler creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bitmaps. (This found a LOT of bugs in the current
column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated
columns were marked in write_set. Now we only mark columns in read_set for
which we actually need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage; see the sketch below.)
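In a handler, the corresponding check looks like this (a sketch for an
engine that sets HA_PARTIAL_COLUMN_READ and unpacks only requested columns):

  for (Field **field_ptr= table->field; *field_ptr; field_ptr++)
  {
    Field *field= *field_ptr;
    if (bitmap_is_set(table->read_set, field->field_index))
    {
      /* only this column needs to be unpacked into the record buffer */
    }
  }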
- thd->set_query_id is renamed to thd->mark_used_columns and instead
of being set to an integer value it now takes one of the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE.
Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
for doing the sort and for choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The TABLE object also has two bitmap pointers, read_set and write_set,
that the handler should use to find out which columns are used in which way.
- The COUNT(*) optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also
traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables.)
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
to set up new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the moment
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, but without signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_update() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in the column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going
to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
it quickly know that it only needs to read the columns that are part
of the key. (The handler can also use the column map for detecting this,
but simpler/faster handlers can just monitor the extra() call, as in the
sketch below.)
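A sketch of the monitoring variant, with a hypothetical m_key_read member:

  int extra(enum ha_extra_function operation)
  {
    switch (operation) {
    case HA_EXTRA_KEYREAD:
      m_key_read= TRUE;     // only columns of the active key need reading
      break;
    case HA_EXTRA_NO_KEYREAD:
      m_key_read= FALSE;    // back to reading per table->read_set
      break;
    default:
      break;
    }
    return 0;
  }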
- table->mark_columns_used_by_index_no_reset() to, in addition to the other
columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
in table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys a set of all keys that are used in the query.
(Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key
except that fields in the clustered key are not assumed to be part of all
indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
mark all columns as usable. The 'dbug_' versions are primarily intended for
use inside a handler when it wants to just call Field::store() / Field::val()
functions, but doesn't need the column maps set for any other usage
(i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return
a partial column set when the read_set doesn't cover all columns in the
write set. The reason is that if we have a column marked only for
write, we can't know at the MySQL level whether its value changed or not.
The reason this worked before was that MySQL marked all to-be-written
columns as also to be read. The new 'optimal' bitmaps exposed this hidden
bug (see the sketch below).
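A sketch of the check the upper level must make before trusting
compare_records() (bitmap_is_subset() is the normal my_bitmap.c helper):

  bool can_compare_record=
    !(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ) ||
    bitmap_is_subset(table->write_set, table->read_set);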
- open_table_from_share() no longer sets up a temporary MEM_ROOT
object as a thread-specific variable for the handler. Instead we
pass the to-be-used MEM_ROOT to get_new_handler().
(Simpler, faster code.)
Bugs fixed:
- Column marking was not done correctly in a lot of cases
(ALTER TABLE, when using triggers, auto_increment fields etc).
(Could potentially result in wrong values being inserted in table handlers
relying on the old column maps or on field->set_query_id being correct.)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
some warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables().
(This has probably caused us to not properly remove temporary files after
a crash.)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we would give an error
when creating a primary key with NULL fields, even after the fields had been
automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in
(field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of
row pointers (not used columns). This allowed me to remove some references
to sql_command in filesort() and should also enable us to return column
results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE
bitmaps, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code
readability.
- Removed some unused arguments from functions
(setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking
of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in
ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path
(after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to
do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row-based logging (as shown by testcase binlog_row_mix_innodb_myisam.result).
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this thoroughly.)
Lars has promised to do this.
    mem_root              Allocate memory through this

  RETURN VALUE
    TRUE                  Error
    FALSE                 Success
*/
bool ha_partition::create_handlers(MEM_ROOT *mem_root)
{
  uint i;
  uint alloc_len= (m_tot_parts + 1) * sizeof(handler*);
  handlerton *hton0;
  DBUG_ENTER("create_handlers");

  if (!(m_file= (handler **) alloc_root(mem_root, alloc_len)))
    DBUG_RETURN(TRUE);
  m_file_tot_parts= m_tot_parts;
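  /*
    Note (an inference from this hunk alone): alloc_len reserves one slot
    more than m_tot_parts, presumably so m_file can carry a terminating
    NULL pointer; the memory comes from the caller-supplied MEM_ROOT,
    matching the xxxx_create_handler() convention described in the notes
    above.
  */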
do comparision with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this throughly).
Lars has promosed to do this.
2006-06-04 17:52:22 +02:00
|
|
|
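  /*
    Zero the handler array, then create one underlying handler object per
    partition, using the storage engine recorded for that partition.
  */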
  bzero((char*) m_file, alloc_len);
  for (i= 0; i < m_tot_parts; i++)
  {
    handlerton *hton= plugin_data(m_engine_array[i], handlerton*);
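    /* Allocate each underlying handler on the caller-supplied MEM_ROOT */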
    if (!(m_file[i]= get_new_handler(table_share, mem_root,
                                     hton)))
      DBUG_RETURN(TRUE);
    DBUG_PRINT("info", ("engine_type: %u", hton->db_type));
  }
  /* For the moment we only support partition over the same table engine */
  hton0= plugin_data(m_engine_array[0], handlerton*);
  if (hton0 == myisam_hton)
  {
    DBUG_PRINT("info", ("MyISAM"));
    m_myisam= TRUE;
  }
  /* INNODB may not be compiled in... */
  else if (ha_legacy_type(hton0) == DB_TYPE_INNODB)
  {
    DBUG_PRINT("info", ("InnoDB"));
    m_innodb= TRUE;
  }
  DBUG_RETURN(FALSE);
}


/*
  Create underlying handler objects from partition info

  SYNOPSIS
    new_handlers_from_part_info()
    mem_root              Allocate memory through this

  RETURN VALUE
    TRUE                  Error
    FALSE                 Success
*/

bool ha_partition::new_handlers_from_part_info(MEM_ROOT *mem_root)
{
  uint i, j, part_count;
  partition_element *part_elem;
  uint alloc_len= (m_tot_parts + 1) * sizeof(handler*);
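  /* The extra array slot presumably keeps m_file NULL-terminated */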
  List_iterator_fast <partition_element> part_it(m_part_info->partitions);
  DBUG_ENTER("ha_partition::new_handlers_from_part_info");

2006-06-04 17:52:22 +02:00
|
|
|
if (!(m_file= (handler **) alloc_root(mem_root, alloc_len)))
|
2006-01-17 08:40:00 +01:00
|
|
|
{
|
|
|
|
mem_alloc_error(alloc_len);
|
|
|
|
goto error_end;
|
|
|
|
}
|
2006-07-10 23:08:42 +02:00
|
|
|
m_file_tot_parts= m_tot_parts;
|
2006-06-04 17:52:22 +02:00
|
|
|
bzero((char*) m_file, alloc_len);
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ASSERT(m_part_info->no_parts > 0);
|
|
|
|
|
|
|
|
i= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
part_count= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
|
|
|
We don't know the size of the underlying storage engine objects, so invent a
|
|
|
|
byte count (alloc_len) to report in the error message if a handler allocation fails
|
|
|
|
*/
|
|
|
|
do
|
|
|
|
{
|
|
|
|
part_elem= part_it++;
|
|
|
|
if (m_is_sub_partitioned)
|
|
|
|
{
|
|
|
|
for (j= 0; j < m_part_info->no_subparts; j++)
|
|
|
|
{
|
2006-06-04 17:52:22 +02:00
|
|
|
if (!(m_file[part_count++]= get_new_handler(table_share, mem_root,
|
|
|
|
part_elem->engine_type)))
|
2005-07-18 13:31:02 +02:00
|
|
|
goto error;
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_PRINT("info", ("engine_type: %u",
|
|
|
|
(uint) ha_legacy_type(part_elem->engine_type)));
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
}
|
2006-01-17 08:40:00 +01:00
|
|
|
else
|
|
|
|
{
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes
Changes that requires code changes in other code of other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite),
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
statement specific cleanups.
(The only case it's not called is if force the file to be closed)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all context
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmap after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. as the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For not DBUG binaries, the dbug_tmp_restore_column_map() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away be the compiler).
- If one needs to temporary set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This requires some trivial variable names in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true.
(stats.records is not supposed to be an exact value. It's only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
but we don't have a primary key. This allows the handler to take precisions
in remembering any hidden primary key to able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when when reading rows as part of a DELETE
statement. In case of an update we will mark
all keys for read for which key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed not used variable handler::sortkey
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is updated on engine creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bit maps. (This found a LOT of bugs in current
column marking code).
- In 5.1 before, all used columns was marked in read_set and only updated
columns was marked in write_set. Now we only mark columns for which we
need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage)
- thd->set_query_id is renamed to thd->mark_used_columns and instead
of setting this to an integer value, this has now the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
Changed also all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
doing the sort and choosing the rows.
- The TABLE_SHARE object has a 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The table object has also two pointer to bitmaps read_set and write_set
that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added extra argument to Item::walk() to indicate if we should also
traverse sub queries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
to set up new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the moment
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps without signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_update() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in the column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags())
- table->prepare_for_position() to allow us to tell the handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function)
- table->mark_auto_increment_column() to tell the handler that we are going
to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
it to quickly know that it only needs to read columns that are part
of the key. (The handler can also use the column map for detecting this,
but simpler/faster handlers can just monitor the extra() call).
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns,
also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query.
(Simplifies some optimization loops)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key
but the field in the clustered key is not assumed to be part of all indexes.
(used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
mark all columns as usable. The 'dbug_' version is primarily intended
inside a handler when it wants to just call Field::store() & Field::val()
functions, but doesn't need the column maps set for any other usage.
(i.e. bitmap_is_set() is never called; see the example after this list)
- We can't use compare_records() to skip updates for handlers that return
a partial column set and where the read_set doesn't cover all columns in the
write set. The reason for this is that if we have a column marked only for
write we can't know at the MySQL level whether the value changed or not.
The reason this worked before was that MySQL marked all to-be-written
columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT
object as a thread-specific variable for the handler. Instead we
send the to-be-used MEM_ROOT to get_new_handler().
(Simpler, faster code)
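For example, the pattern described earlier in this changeset for touching a
column that is not in the current write_set looks like this (sketch only;
'field' is any Field* and the stored value is arbitrary):

my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->store((longlong) 1, TRUE);        /* any Field::store() variant */
dbug_tmp_restore_column_map(table->write_set, old_map);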
Bugs fixed:
- Column marking was not done correctly in a lot of cases.
(ALTER TABLE, when using triggers, auto_increment fields etc)
(Could potentially result in wrong values inserted in table handlers
relying on the old column maps or field->set_query_id being correct)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
some warnings about
"Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
crash)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we would give an error
when creating a primary key with NULL fields, even after the fields had been
automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
declared as NOT NULL.
Cleanups:
- Removed unused condition argument to setup_tables
- Removed unneeded item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
row pointers (instead of the used columns). This allowed me to remove some
references to sql_command in filesort and should also enable us to return
column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up lines that were found to be too long
- Moved some variable declarations to the start of functions for better code
readability.
- Removed some unused arguments from functions.
(setup_fields(), mysql_prepare_insert_check_table())
- setup_fields() now takes an enum instead of an int for marking columns
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enum's and define's.
- Using separate column read and write sets allows for easier checking
of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in ha_reset()
- Don't build table->normalized_path as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row-based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this thoroughly).
Lars has promised to do this.
2006-06-04 17:52:22 +02:00
|
|
|
if (!(m_file[part_count++]= get_new_handler(table_share, mem_root,
|
2006-01-17 08:40:00 +01:00
|
|
|
part_elem->engine_type)))
|
|
|
|
goto error;
|
|
|
|
DBUG_PRINT("info", ("engine_type: %u",
|
|
|
|
(uint) ha_legacy_type(part_elem->engine_type)));
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
} while (++i < m_part_info->no_parts);
|
2006-09-15 19:28:00 +02:00
|
|
|
if (part_elem->engine_type == myisam_hton)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
DBUG_PRINT("info", ("MyISAM"));
|
|
|
|
m_myisam= TRUE;
|
|
|
|
}
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
error:
|
2006-01-17 08:40:00 +01:00
|
|
|
mem_alloc_error(sizeof(handler));
|
|
|
|
error_end:
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Get info about partition engines and their names from the .par file
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
get_from_handler_file()
|
|
|
|
name Full path of table name
|
2006-06-04 17:52:22 +02:00
|
|
|
mem_root Allocate memory through this
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
TRUE Error
|
|
|
|
FALSE Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Open handler file to get partition names, engine types and number of
|
|
|
|
partitions.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
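/*
  Header layout assumed by the parsing code below (inferred from the reads
  in this function; not a normative format description):
    word 0: total length of the file in 4-byte words (len_words)
    word 1: checksum word; the XOR of all words in the file must be 0
    word 2: number of partitions (m_tot_parts)
    then (m_tot_parts + 3) / 4 words holding one engine type byte per
    partition, followed by the partition name section.
*/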
2006-06-04 17:52:22 +02:00
|
|
|
bool ha_partition::get_from_handler_file(const char *name, MEM_ROOT *mem_root)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
char buff[FN_REFLEN], *address_tot_name_len;
|
|
|
|
File file;
|
|
|
|
char *file_buffer, *name_buffer_ptr;
|
2005-12-21 19:18:40 +01:00
|
|
|
handlerton **engine_array;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint i, len_bytes, len_words, tot_partition_words, tot_name_words, chksum;
|
|
|
|
DBUG_ENTER("ha_partition::get_from_handler_file");
|
|
|
|
DBUG_PRINT("enter", ("table name: '%s'", name));
|
|
|
|
|
|
|
|
if (m_file_buffer)
|
|
|
|
DBUG_RETURN(FALSE);
|
2005-12-31 06:01:26 +01:00
|
|
|
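/* Build the .par file name: the full table path plus ha_par_ext */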
fn_format(buff, name, "", ha_par_ext, MY_APPEND_EXT);
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
/* Following could be done with my_stat to read in whole file */
|
|
|
|
if ((file= my_open(buff, O_RDONLY | O_SHARE, MYF(0))) < 0)
|
|
|
|
DBUG_RETURN(TRUE);
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declaration of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint
- Functions that used a pointer to a string length was changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover and handler::discover()
the type for argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of unneeded casts
- Added a few new casts required by other changes
- Added some cast to my_multi_malloc() arguments for safety (as string lengths
needs to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitly as this conflict was often hidden by casting the function to
hash_get_key).
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
(portability fix)
- Removed Windows-specific code to restore the cursor position as this
causes a slowdown on Windows and we should not mix read() and pread()
calls anyway as this is not thread-safe. Updated the function comment to
reflect this. Changed the function that depended on the original behavior
of my_pwrite() to restore the cursor position itself (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MODE_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
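As a sketch of the new convention (illustrative only; a my_read() call with
MYF(MY_NABP), as used below, returns 0 on success, while a plain MYF(0) read
returns the byte count as size_t, with errors as MY_FILE_ERROR, i.e.
(size_t) -1):

size_t nbytes= my_read(file, buffer, count, MYF(0));
if (nbytes == MY_FILE_ERROR)
  goto err;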
2007-05-10 11:59:39 +02:00
|
|
|
if (my_read(file, (uchar *) & buff[0], 8, MYF(MY_NABP)))
|
2005-07-18 13:31:02 +02:00
|
|
|
goto err1;
|
|
|
|
len_words= uint4korr(buff);
|
|
|
|
len_bytes= 4 * len_words;
|
2007-05-10 11:59:39 +02:00
|
|
|
if (!(file_buffer= (char*) my_malloc(len_bytes, MYF(0))))
|
2005-07-18 13:31:02 +02:00
|
|
|
goto err1;
|
|
|
|
VOID(my_seek(file, 0, MY_SEEK_SET, MYF(0)));
|
2007-05-10 11:59:39 +02:00
|
|
|
if (my_read(file, (uchar *) file_buffer, len_bytes, MYF(MY_NABP)))
|
2005-07-18 13:31:02 +02:00
|
|
|
goto err2;
|
|
|
|
|
|
|
|
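/* The .par file carries an XOR checksum: XOR-ing every 4-byte word of a correct file yields 0 */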
chksum= 0;
|
|
|
|
for (i= 0; i < len_words; i++)
|
|
|
|
chksum ^= uint4korr((file_buffer) + 4 * i);
|
|
|
|
if (chksum)
|
|
|
|
goto err2;
|
|
|
|
m_tot_parts= uint4korr((file_buffer) + 8);
|
2006-01-17 08:40:00 +01:00
|
|
|
DBUG_PRINT("info", ("No of parts = %u", m_tot_parts));
|
2005-07-18 13:31:02 +02:00
|
|
|
tot_partition_words= (m_tot_parts + 3) / 4;
|
2007-03-02 17:43:45 +01:00
|
|
|
engine_array= (handlerton **) my_alloca(m_tot_parts * sizeof(handlerton*));
|
2005-12-21 19:18:40 +01:00
|
|
|
for (i= 0; i < m_tot_parts; i++)
|
Bug#38804: Query deadlock causes all tables to be inaccessible.
The problem was a mutex added in bug #27405 to solve a problem
with auto_increment in partitioned InnoDB tables
(in ha_partition::write_row over partitions file->ha_write_row).
The solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug-33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, not handling
multi-row INSERTs properly, etc.)
Changed the auto_increment handling for partitioning:
Added an ha_data variable in table_share for storage engine specific data
such as auto_increment value handling in partitioning (see also WL#4305),
using the ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in
the TABLE_SHARE and use a mutex to lock it for reading, update it,
and unlock it, all in one block. All partitions are accessed only
when the value is not yet initialized.
Also allow reservations of ranges, and if no one has done a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL#3146).
The lock is kept from the first reservation if it is statement based
replication and a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (like INSERT SELECT,
LOAD DATA; unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need to have mutex
protection around write_row in all cases)
and work with any local storage engine.
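A rough sketch of the reservation pattern described above (all names here
are illustrative, not the actual patch):

/* Reserve a range of auto_increment values under the share's mutex */
lock_auto_increment();                    /* wraps ha_data->mutex        */
if (!ha_data->auto_inc_initialized)
  initialize_auto_increment();            /* one-time scan of all parts  */
*first_value= ha_data->next_auto_inc_val;
ha_data->next_auto_inc_val+= nb_desired_values * increment;
unlock_auto_increment();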
2008-09-08 15:30:01 +02:00
|
|
|
engine_array[i]= ha_resolve_by_legacy_type(ha_thd(),
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.
Changes that require code changes in other storage engines:
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
do statement specific cleanups.
(The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all contexts
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmaps after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. As the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For non-DBUG binaries, dbug_tmp_use_all_columns() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away by the compiler).
- If one needs to temporarily set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This required some trivial variable name changes in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
(stats.records is not supposed to be an exact value. It only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
but we don't have a primary key. This allows the handler to take precautions
in remembering any hidden primary key to be able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when reading rows as part of a DELETE
statement. In case of an update we will mark
for read all keys for which a key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed not used variable handler::sortkey
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is updated on engine creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bit maps. (This found a LOT of bugs in current
column marking code).
- In 5.1 before, all used columns were marked in read_set and only updated
columns were marked in write_set. Now we only mark columns for which we
need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage)
- thd->set_query_id is renamed to thd->mark_used_columns and instead
of setting this to an integer value, this has now the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
Changed also all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
for doing the sort and choosing the rows.
- The TABLE_SHARE object has a 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The table object also has two pointers to bitmaps, read_set and write_set,
that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added extra argument to Item::walk() to indicate if we should also
traverse sub queries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
to set up new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the moment
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, but not signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_update() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags())
- table->prepare_for_position() to allow us to tell the handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function)
- table->mark_auto_increment_column() to tell the handler we are going to update
columns that are part of an auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
it to quickly know that it only needs to read columns that are part
of the key. (The handler can also use the column map for detecting this,
but a simpler/faster handler can just monitor the extra() call).
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns,
also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys a set of all keys that are used in the query.
(Simplifies some optimization loops)
- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
but the field in the clustered key is not assumed to be part of all indexes.
(used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
mark all columns as usable. The 'dbug_' versions are primarily intended for use
inside a handler when it wants to just call Field::store() & Field::val()
functions, but doesn't need the column maps set for any other usage.
(i.e. bitmap_is_set() is never called)
- We can't use compare_records() to skip updates for handlers that return
a partial column set where the read_set doesn't cover all columns in the
write set. The reason for this is that if we have a column marked only for
write, we can't at the MySQL level know if the value changed or not.
The reason this worked before was that MySQL marked all to-be-written
columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT
object as a thread specific variable for the handler. Instead we
send the to-be-used MEM_ROOT to get_new_handler().
(Simpler, faster code)
Bugs fixed:
- Column marking was not done correctly in a lot of cases.
(ALTER TABLE, when using triggers, auto_increment fields etc.)
(Could potentially result in wrong values being inserted in table handlers
relying on the old column maps or field->set_query_id being correct.)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
some warnings about
"Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
a crash)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we would give an error
when creating a primary key with NULL fields, even after the fields had been
automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables
- Removed the not needed item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
row pointers (not used columns). This allowed me to remove some references
to sql_command in filesort and should also enable us to return column
results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up some too-long lines
- Moved some variable declarations to the start of functions for better code
readability.
- Removed some unused arguments from functions.
(setup_fields(), mysql_prepare_insert_check_table())
- setup_fields() now takes an enum instead of an int for marking column
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking
of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in ha_reset()
- Don't build table->normalized_path as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this thoroughly.)
Lars has promised to do this.
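(To make the read_set usage above concrete, here is a hedged sketch of how
an engine that sets HA_PARTIAL_COLUMN_READ might skip unneeded columns in
its row-read path; my_example_fetch_column() is a placeholder, not a real
API:)

  int ha_example::rnd_next(uchar *buf)
  {
    /* Unpack only the columns the query needs, per table->read_set */
    for (Field **field_ptr= table->field; *field_ptr; field_ptr++)
    {
      Field *field= *field_ptr;
      if (bitmap_is_set(table->read_set, field->field_index))
        my_example_fetch_column(field, buf);   /* placeholder fetch */
    }
    return 0;
  }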
2006-06-04 17:52:22 +02:00
|
|
|
(enum legacy_db_type)
|
|
|
|
*(uchar *) ((file_buffer) + 12 + i));
|
2005-07-18 13:31:02 +02:00
|
|
|
address_tot_name_len= file_buffer + 12 + 4 * tot_partition_words;
|
|
|
|
tot_name_words= (uint4korr(address_tot_name_len) + 3) / 4;
|
|
|
|
if (len_words != (tot_partition_words + tot_name_words + 4))
|
2007-03-29 06:29:16 +02:00
|
|
|
goto err3;
|
2005-07-18 13:31:02 +02:00
|
|
|
name_buffer_ptr= file_buffer + 16 + 4 * tot_partition_words;
|
|
|
|
VOID(my_close(file, MYF(0)));
|
|
|
|
m_file_buffer= file_buffer; // Will be freed in clear_handler_file()
|
|
|
|
m_name_buffer_ptr= name_buffer_ptr;
|
2007-03-02 17:43:45 +01:00
|
|
|
|
|
|
|
if (!(m_engine_array= (plugin_ref*)
|
|
|
|
my_malloc(m_tot_parts * sizeof(plugin_ref), MYF(MY_WME))))
|
2007-03-29 06:29:16 +02:00
|
|
|
goto err3;
|
2007-03-02 17:43:45 +01:00
|
|
|
|
|
|
|
for (i= 0; i < m_tot_parts; i++)
|
|
|
|
m_engine_array[i]= ha_lock_engine(NULL, engine_array[i]);
|
2007-03-29 06:29:16 +02:00
|
|
|
|
|
|
|
my_afree((gptr) engine_array);
|
2007-03-02 17:43:45 +01:00
|
|
|
|
2006-06-04 17:52:22 +02:00
|
|
|
if (!m_file && create_handlers(mem_root))
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
clear_handler_file();
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
}
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
|
2007-03-29 06:29:16 +02:00
|
|
|
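/*
  Note (descriptive only): the error labels below fall through, so a jump
  to err3 frees the on-stack engine array, then err2 frees the file
  buffer, and err1 closes the file before returning TRUE.
*/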
err3:
|
|
|
|
my_afree((gptr) engine_array);
|
2005-07-18 13:31:02 +02:00
|
|
|
err2:
|
|
|
|
my_free(file_buffer, MYF(0));
|
|
|
|
err1:
|
|
|
|
VOID(my_close(file, MYF(0)));
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
/****************************************************************************
|
|
|
|
MODULE open/close object
|
|
|
|
****************************************************************************/
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Open handler object
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
open()
|
|
|
|
name Full path of table name
|
|
|
|
mode Open mode flags
|
|
|
|
test_if_locked ?
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Used for opening tables. The name will be the name of the file.
|
|
|
|
A table is opened on demand. For instance
|
|
|
|
when a request comes in for a select on the table (tables are not
|
|
|
|
open and closed for each request, they are cached).
|
|
|
|
|
|
|
|
Called from handler.cc by handler::ha_open(). The server opens all tables
|
|
|
|
by calling ha_open() which then calls the handler specific open().
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::open(const char *name, int mode, uint test_if_locked)
|
|
|
|
{
|
|
|
|
char *name_buffer_ptr= m_name_buffer_ptr;
|
2006-01-29 01:22:32 +01:00
|
|
|
int error;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint alloc_len;
|
2006-01-29 01:22:32 +01:00
|
|
|
handler **file;
|
|
|
|
char name_buff[FN_REFLEN];
|
2008-09-08 15:30:01 +02:00
|
|
|
bool is_not_tmp_table= (table_share->tmp_table == NO_TMP_TABLE);
|
2008-10-29 21:20:04 +01:00
|
|
|
ulonglong check_table_flags= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ENTER("ha_partition::open");
|
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
DBUG_ASSERT(table->s == table_share);
|
2005-07-18 13:31:02 +02:00
|
|
|
ref_length= 0;
|
2006-01-17 08:40:00 +01:00
|
|
|
m_mode= mode;
|
|
|
|
m_open_test_lock= test_if_locked;
|
2005-07-18 13:31:02 +02:00
|
|
|
m_part_field_array= m_part_info->full_part_field_array;
|
2006-06-04 17:52:22 +02:00
|
|
|
if (get_from_handler_file(name, &table->mem_root))
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(1);
|
|
|
|
m_start_key.length= 0;
|
|
|
|
m_rec0= table->record[0];
|
2008-09-08 15:30:01 +02:00
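As a rough illustration of the reservation scheme described above, consider the following minimal sketch. HA_DATA_PARTITION_SKETCH and reserve_auto_inc_interval() are simplified stand-ins invented for illustration; the real code spreads this logic over several ha_partition methods.
#include <pthread.h>
typedef unsigned long long ulonglong;

struct HA_DATA_PARTITION_SKETCH
{
  ulonglong next_auto_inc_val;           /* first free value for the table */
  pthread_mutex_t mutex;                 /* guards read + update as one block */
};

static ulonglong
reserve_auto_inc_interval(HA_DATA_PARTITION_SKETCH *ha_data, ulonglong rows)
{
  ulonglong first;
  pthread_mutex_lock(&ha_data->mutex);   /* lock, read and update together */
  first= ha_data->next_auto_inc_val;
  ha_data->next_auto_inc_val+= rows;     /* reserve the whole range at once */
  pthread_mutex_unlock(&ha_data->mutex);
  return first;                          /* caller uses first..first+rows-1 */
}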
|
|
|
m_rec_length= table_share->reclength;
|
2006-01-29 01:22:32 +01:00
|
|
|
alloc_len= m_tot_parts * (m_rec_length + PARTITION_BYTES_IN_POS);
|
2008-09-08 15:30:01 +02:00
|
|
|
alloc_len+= table_share->max_key_length;
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!m_ordered_rec_buffer)
|
|
|
|
{
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to use void *,
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added an extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) the following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value, as we only return
one value (makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover() and handler::discover()
the type of the argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (removed a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of unneeded casts
- Added a few new casts required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string lengths
need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitly, as this conflict was often hidden by casting the function to
hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
(portability fix)
- Removed Windows-specific code to restore the cursor position, as this
causes a slowdown on Windows and we should not mix read() and pread()
calls anyway, as this is not thread safe. Updated the function comment to
reflect this. Changed a function that depended on the original behavior of
my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root() + memcpy() with
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (safety fix)
- Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00
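As a concrete example of the (size_t) -1 error-detection pattern mentioned above, the idiom looks roughly like this; 'fd', 'buffer' and 'count' are assumed to already be in scope inside a DBUG_ENTER'd function.
size_t bytes_read= my_read(fd, buffer, count, MYF(MY_WME));
if (bytes_read == MY_FILE_ERROR)         /* MY_FILE_ERROR is (size_t) -1 */
  DBUG_RETURN(1);                        /* a uint result could not show this */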
|
|
|
if (!(m_ordered_rec_buffer= (uchar*)my_malloc(alloc_len, MYF(MY_WME))))
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
}
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
We set up one record per partition, and each record has 2 bytes in
|
|
|
|
front where the partition id is written. This is used by ordered
|
|
|
|
index_read.
|
|
|
|
We also set up a reference to the first record for temporary use in
|
|
|
|
setting up the scan.
|
|
|
|
*/
|
2005-11-07 17:15:23 +01:00
|
|
|
char *ptr= (char*)m_ordered_rec_buffer;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint i= 0;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
int2store(ptr, i);
|
|
|
|
ptr+= m_rec_length + PARTITION_BYTES_IN_POS;
|
|
|
|
} while (++i < m_tot_parts);
|
2007-05-10 11:59:39 +02:00
|
|
|
m_start_key.key= (const uchar*)ptr;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
}
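The buffer prepared above ends up with the layout sketched below; part_rec() is a hypothetical helper added only to illustrate the offsets, it does not exist in this class.
/*
  m_ordered_rec_buffer layout:
    [id 0 (PARTITION_BYTES_IN_POS bytes)][record 0 (m_rec_length bytes)]
    [id 1 (PARTITION_BYTES_IN_POS bytes)][record 1 (m_rec_length bytes)]
    ...
*/
static uchar *part_rec(uchar *ordered_rec_buffer, uint i, uint rec_length)
{
  /* Skip i full entries, then the 2-byte partition id of entry i */
  return ordered_rec_buffer +
         i * (rec_length + PARTITION_BYTES_IN_POS) + PARTITION_BYTES_IN_POS;
}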
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2008-10-29 21:20:04 +01:00
|
|
|
/* Initialize the bitmap we use to determine what partitions are used */
|
2007-02-28 10:25:49 +01:00
|
|
|
if (!is_clone)
|
|
|
|
{
|
|
|
|
if (bitmap_init(&(m_part_info->used_partitions), NULL, m_tot_parts, TRUE))
|
|
|
|
DBUG_RETURN(1);
|
|
|
|
bitmap_set_all(&(m_part_info->used_partitions));
|
|
|
|
}
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
create_partition_name(name_buff, name, name_buffer_ptr, NORMAL_PART_NAME,
|
|
|
|
FALSE);
|
2005-11-23 21:45:02 +01:00
|
|
|
if ((error= (*file)->ha_open(table, (const char*) name_buff, mode,
|
2005-07-18 13:31:02 +02:00
|
|
|
test_if_locked)))
|
|
|
|
goto err_handler;
|
2005-11-19 02:02:27 +01:00
|
|
|
m_no_locks+= (*file)->lock_count();
|
2005-07-18 13:31:02 +02:00
|
|
|
name_buffer_ptr+= strlen(name_buffer_ptr) + 1;
|
|
|
|
set_if_bigger(ref_length, ((*file)->ref_length));
|
2008-10-29 21:20:04 +01:00
|
|
|
/*
|
|
|
|
Verify that all partitions have the same set of table flags.
|
|
|
|
Mask all flags that partitioning enables/disables.
|
|
|
|
*/
|
|
|
|
if (!check_table_flags)
|
|
|
|
{
|
|
|
|
check_table_flags= (((*file)->ha_table_flags() &
|
|
|
|
~(PARTITION_DISABLED_TABLE_FLAGS)) |
|
|
|
|
(PARTITION_ENABLED_TABLE_FLAGS));
|
|
|
|
}
|
|
|
|
else if (check_table_flags != (((*file)->ha_table_flags() &
|
|
|
|
~(PARTITION_DISABLED_TABLE_FLAGS)) |
|
|
|
|
(PARTITION_ENABLED_TABLE_FLAGS)))
|
|
|
|
{
|
|
|
|
error= HA_ERR_INITIALIZATION;
|
|
|
|
goto err_handler;
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
} while (*(++file));
|
2006-07-13 02:38:17 +02:00
|
|
|
key_used_on_scan= m_file[0]->key_used_on_scan;
|
|
|
|
implicit_emptied= m_file[0]->implicit_emptied;
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
|
|
|
Add 2 bytes for partition id in position ref length.
|
|
|
|
ref_length=max_in_all_partitions(ref_length) + PARTITION_BYTES_IN_POS
|
|
|
|
*/
|
|
|
|
ref_length+= PARTITION_BYTES_IN_POS;
|
|
|
|
m_ref_length= ref_length;
|
|
|
|
/*
|
|
|
|
Release buffer read from .par file. It will not be reused again after
|
|
|
|
being opened once.
|
|
|
|
*/
|
|
|
|
clear_handler_file();
|
|
|
|
/*
|
2008-10-29 21:20:04 +01:00
|
|
|
Initialize the priority queue, set up for reading forward.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
2006-01-17 08:40:00 +01:00
|
|
|
if ((error= init_queue(&m_queue, m_tot_parts, (uint) PARTITION_BYTES_IN_POS,
|
2005-07-18 13:31:02 +02:00
|
|
|
0, key_rec_cmp, (void*)this)))
|
|
|
|
goto err_handler;
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
/*
|
|
|
|
Use table_share->ha_data to share auto_increment_value among all handlers
|
|
|
|
for the same table.
|
|
|
|
*/
|
|
|
|
if (is_not_tmp_table)
|
|
|
|
pthread_mutex_lock(&table_share->mutex);
|
|
|
|
if (!table_share->ha_data)
|
|
|
|
{
|
|
|
|
HA_DATA_PARTITION *ha_data;
|
|
|
|
/* currently only needed for auto_increment */
|
|
|
|
table_share->ha_data= ha_data= (HA_DATA_PARTITION*)
|
|
|
|
alloc_root(&table_share->mem_root,
|
|
|
|
sizeof(HA_DATA_PARTITION));
|
|
|
|
if (!ha_data)
|
|
|
|
{
|
2008-10-01 12:40:05 +02:00
|
|
|
if (is_not_tmp_table)
|
|
|
|
pthread_mutex_unlock(&table_share->mutex);
|
2008-09-08 15:30:01 +02:00
|
|
|
goto err_handler;
|
|
|
|
}
|
|
|
|
DBUG_PRINT("info", ("table_share->ha_data 0x%p", ha_data));
|
|
|
|
bzero(ha_data, sizeof(HA_DATA_PARTITION));
|
|
|
|
}
|
|
|
|
if (is_not_tmp_table)
|
|
|
|
pthread_mutex_unlock(&table_share->mutex);
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
|
|
|
Some handlers update statistics as part of the open call. This will in
|
|
|
|
some cases corrupt the statistics of the partition handler; thus,
|
|
|
|
to ensure we have correct statistics, we call info() from open() after
|
|
|
|
calling open() on all individual handlers.
|
|
|
|
*/
|
2008-10-29 21:20:04 +01:00
|
|
|
m_handler_status= handler_opened;
|
2005-07-18 13:31:02 +02:00
|
|
|
info(HA_STATUS_VARIABLE | HA_STATUS_CONST);
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
|
|
|
|
err_handler:
|
|
|
|
while (file-- != m_file)
|
|
|
|
(*file)->close();
|
2008-08-19 11:44:22 +02:00
|
|
|
if (!is_clone)
|
|
|
|
bitmap_free(&(m_part_info->used_partitions));
|
2006-04-10 18:34:18 +02:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
|
|
|
|
2007-02-27 20:01:03 +01:00
|
|
|
handler *ha_partition::clone(MEM_ROOT *mem_root)
|
|
|
|
{
|
2007-04-16 18:16:17 +02:00
|
|
|
handler *new_handler= get_new_handler(table->s, mem_root,
|
|
|
|
table->s->db_type());
|
2007-02-27 20:01:03 +01:00
|
|
|
if (!new_handler)                       /* guard before dereferencing below */
  return NULL;
((ha_partition*)new_handler)->m_part_info= m_part_info;
|
|
|
|
((ha_partition*)new_handler)->is_clone= TRUE;
|
|
|
|
if (new_handler && !new_handler->ha_open(table,
|
|
|
|
table->s->normalized_path.str,
|
|
|
|
table->db_stat,
|
|
|
|
HA_OPEN_IGNORE_IF_LOCKED))
|
|
|
|
return new_handler;
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Close handler object
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
close()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Called from sql_base.cc, sql_select.cc, and table.cc.
|
|
|
|
In sql_select.cc it is only used to close up temporary tables or during
|
|
|
|
the process where a temporary table is converted over to being a
|
|
|
|
MyISAM table.
|
|
|
|
For sql_base.cc look at close_data_tables().
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::close(void)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
bool first= TRUE;
|
2006-01-29 01:22:32 +01:00
|
|
|
handler **file;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ENTER("ha_partition::close");
|
2005-11-07 16:25:06 +01:00
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
DBUG_ASSERT(table->s == table_share);
|
2006-01-17 08:40:00 +01:00
|
|
|
delete_queue(&m_queue);
|
2007-02-27 20:01:03 +01:00
|
|
|
if (!is_clone)
|
|
|
|
bitmap_free(&(m_part_info->used_partitions));
|
2005-07-18 13:31:02 +02:00
|
|
|
file= m_file;
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
repeat:
|
2005-07-18 13:31:02 +02:00
|
|
|
do
|
|
|
|
{
|
|
|
|
(*file)->close();
|
|
|
|
} while (*(++file));
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
if (first && m_added_file && m_added_file[0])
|
|
|
|
{
|
|
|
|
file= m_added_file;
|
|
|
|
first= FALSE;
|
|
|
|
goto repeat;
|
|
|
|
}
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2008-10-29 21:20:04 +01:00
|
|
|
m_handler_status= handler_closed;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE start/end statement
|
|
|
|
****************************************************************************/
|
|
|
|
/*
|
|
|
|
A number of methods to define various constants for the handler. In
|
|
|
|
the case of the partition handler we need to use some max and min
|
|
|
|
of the underlying handlers in most cases.
|
|
|
|
*/
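As a sketch of the min/max aggregation the comment above refers to, a constant that every partition must be able to honour is the minimum of the underlying maxima. example_min_key_length() is a made-up illustration, not one of the real methods.
static uint example_min_key_length(handler **file_array)
{
  uint value= UINT_MAX;
  for (handler **file= file_array; *file; file++)
  {
    uint part_max= (*file)->max_supported_key_length();
    if (part_max < value)               /* same idea as set_if_smaller() */
      value= part_max;
  }
  return value;      /* the composite handler can promise no more than this */
}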
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Set external locks on table
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
external_lock()
|
|
|
|
thd Thread object
|
|
|
|
lock_type Type of external lock
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
First you should go read the section "locking functions for mysql" in
|
|
|
|
lock.cc to understand this.
|
|
|
|
This creates a lock on the table. If you are implementing a storage engine
|
|
|
|
that can handle transactions, look at ha_berkeley.cc to see how you will
|
|
|
|
want to go about doing this. Otherwise you should consider calling
|
|
|
|
flock() here.
|
|
|
|
Originally this method was used to set locks on file level to enable
|
|
|
|
several MySQL Servers to work on the same data. For transactional
|
|
|
|
engines it has been "abused" to also mean start and end of statements
|
|
|
|
to enable proper rollback of statements and transactions. When LOCK
|
|
|
|
TABLES has been issued the start_stmt method takes over the role of
|
|
|
|
indicating start of statement but in this case there is no end of
|
|
|
|
statement indicator(?).
|
|
|
|
|
|
|
|
Called from lock.cc by lock_external() and unlock_external(). Also called
|
|
|
|
from sql_table.cc by copy_data_between_tables().
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
int ha_partition::external_lock(THD *thd, int lock_type)
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
bool first= TRUE;
|
2005-07-18 13:31:02 +02:00
|
|
|
uint error;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::external_lock");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2008-09-08 15:30:01 +02:00
|
|
|
DBUG_ASSERT(!auto_increment_lock && !auto_increment_safe_stmt_log_lock);
|
2005-07-18 13:31:02 +02:00
|
|
|
file= m_file;
|
2006-01-17 08:40:00 +01:00
|
|
|
m_lock_type= lock_type;
|
|
|
|
|
|
|
|
repeat:
|
2005-07-18 13:31:02 +02:00
|
|
|
do
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
DBUG_PRINT("info", ("external_lock(thd, %d) iteration %d",
|
2006-11-27 17:16:08 +01:00
|
|
|
lock_type, (int) (file - m_file)));
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((error= (*file)->ha_external_lock(thd, lock_type)))
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
if (F_UNLCK != lock_type)
|
|
|
|
goto err_handler;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
} while (*(++file));
|
2006-01-29 01:22:32 +01:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
if (first && m_added_file && m_added_file[0])
|
|
|
|
{
|
|
|
|
DBUG_ASSERT(lock_type == F_UNLCK);
|
|
|
|
file= m_added_file;
|
|
|
|
first= FALSE;
|
|
|
|
goto repeat;
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(0);
|
|
|
|
|
|
|
|
err_handler:
|
|
|
|
while (file-- != m_file)
|
2006-01-29 01:22:32 +01:00
|
|
|
{
|
2007-12-20 19:16:55 +01:00
|
|
|
(*file)->ha_external_lock(thd, F_UNLCK);
|
2006-01-29 01:22:32 +01:00
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Get the lock(s) for the table and perform conversion of locks if needed
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
store_lock()
|
|
|
|
thd Thread object
|
|
|
|
to Lock object array
|
|
|
|
lock_type Table lock type
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
The idea with handler::store_lock() is the following:
|
|
|
|
|
|
|
|
The statement decides which locks we should need for the table.
|
|
|
|
For updates/deletes/inserts we get WRITE locks; for SELECT... we get
|
|
|
|
read locks.
|
|
|
|
|
|
|
|
Before adding the lock into the table lock handler (see thr_lock.c)
|
|
|
|
mysqld calls store lock with the requested locks. Store lock can now
|
|
|
|
modify a write lock to a read lock (or some other lock), ignore the
|
|
|
|
lock (if we don't want to use MySQL table locks at all) or add locks
|
|
|
|
for many tables (like we do when we are using a MERGE handler).
|
|
|
|
|
|
|
|
Berkeley DB, for example, changes all WRITE locks to TL_WRITE_ALLOW_WRITE
|
|
|
|
(which signals that we are doing WRITES, but we are still allowing other
|
|
|
|
readers and writers).
|
|
|
|
|
|
|
|
When releasing locks, store_lock() is also called. In this case one
|
|
|
|
usually doesn't have to do anything.
|
|
|
|
|
|
|
|
store_lock is called when holding a global mutex to ensure that only
|
|
|
|
one thread at a time changes the locking information of tables.
|
|
|
|
|
|
|
|
In some exceptional cases MySQL may send a request for a TL_IGNORE;
|
|
|
|
This means that we are requesting the same lock as last time and this
|
|
|
|
should also be ignored. (This may happen when someone does a flush
|
|
|
|
table when we have opened a part of the tables, in which case mysqld
|
|
|
|
closes and reopens the tables and tries to get the same locks as last
|
|
|
|
time). In the future we will probably try to remove this.
|
|
|
|
|
|
|
|
Called from lock.cc by get_lock_data().
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
THR_LOCK_DATA **ha_partition::store_lock(THD *thd,
|
|
|
|
THR_LOCK_DATA **to,
|
|
|
|
enum thr_lock_type lock_type)
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::store_lock");
|
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
2006-11-27 17:16:08 +01:00
|
|
|
DBUG_PRINT("info", ("store lock %d iteration", (int) (file - m_file)));
|
2005-07-18 13:31:02 +02:00
|
|
|
to= (*file)->store_lock(thd, to, lock_type);
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(to);
|
|
|
|
}
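For contrast, a typical single-engine store_lock() doing the WRITE-lock downgrade described above might look roughly like this. This is a sketch modelled on engines such as ha_tina, not how ha_partition itself behaves, and the lock argument stands in for the engine's per-table THR_LOCK_DATA.
THR_LOCK_DATA **example_store_lock(THD *thd, THR_LOCK_DATA **to,
                                   enum thr_lock_type lock_type,
                                   THR_LOCK_DATA *lock)
{
  if (lock_type != TL_IGNORE && lock->type == TL_UNLOCK)
  {
    /* Downgrade plain WRITE locks so other readers and writers can proceed */
    if (lock_type >= TL_WRITE_CONCURRENT_INSERT && lock_type <= TL_WRITE)
      lock_type= TL_WRITE_ALLOW_WRITE;
    lock->type= lock_type;
  }
  *to++= lock;                           /* hand the lock back to mysqld */
  return to;
}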
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Start a statement when table is locked
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
start_stmt()
|
|
|
|
thd Thread object
|
|
|
|
lock_type Type of external lock
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
This method is called instead of external lock when the table is locked
|
|
|
|
before the statement is executed.
|
|
|
|
*/
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2005-10-06 12:45:30 +02:00
|
|
|
int ha_partition::start_stmt(THD *thd, thr_lock_type lock_type)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
int error= 0;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::start_stmt");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
2005-10-06 12:45:30 +02:00
|
|
|
if ((error= (*file)->start_stmt(thd, lock_type)))
|
2005-07-18 13:31:02 +02:00
|
|
|
break;
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Get number of lock objects returned in store_lock
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
lock_count()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
Number of locks returned in call to store_lock
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Returns the number of store locks needed in the call to store_lock().
|
|
|
|
We return the number of partitions, since we call store_lock() on each
|
|
|
|
underlying handler. This assists the above functions in allocating
|
|
|
|
sufficient space for lock structures.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
uint ha_partition::lock_count() const
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::lock_count");
|
2006-01-29 01:22:32 +01:00
|
|
|
DBUG_PRINT("info", ("m_no_locks %d", m_no_locks));
|
2005-11-22 00:21:04 +01:00
|
|
|
DBUG_RETURN(m_no_locks);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Unlock last accessed row
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
unlock_row()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
The record currently being processed was not in the result set of the statement
|
|
|
|
and is thus unlocked. Used for UPDATE and DELETE queries.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
void ha_partition::unlock_row()
|
|
|
|
{
|
2008-11-10 21:13:24 +01:00
|
|
|
DBUG_ENTER("ha_partition::unlock_row");
|
2005-07-18 13:31:02 +02:00
|
|
|
m_file[m_last_part]->unlock_row();
|
2008-11-10 21:13:24 +01:00
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
2008-12-16 12:44:18 +01:00
|
|
|
/**
|
|
|
|
Check if semi consistent read was used
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
was_semi_consistent_read()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
TRUE Previous read was a semi consistent read
|
|
|
|
FALSE Previous read was not a semi consistent read
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
See handler.h:
|
|
|
|
In an UPDATE or DELETE, if the row under the cursor was locked by another
|
|
|
|
transaction, and the engine used an optimistic read of the last
|
|
|
|
committed row value under the cursor, then the engine returns 1 from this
|
|
|
|
function. MySQL must NOT try to update this optimistic value. If the
|
|
|
|
optimistic value does not match the WHERE condition, MySQL can decide to
|
|
|
|
skip over this row. Currently only works for InnoDB. This can be used to
|
|
|
|
avoid unnecessary lock waits.
|
|
|
|
|
|
|
|
If this method returns nonzero, it will also signal the storage
|
|
|
|
engine that the next read will be a locking re-read of the row.
|
|
|
|
*/
|
|
|
|
bool ha_partition::was_semi_consistent_read()
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::was_semi_consistent_read");
|
|
|
|
DBUG_ASSERT(m_last_part < m_tot_parts &&
|
|
|
|
bitmap_is_set(&(m_part_info->used_partitions), m_last_part));
|
|
|
|
DBUG_RETURN(m_file[m_last_part]->was_semi_consistent_read());
|
|
|
|
}
|
2008-11-10 21:13:24 +01:00
|
|
|
|
|
|
|
/**
|
|
|
|
Use semi consistent read if possible
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
try_semi_consistent_read()
|
|
|
|
yes Turn on semi consistent read
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
See handler.h:
|
|
|
|
Tell the engine whether it should avoid unnecessary lock waits.
|
|
|
|
If yes, in an UPDATE or DELETE, if the row under the cursor was locked
|
|
|
|
by another transaction, the engine may try an optimistic read of
|
|
|
|
the last committed row value under the cursor.
|
|
|
|
Note: prune_partitions() has already been called before this call, so using
|
|
|
|
pruning is OK.
|
|
|
|
*/
|
|
|
|
void ha_partition::try_semi_consistent_read(bool yes)
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::try_semi_consistent_read");
|
|
|
|
|
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
{
|
|
|
|
if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
|
|
|
|
(*file)->try_semi_consistent_read(yes);
|
|
|
|
}
|
|
|
|
DBUG_VOID_RETURN;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
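A sketch of how the server side is expected to use the two methods above during an UPDATE scan follows; row_matches() is a hypothetical WHERE-clause check, and the real loop lives in mysql_update().
extern bool row_matches(const uchar *record);   /* hypothetical predicate */

static int example_update_scan(handler *file, uchar *record)
{
  int error;
  file->try_semi_consistent_read(1);     /* allow optimistic reads */
  while (!(error= file->rnd_next(record)))
  {
    if (!row_matches(record))
    {
      /*
        A nonzero was_semi_consistent_read() means we saw an optimistic
        read of the last committed value; we must not update it, just
        skip the row. Otherwise release the row lock we do not need.
      */
      if (file->was_semi_consistent_read())
        continue;
      file->unlock_row();
    }
  }
  file->try_semi_consistent_read(0);
  return error;
}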
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE change record
|
|
|
|
****************************************************************************/
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Insert a row to the table
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
SYNOPSIS
|
|
|
|
write_row()
|
|
|
|
buf The row in MySQL Row Format
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
write_row() inserts a row. buf is a byte array of data, normally
|
|
|
|
record[0].
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
You can use the field information to extract the data from the native byte
|
|
|
|
array type.
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
Example of this would be:
|
|
|
|
for (Field **field=table->field ; *field ; field++)
|
|
|
|
{
|
|
|
|
...
|
|
|
|
}
|
|
|
|
|
|
|
|
See ha_tina.cc for a variant of extracting all of the data as strings.
|
|
|
|
ha_berkeley.cc has a variant of how to store it intact by "packing" it
|
|
|
|
for ha_berkeley's own native storage type.
|
|
|
|
|
|
|
|
Called from item_sum.cc, item_sum.cc, sql_acl.cc, sql_insert.cc,
|
|
|
|
sql_insert.cc, sql_select.cc, sql_table.cc, sql_udf.cc, and sql_update.cc.
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
ADDITIONAL INFO:
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-02-16 21:19:00 +01:00
|
|
|
We have to set timestamp fields and auto_increment fields, because those
|
|
|
|
may be used in determining which partition the row should be written to.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
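A slightly fuller version of the field loop sketched in the comment above might look like this; it is purely illustrative (see ha_tina.cc for a real implementation), and a real engine must first make sure the read_set bits allow reading the fields, as this file does with dbug_tmp_use_all_columns() below.
char attr_buf[MAX_FIELD_WIDTH];
String attr(attr_buf, sizeof(attr_buf), &my_charset_bin);
for (Field **field= table->field; *field; field++)
{
  (*field)->val_str(&attr);
  /* attr.ptr() / attr.length() now hold the column value as a string */
}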
|
|
|
|
|
2007-05-10 11:59:39 +02:00
|
|
|
int ha_partition::write_row(uchar * buf)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
uint32 part_id;
|
|
|
|
int error;
|
2006-01-17 09:25:12 +01:00
|
|
|
longlong func_value;
|
2008-09-08 15:30:01 +02:00
|
|
|
bool have_auto_increment= table->next_number_field && buf == table->record[0];
|
2007-10-17 20:40:23 +02:00
|
|
|
my_bitmap_map *old_map;
|
2008-09-08 15:30:01 +02:00
|
|
|
HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
|
2007-12-19 20:15:02 +01:00
|
|
|
THD *thd= ha_thd();
|
2008-08-15 20:26:25 +02:00
|
|
|
timestamp_auto_set_type orig_timestamp_type= table->timestamp_field_type;
|
2005-07-18 13:31:02 +02:00
|
|
|
#ifdef NOT_NEEDED
|
2007-05-10 11:59:39 +02:00
|
|
|
uchar *rec0= m_rec0;
|
2005-07-18 13:31:02 +02:00
|
|
|
#endif
|
|
|
|
DBUG_ENTER("ha_partition::write_row");
|
|
|
|
DBUG_ASSERT(buf == m_rec0);
|
|
|
|
|
2006-02-16 21:19:00 +01:00
|
|
|
/* If we have a timestamp column, update it to the current time */
|
|
|
|
if (table->timestamp_field_type & TIMESTAMP_AUTO_SET_ON_INSERT)
|
|
|
|
table->timestamp_field->set_time();
|
2008-08-15 20:26:25 +02:00
|
|
|
table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;
|
2006-02-16 21:19:00 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
If we have an auto_increment column and we are writing a changed row
|
|
|
|
or a new row, then update the auto_increment value in the record.
|
|
|
|
*/
|
2008-09-08 15:30:01 +02:00
|
|
|
if (have_auto_increment)
|
2007-04-24 15:42:46 +02:00
|
|
|
{
|
2008-09-08 15:30:01 +02:00
|
|
|
if (!ha_data->auto_inc_initialized &&
|
|
|
|
!table->s->next_number_keypart)
|
2007-10-17 20:40:23 +02:00
|
|
|
{
|
|
|
|
/*
|
2008-09-08 15:30:01 +02:00
|
|
|
If auto_increment in table_share is not initialized, start by
|
|
|
|
initializing it.
|
2007-10-17 20:40:23 +02:00
|
|
|
*/
|
2008-09-08 15:30:01 +02:00
|
|
|
info(HA_STATUS_AUTO);
|
2007-10-17 20:40:23 +02:00
|
|
|
}
    error= update_auto_increment();

    /*
      If we have failed to set the auto-increment value for this row,
      it is highly likely that we will not be able to insert it into
      the correct partition. We must check and fail if necessary.
    */
    if (error)
      goto exit;
  }

  old_map= dbug_tmp_use_all_columns(table, table->read_set);
#ifdef NOT_NEEDED
  if (likely(buf == rec0))
#endif
    error= m_part_info->get_partition_id(m_part_info, &part_id,
                                         &func_value);
#ifdef NOT_NEEDED
  else
  {
    set_field_ptr(m_part_field_array, buf, rec0);
    error= m_part_info->get_partition_id(m_part_info, &part_id,
                                         &func_value);
    set_field_ptr(m_part_field_array, rec0, buf);
  }
#endif

  /*
    The following notes come from the WL#3281 handler cleanup changeset.
    This changeset is largely a handler cleanup changeset (WL#3281), but
    includes fixes and cleanups that were found necessary while testing
    the handler changes.
    Changes that require code changes in other storage engines
    (note that all changes are very straightforward and one should find
    all issues by compiling a --debug build and fixing all compiler
    errors and all asserts in field.cc while running the test suite):
    - New optional handler function introduced: reset().
      This is called after every DML statement to make it easy for a
      handler to do statement specific cleanups.
      (The only case it's not called is if we force the file to be
      closed.)
    - handler::extra(HA_EXTRA_RESET) is removed. Code that was there
      before should be moved to handler::reset().
    - table->read_set contains a bitmap over all columns that are needed
      in the query. read_row() and similar functions only need to read
      these columns.
    - table->write_set contains a bitmap over all columns that will be
      updated in the query. write_row() and update_row() only need to
      update these columns.
      The above bitmaps should now be up to date in all contexts
      (including ALTER TABLE, filesort()).
      The handler is informed of any changes to the bitmaps after
      fix_fields() by calling the virtual function
      handler::column_bitmaps_signal(). If the handler does caching of
      these bitmaps (instead of using table->read_set, table->write_set),
      it should redo the caching in this code. As the signal() may be
      sent several times, it's probably best to set a variable in the
      signal and redo the caching on read_row() / write_row() if the
      variable was set.
    - Removed the read_set and write_set bitmap objects from the handler
      class.
    - Removed all column bit handling functions from the handler class.
      (Now one instead uses the normal bitmap functions in my_bitmap.c
      instead of handler dedicated bitmap functions.)
    - field->query_id is removed. One should instead check
      table->read_set and table->write_set if a field is used in the
      query.
    - handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
      handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One
      should now instead use table->read_set to check which columns to
      retrieve.
    - If a handler needs to call Field->val() or Field->store() on
      columns that are not used in the query, one should install a
      temporary all-columns-used map while doing so. For this, we
      provide the following functions:
        my_bitmap_map *old_map=
          dbug_tmp_use_all_columns(table, table->read_set);
        field->val();
        dbug_tmp_restore_column_map(table->read_set, old_map);
      and similar for the write map:
        my_bitmap_map *old_map=
          dbug_tmp_use_all_columns(table, table->write_set);
        field->val();
        dbug_tmp_restore_column_map(table->write_set, old_map);
      If this is not done, you will sooner or later hit a DBUG_ASSERT
      in the field store() / val() functions.
      (For non-DBUG binaries, dbug_tmp_use_all_columns() and
      dbug_tmp_restore_column_map() are inline dummy functions and
      should be optimized away by the compiler.)
    - If one needs to temporarily set the column map for all binaries
      (and not just to avoid the DBUG_ASSERT() in the Field::store() /
      Field::val() methods) one should use the functions
      tmp_use_all_columns() and tmp_restore_column_map() instead of the
      above dbug_ variants.
    - All 'status' fields in the handler base class (like records,
      data_file_length etc.) are now stored in a 'stats' struct. This
      makes it easier to know what status variables are provided by the
      base handler. This required some trivial variable name changes in
      the extra() function.
    - New virtual function handler::records(). This is called to
      optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is
      true. (stats.records is not supposed to be an exact value. It only
      has to be 'reasonable enough' for the optimizer to be able to
      choose a good optimization path.)
    - Non virtual handler::init() function added for caching of virtual
      constants from the engine.
    - Removed has_transactions() virtual method. Now one should instead
      return HA_NO_TRANSACTIONS in table_flags() if the table handler
      DOES NOT support transactions.
    - The 'xxxx_create_handler()' function now has a MEM_ROOT argument
      that is to be used with 'new handler_name()' to allocate the
      handler in the right area. The xxxx_create_handler() function is
      also responsible for any initialization of the object before
      returning. For example, one should change:
        static handler *myisam_create_handler(TABLE_SHARE *table)
        {
          return new ha_myisam(table);
        }
      ->
        static handler *myisam_create_handler(TABLE_SHARE *table,
                                              MEM_ROOT *mem_root)
        {
          return new (mem_root) ha_myisam(table);
        }
    - New optional virtual function: use_hidden_primary_key().
      This is called in case of an update/delete when
      (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
      but we don't have a primary key. This allows the handler to take
      precautions in remembering any hidden primary key to be able to
      update/delete any found row. The default handler marks all columns
      to be read.
    - handler::table_flags() now returns a ulonglong (to allow for more
      flags).
    - New/changed table_flags():
      - HA_HAS_RECORDS          Set if ::records() is supported.
      - HA_NO_TRANSACTIONS      Set if engine doesn't support
                                transactions.
      - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                                Set if we should mark all primary key
                                columns for read when reading rows as
                                part of a DELETE statement. If there is
                                no primary key, all columns are marked
                                for read.
      - HA_PARTIAL_COLUMN_READ  Set if engine will not read all columns
                                in some cases (based on
                                table->read_set).
      - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                                Renamed to
                                HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
      - HA_DUPP_POS             Renamed to HA_DUPLICATE_POS.
      - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                                Set this if we should mark ALL key
                                columns for read when reading rows as
                                part of a DELETE statement. In case of
                                an update we will mark all keys for read
                                for which any key part changed value.
      - HA_STATS_RECORDS_IS_EXACT
                                Set this if stats.records is exact.
                                (This saves us some extra records()
                                calls when optimizing COUNT(*).)
    - Removed table_flags():
      - HA_NOT_EXACT_COUNT      Now one should instead use
                                HA_HAS_RECORDS if handler::records()
                                gives an exact count() and
                                HA_STATS_RECORDS_IS_EXACT if
                                stats.records is exact.
      - HA_READ_RND_SAME        Removed (no one supported this one).
    - Removed unneeded functions ha_retrieve_all_cols() and
      ha_retrieve_all_pk().
    - Renamed handler::dupp_pos to handler::dup_pos.
    - Removed unused variable handler::sortkey.
    Upper level handler changes:
    - ha_reset() now does some overall checks and calls ::reset().
    - ha_table_flags() added. This is a cached version of table_flags().
      The cache is set up on engine creation time and updated on open.
    MySQL level changes (not obvious from the above):
    - DBUG_ASSERT() added to check that column usage matches what is set
      in the column usage bit maps. (This found a LOT of bugs in current
      column marking code.)
    - In 5.1 before, all used columns were marked in read_set and only
      updated columns were marked in write_set. Now we only mark columns
      for which we need a value in read_set.
    - Column bitmaps are created in open_binary_frm() and
      open_table_from_share(). (Before this was in table.cc.)
    - handler::table_flags() calls are replaced with
      handler::ha_table_flags().
    - For calling field->val() you must have the corresponding bit set
      in table->read_set. For calling field->store() you must have the
      corresponding bit set in table->write_set. (There are asserts in
      all store()/val() functions to catch wrong usage.)
    - thd->set_query_id is renamed to thd->mark_used_columns and instead
      of setting this to an integer value, this now has the values:
      MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE.
      Changed also all variables named 'set_query_id' to
      mark_used_columns.
    - In filesort() we now inform the handler of exactly which columns
      are needed for doing the sort and choosing the rows.
    - The TABLE_SHARE object has an 'all_set' column bitmap one can use
      when one needs a column bitmap with all columns set.
      (This is used for table->use_all_columns() and other places.)
    - The TABLE object has 3 column bitmaps:
      - def_read_set   Default bitmap for columns to be read.
      - def_write_set  Default bitmap for columns to be written.
      - tmp_set        Can be used as a temporary bitmap when needed.
      The table object also has two pointers to bitmaps, read_set and
      write_set, that the handler should use to find out which columns
      are used in which way.
    - count() optimization now calls handler::records() instead of using
      handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is
      true).
    - Added extra argument to Item::walk() to indicate if we should also
      traverse sub queries.
    - Added TABLE parameter to cp_buffer_from_ref().
    - Don't close tables created with CREATE ... SELECT but keep them in
      the table cache. (Faster usage of newly created tables.)
    New interfaces:
    - table->clear_column_bitmaps() to initialize the bitmaps for tables
      at start of new statements.
    - table->column_bitmaps_set() to set up new column bitmaps and
      signal the handler about this.
    - table->column_bitmaps_set_no_signal() for some few cases where we
      need to set up new column bitmaps but don't signal the handler (as
      the handler has already been signaled about these before). Used
      for the moment only in opt_range.cc when doing ROR scans.
    - table->use_all_columns() to install a bitmap where all columns are
      marked as used in the read and the write set.
    - table->default_column_bitmaps() to install the normal read and
      write column bitmaps, but without signaling the handler about
      this. This is mainly used when creating TABLE instances.
    - table->mark_columns_needed_for_delete(),
      table->mark_columns_needed_for_update() and
      table->mark_columns_needed_for_insert() to allow us to put
      additional columns in column usage maps if the handler so
      requires. (The handler indicates what it needs in
      handler->table_flags().)
    - table->prepare_for_position() to allow us to tell the handler that
      it needs to read primary key parts to be able to store them in
      future table->position() calls.
      (This replaces the table->file->ha_retrieve_all_pk function.)
    - table->mark_auto_increment_column() to tell the handler we are
      going to update columns that are part of any auto_increment key.
    - table->mark_columns_used_by_index() to mark all columns that are
      part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the
      handler to allow it to quickly know that it only needs to read
      columns that are part of the key. (The handler can also use the
      column map for detecting this, but simpler/faster handlers can
      just monitor the extra() call.)
    - table->mark_columns_used_by_index_no_reset() to mark, in addition
      to other columns, all columns that are used by the given key.
    - table->restore_column_maps_after_mark_index() to restore to
      default column maps after a call to
      table->mark_columns_used_by_index().
    - New item function register_field_in_read_map(), for marking used
      columns in table->read_map. Used by filesort() to mark all used
      columns.
    - Maintain in TABLE->merge_keys a set of all keys that are used in
      the query. (Simplifies some optimization loops.)
    - Maintain Field->part_of_key_not_clustered which is like
      Field->part_of_key but the field in the clustered key is not
      assumed to be part of all indexes. (Used in opt_range.cc for
      faster loops.)
    - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
      tmp_use_all_columns() and tmp_restore_column_map() functions to
      temporarily mark all columns as usable. The 'dbug_' version is
      primarily intended inside a handler when it wants to just call
      Field::store() & Field::val() functions, but doesn't need the
      column maps set for any other usage (i.e. bitmap_is_set() is never
      called).
    - We can't use compare_records() to skip updates for handlers that
      return a partial column set where the read_set doesn't cover all
      columns in the write set. The reason for this is that if we have a
      column marked only for write we can't at the MySQL level know if
      the value changed or not. The reason this worked before was that
      MySQL marked all to-be-written columns as also to be read. The new
      'optimal' bitmaps exposed this 'hidden bug'.
    - open_table_from_share() no longer sets up a temporary MEM_ROOT
      object as a thread specific variable for the handler. Instead we
      send the to-be-used MEM_ROOT to get_new_handler().
      (Simpler, faster code.)
    Bugs fixed:
    - Column marking was not done correctly in a lot of cases
      (ALTER TABLE, when using triggers, auto_increment fields etc.).
      (Could potentially result in wrong values inserted in table
      handlers relying on the old column maps or field->set_query_id
      being correct.) Especially when it comes to triggers, there may be
      cases where the old code would cause lost/wrong values for NDB
      and/or InnoDB tables.
    - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two
      flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
      This allowed me to remove some wrong warnings about:
      "Some non-transactional changed tables couldn't be rolled back".
    - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that
      wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which
      caused us to lose some warnings about
      "Some non-transactional changed tables couldn't be rolled back".
    - Fixed use of uninitialized memory in
      ha_ndbcluster.cc::delete_table() which could cause delete_table to
      report random failures.
    - Fixed core dumps for some tests when running with --debug.
    - Added missing FN_LIBCHAR in mysql_rm_tmp_tables().
      (This has probably caused us to not properly remove temporary
      files after a crash.)
    - slow_logs was not properly initialized, which could maybe cause
      extra/lost entries in the slow log.
    - If we get a duplicate row on insert, change the column map to read
      and write all columns while retrying the operation. This is
      required by the definition of REPLACE and also ensures that fields
      that are only part of UPDATE are properly handled. This fixed a
      bug in NDB and REPLACE where REPLACE wrongly copied some column
      values from the replaced row.
    - For table handlers that don't support NULL in keys, we would give
      an error when creating a primary key with NULL fields, even after
      the fields have been automatically converted to NOT NULL.
    - Creating a primary key on a SPATIAL key would fail if the field
      was not declared as NOT NULL.
    Cleanups:
    - Removed unused condition argument to setup_tables().
    - Removed unneeded item function reset_query_id_processor().
    - Field->add_index is removed. Now this is instead maintained in
      (field->flags & FIELD_IN_ADD_INDEX).
    - Field->fieldnr is removed (use field->field_index instead).
    - New argument to filesort() to indicate that it should return a set
      of row pointers (not used columns). This allowed me to remove some
      references to sql_command in filesort() and should also enable us
      to return column results in some cases where we couldn't before.
    - Changed column bitmap handling in opt_range.cc to be aligned with
      the TABLE bitmap, which allowed me to use bitmap functions instead
      of looping over all fields to create some needed bitmaps. (Faster
      and smaller code.)
    - Broke up too-long lines.
    - Moved some variable declarations to the start of functions for
      better code readability.
    - Removed some unused arguments from functions
      (setup_fields(), mysql_prepare_insert_check_table()).
    - setup_fields() now takes an enum instead of an int for marking
      column usage.
    - For internal temporary tables, use handler::write_row(),
      handler::delete_row() and handler::update_row() instead of
      handler::ha_xxxx() for faster execution.
    - Changed some constants to enums and defines.
    - Using separate column read and write sets allows for easier
      checking of whether a timestamp field was set by the statement.
    - Removed calls to free_io_cache() as this is now done automatically
      in ha_reset().
    - Don't build table->normalized_path as this is now identical to
      table->path (after bar's fixes to convert filenames).
    - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it
      easier to do comparison with the 'convert-dbug-for-diff' tool.
    Things left to do in 5.1:
    - We wrongly log failed CREATE TABLE ... SELECT in some cases when
      using row based logging (as shown by testcase
      binlog_row_mix_innodb_myisam.result). Mats has promised to look
      into this.
    - Test that my fix for CREATE TABLE ... SELECT is indeed correct.
      (I added several test cases for this, but in this case it's better
      that someone else also tests this thoroughly.) Lars has promised
      to do this.
  */
  dbug_tmp_restore_column_map(table->read_set, old_map);
  if (unlikely(error))
  {
    m_part_info->err_value= func_value;
    goto exit;
  }
  m_last_part= part_id;
  DBUG_PRINT("info", ("Insert in partition %d", part_id));
  tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
  error= m_file[part_id]->ha_write_row(buf);
  if (have_auto_increment && !table->s->next_number_keypart)
    set_auto_increment_if_higher(table->next_number_field->val_int());
  reenable_binlog(thd);
exit:
  table->timestamp_field_type= orig_timestamp_type;
  DBUG_RETURN(error);
}
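
/*
  Illustrative sketch, not part of the handler: the reservation scheme
  described in the bug#33479 notes above, reduced to its core. The
  struct and function below are hypothetical stand-ins for the real
  HA_DATA_PARTITION in table_share->ha_data and the handler's
  lock_auto_increment()/unlock_auto_increment() helpers; a plain
  pthread mutex is used so the sketch is self-contained. The guard
  macro is never defined, so this code is not compiled.
*/
#ifdef EXAMPLE_SKETCH_ONLY
#include <pthread.h>

struct example_auto_inc_share
{
  pthread_mutex_t mutex;                 /* stand-in for ha_data->mutex */
  unsigned long long next_auto_inc_val;  /* next value to hand out      */
  bool auto_inc_initialized;             /* FALSE until read from parts */
};

/* Reserve 'count' consecutive values; read + update under one lock. */
static unsigned long long
example_reserve_auto_inc(struct example_auto_inc_share *share,
                         unsigned long long count)
{
  unsigned long long first;
  pthread_mutex_lock(&share->mutex);
  first= share->next_auto_inc_val;
  share->next_auto_inc_val+= count;      /* range [first, first + count) */
  pthread_mutex_unlock(&share->mutex);
  return first;
}
#endif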


/*
  Update an existing row

  SYNOPSIS
    update_row()
    old_data                 Old record in MySQL Row Format
    new_data                 New record in MySQL Row Format

  RETURN VALUE
    >0                       Error code
    0                        Success

  DESCRIPTION
    Yes, update_row() does what you expect, it updates a row. old_data will
    have the previous row record in it, while new_data will have the newest
    data in it.

    Keep in mind that the server can do updates based on ordering if an
    ORDER BY clause was used. Consecutive ordering is not guaranteed.

    Called from sql_select.cc, sql_acl.cc, sql_update.cc, and sql_insert.cc.
    new_data is always record[0]
    old_data is normally record[1] but may be anything
*/

/*
  The following notes come from the WL#3817 changeset: Simplify string /
  memory area types and make things more consistent (first part).
  The following type conversions were done:
  - Changed byte to uchar
  - Changed gptr to uchar*
  - Changed my_string to char *
  - Changed my_size_t to size_t
  - Changed size_s to size_t
  Removed declarations of byte, gptr, my_string, my_size_t and size_s.
  The following function parameter changes were made:
  - All string functions in mysys/strings were changed to use size_t
    instead of uint for string lengths.
  - All read()/write() functions changed to use size_t (including vio).
  - All protocol functions changed to use size_t instead of uint.
  - Functions that used a pointer to a string length were changed to use
    size_t*.
  - Changed malloc(), free() and related functions from using gptr to
    use void * as this requires fewer casts in the code and is more in
    line with how the standard functions work.
  - Added an extra length argument to dirname_part() to return the
    length of the created string.
  - Changed (at least) the following functions to take uchar* as
    argument:
    - db_dump()
    - my_net_write()
    - net_write_command()
    - net_store_data()
    - DBUG_DUMP()
    - decimal2bin() & bin2decimal()
  - Changed my_compress() and my_uncompress() to use size_t. Changed one
    argument to my_uncompress() from a pointer to a value as we only
    return one value (makes the function easier to use).
  - Changed the type of the 'pack_data' argument to packfrm() to avoid
    casts.
  - Changed in readfrm() and writefrm(), ha_discover and
    handler::discover() the type of the argument 'frmdata' to uchar** to
    avoid casts.
  - Changed most Field functions to use uchar* instead of char* (reduced
    a lot of casts).
  - Changed field->val_xxx(xxx, new_ptr) to take const pointers.
  Other changes:
  - Removed a lot of unneeded casts.
  - Added a few new casts required by other changes.
  - Added some casts to my_multi_malloc() arguments for safety (as
    string lengths need to be uint, not size_t).
  - Fixed all calls to hash-get-key functions to use size_t*. (Needed to
    be done explicitly as this conflict was often hidden by casting the
    function to hash_get_key.)
  - Changed some buffers and memory regions to uchar* to avoid casts.
  - Changed some string lengths from uint to size_t.
  - Changed field->ptr to be uchar* instead of char*. This allowed us to
    get rid of a lot of casts.
  - Some changes from true -> TRUE, false -> FALSE, unsigned char ->
    uchar.
  - Include zlib.h in some files as we needed the declaration of
    crc32().
  - Changed MY_FILE_ERROR to be (size_t) -1.
  - Changed many variables that hold the result of my_read() /
    my_write() to be size_t. This was needed to properly detect errors
    (which are returned as (size_t) -1).
  - Removed some very old VMS code.
  - Changed packfrm()/unpackfrm() to not depend on uint size
    (portability fix).
  - Removed Windows specific code to restore the cursor position as this
    causes a slowdown on Windows, and we should not mix read() and
    pread() calls anyway as this is not thread safe. Updated the
    function comment to reflect this. Changed the function that depended
    on the original behavior of my_pwrite() to itself restore the cursor
    position (one such case).
  - Added some missing checking of the return value of malloc().
  - Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid
    'long' overflow.
  - Changed the type of table_def::m_size from my_size_t to ulong to
    reflect that m_size is the number of elements in the array, not a
    string/memory length.
  - Moved THD::max_row_length() to table.cc (as it does not depend on
    THD). Inlined max_row_length_blob() into this function.
  - More function comments.
  - Fixed some compiler warnings when compiled without partitions.
  - Removed setting of LEX_STRING() arguments in declarations
    (portability fix).
  - Some trivial indentation/variable name changes.
  - Some trivial code simplifications:
    - Replaced some calls to alloc_root + memcpy with
      strmake_root()/strdup_root().
    - Changed some calls from memdup() to strmake() (safety fix).
    - Simpler loops in client-simple.c
*/

int ha_partition::update_row(const uchar *old_data, uchar *new_data)
{
  THD *thd= ha_thd();
  uint32 new_part_id, old_part_id;
  int error= 0;
  longlong func_value;
  timestamp_auto_set_type orig_timestamp_type= table->timestamp_field_type;
  DBUG_ENTER("ha_partition::update_row");

  /*
    We need to set the timestamp field once before we calculate
    the partition. Then we disable timestamp calculations
    inside the m_file[*]->update_row() methods.
  */
  if (orig_timestamp_type & TIMESTAMP_AUTO_SET_ON_UPDATE)
    table->timestamp_field->set_time();
  table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;

  if ((error= get_parts_for_update(old_data, new_data, table->record[0],
                                   m_part_info, &old_part_id, &new_part_id,
                                   &func_value)))
  {
    m_part_info->err_value= func_value;
    goto exit;
  }

  m_last_part= new_part_id;
  if (new_part_id == old_part_id)
  {
    DBUG_PRINT("info", ("Update in partition %d", new_part_id));
    tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
    error= m_file[new_part_id]->ha_update_row(old_data, new_data);
    reenable_binlog(thd);
    goto exit;
  }
  else
  {
    DBUG_PRINT("info", ("Update from partition %d to partition %d",
                        old_part_id, new_part_id));
    tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
    error= m_file[new_part_id]->ha_write_row(new_data);
    reenable_binlog(thd);
    if (error)
      goto exit;

    tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
    error= m_file[old_part_id]->ha_delete_row(old_data);
    reenable_binlog(thd);
    if (error)
    {
#ifdef IN_THE_FUTURE
      (void) m_file[new_part_id]->delete_last_inserted_row(new_data);
#endif
      goto exit;
    }
  }

exit:
  /*
    If updating an auto_increment column, update
    table_share->ha_data->next_auto_inc_val if needed.
    (Not to be used if auto_increment is on a secondary field in a
    multi-column index.)
    mysql_update does not set table->next_number_field, so we use
    table->found_next_number_field instead.
  */
  if (table->found_next_number_field && new_data == table->record[0] &&
      !table->s->next_number_keypart)
  {
    HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
    if (!ha_data->auto_inc_initialized)
      info(HA_STATUS_AUTO);
    set_auto_increment_if_higher(table->found_next_number_field->val_int());
  }
  table->timestamp_field_type= orig_timestamp_type;
  DBUG_RETURN(error);
}
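
/*
  Worked example (illustration only, hypothetical table): with
    PARTITION BY RANGE (a) (PARTITION p0 VALUES LESS THAN (10),
                            PARTITION p1 VALUES LESS THAN (20))
  the statement UPDATE t SET a= 15 WHERE a= 5 changes the partition of
  the row, so update_row() above takes the else branch: ha_write_row()
  into p1 followed by ha_delete_row() from p0, each wrapped in
  tmp_disable_binlog()/reenable_binlog().
*/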


/*
  Remove an existing row

  SYNOPSIS
    delete_row
    buf                      Deleted row in MySQL Row Format

  RETURN VALUE
    >0                       Error Code
    0                        Success

  DESCRIPTION
    This will delete a row. buf will contain a copy of the row to be deleted.
    The server will call this right after the current row has been read
    (from either a previous rnd_xxx() or index_xxx() call).
    If you keep a pointer to the last row or can access a primary key it will
    make doing the deletion quite a bit easier.
    Keep in mind that the server does not guarantee consecutive deletions.
    ORDER BY clauses can be used.

    Called in sql_acl.cc and sql_udf.cc to manage internal table information.
    Called in sql_delete.cc, sql_insert.cc, and sql_select.cc. In sql_select
    it is used for removing duplicates while in insert it is used for REPLACE
    calls.

    buf is either record[0] or record[1]
*/
int ha_partition::delete_row(const uchar *buf)
{
  uint32 part_id;
  int error;
  THD *thd= ha_thd();
  DBUG_ENTER("ha_partition::delete_row");

  if ((error= get_part_for_delete(buf, m_rec0, m_part_info, &part_id)))
  {
    DBUG_RETURN(error);
  }
  m_last_part= part_id;
  tmp_disable_binlog(thd);
  error= m_file[part_id]->ha_delete_row(buf);
  reenable_binlog(thd);
  DBUG_RETURN(error);
}


/*
  Delete all rows in a table

  SYNOPSIS
    delete_all_rows()

  RETURN VALUE
    >0                       Error Code
    0                        Success

  DESCRIPTION
    Used to delete all rows in a table. Both for cases of truncate and
    for cases where the optimizer realizes that all rows will be
    removed as a result of a SQL statement.

    Called from item_sum.cc by Item_func_group_concat::clear() and
    Item_sum_count_distinct::clear().
    Called from sql_delete.cc by mysql_delete().
    Called from sql_select.cc by JOIN::reinit().
    Called from sql_union.cc by st_select_lex_unit::exec().
*/

int ha_partition::delete_all_rows()
{
  int error;
  handler **file;
  THD *thd= ha_thd();
  DBUG_ENTER("ha_partition::delete_all_rows");

  if (thd->lex->sql_command == SQLCOM_TRUNCATE)
  {
    HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
    lock_auto_increment();
    ha_data->next_auto_inc_val= 0;
    ha_data->auto_inc_initialized= FALSE;
    unlock_auto_increment();
  }
  file= m_file;
  do
  {
    if ((error= (*file)->ha_delete_all_rows()))
      DBUG_RETURN(error);
  } while (*(++file));
  DBUG_RETURN(0);
}
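
/*
  Note on the TRUNCATE branch above (illustration): after TRUNCATE, the
  shared next_auto_inc_val is reset to 0 and auto_inc_initialized to
  FALSE under the auto_increment lock, so the next insert re-reads the
  maximum from all partitions via info(HA_STATUS_AUTO) (see write_row
  above) before handing out values again.
*/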


/*
  Start a large batch of insert rows

  SYNOPSIS
    start_bulk_insert()
    rows                     Number of rows to insert

  RETURN VALUE
    NONE

  DESCRIPTION
    rows == 0 means we will probably insert many rows
*/

void ha_partition::start_bulk_insert(ha_rows rows)
{
  handler **file;
  DBUG_ENTER("ha_partition::start_bulk_insert");

  rows= rows ? rows/m_tot_parts + 1 : 0;
  file= m_file;
  do
  {
    (*file)->ha_start_bulk_insert(rows);
  } while (*(++file));
  DBUG_VOID_RETURN;
}
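
/*
  Example of the rows heuristic above: a hint of rows == 1000 on a
  table with 4 partitions becomes 1000/4 + 1 = 251 rows per partition;
  the integer division plus one keeps the per-partition estimate from
  rounding down to zero. rows == 0 stays 0, i.e. the "unknown, probably
  many rows" hint is passed down unchanged.
*/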


/*
  Finish a large batch of insert rows

  SYNOPSIS
    end_bulk_insert()

  RETURN VALUE
    >0                       Error code
    0                        Success
*/

int ha_partition::end_bulk_insert()
{
  int error= 0;
  handler **file;
  DBUG_ENTER("ha_partition::end_bulk_insert");

  file= m_file;
  do
  {
    int tmp;
    if ((tmp= (*file)->ha_end_bulk_insert()))
      error= tmp;
  } while (*(++file));
  DBUG_RETURN(error);
}


/****************************************************************************
                MODULE full table scan
****************************************************************************/
/*
  Initialize engine for random reads

  SYNOPSIS
    ha_partition::rnd_init()
    scan                     0  Initialize for random reads through rnd_pos()
                             1  Initialize for random scan through rnd_next()

  RETURN VALUE
    >0                       Error code
    0                        Success

  DESCRIPTION
    rnd_init() is called when the server wants the storage engine to do a
    table scan or when the server wants to access data through rnd_pos.

    When scan is used we will scan one handler partition at a time.
    When preparing for rnd_pos we will initialize all handler partitions.
    No extra cache handling is needed when scanning is not performed.

    Before initializing we will call rnd_end to ensure that we clean up from
    any previous incarnation of a table scan.
    Called from filesort.cc, records.cc, sql_handler.cc, sql_select.cc,
    sql_table.cc, and sql_update.cc.
*/

int ha_partition::rnd_init(bool scan)
{
  int error;
  uint i= 0;
  uint32 part_id;
  DBUG_ENTER("ha_partition::rnd_init");

  /*
    For operations that may need to change data, we may need to extend
    read_set.
  */
  if (m_lock_type == F_WRLCK)
  {
    /*
      If write_set contains any of the fields used in partition and
      subpartition expression, we need to set all bits in read_set because
      the row may need to be inserted in a different [sub]partition. In
      other words update_row() can be converted into write_row(), which
      requires a complete record.
    */
    if (bitmap_is_overlapping(&m_part_info->full_part_field_set,
                              table->write_set))
      bitmap_set_all(table->read_set);
    else
    {
      /*
        Some handlers only read fields as specified by the bitmap for the
        read set. For partitioned handlers we always require that the
        fields of the partition functions are read such that we can
        calculate the partition id to place updated and deleted records.
      */
      bitmap_union(table->read_set, &m_part_info->full_part_field_set);
    }
  }

  /* Now we see what the index of our first important partition is */
  DBUG_PRINT("info", ("m_part_info->used_partitions: 0x%lx",
                      (long) m_part_info->used_partitions.bitmap));
  part_id= bitmap_get_first_set(&(m_part_info->used_partitions));
  DBUG_PRINT("info", ("m_part_spec.start_part %d", part_id));

  if (MY_BIT_NONE == part_id)
  {
    error= 0;
    goto err1;
  }

  /*
    We have a partition and we are scanning with rnd_next
    so we bump our cache
  */
  DBUG_PRINT("info", ("rnd_init on partition %d", part_id));
  if (scan)
  {
    /*
      rnd_end() is needed for partitioning to reset internal data if scan
      is already in use
    */
    rnd_end();
    late_extra_cache(part_id);
    if ((error= m_file[part_id]->ha_rnd_init(scan)))
      goto err;
  }
  else
  {
    for (i= part_id; i < m_tot_parts; i++)
    {
      if (bitmap_is_set(&(m_part_info->used_partitions), i))
      {
        if ((error= m_file[i]->ha_rnd_init(scan)))
          goto err;
      }
    }
  }
  m_scan_value= scan;
  m_part_spec.start_part= part_id;
  m_part_spec.end_part= m_tot_parts - 1;
  DBUG_PRINT("info", ("m_scan_value=%d", m_scan_value));
  DBUG_RETURN(0);

err:
  /* The casts to int make the loop terminate when i wraps below zero. */
  while ((int)--i >= (int)part_id)
  {
    if (bitmap_is_set(&(m_part_info->used_partitions), i))
      m_file[i]->ha_rnd_end();
  }
err1:
  m_scan_value= 2;
  m_part_spec.start_part= NO_CURRENT_PART_ID;
  DBUG_RETURN(error);
}


/*
  End of a table scan

  SYNOPSIS
    rnd_end()

  RETURN VALUE
    >0                       Error code
    0                        Success
*/

int ha_partition::rnd_end()
{
  handler **file;
  DBUG_ENTER("ha_partition::rnd_end");
  switch (m_scan_value) {
  case 2:                                       // Error
    break;
  case 1:
    if (NO_CURRENT_PART_ID != m_part_spec.start_part)  // Table scan
    {
      late_extra_no_cache(m_part_spec.start_part);
      m_file[m_part_spec.start_part]->ha_rnd_end();
    }
    break;
  case 0:
    file= m_file;
    do
    {
      if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
        (*file)->ha_rnd_end();
    } while (*(++file));
    break;
  }
  m_scan_value= 2;
  m_part_spec.start_part= NO_CURRENT_PART_ID;
  DBUG_RETURN(0);
}
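
/*
  m_scan_value doubles as the scanner's state, as used above:
    1  rnd_init(TRUE) was called; one partition at a time is open and
       m_part_spec.start_part tracks which one.
    0  rnd_init(FALSE) was called; every used partition is open for
       rnd_pos() and all of them must be ended here.
    2  not initialized, or a previous init/scan failed; nothing to do.
*/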


/*
  Read next row during full table scan (scan in random row order)

  SYNOPSIS
    rnd_next()
    buf                      Buffer that should be filled with data

  RETURN VALUE
    >0                       Error code
    0                        Success

  DESCRIPTION
    This is called for each row of the table scan. When you run out of
    records you should return HA_ERR_END_OF_FILE.
    The Field structure for the table is the key to getting data into buf
    in a manner that will allow the server to understand it.

    Called from filesort.cc, records.cc, sql_handler.cc, sql_select.cc,
    sql_table.cc, and sql_update.cc.
*/
int ha_partition::rnd_next(uchar *buf)
{
  handler *file;
  int result= HA_ERR_END_OF_FILE;
  uint part_id= m_part_spec.start_part;
  DBUG_ENTER("ha_partition::rnd_next");

  if (NO_CURRENT_PART_ID == part_id)
  {
    /*
      The original set of partitions to scan was empty and thus we report
      the result here.
    */
    goto end;
  }

  DBUG_ASSERT(m_scan_value == 1);
  file= m_file[part_id];

  while (TRUE)
  {
    result= file->rnd_next(buf);
    if (!result)
    {
      m_last_part= part_id;
      m_part_spec.start_part= part_id;
      table->status= 0;
      DBUG_RETURN(0);
    }

    /*
      If we get here, then the current partition rnd_next returned failure
    */
    if (result == HA_ERR_RECORD_DELETED)
      continue;                                 // Probably MyISAM

    if (result != HA_ERR_END_OF_FILE)
      goto end_dont_reset_start_part;           // Return error

    /* End current partition */
    late_extra_no_cache(part_id);
    DBUG_PRINT("info", ("rnd_end on partition %d", part_id));
    if ((result= file->ha_rnd_end()))
      break;

    /* Shift to next partition */
    while (++part_id < m_tot_parts &&
           !bitmap_is_set(&(m_part_info->used_partitions), part_id))
      ;
    if (part_id >= m_tot_parts)
    {
      result= HA_ERR_END_OF_FILE;
      break;
    }
    m_last_part= part_id;
    m_part_spec.start_part= part_id;
    file= m_file[part_id];
    DBUG_PRINT("info", ("rnd_init on partition %d", part_id));
    if ((result= file->ha_rnd_init(1)))
      break;
    late_extra_cache(part_id);
  }

end:
  m_part_spec.start_part= NO_CURRENT_PART_ID;
end_dont_reset_start_part:
  table->status= STATUS_NOT_FOUND;
  DBUG_RETURN(result);
}
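
/*
  Example of the partition hand-over above (illustration): with
  partitions p0..p3 and used_partitions containing only p1 and p3,
  rnd_next() drains p1, calls ha_rnd_end() on it, skips p2 via the
  bitmap loop, initializes and drains p3, and finally returns
  HA_ERR_END_OF_FILE when part_id reaches m_tot_parts.
*/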


/*
  Save position of current row

  SYNOPSIS
    position()
    record                   Current record in MySQL Row Format

  RETURN VALUE
    NONE

  DESCRIPTION
    position() is called after each call to rnd_next() if the data needs
    to be ordered. You can do something like the following to store
    the position:
      ha_store_ptr(ref, ref_length, current_position);

    The server uses ref to store data. ref_length in the above case is
    the size needed to store current_position. ref is just a byte array
    that the server will maintain. If you are using offsets to mark rows,
    then current_position should be the offset. If it is a primary key
    like in BDB, then it needs to be a primary key.

    Called from filesort.cc, sql_select.cc, sql_delete.cc and sql_update.cc.
*/
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions was done:
- Changed byte to uchar
- Changed gptr to uchar*
- Change my_string to char *
- Change my_size_t to size_t
- Change size_s to size_t
Removed declaration of byte, gptr, my_string, my_size_t and size_s.
Following function parameter changes was done:
- All string functions in mysys/strings was changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocoll functions changed to use size_t instead of uint
- Functions that used a pointer to a string length was changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrom(), ha_discover and handler::discover()
the type for argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some cast to my_multi_malloc() arguments for safety (as string lengths
needs to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitely as this conflict was often hided by casting the function to
hash_get_key).
- Changed some buffers to memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not be depending on uint size
(portability fix)
- Removed windows specific code to restore cursor position as this
causes slowdown on windows and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated function comment to
reflect this. Changed function that depended on original behavior of
my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00

void ha_partition::position(const uchar *record)
{
  handler *file= m_file[m_last_part];
  DBUG_ENTER("ha_partition::position");

  file->position(record);
  int2store(ref, m_last_part);
  memcpy((ref + PARTITION_BYTES_IN_POS), file->ref,
         (ref_length - PARTITION_BYTES_IN_POS));

#ifdef SUPPORTING_PARTITION_OVER_DIFFERENT_ENGINES
#ifdef HAVE_purify
  bzero(ref + PARTITION_BYTES_IN_POS + ref_length,
        max_ref_length - ref_length);
#endif /* HAVE_purify */
#endif
  DBUG_VOID_RETURN;
}
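
/*
  Illustration of the ref layout written above (drawn from the code in
  position(); not a separate specification):

      ref:  [ 2 bytes: partition id ][ ref of the underlying handler ]
              int2store/uint2korr     starts at PARTITION_BYTES_IN_POS

  rnd_pos() below decodes it the same way: uint2korr(pos) selects the
  partition, and pos + PARTITION_BYTES_IN_POS is handed on unchanged.
*/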


/*
  Notify the handler that the column bitmaps have changed. We make sure
  the partition-function fields stay in the read set, since they are
  needed to calculate the partition id of each row.
*/

void ha_partition::column_bitmaps_signal()
{
  handler::column_bitmaps_signal();
  bitmap_union(table->read_set, &m_part_info->full_part_field_set);
}


/*
  Read row using position

  SYNOPSIS
    rnd_pos()
    out:buf                     Row read in MySQL Row Format
    position                    Position of read row

  RETURN VALUE
    >0                          Error code
    0                           Success

  DESCRIPTION
    This is like rnd_next, but you are given a position to use to determine
    the row. The position will be of the type that you stored in ref. You
    can use ha_get_ptr(pos, ref_length) to retrieve whatever key or
    position you saved when position() was called.

    Called from filesort.cc, records.cc, sql_insert.cc, sql_select.cc and
    sql_update.cc.
*/
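
/*
  A minimal caller-side sketch (illustration only, simplified from how
  filesort-style callers use this pair; 'h' and 'saved_ref' are invented
  names):

    h->position(table->record[0]);              // remember current row
    memcpy(saved_ref, h->ref, h->ref_length);   // stash the position
    ...                                         // scan moves elsewhere
    h->rnd_pos(table->record[0], saved_ref);    // jump back to the row
*/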

int ha_partition::rnd_pos(uchar * buf, uchar *pos)
{
  uint part_id;
  handler *file;
  DBUG_ENTER("ha_partition::rnd_pos");

  part_id= uint2korr((const uchar *) pos);
  DBUG_ASSERT(part_id < m_tot_parts);
  file= m_file[part_id];
  m_last_part= part_id;
  DBUG_RETURN(file->rnd_pos(buf, (pos + PARTITION_BYTES_IN_POS)));
}


/*
  Read row using position, using a given record to find the position

  SYNOPSIS
    rnd_pos_by_record()
    record             Current record in MySQL Row Format

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    This works like the position()+rnd_pos() functions, but does some
    extra work: it calculates m_last_part, the partition to which the
    'record' belongs.

    Called from replication (log_event.cc).
*/

int ha_partition::rnd_pos_by_record(uchar *record)
{
  DBUG_ENTER("ha_partition::rnd_pos_by_record");

  if (unlikely(get_part_for_delete(record, m_rec0, m_part_info, &m_last_part)))
    DBUG_RETURN(1);

  DBUG_RETURN(handler::rnd_pos_by_record(record));
}


/****************************************************************************
                MODULE index scan
****************************************************************************/
/*
  Positions an index cursor to the index specified in the handle. Fetches
  the row if available. If the key value is null, begin at the first key
  of the index.

  There are loads of optimisations possible here for the partition handler.
  The same optimisations can also be checked for full table scan although
  only through conditions and not from index ranges.
  Phase one optimisations:
    Check if the fields of the partition function are bound. If so, only
    use the single partition they become bound to.
  Phase two optimisations:
    If it can be deduced through range or list partitioning that only a
    subset of the partitions are used, then only use those partitions.
*/
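
/*
  A minimal sketch of the pruning idea behind the two phases above
  (illustration only; prune_to_single_partition() and prune_by_range()
  are invented names, the real work is done by get_partition_set() and
  the partitioning code):

    bitmap_set_all(&m_part_info->used_partitions);        // start: scan all
    uint32 part_id;
    if (prune_to_single_partition(key, &part_id))         // phase one
    {
      bitmap_clear_all(&m_part_info->used_partitions);
      bitmap_set_bit(&m_part_info->used_partitions, part_id);
    }
    else
      prune_by_range(key_range, &m_part_info->used_partitions); // phase two
*/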


/*
  Initialize handler before start of index scan

  SYNOPSIS
    index_init()
    inx                Index number
    sorted             Whether rows are to be returned in sorted order

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    index_init is always called before starting index scans (except when
    starting through index_read_idx and using read_range variants).
*/

int ha_partition::index_init(uint inx, bool sorted)
{
  int error= 0;
  handler **file;
  DBUG_ENTER("ha_partition::index_init");

  DBUG_PRINT("info", ("inx %u sorted %u", inx, sorted));
  active_index= inx;
  m_part_spec.start_part= NO_CURRENT_PART_ID;
  m_start_key.length= 0;
  m_ordered= sorted;
  m_curr_key_info[0]= table->key_info+inx;
  if (m_pkey_is_clustered && table->s->primary_key != MAX_KEY)
  {
    /*
      If the primary key is clustered, then the key comparison must use
      the primary key to differentiate between equal keys in the given
      index.
    */
    DBUG_PRINT("info", ("Clustered pk, using pk as secondary cmp"));
    m_curr_key_info[1]= table->key_info+table->s->primary_key;
    m_curr_key_info[2]= NULL;
  }
  else
    m_curr_key_info[1]= NULL;
  /*
    Some handlers only read fields as specified by the bitmap for the
    read set. For partitioned handlers we always require that the
    fields of the partition functions are read such that we can
    calculate the partition id to place updated and deleted records.
    This is required only for operations that may change data.
  */
  if (m_lock_type == F_WRLCK)
    bitmap_union(table->read_set, &m_part_info->full_part_field_set);
  else if (sorted)
  {
    /*
      An ordered scan is requested. We must make sure all fields of the
      used index are in the read set, as partitioning requires them for
      sorting (see ha_partition::handle_ordered_index_scan).

      The SQL layer may request an ordered index scan without having the
      index fields in the read set when
        - it needs to do an ordered scan over an index prefix.
        - it evaluates ORDER BY with SELECT COUNT(*) FROM t1.

      TODO: handle COUNT(*) queries via unordered scan.
    */
    uint i;
    KEY **key_info= m_curr_key_info;
    do
    {
      for (i= 0; i < (*key_info)->key_parts; i++)
        bitmap_set_bit(table->read_set,
                       (*key_info)->key_part[i].field->field_index);
    } while (*(++key_info));
  }
  file= m_file;
  do
  {
    /* TODO RONM: Change to index_init() when code is stable */
    if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
      if ((error= (*file)->ha_index_init(inx, sorted)))
      {
        DBUG_ASSERT(0);                       // Should never happen
        break;
      }
  } while (*(++file));
  DBUG_RETURN(error);
}


/*
  End of index scan

  SYNOPSIS
    index_end()

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    index_end is called at the end of an index scan to clean up any
    resources the scan used.
*/

int ha_partition::index_end()
{
  int error= 0;
  handler **file;
  DBUG_ENTER("ha_partition::index_end");

  active_index= MAX_KEY;
  m_part_spec.start_part= NO_CURRENT_PART_ID;
  file= m_file;
  do
  {
    int tmp;
    /* TODO RONM: Change to index_end() when code is stable */
    if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
      if ((tmp= (*file)->ha_index_end()))
        error= tmp;
  } while (*(++file));
  DBUG_RETURN(error);
}


/*
  Read one record in an index scan and start an index scan

  SYNOPSIS
    index_read_map()
    buf                Read row in MySQL Row Format
    key                Key parts in consecutive order
    keypart_map        Which part of key is used
    find_flag          What type of key condition is used

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    index_read_map starts a new index scan using a start key. The MySQL
    Server will check the end key on its own. Thus to function properly
    the partitioned handler needs to ensure that it delivers records in
    the sort order of the MySQL Server.
    index_read_map can be restarted without calling index_end on the
    previous index scan and without calling index_init. In this case the
    index_read_map is on the same index as the previous index scan. This
    is used in particular in conjunction with multi-read ranges.
*/
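
/*
  A minimal call-sequence sketch of the restart behaviour described above
  (illustration only; 'h', 'key1', 'key2' and 'map' are invented names,
  and real callers go through the SQL-layer wrappers):

    h->ha_index_init(inx, TRUE);
    h->index_read_map(buf, key1, map, HA_READ_KEY_EXACT);  // first lookup
    // ... fetch rows with index_next_same() ...
    h->index_read_map(buf, key2, map, HA_READ_KEY_EXACT);  // restarted scan
    h->ha_index_end();                                     // on same index
*/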

int ha_partition::index_read_map(uchar *buf, const uchar *key,
                                 key_part_map keypart_map,
                                 enum ha_rkey_function find_flag)
{
  DBUG_ENTER("ha_partition::index_read_map");
  end_range= 0;
  m_index_scan_type= partition_index_read;
  m_start_key.key= key;
  m_start_key.keypart_map= keypart_map;
  m_start_key.flag= find_flag;
  DBUG_RETURN(common_index_read(buf, TRUE));
}


/*
  Common routine for a number of index_read variants

  SYNOPSIS
    ha_partition::common_index_read()
      buf             Buffer where the record should be returned
      have_start_key  TRUE <=> the left endpoint is available, i.e.
                      we're in an index_read call, or in a
                      read_range_first call and the range has a left
                      endpoint.
                      FALSE <=> there is no left endpoint (we're in a
                      read_range_first() call and the range has no left
                      endpoint).

  DESCRIPTION
    Start scanning the range (when invoked from read_range_first()) or do
    an index lookup (when invoked from index_read_XXX):
     - If possible, perform partition selection
     - Find the set of partitions we're going to use
     - Depending on whether we need ordering:
        NO:  Get the first record from the first used partition (see
             handle_unordered_scan_next_partition)
        YES: Fill the priority queue and get the record that is first in
             the ordering

  RETURN
    0      OK
    other  HA_ERR_END_OF_FILE or other error code.
*/
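
/*
  A minimal sketch of the ordered case described above (illustration
  only, simplified from handle_ordered_index_scan/handle_ordered_next;
  element_for() is an invented name, m_queue is the priority queue of
  partition cursors ordered by key value):

    // Fill: position every used partition and queue its current row.
    for each used partition i:
      m_file[i]->index_read_map(...);   // or index_first/index_last
      queue_insert(&m_queue, element_for(i));

    // The queue top holds the globally smallest key: return that row.
    // On the next call, advance only that partition and reorder:
    error= m_file[top_part]->index_next(top_rec);
    if (error == HA_ERR_END_OF_FILE)
      queue_remove(&m_queue, 0);        // partition exhausted
    else
      queue_replaced(&m_queue);         // re-sort the changed top
*/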

int ha_partition::common_index_read(uchar *buf, bool have_start_key)
{
  int error;
  uint key_len;
  bool reverse_order= FALSE;
  DBUG_ENTER("ha_partition::common_index_read");
  LINT_INIT(key_len); /* used if have_start_key==TRUE */

  DBUG_PRINT("info", ("m_ordered %u m_ordered_scan_ong %u have_start_key %u",
                      m_ordered, m_ordered_scan_ongoing, have_start_key));

  if (have_start_key)
  {
    m_start_key.length= key_len= calculate_key_len(table, active_index,
                                                   m_start_key.key,
                                                   m_start_key.keypart_map);
    DBUG_ASSERT(key_len);
  }
  if ((error= partition_scan_set_up(buf, have_start_key)))
  {
    DBUG_RETURN(error);
  }

  if (have_start_key &&
      (m_start_key.flag == HA_READ_PREFIX_LAST ||
       m_start_key.flag == HA_READ_PREFIX_LAST_OR_PREV ||
       m_start_key.flag == HA_READ_BEFORE_KEY))
  {
    reverse_order= TRUE;
    m_ordered_scan_ongoing= TRUE;
  }
  DBUG_PRINT("info", ("m_ordered %u m_o_scan_ong %u have_start_key %u",
                      m_ordered, m_ordered_scan_ongoing, have_start_key));
  if (!m_ordered_scan_ongoing ||
      (have_start_key && m_start_key.flag == HA_READ_KEY_EXACT &&
       !m_pkey_is_clustered &&
       key_len >= m_curr_key_info[0]->key_length))
  {
    /*
      We use an unordered index scan either when read_range is used and
      the flag is set to not use ordered scans, or when an exact key is
      used; in the latter case all records compare as equal on the key
      and thus the sort order of the resulting records doesn't matter.
      We also use an unordered index scan when the number of partitions
      to scan is only one.
      The unordered index scan will use the partition set created.
      We need to set unordered scan ongoing since we can come here even
      when it isn't set.
    */
    DBUG_PRINT("info", ("doing unordered scan"));
    m_ordered_scan_ongoing= FALSE;
    error= handle_unordered_scan_next_partition(buf);
  }
  else
  {
    /*
      In all other cases we will use the ordered index scan. This will use
      the partition set created by the get_partition_set method.
    */
    error= handle_ordered_index_scan(buf, reverse_order);
  }
  DBUG_RETURN(error);
}


/*
  Start an index scan from leftmost record and return first record

  SYNOPSIS
    index_first()
    buf                Read row in MySQL Row Format

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    index_first() asks for the first key in the index.
    This is similar to index_read except that there is no start key, since
    the scan starts from the leftmost entry and proceeds forward with
    index_next.

    Called from opt_range.cc, opt_sum.cc, sql_handler.cc,
    and sql_select.cc.
*/

int ha_partition::index_first(uchar * buf)
{
  DBUG_ENTER("ha_partition::index_first");

  end_range= 0;
  m_index_scan_type= partition_index_first;
  DBUG_RETURN(common_first_last(buf));
}


/*
  Start an index scan from rightmost record and return first record

  SYNOPSIS
    index_last()
    buf                Read row in MySQL Row Format

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    index_last() asks for the last key in the index.
    This is similar to index_read except that there is no start key, since
    the scan starts from the rightmost entry and proceeds backward with
    index_prev.

    Called from opt_range.cc, opt_sum.cc, sql_handler.cc,
    and sql_select.cc.
*/

int ha_partition::index_last(uchar * buf)
{
  DBUG_ENTER("ha_partition::index_last");

  m_index_scan_type= partition_index_last;
  DBUG_RETURN(common_first_last(buf));
}


/*
  Common routine for index_first/index_last

  SYNOPSIS
    ha_partition::common_first_last()

  See index_first for the rest.
*/

int ha_partition::common_first_last(uchar *buf)
{
  int error;

  if ((error= partition_scan_set_up(buf, FALSE)))
    return error;
  if (!m_ordered_scan_ongoing &&
      m_index_scan_type != partition_index_last)
    return handle_unordered_scan_next_partition(buf);
  return handle_ordered_index_scan(buf, FALSE);
}


/*
  Read last using key

  SYNOPSIS
    index_read_last_map()
    buf                Read row in MySQL Row Format
    key                Key
    keypart_map        Which part of key is used

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    This is used in join_read_last_key to optimise away an ORDER BY.
    Can only be used on indexes supporting HA_READ_ORDER.
*/

int ha_partition::index_read_last_map(uchar *buf, const uchar *key,
                                      key_part_map keypart_map)
{
  DBUG_ENTER("ha_partition::index_read_last_map");

  m_ordered= TRUE;                            // Safety measure
  end_range= 0;
  m_index_scan_type= partition_index_read_last;
  m_start_key.key= key;
  m_start_key.keypart_map= keypart_map;
  m_start_key.flag= HA_READ_PREFIX_LAST;
  DBUG_RETURN(common_index_read(buf, TRUE));
}


/*
  Read next record in a forward index scan

  SYNOPSIS
    index_next()
    buf                Read row in MySQL Row Format

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    Used to read forward through the index.
*/

int ha_partition::index_next(uchar * buf)
{
  DBUG_ENTER("ha_partition::index_next");

  /*
    TODO(low priority):
    If we want the partition handler to work with the HANDLER commands,
    we must be able to do index_last() -> index_prev() -> index_next()
  */
  DBUG_ASSERT(m_index_scan_type != partition_index_last);
  if (!m_ordered_scan_ongoing)
  {
    DBUG_RETURN(handle_unordered_next(buf, FALSE));
  }
  DBUG_RETURN(handle_ordered_next(buf, FALSE));
}


/*
  Read next record special

  SYNOPSIS
    index_next_same()
    buf                Read row in MySQL Row Format
    key                Key
    keylen             Length of key

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    This routine is used to read the next record, but only if the key is
    the same as the one supplied in the call.
*/

int ha_partition::index_next_same(uchar *buf, const uchar *key, uint keylen)
{
  DBUG_ENTER("ha_partition::index_next_same");

  DBUG_ASSERT(keylen == m_start_key.length);
  DBUG_ASSERT(m_index_scan_type != partition_index_last);
  if (!m_ordered_scan_ongoing)
    DBUG_RETURN(handle_unordered_next(buf, TRUE));
  DBUG_RETURN(handle_ordered_next(buf, TRUE));
}


/*
  Read the next record when performing an index scan backwards

  SYNOPSIS
    index_prev()
    buf                Read row in MySQL Row Format

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    Used to read backwards through the index.
*/

int ha_partition::index_prev(uchar * buf)
{
  DBUG_ENTER("ha_partition::index_prev");

  /* TODO: read comment in index_next */
  DBUG_ASSERT(m_index_scan_type != partition_index_first);
  DBUG_RETURN(handle_ordered_prev(buf));
}


/*
  Start a read of one range with start and end key

  SYNOPSIS
    read_range_first()
    start_key          Specification of start key
    end_key            Specification of end key
    eq_range_arg       Whether it is an equal range
    sorted             Whether records should be returned in sorted order

  RETURN VALUE
    >0                 Error code
    0                  Success

  DESCRIPTION
    We reimplement read_range_first since we don't want the compare_key
    check at the end. This is already performed in the partition handler.
    read_range_next is very different, since we need to scan all
    underlying handlers.
*/
|
|
|
|
|
|
|
|
int ha_partition::read_range_first(const key_range *start_key,
                                   const key_range *end_key,
                                   bool eq_range_arg, bool sorted)
{
  int error;
  DBUG_ENTER("ha_partition::read_range_first");

  m_ordered= sorted;
  eq_range= eq_range_arg;
  end_range= 0;
  if (end_key)
  {
    end_range= &save_end_range;
    save_end_range= *end_key;
    key_compare_result_on_equal=
      ((end_key->flag == HA_READ_BEFORE_KEY) ? 1 :
       (end_key->flag == HA_READ_AFTER_KEY) ? -1 : 0);
  }

  range_key_part= m_curr_key_info[0]->key_part;
  if (start_key)
    m_start_key= *start_key;
  else
    m_start_key.key= NULL;

  m_index_scan_type= partition_read_range;
  error= common_index_read(m_rec0, test(start_key));
  DBUG_RETURN(error);
}
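
/*
  Reading aid (not new logic): key_compare_result_on_equal above is the
  value the end-of-range comparison should yield when a row's key equals
  the end key. With HA_READ_BEFORE_KEY an equal key is already past the
  range, so equality counts as 1 (beyond end); with HA_READ_AFTER_KEY an
  equal key is still inside, so equality counts as -1 (before end); for
  inclusive ranges it is 0.
*/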

/*
  Read next record in read of a range with start and end key

  SYNOPSIS
    read_range_next()

  RETURN VALUE
    >0                    Error code
    0                     Success
*/

int ha_partition::read_range_next()
{
  DBUG_ENTER("ha_partition::read_range_next");

  if (m_ordered_scan_ongoing)
  {
    DBUG_RETURN(handle_ordered_next(table->record[0], eq_range));
  }
  DBUG_RETURN(handle_unordered_next(table->record[0], eq_range));
}
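
/*
  Illustration only (not part of the handler): a caller drives a range
  scan through the pair of functions above roughly as follows, assuming
  an already initialized handler 'h' and key_range values 'start'/'end';
  process_row() is a hypothetical consumer used for this sketch:

    if (!h->read_range_first(&start, &end, FALSE, TRUE))   // sorted scan
    {
      do
      {
        process_row(table->record[0]);
      } while (!h->read_range_next());
    }
*/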

/*
  Common routine to set up index scans

  SYNOPSIS
    ha_partition::partition_scan_set_up()
      buf            Buffer to later return record in (this function
                     needs it to calculate partitioning function values)

      idx_read_flag  TRUE <=> m_start_key has a range start endpoint which
                     probably can be used to determine the set of
                     partitions to scan.
                     FALSE <=> there is no start endpoint.

  DESCRIPTION
    Find out which partitions we'll need to read when scanning the
    specified range.

    If we need to scan only one partition, set m_ordered_scan_ongoing=FALSE
    as we will not need to do merge ordering.

  RETURN VALUE
    >0                    Error code
    0                     Success
*/
int ha_partition::partition_scan_set_up(uchar * buf, bool idx_read_flag)
{
  DBUG_ENTER("ha_partition::partition_scan_set_up");

  if (idx_read_flag)
    get_partition_set(table, buf, active_index, &m_start_key, &m_part_spec);
  else
  {
    m_part_spec.start_part= 0;
    m_part_spec.end_part= m_tot_parts - 1;
  }
  if (m_part_spec.start_part > m_part_spec.end_part)
  {
    /*
      We discovered a partition set but the set was empty so we report
      key not found.
    */
    DBUG_PRINT("info", ("scan with no partition to scan"));
    DBUG_RETURN(HA_ERR_END_OF_FILE);
  }
  if (m_part_spec.start_part == m_part_spec.end_part)
  {
    /*
      We discovered a single partition to scan, this never needs to be
      performed using the ordered index scan.
    */
    DBUG_PRINT("info", ("index scan using the single partition %d",
                        m_part_spec.start_part));
    m_ordered_scan_ongoing= FALSE;
  }
  else
  {
    /*
      Set m_ordered_scan_ongoing according to how the scan should be done.
      Only exact partitions are discovered atm by get_partition_set.
      Verify this; also the bitmap must have at least one bit set,
      otherwise the result from this table is the empty set.
    */
    uint start_part= bitmap_get_first_set(&(m_part_info->used_partitions));
    if (start_part == MY_BIT_NONE)
    {
      DBUG_PRINT("info", ("scan with no partition to scan"));
      DBUG_RETURN(HA_ERR_END_OF_FILE);
    }
    if (start_part > m_part_spec.start_part)
      m_part_spec.start_part= start_part;
    DBUG_ASSERT(m_part_spec.start_part < m_tot_parts);
    m_ordered_scan_ongoing= m_ordered;
  }
  DBUG_ASSERT(m_part_spec.start_part < m_tot_parts &&
              m_part_spec.end_part < m_tot_parts);
  DBUG_RETURN(0);
}
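
/*
  Example of the pruning above (illustrative values only): with four
  partitions and a start key that maps to partition 2 only,
  get_partition_set() yields start_part == end_part == 2, so the range is
  scanned on a single underlying handler and m_ordered_scan_ongoing is set
  to FALSE; no merge of several sorted streams is needed.
*/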

/****************************************************************************
                Unordered Index Scan Routines
****************************************************************************/

/*
  Common routine to handle index_next with unordered results

  SYNOPSIS
    handle_unordered_next()
    out:buf                       Read row in MySQL Row Format
    next_same                     Called from index_next_same

  RETURN VALUE
    HA_ERR_END_OF_FILE            End of scan
    0                             Success
    other                         Error code

  DESCRIPTION
    These routines are used to scan partitions without considering order.
    This is performed in two situations.
    1) In read_multi_range this is the normal case.
    2) When performing any type of index_read, index_first or index_last
       where all fields in the partition function are bound. In this case
       the index scan is performed on only one partition and thus it isn't
       necessary to perform any sort.
*/
int ha_partition::handle_unordered_next(uchar *buf, bool is_next_same)
{
  handler *file= m_file[m_part_spec.start_part];
  int error;
  DBUG_ENTER("ha_partition::handle_unordered_next");

  /*
    We should consider if this should be split into three functions, as
    partition_read_range and is_next_same are always local constants.
  */
  if (m_index_scan_type == partition_read_range)
  {
    if (!(error= file->read_range_next()))
    {
      m_last_part= m_part_spec.start_part;
      DBUG_RETURN(0);
    }
  }
  else if (is_next_same)
  {
    if (!(error= file->index_next_same(buf, m_start_key.key,
                                       m_start_key.length)))
    {
      m_last_part= m_part_spec.start_part;
      DBUG_RETURN(0);
    }
  }
  else
  {
    if (!(error= file->index_next(buf)))
    {
      m_last_part= m_part_spec.start_part;
      DBUG_RETURN(0);                           // Row was in range
    }
  }

  if (error == HA_ERR_END_OF_FILE)
  {
    m_part_spec.start_part++;                   // Start using next part
    error= handle_unordered_scan_next_partition(buf);
  }
  DBUG_RETURN(error);
}

/*
  Handle index_next when changing to new partition

  SYNOPSIS
    handle_unordered_scan_next_partition()
    buf                           Read row in MySQL Row Format

  RETURN VALUE
    HA_ERR_END_OF_FILE            End of scan
    0                             Success
    other                         Error code

  DESCRIPTION
    This routine is used to start the index scan on the next partition,
    both at the initial start and after completing the scan on one
    partition.
*/
int ha_partition::handle_unordered_scan_next_partition(uchar * buf)
{
  uint i;
  DBUG_ENTER("ha_partition::handle_unordered_scan_next_partition");

  for (i= m_part_spec.start_part; i <= m_part_spec.end_part; i++)
  {
    int error;
    handler *file;

    if (!(bitmap_is_set(&(m_part_info->used_partitions), i)))
      continue;
    file= m_file[i];
    m_part_spec.start_part= i;
    switch (m_index_scan_type) {
    case partition_read_range:
      DBUG_PRINT("info", ("read_range_first on partition %d", i));
      error= file->read_range_first(m_start_key.key? &m_start_key: NULL,
                                    end_range, eq_range, FALSE);
      break;
    case partition_index_read:
      DBUG_PRINT("info", ("index_read on partition %d", i));
      error= file->index_read_map(buf, m_start_key.key,
                                  m_start_key.keypart_map,
                                  m_start_key.flag);
      break;
    case partition_index_first:
      DBUG_PRINT("info", ("index_first on partition %d", i));
      /*
        MyISAM engine can fail if we call index_first() when indexes are
        disabled, which happens if the table is empty.
        Here we use file->stats.records instead of file->records() because
        file->records() is supposed to return an EXACT count, and it can be
        possibly slow. We don't need an exact number, an approximate one -
        from the last ::info() call - is sufficient.
      */
      if (file->stats.records == 0)
      {
        error= HA_ERR_END_OF_FILE;
        break;
      }
      error= file->index_first(buf);
      break;
    case partition_index_first_unordered:
      /*
        We perform a scan without sorting and this means that we
        should not use the index_first since not all handlers
        support it and it is also unnecessary to restrict sort
        order.
      */
      DBUG_PRINT("info", ("read_range_first on partition %d", i));
      table->record[0]= buf;
      error= file->read_range_first(0, end_range, eq_range, 0);
      table->record[0]= m_rec0;
      break;
    default:
      DBUG_ASSERT(FALSE);
      DBUG_RETURN(1);
    }
    if (!error)
    {
      m_last_part= i;
      DBUG_RETURN(0);
    }
    if ((error != HA_ERR_END_OF_FILE) && (error != HA_ERR_KEY_NOT_FOUND))
      DBUG_RETURN(error);
    DBUG_PRINT("info", ("HA_ERR_END_OF_FILE on partition %d", i));
  }
  m_part_spec.start_part= NO_CURRENT_PART_ID;
  DBUG_RETURN(HA_ERR_END_OF_FILE);
}
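
/*
  Note (reading aid): the loop above starts at m_part_spec.start_part so
  that handle_unordered_next() can re-enter here after HA_ERR_END_OF_FILE
  with start_part already advanced; partitions pruned away in
  used_partitions are skipped without touching their handlers.
*/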
/*
  Common routine to start index scan with ordered results

  SYNOPSIS
    handle_ordered_index_scan()
    out:buf                       Read row in MySQL Row Format

  RETURN VALUE
    HA_ERR_END_OF_FILE            End of scan
    0                             Success
    other                         Error code

  DESCRIPTION
    This part contains the logic to handle index scans that require ordered
    output. This includes all scans except those started by
    read_range_first with the flag ordered set to FALSE: thus most direct
    calls to index_read and all calls to index_first and index_last.

    We implement ordering by keeping one record plus a key buffer for each
    partition. Every time a new entry is requested we will fetch a new
    entry from the partition that is currently not filled with an entry.
    Then the entry is put into its proper sort position.

    Returning a record is done by getting the top record, copying the
    record to the request buffer and marking that partition's entry as
    empty.
*/
int ha_partition::handle_ordered_index_scan(uchar *buf, bool reverse_order)
{
  uint i;
  uint j= 0;
  bool found= FALSE;
  DBUG_ENTER("ha_partition::handle_ordered_index_scan");

  m_top_entry= NO_CURRENT_PART_ID;
  queue_remove_all(&m_queue);

  DBUG_PRINT("info", ("m_part_spec.start_part %d", m_part_spec.start_part));
  for (i= m_part_spec.start_part; i <= m_part_spec.end_part; i++)
  {
    if (!(bitmap_is_set(&(m_part_info->used_partitions), i)))
      continue;
    uchar *rec_buf_ptr= rec_buf(i);
    int error;
    handler *file= m_file[i];

    switch (m_index_scan_type) {
    case partition_index_read:
      error= file->index_read_map(rec_buf_ptr,
                                  m_start_key.key,
                                  m_start_key.keypart_map,
                                  m_start_key.flag);
      break;
    case partition_index_first:
      /*
        MyISAM engine can fail if we call index_first() when indexes are
        disabled, which happens if the table is empty.
        Here we use file->stats.records instead of file->records() because
        file->records() is supposed to return an EXACT count, and it can be
        possibly slow. We don't need an exact number, an approximate one -
        from the last ::info() call - is sufficient.
      */
      if (file->stats.records == 0)
      {
        error= HA_ERR_END_OF_FILE;
        break;
      }
      error= file->index_first(rec_buf_ptr);
      reverse_order= FALSE;
      break;
    case partition_index_last:
      /*
        MyISAM engine can fail if we call index_last() when indexes are
        disabled, which happens if the table is empty.
        The same file->stats.records approximation as above is sufficient.
      */
      if (file->stats.records == 0)
      {
        error= HA_ERR_END_OF_FILE;
        break;
      }
      error= file->index_last(rec_buf_ptr);
      reverse_order= TRUE;
      break;
    case partition_index_read_last:
      error= file->index_read_last_map(rec_buf_ptr,
                                       m_start_key.key,
                                       m_start_key.keypart_map);
      reverse_order= TRUE;
      break;
    case partition_read_range:
    {
      /*
        This can only read the record into table->record[0], as that was
        set when the table was being opened. We have to memcpy the data
        ourselves.
      */
      error= file->read_range_first(m_start_key.key? &m_start_key: NULL,
                                    end_range, eq_range, TRUE);
      memcpy(rec_buf_ptr, table->record[0], m_rec_length);
      reverse_order= FALSE;
      break;
    }
    default:
      DBUG_ASSERT(FALSE);
      DBUG_RETURN(HA_ERR_END_OF_FILE);
    }
    if (!error)
    {
      found= TRUE;
      /*
        Initialize queue without order first, simply insert
      */
      queue_element(&m_queue, j++)= (uchar*)queue_buf(i);
    }
    else if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
    {
      DBUG_RETURN(error);
    }
  }
  if (found)
  {
    /*
      We found at least one partition with data, now sort all entries and
      after that read the first entry and copy it to the buffer to return in.
    */
    queue_set_max_at_top(&m_queue, reverse_order);
    queue_set_cmp_arg(&m_queue, (void*)m_curr_key_info);
    m_queue.elements= j;
    queue_fix(&m_queue);
    return_top_record(buf);
    table->status= 0;
    DBUG_PRINT("info", ("Record returned from partition %d", m_top_entry));
    DBUG_RETURN(0);
  }
  DBUG_RETURN(HA_ERR_END_OF_FILE);
}
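
/*
  Sketch of the ordered scan above (illustrative, not executed code): it
  is a k-way merge over one buffered row per partition, kept in a priority
  queue compared on m_curr_key_info:

    for each used partition p:        // done by the loop above
      read the first row of p into rec_buf(p) and slot it into m_queue
    queue_fix(&m_queue);              // heapify once all rows are in
    return_top_record(buf);           // smallest (or largest) key wins

  Subsequent rows are produced by handle_ordered_next(), which refills
  only the partition whose row was just returned.
*/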

/*
  Return the top record in sort order

  SYNOPSIS
    return_top_record()
    out:buf                       Row returned in MySQL Row Format

  RETURN VALUE
    NONE
*/
void ha_partition::return_top_record(uchar *buf)
{
  uint part_id;
  uchar *key_buffer= queue_top(&m_queue);
  uchar *rec_buffer= key_buffer + PARTITION_BYTES_IN_POS;

  part_id= uint2korr(key_buffer);
  memcpy(buf, rec_buffer, m_rec_length);
  m_last_part= part_id;
  m_top_entry= part_id;
}
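
/*
  Layout of a queue entry as implied by the code above: the first
  PARTITION_BYTES_IN_POS bytes hold the partition id (read back with
  uint2korr()), immediately followed by the m_rec_length bytes of the
  buffered record, so one pointer serves both as sort handle and as row
  storage.
*/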
/*
  Common routine to handle index_next with ordered results

  SYNOPSIS
    handle_ordered_next()
    out:buf                       Read row in MySQL Row Format
    next_same                     Called from index_next_same

  RETURN VALUE
    HA_ERR_END_OF_FILE            End of scan
    0                             Success
    other                         Error code
*/
int ha_partition::handle_ordered_next(uchar *buf, bool is_next_same)
{
  int error;
  uint part_id= m_top_entry;
  handler *file= m_file[part_id];
  DBUG_ENTER("ha_partition::handle_ordered_next");

  if (m_index_scan_type == partition_read_range)
  {
    error= file->read_range_next();
    memcpy(rec_buf(part_id), table->record[0], m_rec_length);
  }
  else if (!is_next_same)
    error= file->index_next(rec_buf(part_id));
  else
    error= file->index_next_same(rec_buf(part_id), m_start_key.key,
                                 m_start_key.length);
  if (error)
  {
    if (error == HA_ERR_END_OF_FILE)
    {
      /* Return next buffered row */
      queue_remove(&m_queue, (uint) 0);
      if (m_queue.elements)
      {
        DBUG_PRINT("info", ("Record returned from partition %u (2)",
                            m_top_entry));
        return_top_record(buf);
        table->status= 0;
        error= 0;
      }
    }
    DBUG_RETURN(error);
  }
  queue_replaced(&m_queue);
  return_top_record(buf);
  DBUG_PRINT("info", ("Record returned from partition %u", m_top_entry));
  DBUG_RETURN(0);
}
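
/*
  Note (reading aid): queue_replaced() above re-sorts only the top element
  after its partition delivered a fresh row, while the HA_ERR_END_OF_FILE
  branch pops the exhausted partition with queue_remove(); either way the
  next call again finds the correct row at the top of m_queue.
*/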
/*
  Common routine to handle index_prev with ordered results

  SYNOPSIS
    handle_ordered_prev()
    out:buf                       Read row in MySQL Row Format

  RETURN VALUE
    HA_ERR_END_OF_FILE            End of scan
    0                             Success
    other                         Error code
*/
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
  instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to use void *,
  as this requires fewer casts in the code and is more in line with how the
  standard functions work.
- Added an extra length argument to dirname_part() to return the length of the
  created string.
- Changed (at least) the following functions to take uchar* as argument:
  - db_dump()
  - my_net_write()
  - net_write_command()
  - net_store_data()
  - DBUG_DUMP()
  - decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
  argument to my_uncompress() from a pointer to a value, as we only return
  one value (makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover() and handler::discover()
  the type of the 'frmdata' argument to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot
  of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of unneeded casts
- Added a few new casts required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string
  lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
  explicitly as this conflict was often hidden by casting the function to
  hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
  get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Included zlib.h in some files as we needed the declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to be
  size_t. This was needed to properly detect errors (which are
  returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
  (portability fix)
- Removed Windows specific code to restore the cursor position, as this
  causes a slowdown on Windows and we should not mix read() and pread()
  calls anyway as this is not thread safe. Updated the function comment to
  reflect this. Changed one function that depended on the original behavior
  of my_pwrite() to itself restore the cursor position.
- Added some missing checks of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long'
  overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect
  that m_size is the number of elements in the array, not a string/memory
  length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
  Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root + memcpy with
    strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix).
  - Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00
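One practical consequence of the uint -> size_t changes described in the commit message above is that I/O results can no longer be error-checked with a signed comparison; errors must be compared against (size_t) -1. A minimal standalone sketch using only standard C I/O (read_sketch() and MY_FILE_ERROR_SKETCH are invented stand-ins; the real my_read() in mysys has a different signature and a flags argument):

#include <cstddef>
#include <cstdio>

static const size_t MY_FILE_ERROR_SKETCH= (size_t) -1;

// Invented mysys-style read wrapper after the uint -> size_t change.
size_t read_sketch(FILE *f, unsigned char *buf, size_t count) {
  size_t n= fread(buf, 1, count, f);
  if (n != count && ferror(f))
    return MY_FILE_ERROR_SKETCH;          // error is (size_t) -1, not < 0
  return n;                               // short read at EOF is not an error
}

int main() {
  unsigned char buf[16];
  FILE *f= fopen("/etc/hostname", "rb");
  if (!f) return 1;
  size_t n= read_sketch(f, buf, sizeof(buf));
  // With size_t results, "if (n < 0)" is always false; compare explicitly.
  if (n == MY_FILE_ERROR_SKETCH) std::printf("read error\n");
  else std::printf("read %zu bytes\n", n);
  fclose(f);
  return 0;
}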
|
|
|
int ha_partition::handle_ordered_prev(uchar *buf)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
int error;
|
|
|
|
uint part_id= m_top_entry;
|
|
|
|
handler *file= m_file[part_id];
|
|
|
|
DBUG_ENTER("ha_partition::handle_ordered_prev");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if ((error= file->index_prev(rec_buf(part_id))))
|
|
|
|
{
|
|
|
|
if (error == HA_ERR_END_OF_FILE)
|
|
|
|
{
|
2006-01-17 08:40:00 +01:00
|
|
|
queue_remove(&m_queue, (uint) 0);
|
|
|
|
if (m_queue.elements)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
return_top_record(buf);
|
|
|
|
DBUG_PRINT("info", ("Record returned from partition %d (2)",
|
|
|
|
m_top_entry));
|
|
|
|
error= 0;
|
2007-01-12 12:46:20 +01:00
|
|
|
table->status= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
DBUG_RETURN(error);
|
|
|
|
}
|
2006-01-17 08:40:00 +01:00
|
|
|
queue_replaced(&m_queue);
|
2005-07-18 13:31:02 +02:00
|
|
|
return_top_record(buf);
|
|
|
|
DBUG_PRINT("info", ("Record returned from partition %d", m_top_entry));
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE information calls
|
|
|
|
****************************************************************************/
|
|
|
|
|
|
|
|
/*
|
|
|
|
These are all first approximations of the extra, info, scan_time
|
|
|
|
and read_time calls
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
General method to gather info from handler
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
info()
|
|
|
|
flag Specifies what info is requested
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
::info() is used to return information to the optimizer.
|
|
|
|
Currently this table handler doesn't implement most of the fields
|
|
|
|
really needed. SHOW also makes use of this data.
|
|
|
|
Another note: if your handler doesn't provide an exact record count,
|
|
|
|
you will probably want to have the following in your code:
|
|
|
|
if (records < 2)
|
|
|
|
records = 2;
|
|
|
|
The reason is that the server will optimize for cases of only a single
|
|
|
|
record. If in a table scan you don't know the number of records
|
|
|
|
it will probably be better to set records to two so you can return
|
|
|
|
as many records as you need.
|
|
|
|
|
|
|
|
Along with records, a few more variables you may wish to set are:
|
|
|
|
records
|
|
|
|
deleted
|
|
|
|
data_file_length
|
|
|
|
index_file_length
|
|
|
|
delete_length
|
|
|
|
check_time
|
|
|
|
Take a look at the public variables in handler.h for more information.
|
|
|
|
|
|
|
|
Called in:
|
|
|
|
filesort.cc
|
|
|
|
ha_heap.cc
|
|
|
|
item_sum.cc
|
|
|
|
opt_sum.cc
|
|
|
|
sql_delete.cc
|
|
|
|
sql_delete.cc
|
|
|
|
sql_derived.cc
|
|
|
|
sql_select.cc
|
|
|
|
sql_select.cc
|
|
|
|
sql_select.cc
|
|
|
|
sql_select.cc
|
|
|
|
sql_select.cc
|
|
|
|
sql_show.cc
|
|
|
|
sql_show.cc
|
|
|
|
sql_show.cc
|
|
|
|
sql_show.cc
|
|
|
|
sql_table.cc
|
|
|
|
sql_union.cc
|
|
|
|
sql_update.cc
|
|
|
|
|
|
|
|
Some flags that are not implemented
|
|
|
|
HA_STATUS_POS:
|
|
|
|
This parameter is never used by the MySQL Server. It is checked in a
|
|
|
|
place in MyISAM so it could potentially be used by MyISAM specific
|
|
|
|
programs.
|
|
|
|
HA_STATUS_NO_LOCK:
|
|
|
|
This is declared and often used. It's only used by MyISAM.
|
|
|
|
It means that MySQL doesn't need the absolute latest statistics
|
|
|
|
information. This may save the handler from doing internal locks while
|
|
|
|
retrieving statistics data.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
2006-10-18 14:31:48 +02:00
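Before the implementation, a concrete illustration of the records >= 2 convention described in the comment above: the optimizer special-cases tables known to hold at most one row, so an engine without exact counts should round its estimate up. This is a hedged sketch for a hypothetical engine, not ha_partition's code (which aggregates per-partition stats instead); ExampleStats, STATUS_VARIABLE_SKETCH and example_estimate_rows() are invented stand-ins for handler::stats, HA_STATUS_VARIABLE and the engine's own estimator.

#include <cstdint>
#include <cstdio>

// Invented, simplified stand-ins; real handlers fill the much richer
// ha_statistics struct declared in handler.h.
struct ExampleStats { uint64_t records= 0; };
static const unsigned STATUS_VARIABLE_SKETCH= 1;

// Pretend engine estimate standing in for whatever cheap approximation
// the engine keeps (page counts, samples, ...).
static uint64_t example_estimate_rows() { return 1; }

static void example_info(ExampleStats &stats, unsigned flag) {
  if (flag & STATUS_VARIABLE_SKETCH) {
    stats.records= example_estimate_rows();
    // The optimizer treats records < 2 as "at most one row", so an engine
    // without exact counts should round an estimate up to 2.
    if (stats.records < 2)
      stats.records= 2;
  }
}

int main() {
  ExampleStats stats;
  example_info(stats, STATUS_VARIABLE_SKETCH);
  std::printf("reported records: %llu\n",
              (unsigned long long) stats.records);
  return 0;
}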
|
|
|
int ha_partition::info(uint flag)
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
Bug#38804: Query deadlock causes all tables to be inaccessible.
The problem was a mutex added in bug#27405 for solving a problem
with auto_increment in partitioned InnoDB tables
(in ha_partition::write_row over partitions file->ha_write_row).
The solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, not handling
multi-row INSERTs properly, etc.)
Changed the auto_increment handling for partitioning:
Added a ha_data variable in table_share for storage engine specific data
such as auto_increment value handling in partitioning (also see WL 4305),
and used the ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in
the TABLE_SHARE and use a mutex to lock it for reading, updating
and unlocking, all in one block. Only access all partitions
when it is not initialized.
Also allow reservations of ranges; if no one has made a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL 3146).
The lock is kept from the first reservation if it is statement based
replication and a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (like INSERT SELECT and
LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need to have mutex
protection around write_row in all cases)
and work with any local storage engine.
2008-09-08 15:30:01 +02:00
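The locking protocol described in the commit message above can be illustrated standalone: one shared value per table (held in the TABLE_SHARE), initialized lazily from the per-partition maxima and then read and advanced under a single mutex. The sketch below uses std::mutex and invented names (SharePerTable, reserve_auto_inc()); the server's HA_DATA_PARTITION layout and its lock_auto_increment()/unlock_auto_increment() wrappers differ in detail.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <vector>

// Invented stand-in for the per-table share data (HA_DATA_PARTITION).
struct SharePerTable {
  std::mutex mutex;
  bool auto_inc_initialized= false;
  uint64_t next_auto_inc_val= 0;
};

// Lazily initialize from the per-partition maxima, then reserve a range.
// Returns the first value of the reserved range [first, first + count).
uint64_t reserve_auto_inc(SharePerTable &share,
                          const std::vector<uint64_t> &partition_max,
                          uint64_t count) {
  std::lock_guard<std::mutex> guard(share.mutex);   // lock_auto_increment()
  if (!share.auto_inc_initialized) {
    // "Only access all partitions when it is not initialized":
    uint64_t max_used= 0;
    for (uint64_t v : partition_max) max_used= std::max(max_used, v);
    share.next_auto_inc_val= max_used + 1;
    share.auto_inc_initialized= true;
  }
  uint64_t first= share.next_auto_inc_val;
  share.next_auto_inc_val+= count;                  // read + update in one block
  return first;
}                                                   // unlock_auto_increment()

int main() {
  SharePerTable share;
  std::vector<uint64_t> partition_max= {17, 42, 8};       // per-partition maxima
  uint64_t a= reserve_auto_inc(share, partition_max, 3);  // reserves 43..45
  uint64_t b= reserve_auto_inc(share, partition_max, 1);  // reserves 46
  std::printf("first range starts at %llu, next at %llu\n",
              (unsigned long long) a, (unsigned long long) b);
  return 0;
}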
|
|
|
DBUG_ENTER("ha_partition::info");
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
if (flag & HA_STATUS_AUTO)
|
|
|
|
{
|
2008-09-08 15:30:01 +02:00
|
|
|
bool auto_inc_is_first_in_idx= (table_share->next_number_keypart == 0);
|
|
|
|
HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_PRINT("info", ("HA_STATUS_AUTO"));
|
2008-09-08 15:30:01 +02:00
|
|
|
if (!table->found_next_number_field)
|
|
|
|
stats.auto_increment_value= 0;
|
|
|
|
else if (ha_data->auto_inc_initialized)
|
2006-08-08 14:47:58 +02:00
|
|
|
{
|
2008-09-08 15:30:01 +02:00
|
|
|
lock_auto_increment();
|
|
|
|
stats.auto_increment_value= ha_data->next_auto_inc_val;
|
|
|
|
unlock_auto_increment();
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
lock_auto_increment();
|
|
|
|
/* to avoid two concurrent initializations, check again when locked */
|
|
|
|
if (ha_data->auto_inc_initialized)
|
|
|
|
stats.auto_increment_value= ha_data->next_auto_inc_val;
|
|
|
|
else
|
|
|
|
{
|
|
|
|
handler *file, **file_array;
|
|
|
|
ulonglong auto_increment_value= 0;
|
|
|
|
file_array= m_file;
|
|
|
|
DBUG_PRINT("info",
|
|
|
|
("checking all partitions for auto_increment_value"));
|
|
|
|
do
|
|
|
|
{
|
|
|
|
file= *file_array;
|
|
|
|
file->info(HA_STATUS_AUTO);
|
|
|
|
set_if_bigger(auto_increment_value,
|
|
|
|
file->stats.auto_increment_value);
|
|
|
|
} while (*(++file_array));
|
|
|
|
|
|
|
|
DBUG_ASSERT(auto_increment_value);
|
|
|
|
stats.auto_increment_value= auto_increment_value;
|
|
|
|
if (auto_inc_is_first_in_idx)
|
|
|
|
{
|
|
|
|
set_if_bigger(ha_data->next_auto_inc_val, auto_increment_value);
|
|
|
|
ha_data->auto_inc_initialized= TRUE;
|
|
|
|
DBUG_PRINT("info", ("initializing next_auto_inc_val to %lu",
|
|
|
|
(ulong) ha_data->next_auto_inc_val));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
unlock_auto_increment();
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
if (flag & HA_STATUS_VARIABLE)
|
|
|
|
{
|
|
|
|
DBUG_PRINT("info", ("HA_STATUS_VARIABLE"));
|
|
|
|
/*
|
|
|
|
Calculates statistical variables
|
|
|
|
records: Estimate of number of records in table
|
2008-12-19 09:23:15 +01:00
|
|
|
We report sum (always at least 2 if not empty)
|
2005-07-18 13:31:02 +02:00
|
|
|
deleted: Estimate of number of holes in the table due to
|
|
|
|
deletes
|
|
|
|
We report sum
|
|
|
|
data_file_length: Length of data file, in principle bytes in table
|
|
|
|
We report sum
|
|
|
|
index_file_length: Length of index file, in principle bytes in
|
|
|
|
indexes in the table
|
|
|
|
We report sum
|
2006-05-04 20:38:42 +02:00
|
|
|
delete_length: Length of free space easily used by new records in table
|
|
|
|
We report sum
|
2005-07-18 13:31:02 +02:00
|
|
|
mean_record_length: Mean record length in the table
|
|
|
|
We calculate this
|
|
|
|
check_time: Time of last check (only applicable to MyISAM)
|
|
|
|
We report last time of all underlying handlers
|
|
|
|
*/
|
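A standalone sketch of the summing described in this comment, including the partition-pruning check: the real code tests bitmap_is_set() on m_part_info->used_partitions before touching a partition, while here a std::vector<bool> stands in for the bitmap and PartStats is an invented miniature of ha_statistics.

#include <cstdint>
#include <cstdio>
#include <vector>

// Invented mini version of the ha_statistics fields summed below.
struct PartStats {
  uint64_t records, deleted, data_file_length, index_file_length;
};

// Sum statistics over the used (non-pruned) partitions only.
PartStats sum_used_partitions(const std::vector<PartStats> &parts,
                              const std::vector<bool> &used) {
  PartStats total= {0, 0, 0, 0};
  for (size_t i= 0; i < parts.size(); i++) {
    if (!used[i])                       // bitmap_is_set(used_partitions, i)
      continue;
    total.records+= parts[i].records;
    total.deleted+= parts[i].deleted;
    total.data_file_length+= parts[i].data_file_length;
    total.index_file_length+= parts[i].index_file_length;
  }
  return total;
}

int main() {
  std::vector<PartStats> parts= {{100, 3, 4096, 1024},
                                 {200, 0, 8192, 2048},
                                 {50, 1, 2048, 512}};
  std::vector<bool> used= {true, false, true};   // partition 1 pruned away
  PartStats t= sum_used_partitions(parts, used);
  std::printf("records=%llu deleted=%llu\n",
              (unsigned long long) t.records, (unsigned long long) t.deleted);
  return 0;
}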
2008-09-08 15:30:01 +02:00
|
|
|
handler *file, **file_array;
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.
Changes that require code changes in other storage engines:
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)
- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to do
  statement specific cleanups.
  (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
  should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
  in the query. read_row() and similar functions only need to read these
  columns.
- table->write_set contains a bitmap over all columns that will be updated
  in the query. write_row() and update_row() only need to update these
  columns.
  The above bitmaps should now be up to date in all contexts
  (including ALTER TABLE, filesort()).
  The handler is informed of any changes to the bitmaps after
  fix_fields() by calling the virtual function
  handler::column_bitmaps_signal(). If the handler does caching of
  these bitmaps (instead of using table->read_set, table->write_set),
  it should redo the caching in this call. As the signal() may be sent
  several times, it's probably best to set a variable in the signal
  and redo the caching on read_row() / write_row() if the variable was
  set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class.
  (One now uses the normal bitmap functions in my_bitmap.c instead of
  handler dedicated bitmap functions.)
- field->query_id is removed. One should instead check
  table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
  instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
  that are not used in the query, one should install a temporary
  all-columns-used map while doing so. For this, we provide the following
  functions:
  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);
  and similar for the write map:
  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->val();
  dbug_tmp_restore_column_map(table->write_set, old_map);
  If this is not done, you will sooner or later hit a DBUG_ASSERT
  in the field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should
  be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not
  just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods) one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
  data_file_length etc.) are now stored in a 'stats' struct. This makes
  it easier to know what status variables are provided by the base
  handler. This requires some trivial variable name changes in the extra()
  function.
- New virtual function handler::records(). This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value. It only has to
  be 'reasonable enough' for the optimizer to be able to choose a good
  optimization path.)
- Non virtual handler::init() function added for caching of virtual
  constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return
  HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
  transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
  that is to be used with 'new handler_name()' to allocate the handler
  in the right area. The xxxx_create_handler() function is also
  responsible for any initialization of the object before returning.
  For example, one should change:
  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }
  ->
  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }
- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set
  but we don't have a primary key. This allows the handler to take precautions
  in remembering any hidden primary key to be able to update/delete any
  found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
  - HA_HAS_RECORDS        Set if ::records() is supported
  - HA_NO_TRANSACTIONS    Set if the engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                          Set if we should mark all primary key columns for
                          read when reading rows as part of a DELETE
                          statement. If there is no primary key,
                          all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ
                          Set if the engine will not read all columns in some
                          cases (based on table->read_set)
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                          Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS           Renamed to HA_DUPLICATE_POS
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                          Set this if we should mark ALL key columns for
                          read when reading rows as part of a DELETE
                          statement. In case of an update we will mark
                          all keys for read for which a key part changed
                          value.
  - HA_STATS_RECORDS_IS_EXACT
                          Set this if stats.records is exact.
                          (This saves us some extra records() calls
                          when optimizing COUNT(*).)
- Removed table_flags()
  - HA_NOT_EXACT_COUNT    Now one should instead use HA_HAS_RECORDS if
                          handler::records() gives an exact count() and
                          HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME      Removed (no one supported this one)
- Removed the unneeded functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed the unused variable handler::sortkey
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is initialized at engine creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
  in the column usage bit maps. (This found a LOT of bugs in the current
  column marking code.)
- In 5.1 before, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns for which we
  need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
  (Before this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in
  all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns and instead
  of setting this to an integer value, it now has the values
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE.
  Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
  for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use
  when one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read
  - def_write_set  Default bitmap for columns to be written
  - tmp_set        Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used and in
  which way.
- The count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also
  traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
  the table cache. (Faster usage of newly created tables.)
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
  at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
  the handler about this.
- table->column_bitmaps_set_no_signal() for a few cases where we need
  to set up new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment
  only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but without signaling the handler about this.
  This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in
  future table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
  it to quickly know that it only needs to read columns that are part
  of the key. (The handler can also use the column map for detecting this,
  but simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other
  columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default
  column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
  in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
  but the field in the clustered key is not assumed to be part of all
  indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
  mark all columns as usable. The 'dbug_' version is primarily intended
  inside a handler when it wants to just call Field::store() & Field::val()
  functions, but doesn't need the column maps set for any other usage.
  (i.e. bitmap_is_set() is never called.)
- We can't use compare_records() to skip updates for handlers that return
  a partial column set when the read_set doesn't cover all columns in the
  write set. The reason for this is that if we have a column marked only for
  write we can't at the MySQL level know if the value changed or not.
  The reason this worked before was that MySQL marked all to-be-written
  columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
  bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT
  object as a thread specific variable for the handler. Instead we
  send the to-be-used MEM_ROOT to get_new_handler().
  (Simpler, faster code.)
Bugs fixed:
- Column marking was not done correctly in a lot of cases
  (ALTER TABLE, when using triggers, auto_increment fields etc.).
  (Could potentially result in wrong values being inserted in table handlers
  relying on the old column maps or field->set_query_id being correct.)
  Especially when it comes to triggers, there may be cases where the
  old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
  This allowed me to remove some wrong warnings about:
  "Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
  (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
  some warnings about
  "Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
  which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
  (This has probably caused us to not properly remove temporary files after
  a crash.)
- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation. This is required by
  the definition of REPLACE and also ensures that fields that are only
  part of UPDATE are properly handled. This fixed a bug in NDB and
  REPLACE where REPLACE wrongly copied some column values from the replaced
  row.
- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had
  been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables
- Removed the unneeded item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
  row pointers (not used columns). This allowed me to remove some references
  to sql_command in filesort and should also enable us to return column
  results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE
  bitmaps, which allowed me to use bitmap functions instead of looping over
  all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines that were too long
- Moved some variable declarations to the start of functions for better code
  readability.
- Removed some unused arguments from functions
  (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
  usage.
- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking
  of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in
  ha_reset().
- Don't build table->normalized_path as this is now identical to table->path
  (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result).
  Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly.)
  Lars has promised to do this.
2006-06-04 17:52:22 +02:00
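The read_set/write_set idea from the WL#3281 notes above can be shown standalone: the server hands the handler two column bitmaps, and a partial-column-read engine only materializes the marked columns. The sketch below is an invented miniature (TableSketch, read_row_sketch()); the real code uses MY_BITMAP with bitmap_is_set() on TABLE::read_set and TABLE::write_set, not std::vector<bool>.

#include <cstdio>
#include <string>
#include <vector>

// Invented miniature of TABLE with its two column usage bitmaps.
struct TableSketch {
  std::vector<std::string> columns;
  std::vector<bool> read_set;    // columns the query needs to read
  std::vector<bool> write_set;   // columns the query will update
};

// A partial-column-read engine only fetches marked columns
// (the idea behind HA_PARTIAL_COLUMN_READ).
void read_row_sketch(const TableSketch &t) {
  for (size_t i= 0; i < t.columns.size(); i++)
    if (t.read_set[i])
      std::printf("fetching column %s\n", t.columns[i].c_str());
}

int main() {
  // UPDATE t SET b = 1 WHERE a = 2: read 'a', write 'b'.
  TableSketch t= {{"a", "b", "c"},
                  {true, false, false},
                  {false, true, false}};
  read_row_sketch(t);   // only column 'a' is fetched
  return 0;
}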
|
|
|
stats.records= 0;
|
|
|
|
stats.deleted= 0;
|
|
|
|
stats.data_file_length= 0;
|
|
|
|
stats.index_file_length= 0;
|
|
|
|
stats.check_time= 0;
|
2006-06-04 20:05:22 +02:00
|
|
|
stats.delete_length= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
file_array= m_file;
|
|
|
|
do
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
if (bitmap_is_set(&(m_part_info->used_partitions), (file_array - m_file)))
|
|
|
|
{
|
|
|
|
file= *file_array;
|
|
|
|
file->info(HA_STATUS_VARIABLE);
|
2006-06-04 17:52:22 +02:00
|
|
|
stats.records+= file->stats.records;
|
|
|
|
stats.deleted+= file->stats.deleted;
|
|
|
|
stats.data_file_length+= file->stats.data_file_length;
|
|
|
|
stats.index_file_length+= file->stats.index_file_length;
|
2006-06-05 05:16:08 +02:00
|
|
|
stats.delete_length+= file->stats.delete_length;
|
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes
Changes that requires code changes in other code of other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite),
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
statement specific cleanups.
(The only case it's not called is if force the file to be closed)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all context
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmap after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. as the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For not DBUG binaries, the dbug_tmp_restore_column_map() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away be the compiler).
- If one needs to temporary set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This requires some trivial variable names in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true.
(stats.records is not supposed to be an exact value. It's only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
but we don't have a primary key. This allows the handler to take precisions
in remembering any hidden primary key to able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                            Set this if we should mark ALL key columns for
                            read when reading rows as part of a DELETE
                            statement. In case of an update we will mark
                            all keys for read for which a key part changed
                            value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed unneeded functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed unused variable handler::sortkey
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is set up at handler creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
  in the column usage bitmaps. (This found a LOT of bugs in the current
  column marking code).
- In 5.1 before, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns for which we
  need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage)
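  A handler can make the same check itself with the ordinary bitmap
  functions, e.g. (a sketch):
    DBUG_ASSERT(bitmap_is_set(table->read_set, field->field_index));
    field->val();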
- thd->set_query_id is renamed to thd->mark_used_columns and instead
  of setting this to an integer value, it now has the values:
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
  Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
  for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use
  when one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
  - def_read_set     Default bitmap for columns to be read
  - def_write_set    Default bitmap for columns to be written
  - tmp_set          Can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used in which way.
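  For example, an engine that sets HA_PARTIAL_COLUMN_READ could fill only the
  needed columns of a row buffer like this (a sketch; unpack_column() is a
  hypothetical engine-specific helper):
    for (Field **f_ptr= table->field; *f_ptr; f_ptr++)
    {
      if (bitmap_is_set(table->read_set, (*f_ptr)->field_index))
        unpack_column(*f_ptr);    /* decode only the columns in read_set */
    }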
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also
  traverse subqueries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
  to set up new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment
  only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, but not signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags())
- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in
  future table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function)
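  Typical calling pattern (a sketch; position() is called after a row has
  been read):
    table->prepare_for_position();
    table->file->position(table->record[0]);  /* ref now holds the row position */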
- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
  it to quickly know that it only needs to read columns that are part
  of the key. (The handler can also use the column map for detecting this,
  but simpler/faster handlers can just monitor the extra() call; see the
  sketch below).
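  A simple handler that just monitors the extra() call could do (a sketch;
  ha_example and m_key_read are illustrative names):
    int ha_example::extra(enum ha_extra_function operation)
    {
      if (operation == HA_EXTRA_KEYREAD)
        m_key_read= TRUE;   /* hypothetical: serve reads from the index only */
      return 0;
    }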
- table->mark_columns_used_by_index_no_reset() to, in addition to the other
  columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
  in table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops)
- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
  but the field in the clustered key is not assumed to be part of all indexes.
  (used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
  mark all columns as usable. The 'dbug_' version is primarily intended for
  use inside a handler when it wants to just call Field::store() & Field::val()
  functions, but doesn't need the column maps set for any other usage.
  (i.e. bitmap_is_set() is never called)
- We can't use compare_records() to skip updates for handlers that return
  a partial column set when the read_set doesn't cover all columns in the
  write set. The reason for this is that if we have a column marked only for
  write, we can't know at the MySQL level whether the value changed or not.
  The reason this worked before was that MySQL marked all to-be-written
  columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
  bug'. The guard is sketched below.
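  The guard in the update path is then essentially (a sketch of the
  condition, using the normal bitmap functions from my_bitmap.c):
    bool can_compare_record=
      !(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ) ||
      bitmap_is_subset(table->write_set, table->read_set);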
- open_table_from_share() no longer sets up a temporary MEM_ROOT
  object as a thread-specific variable for the handler. Instead we
  pass the to-be-used MEM_ROOT to get_new_handler().
  (Simpler, faster code)
Bugs fixed:
- Column marking was not done correctly in a lot of cases.
(ALTER TABLE, when using triggers, auto_increment fields etc)
  (Could potentially result in wrong values inserted in table handlers
  relying on the old column maps or field->set_query_id being correct)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
  (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
  some warnings about
  "Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
crash)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had been
  automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables()
- Removed the unneeded item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
  row pointers (instead of the used columns). This allowed me to remove some
  references to sql_command in filesort() and should also enable us to return
  column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up lines that were too long
- Moved some variable declarations to the start of functions for better code
  readability.
- Removed some unused arguments from functions.
  (setup_fields(), mysql_prepare_insert_check_table())
- setup_fields() now takes an enum instead of an int for marking column
  usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of
  whether the timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in
  ha_reset()
- Don't build table->normalized_path as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly).
  Lars has promised to do this.
        if (file->stats.check_time > stats.check_time)
          stats.check_time= file->stats.check_time;
      }
    } while (*(++file_array));
    if (stats.records && stats.records < 2 &&
        !(m_file[0]->ha_table_flags() & HA_STATS_RECORDS_IS_EXACT))
      stats.records= 2;
    if (stats.records > 0)
      stats.mean_rec_length= (ulong) (stats.data_file_length / stats.records);
    else
      stats.mean_rec_length= 0;
  }
  if (flag & HA_STATUS_CONST)
  {
    DBUG_PRINT("info", ("HA_STATUS_CONST"));
    /*
      Recalculate loads of constant variables. MyISAM also sets things
      directly on the table share object.

      Check whether this should be fixed since handlers should not
      change things directly on the table object.

      Monty comment: This should NOT be changed! It's the handler's
      responsibility to correct table->s->keys_xxxx information if keys
      have been disabled.

      The most important parameters set here are records per key on
      all indexes, block_size and primary key ref_length.

      For each index there is an array of rec_per_key.
      As an example, if we have an index with three attributes a, b and c
      we will have an array of 3 rec_per_key.
      rec_per_key[0] is an estimate of the number of records divided by
      the number of unique values of the field a.
      rec_per_key[1] is an estimate of the number of records divided
      by the number of unique combinations of the fields a and b.
      rec_per_key[2] is an estimate of the number of records divided
      by the number of unique combinations of the fields a, b and c.

      Many handlers only set the value of rec_per_key when all fields
      are bound (rec_per_key[2] in the example above).

      If the handler doesn't support statistics, it should set all of the
      above to 0.

      We will allow the first handler to set the rec_per_key and use
      this as an estimate on the total table.

      max_data_file_length:     Maximum data file length
                                We ignore it, it is only used in
                                SHOW TABLE STATUS
      max_index_file_length:    Maximum index file length
                                We ignore it since it is never used
      block_size:               Block size used
                                We set it to the value of the first handler
      ref_length:               We set this to the value calculated
                                and stored in the local object
      create_time:              Creation time of table
                                Set by the first handler

      So we calculate these constants by using the variables of the first
      handler.
    */
Bug#38804: Query deadlock causes all tables to be inaccessible.
Problem was a mutex added in bug#27405 for solving a problem
with auto_increment in partitioned InnoDB tables
(in ha_partition::write_row over partitions file->ha_write_row).
Solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, not handling
multi-row INSERTs properly, etc.)
Changed the auto_increment handling for partitioning:
Added a ha_data variable in the table share for storage engine specific data
such as auto_increment value handling in partitioning (also see WL#4305),
and using the ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in
the TABLE_SHARE and use a mutex to lock it for reading, update it
and unlock it, in one block. Only access all partitions
when it is not initialized.
Also allow reservations of ranges, and if no one has made a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL#3146).
The lock is kept from the first reservation if it is statement based
replication and a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (like INSERT SELECT,
LOAD DATA; unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need to have mutex
protection around write_row in all cases)
and work with any local storage engine.
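The reservation step inside get_auto_increment() can be pictured like this
(a much-simplified sketch with hypothetical names; the real code also handles
ranges, releasing unused values and statement-based replication):
  lock_auto_increment();                 /* hypothetical: ha_data->mutex */
  if (!auto_inc_initialized)
    initialize_auto_increment();         /* hypothetical: reads all partitions once */
  *first_value= next_auto_inc_val;
  next_auto_inc_val+= nb_desired_values * increment;
  unlock_auto_increment();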
    handler *file;

    file= m_file[0];
    file->info(HA_STATUS_CONST);
    stats.create_time= file->stats.create_time;
    ref_length= m_ref_length;
  }

  if (flag & HA_STATUS_ERRKEY)
  {
    handler *file= m_file[m_last_part];
    DBUG_PRINT("info", ("info: HA_STATUS_ERRKEY"));
    /*
      This flag is used to get the index number of the unique index that
      reported a duplicate key.
      We will report the errkey on the last handler used and ignore the rest.
      Note: not all engines support HA_STATUS_ERRKEY, so set errkey first.
    */
    file->errkey= errkey;
    file->info(HA_STATUS_ERRKEY);
    errkey= file->errkey;
  }

  if (flag & HA_STATUS_TIME)
  {
    handler *file, **file_array;
    DBUG_PRINT("info", ("info: HA_STATUS_TIME"));
    /*
      This flag is used to set the latest update time of the table.
      Used by SHOW commands.
      We will report the maximum of these times.
    */
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes
Changes that requires code changes in other code of other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite),
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to
statement specific cleanups.
(The only case it's not called is if force the file to be closed)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only needs to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only needs to update these
columns.
The above bitmaps should now be up to date in all context
(including ALTER TABLE, filesort()).
The handler is informed of any changes to the bitmap after
fix_fields() by calling the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this code. as the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set.
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(Now one instead uses the normal bitmap functions in my_bitmap.c instead
of handler dedicated bitmap functions)
- field->query_id is removed. One should instead instead check
table->read_set and table->write_set if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check for which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For not DBUG binaries, the dbug_tmp_restore_column_map() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away be the compiler).
- If one needs to temporary set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc) are now stored in a 'stats' struct. This makes
it easier to know what status variables are provided by the base
handler. This requires some trivial variable names in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true.
(stats.records is not supposed to be an exact value. It's only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non virtual handler::init() function added for caching of virtual
constants from engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
but we don't have a primary key. This allows the handler to take precisions
in remembering any hidden primary key to able to update/delete any
found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when when reading rows as part of a DELETE
statement. In case of an update we will mark
all keys for read for which key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed the no-longer-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed the unused variable handler::sortkey
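As a minimal sketch of the map-swap pattern above (the function
example_store_defaults() and its use of a string field are hypothetical;
the map functions are the ones introduced by this change):
    static void example_store_defaults(TABLE *table, Field *field)
    {
      /* Debug-build guard; compiles to a no-op in non-DBUG binaries */
      my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
      field->store("", 0, system_charset_info); /* would assert without swap */
      dbug_tmp_restore_column_map(table->write_set, old_map);

      /* Variant that takes effect in all binaries */
      my_bitmap_map *old_map2= tmp_use_all_columns(table, table->write_set);
      field->store("", 0, system_charset_info);
      tmp_restore_column_map(table->write_set, old_map2);
    }
And a minimal sketch of an engine advertising an exact, cheap row count
via the new records() call (ha_example and its row_count member are
hypothetical):
    ulonglong ha_example::table_flags() const
    {
      return HA_HAS_RECORDS | HA_STATS_RECORDS_IS_EXACT;
    }

    ha_rows ha_example::records()
    {
      return row_count;                 /* maintained by the engine */
    }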
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is set up at handler creation time and refreshed on open.
(A sketch of the caching follows this list.)
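A minimal sketch of the caching (the cached_table_flags member name is an
assumption; the accessor is the one described above):
    ulonglong handler::ha_table_flags() const
    {
      return cached_table_flags;   /* set at creation, refreshed on open */
    }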
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bit maps. (This found a LOT of bugs in current
column marking code).
- In 5.1 before this change, all used columns were marked in read_set and
only updated columns were marked in write_set. Now we only mark columns
for which we need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage; a sketch of an engine
honoring the read map follows this list.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead
of being set to an integer value, it now has the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
All variables named 'set_query_id' were also renamed to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The table object also has two pointers to bitmaps, read_set and write_set,
that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added extra argument to Item::walk() to indicate if we should also
traverse sub queries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
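As a minimal sketch of an HA_PARTIAL_COLUMN_READ engine honoring the read
map when unpacking a row (unpack_column() is a hypothetical engine-specific
helper):
    for (Field **f_ptr= table->field; *f_ptr; f_ptr++)
    {
      Field *field= *f_ptr;
      if (bitmap_is_set(table->read_set, field->field_index))
        unpack_column(field);         /* read only what the server needs */
    }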
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
to set up new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the moment
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, but not signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_update() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in the column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags())
- table->prepare_for_position() to allow us to tell handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function)
- table->mark_auto_increment_column() to tell the handler that we are going
to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
it to quickly know that it only needs to read columns that are part
of the key. (The handler can also use the column map for detecting this,
but a simpler/faster handler can just monitor the extra() call; see the
sketch after this list.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other
columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns
in table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys a set of all keys that are used in the query.
(Simplifies some optimization loops)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key
but a field in the clustered key is not assumed to be part of all indexes.
(used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
mark all columns as usable. The 'dbug_' versions are primarily intended for
use inside a handler when it wants to just call the Field::store() and
Field::val() functions, but doesn't need the column maps set for any other
usage. (i.e. bitmap_is_set() is never called)
- We can't use compare_records() to skip updates for handlers that return
a partial column set when the read_set doesn't cover all columns in the
write set. The reason for this is that if we have a column marked only for
write, we can't at the MySQL level know whether the value changed or not.
The reason this worked before was that MySQL marked all to-be-written
columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
bug'. (A sketch of the resulting check follows this list.)
- open_table_from_share() no longer sets up a temporary MEM_ROOT
object as a thread-specific variable for the handler. Instead we
pass the to-be-used MEM_ROOT to get_new_handler().
(Simpler, faster code)
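A minimal sketch of the extra() monitoring mentioned under
table->mark_columns_used_by_index() (ha_example and its keyread_only
member are hypothetical):
    int ha_example::extra(enum ha_extra_function operation)
    {
      switch (operation)
      {
      case HA_EXTRA_KEYREAD:    keyread_only= TRUE;  break;
      case HA_EXTRA_NO_KEYREAD: keyread_only= FALSE; break;
      default:                  break;
      }
      return 0;
    }
And a sketch of the compare_records() guard described above, assuming
'table' refers to the table being updated:
    bool can_compare_record=
      !(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ) ||
      bitmap_is_subset(table->write_set, table->read_set);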
Bugs fixed:
- Column marking was not done correctly in a lot of cases
(ALTER TABLE, when using triggers, auto_increment fields etc).
(Could potentially result in wrong values being inserted in table handlers
relying on the old column maps or field->query_id being correct.)
Especially when it comes to triggers, there may be cases where the
old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
some warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
crash)
- slow_logs was not properly initialized, which could cause
extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we would give an error
when creating a primary key with NULL fields, even after the fields had been
automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables
- Removed the no-longer-needed item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
row pointers (rather than used columns). This allowed me to remove some
references to sql_command in filesort and should also enable us to return
column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code
readability.
- Removed some unused arguments from functions
(setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking
of whether the timestamp field was set by the statement (see the sketch
after this list).
- Removed calls to free_io_cache(), as this is now done automatically in
ha_reset().
- Don't build table->normalized_path as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
do comparisons with the 'convert-dbug-for-diff' tool.
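A minimal sketch of the timestamp check this enables (assuming the 5.1
TABLE::timestamp_field member):
    bool timestamp_set_by_statement=
      table->timestamp_field &&
      bitmap_is_set(table->write_set,
                    table->timestamp_field->field_index);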
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this thoroughly.)
Lars has promised to do this.
    stats.update_time= 0;
    file_array= m_file;
    do
    {
      file= *file_array;
      file->info(HA_STATUS_TIME);
      if (file->stats.update_time > stats.update_time)
        stats.update_time= file->stats.update_time;
    } while (*(++file_array));
  }
  DBUG_RETURN(0);
}


void ha_partition::get_dynamic_partition_info(PARTITION_INFO *stat_info,
                                              uint part_id)
{
  handler *file= m_file[part_id];
  file->info(HA_STATUS_CONST | HA_STATUS_TIME | HA_STATUS_VARIABLE |
             HA_STATUS_NO_LOCK);
  stat_info->records= file->stats.records;
  stat_info->mean_rec_length= file->stats.mean_rec_length;
  stat_info->data_file_length= file->stats.data_file_length;
  stat_info->max_data_file_length= file->stats.max_data_file_length;
  stat_info->index_file_length= file->stats.index_file_length;
  stat_info->delete_length= file->stats.delete_length;
  stat_info->create_time= file->stats.create_time;
  stat_info->update_time= file->stats.update_time;
  stat_info->check_time= file->stats.check_time;
  stat_info->check_sum= 0;
(I added several test cases for this, but in this case it's better that
someone else also tests this throughly).
Lars has promosed to do this.
  if (file->ha_table_flags() & HA_HAS_CHECKSUM)
    stat_info->check_sum= file->checksum();
  return;
}

/*
  General function to prepare handler for certain behavior

  SYNOPSIS
    extra()
    operation              Operation type for extra call

  RETURN VALUE
    >0                     Error code
    0                      Success

  DESCRIPTION
    extra() is called whenever the server wishes to send a hint to
    the storage engine. The MyISAM engine implements the most hints.

    We divide the parameters into the following categories:
    1) Parameters used by most handlers
    2) Parameters used by some non-MyISAM handlers
    3) Parameters used only by MyISAM
    4) Parameters only used by temporary tables for query processing
    5) Parameters only used by MyISAM internally
    6) Parameters not used at all
    7) Parameters only used by federated tables for query processing
    8) Parameters only used by NDB

    The partition handler needs to handle categories 1), 2) and 3).

  1) Parameters used by most handlers
  -----------------------------------
  HA_EXTRA_RESET:
    This option is used by most handlers and it resets the handler state
    to the same state as after an open call. This includes releasing
    any READ CACHE or WRITE CACHE or other internal buffers used.

    It is called from the reset method in the handler interface. There
    are three instances where this is called:
    1) After completing an INSERT ... SELECT ... query the handler for
       the table inserted into is reset.
    2) It is called from close_thread_table, which in turn is called
       from close_thread_tables, except in the case where the tables
       are locked, in which case ha_commit_stmt is called instead.
       It is only called from here if refresh_version hasn't changed
       and the table is not an old table when calling
       close_thread_table. close_thread_tables is called from many
       places as a general clean-up function after completing a query.
    3) It is called when deleting the QUICK_RANGE_SELECT object if the
       QUICK_RANGE_SELECT object had its own handler object. It is
       called immediately before the close of this local handler
       object.
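    As an illustration, a handler's statement cleanup (nowadays the
    reset() method) typically looks like this sketch (hypothetical
    ha_example engine; the exact cleanup is engine-specific):

      int ha_example::reset(void)
      {
        // release READ/WRITE caches and other per-statement buffers
        release_scan_buffers();    // hypothetical helper
        return 0;
      }
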
  HA_EXTRA_KEYREAD:
  HA_EXTRA_NO_KEYREAD:
    These parameters are used to provide an optimisation hint to the
    handler. If HA_EXTRA_KEYREAD is set it is enough to read the index
    fields; for many handlers this means that index-only scans can be
    used and it is not necessary to read the real records to satisfy
    this part of the query. Index-only scans are a very important
    optimisation for disk-based indexes. For main-memory indexes most
    indexes contain a reference to the record and thus KEYREAD only
    says that it is enough to read the key fields.
    HA_EXTRA_NO_KEYREAD disables this for the handler; HA_EXTRA_RESET
    will also disable this option.
    The handler will set HA_KEYREAD_ONLY in its table flags to indicate
    that this feature is supported.
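    A simplified sketch of how the server brackets an index-only scan
    (illustrative only, not the exact server call sites):

      if (table->file->ha_table_flags() & HA_KEYREAD_ONLY)
        table->file->extra(HA_EXTRA_KEYREAD);    // key fields suffice
      ... index scan via index_read()/index_next() ...
      table->file->extra(HA_EXTRA_NO_KEYREAD);   // back to full rows
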
  HA_EXTRA_FLUSH:
    Indication to flush tables to disk; it is supposed to be used to
    ensure that disk-based tables are flushed at the end of query
    execution. Currently it is never used.
  2) Parameters used by some non-MyISAM handlers
  ----------------------------------------------
  HA_EXTRA_KEYREAD_PRESERVE_FIELDS:
    This is a strictly InnoDB feature that is more or less undocumented.
    When it is activated InnoDB copies field by field from its fetch
    cache instead of all fields in one memcpy. It is unclear what the
    purpose of this is.
    Cut from include/my_base.h:
    When using HA_EXTRA_KEYREAD, overwrite only key member fields and
    keep other fields intact. When this is off (by default) InnoDB will
    use memcpy to overwrite the entire row.
  HA_EXTRA_IGNORE_DUP_KEY:
  HA_EXTRA_NO_IGNORE_DUP_KEY:
    Informs the handler that we will not stop the transaction if we get
    duplicate key errors during insert/update.
    Always called in pairs, triggered by INSERT IGNORE and other
    similar SQL constructs.
    Not used by MyISAM.
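    A sketch of the paired calling convention (illustrative, not an
    exact call site):

      table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
      ... write_row() calls; duplicate-key errors do not abort ...
      table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
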
  3) Parameters used only by MyISAM
  ---------------------------------
  HA_EXTRA_NORMAL:
    Only used in MyISAM to reset quick mode; not implemented by any
    other handler. Quick mode is also reset in MyISAM by HA_EXTRA_RESET.

    It is called after completing a successful DELETE query if the
    QUICK option is set.
  HA_EXTRA_QUICK:
    When the user does DELETE QUICK FROM table WHERE ...; this extra
    option is called before the delete query is performed and
    HA_EXTRA_NORMAL is called after the delete query is completed.
    Temporary tables used internally in MySQL always set this option.

    The meaning of quick mode is that when deleting in a B-tree no
    merging of leaves is performed. This is a common method and many
    large DBMSs actually only support this quick mode since it is very
    difficult to merge leaves in a tree used by many threads
    concurrently.
  HA_EXTRA_CACHE:
    This flag is usually set with extra_opt along with a cache size.
    The size of this buffer is set by the user variable
    record_buffer_size. The value of this cache size is the amount of
    data read from disk in each fetch when performing a table scan.
    This means that before scanning a table it is normal to call
    extra with HA_EXTRA_CACHE and, when the scan is completed, to call
    HA_EXTRA_NO_CACHE to release the cache memory.

    Some special care is taken when using this extra parameter since
    there could be a write ongoing on the table in the same statement.
    In this case one has to take special care since there might be a
    WRITE CACHE as well. HA_EXTRA_CACHE specifies using a READ CACHE,
    and using a READ CACHE and a WRITE CACHE at the same time is not
    possible.
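    A minimal sketch of that calling pattern (illustrative; the real
    call sites pass the session's configured buffer size):

      table->file->extra_opt(HA_EXTRA_CACHE, cache_size);
      ... sequential scan via rnd_init()/rnd_next() ...
      table->file->extra(HA_EXTRA_NO_CACHE);
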
    Only MyISAM currently uses this option.

    It is set when doing full table scans using rr_sequential and
    reset when completing such a scan with end_read_record
    (resetting means calling extra with HA_EXTRA_NO_CACHE).

    It is set in filesort.cc for MyISAM internal tables, and it is set
    in a multi-update where HA_EXTRA_CACHE is called on a temporary
    result table, after that ha_rnd_init(0) on the table to be updated,
    and immediately after that HA_EXTRA_NO_CACHE on the table to be
    updated.

    Apart from that it is always used from init_read_record, but not
    when used from UPDATE statements. It is not used from DELETE
    statements with ORDER BY and LIMIT, but it is used in the normal
    scan loop in DELETE statements. The reason is that DELETEs in
    MyISAM don't move existing data rows.
    It is also set in copy_data_between_tables when scanning the old
    table to copy over to the new table.
    And it is set in join_init_read_record where quick objects are used
    to perform a scan on the table. In this case the full table scan
    can even be performed multiple times as part of a nested loop join.
    For the purposes of the partition handler it is obviously necessary
    to have special treatment of this extra call. If we simply passed
    this extra call down to each handler, we would allocate
    cache size * number of partitions of memory, and this is not
    necessary since we will only scan one partition at a time when
    doing full table scans.

    Thus we treat it by first checking whether we have MyISAM handlers
    in the table; if not, we simply ignore the call, and if we have, we
    record the call but do not call any underlying handler yet. Then,
    when performing the sequential scan, we check this recorded value
    and call extra_opt whenever we start scanning a new partition.

    monty: Needs to be fixed so that it's passed to all handlers when
    we move to another partition during table scan.
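    A simplified sketch of this record-and-replay idea (member names as
    used later in this file; the late per-partition call is simplified):

      // on extra(HA_EXTRA_CACHE): only remember the request
      m_extra_cache= TRUE;
      m_extra_cache_size= cachesize;

      // later, when the scan enters partition i:
      if (m_extra_cache)
        m_file[i]->extra_opt(HA_EXTRA_CACHE, m_extra_cache_size);
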
  HA_EXTRA_NO_CACHE:
    When performing a UNION SELECT, HA_EXTRA_NO_CACHE is called from
    the flush method in the select_union class.
    It is also used to some extent for INSERT DELAYED inserts.
    See HA_EXTRA_RESET_STATE for use in conjunction with
    delete_all_rows().

    It should be ok to call HA_EXTRA_NO_CACHE on all underlying
    handlers if they are MyISAM handlers; for other handlers we can
    ignore the call. If no cache is in use they will quickly return
    after finding this out. This also ensures that all caches are
    disabled and none is left by mistake.
    In the future this call will probably be deleted and we will
    instead call ::reset().
  HA_EXTRA_WRITE_CACHE:
    See above; called from various places. It is mostly used when we
    do INSERT ... SELECT.
    No special handling to save cache space is currently implemented.
  HA_EXTRA_PREPARE_FOR_UPDATE:
    This is called as part of a multi-table update. When the table to
    be updated is also scanned, this informs the MyISAM handler to drop
    any caches if dynamic records are used (fixed-size records do not
    care about this call). We pass this along to all underlying MyISAM
    handlers and ignore it for the rest.
  HA_EXTRA_PREPARE_FOR_DROP:
    Only used by MyISAM, called in preparation for a DROP TABLE.
    It is needed mostly on Windows, which cannot handle dropping an
    open file. On other platforms it has the same effect as
    HA_EXTRA_FORCE_REOPEN.
  HA_EXTRA_PREPARE_FOR_RENAME:
    Informs the handler that we are about to attempt a rename of the
    table.
  HA_EXTRA_READCHECK:
  HA_EXTRA_NO_READCHECK:
    There is only one call to HA_EXTRA_NO_READCHECK, from ha_open,
    where it says that this is not needed in SQL. The reason for this
    call is that MyISAM sets READ_CHECK_USED in the open call, so the
    call is needed for MyISAM to reset this feature.
    The idea with this parameter was to inform of doing/not doing a
    read check before applying an update. Since SQL always performs a
    read before applying the update, no read check is needed in MyISAM
    either.

    This is a cut from Docs/myisam.txt:
    Sometimes you might want to force an update without checking
    whether another user has changed the record since you last read it.
    This is somewhat dangerous, so it should ideally not be used. It
    can be accomplished by wrapping the mi_update() call in two calls
    to mi_extra(), using these functions:
    HA_EXTRA_NO_READCHECK=5                 No readcheck on update
    HA_EXTRA_READCHECK=6                    Use readcheck (def)
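    An illustrative sketch of that wrapping (two-argument form as in
    Docs/myisam.txt; newer mi_extra() versions also take a third
    extra_arg parameter):

      mi_extra(file, HA_EXTRA_NO_READCHECK);   // skip the read check
      mi_update(file, old_record, new_record);
      mi_extra(file, HA_EXTRA_READCHECK);      // restore the default
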
  HA_EXTRA_FORCE_REOPEN:
    Only used by MyISAM; called when altering a table and when closing
    tables, to enforce a reopen of the table files.
  4) Parameters only used by temporary tables for query processing
  ----------------------------------------------------------------
  HA_EXTRA_RESET_STATE:
    Same as reset() except that buffers are not released. If there is
    a READ CACHE it is reinit'ed. A cache is reinit'ed to restart
    reading or to change the type of cache between READ CACHE and
    WRITE CACHE.

    This extra function is always called immediately before calling
    delete_all_rows on the handler for temporary tables.
    There is, however, a similar case for a temporary table in
    sql_union.cc where HA_EXTRA_RESET_STATE isn't called, and two other
    cases where HA_EXTRA_NO_CACHE is called before and
    HA_EXTRA_WRITE_CACHE is called afterwards.
    The case with HA_EXTRA_NO_CACHE and HA_EXTRA_WRITE_CACHE means:
    disable caching, delete all rows and enable the WRITE CACHE. This
    is used for temporary tables containing distinct sums and a
    functional group.

    The only case where delete_all_rows is called on non-temporary
    tables is in sql_delete.cc when DELETE FROM table; is issued by a
    user. In this case no special extra calls are performed before or
    after this call.

    The partition handler should not need to bother about this one. It
    should never be called.
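    That three-step sequence looks roughly like this sketch (not an
    exact call site):

      table->file->extra(HA_EXTRA_NO_CACHE);     // disable caching
      table->file->delete_all_rows();            // empty the table
      table->file->extra(HA_EXTRA_WRITE_CACHE);  // enable WRITE CACHE
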
  HA_EXTRA_NO_ROWS:
    An indication to HEAP and MyISAM not to insert rows; only used by
    temporary tables used in query processing.
    Not handled by the partition handler.
  5) Parameters only used by MyISAM internally
  --------------------------------------------
  HA_EXTRA_REINIT_CACHE:
    This call reinitializes the READ CACHE described above if there is
    one; otherwise the call is ignored.

    We could thus safely call it on all underlying handlers if they
    are MyISAM handlers. It is however never called, so we don't
    handle it at all.
  HA_EXTRA_FLUSH_CACHE:
    Flushes the WRITE CACHE in MyISAM. It is only called from one
    place in the code, in sql_insert.cc, where it is called if the
    table_flags don't
    contain HA_DUPLICATE_POS. The only handler having HA_DUPLICATE_POS
    set is the MyISAM handler, so the only handler not receiving this
    call is MyISAM.
    Thus in effect this call is issued but never used. It could be
    removed from sql_insert.cc.
  HA_EXTRA_NO_USER_CHANGE:
    Only used by MyISAM, never called.
    Simulates lock_type as locked.
  HA_EXTRA_WAIT_LOCK:
  HA_EXTRA_WAIT_NOLOCK:
    Only used by MyISAM, called from the MyISAM handler but never from
    server code on top of the handler.
    Sets lock_wait on/off.

  HA_EXTRA_NO_KEYS:
    Only used by MyISAM, and only internally in the MyISAM handler;
    never called from the server level.
  HA_EXTRA_KEYREAD_CHANGE_POS:
  HA_EXTRA_REMEMBER_POS:
  HA_EXTRA_RESTORE_POS:
  HA_EXTRA_PRELOAD_BUFFER_SIZE:
  HA_EXTRA_CHANGE_KEY_TO_DUP:
  HA_EXTRA_CHANGE_KEY_TO_UNIQUE:
    Only used by MyISAM, never called.
  6) Parameters not used at all
  -----------------------------
  HA_EXTRA_KEY_CACHE:
  HA_EXTRA_NO_KEY_CACHE:
    These parameters are no longer used and could be removed.
  7) Parameters only used by federated tables for query processing
  ----------------------------------------------------------------
  HA_EXTRA_INSERT_WITH_UPDATE:
    Informs the handler that an "INSERT ... ON DUPLICATE KEY UPDATE"
    will be executed. This condition is unset by
    HA_EXTRA_NO_IGNORE_DUP_KEY.
  8) Parameters only used by NDB
  ------------------------------
  HA_EXTRA_DELETE_CANNOT_BATCH:
  HA_EXTRA_UPDATE_CANNOT_BATCH:
    Informs the handler that delete_row()/update_row() cannot batch
    deletes/updates and should perform them immediately. This may be
    needed when the table has AFTER DELETE/UPDATE triggers which
    access the subject table. These flags are reset by the
    handler::extra(HA_EXTRA_RESET) call.
*/

int ha_partition::extra(enum ha_extra_function operation)
{
  DBUG_ENTER("ha_partition::extra");
  DBUG_PRINT("info", ("operation: %d", (int) operation));

  switch (operation) {
    /* Category 1), used by most handlers */
  case HA_EXTRA_KEYREAD:
  case HA_EXTRA_NO_KEYREAD:
  case HA_EXTRA_FLUSH:
    DBUG_RETURN(loop_extra(operation));

    /* Category 2), used by non-MyISAM handlers */
  case HA_EXTRA_IGNORE_DUP_KEY:
  case HA_EXTRA_NO_IGNORE_DUP_KEY:
  case HA_EXTRA_KEYREAD_PRESERVE_FIELDS:
  {
    if (!m_myisam)
      DBUG_RETURN(loop_extra(operation));
    break;
  }

    /* Category 3), used by MyISAM handlers */
  case HA_EXTRA_PREPARE_FOR_RENAME:
    DBUG_RETURN(prepare_for_rename());
    break;
  case HA_EXTRA_NORMAL:
  case HA_EXTRA_QUICK:
  case HA_EXTRA_PREPARE_FOR_UPDATE:
  case HA_EXTRA_FORCE_REOPEN:
  case HA_EXTRA_PREPARE_FOR_DROP:
  case HA_EXTRA_FLUSH_CACHE:
  {
    if (m_myisam)
      DBUG_RETURN(loop_extra(operation));
    break;
  }
  case HA_EXTRA_NO_READCHECK:
  {
    /*
      This is only done as a part of ha_open, which is also used in
      ha_partition::open, so there is no need to do anything.
    */
    break;
  }
  case HA_EXTRA_CACHE:
  {
    prepare_extra_cache(0);
    break;
  }
  case HA_EXTRA_NO_CACHE:
  case HA_EXTRA_WRITE_CACHE:
  {
    m_extra_cache= FALSE;
    m_extra_cache_size= 0;
    DBUG_RETURN(loop_extra(operation));
  }
  case HA_EXTRA_IGNORE_NO_KEY:
  case HA_EXTRA_NO_IGNORE_NO_KEY:
  {
    /*
      Ignore these as they are specific to NDB for handling
      idempotency.
    */
    break;
  }
  case HA_EXTRA_WRITE_CAN_REPLACE:
  case HA_EXTRA_WRITE_CANNOT_REPLACE:
  {
    /*
      Informs the handler that write_row() can replace rows which
      conflict with the row being inserted by PK/unique key, without
      reporting an error to the SQL layer.

      This optimization is not safe for partitioned tables in the
      general case, since we may have to put the new version of a row
      into a partition different from the one in which the old version
      resides (for example when we partition by a non-PK column or by
      some column which is not part of the unique key that was
      violated). And since NDB, which is at the moment the only engine
      that supports this optimization, handles partitioning on its own,
      we simply disable it here. (BTW, for NDB this optimization is
      safe since it supports only KEY partitioning and won't use this
      optimization for tables which have additional unique
      constraints.)
    */
    break;
  }
    /* Category 7), used by federated handlers */
  case HA_EXTRA_INSERT_WITH_UPDATE:
    DBUG_RETURN(loop_extra(operation));
    /* Category 8), parameters only used by NDB */
  case HA_EXTRA_DELETE_CANNOT_BATCH:
  case HA_EXTRA_UPDATE_CANNOT_BATCH:
  {
    /* Currently only NDB uses the *_CANNOT_BATCH parameters */
    break;
  }
    /*
      http://dev.mysql.com/doc/refman/5.1/en/partitioning-limitations.html
      says we no longer support logging to partitioned tables, so we
      fail here.
    */
  case HA_EXTRA_MARK_AS_LOG_TABLE:
    DBUG_RETURN(ER_UNSUPORTED_LOG_ENGINE);
  default:
  {
    /* Temporary crash to discover what is wrong */
    DBUG_ASSERT(0);
    break;
  }
  }
  DBUG_RETURN(0);
}


/*
  Special extra call to reset extra parameters

  SYNOPSIS
    reset()

  RETURN VALUE
    >0                     Error code
    0                      Success

  DESCRIPTION
2006-06-04 17:52:22 +02:00
|
|
|
Called at end of each statement to reset buffers
|
2005-07-18 13:31:02 +02:00
|
|
|
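A sketch of the flow (as implemented below): the SQL layer's
ha_reset() lands here at statement end; we mark all partitions as
used again (bitmap_set_all) and fan ha_reset() out to every
underlying partition handler, remembering any non-zero error code.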
*/
|
|
|
|
|
|
|
|
int ha_partition::reset(void)
|
|
|
|
{
|
|
|
|
int result= 0, tmp;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::reset");
|
2005-12-22 10:29:00 +01:00
|
|
|
if (m_part_info)
|
2006-01-29 01:22:32 +01:00
|
|
|
bitmap_set_all(&m_part_info->used_partitions);
|
|
|
|
file= m_file;
|
2005-07-18 13:31:02 +02:00
|
|
|
do
|
|
|
|
{
|
2007-12-20 19:16:55 +01:00
|
|
|
if ((tmp= (*file)->ha_reset()))
|
2005-07-18 13:31:02 +02:00
|
|
|
result= tmp;
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(result);
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Special extra method for HA_EXTRA_CACHE with cachesize as extra parameter
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
extra_opt()
|
|
|
|
operation Must be HA_EXTRA_CACHE
|
|
|
|
cachesize Size of cache in full table scan
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
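Usage (see late_extra_cache() further down): the remembered
m_extra_cache_size is replayed per partition through
extra_opt(HA_EXTRA_CACHE, m_extra_cache_size) once a scan
actually reaches that partition.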
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
int ha_partition::extra_opt(enum ha_extra_function operation, ulong cachesize)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::extra_opt()");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ASSERT(HA_EXTRA_CACHE == operation);
|
|
|
|
prepare_extra_cache(cachesize);
|
|
|
|
DBUG_RETURN(0);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Call extra on handler with HA_EXTRA_CACHE and cachesize
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
prepare_extra_cache()
|
|
|
|
cachesize Size of cache for full table scan
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::prepare_extra_cache(uint cachesize)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::prepare_extra_cache()");
|
|
|
|
|
|
|
|
m_extra_cache= TRUE;
|
|
|
|
m_extra_cache_size= cachesize;
|
|
|
|
if (m_part_spec.start_part != NO_CURRENT_PART_ID)
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
late_extra_cache(m_part_spec.start_part);
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-03-04 21:21:27 +01:00
|
|
|
/*
|
|
|
|
Prepares our new and reorganised handlers for rename or delete
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
prepare_for_rename()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
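During ALTER TABLE ... REORGANIZE PARTITION both the set of new
partition handlers (m_new_file) and the reorganised ones
(m_reorged_file) may exist; as the code below shows, every handler
in both sets receives extra(HA_EXTRA_PREPARE_FOR_RENAME).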
*/
|
|
|
|
|
2007-10-11 17:07:40 +02:00
|
|
|
int ha_partition::prepare_for_rename()
|
2006-03-04 21:21:27 +01:00
|
|
|
{
|
|
|
|
int result= 0, tmp;
|
|
|
|
handler **file;
|
2007-10-11 17:07:40 +02:00
|
|
|
DBUG_ENTER("ha_partition::prepare_for_rename()");
|
2006-03-04 21:21:27 +01:00
|
|
|
|
|
|
|
if (m_new_file != NULL)
|
|
|
|
{
|
|
|
|
for (file= m_new_file; *file; file++)
|
2007-10-11 17:07:40 +02:00
|
|
|
if ((tmp= (*file)->extra(HA_EXTRA_PREPARE_FOR_RENAME)))
|
2006-03-04 21:21:27 +01:00
|
|
|
result= tmp;
|
|
|
|
for (file= m_reorged_file; *file; file++)
|
2007-10-11 17:07:40 +02:00
|
|
|
if ((tmp= (*file)->extra(HA_EXTRA_PREPARE_FOR_RENAME)))
|
2006-03-07 06:18:48 +01:00
|
|
|
result= tmp;
|
|
|
|
DBUG_RETURN(result);
|
2006-03-04 21:21:27 +01:00
|
|
|
}
|
2006-03-07 06:18:48 +01:00
|
|
|
|
2007-10-11 17:07:40 +02:00
|
|
|
DBUG_RETURN(loop_extra(HA_EXTRA_PREPARE_FOR_RENAME));
|
2006-03-04 21:21:27 +01:00
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Call extra on all partitions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
loop_extra()
|
|
|
|
operation extra operation type
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
>0 Error code
|
|
|
|
0 Success
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
int ha_partition::loop_extra(enum ha_extra_function operation)
|
|
|
|
{
|
|
|
|
int result= 0, tmp;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::loop_extra()");
|
2006-03-04 21:21:27 +01:00
|
|
|
|
2006-01-29 01:22:32 +01:00
|
|
|
/*
|
|
|
|
TODO, 5.2: this is where one could possibly add optimisations, e.g. only setting the bitmap
|
|
|
|
_if_ the statement is a SELECT.
|
|
|
|
*/
|
2005-07-18 13:31:02 +02:00
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
{
|
|
|
|
if ((tmp= (*file)->extra(operation)))
|
|
|
|
result= tmp;
|
|
|
|
}
|
|
|
|
DBUG_RETURN(result);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Call extra(HA_EXTRA_CACHE) on next partition_id
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
late_extra_cache()
|
|
|
|
partition_id Partition id to call extra on
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::late_extra_cache(uint partition_id)
|
|
|
|
{
|
|
|
|
handler *file;
|
|
|
|
DBUG_ENTER("ha_partition::late_extra_cache");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!m_extra_cache)
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
file= m_file[partition_id];
|
|
|
|
if (m_extra_cache_size == 0)
|
|
|
|
VOID(file->extra(HA_EXTRA_CACHE));
|
|
|
|
else
|
|
|
|
VOID(file->extra_opt(HA_EXTRA_CACHE, m_extra_cache_size));
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Call extra(HA_EXTRA_NO_CACHE) on next partition_id
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
late_extra_no_cache()
|
|
|
|
partition_id Partition id to call extra on
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
NONE
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::late_extra_no_cache(uint partition_id)
|
|
|
|
{
|
|
|
|
handler *file;
|
|
|
|
DBUG_ENTER("ha_partition::late_extra_no_cache");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
if (!m_extra_cache)
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
file= m_file[partition_id];
|
|
|
|
VOID(file->extra(HA_EXTRA_NO_CACHE));
|
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE optimiser support
|
|
|
|
****************************************************************************/
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Get keys to use for scanning
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
keys_to_use_for_scanning()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
key_map of keys usable for scanning
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
const key_map *ha_partition::keys_to_use_for_scanning()
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::keys_to_use_for_scanning");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(m_file[0]->keys_to_use_for_scanning());
|
|
|
|
}
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
Return time for a scan of the table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
scan_time()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
time for scan
|
|
|
|
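Note (from the implementation below): the returned cost is the sum
of scan_time() over the partitions marked in
m_part_info->used_partitions, so partition pruning directly lowers
the estimated scan cost.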
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
double ha_partition::scan_time()
|
|
|
|
{
|
|
|
|
double scan_time= 0;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::scan_time");
|
|
|
|
|
|
|
|
for (file= m_file; *file; file++)
|
2006-01-29 01:22:32 +01:00
|
|
|
if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
|
|
|
|
scan_time+= (*file)->scan_time();
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(scan_time);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Get time to read
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
read_time()
|
|
|
|
index Index number used
|
|
|
|
ranges Number of ranges
|
|
|
|
rows Number of rows
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
time for read
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
This will be optimised later to include whether or not the index can
|
|
|
|
be used with partitioning. To achieve this we need to add another parameter
|
|
|
|
that specifies how many of the index fields are bound in the ranges.
|
|
|
|
Possibly added as a new call to handlers.
|
2005-07-18 13:31:02 +02:00
|
|
|
*/
|
|
|
|
|
|
|
|
double ha_partition::read_time(uint index, uint ranges, ha_rows rows)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::read_time");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(m_file[0]->read_time(index, ranges, rows));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Find number of records in a range
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
records_in_range()
|
|
|
|
inx Index number
|
|
|
|
min_key Start of range
|
|
|
|
max_key End of range
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
Number of rows in range
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
DESCRIPTION
|
|
|
|
Given a starting key and an ending key, estimate the number of rows that
|
|
|
|
will exist between the two. end_key may be empty, in which case we determine
|
|
|
|
if start_key matches any rows.
|
2005-07-18 13:31:02 +02:00
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
Called from opt_range.cc by check_quick_keys().
|
|
|
|
|
|
|
|
monty: MUST be called for each range and added.
|
|
|
|
Note that MySQL will assume that if this returns 0 there are no
|
|
|
|
matching rows for the range!
|
2005-07-18 13:31:02 +02:00
|
|
|
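Worked example (illustrative figures): with used partitions
{p0, p2} estimating 100 and 40 rows respectively, the result is
140; pruned partitions contribute nothing, and HA_POS_ERROR from
any partition is returned as-is.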
*/
|
|
|
|
|
|
|
|
ha_rows ha_partition::records_in_range(uint inx, key_range *min_key,
|
|
|
|
key_range *max_key)
|
|
|
|
{
|
|
|
|
handler **file;
|
2006-01-29 01:22:32 +01:00
|
|
|
ha_rows in_range= 0;
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_ENTER("ha_partition::records_in_range");
|
|
|
|
|
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
|
|
|
|
{
|
|
|
|
ha_rows tmp_in_range= (*file)->records_in_range(inx, min_key, max_key);
|
|
|
|
if (tmp_in_range == HA_POS_ERROR)
|
|
|
|
DBUG_RETURN(tmp_in_range);
|
|
|
|
in_range+= tmp_in_range;
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(in_range);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Estimate upper bound of number of rows
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
estimate_rows_upper_bound()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
Number of rows
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
ha_rows ha_partition::estimate_rows_upper_bound()
|
|
|
|
{
|
|
|
|
ha_rows rows, tot_rows= 0;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::estimate_rows_upper_bound");
|
|
|
|
|
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
2006-01-29 01:22:32 +01:00
|
|
|
if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
|
|
|
|
{
|
|
|
|
rows= (*file)->estimate_rows_upper_bound();
|
|
|
|
if (rows == HA_POS_ERROR)
|
|
|
|
DBUG_RETURN(HA_POS_ERROR);
|
|
|
|
tot_rows+= rows;
|
|
|
|
}
|
2005-07-18 13:31:02 +02:00
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(tot_rows);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2008-07-07 22:42:19 +02:00
|
|
|
/**
|
|
|
|
Number of rows in table. See handler.h.
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
records()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
Number of total rows in a partitioned table.
|
|
|
|
*/
|
|
|
|
|
|
|
|
ha_rows ha_partition::records()
|
|
|
|
{
|
|
|
|
ha_rows rows, tot_rows= 0;
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::records");
|
|
|
|
|
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
rows= (*file)->records();
|
|
|
|
if (rows == HA_POS_ERROR)
|
|
|
|
DBUG_RETURN(HA_POS_ERROR);
|
|
|
|
tot_rows+= rows;
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(tot_rows);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
/*
|
|
|
|
Is it ok to switch to a new engine for this table
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
can_switch_engines()
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
TRUE Ok
|
|
|
|
FALSE Not ok
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Used to ensure that tables with foreign key constraints are not moved
|
|
|
|
to engines without foreign key support.
|
|
|
|
*/
|
|
|
|
|
|
|
|
bool ha_partition::can_switch_engines()
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
DBUG_ENTER("ha_partition::can_switch_engines");
|
|
|
|
|
|
|
|
file= m_file;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
if (!(*file)->can_switch_engines())
|
|
|
|
DBUG_RETURN(FALSE);
|
|
|
|
} while (*(++file));
|
|
|
|
DBUG_RETURN(TRUE);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Is table cache supported
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
table_cache_type()
|
|
|
|
|
|
|
|
*/
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
uint8 ha_partition::table_cache_type()
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::table_cache_type");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(m_file[0]->table_cache_type());
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE print messages
|
|
|
|
****************************************************************************/
|
|
|
|
|
|
|
|
const char *ha_partition::index_type(uint inx)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::index_type");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_RETURN(m_file[0]->index_type(inx));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2006-03-08 20:06:01 +01:00
|
|
|
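/*
  Report the row type shared by all underlying partition handlers;
  if any handler disagrees with the first one, return
  ROW_TYPE_NOT_USED.
*/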
enum row_type ha_partition::get_row_type() const
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
enum row_type type= (*m_file)->get_row_type();
|
|
|
|
|
|
|
|
for (file= m_file, file++; *file; file++)
|
|
|
|
{
|
|
|
|
enum row_type part_type= (*file)->get_row_type();
|
|
|
|
if (part_type != type)
|
|
|
|
return ROW_TYPE_NOT_USED;
|
|
|
|
}
|
|
|
|
|
|
|
|
return type;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
void ha_partition::print_error(int error, myf errflag)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::print_error");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
/* Should probably look for my own errors first */
|
2006-06-04 17:52:22 +02:00
|
|
|
DBUG_PRINT("enter", ("error: %d", error));
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-12-15 13:20:56 +01:00
|
|
|
if (error == HA_ERR_NO_PARTITION_FOUND)
|
2006-06-15 01:40:06 +02:00
|
|
|
m_part_info->print_no_partition_found(table);
|
2005-12-15 13:20:56 +01:00
|
|
|
else
|
2006-08-08 14:52:51 +02:00
|
|
|
m_file[m_last_part]->print_error(error, errflag);
|
2005-07-18 13:31:02 +02:00
|
|
|
DBUG_VOID_RETURN;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
bool ha_partition::get_error_message(int error, String *buf)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::get_error_message");
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
/* Should probably look for my own errors first */
|
2006-08-08 14:52:51 +02:00
|
|
|
DBUG_RETURN(m_file[m_last_part]->get_error_message(error, buf));
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE handler characteristics
|
|
|
|
****************************************************************************/
|
2008-10-05 00:40:30 +02:00
|
|
|
/**
|
|
|
|
alter_table_flags must be on handler/table level, not on hton level
|
|
|
|
since the ha_partition hton does not know what the underlying hton is.
|
|
|
|
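The result below is therefore the union of the partitioning hton's
flags and those of the first underlying handler (all partitions of
a table share one engine, so m_file[0] is representative).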
*/
|
|
|
|
uint ha_partition::alter_table_flags(uint flags)
|
|
|
|
{
|
|
|
|
DBUG_ENTER("ha_partition::alter_table_flags");
|
|
|
|
DBUG_RETURN(ht->alter_table_flags(flags) |
|
|
|
|
m_file[0]->alter_table_flags(flags));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
Check if a copy of the data is needed in ALTER TABLE.
|
|
|
|
*/
|
|
|
|
bool ha_partition::check_if_incompatible_data(HA_CREATE_INFO *create_info,
|
|
|
|
uint table_changes)
|
|
|
|
{
|
|
|
|
handler **file;
|
2008-10-30 09:25:25 +01:00
|
|
|
bool ret= COMPATIBLE_DATA_YES;
|
2008-10-05 00:40:30 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
The check for any partitioning-related changes has already been done
|
|
|
|
in mysql_alter_table (by fix_partition_func), so it is only up to
|
|
|
|
the underlying handlers.
|
|
|
|
*/
|
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
if ((ret= (*file)->check_if_incompatible_data(create_info,
|
|
|
|
table_changes)) !=
|
|
|
|
COMPATIBLE_DATA_YES)
|
|
|
|
break;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
Support of fast or online add/drop index
|
|
|
|
*/
|
|
|
|
int ha_partition::add_index(TABLE *table_arg, KEY *key_info, uint num_of_keys)
|
|
|
|
{
|
|
|
|
handler **file;
|
2008-10-30 09:25:25 +01:00
|
|
|
int ret= 0;
|
2008-10-05 00:40:30 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
There has already been a check in fix_partition_func in mysql_alter_table
|
|
|
|
before this call, which checks for unique/primary key violations of the
|
|
|
|
partitioning function, so there is no need for an extra check here.
|
|
|
|
*/
|
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
if ((ret= (*file)->add_index(table_arg, key_info, num_of_keys)))
|
|
|
|
break;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
int ha_partition::prepare_drop_index(TABLE *table_arg, uint *key_num,
|
|
|
|
uint num_of_keys)
|
|
|
|
{
|
|
|
|
handler **file;
|
2008-10-30 09:25:25 +01:00
|
|
|
int ret= 0;
|
2008-10-05 00:40:30 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
DROP INDEX does not affect partitioning.
|
|
|
|
*/
|
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
if ((ret= (*file)->prepare_drop_index(table_arg, key_num, num_of_keys)))
|
|
|
|
break;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
int ha_partition::final_drop_index(TABLE *table_arg)
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
int ret= HA_ERR_WRONG_COMMAND;
|
|
|
|
|
|
|
|
for (file= m_file; *file; file++)
|
|
|
|
if ((ret= (*file)->final_drop_index(table_arg)))
|
|
|
|
break;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
/*
|
|
|
|
If frm_error() is called then we will use this to find out what file
|
|
|
|
extensions exist for the storage engine. This is also used by the default
|
|
|
|
rename_table and delete_table method in handler.cc.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static const char *ha_partition_ext[]=
|
|
|
|
{
|
|
|
|
ha_par_ext, NullS
|
|
|
|
};
|
|
|
|
|
|
|
|
const char **ha_partition::bas_ext() const
|
|
|
|
{ return ha_partition_ext; }
|
|
|
|
|
|
|
|
|
2006-01-17 08:40:00 +01:00
|
|
|
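/*
  Helper: fold a per-handler "maximum capability" getter (passed as a
  pointer to member function) over all partitions and return the
  smallest value, since the partitioned table can only support what
  every underlying handler supports (e.g. max_supported_key_length()).
*/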
uint ha_partition::min_of_the_max_uint(
|
|
|
|
uint (handler::*operator_func)(void) const) const
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
uint min_of_the_max= ((*m_file)->*operator_func)();
|
|
|
|
|
|
|
|
for (file= m_file+1; *file; file++)
|
|
|
|
{
|
|
|
|
uint tmp= ((*file)->*operator_func)();
|
|
|
|
set_if_smaller(min_of_the_max, tmp);
|
|
|
|
}
|
|
|
|
return min_of_the_max;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::max_supported_key_parts() const
|
|
|
|
{
|
|
|
|
return min_of_the_max_uint(&handler::max_supported_key_parts);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::max_supported_key_length() const
|
|
|
|
{
|
|
|
|
return min_of_the_max_uint(&handler::max_supported_key_length);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::max_supported_key_part_length() const
|
|
|
|
{
|
|
|
|
return min_of_the_max_uint(&handler::max_supported_key_part_length);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::max_supported_record_length() const
|
|
|
|
{
|
|
|
|
return min_of_the_max_uint(&handler::max_supported_record_length);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::max_supported_keys() const
|
|
|
|
{
|
|
|
|
return min_of_the_max_uint(&handler::max_supported_keys);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::extra_rec_buf_length() const
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
uint max= (*m_file)->extra_rec_buf_length();
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
for (file= m_file, file++; *file; file++)
|
|
|
|
if (max < (*file)->extra_rec_buf_length())
|
|
|
|
max= (*file)->extra_rec_buf_length();
|
|
|
|
return max;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
uint ha_partition::min_record_length(uint options) const
|
|
|
|
{
|
|
|
|
handler **file;
|
|
|
|
uint max= (*m_file)->min_record_length(options);
|
2006-01-17 08:40:00 +01:00
|
|
|
|
2005-07-18 13:31:02 +02:00
|
|
|
for (file= m_file, file++; *file; file++)
|
|
|
|
if (max < (*file)->min_record_length(options))
|
|
|
|
max= (*file)->min_record_length(options);
|
|
|
|
return max;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/****************************************************************************
|
|
|
|
MODULE compare records
|
|
|
|
****************************************************************************/
|
|
|
|
/*
|
2006-01-17 08:40:00 +01:00
|
|
|
Compare two positions
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
cmp_ref()
|
|
|
|
ref1 First position
|
|
|
|
ref2 Second position
|
|
|
|
|
|
|
|
RETURN VALUE
|
|
|
|
<0 ref1 < ref2
|
|
|
|
0 Equal
|
|
|
|
>0 ref1 > ref2
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
We get two references and need to check if those records are the same.
|
|
|
|
If they belong to different partitions we decide that they are not
|
|
|
|
the same record. Otherwise we use the particular handler to decide if
|
|
|
|
they are the same. Sort in partition id order if not equal.
|
2005-07-18 13:31:02 +02:00
|
|
|
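Reference layout (as the code below assumes): the first
PARTITION_BYTES_IN_POS bytes of a position hold the partition id,
read back with uint2korr(); the underlying handler's own position
follows. Unequal id prefix bytes therefore decide the ordering.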
*/
|
|
|
|
|
2007-05-10 11:59:39 +02:00
|
|
|
int ha_partition::cmp_ref(const uchar *ref1, const uchar *ref2)
{
  uint part_id;
  my_ptrdiff_t diff1, diff2;
  handler *file;
  DBUG_ENTER("ha_partition::cmp_ref");

  if ((ref1[0] == ref2[0]) && (ref1[1] == ref2[1]))
  {
    /* Same partition: delegate to the underlying handler's cmp_ref. */
    part_id= uint2korr(ref1);
    DBUG_ASSERT(part_id < m_tot_parts);
    file= m_file[part_id];
    DBUG_RETURN(file->cmp_ref((ref1 + PARTITION_BYTES_IN_POS),
                              (ref2 + PARTITION_BYTES_IN_POS)));
  }
  diff1= ref2[1] - ref1[1];
  diff2= ref2[0] - ref1[0];
  if (diff1 > 0)
  {
    DBUG_RETURN(-1);
  }
  if (diff1 < 0)
  {
    DBUG_RETURN(+1);
  }
  if (diff2 > 0)
  {
    DBUG_RETURN(-1);
  }
  DBUG_RETURN(+1);
}
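
#ifdef NOT_USED
/*
  Illustrative sketch only (not part of the handler): the row reference
  compared by cmp_ref() above is the 2-byte partition id, stored
  little-endian with int2store() and read back with uint2korr(), followed
  by the underlying handler's own reference. This assumes
  PARTITION_BYTES_IN_POS == 2; the example_* name is hypothetical.
*/
static void example_pack_partition_ref(uchar *ref, uint part_id,
                                       const uchar *engine_ref,
                                       uint engine_ref_len)
{
  int2store(ref, part_id);                 /* partition id prefix */
  memcpy(ref + PARTITION_BYTES_IN_POS,     /* engine-specific remainder */
         engine_ref, engine_ref_len);
}
#endif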

/****************************************************************************
                MODULE auto increment
****************************************************************************/
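
/*
  Reset the cached auto_increment state in table_share->ha_data and
  forward the reset to every underlying partition handler, stopping at
  the first error.
*/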
int ha_partition::reset_auto_increment(ulonglong value)
{
  handler **file= m_file;
  int res;
  HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
  DBUG_ENTER("ha_partition::reset_auto_increment");
  lock_auto_increment();
  ha_data->auto_inc_initialized= FALSE;
  ha_data->next_auto_inc_val= 0;
  do
  {
    if ((res= (*file)->ha_reset_auto_increment(value)) != 0)
      break;
  } while (*(++file));
  unlock_auto_increment();
  DBUG_RETURN(res);
}
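
/*
  Note: m_file is a null-terminated array with one handler per partition,
  which is why the loops in this module iterate either as
  "do { ... } while (*(++file))" or "for (file= m_file; *file; file++)".
*/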

/**
  This method is called by update_auto_increment which in turn is called
  by the individual handlers as part of write_row. We normally use the
  cached value in table_share->ha_data->next_auto_inc_val, and search all
  partitions for the highest auto_increment value only when the cache is
  not yet initialized. If the auto_increment field is a secondary part of
  a key, we must instead search every partition while holding a mutex, to
  be sure of correctness.
*/
void ha_partition::get_auto_increment(ulonglong offset, ulonglong increment,
                                      ulonglong nb_desired_values,
                                      ulonglong *first_value,
                                      ulonglong *nb_reserved_values)
{
  DBUG_ENTER("ha_partition::get_auto_increment");
  DBUG_PRINT("info", ("offset: %lu inc: %lu desired_values: %lu "
                      "first_value: %lu", (ulong) offset, (ulong) increment,
                      (ulong) nb_desired_values, (ulong) *first_value));
  DBUG_ASSERT(increment && nb_desired_values);
  *first_value= 0;
  if (table->s->next_number_keypart)
  {
    /*
      next_number_keypart is != 0 if the auto_increment column is a
      secondary column in the index (which is allowed in MyISAM).
    */
    DBUG_PRINT("info", ("next_number_keypart != 0"));
    ulonglong nb_reserved_values_part;
    ulonglong first_value_part, max_first_value;
    handler **file= m_file;
    first_value_part= max_first_value= *first_value;
    /* Must lock and find the highest value among all partitions. */
    lock_auto_increment();
    do
    {
      /* Only nb_desired_values = 1 makes sense here. */
      (*file)->get_auto_increment(offset, increment, 1,
                                  &first_value_part,
                                  &nb_reserved_values_part);
      if (first_value_part == ~(ulonglong)(0)) // error in one partition
      {
        *first_value= first_value_part;
        /* Log that the error occurred between table and partition handler. */
        sql_print_error("Partition failed to reserve auto_increment value");
        unlock_auto_increment();
        DBUG_VOID_RETURN;
      }
      DBUG_PRINT("info", ("first_value_part: %lu", (ulong) first_value_part));
      set_if_bigger(max_first_value, first_value_part);
    } while (*(++file));
    *first_value= max_first_value;
    *nb_reserved_values= 1;
    unlock_auto_increment();
  }
  else
  {
    THD *thd= ha_thd();
    HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
    /*
      This is initialized in the beginning of the first write_row call.
    */
    DBUG_ASSERT(ha_data->auto_inc_initialized);
    /*
      Get a lock for handling the auto_increment in table_share->ha_data
      to avoid two concurrent statements getting the same number.
    */

    lock_auto_increment();

    /*
      In a multi-row insert statement like INSERT SELECT and LOAD DATA,
      where the number of candidate rows to insert is not known in advance,
      we must hold a lock/mutex for the whole statement if we have
      statement based replication. Because the statement-based binary log
      contains only the first generated value used by the statement, and
      slaves assume all other generated values used by this statement were
      consecutive to this first one, we must exclusively lock the generator
      until the statement is done.
    */
    if (!auto_increment_safe_stmt_log_lock &&
        thd->lex->sql_command != SQLCOM_INSERT &&
        mysql_bin_log.is_open() &&
        !thd->current_stmt_binlog_row_based &&
        (thd->options & OPTION_BIN_LOG))
    {
      DBUG_PRINT("info", ("locking auto_increment_safe_stmt_log_lock"));
      auto_increment_safe_stmt_log_lock= TRUE;
    }

    /* This gets corrected (for offset/increment) in update_auto_increment. */
    *first_value= ha_data->next_auto_inc_val;
    ha_data->next_auto_inc_val+= nb_desired_values * increment;

    unlock_auto_increment();
    DBUG_PRINT("info", ("*first_value: %lu", (ulong) *first_value));
    *nb_reserved_values= nb_desired_values;
  }
  DBUG_VOID_RETURN;
}
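
#ifdef NOT_USED
/*
  Minimal sketch of the reservation arithmetic above, with the locking
  reduced to comments. Each caller takes nb_desired_values * increment
  values in one locked step, so two concurrent statements always receive
  disjoint ranges. The example_* names are hypothetical and the counter
  is assumed to start at 1.
*/
static ulonglong example_next_val= 1;

static ulonglong example_reserve(ulonglong nb_desired_values,
                                 ulonglong increment)
{
  ulonglong first;
  /* lock_auto_increment() would be taken here */
  first= example_next_val;
  example_next_val+= nb_desired_values * increment;
  /* unlock_auto_increment() would be released here */
  return first;   /* caller uses first, first + increment, ... */
}
#endif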
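
/*
  Called at statement end: if no later reservation was made, lower the
  cached next_auto_inc_val to what the statement actually used, and
  release the statement-wide lock taken in get_auto_increment for
  statement-based binlogging.
*/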
void ha_partition::release_auto_increment()
{
  DBUG_ENTER("ha_partition::release_auto_increment");

  if (table->s->next_number_keypart)
  {
    for (uint i= 0; i < m_tot_parts; i++)
      m_file[i]->ha_release_auto_increment();
  }
  else if (next_insert_id)
  {
    HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
    ulonglong next_auto_inc_val;
    lock_auto_increment();
    next_auto_inc_val= ha_data->next_auto_inc_val;
    if (next_insert_id < next_auto_inc_val &&
        auto_inc_interval_for_cur_row.maximum() >= next_auto_inc_val)
      ha_data->next_auto_inc_val= next_insert_id;
    DBUG_PRINT("info", ("ha_data->next_auto_inc_val: %lu",
                        (ulong) ha_data->next_auto_inc_val));

    /* Unlock the multi-row statement lock taken in get_auto_increment. */
    if (auto_increment_safe_stmt_log_lock)
    {
      auto_increment_safe_stmt_log_lock= FALSE;
      DBUG_PRINT("info", ("unlocking auto_increment_safe_stmt_log_lock"));
    }

    unlock_auto_increment();
  }
  DBUG_VOID_RETURN;
}


/****************************************************************************
                MODULE initialize handler for HANDLER call
****************************************************************************/

void ha_partition::init_table_handle_for_HANDLER()
{
  return;
}


/****************************************************************************
                MODULE enable/disable indexes
****************************************************************************/

/*
  Disable indexes for a while
  SYNOPSIS
    disable_indexes()
    mode              Mode
  RETURN VALUES
    0                 Success
    != 0              Error
*/

int ha_partition::disable_indexes(uint mode)
{
  handler **file;
  int error= 0;

  for (file= m_file; *file; file++)
  {
    if ((error= (*file)->ha_disable_indexes(mode)))
      break;
  }
  return error;
}


/*
  Enable indexes again
  SYNOPSIS
    enable_indexes()
    mode              Mode
  RETURN VALUES
    0                 Success
    != 0              Error
*/

int ha_partition::enable_indexes(uint mode)
{
  handler **file;
  int error= 0;

  for (file= m_file; *file; file++)
  {
    if ((error= (*file)->ha_enable_indexes(mode)))
      break;
  }
  return error;
}


/*
  Check if indexes are disabled
  SYNOPSIS
    indexes_are_disabled()

  RETURN VALUES
    0                 Indexes are enabled
    != 0              Indexes are disabled
*/

int ha_partition::indexes_are_disabled(void)
{
  handler **file;
  int error= 0;

  for (file= m_file; *file; file++)
  {
    if ((error= (*file)->indexes_are_disabled()))
      break;
  }
  return error;
}
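
#ifdef NOT_USED
/*
  Illustrative sketch only: a typical caller (such as the bulk-load path
  behind ALTER TABLE ... DISABLE KEYS) drives these methods through the
  ha_disable_indexes()/ha_enable_indexes() wrappers with one of the
  HA_KEY_SWITCH_* constants from my_base.h. The example_* name is
  hypothetical and error handling is omitted.
*/
static void example_toggle_indexes(handler *h)
{
  (void) h->ha_disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE); /* before load */
  /* ... bulk insert rows ... */
  (void) h->ha_enable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE);  /* rebuild keys */
}
#endif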


/****************************************************************************
                MODULE Partition Share
****************************************************************************/
/*
  Service routines for ... methods.
  -------------------------------------------------------------------------
  Variables for partition share methods. A hash is used to track open
  tables. A mutex for the hash table and an init variable to check if the
  hash table is initialized. There is also a constant ending of the
  partition handler file name.
*/

#ifdef NOT_USED
static HASH partition_open_tables;
static pthread_mutex_t partition_mutex;
static int partition_init= 0;


/*
  Function we use in the creation of our hash to get the key.
*/
static uchar *partition_get_key(PARTITION_SHARE *share, size_t *length,
                                my_bool not_used __attribute__ ((unused)))
{
  *length= share->table_name_length;
  return (uchar *) share->table_name;
}
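
/*
  Note (illustrative): hash_init() in get_share() below registers
  partition_get_key as the hash-get-key callback, so lookups such as
  hash_search(&partition_open_tables, (uchar*) table_name, length)
  compare against the table_name/table_name_length pair returned here.
*/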

/*
  Example of simple lock controls. The "share" it creates is a structure
  we will pass to each partition handler. Do you have to have one of
  these? Well, you have pieces that are used for locking, and they are
  needed to function.
*/
|
|
|
|
|
|
|
|
static PARTITION_SHARE *get_share(const char *table_name, TABLE *table)
|
|
|
|
{
|
|
|
|
PARTITION_SHARE *share;
|
|
|
|
uint length;
|
|
|
|
char *tmp_name;
|
|
|
|
|
|
|
|
/*
|
|
|
|
So why does this exist? There is no way currently to init a storage
|
|
|
|
engine.
|
|
|
|
Innodb and BDB both have modifications to the server to allow them to
|
|
|
|
do this. Since you will not want to do this, this is probably the next
|
|
|
|
best method.
|
|
|
|
*/
|
|
|
|
if (!partition_init)
|
|
|
|
{
|
|
|
|
/* Hijack a mutex for init'ing the storage engine */
|
|
|
|
pthread_mutex_lock(&LOCK_mysql_create_db);
|
|
|
|
if (!partition_init)
|
|
|
|
{
|
|
|
|
partition_init++;
|
|
|
|
VOID(pthread_mutex_init(&partition_mutex, MY_MUTEX_INIT_FAST));
|
|
|
|
(void) hash_init(&partition_open_tables, system_charset_info, 32, 0, 0,
|
|
|
|
(hash_get_key) partition_get_key, 0, 0);
|
|
|
|
}
|
|
|
|
pthread_mutex_unlock(&LOCK_mysql_create_db);
|
|
|
|
}
|
|
|
|
pthread_mutex_lock(&partition_mutex);
|
|
|
|
length= (uint) strlen(table_name);
|
|
|
|
|
|
|
|
if (!(share= (PARTITION_SHARE *) hash_search(&partition_open_tables,
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions was done:
- Changed byte to uchar
- Changed gptr to uchar*
- Change my_string to char *
- Change my_size_t to size_t
- Change size_s to size_t
Removed declaration of byte, gptr, my_string, my_size_t and size_s.
Following function parameter changes was done:
- All string functions in mysys/strings was changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocoll functions changed to use size_t instead of uint
- Functions that used a pointer to a string length was changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrom(), ha_discover and handler::discover()
the type for argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some cast to my_multi_malloc() arguments for safety (as string lengths
needs to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitely as this conflict was often hided by casting the function to
hash_get_key).
- Changed some buffers to memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not be depending on uint size
(portability fix)
- Removed windows specific code to restore cursor position as this
causes slowdown on windows and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated function comment to
reflect this. Changed function that depended on original behavior of
my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00
|
|
|
(uchar *) table_name, length)))
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
if (!(share= (PARTITION_SHARE *)
|
|
|
|
my_multi_malloc(MYF(MY_WME | MY_ZEROFILL),
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions was done:
- Changed byte to uchar
- Changed gptr to uchar*
- Change my_string to char *
- Change my_size_t to size_t
- Change size_s to size_t
Removed declaration of byte, gptr, my_string, my_size_t and size_s.
Following function parameter changes was done:
- All string functions in mysys/strings was changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocoll functions changed to use size_t instead of uint
- Functions that used a pointer to a string length was changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrom(), ha_discover and handler::discover()
the type for argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some cast to my_multi_malloc() arguments for safety (as string lengths
needs to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitely as this conflict was often hided by casting the function to
hash_get_key).
- Changed some buffers to memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not be depending on uint size
(portability fix)
- Removed windows specific code to restore cursor position as this
causes slowdown on windows and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated function comment to
reflect this. Changed function that depended on original behavior of
my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
2007-05-10 11:59:39 +02:00
|
|
|
&share, (uint) sizeof(*share),
|
|
|
|
&tmp_name, (uint) length + 1, NullS)))
|
2005-07-18 13:31:02 +02:00
|
|
|
{
|
|
|
|
pthread_mutex_unlock(&partition_mutex);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
share->use_count= 0;
|
|
|
|
share->table_name_length= length;
|
|
|
|
share->table_name= tmp_name;
|
|
|
|
strmov(share->table_name, table_name);
|
WL#3817: Simplify string / memory area types and make things more consistent (first part)
The following type conversions was done:
- Changed byte to uchar
- Changed gptr to uchar*
- Change my_string to char *
- Change my_size_t to size_t
- Change size_s to size_t
Removed the declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were made:
- All string functions in mysys/strings were changed to use size_t
  instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to using
  void *, as this requires fewer casts in the code and is more in line with
  how the standard functions work.
- Added an extra length argument to dirname_part() to return the length of
  the created string.
- Changed (at least) the following functions to take uchar* as argument:
  - db_dump()
  - my_net_write()
  - net_write_command()
  - net_store_data()
  - DBUG_DUMP()
  - decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
  argument of my_uncompress() from a pointer to a value, as we only return
  one value (makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm(), writefrm(), ha_discover() and handler::discover()
  the type of the 'frmdata' argument to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (removed a
  lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of unneeded casts.
- Added a few new casts required by the other changes.
- Added some casts to my_multi_malloc() arguments for safety (as string
  lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This had to be
  done explicitly, as the conflict was often hidden by casting the function
  to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to get
  rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar.
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to
  size_t. This was needed to properly detect errors, which are returned as
  (size_t) -1 (a sketch follows this list).
- Removed some very old VMS code.
- Changed packfrm()/unpackfrm() to not depend on uint size
  (portability fix).
- Removed Windows-specific code that restored the cursor position, as it
  caused a slowdown on Windows, and we should not mix read() and pread()
  calls anyway since that is not thread safe. Updated the function comment
  to reflect this. Changed the one function that depended on the original
  behavior of my_pwrite() to restore the cursor position itself.
- Added some missing checks of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long'
  overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect
  that m_size is the number of elements in the array, not a string/memory
  length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
  Inlined max_row_length_blob() into this function.
- Added more function comments.
- Fixed some compiler warnings when compiling without partitions.
- Removed the setting of LEX_STRING() arguments in declarations
  (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some alloc_root() + memcpy() calls with
    strmake_root()/strdup_root().
  - Changed some memdup() calls to strmake() (safety fix).
  - Simpler loops in client-simple.c.
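To make the my_read()/my_write() item above concrete, here is a minimal
sketch of the error-detection pattern it describes. It is illustrative
only and not part of the patch; read_header() and its arguments are
hypothetical, while my_read() and MY_FILE_ERROR are the real mysys names.

#include <my_global.h>
#include <my_sys.h>

/*
  MY_FILE_ERROR is now (size_t) -1, so the result of my_read() must be
  stored in a size_t (not uint) for the error comparison to be reliable.
*/
static my_bool read_header(File file, uchar *buff, size_t length)
{
  size_t bytes_read= my_read(file, buff, length, MYF(0));
  if (bytes_read == MY_FILE_ERROR)      /* (size_t) -1 signals failure */
    return TRUE;
  return bytes_read != length;          /* treat a short read as an error */
}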
2007-05-10 11:59:39 +02:00
|
|
|
if (my_hash_insert(&partition_open_tables, (uchar *) share))
|
2005-07-18 13:31:02 +02:00
|
|
|
goto error;
|
|
|
|
thr_lock_init(&share->lock);
|
|
|
|
pthread_mutex_init(&share->mutex, MY_MUTEX_INIT_FAST);
|
|
|
|
}
|
|
|
|
share->use_count++;
|
|
|
|
pthread_mutex_unlock(&partition_mutex);
|
|
|
|
|
|
|
|
return share;
|
|
|
|
|
|
|
|
error:
|
|
|
|
pthread_mutex_unlock(&partition_mutex);
|
2007-05-10 11:59:39 +02:00
|
|
|
my_free((uchar*) share, MYF(0));
|
2005-07-18 13:31:02 +02:00
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
Free lock controls. We call this whenever we close a table. If the table
|
|
|
|
had the last reference to the share then we free memory associated with
|
|
|
|
it.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int free_share(PARTITION_SHARE *share)
|
|
|
|
{
|
|
|
|
pthread_mutex_lock(&partition_mutex);
|
|
|
|
if (!--share->use_count)
|
|
|
|
{
|
2007-05-10 11:59:39 +02:00
|
|
|
hash_delete(&partition_open_tables, (uchar *) share);
|
2005-07-18 13:31:02 +02:00
|
|
|
thr_lock_delete(&share->lock);
|
|
|
|
pthread_mutex_destroy(&share->mutex);
|
2007-05-10 11:59:39 +02:00
|
|
|
my_free((uchar*) share, MYF(0));
|
2005-07-18 13:31:02 +02:00
|
|
|
}
|
|
|
|
pthread_mutex_unlock(&partition_mutex);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif /* NOT_USED */
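For illustration only: the share code above is compiled out under NOT_USED,
but a handler using it would typically pair the two calls as sketched
below. The class name ha_partition_example and its 'share' and 'lock'
members are assumptions for this sketch, not code from this file.

int ha_partition_example::open(const char *name, int mode, uint test_if_locked)
{
  if (!(share= get_share(name, table)))  /* bumps share->use_count */
    return 1;
  thr_lock_data_init(&share->lock, &lock, NULL);
  return 0;
}

int ha_partition_example::close(void)
{
  return free_share(share);              /* frees the share on last reference */
}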
|
2006-04-13 22:49:29 +02:00
|
|
|
|
2006-05-28 14:51:01 +02:00
|
|
|
struct st_mysql_storage_engine partition_storage_engine=
|
2006-09-27 06:26:04 +02:00
|
|
|
{ MYSQL_HANDLERTON_INTERFACE_VERSION };
|
2006-04-13 22:49:29 +02:00
|
|
|
|
|
|
|
mysql_declare_plugin(partition)
|
|
|
|
{
|
|
|
|
MYSQL_STORAGE_ENGINE_PLUGIN,
|
2006-05-28 14:51:01 +02:00
|
|
|
&partition_storage_engine,
|
|
|
|
"partition",
|
2006-04-13 22:49:29 +02:00
|
|
|
"Mikael Ronstrom, MySQL AB",
|
2006-05-28 14:51:01 +02:00
|
|
|
"Partition Storage Engine Helper",
|
2006-10-05 09:41:29 +02:00
|
|
|
PLUGIN_LICENSE_GPL,
|
2006-05-28 14:51:01 +02:00
|
|
|
partition_initialize, /* Plugin Init */
|
2006-04-13 22:49:29 +02:00
|
|
|
NULL, /* Plugin Deinit */
|
2006-05-04 18:39:47 +02:00
|
|
|
0x0100, /* 1.0 */
|
2006-08-30 23:27:29 +02:00
|
|
|
NULL, /* status variables */
|
|
|
|
NULL, /* system variables */
|
|
|
|
NULL /* config options */
|
2006-04-13 22:49:29 +02:00
|
|
|
}
|
|
|
|
mysql_declare_plugin_end;
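As a rough guide to the descriptor above: the server calls the "Plugin
Init" function with a pointer to the engine's handlerton. The sketch below
shows the general shape of such an init function; it is illustrative only
(example_initialize() is hypothetical), and the real partition_initialize()
is defined earlier in this file.

static int example_initialize(void *p)
{
  handlerton *hton= (handlerton *)p;    /* the server passes the handlerton */
  hton->state= SHOW_OPTION_YES;         /* mark the engine as available */
  return 0;                             /* nonzero would fail plugin loading */
}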
|
|
|
|
|
|
|
|
#endif
|