mariadb/storage/archive/ha_archive.h
Davi Arnaut a5efb91dea Bug#49938: Failing assertion: inode or deadlock in fsp/fsp0fsp.c
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock

- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows never reflects
the actual number of rows removed.

- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (parent and child are
the same table). To work around this incompatible change and
still be able to truncate such tables, disable foreign key checks
with SET foreign_key_checks=0 before truncate. Alternatively, if
foreign key checks are necessary, please use a DELETE statement
without a WHERE condition.

Problem description:

The problem was that, for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with only a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. The problem originated in the fact
that there is no truncate-specific handler method, which led to
an abuse of the delete_all_rows method, a method primarily
intended for DELETE operations without a WHERE condition.

Solution:

The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single open instance of
the table when the method is invoked.

Also, the method is not invoked, and an error is thrown, if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistencies, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
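
To picture the shape of the change, here is a rough, self-contained
sketch; the class names, the error constant, and the default bodies are
illustrative stand-ins, not the actual handler API:

  /* Sketch only: every name below is a stand-in, not the server's own. */
  #include <cstdio>

  static const int ERR_NOT_SUPPORTED_SKETCH= 131;  /* "command not supported" */

  struct handler_sketch
  {
    virtual ~handler_sketch() {}
    virtual int delete_all_rows() { return ERR_NOT_SUPPORTED_SKETCH; }
    /* New hook: only ever invoked under an exclusive metadata lock. */
    virtual int truncate()        { return ERR_NOT_SUPPORTED_SKETCH; }
  };

  struct innodb_like_sketch : public handler_sketch
  {
    virtual int truncate() { /* internal drop + recreate */ return 0; }
  };

  int main()
  {
    innodb_like_sketch h;
    std::printf("truncate() -> %d\n", h.truncate());
    return 0;
  }

Engines that cannot do better than the row-wise path provide a trivial
override of their own (see the federated, heap and myisammrg notes below).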

mysql-test/suite/innodb/t/innodb-truncate.test:
  Add test cases for truncate and foreign key checks.
  Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
  The FK is not necessary; the test is related to auto-increment.

  Update the error number; truncate is no longer invoked if the
  table is a parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
  Update the error number; truncate is no longer invoked if the
  table is a parent in a FK relationship.

  Use delete instead of truncate; the test is used to check
  the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
  Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
  Update the error number; truncate is no longer invoked if the
  table is a parent in a FK relationship.
mysql-test/t/mdl_sync.test:
  Modify the test case to reflect and ensure that truncate takes
  an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
  Update the error number; truncate is no longer invoked if the
  table is a parent in a FK relationship.
sql/ha_partition.cc:
  Reorganize the various truncate methods. delete_all_rows is now
  passed directly to the underlying engines, as is truncate. The
  code responsible for truncating individual partitions is moved
  to ha_partition::truncate_partition, which is invoked when an
  ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.

  Since partition truncate can no longer be invoked via delete,
  the bitmap operations are no longer necessary. The explicit
  reset of the auto-increment value is also removed, as the
  underlying engines are now responsible for resetting the value.
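
  As a hedged sketch of the delegation described above (stand-in types,
  not the real ha_partition members), truncate simply fans out to every
  underlying partition handler, and each engine resets its own
  auto-increment value:

    /* Sketch with stand-in types: truncate fans out to all partitions. */
    #include <cstddef>
    #include <vector>

    struct part_handler_sketch
    {
      virtual ~part_handler_sketch() {}
      virtual int truncate() { return 0; }  /* engine resets auto-increment */
    };

    struct partition_handler_sketch
    {
      std::vector<part_handler_sketch*> parts;

      int truncate()
      {
        /* No bitmap juggling, no explicit auto-increment reset: delegate. */
        for (std::size_t i= 0; i < parts.size(); i++)
        {
          int error= parts[i]->truncate();
          if (error)
            return error;                   /* stop at the first failure */
        }
        return 0;
      }
    };
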
sql/handler.cc:
  Wire up the handler truncate method.
sql/handler.h:
  Introduce and document the truncate handler method. It takes
  over certain use cases of delete_all_rows.

  Add a method to retrieve the list of foreign keys referencing a
  table. The method is used to avoid truncating tables that are
  parents in a foreign key relationship.
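
  A sketch of how such a list might be consulted (all names below are
  hypothetical, not the new handler API): truncation is refused only when
  a referencing foreign key lives in a different table, so
  self-referencing constraints still pass:

    /* Sketch with stand-in types: reject truncate when another table
       references this one; purely self-referencing FKs are allowed. */
    #include <cstddef>
    #include <string>
    #include <vector>

    struct fk_ref_sketch
    {
      std::string child_db;
      std::string child_table;
    };

    static bool truncate_allowed_sketch(const std::string &db,
                                        const std::string &table,
                                        const std::vector<fk_ref_sketch> &refs)
    {
      for (std::size_t i= 0; i < refs.size(); i++)
        if (refs[i].child_db != db || refs[i].child_table != table)
          return false;            /* parent of a non-self-referencing FK */
      return true;                 /* no FKs, or only self-referencing ones */
    }
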
sql/share/errmsg-utf8.txt:
  Add error message for truncate and FK.
sql/sql_lex.h:
  Introduce a flag so that the partition engine can detect when
  a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
  Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
  Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
  Change the truncate table implementation to use the new truncate
  handler method and to no longer rely on row-by-row delete.

  The truncate handler method is always invoked under an exclusive
  metadata lock. Also, it is no longer possible to truncate a
  table that is a parent in a non-self-referencing foreign key
  relationship.
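
  Roughly, the flow described above looks like this (pseudocode-style C++
  with made-up helper names; the real implementation differs in detail):

    /* Sketch only: every identifier below is a stand-in for illustration. */
    enum truncate_result_sketch { OK_SKETCH, ERR_FK_PARENT_SKETCH };

    truncate_result_sketch truncate_table_sketch(bool parent_of_foreign_fk,
                                                 bool engine_recreates_table)
    {
      /* 1. Refuse if the table is a parent in a non-self-referencing FK. */
      if (parent_of_foreign_fk)
        return ERR_FK_PARENT_SKETCH;

      /* 2. Upgrade to an exclusive metadata lock (not shown). */

      /* 3. Either drop and recreate the table at the SQL layer, or call
            the engine's truncate() handler method under that lock. */
      if (engine_recreates_table)
      { /* drop + recreate */ }
      else
      { /* handler->truncate() */ }

      return OK_SKETCH;
    }
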
storage/archive/ha_archive.cc:
  Rename the method, as its description indicates that in the
  future this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
  Implement truncate as a no-op for the blackhole engine in
  order to remain compatible with older releases.
storage/federated/ha_federated.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
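
  The wrapper is essentially a one-liner; a sketch of the pattern
  (stand-in class name), which applies equally to the heap, ibmdb2i and
  myisammrg entries below:

    /* Sketch: an engine whose truncate() simply reuses the row-wise path. */
    struct federated_like_sketch
    {
      int delete_all_rows() { return 0; }   /* existing row-wise removal */
      int truncate()        { return delete_all_rows(); }
    };
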
storage/heap/ha_heap.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
storage/innobase/handler/ha_innodb.cc:
  Rename delete_all_rows to truncate. InnoDB now performs truncate
  under an exclusive metadata lock.

  Introduce and reorganize the methods used to retrieve the list
  of foreign keys referenced by, or referencing, a table.
storage/myisammrg/ha_myisammrg.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required in order to remain compatible with earlier
  releases where truncate would resort to a row-by-row delete.
2010-10-06 11:34:28 -03:00

/* Copyright (C) 2003 MySQL AB, 2008-2009 Sun Microsystems, Inc

  This program is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; version 2 of the License.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */
#ifdef USE_PRAGMA_INTERFACE
#pragma interface /* gcc class implementation */
#endif
#include <zlib.h>
#include "azlib.h"
/*
  Please read ha_archive.cc first. If you are looking for more general
  answers on how storage engines work, look at ha_example.cc and
  ha_example.h.
*/

typedef struct st_archive_record_buffer {
  uchar *buffer;
  uint32 length;
} archive_record_buffer;

typedef struct st_archive_share {
  char *table_name;
  char data_file_name[FN_REFLEN];
  uint table_name_length,use_count;
  mysql_mutex_t mutex;
  THR_LOCK lock;
  azio_stream archive_write;     /* Archive file we are working with */
  bool archive_write_open;
  bool dirty;                    /* Flag for if a flush should occur */
  bool crashed;                  /* Meta file is crashed */
  ha_rows rows_recorded;         /* Number of rows in tables */
  ulonglong mean_rec_length;
  char real_path[FN_REFLEN];
} ARCHIVE_SHARE;
/*
  Version for file format.
  1 - Initial Version (Never Released)
  2 - Stream Compression, separate blobs, no packing
  3 - One stream (row and blobs), with packing
*/
#define ARCHIVE_VERSION 3
class ha_archive: public handler
{
  THR_LOCK_DATA lock;         /* MySQL lock */
  ARCHIVE_SHARE *share;       /* Shared lock info */
  azio_stream archive;        /* Archive file we are working with */
  my_off_t current_position;  /* The position of the row we just read */
  uchar byte_buffer[IO_SIZE]; /* Initial buffer for our string */
  String buffer;              /* Buffer used for blob storage */
  ha_rows scan_rows;          /* Number of rows left in scan */
  bool delayed_insert;        /* If the insert is delayed */
  bool bulk_insert;           /* If we are performing a bulk insert */
  const uchar *current_key;
  uint current_key_len;
  uint current_k_offset;
  archive_record_buffer *record_buffer;
  bool archive_reader_open;

  archive_record_buffer *create_record_buffer(unsigned int length);
  void destroy_record_buffer(archive_record_buffer *r);
  int frm_copy(azio_stream *src, azio_stream *dst);

public:
  ha_archive(handlerton *hton, TABLE_SHARE *table_arg);
  ~ha_archive()
  {
  }
  const char *table_type() const { return "ARCHIVE"; }
  const char *index_type(uint inx) { return "NONE"; }
  const char **bas_ext() const;
  ulonglong table_flags() const
  {
    return (HA_NO_TRANSACTIONS | HA_REC_NOT_IN_SEQ | HA_CAN_BIT_FIELD |
            HA_BINLOG_ROW_CAPABLE | HA_BINLOG_STMT_CAPABLE |
            HA_STATS_RECORDS_IS_EXACT |
            HA_HAS_RECORDS |
            HA_FILE_BASED | HA_CAN_INSERT_DELAYED | HA_CAN_GEOMETRY);
  }
  ulong index_flags(uint idx, uint part, bool all_parts) const
  {
    return HA_ONLY_WHOLE_INDEX;
  }
  virtual void get_auto_increment(ulonglong offset, ulonglong increment,
                                  ulonglong nb_desired_values,
                                  ulonglong *first_value,
                                  ulonglong *nb_reserved_values);
  uint max_supported_keys() const { return 1; }
  uint max_supported_key_length() const { return sizeof(ulonglong); }
  uint max_supported_key_part_length() const { return sizeof(ulonglong); }
  ha_rows records() { return share->rows_recorded; }
  int index_init(uint keynr, bool sorted);
  virtual int index_read(uchar * buf, const uchar * key,
                         uint key_len, enum ha_rkey_function find_flag);
  virtual int index_read_idx(uchar * buf, uint index, const uchar * key,
                             uint key_len, enum ha_rkey_function find_flag);
  int index_next(uchar * buf);
  int open(const char *name, int mode, uint test_if_locked);
  int close(void);
  int write_row(uchar * buf);
  int real_write_row(uchar *buf, azio_stream *writer);
  int truncate();
  int rnd_init(bool scan=1);
  int rnd_next(uchar *buf);
  int rnd_pos(uchar * buf, uchar *pos);
  int get_row(azio_stream *file_to_read, uchar *buf);
  int get_row_version2(azio_stream *file_to_read, uchar *buf);
  int get_row_version3(azio_stream *file_to_read, uchar *buf);
  ARCHIVE_SHARE *get_share(const char *table_name, int *rc);
  int free_share();
  int init_archive_writer();
  int init_archive_reader();
  bool auto_repair() const { return 1; } // For the moment we just do this
  int read_data_header(azio_stream *file_to_read);
  void position(const uchar *record);
  int info(uint);
  void update_create_info(HA_CREATE_INFO *create_info);
  int create(const char *name, TABLE *form, HA_CREATE_INFO *create_info);
  int optimize(THD* thd, HA_CHECK_OPT* check_opt);
  int repair(THD* thd, HA_CHECK_OPT* check_opt);
  void start_bulk_insert(ha_rows rows);
  int end_bulk_insert();
  enum row_type get_row_type() const
  {
    return ROW_TYPE_COMPRESSED;
  }
  THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
                             enum thr_lock_type lock_type);
  bool is_crashed() const;
  int check(THD* thd, HA_CHECK_OPT* check_opt);
  bool check_and_repair(THD *thd);
  uint32 max_row_length(const uchar *buf);
  bool fix_rec_buff(unsigned int length);
  int unpack_row(azio_stream *file_to_read, uchar *record);
  unsigned int pack_row(uchar *record);
};