partially revert 76063c2a13. Item::clone() is not an all-purpose
Item copying machine, it was specifically created for pushdown
of predicates into derived tables and views and it does not
copy everything. In particular, it does not copy Item_func_regex.
Fix the bug differently by preserving the old constraint name.
But keep setting automatic_name=true to have it regenerated
for cases like ALTER TABLE ... ADD CONSTRAINT.
Ever since commit 007f68c37f,
ALTER TABLE no longer invokes handler::open() after
handler::commit_inplace_alter_table().
ha_innobase::reload_statistics(): Reload or recompute statistics
after ALTER TABLE.
innodb_notify_tabledef_changed(): A new function to invoke
ha_innobase::reload_statistics().
handlerton::notify_tabledef_changed(): Add the parameter handler*
so that ha_innobase::reload_statistics() can be invoked.
ha_partition::notify_tabledef_changed(),
partition_notify_tabledef_changed(): Pass through the call
to any partitions or subpartitions.
This is based on code that was supplied by Monty.
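To illustrate the plumbing, here is a minimal hedged sketch (simplified
stand-in types, not the real server headers; the real hook takes more
parameters) of a hook that receives a handler* and of a partition handler
passing the call through:

  #include <vector>

  struct handler;                      // stand-in for the server's handler class

  struct handlerton
  {
    // the handler* parameter lets the engine reach per-table state,
    // e.g. to reload statistics after ALTER TABLE
    bool (*notify_tabledef_changed)(handlerton *hton, handler *file);
  };

  struct handler
  {
    handlerton *ht= nullptr;
    virtual ~handler()= default;
    virtual bool notify_tabledef_changed()
    {
      return ht && ht->notify_tabledef_changed &&
             ht->notify_tabledef_changed(ht, this);
    }
  };

  // ha_partition-style pass-through: forward to every (sub)partition handler
  struct ha_partition_sketch : handler
  {
    std::vector<handler*> m_file;      // one handler per (sub)partition
    bool notify_tabledef_changed() override
    {
      for (handler *part : m_file)
        if (part->notify_tabledef_changed())
          return true;                 // propagate the first failure
      return false;
    }
  };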
* be strict in CREATE TABLE, just like in ALTER TABLE, because
CREATE TABLE, just like ALTER TABLE, can be rolled back for any engine
* but don't auto-convert warnings into errors for engine warnings
(handler::create) - this matches ALTER TABLE behavior
* and not when creating a default record either, because these errors are
handled specially (and replaced with ER_INVALID_DEFAULT)
* always issue a Note when a non-unique key is truncated, because a Note,
unlike a Warning, cannot be converted to an Error. Before this commit
it was a Note for blobs and a Warning for all other data types.
..causes error on slave.
Cause: if the master doesn't have the frm file for the table,
DROP TABLE code will call ha_delete_table_force() to drop the table
in all available storage engines.
The issue was that this code path didn't check for
HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE flag for the storage engine,
and so did not add "... IF EXISTS" to the statement that's written
to the binary log. This can cause an error on the slave when it tries to
drop a table that's already gone.
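A hedged sketch of the intended check (only the flag name comes from this
text; the types, the bit value and the helper are illustrative):

  #include <string>

  // Illustrative bit value; only the flag name is from the commit text.
  static const unsigned HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE= 1U << 0;

  struct handlerton { unsigned flags; };

  // Build the statement text for the binary log: when the engine is marked
  // as possibly missing the table on a slave, add IF EXISTS so the slave
  // does not fail on a table that is already gone.
  std::string binlog_drop_stmt(const handlerton *hton, const std::string &table)
  {
    bool add_if_exists= hton && (hton->flags & HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE);
    return std::string("DROP TABLE ") + (add_if_exists ? "IF EXISTS " : "") + table;
  }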
Though this is an error message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is described as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, although key name generation happens close to
the final stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before certain checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then other inappropriate errors are issued
instead, like `ER_DUP_FIELDNAME`.
To have the key name available in the error message, name generation has to
be done before the checks, which entailed more changes.
The final design for key initialization in `mysql_prepare_create_table`
follows. The initialization is done in three phases:
1. Calculate the total number of keys to create, taking ignored keys into
account. Allocate the KEY* buffer.
2. Generate unique names; calculate the total number of key parts.
Make the early checks. Allocate the KEY_PART_INFO* buffer.
3. Initialize the key parts and make the rest of the checks.
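As a rough illustration of that three-phase shape (stand-in types, not the
real mysql_prepare_create_table code):

  #include <cstddef>
  #include <string>
  #include <vector>

  struct Key_spec { std::string name; std::size_t n_parts; bool ignored; };
  struct KEY { std::string name; };
  struct KEY_PART_INFO { };

  // Returns false on success, mirroring the server convention.
  bool prepare_keys(const std::vector<Key_spec> &specs,
                    std::vector<KEY> &key_info,
                    std::vector<KEY_PART_INFO> &key_parts)
  {
    // Phase 1: count surviving keys (ignored ones excluded); allocate KEY buffer
    std::size_t key_count= 0;
    for (const Key_spec &k : specs)
      if (!k.ignored)
        key_count++;
    key_info.resize(key_count);

    // Phase 2: generate unique names and count key parts *before* the heavy
    // checks, so any later error message can already name the key;
    // allocate the KEY_PART_INFO buffer
    std::size_t part_count= 0, i= 0;
    for (const Key_spec &k : specs)
    {
      if (k.ignored)
        continue;
      key_info[i].name= k.name.empty() ? "key_" + std::to_string(i) : k.name;
      part_count+= k.n_parts;
      i++;
    }
    key_parts.resize(part_count);

    // Phase 3: initialize key parts and run the remaining checks
    // (duplicate names, column existence, lengths, ...)
    return false;
  }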
This failure was caused by several bugs:
- Someone had removed s3-slave-ignore-updates=1 from slave.cnf, which
caused the slave to remove files that the master was working on.
- Bug in ha_partition::change_partitions() that didn't reset m_new_file
in case of errors. This caused crashes in ha_maria::extra() as the
maria handler was called on files that were already closed.
- In ma_pagecache there was a bug: on a read error for a big block
(S3 block), the flag PCBLOCK_BIG_READ was left on for the page, which
caused an assert when the page was flushed.
- Flush all cached tables in case of ignored ALTER TABLE
Note: when merging the 10.3 code that fixes the partition bug, use the
code from this patch instead.
Changes to ma_pagecache.cc written or reviewed by Sanja.
This commit fixes the problems with S3 after the "DROP TABLE FORCE" changes.
It also fixes all failing S3 replication tests.
A slave is delayed if it is still executing replicated queries on a table
that the master has already converted to S3 later in the binlog.
Fixes for replication events on S3 tables for delayed slaves:
- INSERT and INSERT ... SELECT and CREATE TABLE are ignored but written
to the binary log. UPDATE & DELETE will be fixed in a future commit.
Other things:
- On slaves with --s3-slave-ignore-updates set, allow S3 tables to be
opened in read-write mode. This was done to be able to
ignore-but-replicate queries like INSERT. Without this change any
open of an S3 table failed with 'Table is read only', which is too
early to be able to replicate the original query.
- Errors are now printed if handler::extra() call fails in
wait_while_tables_are_used().
- Error messages for row changes are changed from HA_ERR_WRONG_COMMAND
to HA_ERR_TABLE_READONLY.
- Disable some maria_extra() calls for S3 tables. These calls could cause
S3 tables to fail in some cases.
- Added missing thr_lock_delete() to ma_open() in case of failure.
- Removed the unneeded argument 'table' from mysql_prepare_insert().
- Remove row_start/row_end from keys in fix_create_like();
- Disable manual adding of implicit row_start/row_end to indexes on
CREATE TABLE. INVISIBLE_SYSTEM fields are not operable by the user;
- Fix memory leak on allocation of Key_part_spec.
Fixes also:
MDEV-22674 Server crash in compare_bin ... restore_table_state_after_repair
The bug was that the 'can_enable_index' variable in MyISAM and Aria was
not properly set and reset for bulk insert.
Because of this, insert...select was trying to recreate indexes while
another thread was using them, causing crashes in the page cache.
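The gist of the fix, as a hedged sketch (the variable name follows this
text; the surrounding class is a simplified stand-in):

  // The flag must be set when bulk insert starts and reset when it ends,
  // so a concurrent user of the same index never sees a half-disabled state.
  struct ha_bulk_sketch
  {
    bool can_enable_index= false;

    void start_bulk_insert()
    {
      can_enable_index= true;      // indexes may be disabled, rebuilt at end
    }
    void end_bulk_insert()
    {
      if (can_enable_index)
      {
        /* re-enable / rebuild indexes here */
        can_enable_index= false;   // the reset that was missing
      }
    }
  };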
Related to 7c2ba9e: ha_table_exists() is replaced by
dd_frm_type(). ha_table_exists() checked the existence of the share, and
when that succeeded the code entered the branch of ha_delete_table()
where tdc_remove_table() was called. Now that branch is skipped because
dd_frm_type() returns TABLE_TYPE_UNKNOWN. Fix it by calling
tdc_remove_table() in this case as well.
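A hypothetical outline of the fix (stub types and bodies; the real
functions live in the server):

  enum Table_type { TABLE_TYPE_UNKNOWN, TABLE_TYPE_NORMAL, TABLE_TYPE_VIEW };

  struct THD { };

  static Table_type dd_frm_type_stub(THD*, const char*) { return TABLE_TYPE_UNKNOWN; }
  static void tdc_remove_table_stub(THD*, const char*, const char*) { /* evict share */ }

  void drop_by_path(THD *thd, const char *path, const char *db, const char *name)
  {
    Table_type type= dd_frm_type_stub(thd, path);
    // Before the fix the TABLE_TYPE_UNKNOWN case skipped this eviction;
    // now the cached share is removed in that case as well.
    tdc_remove_table_stub(thd, db, name);
    (void) type;
    /* ... continue with ha_delete_table() / the force-drop path ... */
  }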
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
dict_table_t::prepare_instant(): Add debug assertions to catch
ROW_FORMAT mismatch.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
normal DROP TABLE with many tables continues after an error,
trying to drop as many tables as possible. But DROP TEMPORARY TABLE
was aborting on the first error. Change it to behave as DROP TABLE does.
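A behavior sketch of the change (stand-in helper; the real code path is in
the server):

  #include <string>
  #include <vector>

  // Stand-in for the per-table drop; the real code path is in the server.
  static bool drop_one_tmp_table(const std::string&) { return true; }

  // Iterate the whole list and accumulate failures instead of returning on
  // the first one, matching what DROP TABLE already does.
  bool drop_temporary_tables_sketch(const std::vector<std::string> &tables)
  {
    bool error= false;
    for (const std::string &t : tables)
      error|= !drop_one_tmp_table(t);  // keep going after an error
    return error;                      // report the accumulated result once
  }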
don't do table discovery on DROP. DROP falls back to "force"
approach when a table isn't found and will try to drop in all
engines anyway. That is, trying to discover in all engines before
the drop is redundant and may be expensive.
first step in moving drop table out of the handler.
todo: other methods that don't need an open table.
For now hton->drop_table is optional, for backward-compatibility
reasons.
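A minimal sketch of the optional-hook-with-fallback shape (simplified
types; the real signature may differ):

  struct handlerton
  {
    int (*drop_table)(handlerton *hton, const char *path); // may be nullptr
  };

  static int legacy_drop_via_handler(handlerton*, const char*)
  {
    /* old path: instantiate a handler object and call its delete method */
    return 0;
  }

  int ha_drop_table_sketch(handlerton *hton, const char *path)
  {
    if (hton->drop_table)
      return hton->drop_table(hton, path); // new: no open table required
    return legacy_drop_via_handler(hton, path);
  }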
When converting a table (test.t1) from S3 to another engine, the
following will be logged to the binary log:
DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
INSERT rows to test.t1 in binary-row-log-format
The bug is that the above statements are logged one by one to the binary
log. This means that a fast slave, configured to use the same S3 storage
as the master, would be able to execute the DROP and CREATE from the
binary log before the master has finished the ALTER TABLE.
In this case the slave would ignore the DROP (as it's on an S3 table) but
would stop on the CREATE of the local table, as the table still exists in
S3. The REPLACE part will be ignored by the slave as it can't touch the
S3 table.
The fix is to ensure that all the above statements are written to the
binary log AFTER the table has been deleted from S3.
copy_data_between_tables() sets to->s->default_fields to 0, as a part
of the code disabling ON UPDATE actions for all old fields
(so ON UPDATE is enabled only for new fields during copying).
After the actual copying, copy_data_between_tables() did not restore
to->s->default_fields to the original value. As a result, the
TABLE_SHARE to->s was left in a wrong state after copy_data_between_tables()
and further open_table_from_share() using this TABLE_SHARE did not
populate TABLE::default_field, which further made
TABLE::evaluate_update_default_function() crash on access to a NULL
pointer.
Fix:
Change copy_data_between_tables() to restore to->s->default_fields
to its original value after the copying loop.
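The fix is a plain save/restore; a sketch of the pattern (field names from
the text above, surrounding types heavily simplified):

  struct TABLE_SHARE { unsigned default_fields; };
  struct TABLE { TABLE_SHARE *s; };

  int copy_data_between_tables_sketch(TABLE *to)
  {
    unsigned save_default_fields= to->s->default_fields;
    to->s->default_fields= 0;   // disable ON UPDATE for pre-existing fields
    /* ... the copying loop ... */
    to->s->default_fields= save_default_fields; // the restore that was missing
    return 0;
  }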
- IF EXISTS ends with a list of all non-existing objects, instead of a
separate note for every non-existing object
- Produce a "Note" for all wrongly dropped objects
(like trying to do DROP SEQUENCE for a normal table)
- Do not write existing tables that could not be dropped to binlog
Other things:
MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop
parent table referenced by FK
This was caused by an older version of this patch and was fixed later.
The code used is largely based on code from Tencent.
The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case the DROP TABLE
was not able to properly drop the table.
Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect that DROP TABLE should always work, even if the
table would not be consistent. There should not be a need to use a
separate keyword to ensure that the table is really deleted.
The solution used is:
- If the .frm file doesn't exist, try dropping the table from all storage
engines.
- If the .frm file exists but the table does not exist in the engine,
try dropping the table from all storage engines.
- Update storage engines using many table files (.CSV, MyISAM, Aria) to
succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
is not needed and always succeeds. This is used by ha_delete_table_force()
to know which handlers to ignore when trying to drop a table without
a .frm file (see the sketch below).
The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
if they know anything about the table.
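A hedged sketch of such a force-drop loop (only the flag name comes from
this text; the types and the bit value are illustrative):

  #include <vector>

  // Illustrative bit value; only the flag name is from the commit text.
  static const unsigned HTON_AUTOMATIC_DELETE_TABLE= 1U << 0;

  struct handlerton
  {
    unsigned flags;
    int (*drop_table)(handlerton*, const char *path);
  };

  // Ask every active engine to drop the table, skipping engines whose
  // files disappear automatically and thus need no explicit delete.
  bool ha_delete_table_force_sketch(const std::vector<handlerton*> &engines,
                                    const char *path)
  {
    bool dropped= false;
    for (handlerton *hton : engines)
    {
      if (hton->flags & HTON_AUTOMATIC_DELETE_TABLE)
        continue;                     // delete_table() not needed here
      if (hton->drop_table && hton->drop_table(hton, path) == 0)
        dropped= true;                // some engine knew about the table
    }
    return dropped;
  }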
Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
if the file doesn't exist (see the sketch after this list). This
simplifies some of the code.
- Don't clear thd->error in ha_delete_table() if there was an active
error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist.
This is a bug fix to handle the case where a drop table was aborted in
the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
same code path as when it's not used.
- Use non_existing_table_error() to detect if the table didn't exist.
The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
it can't be parsed, instead of leaving it hanging around (bug fix).
- InnoDB no longer prints an error about the .frm file being out of sync
with the InnoDB dictionary if the .frm file does not exist. This change
was required to be able to try to drop an InnoDB table when the .frm
doesn't exist.
- Fixed bug in mi_delete_table() where the .MYD file would not be dropped
if the .MYI file didn't exist.
- Fixed memory leak in Mroonga when deleting a non-existing table.
- Fixed memory leak in Connect when deleting a non-existing table.
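A mysys-style sketch of the MY_IGNORE_ENOENT idea mentioned above (the
flag's actual bit value and my_delete()'s real signature are in the
server; this is illustrative):

  #include <cerrno>
  #include <cstdio>

  typedef unsigned long myf;
  static const myf MY_IGNORE_ENOENT= 1UL << 0;  // illustrative bit value

  // Deleting a file that is already gone is not an error when the caller
  // passes MY_IGNORE_ENOENT.
  int my_delete_sketch(const char *name, myf flags)
  {
    if (std::remove(name) == 0)
      return 0;
    if (errno == ENOENT && (flags & MY_IGNORE_ENOENT))
      return 0;                       // missing file is fine for this caller
    return -1;
  }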
Bugs introduced by the original version of this commit and fixed here:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
other engines