MEMORY table could be renamed into a non-existent database.
rename() is documented to return ENOENT when the source file does not
exist OR when the target directory does not exist. A nonexistent source
.frm file is OK (the table can still exist in the engine); a nonexistent
target directory is not.
Make my_rename use ENOTDIR for the latter case. Make RENAME TABLE
issue an appropriate error ("unknown database" instead of "unknown table").
Crash happened because in discover, table->work_part_info was not properly
reset before execution.
Fixed by resetting it before calling execute alter table, create table or
mysql_create_frm_image.
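A hedged sketch of the reset pattern, with THD and partition_info reduced
to stand-ins for the real server classes:

  // Simplified stand-ins; only the member relevant here is shown.
  struct partition_info { /* parsed PARTITION BY ... data */ };

  struct THD
  {
    partition_info *work_part_info= nullptr;  // may hold stale data from a previous statement
  };

  // Discovery may end up (re)creating the table: start from a clean slate so
  // stale partitioning state is not picked up by ALTER/CREATE/frm creation.
  static void reset_partition_state(THD *thd)
  {
    thd->work_part_info= nullptr;
  }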
Problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on the THD state
being correct.
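A hedged sketch of the cleanup contract, with Copy_field_ctx, the handler
and the transaction flag standing in for the real server objects:

  #include <memory>

  struct THD     { bool transaction_on= true; };
  struct handler { void start_bulk_insert() {} int end_bulk_insert() { return 0; } };
  struct Copy_field_ctx { /* per-column copy state */ };

  static int copy_data_between_tables(THD *thd, handler *to)
  {
    const bool save_transaction_on= thd->transaction_on;
    thd->transaction_on= false;              // as mysql_trans_prepare_alter_copy_data() does

    // Freed automatically on every exit path, including errors.
    std::unique_ptr<Copy_field_ctx> copy= std::make_unique<Copy_field_ctx>();
    to->start_bulk_insert();

    int error= 0;
    // ... row copy loop, may set error ...

    if (to->end_bulk_insert() && !error)     // must run even after a failure
      error= 1;
    thd->transaction_on= save_transaction_on;  // Aria relies on THD being restored
    return error;
  }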
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
ALTER TABLE ... ADD PARTITION modifies the open TABLE structure,
and sets table->need_reopen=1 to reset these modifications
in case of an error.
But under LOCK TABLES the table doesn't get reopened, despite need_reopen.
Fixed by reopening the need_reopen tables under LOCK TABLES.
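A hedged sketch of the reopen step, with TABLE and the locked-table list
reduced to stand-ins for the real structures:

  #include <vector>

  struct TABLE
  {
    bool need_reopen= false;      // set when ALTER modified the open structure
    void close()  { /* release the modified TABLE */ }
    void reopen() { need_reopen= false; /* re-read the table definition */ }
  };

  // Under LOCK TABLES the tables stay open across statements, so any instance
  // flagged with need_reopen must be explicitly closed and reopened to drop
  // the changes made by the failed ALTER TABLE ... ADD PARTITION.
  static void reopen_flagged_tables(std::vector<TABLE*> &locked_tables)
  {
    for (TABLE *t : locked_tables)
      if (t->need_reopen)
      {
        t->close();
        t->reopen();
      }
  }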
PREBUILT->TABLE->N_MYSQL_HANDLES_OPENED == 1
ANALYSIS:
=========
Adding a unique index to an InnoDB table which is locked as
multiple instances may trigger an InnoDB assert.
When we add a primary key or a unique index, we need to
drop the original table and rebuild all indexes. InnoDB
expects that only the instance of the table that is being
rebuilt is open during the process. In the current
scenario we have opened multiple instances of the table.
This triggers an assert during table rebuild.
'Locked_tables_list' encapsulates a list of all
instances of tables locked by the LOCK TABLES statement.
FIX:
===
We are now temporarily closing all the instances of the
table except the one which is being altered, and later
reopening them via Locked_tables_list::reopen_tables().
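A hedged sketch of the idea, with TABLE simplified to a name and an open
flag; the real reopen is done by Locked_tables_list::reopen_tables():

  #include <string>
  #include <vector>

  struct TABLE
  {
    std::string name;
    bool open= true;
  };

  // Before the rebuild, make sure InnoDB sees exactly one open handle: close
  // the other LOCK TABLES instances of the same table and remember them so
  // they can be reopened once the ALTER has finished.
  static std::vector<TABLE*> close_duplicates(std::vector<TABLE*> &locked,
                                              TABLE *being_altered)
  {
    std::vector<TABLE*> closed;
    for (TABLE *t : locked)
      if (t != being_altered && t->name == being_altered->name)
      {
        t->open= false;
        closed.push_back(t);
      }
    return closed;
  }

  static void reopen_tables(std::vector<TABLE*> &closed)
  {
    for (TABLE *t : closed)
      t->open= true;               // reopened after the table rebuild completes
  }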
The thd->lex->part_info should be kept intact during PS
execution; otherwise the second execution gets the modified part_info.
Let's modify thd->work_part_info instead.
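A hedged sketch of the work-on-a-copy rule; clone() is a hypothetical
stand-in for the real partition_info copying code, and LEX/THD are reduced
to the relevant members:

  struct partition_info
  {
    int num_parts= 0;
    partition_info *clone() const { return new partition_info(*this); }
  };

  struct LEX { partition_info *part_info= nullptr; };  // owned by the prepared statement

  struct THD
  {
    LEX *lex= nullptr;
    partition_info *work_part_info= nullptr;           // per-execution scratch copy
  };

  // Mutating thd->lex->part_info in place would make the next execution of
  // the same prepared statement see the already-modified tree, so all
  // per-execution changes go to thd->work_part_info.
  static void setup_partition_work_copy(THD *thd)
  {
    thd->work_part_info= thd->lex->part_info ? thd->lex->part_info->clone()
                                             : nullptr;
  }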
"tokudb_alter_table.drop_add_pk_part_104 leaves a temporary file behind"
Fixed by copying 3 lines from 10.1 to 10.0 that clean up the temporary
file for partitioned tables.
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
I am not able to reproduce a crash; however, there was no protection in
print_keydup_error() if the storage engine reported a wrong key number.
This patch adds such a protection and should stop any further crashes
in this case.
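A hedged sketch of the added protection, with KEY and TABLE reduced to the
fields used here; the real print_keydup_error() builds a proper error
message:

  #include <cstdio>

  struct KEY   { const char *name; };
  struct TABLE { unsigned keys; KEY *key_info; };

  static void print_keydup_error(const TABLE *table, unsigned errkey)
  {
    if (errkey >= table->keys)
      // The engine reported a bogus key number: fall back to a generic
      // message instead of reading key_info[] out of bounds.
      std::printf("Duplicate entry for a key in the table\n");
    else
      std::printf("Duplicate entry for key '%s'\n", table->key_info[errkey].name);
  }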
Other things:
- Added extra protection in Aria to not set errkey to more than the number
of keys. (I don't think this is the cause of this crash, but better safe
than sorry)
- Extend test_if_equal_repl_errors() to handle different cases of
ER_DUP_ENTRY. This is mainly a precaution for the future (see the sketch
below).
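A hedged sketch of the extended comparison; the numeric error values below
are illustrative constants, not authoritative:

  static const int ER_DUP_ENTRY= 1062;
  static const int ER_DUP_ENTRY_WITH_KEY_NAME= 1586;

  // Treat both flavours of the duplicate-key error as equal when comparing
  // the error the master logged with the error the slave got.
  static bool test_if_equal_repl_errors(int expected_error, int actual_error)
  {
    if (expected_error == actual_error)
      return true;
    const bool expected_dup= expected_error == ER_DUP_ENTRY ||
                             expected_error == ER_DUP_ENTRY_WITH_KEY_NAME;
    const bool actual_dup=   actual_error == ER_DUP_ENTRY ||
                             actual_error == ER_DUP_ENTRY_WITH_KEY_NAME;
    return expected_dup && actual_dup;
  }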
backport ce6c0e584e
MDEV-8960: Can't refer the same column twice in one ALTER TABLE
Problem was that when a column was created in ALTER TABLE and then
referred to again, it was not searched for in the list of current
columns.
mysql_prepare_alter_table:
There are two cases (sketched below):
(1) If ALTER TABLE adds a new column and a later alteration changes
the field definition, the list of new columns was not checked;
instead an incorrect error was given.
(2) If ALTER TABLE adds a new column and a later alteration changes
the default, the list of new columns was not checked; instead an
incorrect error was given.
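A hedged sketch of the intended lookup order, with Create_field reduced to
a name and std::vector standing in for the server's list type:

  #include <cstring>
  #include <vector>

  struct Create_field { const char *field_name; };

  static Create_field *find_column(std::vector<Create_field> &table_columns,
                                   std::vector<Create_field> &added_by_this_alter,
                                   const char *name)
  {
    for (Create_field &f : table_columns)
      if (!std::strcmp(f.field_name, name))
        return &f;
    // The missing check: a column added earlier in the same ALTER TABLE
    // (ADD COLUMN b ..., then MODIFY b ... or ALTER b SET DEFAULT ...)
    // must be found here instead of triggering an error.
    for (Create_field &f : added_by_this_alter)
      if (!std::strcmp(f.field_name, name))
        return &f;
    return nullptr;    // only now is an "unknown column" error justified
  }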
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that says whether
we should call clear_error() or not. By default it is called, but not
anymore from dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before, clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
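A hedged sketch of the resulting control flow, with THD reduced to the
error-handling members and the command path simplified:

  struct THD
  {
    int stmt_error= 0;

    void clear_error(bool force_reset_diagnostics= false)
    {
      stmt_error= 0;
      if (force_reset_diagnostics)
      { /* reset_diagnostics_area() even when no error was set */ }
    }

    void reset_for_next_command(bool do_clear_error= true)
    {
      if (do_clear_error)
        clear_error(true);
      /* ... reset the rest of the per-statement state ... */
    }
  };

  static void do_command(THD *thd)
  {
    thd->clear_error(true);              // before any memory allocation, so an
                                         // early max-thread-mem-used error survives
    /* ... read and allocate the command packet ... */
    thd->reset_for_next_command(false);  // dispatch path no longer wipes the error
    /* ... parse and execute ... */
  }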
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds such ddl mark for the missing cases.
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
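A hedged sketch of the save/restore, with Transaction_state standing in for
the real transaction context used by mysql_trans_commit_alter_copy_data():

  struct Transaction_state
  {
    unsigned unsafe_rollback_flags= 0;   // e.g. "created temporary table" / "did DDL" bits
  };

  struct THD { Transaction_state trans; };

  static int commit_alter_copy_data(THD *thd)
  {
    // The intermediate commit resets the transaction state, but the flags
    // are still needed when the statement is finally written to the binlog,
    // so save them and put them back after the commit.
    const unsigned saved_flags= thd->trans.unsafe_rollback_flags;
    int error= 0;
    /* ... commit the copied rows ... */
    thd->trans= Transaction_state();     // what the commit effectively does here
    thd->trans.unsafe_rollback_flags= saved_flags;
    return error;
  }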
Do not silence uncertain cases, or fix any bugs.
The only functional change should be that ha_federated::extra()
no longer calls DBUG_PRINT to report an unhandled case for
HA_EXTRA_PREPARE_FOR_DROP.