Allow DROP TABLE `#mysql50##sql-...._.` to drop tables that were
being rebuilt by ALGORITHM=INPLACE
NOTE: If the server is killed after the table-rebuilding ALGORITHM=INPLACE
commits inside InnoDB but before the .frm file has been replaced, then
the recovery will involve something other than DROP TABLE.
NOTE: If the server is killed after a true inplace ALTER TABLE commits
inside InnoDB but before the .frm file has been replaced, then we
are really out of luck. To properly handle that situation, we would
need a transactional mysql.ddl_fixup table that directs recovery to
rename or remove files.
prepare_inplace_alter_table_dict(): Use the altered_table->s->table_name
for generating the new_table_name.
table_name_t::part_suffix: The start of the partition name suffix.
table_name_t::dbend(): Return the end of the schema name.
table_name_t::dblen(): Return the length of the schema name, in bytes.
table_name_t::basename(): Return the name without the schema name.
table_name_t::part(): Return the partition name, or NULL if none.
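A minimal, self-contained sketch of these accessors, assuming the
canonical InnoDB name layout "database/table#P#partition"; the member
m_name and the "#P#" suffix follow the descriptions above, while the
main() harness is illustrative only:

  #include <cstdio>
  #include <cstring>

  struct table_name_t
  {
    char* m_name;

    /** The start of the partition name suffix. */
    static const char part_suffix[];

    /** @return the end of the schema name (the '/' separator) */
    const char* dbend() const { return strchr(m_name, '/'); }

    /** @return the length of the schema name, in bytes */
    size_t dblen() const { return size_t(dbend() - m_name); }

    /** @return the name without the schema name */
    const char* basename() const { return dbend() + 1; }

    /** @return the partition name, or NULL if none */
    const char* part() const { return strstr(basename(), part_suffix); }
  };

  const char table_name_t::part_suffix[] = "#P#";

  int main()
  {
    char name[] = "test/t1#P#p0";
    table_name_t n = { name };
    printf("dblen=%zu basename=%s part=%s\n",
           n.dblen(), n.basename(), n.part() ? n.part() : "(none)");
  }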
row_drop_table_for_mysql(): Assert for #sql, not #sql-ib.
fseg_alloc_free_page_low(): Remove a bogus and redundant assertion about
fil_space_t::purpose. The debug function fsp_space_modify_check()
is asserting something similar, but more accurately.
The problem was a merge error from MySQL wsrep, i.e. Galera.
wsrep_on_check
New check function. Galera can't be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
During a Galera asynchronous kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, we fall back to First-Come-First-Served (FCFS)
with a notice to the user (see the sketch below).
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
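A self-contained sketch of the innobase_init() fallback mentioned
above; the enum values mimic the real ones, while the wsrep_on
stand-in and the harness are illustrative:

  #include <cstdio>

  enum innodb_lock_schedule_algorithm_t {
    INNODB_LOCK_SCHEDULE_ALGORITHM_FCFS,
    INNODB_LOCK_SCHEDULE_ALGORITHM_VATS
  };

  static innodb_lock_schedule_algorithm_t innodb_lock_schedule_algorithm
    = INNODB_LOCK_SCHEDULE_ALGORITHM_VATS;
  static bool wsrep_on = true;  /* stand-in for the real WSREP_ON check */

  int main()
  {
    if (wsrep_on
        && innodb_lock_schedule_algorithm
           == INNODB_LOCK_SCHEDULE_ALGORITHM_VATS) {
      /* VATS is not supported with Galera: fall back to FCFS
         with a notice to the user. */
      fprintf(stderr, "Note: innodb_lock_schedule_algorithm=VATS is not"
                      " supported with Galera; falling back to FCFS.\n");
      innodb_lock_schedule_algorithm = INNODB_LOCK_SCHEDULE_ALGORITHM_FCFS;
    }
  }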
lock_reset_lock_and_trx_wait
Use ib::hex() to print out transaction ID.
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
RecLock::add_to_waitq,
lock_rec_lock_slow,
lock_table_other_has_incompatible,
lock_rec_insert_check_and_lock,
lock_prdt_other_has_conflicting
Changed the pointer to the conflicting lock into a normal pointer,
as the contents this pointer points to could be changed later.
RecLock::create
The conflicting lock pointer is moved to the last parameter, with a
default value of NULL. The conflicting transaction could be selected
as a victim in Galera if the requesting transaction is a BF (brute
force) transaction; in that case the contents of the conflicting lock
pointer will be changed. Use ib::hex() to print transaction IDs.
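A minimal stand-in for ib::hex() (the real helper lives in InnoDB's
ut0ut.h), showing how a transaction ID ends up printed in hexadecimal;
the exact formatting of the real class may differ:

  #include <cstdint>
  #include <iostream>

  namespace ib {
  /* Wraps an integer so that operator<< prints it in hexadecimal. */
  struct hex {
    uint64_t val;
    explicit hex(uint64_t v) : val(v) {}
  };
  inline std::ostream& operator<<(std::ostream& os, const hex& h)
  {
    return os << "0x" << std::hex << h.val << std::dec;
  }
  }

  int main()
  {
    uint64_t trx_id = 123456789;
    std::cout << "conflicting transaction " << ib::hex(trx_id) << '\n';
  }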
Introduce the debug flag trx_t::persistent_stats to suppress the
assertion for the updates of persistent statistics during fast
shutdown.
dict_stats_exec_sql(): Do execute the statement even though shutdown
has been initiated.
dict_stats_exec_sql(): Expect the caller to always provide a transaction.
Remove some redundant assertions. The caller must hold dict_sys->mutex,
but holding dict_operation_lock is only necessary for accessing
data dictionary tables, which we are not accessing.
dict_stats_save_index_stat(): Acquire dict_sys->mutex
for invoking dict_stats_exec_sql().
dict_stats_save(), dict_stats_update_for_index(), dict_stats_update(),
dict_stats_drop_index(), dict_stats_delete_from_table_stats(),
dict_stats_delete_from_index_stats(), dict_stats_drop_table(),
dict_stats_rename_in_table_stats(), dict_stats_rename_in_index_stats(),
dict_stats_rename_table(): Use a single caller-provided
transaction that is started and committed or rolled back by the caller.
dict_stats_process_entry_from_recalc_pool(): Let the caller provide
a transaction object.
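A self-contained sketch of the caller-provided transaction pattern
described above; Transaction and dict_stats_save() here are simplified
stand-ins for trx_t and the real dict0stats functions:

  #include <cstdio>
  #include <stdexcept>

  struct Transaction {
    bool active = false;
    void start()    { active = true; }
    void commit()   { active = false; }
    void rollback() { active = false; }
  };

  /* Stand-in for dict_stats_save() and friends: instead of creating
     its own transaction, it requires one that the caller has started. */
  void dict_stats_save(Transaction& trx)
  {
    if (!trx.active) throw std::logic_error("caller must start the trx");
    /* ... update the persistent statistics tables here ... */
  }

  int main()
  {
    Transaction trx;
    trx.start();              /* the caller owns the transaction */
    try {
      dict_stats_save(trx);   /* several updates can share one trx */
      trx.commit();
    } catch (...) {
      trx.rollback();
    }
  }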
ha_innobase::open(): Pass a transaction to dict_stats_init().
ha_innobase::create(), ha_innobase::discard_or_import_tablespace():
Pass a transaction to dict_stats_update().
ha_innobase::rename_table(): Pass a transaction to
dict_stats_rename_table(). We do not use the same transaction
as the one that updated the data dictionary tables, because
we already released the dict_operation_lock. (FIXME: there is
a race condition; a lock wait on SYS_* tables could occur
in another DDL transaction until the data dictionary transaction
is committed.)
ha_innobase::info_low(): Pass a transaction to dict_stats_update()
when calculating persistent statistics.
alter_stats_norebuild(), alter_stats_rebuild(): Update the
persistent statistics as well. In this way, a single transaction
will be used for updating the statistics of a whole table, even
for partitioned tables.
ha_innobase::commit_inplace_alter_table(): Drop statistics for
all partitions when adding or dropping virtual columns, so that
the statistics will be recalculated on the next handler::open().
This is a refactored version of the Oracle Bug#22469660 fix.
RecLock::add_to_waitq(), lock_table_enqueue_waiting():
Do not allow a lock wait to occur for updating statistics
in a data dictionary transaction, such as DROP TABLE. Instead,
return the previously unused error code DB_QUE_THR_SUSPENDED.
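A sketch of the rule added to the lock-wait enqueue functions; the
dberr_t codes and the dictionary-transaction flag mirror the
description above, the rest is illustrative:

  #include <cstdio>

  enum dberr_t { DB_SUCCESS, DB_LOCK_WAIT, DB_QUE_THR_SUSPENDED };

  struct trx_t { bool dict_operation; };

  dberr_t enqueue_waiting(const trx_t* trx)
  {
    if (trx->dict_operation) {
      /* A data dictionary transaction (such as DROP TABLE) must not
         block on a lock wait while updating statistics. */
      return DB_QUE_THR_SUSPENDED;
    }
    return DB_LOCK_WAIT;
  }

  int main()
  {
    trx_t ddl = { true };
    printf("ddl trx gets %d (DB_QUE_THR_SUSPENDED)\n",
           enqueue_waiting(&ddl));
  }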
row_merge_lock_table(), row_mysql_lock_table(): Remove dead code
for handling DB_QUE_THR_SUSPENDED.
row_drop_table_for_mysql(), row_truncate_table_for_mysql():
Drop the statistics as part of the data dictionary transaction.
After TRUNCATE TABLE, the statistics will be recalculated on
subsequent ha_innobase::open(), similar to how the logic after
the above-mentioned Oracle Bug#22469660 fix in
ha_innobase::commit_inplace_alter_table() works.
btr_defragment_thread(): Use a single transaction object for
updating defragmentation statistics.
dict_stats_save_defrag_stats(), dict_stats_save_defrag_summary(),
dict_stats_process_entry_from_defrag_pool(),
dict_defrag_process_entries_from_defrag_pool():
Add a parameter for the transaction.
dict_stats_empty_table(): Make public. This will be called by
row_truncate_table_for_mysql() after dropping persistent statistics,
to clear the memory-based statistics as well.
Silence the error log output that was introduced in MySQL 5.7
(MariaDB 10.2.2) if log_warnings=2 or less.
We should still figure out what these messages really indicate
and how to solve the problems.
pc_sleep_if_needed(): Add a parameter for the current time,
so that there will be fewer successive calls to ut_time_ms()
with no I/O between them.
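A sketch of the pc_sleep_if_needed() change: the caller samples
ut_time_ms() once per loop iteration and passes the value down; the
parameter list and the sleeping policy here are simplified assumptions:

  #include <chrono>
  #include <cstdint>
  #include <thread>

  /* Simplified stand-in for ut_time_ms(). */
  static uint64_t ut_time_ms()
  {
    using namespace std::chrono;
    return uint64_t(duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count());
  }

  /* Sleep until next_loop_time, based on a timestamp the caller
     already obtained, instead of calling ut_time_ms() again. */
  static void pc_sleep_if_needed(uint64_t next_loop_time, uint64_t cur_time)
  {
    if (next_loop_time > cur_time)
      std::this_thread::sleep_for(
          std::chrono::milliseconds(next_loop_time - cur_time));
  }

  int main()
  {
    const uint64_t cur_time = ut_time_ms(); /* sampled once per iteration */
    pc_sleep_if_needed(cur_time + 10, cur_time);
  }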
buf_flush_page_cleaner_coordinator(): Exit the first loop
whenever shutdown has been requested. At the start of the loop,
call ut_time_ms() only once. Do not display the message if
log_warnings=2 or less.
srv_purge_wakeup(): If thd_destructor_proxy has initiated the first
step of shutdown, ensure that all purge threads terminate.
logs_empty_and_mark_files_at_shutdown(): Add a debug assertion.
(The purge threads should have been shut down already before this step.)
dict_stats_exec_sql(): Refuse the operation if shutdown has been
initiated.
The real fix would be to update the persistent statistics as part
of the data dictionary transactions. To do this, we should move the
storage of InnoDB persistent statistics to the InnoDB data files,
and maybe also remove the InnoDB data dictionary.
Also, MDEV-14317: When ALTER TABLE is aborted, do not write garbage
pages to data files.
As pointed out by Shaohua Wang, the merge of MDEV-13328 from
MariaDB 10.1 (based on MySQL 5.6) to 10.2 (based on 5.7)
was performed incorrectly.
Let us always pass a non-NULL FlushObserver* when writing
to data files is desired.
FlushObserver::is_partial_flush(): Check if this is a bulk-load
(partial flush of the tablespace).
FlushObserver::is_interrupted(): Check for interrupt status.
buf_LRU_flush_or_remove_pages(): Instead of trx_t*, take
FlushObserver* as a parameter.
buf_flush_or_remove_pages(): Remove the parameters flush, trx.
If observer!=NULL, write out the data pages. Use the new predicate
observer->is_partial_flush() to distinguish a partial tablespace flush
(after bulk-loading) from a full tablespace flush (export).
Return a bool (whether all pages were removed from the flush_list).
buf_flush_dirty_pages(): Remove the parameter trx.
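A sketch of how the new predicates steer the flush-or-remove logic;
only the predicate names and the observer!=NULL convention come from
the description above, the class shape and harness are illustrative:

  #include <cstdio>

  class FlushObserver {
    bool m_partial;     /* bulk-load: only part of the tablespace flushed */
    bool m_interrupted;
  public:
    explicit FlushObserver(bool partial)
      : m_partial(partial), m_interrupted(false) {}
    bool is_partial_flush() const { return m_partial; }
    bool is_interrupted() const { return m_interrupted; }
  };

  /* Stand-in for buf_flush_or_remove_pages(): with a non-NULL observer
     the dirty pages are written out; returns whether all pages were
     removed from the flush_list. */
  bool buf_flush_or_remove_pages(const FlushObserver* observer)
  {
    if (!observer) {
      /* Discard the pages without writing them to the data files. */
      return true;
    }
    if (observer->is_partial_flush()) {
      /* ... partial tablespace flush after bulk-loading ... */
    } else {
      /* ... full tablespace flush, e.g. for export ... */
    }
    return !observer->is_interrupted();
  }

  int main()
  {
    FlushObserver bulk_load(true);
    printf("all removed: %d\n", buf_flush_or_remove_pages(&bulk_load));
  }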
This is caused by the following change:
commit 95d29c99f01882ffcc2259f62b3163f9b0e80c75
Author: Marko Mäkelä <marko.makela@oracle.com>
Date: Tue Nov 27 11:12:13 2012 +0200
Bug#15920445 INNODB REPORTS ER_DUP_KEY BEFORE CREATE UNIQUE INDEX COMPLETED
There is a phase during online secondary index creation where the index has
been internally completed inside InnoDB, but does not 'officially' exist yet.
We used to report ER_DUP_KEY in these situations, like this:
ERROR 23000: Can't write; duplicate key in table 't1'
What we should do is let the 'offending' operation complete, but
report an error to the ALTER TABLE statement instead:
ALTER TABLE t1 ADD UNIQUE KEY (c2);
ERROR HY000: Index c2 is corrupted
(This misleading error message should be fixed separately:
Bug#15920713 CREATE UNIQUE INDEX REPORTS ER_INDEX_CORRUPT INSTEAD OF DUPLICATE)
row_ins_sec_index_entry_low(): flag the index corrupted instead of
reporting a duplicate, in case the index has not been published yet.
rb:1614 approved by Jimmy Yang
The problem is that after we have found a duplicate key on the primary
key, we continue to acquire the necessary gap locks in secondary indexes
to block concurrent transactions from inserting the searched records.
However, a search from a unique index used in a foreign key constraint
could return DB_NO_REFERENCED_ROW if INSERT .. ON DUPLICATE KEY UPDATE
does not contain a value for the foreign key column. In this case
we should return the original DB_DUPLICATE_KEY error instead
of DB_NO_REFERENCED_ROW.
Consider the following example:
create table child(a int not null primary key,
b int not null,
c int,
unique key (b),
foreign key (b) references
parent (id)) engine=innodb;
insert into child values (1,1,2);
insert into child(a) values (1) on duplicate key update c = 3;
Now the primary key value 1 naturally causes a duplicate key error,
which will be stored in node->duplicate. If there were no duplicate key
error, we should return the actual no-referenced-row error. As the value
for column b, used in both the unique key and the foreign key, is not
provided, the server uses 0 as the search value. This is naturally not
found, leading to DB_NO_REFERENCED_ROW. But we should update the row
with primary key value 1 anyway, as requested by the ON DUPLICATE KEY
UPDATE clause.
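A sketch of the error-priority rule described above; the dberr_t codes
and node->duplicate reflect the description, while the surrounding
function and the bool flag are illustrative:

  #include <cstdio>

  enum dberr_t { DB_SUCCESS, DB_DUPLICATE_KEY, DB_NO_REFERENCED_ROW };

  struct ins_node_t { bool duplicate; };

  /* If a duplicate key was already recorded, report it instead of the
     no-referenced-row error, so that ON DUPLICATE KEY UPDATE can
     proceed to update the existing row. */
  dberr_t resolve_insert_error(const ins_node_t* node, dberr_t err)
  {
    if (err == DB_NO_REFERENCED_ROW && node->duplicate)
      return DB_DUPLICATE_KEY;
    return err;
  }

  int main()
  {
    ins_node_t node = { true };
    printf("%d\n", resolve_insert_error(&node, DB_NO_REFERENCED_ROW));
  }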
row_log_table_apply_op(): Remove references to dict_table_t::n_vcols.
Virtual column information is no longer being written to the log.
row_log_t: Remove the unused fields n_old_col, n_old_vcol.
When MySQL 5.7 introduced indexed virtual columns, it introduced
several bugs into the online table-rebuilding ALTER, that is,
the row_log_table_apply() family of functions.
The online_log format that was introduced for online table-rebuilding
ALTER in MySQL 5.6 should be sufficient. Ideally, any indexed virtual
column values would be evaluated based on the log records in the temporary
file. There is no need to log virtual column values.
(For ADD INDEX, that is, row_log_apply(), we must always log the key
values, regardless of whether the columns are virtual.)
Because omitting the virtual column values removes any chance of
row_log_table_apply() working with indexed virtual columns, we
will for now refuse LOCK=NONE in table-rebuilding ALTER operations
when indexes on virtual columns exist. This restriction would be
lifted in MDEV-14341.
innobase_indexed_virtual_exist(): New predicate, to determine if
indexed virtual columns exist in a table definition (see the sketch
after this list of changes).
ha_innobase::check_if_supported_inplace_alter(): Refuse online rebuild
if indexed virtual columns exist.
rec_get_converted_size_temp_v(), rec_convert_dtuple_to_temp_v(): Remove.
row_log_table_delete(), row_log_table_update(), row_log_table_insert():
Remove parameters for virtual columns.
trx_undo_read_v_rows(): Remove the col_map parameter.
row_log_table_apply(): Do not deal with virtual columns.
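A self-contained sketch of the innobase_indexed_virtual_exist()
predicate referenced above; the TABLE/KEY shapes are simplified
stand-ins for the server's structures:

  #include <vector>

  struct Field { bool is_virtual; };
  struct KeyPart { const Field* field; };
  struct Key { std::vector<KeyPart> parts; };
  struct TableDef { std::vector<Key> keys; };

  /* Return true if any key part refers to a virtual column. */
  static bool innobase_indexed_virtual_exist(const TableDef& table)
  {
    for (const Key& key : table.keys)
      for (const KeyPart& part : key.parts)
        if (part.field->is_virtual)
          return true;
    return false;
  }

  int main()
  {
    Field vcol = { true };
    TableDef t;
    t.keys.push_back(Key{ { KeyPart{ &vcol } } });
    /* exit status 0 when an indexed virtual column exists */
    return innobase_indexed_virtual_exist(t) ? 0 : 1;
  }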
* created tests focusing on multi-master conflicts during cascading
foreign key processing
* in row0upd.cc, call wsrep_row_upd_check_foreign_constraints only when
running in a cluster
* in row0ins.cc, fixed a regression from MW-369, which caused a crash
with MW-402.test