InnoDB insisted on closing the file handle before renaming a file.
Renaming a file should never be a problem on POSIX systems. On
Windows it should also work if the file was opened in FILE_SHARE_DELETE
mode.
fil_space_t::stop_ios: Remove. We no longer need to stop file access
during rename operations.
fil_mutex_enter_and_prepare_for_io(): Remove the wait for stop_ios.
fil_rename_tablespace(): Remove the retry logic; do not close the
file handle. Remove the unused fault injection that was added along
with the DATA DIRECTORY functionality (MySQL WL#5980).
os_file_create_simple_func(), os_file_create_func(),
os_file_create_simple_no_error_handling_func(): Include FILE_SHARE_DELETE
in the share_mode. (We will still prevent multiple InnoDB instances
from using the same files by not setting FILE_SHARE_WRITE.)
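A minimal sketch of the Windows open call with this share mode, assuming
only the plain Win32 API (this is not the actual os_file_create_func()
body):

  #include <windows.h>

  HANDLE open_data_file(const wchar_t *name)
  {
    // FILE_SHARE_DELETE allows renaming while the handle stays open;
    // FILE_SHARE_WRITE is deliberately omitted so another process cannot
    // open the same file for writing.
    const DWORD share_mode = FILE_SHARE_READ | FILE_SHARE_DELETE;
    return CreateFileW(name,
                       GENERIC_READ | GENERIC_WRITE,
                       share_mode,
                       NULL,                   /* default security */
                       OPEN_EXISTING,          /* data file must exist */
                       FILE_ATTRIBUTE_NORMAL,
                       NULL);                  /* no template file */
  }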
ha_innobase::optimize(): If both innodb_defragment and
innodb_optimize_fulltext_only are at their default settings (OFF),
fall back to ALTER TABLE. Otherwise, process one or both options.
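A rough sketch of that decision (the enum values below are illustrative
stand-ins for the server's admin return codes, not the real constants):

  // Hedged sketch of the ha_innobase::optimize() decision described above.
  enum optimize_result { OPTIMIZE_DONE, FALL_BACK_TO_ALTER_TABLE };

  optimize_result optimize_sketch(bool innodb_defragment,
                                  bool innodb_optimize_fulltext_only)
  {
    // Both options OFF: let the SQL layer rebuild the table via ALTER TABLE.
    optimize_result ret = FALL_BACK_TO_ALTER_TABLE;
    if (innodb_optimize_fulltext_only) {
      // ... optimize only the fulltext indexes ...
      ret = OPTIMIZE_DONE;
    }
    if (innodb_defragment) {
      // ... defragment the table's indexes ...
      ret = OPTIMIZE_DONE;
    }
    return ret;
  }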
ha_innobase::set_partition_owner_stats(): Remove (unused function).
ha_innobase::ha_partition_stats: Remove (the variable is never read).
Remove unused ut_timer functions.
For non-semi-join subquery optimization we make a cost-based decision
between Materialization and the IN->EXISTS transformation. The issue in
this case is that for the IN->EXISTS transformation we run
JOIN::reoptimize with the IN->EXISTS conditions and come up with a new
query plan. But when we compare the cost with Materialization and decide
to choose Materialization, we need to restore the query plan for
Materialization.
The saving and restoring of the keyuse array and the join_tab keyuse was
only done when there was at least one element in the keyuse_array; we now
do it even for 0 elements to keep the code general.
Also fixes MDEV-14727, MDEV-14491
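A generic sketch of that save/restore idea, with simplified stand-ins for
the KEYUSE structures (this is not the actual JOIN code):

  #include <vector>

  struct keyuse_stub { int table; int key; int keypart; };

  struct plan_snapshot {
    std::vector<keyuse_stub> keyuse; // copied unconditionally, even when empty
  };

  plan_snapshot save_plan(const std::vector<keyuse_stub> &keyuse_array)
  {
    return plan_snapshot{keyuse_array}; // no "only if non-empty" special case
  }

  void restore_plan(std::vector<keyuse_stub> &keyuse_array,
                    const plan_snapshot &saved)
  {
    keyuse_array = saved.keyuse; // back to the plan used for Materialization
  }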
Fix the hang "InnoDB: Error: Waited for 5 secs for hash index ref_count (1)
to drop to 0" by replacing the flawed wait logic in
dict_index_remove_from_cache_low().
On DISCARD TABLESPACE, there is no need to drop the adaptive hash index.
We must drop it on IMPORT TABLESPACE, and eventually on DROP TABLE or
DROP INDEX. As long as the dict_index_t object remains in the cache
and the table remains inaccessible, the adaptive hash index entries
pointing to orphaned pages would not do any harm. They would be dropped when
buffer pool pages are reused for something else.
btr_search_drop_page_hash_when_freed(), buf_LRU_drop_page_hash_batch():
Remove the parameter zip_size, and pass 0 to buf_page_get_gen().
buf_page_get_gen(): Ignore zip_size if mode==BUF_PEEK_IF_IN_POOL.
buf_LRU_drop_page_hash_for_tablespace(): Drop the adaptive hash index
even if the tablespace is inaccessible.
buf_LRU_drop_page_hash_for_tablespace(): New global function, to drop
the adaptive hash index.
buf_LRU_flush_or_remove_pages(), fil_delete_tablespace():
Remove the parameter drop_ahi.
dict_index_remove_from_cache_low(): Actively drop the adaptive hash index
if entries exist. This should prevent InnoDB hangs on DROP TABLE or
DROP INDEX.
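A hedged sketch of that retry logic; the types and the helper below are
simplified stand-ins for dict_index_t, the search-info ref_count and
buf_LRU_drop_page_hash_for_tablespace():

  #include <atomic>

  struct index_stub {
    std::atomic<unsigned> ahi_ref_count{0}; // adaptive hash index references
    unsigned space_id = 0;                  // tablespace of the index
  };

  void drop_page_hash_for_tablespace(unsigned space_id)
  {
    // ... walk the buffer pool and drop AHI entries for pages of this
    // tablespace, releasing their references ...
    (void) space_id;
  }

  void remove_index_from_cache_sketch(index_stub *index)
  {
    // Instead of sleeping and complaining after 5 seconds, actively drop
    // the adaptive hash index entries until no references remain.
    while (index->ahi_ref_count.load() > 0)
      drop_page_hash_for_tablespace(index->space_id);
    // ... the index object can now be freed safely ...
  }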
row_import_for_mysql(): Drop any adaptive hash index entries for the table.
row_drop_table_for_mysql(): Drop any adaptive hash index for the table,
except if the table resides in the system tablespace. (DISCARD TABLESPACE
does not apply to the system tablespace, and we do not want to drop the
adaptive hash index for other tables than the one that is being dropped.)
row_truncate_table_for_mysql(): Drop any adaptive hash index entries for
the table, except if the table resides in the system tablespace.
When the transaction isolation level is SERIALIZABLE, or when
a locking read is performed in the REPEATABLE READ isolation level,
InnoDB must lock delete-marked records in order to prevent another
transaction from inserting something.
However, at READ UNCOMMITTED or READ COMMITTED isolation level or
when the parameter innodb_locks_unsafe_for_binlog is set, the
repeatability of the reads does not matter, and there is no need
to lock any records.
row_search_for_mysql(): Skip locks on delete-marked committed records
upfront, instead of invoking row_unlock_for_mysql() afterwards.
The unlocking never worked for secondary index records.
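A hedged sketch of the upfront check, with simplified names and isolation
constants (the real code inspects the record and transaction state inside
row_search_for_mysql()):

  enum iso_sketch { READ_UNCOMMITTED, READ_COMMITTED,
                    REPEATABLE_READ, SERIALIZABLE };

  bool skip_lock_on_delete_marked(iso_sketch iso,
                                  bool rec_delete_marked,
                                  bool rec_committed)
  {
    // At the weaker isolation levels repeatability does not matter, so a
    // committed delete-marked record never needs to be locked.
    return iso <= READ_COMMITTED && rec_delete_marked && rec_committed;
  }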
Crash happened because in discover, table->work_part_info was not properly
reset before execution.
Fixed by resetting it before calling execute alter table, create table,
or mysql_create_frm_image.
Close MYSQL (and destroy THD) in the same thread where it was used,
because THD embeds MDL_context, that owns some LF_PINS, that remember
a pointer to my_thread_var->stack_ends_here.
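A minimal sketch of that rule, assuming the standard C client API (the
connection parameters are illustrative):

  #include <mysql.h>

  void worker_thread_body(const char *host, const char *user, const char *pwd)
  {
    MYSQL *con = mysql_init(nullptr);
    if (mysql_real_connect(con, host, user, pwd, nullptr, 0, nullptr, 0))
    {
      // ... run queries on this same thread ...
    }
    // Close here, in the thread that used the connection; do not hand the
    // handle (or the THD that owns the LF_PINS) to another thread to close.
    mysql_close(con);
  }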
MDEV-16123 ASAN heap-use-after-free handler::ha_index_or_rnd_end
MDEV-13828 Segmentation fault on RENAME TABLE
Problem was that the destructor called methods on a closed table.
Fixed by removing that code from the destructor.
Problem was that detection of temporary tables was all wrong for
RENAME TABLE.
(Temporary tables were opened by a top-level call to
open_temporary_tables(), which can't detect if a temporary table
was renamed to something and then reused).
Fixed by adding proper parsing of the rename list to check against
the current name of a table at each rename stage.
Also change do_rename_temporary() to check against the current
state of temporary tables, not according to the state at the start
of RENAME TABLE.
MDEV-10130 Assertion `share->in_trans == 0' failed in storage/maria/ma_close.c
MDEV-10378 Assertion `trn' failed in virtual int ha_maria::start_stmt
The problem was that maria_handler->trn was not properly reset
at commit/rollback and ha_maria::external_lock() could get confused
because of this.
There was some old code in ha_maria::implicit_commit() that tried
to take care of this, but it was not bullet proof.
Fixed by adding a list of all tables that are part of the maria
transaction to TRN.
A nice side effect of the fix is that the loops in
ha_maria::implicit_commit() became much simpler.
Other things:
- Fixed a bug in mysql_admin_table() where argument open_for_modify
was wrongly reset for the next table in the chain
- rollback admin command also in case of fatal error.
- Split _ma_set_trn_for_table() into three versions to simplify code
and debugging.
- Several new asserts to detect the original problem (that the file was
not properly removed from trn before calling ma_close())
upon SELECT .. LIMIT 0
The code must differentiate between a SELECT with contradictory
WHERE/HAVING and one with LIMIT 0.
Also, for the latter, print 'Zero limit' instead of 'Impossible where'
in the EXPLAIN output.
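A hedged sketch of the distinction (the two strings appear in the EXPLAIN
output; the helper itself is illustrative, not the server's code):

  const char *explain_extra_sketch(bool where_is_always_false,
                                   unsigned long long select_limit)
  {
    if (select_limit == 0)
      return "Zero limit";          // SELECT ... LIMIT 0
    if (where_is_always_false)
      return "Impossible where";    // contradictory WHERE/HAVING
    return nullptr;                 // ordinary plan, no special message
  }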
order with Galera and encrypt-tmp-files=1
Problem:- If trans_cache (IO_CACHE) uses an encrypted tmp file,
then the next DML statement will crash the server.
Case:-
Take a table t1 and do 2 inserts into it:
1. A really long insert so that trans_cache has to use a temp file
2. Just a small insert
Analysis:- Actually the server crashes from inside the galera
library.
/lib64/libc.so.6(abort+0x175)[0x7fb5ba779dc5]
/usr/lib64/galera/libgalera_smm.so(_ZN6galera3FSMINS_9TrxHandle5State...
mysys/stacktrace.c:247(my_print_stacktrace)[0x7fb5a714940e]
sql/signal_handler.cc:160(handle_fatal_signal)[0x7fb5a715c1bd]
sql/wsrep_hton.cc:257(wsrep_rollback)[0x7fb5bcce923a]
sql/wsrep_hton.cc:268(wsrep_rollback)[0x7fb5bcce9368]
sql/handler.cc:1658(ha_rollback_trans(THD*, bool))[0x7fb5bcd4f41a]
sql/handler.cc:1483(ha_commit_trans(THD*, bool))[0x7fb5bcd4f804]
but the actual issue is not in galera but in mariadb, because for the 2nd
insert we should never call rollback. We call rollback because
log_and_order fails; it fails because write_cache fails; that fails
because after reinit_io_cache(trans_cache), my_b_bytes_in_cache says 0,
so we look into tmp_file for data, which is obviously wrong since the
temp file was used for the previous insert and no longer exists.
wsrep_write_cache_inc() reads the IO_CACHE in a loop, filling it with
my_b_fill() until it returns "0 bytes read". Later
MYSQL_BIN_LOG::write_cache() does the same. wsrep_write_cache_inc()
assumes that reading zero bytes past EOF leaves the old data in the
cache.
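The reading pattern looks roughly like this, assuming the mysys IO_CACHE
interface from the server source tree (the append callback is
illustrative):

  #include <my_global.h>
  #include <my_sys.h>

  size_t drain_cache_sketch(IO_CACHE *cache,
                            void (*append)(const uchar *data, size_t len))
  {
    size_t total = 0;
    size_t length;
    while ((length = my_b_fill(cache)) > 0)
    {
      append(cache->read_pos, length); // bytes made available by this refill
      cache->read_pos = cache->read_end;
      total += length;
    }
    return total; // my_b_fill() returned 0: EOF or error flagged on the cache
  }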
Solution:- There are two issues in my_b_encr_read():
1st, we should never set read_end equal to info->buffer; that does not
make sense, as read_end should always point to the end of the buffer.
2nd, for most cases (apart from an async IO_CACHE), info->pos_in_file
should be equal to the position of info->buffer in the temp file; since
in this case we are not changing info->buffer, it should remain
unchanged.
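A hedged sketch of those invariants, assuming the mysys IO_CACHE layout
(the decryption itself is omitted; only the pointer/offset bookkeeping
after a successful read is shown):

  #include <my_global.h>
  #include <my_sys.h>

  void after_decrypt_sketch(IO_CACHE *info, size_t decrypted_bytes,
                            my_off_t block_offset_in_file)
  {
    info->read_pos    = info->buffer;
    // 1st issue: read_end must point past the decrypted data, never be
    // reset to info->buffer itself.
    info->read_end    = info->buffer + decrypted_bytes;
    // 2nd issue: pos_in_file must stay the temp-file offset of buffer[0];
    // the buffer itself is not moved for a non-async cache.
    info->pos_in_file = block_offset_in_file;
  }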
The failures with valgrind occur as a result of Spider sometimes using the
wrong transaction for operations in background threads that send requests to
the data nodes. The use of the wrong transaction caused the networking to the
data nodes to use the wrong thread in some cases. Valgrind eventually
detects this when such a thread is destroyed before it is used to
disconnect from the data node when the wrong transaction is freed.
I have fixed the problem by correcting the transaction used in each of these
cases.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit afe5a51 on branch 10.2
Description:- MyISAM table gets corrupted with concurrent
executions of INSERT, DELETE statements in a particular
sequence.
Analysis:- Due to the inappropriate manipulation of w_lock
and r_lock associated with a MyISAM table, there arises a
scenario where the table's state information becomes
invalid.
Fix:- A lock is introduced to resolve this issue.
It should work OK on all Unixes, but on Windows it only worked by accident
in the past, because the client was not Unicode safe.
It stopped working with the Visual Studio 2017 15.7 update.
The crash occurs when a thread that is closing its connection attempts to
access Spider transaction information when another thread has freed that memory
while processing Spider plugin deinit. This occurs because Spider does not
adjust the plugin's reference count when it sets a transaction information
pointer for the plugin.
The fix I implemented changes the way Spider sets the transaction information
pointer to use thd_set_ha_data() so that Spider's plugin reference counter is
adjusted as well.
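A hedged sketch of the change, assuming the plugin service declarations in
<mysql/plugin.h>; SPIDER_TRX and spider_hton_ptr are stand-ins for Spider's
real symbols:

  #include <mysql/plugin.h>

  struct SPIDER_TRX;                     // opaque per-connection trx data
  extern struct handlerton *spider_hton_ptr;

  void spider_set_trx_sketch(MYSQL_THD thd, SPIDER_TRX *trx)
  {
    // Stores the pointer and lets the server adjust the plugin's reference
    // count, so plugin deinit cannot free it under a live connection.
    thd_set_ha_data(thd, spider_hton_ptr, trx);
  }

  SPIDER_TRX *spider_get_trx_sketch(MYSQL_THD thd)
  {
    return static_cast<SPIDER_TRX *>(thd_get_ha_data(thd, spider_hton_ptr));
  }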
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged From:
Commit ab9d420 on branch 10.2
Fixed by extending unique_table() with a flag to not allow usage of
the replaced table.
I also cleaned up find_dup_table() to not use goto next, and added
more comments to the code in find_dup_table().
Analyze core independently of max-save-datadir and max-save-core setting.
Increment $num_saved_cores only if core was actually saved.
"Move any core files from e.g. mysqltest" independently of
max-save-datadir setting. Note: it may overwrite core from mysqld, which
might not be desired (it did work this way even before).
Problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
false which was not properly restored
The last part caused a crash in Aria as Aria depends on that THD
is correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
Problem was that the bitmap needs to be flushed before disabling
logging of redo entries, as writing the bitmap to disk by the
background checkpoint may cause redo entries.
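A hedged sketch of the required ordering; both helpers are hypothetical
stand-ins for the Aria calls, only the order matters here:

  struct aria_share_stub { /* ... */ };

  void flush_bitmap_pages(aria_share_stub *share)   { (void) share; }
  void disable_redo_logging(aria_share_stub *share) { (void) share; }

  void prepare_non_logged_operation(aria_share_stub *share)
  {
    // Flush the bitmap first: a background checkpoint writing it later
    // could otherwise generate redo entries after logging was disabled.
    flush_bitmap_pages(share);
    disable_redo_logging(share);
  }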
Description:- Client applications establish a connection, via TCP, to a
server which does not support SSL, even when SSL is
enforced via MYSQL_OPT_SSL_MODE or MYSQL_OPT_SSL_ENFORCE or
MYSQL_OPT_SSL_VERIFY_SERVER_CERT.
Analysis:- There is no error handling to catch a client
application which enforces SSL while connecting to a
server which does not support SSL.
Fix:- Error handling is added to catch the above-mentioned
scenarios.
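A hedged sketch of the added check; CLIENT_SSL is the protocol capability
bit, but the helper and its call site are illustrative, not the exact
client library code:

  bool ssl_handshake_allowed(bool ssl_required,
                             unsigned long server_capabilities)
  {
    const unsigned long CLIENT_SSL_FLAG = 1UL << 11; // CLIENT_SSL
    if (ssl_required && !(server_capabilities & CLIENT_SSL_FLAG))
    {
      // report an SSL-connection error and abort the connect attempt
      return false;
    }
    return true;
  }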
Problem:
When an FTS index is added to a table which doesn't have an 'FTS_DOC_ID'
column, InnoDB rebuilds the table to add the 'FTS_DOC_ID' column. When this
FTS index is dropped from the table, InnoDB does not rebuild the table to
remove the 'FTS_DOC_ID' column and deletes the FTS index auxiliary tables,
but it does not delete the FTS common auxiliary tables.
Later, when the database containing this table is renamed, the FTS auxiliary
tables are not renamed because the table's flags2 (dict_table_t::flags2)
had its DICT_TF2_FTS flag reset during the FTS index drop operation.
Now when we drop the old database, it leads to an assert.
Fix:
During the renaming of FTS auxiliary tables, OR in a condition to also check
whether the table has the DICT_TF2_FTS_HAS_DOC_ID flag set.
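A hedged sketch of that condition; DICT_TF2_FTS, DICT_TF2_FTS_HAS_DOC_ID
and DICT_TF2_FLAG_IS_SET() are the InnoDB names from dict0mem.h, while the
helper itself is illustrative:

  #include "dict0mem.h"   // assumes the InnoDB source tree

  static bool needs_fts_aux_rename(const dict_table_t *table)
  {
    return DICT_TF2_FLAG_IS_SET(table, DICT_TF2_FTS)
        || DICT_TF2_FLAG_IS_SET(table, DICT_TF2_FTS_HAS_DOC_ID);
  }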
RB: 18769
Reviewed by : Jimmy.Yang@oracle.com
Problem:
=======
Inserting multiple rows into a table that contains a FULLTEXT KEY and an
FTS_DOC_ID column aborts the server if the FTS_DOC_ID exceeds
FTS_DOC_ID_MAX_STEP.
Solution:
========
Remove the exception for the first committed insert statement.
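A hedged sketch of the check after removing that exception; doc_id_t and
FTS_DOC_ID_MAX_STEP mirror the InnoDB names, but the constant's value here
is only a placeholder and the helper itself is illustrative:

  typedef unsigned long long doc_id_t;
  static const doc_id_t FTS_DOC_ID_MAX_STEP = 65535; // placeholder value

  static bool fts_doc_id_within_step(doc_id_t doc_id, doc_id_t next_doc_id)
  {
    // The check now applies to every insert, including the first
    // committed one.
    return doc_id - next_doc_id < FTS_DOC_ID_MAX_STEP;
  }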
Reviewed-by: Jimmy Yang<jimmy.yang@oracle.com>
RB: 18023