For non-semi-join subquery optimization we make a cost-based decision between
Materialization and the IN->EXISTS transformation. The issue in this case is that for
the IN->EXISTS transformation we run JOIN::reoptimize with the IN->EXISTS conditions and
come up with a new query plan. But when we compare the cost with Materialization and
decide to choose Materialization, we need to restore the query plan for Materialization.
The saving and restoring of the keyuse array and the join_tab keyuse was only done when
we had at least one element in the keyuse_array; we now do it even for 0 elements to keep
the code general.
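A minimal sketch of the save/restore pattern this change generalizes; the array type and
helper names below are illustrative, not the actual server code:

    #include <vector>

    struct Keyuse { int key; int keypart; };

    struct SavedPlan { std::vector<Keyuse> keyuse; };

    SavedPlan save_query_plan(const std::vector<Keyuse>& keyuse_array)
    {
      // No emptiness guard: an empty array is saved too, for generality.
      return SavedPlan{keyuse_array};
    }

    void restore_query_plan(std::vector<Keyuse>& keyuse_array, const SavedPlan& saved)
    {
      keyuse_array = saved.keyuse;   // restore even if the saved array was empty
    }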
MDEV-16123 ASAN heap-use-after-free handler::ha_index_or_rnd_end
MDEV-13828 Segmentation fault on RENAME TABLE
Problem was that the destructor called methods on a closed table.
Fixed by removing that code from the destructor.
Problem was that detection of temporary tables was all wrong for
RENAME TABLE.
(Temporary tables were opened by a top level call to
open_temporary_tables(), which can't detect if a temporary table
was renamed to something and then reused).
Fixed by adding proper parsing of rename list to check against
the current name of a table at each rename stage.
Also changed do_rename_temporary() to check against the current
state of temporary tables, not against the state at the start
of RENAME TABLE.
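A hedged sketch of the idea of checking each rename stage against the current table name;
the set-based tracking below is illustrative and not the server's actual data structures:

    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    // Walk the rename list and resolve each source name against the *current*
    // set of temporary table names, so "RENAME TABLE t1 TO t2, t2 TO t3" is
    // detected as touching the temporary table that started out as t1.
    bool rename_touches_temporary(
        const std::vector<std::pair<std::string, std::string>>& renames,
        std::set<std::string> temp_tables)          // current temporary table names
    {
      bool touched = false;
      for (const auto& r : renames)
      {
        if (temp_tables.erase(r.first))             // r.first is currently a temp table
        {
          temp_tables.insert(r.second);             // now known under its new name
          touched = true;
        }
      }
      return touched;
    }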
MDEV-10130 Assertion `share->in_trans == 0' failed in storage/maria/ma_close.c
MDEV-10378 Assertion `trn' failed in virtual int ha_maria::start_stmt
The problem was that maria_handler->trn was not properly reset
at commit/rollback and ha_maria::external_lock() could get confused
because of this.
There was some old code in ha_maria::implicit_commit() that tried
to take care of this, but it was not bulletproof.
Fixed by adding a list of all tables that are part of the Maria
transaction to TRN (see the sketch below).
A nice side effect of the fix is that the loops in
ha_maria::implicit_commit() became much simpler.
Other things:
- Fixed a bug in mysql_admin_table() where argument open_for_modify
was wrongly reset for the next table in the chain
- Rollback the admin command also in case of a fatal error.
- Split _ma_set_trn_for_table() into three versions to simplify the code
and debugging.
- Several new asserts to detect the original problem (that the file was
not properly removed from the trn before calling ma_close())
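A minimal sketch of the shape of the fix, assuming a TRN-like transaction object; the
struct and function names below are illustrative, not the actual Aria code:

    #include <vector>

    struct Trn;
    struct MariaHandler { Trn* trn; };

    struct Trn
    {
      std::vector<MariaHandler*> used_tables;   // all tables taking part in this transaction
    };

    void trn_end(Trn* trn)                      // called at commit or rollback
    {
      for (MariaHandler* handler : trn->used_tables)
        handler->trn = nullptr;                 // nothing is left pointing at a dead TRN
      trn->used_tables.clear();
    }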
upon SELECT .. LIMIT 0
The code must differentiate between a SELECT with contradictory
WHERE/HAVING and one with LIMIT 0.
Also, for the latter, print 'Zero limit' instead of 'Impossible where'
in the EXPLAIN output.
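A hedged sketch of how such a distinction can be made; the helper below is illustrative
rather than a claim about the exact optimizer code path:

    // Decide the EXPLAIN message before looking at the (possibly contradictory)
    // WHERE/HAVING when the limit is already zero.
    const char* explain_zero_rows(bool limit_is_zero, bool where_is_false)
    {
      if (limit_is_zero)
        return "Zero limit";          // SELECT ... LIMIT 0
      if (where_is_false)
        return "Impossible where";    // contradictory WHERE/HAVING
      return nullptr;                 // normal execution
    }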
Description:- MyISAM table gets corrupted with concurrent
executions of INSERT, DELETE statements in a particular
sequence.
Analysis:- Due to the inappropriate manipulation of w_lock
and r_lock associated with a MyISAM table, there arises a
scenario where the table's state information becomes
invalid.
Fix:- A lock is introduced to resolve this issue.
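A generic sketch of the kind of serialization such a fix introduces; std::mutex and the
state fields below are illustrative, not the actual MyISAM locking primitives:

    #include <mutex>

    struct TableState { long records; long deleted; };

    struct Share
    {
      TableState state;
      std::mutex state_lock;          // the extra lock introduced in this sketch
    };

    void register_insert(Share& share)
    {
      std::lock_guard<std::mutex> guard(share.state_lock);
      share.state.records++;          // updated atomically with respect to DELETE
    }

    void register_delete(Share& share)
    {
      std::lock_guard<std::mutex> guard(share.state_lock);
      share.state.records--;
      share.state.deleted++;
    }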
It should work OK on all Unixes, but on Windows it only worked by accident
in the past, with the client not being Unicode safe.
It stopped working with the Visual Studio 2017 15.7 update.
Fixed by extending unique_table() with a flag to not allow usage of
the replaced table.
Also cleaned up find_dup_table() to not use 'goto next' and added
more comments to the code in find_dup_table().
Analyze core independently of max-save-datadir and max-save-core setting.
Increment $num_saved_cores only if core was actually saved.
"Move any core files from e.g. mysqltest" independently of
max-save-datadir setting. Note: it may overwrite a core from mysqld, which
might not be desired (it worked this way even before).
Problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on the THD
state being correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
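A hedged sketch of the cleanup shape described above; the types and helper names (copy
object, end_bulk_insert, transaction flag) are used illustratively rather than as the
exact ALTER TABLE code:

    struct Thd { bool transaction_on; };
    struct Handler { void end_bulk_insert() {} };
    struct CopyInfo { /* per-row copy state */ };

    int copy_rows(Thd* thd, Handler* to, CopyInfo* copy)
    {
      bool saved_transaction_on = thd->transaction_on;
      thd->transaction_on = false;     // as done by the prepare step in this sketch
      int error = 0;

      /* ... copy rows here; set error on failure ... */

      to->end_bulk_insert();           // must also run on the error path
      delete copy;                     // free the copy object in all cases
      thd->transaction_on = saved_transaction_on;   // restore, also on failure
      return error;
    }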
Problem was that the bitmap needs to be flushed before disabling
logging of redo entries, as writing the bitmap to disk by the
background checkpoint may cause redo entries.
Description:- Client applications establish a connection over plain TCP
to a server which does not support SSL, even when SSL is enforced via
MYSQL_OPT_SSL_MODE or MYSQL_OPT_SSL_ENFORCE or
MYSQL_OPT_SSL_VERIFY_SERVER_CERT.
Analysis:- There was no error handling to catch client applications
that enforce an SSL connection while connecting to a server which
does not support SSL.
Fix:- Error handling is added to catch the above-mentioned
scenarios.
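A minimal client-side sketch of the behaviour the fix enforces, using the C API option
named in the description; the exact option and its availability depend on the client
library version:

    #include <mysql.h>
    #include <stdio.h>

    int connect_with_enforced_ssl(const char* host, const char* user, const char* pass)
    {
      MYSQL mysql;
      my_bool enforce = 1;

      mysql_init(&mysql);
      mysql_options(&mysql, MYSQL_OPT_SSL_ENFORCE, &enforce);

      if (!mysql_real_connect(&mysql, host, user, pass, NULL, 0, NULL, 0))
      {
        /* Expected after the fix when the server has no SSL support. */
        fprintf(stderr, "Connect failed: %s\n", mysql_error(&mysql));
        return 1;
      }
      mysql_close(&mysql);
      return 0;
    }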
Problem:
When an FTS index is added to a table which doesn't have an 'FTS_DOC_ID'
column, InnoDB rebuilds the table to add the 'FTS_DOC_ID' column. When this
FTS index is later dropped from the table, InnoDB does not rebuild the table
to remove the 'FTS_DOC_ID' column; it deletes the FTS index auxiliary tables,
but it doesn't delete the FTS common auxiliary tables.
Later, when the database containing this table is renamed, the FTS auxiliary
tables are not renamed, because the DICT_TF2_FTS flag in the table's flags2
(dict_table_t::flags2) was reset during the FTS index drop operation.
Now when we drop the old database, it leads to an assert.
Fix:
During renaming of the FTS auxiliary tables, OR in a condition that also
checks whether the table has the DICT_TF2_FTS_HAS_DOC_ID flag set.
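A hedged sketch of the shape of that check; DICT_TF2_FTS and DICT_TF2_FTS_HAS_DOC_ID are
the flags named above, while the helper and its parameters are illustrative:

    // Rename the FTS auxiliary tables not only when the table still has an FTS
    // index, but also when it still carries the hidden FTS_DOC_ID column left
    // over from an earlier FTS index.
    bool must_rename_fts_aux_tables(unsigned flags2,
                                    unsigned fts_flag,            // DICT_TF2_FTS
                                    unsigned fts_has_doc_id_flag) // DICT_TF2_FTS_HAS_DOC_ID
    {
      return (flags2 & fts_flag) != 0
          || (flags2 & fts_has_doc_id_flag) != 0;   // the OR-ed in condition
    }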
RB: 18769
Reviewed by : Jimmy.Yang@oracle.com
Problem:
=======
Multiple INSERT statements on a table that contains a FULLTEXT KEY and an
FTS_DOC_ID column abort the server if the FTS_DOC_ID exceeds
FTS_DOC_ID_MAX_STEP.
Solution:
========
Remove the exception for the first committed insert statement.
Reviewed-by: Jimmy Yang<jimmy.yang@oracle.com>
RB: 18023
When Oracle fixed MDEV-13899 in their own way, they moved the
condition to the only caller of PageConverter::update_records().
Thus, the merge of 5.6.40 into MariaDB added a redundant condition.
PageConverter::update_records(): Move the page_is_leaf() condition
to the only caller, PageConverter::update_index_page().
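A hedged sketch with simplified types of what moving the condition to the only caller
looks like; the names below are stand-ins, not the InnoDB import code:

    struct Page { bool is_leaf; };
    enum dberr { DB_SUCCESS, DB_ERROR };

    dberr update_records(Page& page);        // assumed to handle leaf pages only

    dberr update_index_page(Page& page)
    {
      if (!page.is_leaf)
        return DB_SUCCESS;                   // nothing to convert on non-leaf pages
      return update_records(page);           // the check now lives in the caller
    }

    dberr update_records(Page& /*page*/) { return DB_SUCCESS; }   // stub for the sketch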
SHOW_ROUTINE_GRANTS
Description :- Server crashes in show_routine_grants().
Analysis :- When "grant_reload_procs_priv" encounters
an error, the grant structures (structures with column,
function and procedure privileges) are freed. Server
crashes when trying to access these structures later.
Fix :- Grant structures are retained even when
"grant_reload_procs_priv()" encounters an error while
reloading column, function and procedure privileges.
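A generic sketch of the retain-on-failure pattern the fix applies, with hypothetical
structure and loader names rather than the actual privilege code:

    #include <memory>
    #include <utility>

    struct ProcPrivs { /* column, function and procedure privileges */ };

    // Stub loader for the sketch; returns false when reloading fails.
    bool load_proc_privs(ProcPrivs&) { return true; }

    void reload_procs_priv(std::unique_ptr<ProcPrivs>& current)
    {
      auto fresh = std::make_unique<ProcPrivs>();
      if (!load_proc_privs(*fresh))
        return;                              // keep 'current' intact; do not free it
      current = std::move(fresh);            // replace only after a successful reload
    }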
The problem is hard to repeat, and I failed to create a deterministic
test case. Online index creation creates stubs for to-be-created indexes.
If index creation fails, we could remove these stubs while locks exist
in the indexes. (This would require that the index creation was completed,
and a concurrent DML operation acquired a lock on a record in the
uncommitted index. If a duplicate key error occurs in an uncommitted
index, the error will be reported for the CREATE UNIQUE INDEX, not for
the DML operation that tried to insert the duplicate.)
dict_table_try_drop_aborted(), row_merge_drop_indexes(): If transactional
locks exist on the table, keep the table->indexes intact.
ha_innobase::commit_inplace_alter_table(): Defer the freeing of ctx->trx
until after the operation has been successfully committed. In this way,
rollback on a partitioned table will be possible.
rollback_inplace_alter_table(): Handle ctx->new_table == NULL when
ctx->trx != NULL.
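A hedged sketch of deferring the transaction cleanup until after a successful commit;
ctx->trx and the free helper are used loosely, following the description above:

    struct Trx { /* dictionary transaction state */ };
    struct AlterCtx { Trx* trx; };

    void free_trx(Trx* trx) { delete trx; }   // stand-in for the real free routine

    bool commit_inplace(AlterCtx* ctx, bool commit_ok)
    {
      if (!commit_ok)
        return false;                         // the rollback path still owns ctx->trx
      free_trx(ctx->trx);                     // deferred until after a successful commit
      ctx->trx = nullptr;
      return true;
    }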
Issue:
------
Prefixes of externally stored columns were being stored in the online_log when a
table is altered and the alter causes the table to be rebuilt. Space in the online_log
is limited, and if the length of the prefix of an externally stored column is very big,
it is written to the online log without making sure that it fits. This leads to
memory corruption.
Fix:
----
After the fix for Bug#16544143, there is no need to store prefixes of externally
stored columns in the online_log. Thus, remove the code which stores column prefixes
for externally stored columns. Also, before writing anything to the online_log,
make sure it fits into the available memory, to avoid memory corruption.
Read RB page for more details.
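A small sketch of the size check described above, with a hypothetical bounded log buffer
standing in for the online_log:

    #include <cstddef>
    #include <cstring>

    struct OnlineLog
    {
      char*  buf;        // bounded buffer standing in for the online_log
      size_t capacity;
      size_t used;
    };

    bool online_log_append(OnlineLog* log, const void* entry, size_t size)
    {
      if (size > log->capacity - log->used)
        return false;                        // caller reports "log too big" instead of overflowing
      std::memcpy(log->buf + log->used, entry, size);
      log->used += size;
      return true;
    }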
Reviewed-by: Annamalai Gurusami <annamalai.gurusami@oracle.com>
RB: 18239