- We don't test `json` MySQL tables from `std_data`, since that raises the
error `ER_TABLE_NEEDS_REBUILD`. MDEV-32235 will override this test after
the merge, but we leave it in to show the behavior and historical changes.
- Closes PR #2833
Reviewer: <cvicentiu@mariadb.org>
<serg@mariadb.com>
Fixed a typo in mysql_admin_table which caused
close_unused_temporary_table_instances to always be called for the first
table instead of the current table.
Added an assertion that close_unused_temporary_table_instances must not
remove all instances of a user-created temporary table.
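Below is a minimal sketch of the typo class being fixed; the loop shape,
variable names and the call signature are assumptions, not the actual
mysql_admin_table() code:

  for (TABLE_LIST *table= tables; table; table= table->next_local)
  {
    /* Before the fix the list head was passed here:
         close_unused_temporary_table_instances(thd, tables);
       so only the first table's unused temporary instances were closed. */
    close_unused_temporary_table_instances(thd, table);  /* current table */
  }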
The spider init bug fixes remove any race conditions during spider
init.
Also remove the add_suppressions in spider/bugfix.mdev_27575, which
addressed a similar issue.
All Spider tables are recorded in the system table
mysql.spider_tables. Deleting a spider table removes the corresponding
rows from the system table, among other things. This patch makes it so
that if Spider cannot find any record in the system table to delete
for a given table, it correctly reports that no such Spider
table exists.
When innodb_undo_log_truncate=ON causes an InnoDB undo tablespace
to be truncated, we must guarantee that the undo tablespace will
be rebuilt atomically: After mtr_t::commit_shrink() has durably
written the mini-transaction that rebuilds the undo tablespace,
we must not write any old pages to the tablespace.
To guarantee this, in trx_purge_truncate_history() we used to
traverse the entire buf_pool.flush_list in order to acquire
exclusive latches on all pages for the undo tablespace that
reside in the buffer pool, so that those pages cannot be written
and will be evicted during mtr_t::commit_shrink(). But, this
traversal may interfere with the page writing activity of
buf_flush_page_cleaner(). It would be better to lazily discard
the old pages of the truncated undo tablespace.
fil_space_t::is_being_truncated, fil_space_t::clear_stopping(): Remove.
fil_space_t::create_lsn: A new field, identifying the LSN of the
latest rebuild of a tablespace.
buf_page_t::flush(), buf_flush_try_neighbors(): Evict pages whose
FIL_PAGE_LSN is below fil_space_t::create_lsn.
mtr_t::commit_shrink(): Update fil_space_t::create_lsn and
fil_space_t::size right before the log is durably written and the
tablespace file is being truncated.
fsp_page_create(), trx_purge_truncate_history(): Simplify the logic.
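As a minimal sketch of the eviction check that buf_page_t::flush() and
buf_flush_try_neighbors() now perform (the surrounding variables and the
evict_page() helper are assumptions, not the actual buf0flu.cc code):

  /* A page whose FIL_PAGE_LSN predates the latest rebuild of its
     tablespace is stale; discard it instead of writing it back. */
  const lsn_t page_lsn= mach_read_from_8(frame + FIL_PAGE_LSN);
  if (page_lsn < space->create_lsn)
  {
    evict_page();   /* hypothetical helper: drop the block from buf_pool */
    return;
  }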
Reviewed by: Thirunarayanan Balathandayuthapani, Vladislav Lesin
Performance tested by: Axel Schwenke
Correctness tested by: Matthias Leich
trx_purge_free_segment(), trx_purge_truncate_rseg_history():
Do not claim that the blocks will be modified in the mini-transaction,
because that will not always be the case. Whenever there is a
modification, mtr_t::set_modified() will flag it.
The debug assertion that failed in recovery is checking that all
changes to data pages are covered by log records. Due to these
incorrect calls, we would unnecessarily write unmodified data pages,
which is something that commit 05fa4558e0
aims to avoid.
The incorrect calls had originally been added in
commit de31ca6a21 (MDEV-32820) and
commit 86767bcc0f (MDEV-29593).
Reviewed by: Vladislav Lesin
Tested by: Elena Stepanova
A new column was introduced to the SHOW INDEX output in 10.6 in
commit f691d9865b.
Thus we update the check of the number of columns to be at least 13,
rather than exactly 13.
Also backport an error number and format from 10.5 for better error
messages when the column number is wrong.
There are a large number of uses of `strerror` in the codebase, but the
local declaration in `storage/connect/tabvct.cpp` is the only one of its
kind. Given that nothing else needs it, this declaration can simply be
removed.
The perfschema thread walker needs to take the thread's LOCK_thd_kill to
prevent the thread from disappearing while it's being looked at.
But there's no need to lock it for the current thread.
In fact, doing so was harmful, as some code down the stack might take
LOCK_thd_kill (e.g. set_killed() does it, and my_malloc_size_cb_func()
calls set_killed()). It also caused a bunch of mutexes to be locked under
LOCK_thd_kill, which created problems later when my_malloc_size_cb_func()
called set_killed() at some unspecified point under some random mutexes.
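A minimal sketch of the resulting locking rule; the walker function and
the collect_status() callback are illustrative, not the actual perfschema
code:

  static void inspect_thd(THD *thd, THD *self)
  {
    const bool other= (thd != self);
    if (other)
      mysql_mutex_lock(&thd->LOCK_thd_kill);  /* keep thd from disappearing */
    collect_status(thd);                      /* hypothetical; may itself take
                                                 LOCK_thd_kill when thd == self */
    if (other)
      mysql_mutex_unlock(&thd->LOCK_thd_kill);
  }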
Same assertion with Spider: Spider status variables did not expect
to be queried from a different thread without LOCK_thd_data.
And they did not expect to be queried under LOCK_thd_data either
(because spider_get_trx() calls thd_set_ha_data()).
Need to protect access to the thread-local cache_mngr with LOCK_thd_data.
Technically only access from different threads has to be protected,
but this is the SHOW STATUS code path, so the difference is negligible.
Initialize THD::rand in THD::init(), not in THD::THD(),
because the former is also called when a THD is reused -
in COM_CHANGE_USER and when taking a THD from the cache.
Also use the current cycle timer for more unpredictability.
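A rough sketch of the idea; the seed expressions below are illustrative
assumptions, not the actual code:

  /* In THD::init(), so that a re-used THD (COM_CHANGE_USER, THD cache)
     is re-seeded; my_timer_cycles() mixes in the CPU cycle counter. */
  my_rnd_init(&rand,
              (ulong) (my_timer_cycles() + (ulong) thread_id),
              (ulong) (my_timer_cycles() >> 16));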
Item_float::neg() did not preserve the "presentation" from "this".
So
CAST(-1e0 AS UNSIGNED) -- cast from double to unsigned
changes its meaning to:
CAST(-1 AS UNSIGNED) -- cast from signed to unsigned
Fixing Item_float::neg() to construct the new value for
Item_float::presentation as follows:
- if the old value starts with minus, then the minus is truncated:
'-2e0' -> '2e0'
- otherwise, minus sign followed by its old value:
'1e0' -> '-1e0'
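A small standalone sketch of this rule (not the actual Item_float::neg()
code, which works on the item's presentation buffer):

  #include <string>

  /* Derive the negated "presentation" string from the old one. */
  std::string neg_presentation(const std::string &old_val)
  {
    if (!old_val.empty() && old_val[0] == '-')
      return old_val.substr(1);        /* '-2e0' -> '2e0'  */
    return "-" + old_val;              /* '1e0'  -> '-1e0' */
  }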
The bitmap is temporarily flipped to ~0 for the sake of checking all
fields. It needs to be restored because it will be reused in the second
and subsequent PS executions.
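A minimal sketch of the save/flip/restore pattern, assuming the usual
TABLE::read_set / TABLE_SHARE::all_set bitmaps; check_all_fields() is a
hypothetical stand-in for the check being done:

  MY_BITMAP *old_map= table->read_set;   /* the PS's own column map    */
  table->read_set= &table->s->all_set;   /* temporarily "all columns"  */
  check_all_fields(table);               /* hypothetical check         */
  table->read_set= old_map;              /* restore for re-execution   */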
With the result looking like
encryption.innochecksum 'debug' [ skipped ] combination not found
instead of
*** ERROR: Could not run encryption.innochecksum with 'debug' combination(s)
buf_flush_page_cleaner(): A continue or break inside DBUG_EXECUTE_IF
actually is a no-op. Use an explicit call to _db_keyword_() to
actually avoid advancing the checkpoint.
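DBUG_EXECUTE_IF() wraps its statement in a do { ... } while (0) block, so
a continue or break inside it only leaves that block. A sketch of the
difference; the keyword, loop and helpers are illustrative:

  while (page_cleaner_active())                       /* hypothetical loop  */
  {
    DBUG_EXECUTE_IF("skip_checkpoint", continue;);    /* no-op for the loop */

    if (_db_keyword_(nullptr, "skip_checkpoint", 1))  /* explicit check     */
      continue;                                       /* really skips       */

    make_checkpoint();                                /* hypothetical work  */
  }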
buf_flush_list_now_set(): Invoke os_aio_wait_until_no_pending_writes()
to ensure that the page write to the system tablespace is completed.
In commit b4ff64568c the
signature of mysql_show_var_func was changed, but not all functions
of that type were adjusted.
When the server is configured with `cmake -DWITH_ASAN=ON` and
compiled with clang, runtime errors would be flagged for invoking
functions through an incompatible function pointer.
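A self-contained illustration of the class of error being fixed; the types
and names below are made up and are not the real mysql_show_var_func
signature:

  /* Calling through a function pointer whose type does not match the
     callee's real signature is undefined behaviour; clang's sanitizers
     report it at runtime. */
  typedef int (*show_var_fn)(void *thd, void *var, char *buff, int scope);

  static int not_adjusted(void *, void *, char *)   /* old signature */
  { return 0; }

  int main()
  {
    show_var_fn f= (show_var_fn) not_adjusted;  /* cast silences the compiler */
    return f(nullptr, nullptr, nullptr, 0);     /* flagged at runtime by clang */
  }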
Reviewed by: Michael 'Monty' Widenius
If a query contained a CTE whose name coincided with the name of one of
the base tables used in the specification of the CTE and the query had at
least two references to this CTE in the specifications of other CTEs then
processing of the query led to unlimited recursion that ultimately caused
a crash of the server.
Any secondary non-recursive reference to a CTE requires creation of a copy
of the CTE specification. All the references to CTEs in this copy must be
resolved. If the specification contains a reference to a base table whose
name coincides with the name of the CTE, then it must be ensured that
this reference can in no way be resolved against the name of the CTE.
srv_start(): Move a read only mode startup tweak from
innodb_init_params() to the correct location. Also if
innodb_force_recovery=6 we will disable the doublewrite buffer,
because InnoDB must run in read-only mode to prevent further corruption.
This change only affects debug checks. Whenever srv_read_only_mode holds,
the buf_pool.flush_list will be empty, that is, there will be no writes
of persistent InnoDB data pages.
Reviewed by: Thirunarayanan Balathandayuthapani
- innodb.doublewrite_debug should avoid a checkpoint
before killing the server. So use a debug sync point and
innodb_flush_sync to avoid the checkpoint completely.
The test case is allowed to be skipped on the MSAN builder due to an
extra checkpoint.
wsrep_plugin_init(), wsrep_plugin_deinit(): Remove these dummy functions
in order to fix an error that would be flagged by cmake -DWITH_UBSAN=ON
when using clang.
wsrep_show_ready(), wsrep_show_bf_aborts(): Correct the signature.
If a query has a HAVING clause that contains a predicate with a constant
IN subquery whose left part is in its turn a subquery, and the predicate is
subject to pushdown from HAVING to WHERE then execution of the query could
cause a crash of the server.
The cause of the problem was the missing implementation of the walk()
method for the class Item_in_optimizer. As a result in some cases the left
operand of the Item_in_optimizer condition could be traversed twice by
the walk procedure. For many call-back functions used as an argument of
this procedure it does not matter. Yet it matters for the call-back
function cleanup_excluding_immutables_processor() used in pushdown of
predicates from HAVING to WHERE. If the processed item is marked with
the IMMUTABLE_FL flag then the processor just removes this flag, otherwise
it performs cleanup of the item making it unfixed. If an item is marked
with an the IMMUTABLE_FL and it traversed with this processor twice then
it becomes unfixed after the second traversal though the flag indicates
that the item should not be cleaned up.
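A simplified standalone sketch of why the double visit is harmful (not the
actual server code; the flags are reduced to two booleans):

  struct Item_sketch
  {
    bool immutable;   /* corresponds to the IMMUTABLE_FL flag */
    bool fixed;       /* "item is fixed and must stay so"     */

    void cleanup_excluding_immutables()
    {
      if (immutable)
        immutable= false;   /* first visit: only drop the flag        */
      else
        fixed= false;       /* second visit: item becomes unfixed,
                               which breaks the HAVING->WHERE pushdown */
    }
  };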
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
sp_cache erroneously looked up fully qualified SP names (e.g. `DB`.`SP`),
in case insensitive style. It was wrong, because only the "name"
part is always case insensitive, while the "db" part should be compared
according to lower_case_table_names (case sensitively for 0,
case insensitively for 1 and 2).
Fix:
Adding a "casedn_name" parameter make_qname() to tell
if the name part should be lower cased:
`DB1`.`SP` -> "DB1.SP" (when casedn_name=false)
`DB1`.`SP` -> "DB1.sp" (when casedn_name=true)
and using make_qname() with casedn_name=true when creating
sp_cache hash lookup keys.
Details:
As a result, it now works as follows:
- sp_head::m_db is converted to lower case if lower_case_table_names>0
during the sp_name initialization phase. So when make_qname() is called,
sp_head::m_db is already normalized. There are no changes in here.
- The initialization phase of sp_head when creating sp_head::m_qname
now calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to sp_head::m_qname in lower case.
- sp_cache_lookup() now also calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to the temporary lookup key in lower case.
- sp_cache::m_hashtable now uses case sensitive comparison
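A standalone sketch of the resulting key construction under the assumptions
above (the db part already normalized by lower_case_table_names, the name
part lower-cased when casedn_name=true); this is not the actual
make_qname() code:

  #include <algorithm>
  #include <cctype>
  #include <string>

  std::string make_qname_sketch(const std::string &db, std::string name,
                                bool casedn_name)
  {
    if (casedn_name)
      std::transform(name.begin(), name.end(), name.begin(),
                     [](unsigned char c) { return (char) std::tolower(c); });
    return db + "." + name;  /* `DB1`.`SP` -> "DB1.sp" when casedn_name=true */
  }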
Part#1 A non-functional change
Changing the signature of Identifier_chain2::make_qname() from
bool make_qname(MEM_ROOT *mem_root, LEX_CSTRING *dst) const;
to
LEX_CSTRING make_qname(MEM_ROOT *mem_root) const;
Now the result is returned from the function as a LEX_CSTRING rather than
being passed back through an out parameter.
The return value {NULL,0} means "EOM" (out of memory).
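An illustrative caller showing the new value-return convention; the
surrounding context (ident, thd) is an assumption, not actual server code:

  const LEX_CSTRING qname= ident.make_qname(thd->mem_root);
  if (!qname.str)
    return true;              /* {NULL,0}: out of memory */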
This is a prerequisite step to make it easier to fix and merge
MDEV-33019 "The database part is not case sensitive in SP names".
The original MDEV-31991 commit comment:
- Moving some of Database_qualified_name methods into a new class
Identifier_chain2.
- Changing the data type of the following variables from
Database_qualified_name to Identifier_chain2:
* q_pkg_proc in LEX::call_statement_start()
* q_pkg_func in LEX::make_item_func_call_generic()
Rationale:
The data type of Database_qualified_name::m_db will be changed
to Lex_ident_db soon. So Database_qualified_name won't be able
to store the `pkg.routine` part of `db.pkg.routine` any more,
because `pkg` must not depend on lower-case-table-names.
Marko reported a DBUG_ASSERT failure from dict_stats_shutdown():
destroy_background_thd() does not like it when current_thd is set.
In Galera this can be the case, because dict_stats_shutdown() can be called
from a user thread, to stop and later restart stats recalculations.
MDEV-31003 introduced a second execution for SELECTs that execute
under ps-protocol. The following tests in the Galera suites do not support
this mode of execution, so disable it for them:
galera.MDEV-27862
galera.galera_log_output_csv
galera.galera_query_cache
galera.galera_query_cache_sync_wait
galera_3nodes_sr.GCF-336
galera_3nodes_sr.galera_sr_isolate_master
galera_sr.galera_sr_large_fragment
galera_sr.galera_sr_many_fragments
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>