post-merge fixes:
* remove log_slow_queries_not_using_indexes, no need to create variables
that are deprecated from the moment of their creation
* rename log_slow_query_enable->log_slow_query
no other variable uses the *_enable pattern
* MDEV-29626 Assertion `self == &Sys_slow_query_log' failed in fix_log_state
* tests
Closes #2137
Thus, all these variables will be
grouped together and more logically named.
Descriptions for the old variables were updated to indicate that they are
now aliases for the newly introduced variables with the log_slow prefix.
log_slow_queries_not_using_indexes_filter will not be addressed
in this merge request.
log_throttle_queries_not_using_indexes seems to no longer be in use.
MTR tests are also updated to include the new variable names.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
The underlying causes of all the bugs mentioned below are the same. This
patch fixes all of them:
1) MDEV-25028: ASAN use-after-poison in
base_list_iterator::next or Assertion `sl->join == 0' upon
INSERT .. RETURNING via PS
2) MDEV-25187: Assertion `inited == NONE || table->open_by_handler'
failed or Direct leak in init_dynamic_array2 upon INSERT .. RETURNING
and memory leak in init_dynamic_array2
3) MDEV-28740: crash in INSERT RETURNING subquery in prepared statements
4) MDEV-27165: crash in base_list_iterator::next
5) MDEV-29686: Assertion `slave == 0' failed in
st_select_lex_node::attach_single
Analysis:
Consider this statement:
INSERT(1)...SELECT(2)...(SELECT(3)...) RETURNING (SELECT(4)...)
When RETURNING is encountered, add_slave() changes how the selects are
linked: it makes builtin_select(1) a slave of SELECT(2), which loses the
already existing slave(3) (the select nested within the SELECT of
INSERT...SELECT). In reality, builtin_select(1) should not be a slave of
SELECT(2) because it is not nested within it. In addition, the
push_select() call used to set the correct context also changed how the
selects were linked.
During reinit_stmt_before_use(), we expect the selects to be cleaned up
and to have join=0. Since these selects are not linked correctly, the
clean-up does not happen properly, so join is not NULL. Hence the crash.
Fix:
If we are parsing RETURNING, set is_parsing_returning= true for the
current select and get rid of add_slave(). In place of push_select(), use
push_context() to establish the correct context (the context of
builtin_select) for resolving the items in item_list, and add these items
to the item_list of builtin_select.
When ha_end_bulk_insert() failed, F_UNLCK was done twice: in
select_insert::prepare_eof() and in select_create::abort_result_set().
Now we avoid doing F_UNLCK in prepare_eof() if the error is non-zero.
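A minimal sketch of the resulting pattern, as a hypothetical
simplification rather than the literal server code:

  // Unlock only on success, so the failure path (abort_result_set())
  // remains the single place that performs F_UNLCK.
  int error= table->file->ha_end_bulk_insert();
  if (!error)
    error= table->file->ha_external_lock(thd, F_UNLCK);
  // on error: no unlock here, select_create::abort_result_set() does it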
After d7d3ad698a, debug_sync waits are no longer interrupted by the soft
kills that server shutdown uses. Luckily, this test does not need to
wait; it only needs the commit to be properly completed, fully committed
in both engines.
The problem is that if the table definition cache (TDC) is full of real
tables that are in the table cache, a view definition cannot stay there
and will be pushed out, possibly by its own underlying tables.
In the situation above, the old mechanism for detecting whether the
definition in the PS matches the current version always required a
reprepare and so prevented executing the PS.
One workaround is to increase the TDC size; another is to improve the
version check for views/triggers (which is done here). Now in suspicious
cases we check (see the sketch below):
- the timestamp (microseconds) of the view, to be sure that the version
really has changed;
- the creation time (microseconds) of a trigger, relative to the time
(microseconds) of statement preparation.
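A hedged sketch of that check; every name below is hypothetical and only
illustrates the comparison logic:

  // Compare microsecond timestamps instead of bare version numbers
  // before deciding that the PS must be reprepared (names hypothetical).
  bool view_changed=  view_def.timestamp_us != cached_view.timestamp_us;
  bool trigger_newer= trigger.created_us > stmt.prepared_us;
  if (view_changed || trigger_newer)
    request_reprepare();  // hypothetical helper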
- Added missing information about the database of the corresponding
table for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
make_tmp_name() creates a temporary name with a prefix containing the
PID and TID. This prefix influences the maximum length of the rest of
the name (the whole name is limited by NAME_LEN). During a test run the
PID and TID can change. The PID increases when the server restarts. The
TID is incremented with each new connection (it is generated by
next_thread_id() and is not the POSIX thread ID).
During a test run the PID can grow by at most 2 decimal positions: going
from tens to thousands requires ~900 restarts. The TID depends on the
connection count, but for tests we assume it will not exceed 100,000
connections, which is 5 decimal positions. So it should be enough to
reserve 7 extra characters for PID and TID growth.
The patch reserves more: 12 characters for a 7-digit PID and a 5-digit
TID, plus 4 characters of additional reserve (for future prefix
changes), and it assumes the minimal length of the prefix is 30 bytes:
#sql-backup-PID-TID-
#sql-create-PID-TID-
The pattern is 4-6-PID-TID- ("#sql" is 4 chars, "backup"/"create" is 6),
i.e. 10 chars + 4 hyphens + PID + TID; a 30-byte prefix thus leaves 16
chars for PID + TID (7 + 5 + 4).
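The same arithmetic as a self-contained sketch (the constant names are
hypothetical, not the ones used in the patch):

  #include <cstddef>
  // "#sql" (4) + "backup"/"create" (6) + 4 '-' separators + PID + TID
  static constexpr std::size_t PATTERN_CHARS= 4 + 6;
  static constexpr std::size_t HYPHENS= 4;
  static constexpr std::size_t PID_DIGITS= 7;  // 7-decimal PID reserve
  static constexpr std::size_t TID_DIGITS= 5;  // 5-decimal TID reserve
  static constexpr std::size_t EXTRA= 4;       // future prefix changes
  static_assert(PATTERN_CHARS + HYPHENS + PID_DIGITS + TID_DIGITS + EXTRA == 30,
                "minimal prefix length assumed by make_tmp_name()");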
Usually when we get into finalize_locked_tables() with an error,
m_locked_tables_count has not been decremented. m_locked_tables_count is
decremented when we drop the original table; if that drop failed,
m_locked_tables_count is expected to be intact.
The bug comes from the fact that finalize_atomic_replace() violates the
above contract: it does HA_EXTRA_PREPARE_FOR_DROP and decrements
m_locked_tables_count, then tries rename_table_and_triggers() and fails.
With a decremented m_locked_tables_count, reopen_tables() does nothing
and we don't get a new value for pos_in_locked_tables->table.
The test case demonstrates ER_ERROR_ON_RENAME where a non-atomic CREATE
OR REPLACE would not fail. The original RENAME TABLE fails in such a
broken environment, so nothing is wrong with atomic CREATE OR REPLACE
failing there too.
mysqlslap has unusual semantics for the --iterations option, so that
$ mysqlslap --concurrency=16 --iterations=1000
performs 16,000 connections (concurrency times iterations). Because of
that it gets many times slower with --ssl.
Let's disable SSL for mysqlslap in mysql-test.
thd_get_ha_data() can be used without a lock, but only from the current
thd's own thread; when calling from another thread it *must* be
protected by thd->LOCK_thd_data, as sketched below.
* fix the group commit code to take thd->LOCK_thd_data
* remove innobase_close_connection() from the InnoDB background thread;
it's not needed after 87775402cd and was failing the assertion with
current_thd==0
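A minimal sketch of the locking rule, assuming server-side code where
the THD internals are visible (hton stands for the engine's handlerton,
victim_thd for a THD owned by another thread):

  // From a foreign thread, ha_data may only be read under LOCK_thd_data:
  mysql_mutex_lock(&victim_thd->LOCK_thd_data);
  void *ha_data= thd_get_ha_data(victim_thd, hton);
  mysql_mutex_unlock(&victim_thd->LOCK_thd_data);
  // From the THD's own thread no lock is needed:
  void *own_data= thd_get_ha_data(current_thd, hton);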
This patch resolves the problem of improper name resolution of table
references to embedded CTEs for some queries. This improper binding could
lead to
- infinite sequence of calls of recursive functions
- crashes due to resolution of null pointers
- wrong result sets returned by queries
- bogus error messages
If the definition of a CTE contains with clauses, then such a CTE is
called an embedding CTE, while the CTEs from its with clauses are called
embedded CTEs.
If a table reference used in the definition of an embedded CTE cannot be
resolved within the unit that contains this reference, it still may be
resolved against a CTE definition from the with clause of one of the
embedding CTEs.
A table reference can be resolved against a CTE definition if it is used
in the scope of this definition and it refers to the name of the CTE.
A table reference t is in the scope of the definition of a CTE cte if
- the definition of cte is an element of a with clause declared as
RECURSIVE and the reference t belongs either to the unit to which
this with clause is attached or to one of the elements of this clause;
- the definition of cte is an element of a with clause without the
RECURSIVE specifier and the reference t belongs either to the unit to
which this with clause is attached or to one of the elements of this
clause that are placed after the definition of cte.
If a table reference can be resolved against several CTE definitions,
it is bound to the most embedded one.
The code before this patch did not always resolve table references used
in embedded CTEs according to the above rules.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This is a 10.5 version of 9b750dcbd8, fix for
MDEV-23536 Race condition between KILL and transaction commit
InnoDB needs to remove trx from thd before destroying it (trx), otherwise
a concurrent KILL might get a pointer from thd to a destroyed trx.
ha_close_connection() should allow engines to clear ha_data in
hton->close_connection(). To prevent the engine from being unloaded
while hton->close_connection() is running, we remove the lock from
ha_data and unlock the plugin manually.
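A hedged sketch of the required ordering inside InnoDB (innodb_hton_ptr
and trx_free() are the usual InnoDB identifiers, used here for
illustration only):

  // Order matters: detach the trx from the THD first, so a concurrent
  // KILL can no longer reach it through thd; destroy the trx afterwards.
  thd_set_ha_data(thd, innodb_hton_ptr, NULL); // also releases the plugin lock
  trx_free(trx);                               // safe: unreachable via thd now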
Continue with similar changes as done in 19af1890 to replace sprintf(buf, ...)
with snprintf(buf, sizeof(buf), ...), specifically in the "easy" cases where buf
is allocated with a size known at compile time.
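For example, a hedged before/after of this pattern (the buffer name and
the format string are illustrative):

  char buf[FN_REFLEN];                            // size known at compile time
  // before: sprintf(buf, "%s.%s", db, table);    // unbounded, can overflow buf
  snprintf(buf, sizeof(buf), "%s.%s", db, table); // truncates instead of overflowing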
All new code of the whole pull request, including one or several files that are
either new files or modified ones, is contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
- During a bulk insert, the server detects the error and does a rollback
operation. At that time, InnoDB fails to clean up the bulk insert
buffer.
trx_t::rollback_finish(): Removed the clean-up of modified tables from
the transaction.
trx_t::commit_cleanup(): Free the bulk buffer for the bulk insert
operation.
trx_t::commit(): Do the check that a modified table wasn't dropped only
for non-dictionary transactions.
Problem:
=======
Test Case 4 sporadically failed while waiting for the slave to start
during the master's demotion to slave, because a GTID event did not
exist in the new master's binlog. If by chance the original slave (the
new master) had not yet received all the binlog events, this failure
could occur.
Solution:
========
Sync the slave with the master's GTID position before the master demotion.
Reviewed By:
============
Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Nowadays a subquery in a UNION's ORDER BY is placed correctly in the
fake select; the only remaining problem was an incorrect
Name_resolution_context, which this patch fixes during parsing, so we do
not need the scanning/resetting of a UNION's ORDER BY.
This reverts commit bdc5548cad, which introduced a work-around to
ha_innobase::delete_table() for avoiding failures when trying to remove
table partitions. This work-around (of not removing statistics in case
of a locking conflict) would occasionally cause a failure of the test
parts.part_supported_sql_func_innodb:
mysqltest: In included file "./suite/parts/inc/partition_supported_sql_funcs.inc":
included from ./suite/parts/inc/part_supported_sql_funcs_main.inc at line 91:
included from /buildbot/amd64-ubuntu-2004-msan/build/mysql-test/suite/parts/t/part_supported_sql_func_innodb.test at line 44:
At line 234: query 'alter table t66
reorganize partition s1 into
(partition p0 values less than ($valsqlfunc),
partition p1 values less than maxvalue)' failed: ER_DUP_KEY (1022): Can't write; duplicate key in table 'mysql.innodb_table_stats'