triggers are opened, and tables used in triggers are prelocked, in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
This means that if a table is used in a multi-update but is not actually
updated, its on-update triggers will be opened and their tables prelocked
anyway. This can cause more tables to be write-locked than needed,
resulting in read_only errors, privilege errors and lock waits.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating=true, do a second
open_tables() for newly added tables, if any.
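A minimal sketch of the kind of statement affected (table and trigger
names are illustrative):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE TABLE t3 (c INT);
  CREATE TRIGGER t2_bu BEFORE UPDATE ON t2
    FOR EACH ROW INSERT INTO t3 VALUES (NEW.b);
  -- t2 is only read here; before the fix its on-update trigger was
  -- still opened and t3 was write-prelocked:
  UPDATE t1, t2 SET t1.a = t2.b WHERE t1.a = t2.b;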
With INFORMATION_SCHEMA set as the default database, the check that a table
referred to in the processed query is defined in INFORMATION_SCHEMA must
be postponed until all CTE names can be identified.
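For example (illustrative):
  USE information_schema;
  WITH t AS (SELECT 1 AS a) SELECT a FROM t;
Here the check that t is defined in INFORMATION_SCHEMA must wait until t
can be recognized as a CTE name.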
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
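For reference, --sorted_result in a .test file applies to the single
statement that follows it, making the recorded result stable regardless
of the execution plan (illustrative snippet):
  --sorted_result
  SELECT a, b FROM t1;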
Assertion `table->pos_in_locked_tables->table == table'
failed in mark_used_tables_as_free_for_reuse
The assertion failure can be triggered by DDL executed under LOCK TABLES
while holding the lock on the DDL target table multiple times (either
explicitly or implicitly).
When closing all table instances for a given table (e.g. when preparing for
table removal during CREATE OR REPLACE), only one instance was removed
from the m_locked_tables list.
Later, in mysql_create_table()/add_back_last_deleted_lock(), we attempted to
re-insert an instance that hadn't actually been removed. This corrupted
m_locked_tables, specifically losing all elements following the re-inserted
one.
UNLOCK TABLES then didn't reset some table instances properly (specifically
pos_in_locked_tables), since they were no longer present in m_locked_tables.
Eventually such a table instance got released to the table cache and
re-used by a subsequent statement, which triggered this assertion failure.
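One illustrative way to hold the lock on the DDL target table multiple
times (the actual failing scenarios may also involve implicit locks, e.g.
through views or triggers):
  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE, t1 AS t1a WRITE;  -- t1 locked twice
  CREATE OR REPLACE TABLE t1 (b INT);     -- closes all instances of t1
  UNLOCK TABLES;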
While executing CTAS, the Galera applier thread can cause the CTAS to abort
and roll back. The rollback can take time, causing the applier thread to
shut down the node after several unsuccessful retries to apply the
transaction. Fix: don't set lock_wait_timeout to zero; wait for the lock
instead.
This problem manifested itself when a join query used two or more
materialized CTEs such that each of them employed the same recursive CTE.
The bug caused a crash. The crash happened because the cleanup()
function was performed prematurely for the recursive CTE. This cleanup was
induced by the cleanup of the first CTE referencing the recursive CTE.
It destroyed the structures that allowed reading from the temporary table
containing the rows of the recursive CTE, and an attempt to read these rows
for the second CTE referencing the recursive CTE triggered the crash.
The cleanup for a recursive CTE R should be performed after the cleanup
of the last materialized CTE that uses R.
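A sketch of the affected query shape (assuming c1 and c2 are materialized,
e.g. because of aggregation):
  WITH RECURSIVE r AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM r WHERE n < 5
  ),
  c1 AS (SELECT MAX(n) AS m FROM r),
  c2 AS (SELECT MAX(n) AS m FROM r)
  SELECT * FROM c1, c2;
Cleaning up c1 destroyed the structures for reading r's temporary table,
so reading r again for c2 crashed.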
ALTER TABLE locks the table with TL_READ_NO_INSERT, to prevent
modifications of the source table while it's being copied. But there's an
indirect way of modifying a table: via cascade FK actions.
After previous commits, an attempt to modify an FK parent table
will cause FK children to be prelocked, so the table-being-altered
cannot be modified by a cascade FK action, because ALTER holds a
lock and prelocking will wait.
But if a new FK is being added by this very ALTER, then the target
table is not locked yet (it's a temporary table). So, we have to
lock FK parents explicitly.
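For example (illustrative schema):
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT PRIMARY KEY, parent_id INT) ENGINE=InnoDB;
  -- parent must now be locked explicitly: the copy of child being
  -- built is a temporary table, so it isn't covered by prelocking yet
  ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent (id);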
table_already_fk_prelocked() was looking for a table in the wrong
list: not the complete list of prelocked tables, but only its tail,
starting from the current table (which is always empty for the last
added table). So for circular FKs it kept adding the same tables to the
list indefinitely.
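Circular FKs of roughly this shape exposed the bug (illustrative):
  CREATE TABLE a (id INT PRIMARY KEY, b_id INT) ENGINE=InnoDB;
  CREATE TABLE b (id INT PRIMARY KEY, a_id INT,
                  FOREIGN KEY (a_id) REFERENCES a (id)) ENGINE=InnoDB;
  ALTER TABLE a ADD FOREIGN KEY (b_id) REFERENCES b (id);
  -- prelocking a statement that modifies a parent table kept re-adding
  -- the same tables to the prelock list without terminating
  DELETE FROM a;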
Backport of d6d7e169fb
The problem was that the creation of the join_columns list was not finished,
due to an error about a column named in USING not being found, but the next
execution tried to use the incomplete join_columns lists.
The solution is to clean up the lists on error. This can consume memory in
the statement MEM_ROOT, but it only happens on error, and the error will
either be fixed or the statement/procedure removed/altered.
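A sketch of the failing pattern (names are illustrative):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE PROCEDURE p() SELECT * FROM t1 JOIN t2 USING (no_such_col);
  CALL p();  -- fails: the USING column is not found
  CALL p();  -- before the fix, reused the half-built join_columns list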
as a separate source for data
Actually MDEV-15867 and MDEV-16192 are the same: the slave adds "or replace"
to the CREATE TABLE statement, so CREATE TABLE t1 becomes CREATE OR REPLACE
on the slave. So this bug is not specific to replication; we can get it on a
general server if we manually add OR REPLACE to the CREATE query.
Problem: suppose we try to create a table t1 (with the same name as an
existing temporary table t1) via
  CREATE OR REPLACE TABLE t1 AS SELECT * FROM t1;
Since this query creates the table from SELECT * FROM t1, we call the
unique_table function to check whether the source and destination tables are
the same. But unique_table does not account for the source table being a
temporary table, in which case the source and destination may share a name
while being different tables.
Solution: change find_dup_table to not look for a temporary table if the
CHECK_DUP_SKIP_TEMP_TABLE flag is on.
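A sketch of the scenario:
  CREATE TEMPORARY TABLE t1 (a INT);
  -- source (temporary t1) and destination (base table t1) share a name
  -- but are different tables, so this must be allowed:
  CREATE OR REPLACE TABLE t1 AS SELECT * FROM t1;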
The previous correction of the patch for MDEV-16473 did not work
correctly for databases whose names start with '*'.
Added a test case with a database named "*".
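The test case presumably resembles the following (hypothetical
reconstruction):
  CREATE DATABASE `*`;
  USE `*`;
  CREATE TABLE t1 (a INT);
  WITH cte AS (SELECT a FROM t1) SELECT * FROM cte;
  DROP DATABASE `*`;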
MDEV-16512 Server crashes in find_field_in_table_ref on 2nd
execution of SP referring to non-existing field
The problem was in the natural join code: it changed TABLE_LIST and
Item_fields but didn't restore the changed structures if something went
wrong, and so was not able to re-execute after a failure.
Some of the problems could have been avoided if we had run
fix_fields before doing the natural join transformations.
Fixed by marking functions complete AFTER they have executed, instead of at
the start.
I also had to change some tests that checked whether Item_fields are usable.
This doesn't fix all known problems, but at least avoids some crashes.
What should be done in the near future is to mark the statement in the SP
as 'not re-executable' and force a reparse of it on the next execution.
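A sketch of the crashing pattern from the bug title (names are
illustrative):
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (b INT, c INT);
  CREATE PROCEDURE p() SELECT no_such_field FROM t1 NATURAL JOIN t2;
  CALL p();  -- fails: unknown column
  CALL p();  -- 2nd execution crashed in find_field_in_table_ref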
Reviewer: Sergei Petrunia <psergey@askmonty.org>
Before this patch, if no default database was set the server threw
an error for any table name reference that was not fully qualified with a
database name. In particular this happened for table names referencing
CTE tables. This was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables had not been resolved yet.
Now, if no default database is set and a WITH clause is used in the
processed statement, any table reference is just supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
to check_dependencies_in_with_clauses(), when the names for CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
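For example, with no default database selected (illustrative):
  WITH cte AS (SELECT 1 AS a) SELECT a FROM cte;  -- now works
  WITH cte AS (SELECT 1 AS a) SELECT a FROM t1;   -- error: t1 is not
                                                  -- a CTE and has no
                                                  -- database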