Inside a trigger or a function used in a statement it is possible to
SELECT from a table being modified by that statement. However,
encapsulating such a SELECT into a view and selecting from the view
instead of the table directly was not possible.
This happened because tables used by views (which in turn were used
from functions/triggers) were not excluded from the checks in the
unique_table() routine, as happens for the rest of the tables added
to the statement table list for prelocking.
With this fix we ignore all such tables in unique_table(), thus
providing consistency: inside a trigger or a function a SELECT from
a view may be used wherever a plain SELECT is allowed. Modification of
the same table from a function or trigger is still disallowed. Also,
this patch doesn't affect the case where a SELECT from the table being
modified is done outside of a function or trigger; such SELECTs are
still disallowed (this limitation, and the visibility problem when a
function selects from a table being modified, are subjects of bug 21326).
See also bug 22427.
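A sketch of the behaviour this fix enables (hypothetical names, not taken from the actual test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE log_tbl (cnt INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  -- The trigger reads from v1, a view over the table being modified.
  -- Before this fix the INSERT below failed because the view's underlying
  -- table was caught by the unique_table() check; a plain SELECT from t1
  -- inside the trigger was already allowed.
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
    INSERT INTO log_tbl SELECT COUNT(*) FROM v1;
  INSERT INTO t1 VALUES (1);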
On an INSERT into an updatable but non-insertable view an error message was
issued stating that the view is not updatable. This can confuse the user.
A new error message is introduced. It is shown when a user tries to insert
into a non-insertable view.
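For example (a sketch with invented names), a view with a derived column is updatable through its simple column but not insertable, and the INSERT should now report the more precise error:

  CREATE TABLE t1 (a INT);
  -- v1 is updatable through column a, but not insertable because
  -- column b is a derived expression.
  CREATE VIEW v1 AS SELECT a, a + 1 AS b FROM t1;
  UPDATE v1 SET a = 10;            -- allowed
  INSERT INTO v1 (a) VALUES (1);   -- now reports the new non-insertable error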
The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The bug fix restricts resetting of the name resolution context in
setup_tables() to the cases when the context was not set by earlier
parser/optimizer phases.
The SELECT right instead of the INSERT right was required for an insert into a view.
This wrong behaviour appeared after the fix for bug #20989. Its intention was
to require only the SELECT right for all tables except the very first of a complex
INSERT query. But that patch did it in a wrong way and led to requiring
a wrong access right for an insert into a view.
The setup_tables_and_check_access() function now accepts two want_access
parameters. One will be used for the first table and the second for other
tables.
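A sketch of the intended behaviour (user and object names are invented):

  -- As a privileged user:
  CREATE TABLE db1.t1 (a INT);
  CREATE VIEW db1.v1 AS SELECT a FROM db1.t1;
  GRANT INSERT ON db1.v1 TO 'someuser'@'localhost';
  -- As 'someuser', who holds only the INSERT privilege on the view:
  INSERT INTO db1.v1 VALUES (1);   -- should succeed; no SELECT right needed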
When executing an INSERT over a view with calculated columns, the code assumed that
all elements of the fields collection are actually Item_field instances.
This may not be true when inserting into a view and that view has columns that are
expressions which still allow updating (like setting a collation, for example).
Corrected to access field information through the field_for_view_update() function and
retrieve the field info correctly even for "update-friendly" non-Item_field items.
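A minimal sketch of the kind of view meant here (invented names); a column that merely sets a collation is still update-friendly but is not a plain Item_field:

  CREATE TABLE t1 (s CHAR(10)) CHARACTER SET latin1;
  CREATE VIEW v1 AS SELECT s COLLATE latin1_bin AS s FROM t1;
  -- Before the fix an INSERT over such a view could mishandle the field
  -- info because the view column is an expression, not an Item_field.
  INSERT INTO v1 (s) VALUES ('abc');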
The problem was that store_top_level_join_columns() incorrectly assumed
that the left/right neighbor of a nested join table reference can only be
at the same level in the join tree.
The fix checks if the current nested join table reference has no immediate
left/right neighbor, and if so chooses the left/right neighbors of the
nearest upper level, where these references are != NULL.
Closing temp tables through end_thread
had a flaw in the binlog-off branch of close_temporary_tables, where the
next table to close was taken via table->next:
for (table= thd->temporary_tables; table; table= table->next)
which was wrong since the current table instance gets destroyed at
close_temporary(table, 1);
The fix adapts the binlog-on branch's method and uses the loop's internal 'next' variable, which holds table->next before the table is destroyed.
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
Before every operation that requires a write lock, we need to wait for
the release of a global read lock if one is set.
But we must not wait if we have locked tables with LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
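A sketch of the interplay described above (two connections; exact blocking behaviour depends on version and storage engine):

  -- Connection 1:
  CREATE TABLE t1 (a INT);
  FLUSH TABLES WITH READ LOCK;     -- sets the global read lock

  -- Connection 2:
  INSERT INTO t1 VALUES (1);       -- needs a write lock, so it must wait
                                   -- until connection 1 runs UNLOCK TABLES

  -- A connection that already holds LOCK TABLES t1 WRITE must not start
  -- waiting for the global read lock here (see above).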
schemas
The function check_one_table_access(), called to check access to tables in
SELECT/INSERT/UPDATE, was doing additional checks/modifications that don't hold
in the context of setup_tables_and_check_access().
That's why check_one_table_access() was split in two: the functionality needed by
setup_tables_and_check_access() went into check_single_table_access(), and the rest of
the functionality stays in check_one_table_access(), which is made to call the new
check_single_table_access() function.
query
Problem:
A wrong context was assigned to the columns that were added in insert_fields()
when expanding a '*'. When this is done in a prepared statement it causes
fix_fields() to fail to find the table that these columns reference.
Actually the right context is set in setup_natural_join_row_types(), called at the
end of setup_tables(). However, when executed in the context of a prepared statement,
setup_tables() resets the context, but setup_natural_join_row_types() was not
setting it back to the correct value, assuming it had already done so.
Solution:
The top-most, left-most NATURAL/USING join must be set as a
first_name_resolution_table in context even when operating on prepared statements.
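The failing shape of the query can be sketched roughly like this (hypothetical table names); the failure appeared only on the prepared-statement path:

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);
  -- Expanding '*' inside a prepared statement used to assign the wrong
  -- name resolution context to the added columns.
  PREPARE stmt FROM 'SELECT * FROM t1 NATURAL JOIN t2';
  EXECUTE stmt;
  EXECUTE stmt;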
The check for view security was lacking several points:
1. Checking with the right set of permissions: each table ref that
participates in a view carried the right credentials to use in its
security_ctx member, but these weren't used for checking the credentials.
This makes it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, the security checking for views was simply ruled out
explicitly in several places.
3. The security was checked only for the columns of the tables that are
brought into the query from a view. So if there is no column reference
outside of the view definition, the lack of access to the tables of the
view in SQL SECURITY INVOKER mode was not detected.
The fix below tries to fix the above 3 points.
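A sketch of point 3 above (invented names): with an INVOKER view and no column reference outside the view definition, the invoker's missing access to the underlying table used to go undetected:

  CREATE TABLE db1.t1 (a INT);
  CREATE SQL SECURITY INVOKER VIEW db1.v1 AS SELECT a FROM db1.t1;
  GRANT SELECT ON db1.v1 TO 'someuser'@'localhost';
  -- As 'someuser', who has no privilege on db1.t1:
  SELECT * FROM db1.v1;   -- must be rejected in SQL SECURITY INVOKER mode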
Replaced COND_refresh with COND_global_read_lock because of a bug in NPTL threads when using different mutexes as arguments to pthread_cond_wait().
The original code caused a hang in FLUSH TABLES WITH READ LOCK in some circumstances because pthread_cond_broadcast() was not delivered to other threads.
This fixes:
Bug#16986: Deadlock condition with MyISAM tables
Bug#20048: FLUSH TABLES WITH READ LOCK causes a deadlock
The approach specific to the 5.0 version of the patch is motivated by the fact that a wrapper over
MYSQL_LOG::write cannot help in 5.0, where the query's charset is embedded into the event instance in the constructor.
A pattern to generate binlog entries for DROPped temp tables in close_temporary_tables
was buggy: it could not deal with a table whose name contains a grave accent.
The fix exploits `append_identifier()' for quoting and doubling the accents.
The binlog lacked encoding info about a DROPped temporary table.
The idea of the fix is to switch temporarily to system_charset_info when a temporary table
is DROPped for the binlog. Since it is the server, not the client, that automatically generates the query,
the binlog should carry the server's encoding for the coming DROP.
`write_binlog_with_system_charset()' is introduced to replace similar problematic places in the code.
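A sketch of the problematic name handled by the quoting fix above (hypothetical table name):

  -- A temporary table whose name contains a grave accent; the DROP written
  -- to the binlog for it must quote the identifier and double the embedded
  -- accent, e.g. ... `t``1`.
  CREATE TEMPORARY TABLE `t``1` (a INT);
  -- Disconnecting the session makes the server write the implicit DROP of
  -- the temporary table to the binlog.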
or implicitly uses stored function gives "Table not locked" error'
A CREATE TABLE ... SELECT ... statement which was explicitly or implicitly
(through a view) using a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of CREATE TABLE SELECT
code, which first opens and locks tables of the SELECT statement itself,
and then, having SELECT tables locked, creates the .FRM, opens the .FRM and
acquires lock on it. This scheme opens a possibility for a deadlock, which
was present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception for this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
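The failing shapes of statement were roughly as follows (sketch with invented names):

  CREATE TABLE t1 (a INT);
  CREATE FUNCTION f1() RETURNS INT READS SQL DATA
    RETURN (SELECT COUNT(*) FROM t1);
  -- Explicit use of the stored function:
  CREATE TABLE t2 SELECT f1() AS c;
  -- Implicit use through a view:
  CREATE VIEW v1 AS SELECT f1() AS c;
  CREATE TABLE t3 SELECT c FROM v1;
  -- Both forms used to fail with the "Table not locked" error.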
The fix refines the algorithm of generating DROPs for binlog.
Temp tables with common pseudo_thread_id are clustered into one query.
Consequently one replication event per pseudo_thread_id is generated.
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm of generating DROPs for binlog.
Temp tables with common pseudo_thread_id are clustered into one query.
Consequently one replication event per pseudo_thread_id is generated.
The cause of this bug was a design flaw due to which the list of natural
join columns was incorrectly computed and stored for nested joins that
are not natural joins, but are operands (possibly indirect) of nested joins.
The patch corrects the flaw in such a way that the result columns of a
table reference are materialized only if it is a leaf table (that is, only
if it is a view, stored table, or natural/using join).
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro to instead use C++ class pointers
trigger starts trigger".
In short, the deadlock/crash happened when execution of a statement which used
stored functions or activated triggers coincided with alteration of the
tables used by these functions or triggers (in a highly concurrent environment).
The bug was caused by incorrect handling of tables from the prelocked set in
the open_tables() function in situations when a refresh happened. This fix replaces
the old smart but not very robust way of handling tables after a refresh (which
closed only old tables) with a new one which simply closes all tables opened so
far and restarts open_tables().
Also fixed handling of temporary tables in close_tables_for_reopen().
No test case is present since the bug manifests itself only in a concurrent environment.
When the setup_fields() function finds a field named '*' it expands it to the list
of all table fields. It does so by checking that the first char of
field_name is '*', but it doesn't check that '*' is the only char.
Due to this, when updating a table with a field named like '*name', such a field
is wrongly treated as '*' and expanded. This makes the list of fields
to update longer than the list of new values. Later, the fill_record()
function crashes by dereferencing null when there are fields left to update
but no more values.
Added a check in the setup_fields() function which ensures that the field
expansion is done only when '*' is the only char in the field name.
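The crash could be reproduced with a statement of roughly this shape (column name invented):

  CREATE TABLE t1 (`*a` INT, b INT);
  -- '*a' used to be mistaken for '*' and expanded to all table fields,
  -- leaving more fields to update than supplied values.
  UPDATE t1 SET `*a` = 1;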
- Fixed tests
- Optimized new code
- Fixed some unlikely core dumps
- Better bug fixes for:
- #14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
- #14850 (ERROR 1062 when querying a view using a GROUP BY on a column that can be null)
when high concurrency": remove HASH::current_record and make it
an external search parameter, so that it can not be the cause of a
race condition under high concurrent load.
The bug was in a race condition in table_hash_search,
when column_priv_hash.current_record was overwritten simultaneously
by multiple threads, causing the search for a suitable grant record
to fail.
No test case as the bug is repeatable only under concurrent load.
Problem #1: INSERT...SELECT, Version for 5.0.
Extended the unique table check by a check of lock data.
Merge sub-tables cannot be detected by doing name checks only.
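A sketch of the case name checks alone cannot catch (invented names):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=LAST;
  -- The target and the source have different names, but m1 contains t1
  -- as a sub-table; only the lock data check detects this overlap.
  INSERT INTO m1 SELECT a FROM t1;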
impossible view security".
We should not expose the names of tables which are explicitly or implicitly (via
a routine or trigger) used by a view, even if we find that they are missing.
So during building of the list of prelocked tables for a statement we track which
routines (and therefore tables for these routines) are used from views. We
mark elements of the LEX::routines set which correspond to routines used in views
by setting the Sroutine_hash_entry::belong_to_view member to point to the TABLE_LIST
object of the topmost view which uses the routine. We propagate this mark to all
routines which are used by this routine and which we add to this set. We also
mark tables used by such a routine which we add to the list of tables for
prelocking as belonging to this view.
Post-review fixes that simplify the way access rights
are checked during name resolution and factor out all
entry points to check access rights into one single
function.
Post-review version. Some minor review fixes, but also changed the way
some errors are handled: Don't return specific parse errors; instead
always use the more general "table corrupt" error (amended accordingly).
Version for 5.0.
It fixes three problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with a locked LOCK_open.
Though the function comment clearly said it must.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
3. In 5.0 (not in 4.1 or 4.0) DROP TABLE had a possible deadlock flaw when run
concurrently with FLUSH TABLES WITH READ LOCK. I call the fix for this
problem "the 5.0 addendum fix".
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with a locked LOCK_open.
Though the function comment clearly said it must.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
large table gives server crash": make sure that when a MyISAM temporary
table is created for a cursor, it is created in the cursor's memory root,
not the memory root of the current query.
Added error checking for errors when attempting to use stored procedures
after the mysql.proc table has been dropped, corrupted, or tampered with.
Test cases were put in a separate file (sp-destruct.test).
droping trigger on InnoDB table".
A deadlock occurred in cases when we were trying to create two triggers for
the same InnoDB table concurrently and both threads were able to reach
close_cached_table() simultaneously. The bugfix implements a new approach to
table locking and table cache invalidation during creation/dropping
of triggers.
No test case is supplied since the bug was repeatable only under high concurrency.
- CHAR() now returns a binary string by default
- CHAR(X*65536+Y*256+Z) is now equal to CHAR(X,Y,Z) independent of the character set for CHAR() (see the example after this list)
- Test for both ETIMEDOUT and ETIME from pthread_cond_timedwait()
(Some old systems return ETIME and it's safer to test for both values
than to try to write a wrapper for each old system)
- Fixed a newly introduced bug in NOT BETWEEN X and X
- Ensure we call commit_by_xid or rollback_by_xid for all engines, even if one engine has failed
- Use octet2hex() for all conversion of string to hex
- Simplify and optimize code
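An illustration of the new CHAR() behaviour described in the list above (the results shown in comments are the expected ones):

  -- CHAR() now returns a binary string by default:
  SELECT CHARSET(CHAR(65));                  -- 'binary'
  -- An argument larger than 255 is split into bytes, so these are equal
  -- regardless of the character set used with CHAR():
  SELECT CHAR(65 * 65536 + 66 * 256 + 67);   -- 'ABC'
  SELECT CHAR(65, 66, 67);                   -- 'ABC'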
The problem was that when a column reference was resolved to a view column, the new Item
created for this column contained the name of the view, and not the view alias.
"SELECT ... FOR UPDATE executed as consistent read inside LOCK TABLES"
Do not discard lock_type information, as handler::start_stmt() may require this knowledge.
(fixed by Antony)
Fixed bug #13411.
Fixed name resolution for non-qualified reference to a view column
in the HAVING clause.
view.result, view.test:
Added a test case for bug #13411.
Fixed problems in the test suite where some tests failed
Fixed access to uninitialized memory in federated
Fixed access to uninitialized memory when using BIT fields in internal temporary tables
The problem was in the way table references are pre-filtered when
resolving a qualified field. When resolving qualified table references
we search recursively in the operands of the join. If there is a
natural/using join with a merge view, the first call to find_field_in_table_ref
makes a recursive call to itself with the view as the new table reference
to search for the column. However, the view has both nested_join and
join_columns != NULL, so it skipped the test of whether the view name matches
the field qualifier. As a result the field was found in the view, since the
view already has a field with the same name. Thus the field was incorrectly
resolved as the view field.
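A sketch of the mis-resolution (invented names):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE VIEW v2 AS SELECT a, b FROM t2;
  -- The qualifier is t1, but before the fix the recursive search could
  -- resolve 'b' against the merge view v2 (which also has a column 'b')
  -- because the view skipped the check that its name matches the qualifier.
  SELECT t1.b FROM t1 JOIN v2 USING (a);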
This bug occurs when some trigger for a table used by a DML statement is created
or changed while the statement was waiting in lock_tables(). In this situation
the prelocking set which we have calculated becomes invalid, which can easily lead
to errors and even, in some cases, to crashes.
With the proposed patch we no longer silently reopen tables in lock_tables();
instead the caller of lock_tables() becomes responsible for reopening tables and
recalculating the prelocking set.
Solution for 4.0 and 4.1.
If the caller cannot re-open table(s), it gives a NULL
'refresh' argument to open_table(). We used to ignore
flushes then. Now we ignore drops too.
Solution for 5.0.
Changed calls to open_table(). Requested to ignore the
flush at places where the command had already locked tables.
This could happen in CREATE ... SELECT and ALTER TABLE.
No test case. The bug can only be triggered by true concurrency.
The stress test suite provides a test case for this.
When a view column is aliased in a subselect, the alias is set on the ref which
represents the field. When a tmp table is created for the subselect, it takes the
name of the original field, not the ref. Because of this the alias on the view
column in the subselect is lost, which results in the reported error.
Now when an alias is set on a ref, it is set on the ref's real item too.
- current_arena to stmt_arena: the thread may have more than one
'current' arena: one for runtime data, and one for the parsed
tree of a statement. Only one of them is active at any moment.
- set_item_arena -> set_query_arena, because Item_arena was renamed to
Query_arena a while ago
- set_n_backup_item_arena -> set_n_backup_active_arena;
the active arena is the arena thd->mem_root and thd->free_list
are currently pointing at.
- restore_backup_item_arena -> restore_active_arena (with the same
rationale)
- change_arena_if_needed -> activate_stmt_arena_if_needed; this
method sets thd->stmt_arena active if it's not done yet.
* Provide backwards compatibility extension to name resolution of
coalesced columns. The patch allows such columns to be qualified
with a table (and db) name, as it is in 4.1 (see the example after this list).
Based on a patch from Monty.
* Adjusted tests accordingly to test both backwards compatible name
resolution of qualified columns, and ANSI-style resolution of
non-qualified columns.
For this, each affected test has two versions - one with qualified
columns, and one without.
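An illustration of the compatibility extension (invented names):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);
  -- 'a' is the coalesced column of the natural join.  ANSI-style
  -- resolution uses the non-qualified form; the extension also accepts
  -- the qualified form, as 4.1 did.
  SELECT a    FROM t1 NATURAL JOIN t2;   -- ANSI-style
  SELECT t1.a FROM t1 NATURAL JOIN t2;   -- accepted for backwards compatibility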
- Corrected problem with N-way nested natural joins in PS mode.
- Code cleanup
- More asserts to check consistency of name resolution contexts
- Fixed potential memory leak of name resolution contexts
If we are in a stored function or trigger we should ensure that we won't change
a table that is already used by the calling statement (this can damage the table
or easily cause infinite loops). In particular this means that recursive triggers
should be disallowed.
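For example (sketch, names invented), a trigger that modifies its own subject table would recurse and must therefore be rejected when it fires:

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW
    INSERT INTO t1 VALUES (NEW.a + 1);
  -- Firing the trigger must fail: it would modify t1, which is already
  -- used by the invoking statement, and would otherwise recurse forever.
  INSERT INTO t1 VALUES (1);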