- fixed wrong test case for bug 20903
- closed the dangling connections in trigger.test
- GET_LOCK() and RELEASE_LOCK() now produce a more detailed log
- fixed an omission in GET_LOCK(): assign the thread_id when
  acquiring the lock.
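For reference, the user-level lock functions touched above are exercised
like this (the lock name and timeout are arbitrary):
    SELECT GET_LOCK('my_lock', 10);     -- returns 1 once the lock is acquired; the owning thread_id is now recorded
    SELECT RELEASE_LOCK('my_lock');     -- returns 1 when released by the owning connection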
The value of the "low-priority-updates" option and the LOW_PRIORITY
prefix were taken into account at parse time.
This caused triggers (among others) to ignore this flag (if
supplied for the DML statement).
Moved reading of the LOW_PRIORITY flag to run time.
Fixed an inconsistency in the handling of
SET GLOBAL LOW_PRIORITY_UPDATES: now it is in effect for
delayed INSERTs.
Tested by checking the effect of the LOW_PRIORITY flag via a
trigger, as sketched below.
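A minimal sketch of the kind of check the test performs (table and trigger
names are hypothetical):
    SET GLOBAL LOW_PRIORITY_UPDATES = 1;
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
    INSERT INTO t1 VALUES (1);   -- with the flag read at run time, the insert performed by the trigger also honours LOW_PRIORITY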
Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES holding a
read lock on the target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Also, concurrent execution of a CREATE TABLE ... SELECT statement
and other statements involving the target table suffered from
various races (some of which might have led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT when
a temporary table with the same name was already present
led to the insertion of data into this temporary table and
the creation of an empty non-temporary table.
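A minimal illustration of the LOCK TABLES scenario (hypothetical table name):
    CREATE TABLE t1 (a INT);
    LOCK TABLES t1 READ;
    CREATE TABLE IF NOT EXISTS t1 SELECT 1 AS a;   -- previously could loop forever / deadlock here
    UNLOCK TABLES;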
All of the above problems stemmed from the old implementation of CREATE
TABLE ... SELECT, in which we created, opened and locked the target
table without any special protection, in a separate step and not
with the rest of the tables used by this statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
the handling of CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by this statement. If such a table does not
exist at this moment, we create and place in the table cache a special
placeholder for it which prevents its creation or any other usage
by other threads.
We still use the old approach for the creation of temporary tables.
Note that we have a separate fix for 5.0, since there we use a slightly
different, less intrusive approach.
Also note that we decided to postpone the introduction of some tests
for the concurrent behaviour of CREATE TABLE ... SELECT till 5.1.
The main reason for this is the absence in 5.0 of the ability to set the
@@debug variable at runtime, which can be circumvented only by using several
test files with individual .opt files. Since the latter is likely
to slow down the test suite unnecessarily, we chose not to push these tests
into 5.0, but to run them manually for this version and later push
their optimized version into 5.1.
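For reference, a minimal illustration of the temporary-table inconsistency
described above (hypothetical names):
    CREATE TEMPORARY TABLE t1 (a INT);
    CREATE TABLE t1 SELECT 2 AS a;   -- previously inserted the row into the temporary table
    SELECT * FROM t1;                -- and left the newly created base table empty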
Removed the wrong fix for bug#27006.
The bug was introduced by the fix for bug#19978 and was fixed by Monty on 2007/02/21.
trigger.test, trigger.result:
Corrected the test case for bug#27006.
AFTER UPDATE triggers were not fired by INSERT ... ON DUPLICATE KEY
UPDATE if the row wasn't actually changed.
This bug was caused by the fix for bug#19978. It caused AFTER UPDATE triggers
not to fire if a row wasn't actually changed by the update part of the
INSERT ... ON DUPLICATE KEY UPDATE.
Now triggers are always fired if a row is touched by the INSERT ... ON
DUPLICATE KEY UPDATE.
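A minimal sketch of the affected pattern (hypothetical names):
    CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
    CREATE TABLE log (id INT);
    CREATE TRIGGER t1_au AFTER UPDATE ON t1 FOR EACH ROW INSERT INTO log VALUES (NEW.id);
    INSERT INTO t1 VALUES (1, 10);
    INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;
    -- the row is unchanged, but the AFTER UPDATE trigger now fires anyway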
Fix for a problem with trigger fields such as NEW.<table column> used
under DISTINCT in a select list.
The objects of the Item_trigger_field class inherited the implementations
of the methods copy_or_same, get_tmp_table_item and get_tmp_table_field
from the class Item_field, while they should rather have used the default
implementations defined for the base class Item.
This could cause catastrophic problems for triggers that used SELECTs
with a select list containing trigger fields such as NEW.<table column>
under DISTINCT.
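A minimal sketch of the affected construct (hypothetical names):
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
      SET @x = (SELECT DISTINCT NEW.a FROM t2);   -- trigger field under DISTINCT in a select list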
5.1-related fixes
libmysqld/Makefile.am fixed to recompile and link the ha_*.cc files that
keep a dependency on the THD structure.
Minor fixes to make the tests work.
This is the 5.0 part of the fix.
Currently the TRUNCATE command will not call
delete_all_rows() in the handler (which implements
the "fast" TRUNCATE for InnoDB) when there are
triggers on the table.
As decided by the architecture team, TRUNCATE must
use the "fast" TRUNCATE even when there are triggers,
and thus it must ignore the triggers.
Made TRUNCATE ignore the triggers and call
delete_all_rows() for all storage engines
to maintain consistency across engines.
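A minimal sketch of the resulting behaviour (hypothetical names):
    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    CREATE TRIGGER t1_bd BEFORE DELETE ON t1 FOR EACH ROW SET @deleted = 1;
    TRUNCATE TABLE t1;   -- takes the fast delete_all_rows() path; the DELETE trigger is not fired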
This change set implements the DROP TRIGGER IF EXISTS functionality.
This change is considered a bug fix and not a feature, because without it
there is no known method to write a database creation script that can create
a trigger without failing when executed on a database that may or may not
already contain a trigger of the same name.
Implementing this functionality closes an orthogonality gap between triggers
and stored procedures / stored functions (which do support the DROP ... IF
EXISTS syntax).
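A minimal sketch of the scripting pattern this enables (hypothetical names):
    DROP TRIGGER IF EXISTS t1_ai;   -- no longer fails when the trigger does not exist
    CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW SET @n = @n + 1;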
In sql_trigger.cc, in mysql_create_or_drop_trigger,
the code has been reordered to:
- perform the tests that do not depend on the file system (access()),
- get the locks (wait_if_global_read_lock, LOCK_open)
- call access()
- perform the operation
- write to the binlog
- unlock (LOCK_open, start_waiting_global_read_lock)
This is to ensure that all the code that depends on the presence of the
trigger file is executed in the same critical section,
and prevents race conditions similar to the case fixed by Bug 14262:
- thread 1 executes DROP TRIGGER IF EXISTS, access() returns a failure
- thread 2 executes CREATE TRIGGER
- thread 2 logs CREATE TRIGGER
- thread 1 logs DROP TRIGGER IF EXISTS
The patch itself is based on code contributed by the MySQL community,
under the terms of the Contributor License Agreement (See Bug 18161).
Fix for the bug with a trigger that uses a
stored function invoked from different connections.
Invocation of a trigger which was using a stored function from different
connections caused server crashes (for a non-debug server this happened
in a highly concurrent environment, but a debug server failed on an assertion
in a relatively simple scenario).
Item_func_sp was not safe to use in triggers (in other words, for
re-execution from different threads) as the artificial TABLE object
pointed to by Item_func_sp::dummy_table referenced an incorrect THD
object. To fix the problem we force re-initialization of this
object for each re-execution of the statement.
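A minimal sketch of the affected setup (hypothetical names):
    CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @r = f1();
    -- INSERTs into t1 issued concurrently from several connections previously could crash the server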
This patch reverts a change introduced by the fix for Bug 6951, which incorrectly
set thd->abort_on_warning for stored procedures.
As per internal discussions about SQL_MODE=TRADITIONAL,
the correct behavior is to *not* abort on warnings, even inside an INSERT/UPDATE
trigger.
Tests for stored procedures, stored functions and triggers involving SQL_MODE
have been included or revised to reflect the intended behavior.
(Reposting an approved patch to work around source control issues; no review needed.)
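A minimal sketch of the intended behaviour (hypothetical names; the CAST below
merely raises a truncation warning inside the trigger):
    SET sql_mode = 'TRADITIONAL';
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @w = CAST('abc' AS SIGNED);
    INSERT INTO t1 VALUES (1);   -- per this change, the warning raised in the trigger does not abort the INSERT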
Fix for the bug "UPDATE which uses a key and invokes a trigger that modifies
this key does not stop" (version for 5.0 only).
An UPDATE statement whose WHERE clause used a key, and which invoked a trigger
that modified a field in this key, ran indefinitely.
This problem occurred because, in cases when the UPDATE statement was
executed in update-on-the-fly mode (in which a row is updated right
during evaluation of the select for the WHERE clause), the new version of
the row became visible to the select representing the WHERE clause and was
updated again and again.
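A minimal sketch of the affected pattern (hypothetical names):
    CREATE TABLE t1 (id INT, cnt INT, KEY(cnt));
    CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET NEW.cnt = NEW.cnt + 1;
    UPDATE t1 SET id = id + 1 WHERE cnt < 10;   -- the trigger changes cnt, the key used for the scan; previously this could run forever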
We already solve this problem for UPDATE statements which do not
invoke triggers by detecting that we are going to update a
field in the key used for scanning and performing the update in two steps:
during the first step we gather information about the rows to be
updated, and then we do the actual updates. We also do this for
MULTI-UPDATE, and in its case we even detect the situation when such
fields are updated in triggers (actually we simply assume that
we always update fields used in the key if we have a BEFORE UPDATE
trigger).
The fix simply extends this check, which is done in the check_if_key_used()/
QUICK_SELECT_I::check_if_keys_used() routine/method, in such a way that
it also detects cases when a field used in the key is updated in a trigger.
As a nice side effect we have a more precise, and thus performance-wise
more optimal, check for MULTI-UPDATE.
Also, check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() were
renamed to is_key_used()/QUICK_SELECT_I::is_keys_used() in order to
better reflect that they are boolean predicates.
Note that this check is implemented in a much more elegant way in 5.1.
Fix for the handling of FLUSH in stored functions/triggers and an
erroneous check.
Problem: There were actually two problems in the server code. The check
for SQLCOM_FLUSH in stored functions/triggers did not follow the existing
architecture, which uses sp_get_flags_for_command() from sp_head.cc.
That function was also missing a check for SQLCOM_FLUSH, which is a
problem in combination with prelocking. This changeset fixes both of these
deficiencies as well as the erroneous check in
sp_head::is_not_allowed_in_function(), which was a copy&paste error.
User names and host names have limits on their length, and the server code
relies on these limits when storing the names. The problem was that sometimes
these limits were not checked properly, which could lead to a buffer overflow.
The fix is to check the length of the user/host name in the parser and, if the
string is too long, throw an error.
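A minimal sketch (the user name below is illustrative; in this version user
names are limited to 16 characters):
    CREATE USER 'a_user_name_longer_than_sixteen_chars'@'localhost';
    -- now rejected with an error in the parser instead of risking a buffer overflow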
It was hard to distinguish the case when one was unable to create a trigger
on a table because a trigger with the same action time and event already
existed for this table from the case when one tried to create a trigger
with a name that was already occupied by some other trigger, since in
both of these cases we emitted the ER_TRG_ALREADY_EXISTS error and message.
Now we emit an ER_NOT_SUPPORTED_YET error with an appropriate additional
message in the first case. There is no sense in introducing a separate
error for this situation since we plan to get rid of this limitation
eventually.
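A minimal sketch of the two cases (hypothetical names; this version allows only
one trigger per table for a given action time and event):
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 1;
    CREATE TRIGGER trg2 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 2;
    -- fails: a BEFORE INSERT trigger already exists; now reported as ER_NOT_SUPPORTED_YET
    CREATE TRIGGER trg1 AFTER INSERT ON t1 FOR EACH ROW SET @a = 3;
    -- fails: the name trg1 is already taken; still reported as ER_TRG_ALREADY_EXISTS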
Bug #18361: Triggers on mysql.user table cause server crash
Because they do not work, we do not allow creating triggers on tables
within the 'mysql' schema.
(They may be made to work and re-enabled at some later date, but not
in 5.0 or 5.1.)
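A minimal sketch of the now-rejected statement (hypothetical trigger name):
    CREATE TRIGGER user_bi BEFORE INSERT ON mysql.user FOR EACH ROW SET @x = 1;
    -- now fails with an error instead of potentially crashing the server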
INSERT triggers".
In cases when REPLACE was internally executed via update and the table had
ON UPDATE (ON DELETE) triggers defined, we exposed the fact that such an
optimization was used by calling the ON UPDATE (and not calling the ON DELETE)
triggers.
Such behavior contradicts our documentation, which describes REPLACE as
INSERT with an optional DELETE.
This fix simply disables this optimization for tables with ON DELETE triggers.
The optimization is still applied for tables which have ON UPDATE but
no ON DELETE triggers; we just don't invoke the ON UPDATE triggers in this case
and thus don't expose information about the optimization to the user.
Also added test coverage for the values returned by the ROW_COUNT() function (and
thus for the values returned by mysql_affected_rows()) for various forms of
INSERT.
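A minimal sketch of the changed behaviour (hypothetical names):
    CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
    SET @deletes = 0;
    CREATE TRIGGER t1_bd BEFORE DELETE ON t1 FOR EACH ROW SET @deletes = @deletes + 1;
    REPLACE INTO t1 VALUES (1, 10);
    REPLACE INTO t1 VALUES (1, 20);
    -- with an ON DELETE trigger present the update optimization is disabled,
    -- so the second REPLACE really deletes and re-inserts the row and the trigger fires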