Do not issue a 'read-only' error for DROP TEMPORARY TABLE on a non-existing temporary table.
Instead produce the correct "Unknown table" error, or a warning when the IF EXISTS clause was specified.
To the documentor: the part of the manual describing the 'read_only' system variable should be clarified to state the following:
"When the read_only variable is set to ON, all operations that create/update/drop tables are rejected, with the following exceptions:
1. Any operation performed by the replication thread on a slave server
2. Any operation performed by a user that has the SUPER privilege
3. Any operation that creates/updates/drops only temporary tables"
(this is the 5.0 patch, because 4.1 differs)
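A minimal sketch of the intended behavior (the table name is hypothetical), for a connection without the SUPER privilege while read_only is ON:

  CREATE TEMPORARY TABLE tmp1 (i INT);  -- allowed: touches only a temporary table
  DROP TEMPORARY TABLE tmp1;            -- allowed
  DROP TEMPORARY TABLE IF EXISTS tmp1;  -- "Unknown table" warning, not a read-only error
  DROP TEMPORARY TABLE tmp1;            -- "Unknown table" error, not a read-only error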
The chained operations were performed in an improper order.
To the documentor: ENABLE|DISABLE KEYS combined with RENAME TO, and no other
ALTER TABLE clause, leads to a server crash, independent of the presence of
indices and data in the table.
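A sketch of the crashing statement (the table names are hypothetical; either
ENABLE or DISABLE KEYS combined with RENAME TO triggered it):

  CREATE TABLE t1 (a INT, KEY (a));
  ALTER TABLE t1 ENABLE KEYS, RENAME TO t2;  -- crashed the server before this fix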
The problem was that some functions (namely IN() starting with 4.1, and
CHAR() starting with 5.0) could return NULL under certain conditions
while not setting their maybe_null flag. Because of that there could
be problems with the 'IS NULL' check, and with statements that depend on the
function's value domain, like CREATE TABLE t1 SELECT 1 IN (2, NULL);.
The fix is to set maybe_null correctly.
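A sketch of the visible consequence of the missing flag, using the statement
quoted above (the resulting column name is whatever MySQL derives from the
expression):

  CREATE TABLE t1 SELECT 1 IN (2, NULL);
  -- the expression evaluates to NULL, so the created column must be
  -- nullable; with maybe_null unset it was presumably declared NOT NULL,
  -- silently storing 0 instead of NULL.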
Problem: When we have a really large number (between 2^63 and 2^64)
as the left side of the mod operator, it gets improperly coerced
into a signed value.
Solution: Added a check to see if the "negative" number is really
positive, and if so, cast it.
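A minimal sketch of the symptom; the literal below is 2^64 - 1, which fits in
an unsigned 64-bit value but not in a signed one:

  SELECT 18446744073709551615 % 10;
  -- coerced to a signed value the operand reads as -1, presumably
  -- yielding -1; the correct result is 5.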
The problem was that THD::row_count_func was zeroed too. It was zeroed
as a fix for bug 4905 "Stored procedure doesn't clear for "Rows affected"".
However, the proper solution is not to zero it, because THD::row_count_func
has already been set to -1 in mysql_execute_command() by a later fix, which
obsoletes the incorrect fix for bug 4905.
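THD::row_count_func backs the SQL ROW_COUNT() function, so the value is
observable from SQL; a minimal sketch, assuming the erroneous zeroing was
visible through ROW_COUNT() after a statement (the table name is hypothetical):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  SELECT ROW_COUNT();  -- must report 2; with the value zeroed it reported 0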
The regression was caused by the fix for bug 14767. When INSERT ... SELECT
used a view in the SELECT list that was not inlined, and there was an
active transaction, the server could crash in Query_cache::invalidate.
On INSERT ... SELECT only the table being inserted into is invalidated.
Thus views that can't be inlined are skipped from invalidation.
The bug manifests itself in two ways, so there are two test cases:
one checks that only the table being inserted into is invalidated,
and the other checks that there is no crash on INSERT ... SELECT.
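A sketch of the crashing scenario (names are hypothetical; ALGORITHM=TEMPTABLE
forces the view not to be inlined, and BEGIN provides the active transaction):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT) ENGINE=InnoDB;
  CREATE ALGORITHM=TEMPTABLE VIEW v1 AS SELECT a FROM t2;
  BEGIN;
  INSERT INTO t1 SELECT a FROM v1;  -- could crash in Query_cache::invalidate
  COMMIT;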
This change set implements the DROP TRIGGER IF EXISTS functionality.
It is classified as a bug fix and not a feature, because without it
there is no known method to write a database creation script that can create
a trigger without failing when executed on a database that may or may not
already contain a trigger of the same name.
Implementing this functionality closes an orthogonality gap between triggers
and stored procedures / stored functions (which do support the DROP IF
EXISTS syntax).
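Usage in an idempotent creation script, which is exactly the case this fix
enables (trigger, table and column names are hypothetical):

  DROP TRIGGER IF EXISTS trg1;  -- a note, not an error, when trg1 does not exist
  CREATE TRIGGER trg1 BEFORE INSERT ON t1
    FOR EACH ROW SET @last_a := NEW.a;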
In sql_trigger.cc, in mysql_create_or_drop_trigger,
the code has been reordered to:
- perform the tests that do not depend on the file system (access())
- get the locks (wait_if_global_read_lock, LOCK_open)
- call access()
- perform the operation
- write to the binlog
- unlock (LOCK_open, start_waiting_global_read_lock)
This is to ensure that all the code that depends on the presence of the
trigger file is executed in the same critical section,
and to prevent race conditions similar to the case fixed by Bug 14262:
- thread 1 executes DROP TRIGGER IF EXISTS, access() returns a failure
- thread 2 executes CREATE TRIGGER
- thread 2 logs CREATE TRIGGER
- thread 1 logs DROP TRIGGER IF EXISTS
The patch itself is based on code contributed by the MySQL community,
under the terms of the Contributor License Agreement (See Bug 18161).
The code that set up data to be passed to user-defined functions was very
old and analyzed the "Type" of the data that was passed into the UDF, when
it really should analyze the "return_type", which is hard-coded for simple
Items and works correctly for complex ones like functions.
---
Added test at Sergei's behest.
Fix for the bug "... stored function invoked from different connections".
Invocation of a trigger which was using a stored function from different
connections caused server crashes (for a non-debug server this happened
in a highly concurrent environment, but a debug server failed on an assertion
in a relatively simple scenario).
Item_func_sp was not safe to use in triggers (in other words, for
re-execution from different threads) as the artificial TABLE object
pointed to by Item_func_sp::dummy_table referenced an incorrect THD
object. To fix the problem we force re-initialization of this
object for each re-execution of the statement.
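A sketch of the failing scenario (names are hypothetical); before the fix,
firing the trigger from a second connection could hit the stale THD pointer:

  CREATE TABLE t1 (a INT);
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE TRIGGER trg1 BEFORE INSERT ON t1
    FOR EACH ROW SET @x := f1();
  -- connection 1:
  INSERT INTO t1 VALUES (1);
  -- connection 2 (a debug server asserted here before the fix):
  INSERT INTO t1 VALUES (2);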
Fix for the bug "... specifying DEFAULT".
This was not specific to datetime. When there is no default value
for a column and the user inserts DEFAULT, we would write
uninitialized memory to the table.
Now we insist on writing a default value, a zero-ish value, the same
one that comes from inserting NULL into a NOT NULL field.
(This is, at best, really strange behavior that comes from allowing
sloppy usage, and serves as a good reason to always run one's server
in a strict SQL mode.)
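A sketch of the affected statement (the column definition is hypothetical;
a non-strict SQL mode is required for the sloppy usage to be accepted):

  SET sql_mode = '';
  CREATE TABLE t1 (d DATETIME NOT NULL);  -- no DEFAULT clause
  INSERT INTO t1 VALUES (DEFAULT);
  -- before: uninitialized memory was written to the row;
  -- after: the zero-ish value '0000-00-00 00:00:00' is stored, the same
  -- value produced by inserting NULL into this NOT NULL column.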
When compiling GROUP BY, Item_ref instances are dereferenced in
setup_copy_fields(), i.e. replaced with the corresponding Item_field
(if they point to one) or with Item_copy_string in the other cases.
Since the Item_ref (in the Item_field case) is no longer used, the
information about the aliases stored in it is lost.
Fixed by preserving the column, table and DB aliases when dereferencing Item_ref.
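A sketch of the symptom as seen through result set metadata (names are
hypothetical; the assumption is that the lost alias was visible to clients
reading column metadata):

  CREATE TABLE t1 (a INT);
  SELECT a AS my_alias FROM t1 GROUP BY my_alias;
  -- before the fix the result metadata could report the base name 'a'
  -- instead of the alias 'my_alias'; after the fix the alias survives.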