This change set implements the DROP TRIGGER IF EXISTS functionality.
This change is considered a bug fix rather than a feature, because without it
there is no known way to write a database creation script that can create
a trigger without failing when executed against a database that may or may not
already contain a trigger of the same name.
Implementing this functionality closes an orthogonality gap between triggers
and stored procedures / stored functions (which do support the DROP IF
EXISTS syntax).
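As a minimal sketch of the kind of idempotent creation script this makes
possible (the table, column and trigger names here are only illustrative):

    DROP TRIGGER IF EXISTS t1_bi;
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1
      FOR EACH ROW SET NEW.created_at = NOW();

When the trigger does not exist, the DROP statement now produces a note
instead of an error, so the script can be run against both fresh and
already-populated databases.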
In sql_trigger.cc, in mysql_create_or_drop_trigger,
the code has been reordered to:
- perform the tests that do not depend on the file system (access()),
- get the locks (wait_if_global_read_lock, LOCK_open),
- call access(),
- perform the operation,
- write to the binlog,
- unlock (LOCK_open, start_waiting_global_read_lock).
This ensures that all the code that depends on the presence of the
trigger file is executed in the same critical section, and prevents
race conditions similar to the case fixed by Bug 14262:
- thread 1 executes DROP TRIGGER IF EXISTS, access() returns a failure
- thread 2 executes CREATE TRIGGER
- thread 2 logs CREATE TRIGGER
- thread 1 logs DROP TRIGGER IF EXISTS
The patch itself is based on code contributed by the MySQL community,
under the terms of the Contributor License Agreement (See Bug 18161).
The code that set up data to be passed to user-defined functions was very
old: it analyzed the "Type" of the data passed into the UDF, when it
really should analyze the "return_type", which is hard-coded for simple
Items and works correctly for complex ones such as functions.
---
Added test at Sergei's behest.
stored function invoked from different connections".
Invocation of a trigger which used a stored function from different
connections caused server crashes (for a non-debug server this happened
in a highly concurrent environment, but a debug server failed on an
assertion in a relatively simple scenario).
Item_func_sp was not safe to use in triggers (in other words, for
re-execution from different threads) because the artificial TABLE object
pointed to by Item_func_sp::dummy_table referenced an incorrect THD
object. To fix the problem we force re-initialization of this
object for each re-execution of the statement.
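A rough illustration of the scenario (all object names here are
hypothetical): a trigger that calls a stored function, fired from more
than one connection.

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1
      FOR EACH ROW SET NEW.a = f1();
    -- the same INSERT issued concurrently from several connections
    -- triggered the crash described above
    INSERT INTO t1 VALUES (0);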
specifying DEFAULT
This was not specific to datetime. When a column had no default value
and the user inserted DEFAULT, we would write
uninitialized memory to the table.
Now, we insist on writing a default value: a zero-ish value, the same
one that comes from inserting NULL into a NOT NULL field.
(This is, at best, really strange behavior that comes from allowing
sloppy usage, and serves as a good reason always to run one's server
in a strict SQL mode.)
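A hedged illustration of the affected case (hypothetical table,
non-strict SQL mode):

    CREATE TABLE t1 (a INT NOT NULL);  -- column a has no default value
    -- previously this could write uninitialized memory; now it stores the
    -- same zero-ish value as inserting NULL into a NOT NULL field would
    INSERT INTO t1 VALUES (DEFAULT);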
When compiling GROUP BY, Item_ref instances are dereferenced in
setup_copy_fields(), i.e. replaced with the corresponding Item_field
(if they point to one) or with an Item_copy_string in the other cases.
Since the Item_ref (in the Item_field case) is no longer used, the
information about the aliases stored in it is lost.
Fixed by preserving the column, table and DB aliases when dereferencing
the Item_ref.
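A small example of a query where that alias information matters (table
and alias names are hypothetical); the column alias reported in the
result set metadata should survive the copy-field step:

    SELECT t1.a AS renamed FROM t1 GROUP BY renamed;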
- Detect whether a table has a field of type MYSQL_TYPE_VAR_STRING while running
"CHECK TABLE t FOR UPGRADE" and indicate that it needs to be fixed
with "REPAIR TABLE t".
- When running "REPAIR TABLE t" or "ALTER TABLE t FORCE" on such a
table, install a special copy function to trim off the trailing spaces
which we can safely say the pre-5.0 mysqld did not put there.
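The corresponding statements, using a hypothetical table name:

    CHECK TABLE t1 FOR UPGRADE;
    -- if the check reports the old VAR_STRING format, either of these
    -- rebuilds the table and trims the trailing spaces:
    REPAIR TABLE t1;
    ALTER TABLE t1 FORCE;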
The problem was that VIEW columns always had implicit derivation.
Fix: the derivation is now copied from the original expression
given in the VIEW definition.
For example:
- a VIEW column which comes from a string constant
in the CREATE VIEW definition now has coercible derivation;
- a VIEW column having a COLLATE clause
in the CREATE VIEW definition now has explicit derivation.
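A sketch of the two cases listed above (view, table and column names are
hypothetical):

    CREATE VIEW v1 AS
      SELECT 'abc' AS c1,                          -- string constant: coercible derivation
             col1 COLLATE latin1_swedish_ci AS c2  -- COLLATE clause: explicit derivation
      FROM t1;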
on large length
Problem: Most (all) of the numeric inputs were being coerced into
int (32-bit) sized variables. This works OK for sane inputs; any input
larger than 2^32 (or 2^31 for signed variables) exhibited predictable
wrapping behavior (up to about 10^18) and then started behaving really
strangely past that point (since the conversion to a 64-bit int
from the DECIMAL type can do weird things on out-of-range numbers).
Solution: 1) Add many tests. 2) Convert input from the (u)long type to
(u)longlong. 3) Do (sometimes multiple) sanity checks on the input,
keeping in mind that sometimes a negative longlong is not a negative
longlong (if the unsigned_flag is set). 4) Emulate existing behavior
with respect to negative and "small" out-of-bounds values.
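A hedged sketch of the wrap-around described above; the commit message
does not name the affected functions, so the function used here is only
an assumption:

    -- 4294967297 = 2^32 + 1; coerced into a 32-bit variable it wraps around to 1,
    -- so the length argument used to behave as if 1 had been passed
    SELECT LEFT('abcdef', 4294967297);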
- When returning metadata for scalar subqueries the actual type of the
column was calculated based on the value type, which limits the actual
type of a scalar subselect to the set of (currently) 3 basic types:
integer, double precision or string. This is the reason that columns
of types other than the basic ones (e.g. date/time) are reported as
being of the corresponding basic type.
Fixed by storing/returning information about the column type in addition
to the result type.
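A small illustration (hypothetical table): before the fix, the metadata
for the subquery column was reported as one of the basic types (here, a
string type) instead of DATE.

    CREATE TABLE t1 (d DATE);
    SELECT (SELECT d FROM t1 LIMIT 1) AS sub;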
Problem: GROUP_CONCAT on a multi-byte column can truncate
in the middle of a multi-byte character when applying the
group_concat_max_len limit, producing an invalid
multi-byte character in the result string.
This is the second, easier version: it reuses the old "warning_for_row"
flag instead of introducing "result_is_full", which was
added in the previous commit.
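A hedged reproduction sketch (table, column and data are hypothetical),
assuming a utf8 column holding multi-byte characters:

    SET group_concat_max_len = 5;
    -- the concatenated result exceeds the limit; the cut must now fall on a
    -- character boundary instead of in the middle of a multi-byte sequence
    SELECT GROUP_CONCAT(name) FROM t1;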
The Item_func_mod objects never had maybe_null set, so users had no reason
to expect that the result can be NULL, and might therefore deduce wrong results.
Now, set maybe_null.
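For example, MOD with a zero divisor returns NULL, so the metadata must
allow NULL:

    SELECT MOD(10, 0);  -- returns NULL, hence maybe_null must be set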