Two failure cases were hard to tell apart: creating a trigger when one
with the same action time and event already existed for the table, and
creating a trigger whose name was already taken by some other trigger.
In both of these cases we emitted the ER_TRG_ALREADY_EXISTS error and
message.
Now we emit an ER_NOT_SUPPORTED_YET error with an appropriate additional
message in the first case. There is no sense in introducing a separate
error for this situation, since we plan to remove this limitation
eventually.
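A minimal sketch of the two cases (table and trigger names are
hypothetical):

    CREATE TABLE t1 (a INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;

    -- Same action time (BEFORE) and event (INSERT) on the same table:
    -- previously ER_TRG_ALREADY_EXISTS, now ER_NOT_SUPPORTED_YET.
    CREATE TRIGGER trg2 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 2;

    -- Reusing an existing trigger name: still ER_TRG_ALREADY_EXISTS.
    CREATE TRIGGER trg1 AFTER INSERT ON t1 FOR EACH ROW SET @x = 3;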
No test case, as the bug is exposed by an existing test case
(rpl_trigger.test when it is run under valgrind).
The warning was caused by memory corruption in the replication slave:
thd->db was pointing at a stack address that had previously been used by
sp_head::execute()::old_db. This happened because mysql_change_db
behaved differently in the replication slave and did not make a copy of
the argument assigned to thd->db.
The solution is to always free the old value of thd->db and allocate a
fresh copy, regardless of whether we're running in a replication slave
or not.
sp_grant_privileges(), the function that GRANTs EXECUTE + ALTER
privileges on a stored procedure, did so by creating a user entry with
no password; mysql_routine_grant() would then write that "change" to the
user table.
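Roughly the scenario (user and procedure names are hypothetical; this is
only a sketch of the failure mode, not a test case):

    -- A user who already has a password creates a routine; the server
    -- implicitly GRANTs EXECUTE and ALTER ROUTINE on it to the creator.
    CREATE PROCEDURE p1() SELECT 1;

    -- Before the fix, the implicit grant built a user entry with an
    -- empty password, and writing it back could wipe the password
    -- stored for that user:
    SELECT User, Password FROM mysql.user WHERE User = 'sp_user';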
and BUG#19208 "Test 'rpl000017' hangs on Windows".
Both bugs are caused by attempting to delete an open file and to
immediately create a new one with the same name. On Windows this can be
supported only on NT platforms (by using the FILE_SHARE_DELETE mode and
renaming the file before deletion). Because deleting files that are
still open is not supported on all platforms (e.g. Windows 98/ME), this
is to be considered harmful and should be eliminated by a "code
redesign".
Bug #18361: Triggers on mysql.user table cause server crash
Because they do not work, we do not allow creating triggers on tables
within the 'mysql' schema.
(They may be made to work and re-enabled at some later date, but not
in 5.0 or 5.1.)
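For example, something like the following (trigger name hypothetical)
now fails cleanly with an error instead of being accepted and later
crashing the server:

    CREATE TRIGGER mysql.bad_trg BEFORE INSERT ON mysql.user
    FOR EACH ROW SET @x = 1;
    -- rejected: triggers are not allowed on tables in the mysql schema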
The problem was that we restored the SQL_CACHE/SQL_NO_CACHE flags in a
SELECT statement from internal structures based on a value set later at
runtime, not on the original value set by the user.
The solution is to remember that original value.
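One situation where the user's original flag must be reproduced verbatim
is when the statement text is regenerated, e.g. for a view definition
(the view context here is an assumption for illustration, not stated
above):

    CREATE VIEW v1 AS SELECT SQL_NO_CACHE a FROM t1;
    -- The regenerated definition should keep the user's SQL_NO_CACHE,
    -- not whatever caching decision was made at run time.
    SHOW CREATE VIEW v1;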
Produce a warning if DATA/INDEX DIRECTORY is specified in an
ALTER TABLE statement.
Ignoring these options is documented in the symbolic links
section of the manual.
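For instance (paths are hypothetical):

    ALTER TABLE t1 DATA DIRECTORY = '/tmp/data' INDEX DIRECTORY = '/tmp/idx';
    SHOW WARNINGS;  -- the options are ignored and a warning is reported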
'SELECT DISTINCT a,b FROM t1' should not use a temporary table if there
is a unique index (or primary key) on a.
There are a number of other similar cases that can be calculated without
the use of a temporary table: multi-part unique indexes, primary keys,
or using GROUP BY instead of DISTINCT.
When a GROUP BY/DISTINCT clause contains all key parts of a unique
index, it is guaranteed that the fields of the clause will be unique,
so we can optimize away GROUP BY/DISTINCT altogether.
This optimization has two effects:
* there is no need to create a temporary table to compute the
GROUP/DISTINCT operation (or the temporary table will be smaller if only
GROUP BY is removed and DISTINCT stays, or if DISTINCT is removed and
GROUP BY stays)
* the statement in effect becomes updatable in Connector/Java, because
the result set columns will be direct references to the primary key of
the table (instead of to the temporary table that they currently
reference)
Implemented a check that will optimize away GROUP BY/DISTINCT for
queries like the above.
Currently it works only for a single non-constant table in the FROM
clause.
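A sketch of the single-table case described above (the schema is
hypothetical):

    CREATE TABLE t1 (
      a INT NOT NULL,
      b INT,
      UNIQUE KEY (a)
    );

    -- 'a' is unique and NOT NULL, so every (a, b) row is already
    -- distinct: DISTINCT can be dropped and no temporary table is
    -- needed (EXPLAIN should not show 'Using temporary').
    EXPLAIN SELECT DISTINCT a, b FROM t1;

    -- Likewise, grouping on all key parts of the unique index can be
    -- optimized away.
    EXPLAIN SELECT a, b FROM t1 GROUP BY a;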
- make sure to allocate just enough pages in the fragments by using the
actual row count from the backup, to avoid over-allocation of pages to
fragments and thus avoid the bug