specific to the 5.0 version of the patch is motivated by the fact that a wrapper over
MYSQL_LOG::write cannot help in 5.0, where the query's charset is embedded into the event instance in the constructor.
Stored procedure execution sometimes placed the addresses of automatic (stack) variables
in the list of Item changes to undo in THD::rollback_item_tree_changes().
This could cause stack corruption.
over two views when using syntax with curly braces.
Each outer join operation must be placed in a separate
nest. This was not done when the syntax with curly braces
was used. In some cases, in particular for queries with an outer
join operation over views, it could cause a crash.
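For illustration, a minimal sketch of the statement shape involved (view and table names here are hypothetical), using the ODBC curly-brace escape syntax for an outer join over two views:

  -- hypothetical schema for illustration
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE VIEW v2 AS SELECT a FROM t2;
  SELECT * FROM { OJ v1 LEFT OUTER JOIN v2 ON v1.a = v2.a };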
itself when executing queries referring to a view with GROUP BY
an expression containing a non-constant interval.
It happened because Item_date_add_interval::eq ignored the
fact that the method can be applied to an expression of the form
date(col) + interval time_to_sec(col) second
at a time when col could not yet be evaluated.
An attempt to evaluate time_to_sec(col) in this method resulted
in a crash.
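A minimal sketch of the kind of view involved (table and column names are hypothetical):

  -- hypothetical schema for illustration
  CREATE TABLE t1 (col DATETIME);
  CREATE VIEW v1 AS
    SELECT date(col) + interval time_to_sec(col) second AS d, count(*) AS c
    FROM t1
    GROUP BY date(col) + interval time_to_sec(col) second;
  SELECT * FROM v1;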
The pattern used to generate the binlog statement for a DROPped temporary table in close_temporary_tables()
was buggy: it could not deal with a table whose name contains a grave accent.
The fix uses `append_identifier()' for quoting and for doubling embedded accents.
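For example (table name is hypothetical), an embedded grave accent must be doubled when the server writes the DROP to the binlog:

  CREATE TEMPORARY TABLE `tab``le` (a INT);
  -- on disconnect the server should log something like:
  --   DROP TEMPORARY TABLE IF EXISTS `tab``le`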
from within triggers
Add support for passing NEW.x as INOUT and OUT parameters to stored
procedures. Passing NEW.x as an INOUT parameter requires SELECT and
UPDATE privileges on that column, and passing it as an OUT parameter
requires only the UPDATE privilege.
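A minimal sketch of the new capability (table, procedure and trigger names are hypothetical):

  CREATE TABLE t1 (x INT);
  CREATE PROCEDURE p_set(OUT v INT) SET v = 42;
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
    FOR EACH ROW CALL p_set(NEW.x);
  INSERT INTO t1 VALUES (0);  -- the inserted row gets x = 42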
When a view statement is compiled at CREATE VIEW time, most of the
optimizations should not be done. Finding the right optimization
for a subquery is one of them.
Unfortunately the optimizer resolves the column references of
the left expression of IN subqueries in the process of deciding
which optimization to use (if any). So there has to be a
special case in Item_in_subselect::fix_fields(): check the
validity of the left expression of IN subqueries in CREATE VIEW
mode and then proceed as normal.
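A sketch of the statement shape affected (table and view names are hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS
    SELECT a FROM t1 WHERE a IN (SELECT a FROM t1);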
garbles data if longer than 766 chars.
The problem is that when a stored routine returns BLOBs to the previous
caller, the BLOBs are shallow-copied (i.e. only pointers to the data are
copied). The fix is to copy the BLOB data as well.
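A sketch of a routine returning a BLOB longer than 766 bytes (function name is hypothetical):

  CREATE FUNCTION f_blob() RETURNS BLOB DETERMINISTIC
    RETURN REPEAT('a', 1000);
  SELECT LENGTH(f_blob());  -- expected 1000; the data was garbled before the fix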
Re-work best_access_path() and find_best() to reuse E(#rows(range access)) as
E(#rows(ref[_or_null](const) access)) only when it is appropriate.
[This is the final cumulative patch]
which explicitly or implicitly uses stored function gives 'Table not locked'
error"
The test case for these bugs crashed in --ps-protocol mode. The crash was caused
by incorrect usage of the check_grant() routine from the create_table_precheck()
routine. The former assumes that either the number of tables to be inspected by
it is limited explicitly (i.e. it is not UINT_MAX) or the table list used and
the thd->lex->query_tables_own_last value correspond to each other.
create_table_precheck() did not fulfil this condition and the crash happened.
The fix simply sets the number of tables to be inspected by check_grant() to 1.
"alter table from MyISAM to MERGE lost data without errors and warnings"
Add a new handlerton flag which prevents the user from altering a table's storage
engine to a storage engine that would lose data. Both 'blackhole' and
'merge' are marked with the new flag.
Tests included.
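For illustration (table name is hypothetical), this is the kind of statement that used to lose data silently and should now be rejected:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1), (2);
  ALTER TABLE t1 ENGINE=MERGE;  -- previously 'succeeded' and lost the rows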
The binlog lacks encoding info for a DROPped temporary table.
The idea of the fix is to switch temporarily to system_charset_info when a temporary table
is DROPped for the binlog. Since it is the server, not the client, that automatically generates the query,
the binlog should be written using the server's encoding for the generated DROP.
`write_binlog_with_system_charset()' is introduced to replace similar problematic places in the code.
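A sketch of the scenario (connection charset and table name are hypothetical): the client creates a temporary table under its own character set, but the DROP is later generated by the server:

  SET NAMES utf8;
  CREATE TEMPORARY TABLE `tmp_tëst` (a INT);
  -- the DROP for this table is written to the binlog by the server, not the
  -- client, so it must be logged using the server's character set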
When converting DISTINCT to GROUP BY where the columns come from the covering
index and appear twice in the SELECT list, the optimizer creates an
improper processing sequence. This happens because the columns
of the covering index are not recognized as such and are treated as non-index
columns.
Generally speaking, duplicate columns can safely be removed from the GROUP
BY/DISTINCT list because this will not add or remove rows in the
result set. Duplicates can be removed even if they are not consecutive
(unlike ORDER BY, where duplicate columns can be removed
only if they are consecutive).
So we can safely transform "SELECT DISTINCT a,a FROM ... ORDER BY a" to
"SELECT a,a FROM ... GROUP BY a ORDER BY a" instead of
"SELECT a,a FROM .. GROUP BY a,a ORDER BY a". We can even transform
"SELECT DISTINCT a,b,a FROM ... ORDER BY a,b" to
"SELECT a,b,a FROM ... GROUP BY a,b ORDER BY a,b".
The fix for this bug consists of checking for duplicate columns in the SELECT
list when constructing the GROUP BY list while transforming DISTINCT into GROUP
BY, and skipping the ones that are already in it.
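A minimal sketch of the duplicate-column case with a covering index (table and index names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY cover_ab (a, b));
  SELECT DISTINCT a, a FROM t1 ORDER BY a;
  -- with the fix this is processed as: SELECT a, a FROM t1 GROUP BY a ORDER BY a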
or implicitly uses stored function gives "Table not locked" error'
A CREATE TABLE ... SELECT ... statement which explicitly or implicitly
(through a view) used a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of the CREATE TABLE ... SELECT
code, which first opens and locks the tables of the SELECT statement itself,
and then, with the SELECT tables locked, creates the .FRM file, opens it and
acquires a lock on it. This scheme opens a possibility for a deadlock, which
has been present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm: no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception to this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
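A sketch of the statement shapes that failed (function, view and table names are hypothetical):

  CREATE FUNCTION f_one() RETURNS INT DETERMINISTIC RETURN 1;
  CREATE TABLE t2 SELECT f_one() AS a;   -- explicit use of a stored function
  CREATE VIEW v1 AS SELECT f_one() AS a;
  CREATE TABLE t3 SELECT a FROM v1;      -- implicit use through a view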