list using a function
When executing dependent subqueries they are re-initialized and re-executed for
each row of the outer context.
The cause of the bug is that during subquery re-initialization/re-execution,
the optimizer reallocates JOIN::join_tab and JOIN::table in make_simple_join(),
and the local variable 'sortorder' in create_sort_index(), which is
allocated by make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory pool
while re-initializing query plan structures between subquery re-executions.
All such items must be cached and reused because the thread's memory pool
is freed only at the end of the whole query.
Note that they must be cached and reused even for queries that are not
otherwise cacheable, because otherwise the thread's memory pool grows
every time such a query is re-executed.
We provide additional members to the JOIN structure to store references
to the items that need to be cached.
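For illustration, here is a hypothetical query shape (t1, t2 and their
columns are assumed names) where the dependent subquery, including its
filesort, is re-inited and re-executed for every row of t1:

SELECT t1.a,
       (SELECT t2.b FROM t2
        WHERE t2.c = t1.c
        ORDER BY t2.d LIMIT 1) AS b
FROM t1;

Before the fix every such re-execution allocated a fresh 'sortorder' (and
the structures from make_simple_join()) in the thread's memory pool; with
the cached JOIN members these allocations happen only once per query.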
account predicates that become sargable after reading const tables.
In some cases this resulted in choosing non-optimal execution plans.
Now info about such potentially sargable predicates is saved in
an array, and after reading the const tables we check whether
these predicates have become sargable.
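For example (a sketch with assumed table, column and index names):

SELECT * FROM t1, t2
WHERE t1.pk = 5          -- t1 is read as a const table
  AND t2.key_col = t1.a; -- after reading t1 this becomes t2.key_col = <const>,
                         -- i.e. sargable, so index access on t2 can be chosen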
When using index for group by and range access the server isolates
a set of ranges based on the conditions over the key parts of the
index used. Then it uses only the ranges over the GROUP BY fields to
jump from one group to another. Since the GROUP BY fields may form a
prefix over the index, we may use only a prefix of the ranges produced
by the range optimizer.
Each range carries a flag that tells whether it includes its border
values. The problem is that when using a range prefix, the last range is
open because it assumes that there is a range on the next key part. Thus
when we use a prefix range as it is, it excludes all border values.
The solution is that when ignoring the suffix of the range conditions
(to jump over the GROUP BY prefix only), the server must change the
remaining intervals so that they always contain their borders, e.g.
if the whole range was:
(1,-inf) <= (<group_by_col>,<min_max_arg_col>) < (1, 3) we must make
(1) <= (<group_by_col>) <= (1), because (a,b) < (c1,c2) means:
a < c1 OR (a = c1 AND b < c2).
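For instance, the interval above corresponds to a query of roughly this
shape (a sketch; the names match the placeholders above and the index
covers (group_by_col, min_max_arg_col)):

SELECT group_by_col, MIN(min_max_arg_col)
FROM t1
WHERE group_by_col = 1 AND min_max_arg_col < 3
GROUP BY group_by_col;

Taking only the (group_by_col) prefix of the open-ended interval as it is
would yield the empty interval (1) <= (<group_by_col>) < (1) and wrongly
exclude the border value 1 itself.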
strings
MySQL is setting the flag HA_END_SPACE_KEYS for all the keys that reference
text or varchar columns with a collation different from binary.
This was done to handle correctly the situation where a lookup on such a key
may return more than 1 row because of the presence of many rows that differ
only by the amount of trailing space in the table's string column.
However, inserting such values violates the unique checks on
INSERT/UPDATE, so rows that differ only in trailing spaces cannot coexist
under a unique key anyway. Thus that flag must not be set, as it prevents
the optimizer from choosing a faster access method.
This fix removes the setting of the HA_END_SPACE_KEYS flag.
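As an illustration (a sketch; table name assumed): under a non-binary
collation 'a' and 'a ' compare as equal, so the duplicate never gets into
the table in the first place:

CREATE TABLE t1 (a VARCHAR(10), UNIQUE KEY (a));
INSERT INTO t1 VALUES ('a');
INSERT INTO t1 VALUES ('a ');  -- rejected by the unique check as a duplicate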
When resolving unqualified name references MySQL was not
checking the item type of the reference. Thus
e.g. a string literal item, which by convention has a name
equal to its string value, would also work as a reference to
a SELECT list item or a table field.
Fixed by allowing only Item_ref or Item_field to be referenced by
(unqualified) name.
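A sketch of the construction that used to be accepted (names assumed,
and t1 has no column named b):

SELECT 'b' FROM t1 GROUP BY b;

Previously the unqualified reference b could resolve to the string
literal 'b' in the SELECT list, because the literal's item name equals
its value; now this statement fails with an unknown column error.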
mysql_alter_table() was able to rename only a table.
The view/table renaming code is moved from the function rename_tables()
to a new function called do_rename().
mysql_alter_table() now calls it when it needs to rename a view.
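A sketch (view name assumed):

CREATE VIEW v1 AS SELECT 1;
ALTER TABLE v1 RENAME TO v2;  -- the rename is now delegated to do_rename()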
Bug #21785 "Server crashes after rename of the log table" and
Bug #21966 "Strange warnings on create like/repair of the log
tables"
According to the patch, from now on, one should use RENAME to
perform a log table rotation (this should also be reflected in
the manual).
Here is a sample:
use mysql;
CREATE TABLE IF NOT EXISTS general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
The rules for RENAME of the log tables are as follows:
IF 1. Log tables are enabled
AND 2. Rename operates on the log table and nothing is being
renamed to the log table.
DO 3. Throw an error message.
ELSE 4. Perform rename.
The RENAME query itself will go to the old (backup) table. This is
consistent with the behaviour we have with the binlog ROTATE LOGS
statement.
Other problems, which are solved by the patch, are:
1) REPAIR of a log table is now an exclusive operation (as it should be);
this also eliminates lock-related warnings.
2) CREATE TABLE ... LIKE now uses a usual read lock on the source table
rather than a name lock, which is too restrictive. This way we get rid of
another log-table-related warning, which occurred because of the above
fact (as a side effect, the name lock resulted in a warning).
should fail to create
The problem was that these errors were checked during view
creation, which doesn't happen when CREATE VIEW is a statement of
a created stored routine.
The solution is to perform the checks at parse time. The idea of the
fix is that the parser checks whether a construction just parsed is allowed
in the current circumstances by testing certain flags, and these flags are
reset for VIEWs.
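A sketch of such a bogus routine (names assumed); the view body refers
to the routine's parameter, which a view cannot do, so the CREATE
PROCEDURE is now rejected at parse time:

CREATE PROCEDURE p1(x INT)
  CREATE VIEW v1 AS SELECT x;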
The side effect of this change is that if users already have
such bogus routines, they will now get an error when trying to do
SHOW CREATE PROCEDURE proc;
(and some other statements), and when trying to execute such a routine they will get
ERROR 1457 (HY000): Failed to load routine test.p5. The table mysql.proc is missing, corrupt, or contains bad data (internal code -6)
However, there should be very few such users (if any), and they may
(and should) drop these bogus routines.
The Cached_item_decimal::cmp() method wasn't checking for a null pointer
returned from val_decimal() of the item being cached.
This led to a server crash.
The Cached_item_decimal::cmp() method now checks for null values.
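One plausible statement shape that exercises this path (a sketch; table
and column names assumed), where group-change detection caches a DECIMAL
item that evaluates to NULL:

CREATE TABLE t1 (d DECIMAL(10,2));
INSERT INTO t1 VALUES (NULL), (1.00);
SELECT d, COUNT(*) FROM t1 GROUP BY d;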
hangs on Linux
If REPAIR TABLE ... USE_FRM is issued for a table that is located in a
database other than the default one, a server crash could happen.
In reopen_name_locked_table() take the database name from table_list (user
specified or default database) instead of from thd (default database).
Affects 4.1 only.
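A sketch of the failing scenario (names assumed), with the default
database different from the table's database:

CREATE DATABASE db1;
CREATE TABLE db1.t1 (a INT);
USE test;
REPAIR TABLE db1.t1 USE_FRM;  -- could crash before the fix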
Examined rows are counted for every join part. The per-join-part
counter was incremented over all iterations, but the result variable
was overwritten at the end of every iteration. The final result was
therefore the number of rows examined by the join part that ended its
execution last; the counts of the other join parts were lost.
Now we reset the per-join-part counter before every iteration and
add it to the result variable at the end of the iteration. That
way we get the sum over all iterations of all join parts.
No test case. Testing this needs a look into the slow query log.
I don't know of a way to do this portably with the test suite.