the master but on the slave
MySQL can decide to "downgrade" an INSERT DELAYED statement
to a normal INSERT in certain situations.
One such situation is when the slave is replaying a
replication feed.
However, INSERT DELAYED is logged even if there are no updates,
whereas a normal INSERT is not logged in such cases.
Fixed by always logging a "downgraded" INSERT DELAYED, even
if there were no updates.
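A minimal sketch of the logging scenario (schema invented, not from
the patch):

  CREATE TABLE t (a INT PRIMARY KEY) ENGINE=MyISAM;
  INSERT INTO t VALUES (1);
  -- This duplicate-key statement changes no rows; after being
  -- downgraded to a normal INSERT (e.g. on a slave replaying the
  -- replication feed) it must still be written to the binary log.
  INSERT DELAYED IGNORE INTO t VALUES (1);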
When the SQL_BIG_RESULT flag is specified, SELECT should store items from the
select list in the filesort data and use them when sending results to the
client. The get_addon_fields function is responsible for creating the necessary
structures for that, but this function was allowed to do so only for SELECT and
INSERT .. SELECT queries. This made SQL_BIG_RESULT useless for
CREATE .. SELECT queries.
Now get_addon_fields allows storing select list items in the filesort
data for CREATE .. SELECT queries as well.
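A sketch of a query shape that now benefits (table names invented):

  CREATE TABLE t1 (a INT, b VARCHAR(100));
  -- The SQL_BIG_RESULT hint previously had no effect here; now the
  -- select list items are carried in the filesort data just as for
  -- a plain SELECT.
  CREATE TABLE t2
    SELECT SQL_BIG_RESULT a, COUNT(*) AS cnt FROM t1 GROUP BY a;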
If a primary key is defined over a column c of ENUM type, then
the EXPLAIN command for a look-up query of the form
SELECT * FROM t WHERE c=0
said that the query had an impossible WHERE condition, though the
query correctly returned a non-empty result set when the table indeed
contained rows with the special error empty string for column c.
This misbehavior was due to a bug in the function
Field_enum::store(longlong,bool) that erroneously returned 1 if
the value to be stored was equal to 0.
Note that the method
Field_enum::store(const char *from,uint length,CHARSET_INFO *cs)
correctly returned 0 if a value of the error empty string
was stored.
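A sketch of the misbehavior (schema invented):

  CREATE TABLE t (c ENUM('a','b') NOT NULL PRIMARY KEY);
  -- With a non-strict sql_mode, the invalid value is stored as the
  -- error empty string, whose numeric index is 0.
  INSERT IGNORE INTO t VALUES ('x');
  EXPLAIN SELECT * FROM t WHERE c=0;  -- wrongly said: Impossible WHERE
  SELECT * FROM t WHERE c=0;          -- correctly returns the row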
This bug manifested itself for join queries with GROUP BY and HAVING clauses
whose SELECT lists contained DISTINCT. It occurred when the optimizer could
deduce that the result set would have at most one row.
The bug could lead to wrong result sets for queries of this type because
HAVING conditions were erroneously ignored in some cases in the function
remove_duplicates.
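A hedged sketch of the affected query shape (schema invented):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  CREATE TABLE t2 (a INT PRIMARY KEY, c INT);
  -- Constant look-ups on the primary keys let the optimizer deduce
  -- an at-most-one-row result; remove_duplicates could then skip
  -- the HAVING condition.
  SELECT DISTINCT t1.b FROM t1 JOIN t2 ON t1.a = t2.a
  WHERE t1.a = 1
  GROUP BY t1.b HAVING t1.b > 10;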
After dumping triggers, mysqldump copied
the value of the OLD_SQL_MODE variable to the SQL_MODE
variable. If the --compact option of mysqldump was
not set, the OLD_SQL_MODE variable had the value
of the uninitialized SQL_MODE variable, so
usually the NO_AUTO_VALUE_ON_ZERO option of the
SQL_MODE variable was discarded.
This fix is for the non-"--compact" mode of mysqldump,
because mysqldump --compact never sets SQL_MODE to the
value of NO_AUTO_VALUE_ON_ZERO.
The dump_triggers_for_table function has been modified
to restore the previous value of the SQL_MODE variable after
dumping triggers, using the SAVE_SQL_MODE temporary
variable.
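A rough sketch of the intended dump sequence (statements approximate,
not verbatim mysqldump output):

  SET @SAVE_SQL_MODE=@@SQL_MODE;
  SET SQL_MODE='';
  -- CREATE TRIGGER statements go here
  SET SQL_MODE=@SAVE_SQL_MODE;  -- restores e.g. NO_AUTO_VALUE_ON_ZERO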
ORDER BY primary_key on InnoDB table
Queries that use an InnoDB secondary index to retrieve
data don't need to sort for ORDER BY primary key
if the secondary index is compared to constant(s).
They can also skip sorting if ORDER BY contains both the
secondary key parts and the primary key parts (in
that order).
This is because InnoDB returns the rows in order of the
primary key for rows with the same values of the secondary
key columns.
Fixed by preventing the temporary table sort for the qualifying
queries.
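A minimal sketch of a qualifying query (schema invented):

  CREATE TABLE t (
    pk INT PRIMARY KEY,
    sk INT,
    payload VARCHAR(32),
    KEY sk_idx (sk)
  ) ENGINE=InnoDB;
  -- sk is compared to a constant, so InnoDB already returns the
  -- matching rows in pk order and no sort is needed.
  SELECT * FROM t WHERE sk = 5 ORDER BY pk;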
by long running transaction
On Windows, opened files can't be deleted. There was a special
upgraded lock mode (TL_WRITE instead of TL_WRITE_ALLOW_READ)
in ALTER TABLE to make sure nobody had the table open
when deleting the old table in ALTER TABLE. This special mode
was causing ALTER TABLE to hang waiting on a lock inside InnoDB.
The special lock is no longer necessary, as the server now
closes the tables it needs to delete in ALTER TABLE.
Fixed by removing the special lock.
Note that this also reverses the fix for bug 17264, which deals with
another consequence of this special lock mode being used.
The Item_date_typecast::val_int function didn't reset the null_value flag.
This made all values following the first NULL value be treated as NULLs
and led to a wrong result.
Now the Item_date_typecast::val_int function correctly sets the null_value
flag for both NULL and non-NULL values.
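A hedged illustration (data invented):

  CREATE TABLE t (d DATETIME);
  INSERT INTO t VALUES
    ('2007-01-01 10:00:00'), (NULL), ('2007-01-02 10:00:00');
  -- Before the fix, the stale null_value flag could make every row
  -- after the first NULL behave as NULL in the cast.
  SELECT d, CAST(d AS DATE) FROM t;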
a temporary table.
The result string of Item_func_group_concat wasn't initialized in the
copy constructor of the Item_func_group_concat class. This led to a
wrong character set of the GROUP_CONCAT result when the select employs a
temporary table.
The copy constructor of the Item_func_group_concat class now correctly
initializes the character set of the result string.
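A sketch of an affected query shape (schema invented):

  CREATE TABLE t (a INT, s VARCHAR(10) CHARACTER SET utf8);
  -- GROUP BY on a non-indexed column goes through a temporary
  -- table; the copied GROUP_CONCAT item lost its result charset.
  SELECT a, GROUP_CONCAT(s) FROM t GROUP BY a;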
Optimization of queries with DETERMINISTIC functions in the
WHERE clause was not effective: a sequential scan was always
used.
Now an SF with the DETERMINISTIC flag is treated as constant
when its arguments are constants (or the SF has no arguments).
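A minimal sketch (function and schema invented):

  CREATE TABLE t (a INT PRIMARY KEY);
  CREATE FUNCTION f(x INT) RETURNS INT DETERMINISTIC RETURN x * 2;
  -- f(5) is now treated as a constant, so the look-up can use the
  -- primary key instead of a sequential scan.
  SELECT * FROM t WHERE a = f(5);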
For each view, the mysqldump utility creates a temporary table
with the same name and the same columns as the view,
in order to satisfy views that depend on this view.
After the creation of all tables, mysqldump drops all
temporary tables and creates the actual views.
However, the --skip-add-drop-table and --compact flags disable
DROP TABLE statements for those temporary tables. Thus, it was
impossible to create the views because of the existence of
temporary tables with the same names.
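A rough sketch of the dump structure for a view v1 (statements
approximate, not verbatim mysqldump output):

  -- placeholder table standing in for the view
  CREATE TABLE v1 (a INT);
  -- ... other tables and views ...
  DROP TABLE IF EXISTS v1;  -- suppressed by --skip-add-drop-table
  CREATE VIEW v1 AS SELECT a FROM t1;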
The get_time_value function is added. It is used to obtain TIME values both
from items that can return time as an integer and from items that can return
time only as a string.
The Arg_comparator::compare_datetime function now uses a pointer to a getter
function to obtain the values to compare. Now this function is also used for
comparison of TIME values.
The get_value_func variable is added to the Arg_comparator class.
It points to a getter function for the DATE/DATETIME/TIME comparator.
When storing a DATETIME value, Field_newdate::store returned the
'value was cut' error even if the thd->count_cuted_fields flag was set to
CHECK_FIELD_IGNORE. This made the range optimizer think that there was no
appropriate data in the table and thus return an empty set.
Now the Field_newdate::store function returns a conversion error only if the
thd->count_cuted_fields flag isn't set to CHECK_FIELD_IGNORE.
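A hedged sketch of an affected query (schema invented):

  CREATE TABLE t (d DATE, KEY (d));
  INSERT INTO t VALUES ('2007-06-01');
  -- The DATETIME constant is truncated to DATE inside the range
  -- optimizer; the spurious 'value was cut' error made it decide
  -- the range was empty.
  SELECT * FROM t WHERE d > '2007-05-31 10:00:00';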
Added an option to yassl to allow a "quiet shutdown", as openssl does.
This option causes the SSL libs to NOT perform the close_notify handshake
during shutdown. This fixes a hang we experienced because we hold a lock
during socket shutdown.
TIME values were compared by the BETWEEN function as strings. This led to a
wrong result in cases when some of the arguments were less than 100 hours
and others were greater.
Now, if all 3 arguments of the BETWEEN function are of the TIME type, then
they are compared as integers.
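An illustration of the string-comparison pitfall (data invented):

  CREATE TABLE t (a TIME, lo TIME, hi TIME);
  INSERT INTO t VALUES ('12:00:00', '09:00:00', '100:00:00');
  -- As strings, '12:00:00' > '100:00:00' ('2' > '0' in the second
  -- position), so BETWEEN wrongly returned 0; compared as integers
  -- (120000 vs 90000 and 1000000) it correctly returns 1.
  SELECT a BETWEEN lo AND hi FROM t;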
causes full table lock on innodb table.
Also fixes Bug#28502 "Triggers that update another innodb table
will block on X lock unnecessarily" (duplicate).
Code review fixes.
Both bugs' synopses are misleading: the InnoDB table is
not X locked. The statements, however, cannot proceed concurrently,
but this happens due to lock conflicts for tables used in triggers,
not for the InnoDB table.
If a user had an InnoDB table and two triggers, AFTER UPDATE and
AFTER INSERT, competing for different resources (e.g. two distinct
MyISAM tables), then these two triggers would not be able to execute
concurrently. Moreover, INSERTs/UPDATEs of the InnoDB table would
not be able to run concurrently.
The problem had other side-effects (see respective bug reports).
This behavior was a consequence of a shortcoming of the pre-locking
algorithm, which would not distinguish between different DML operations
(e.g. INSERT and DELETE) and pre-locked all the tables
that are used by any trigger defined on the subject table.
The idea of the fix is to extend the pre-locking algorithm to keep track,
for each table, of which DML operation it is used for, and to not
load triggers that are known to never fire.
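A hedged sketch of the trigger setup (schema invented):

  CREATE TABLE main (a INT) ENGINE=InnoDB;
  CREATE TABLE log_ins (a INT) ENGINE=MyISAM;
  CREATE TABLE log_upd (a INT) ENGINE=MyISAM;
  CREATE TRIGGER ai AFTER INSERT ON main
    FOR EACH ROW INSERT INTO log_ins VALUES (NEW.a);
  CREATE TRIGGER au AFTER UPDATE ON main
    FOR EACH ROW INSERT INTO log_upd VALUES (NEW.a);
  -- Before the fix, an INSERT into main pre-locked log_upd too,
  -- so concurrent INSERTs and UPDATEs blocked each other; now only
  -- the trigger that can actually fire is loaded and pre-locked.
  INSERT INTO main VALUES (1);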
A race condition in the integration between MyISAM and the query cache code
caused the query cache to fail to invalidate itself on concurrently inserted
data.
This patch fixes the problem by using the existing handler interface which,
upon each statement cache attempt, compares the size of the table as viewed
from the cache-writing thread with a snapshot of the global table state. If
the two sizes differ, the global table size is unknown and the current
statement can't be cached.
A bug in the restore_prev_nj_state function allowed interleaving
inner tables of outer join operations with outer tables. With the
current implementation of the nested loops algorithm, it could lead
to wrong result sets for queries with nested outer joins.
Another bug in this procedure effectively blocked the evaluation of some
valid execution plans for queries with nested outer joins.
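A hedged example of the affected query shape (schema invented):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  -- t2 and t3 form an outer join nest; its inner tables must not be
  -- interleaved with the outer table t1 in the join order.
  SELECT * FROM t1
  LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a;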