When the SQL_BIG_RESULT flag is specified, SELECT should store items from the
select list in the filesort data and use them when sending results to the client.
The get_addon_fields function is responsible for creating the necessary structures
for that, but it was allowed to do so only for SELECT and
INSERT .. SELECT queries. This made SQL_BIG_RESULT useless for
CREATE .. SELECT queries.
Now get_addon_fields also allows storing select list items in the filesort
data for CREATE .. SELECT queries.
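A minimal sketch of a query shape that now benefits (table and column
names are hypothetical):
CREATE TABLE t2 SELECT SQL_BIG_RESULT a, COUNT(*) FROM t1 GROUP BY a;
-- before the fix the SQL_BIG_RESULT hint had no effect here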
If a primary key was defined over a column c of enum type, then for
a look-up query of the form
SELECT * FROM t WHERE c=0
the EXPLAIN command said that the query had an impossible WHERE condition,
though the query correctly returned a non-empty result set when the table
indeed contained rows with the error empty string for column c.
This misbehavior was due to a bug in the function
Field_enum::store(longlong,bool) that erroneously returned 1 if
the value to be stored was equal to 0.
Note that the method
Field_enum::store(const char *from,uint length,CHARSET_INFO *cs)
correctly returned 0 when the error empty string
was stored.
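A sketch of the failing scenario (hypothetical table; in non-strict mode
INSERT IGNORE stores the error empty string, whose index value is 0):
CREATE TABLE t (c ENUM('a','b') NOT NULL PRIMARY KEY);
INSERT IGNORE INTO t VALUES ('x');
EXPLAIN SELECT * FROM t WHERE c=0;
-- before the fix EXPLAIN wrongly reported an impossible WHERE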
This bug manifested itself for join queries with GROUP BY and HAVING clauses
whose SELECT lists contained DISTINCT. It occurred when the optimizer could
deduce that the result set would contain at most one row.
The bug could lead to wrong result sets for queries of this type because
HAVING conditions were erroneously ignored in some cases in the function
remove_duplicates.
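A hedged sketch of an affected query shape (hypothetical tables; the
optimizer must be able to prove a single-row result):
SELECT DISTINCT COUNT(*) FROM t1 JOIN t2 ON t1.a = t2.a
GROUP BY t1.a HAVING COUNT(*) > 10;
-- before the fix the HAVING condition could be silently dropped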
ORDER BY primary_key on InnoDB table
Queries that use an InnoDB secondary index to retrieve
data don't need to sort in case of ORDER BY primary key
if the secondary index is compared to constant(s).
They can also skip sorting if ORDER BY contains both the
secondary key parts and the primary key parts (in that order).
This is because InnoDB returns the rows in order of the
primary key for rows with the same values of the secondary
key columns.
Fixed by preventing temp table sort for the qualifying
queries.
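A sketch of qualifying queries (hypothetical schema):
CREATE TABLE t (pk INT PRIMARY KEY, a INT, KEY k (a)) ENGINE=InnoDB;
SELECT * FROM t WHERE a = 10 ORDER BY pk;
SELECT * FROM t ORDER BY a, pk;
-- InnoDB returns rows with equal a in primary key order, so no filesort is needed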
by long running transaction
On Windows, opened files can't be deleted. There was a special
upgraded lock mode (TL_WRITE instead of TL_WRITE_ALLOW_READ)
in ALTER TABLE to make sure nobody had the table open
when the old table was deleted in ALTER TABLE. This special mode
was causing ALTER TABLE to hang waiting on a lock inside InnoDB.
This special lock is no longer necessary as the server is
closing the tables it needs to delete in ALTER TABLE.
Fixed by removing the special lock.
Note that this also reverses the fix for bug 17264 that deals with
another consequence of this special lock mode being used.
The Item_date_typecast::val_int function didn't reset the null_value flag.
This caused all values following the first NULL value to be treated as NULLs
and led to wrong results.
Now the Item_date_typecast::val_int function correctly sets the null_value flag
for both NULL and non-NULL values.
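A hedged sketch of the symptom (hypothetical table; once a NULL was seen,
later rows were wrongly treated as NULL as well):
CREATE TABLE t (d DATETIME);
INSERT INTO t VALUES (NULL), ('2007-06-01 10:00:00');
SELECT CAST(d AS DATE) = '2007-06-01' FROM t;
-- before the fix the second row could also evaluate to NULL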
a temporary table.
The result string of the Item_func_group_concat wasn't initialized in the
copy constructor of the Item_func_group_concat class. This led to a
wrong charset of the GROUP_CONCAT result when the select employed a temporary
table.
The copy constructor of the Item_func_group_concat class now correctly
initializes the charset of the result string.
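A sketch of an affected case (hypothetical table; SQL_BUFFER_RESULT forces
a temporary table):
CREATE TABLE t (a INT, b VARCHAR(10) CHARACTER SET cp1251);
SELECT SQL_BUFFER_RESULT GROUP_CONCAT(b) FROM t GROUP BY a;
-- before the fix the result could come back with the wrong charset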
Optimization of queries with DETERMINISTIC functions in the
WHERE clause was not effective: a sequential scan was always
used.
Now an SF with the DETERMINISTIC flag is treated as constant
when its arguments are constants (or when it has no arguments).
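A minimal sketch (hypothetical function and table; the constant-folded call
enables an index lookup instead of a sequential scan):
CREATE FUNCTION f (x INT) RETURNS INT DETERMINISTIC RETURN x + 1;
CREATE TABLE t (a INT, KEY (a));
SELECT * FROM t WHERE a = f(10);
-- f(10) is now treated as a constant, so the key on a can be used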
The get_time_value function is added. It is used to obtain TIME values both
from items that can return time as an integer and from items that can return
time only as a string.
The Arg_comparator::compare_datetime function now uses a pointer to a getter
function to obtain the values to compare, and is now also used for
comparison of TIME values.
The get_value_func variable is added to the Arg_comparator class.
It points to a getter function for the DATE/DATETIME/TIME comparator.
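A sketch of the user-visible effect (see also the BETWEEN fix below):
SELECT CAST('100:00:00' AS TIME) > CAST('99:00:00' AS TIME);
-- returns 1 when compared as TIME; a string comparison would say
-- '100:00:00' < '99:00:00'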
The Field_newdate::store function, when storing a DATETIME value, returned the
'value was cut' error even if the thd->count_cuted_fields flag was set to
CHECK_FIELD_IGNORE. This made the range optimizer think that there was no
appropriate data in the table and thus return an empty set.
Now the Field_newdate::store function returns a conversion error only if the
thd->count_cuted_fields flag isn't set to CHECK_FIELD_IGNORE.
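A hedged sketch of a query shape hit by the bug (hypothetical table; the
DATETIME constant is truncated when stored into a DATE field for the range scan):
CREATE TABLE t (d DATE, KEY (d));
SELECT * FROM t WHERE d >= '2007-06-01 10:00:00';
-- before the fix the range optimizer could wrongly return an empty set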
pseudo_thread_id was reset to zero in mysql_change_user() handling,
whereas there is no reason to do that. Moreover, having two
concurrent threads that change user and create namesake temp tables
leads to recording a duplicate pair of queries:
set @@session.pseudo_thread_id = 0;
CREATE temporary table `the namesake`;
which will stall the slave, as the second instance cannot be created.
And that is the bug case.
Fixed by correcting the pseudo_thread_id value after mysql_change_user().
gettimeofday() can fail, and presumably so can time().
Keep an eye on it.
Since we have no data on this at all so far, we just
retry on failure (and log the event), assuming that
this is just an intermittent failure. This might of
course hang the thread until we succeed. Once we know
more about these failures, a more clever scheme may
be picked (only try so many times per thread,
etc.; if that fails, return the last "good" time() we got, or
some such). Using sql_print_information() to log, as this
probably only occurs in high-load scenarios where the debug
trace is likely disabled (or might interfere with testing
the effect). No test case, as this is a non-deterministic
issue.
incomplete in 5.0 (and review fixes).
When in 5.0.13 I introduced the class Prepared_statement and the methods
::prepare and ::execute, general logging was left out of this class.
This was good for stored procedures, since in stored procedures
we do not log sub-statements, but it introduced a regression in the case of the
SQL syntax for prepared statements: previously we would log the actual
statements to the log, and after the change we would log only the
COM_QUERY text.
Restore the old behavior, but still suppress logging if inside a stored
procedure.
Based on a community-contributed patch from Vladimir Shebordaev.
No test case since we do not have a mechanism to test output
of the general log.
TIME values were compared as strings by the BETWEEN function. This led to
wrong results in cases when some of the arguments were less than 100 hours
and others were greater.
Now if all 3 arguments of the BETWEEN function are of the TIME type then
they are compared as integers.
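A minimal sketch of the wrong string comparison ('110:00:00' sorts before
'99:00:00' as a string because '1' < '9'):
SELECT CAST('99:00:00' AS TIME)
       BETWEEN CAST('20:00:00' AS TIME) AND CAST('110:00:00' AS TIME);
-- returns 1 after the fix; the string comparison wrongly returned 0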
causes full table lock on innodb table.
Also fixes Bug#28502 Triggers that update another innodb table
will block on X lock unnecessarily (duplicate).
Code review fixes.
Both bugs' synopses are misleading: InnoDB table is
not X locked. The statements, however, cannot proceed concurrently,
but this happens due to lock conflicts for tables used in triggers,
not for the InnoDB table.
If a user had an InnoDB table, and two triggers, AFTER UPDATE and
AFTER INSERT, competing for different resources (e.g. two distinct
MyISAM tables), then these two triggers would not be able to execute
concurrently. Moreover, INSERTS/UPDATES of the InnoDB table would
not be able to run concurrently.
The problem had other side-effects (see respective bug reports).
This behavior was a consequence of a shortcoming of the pre-locking
algorithm, which would not distinguish between different DML operations
(e.g. INSERT and DELETE) and pre-lock all the tables
that are used by any trigger defined on the subject table.
The idea of the fix is to extend the pre-locking algorithm to keep track,
for each table, of which DML operation it is used for, and not to
load triggers that are known never to be fired.
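A hedged sketch of the scenario (hypothetical tables and triggers):
CREATE TABLE t (a INT) ENGINE=InnoDB;
CREATE TABLE log_ins (a INT) ENGINE=MyISAM;
CREATE TABLE log_upd (a INT) ENGINE=MyISAM;
CREATE TRIGGER t_ai AFTER INSERT ON t FOR EACH ROW INSERT INTO log_ins VALUES (NEW.a);
CREATE TRIGGER t_au AFTER UPDATE ON t FOR EACH ROW INSERT INTO log_upd VALUES (NEW.a);
-- Before the fix, an INSERT INTO t also pre-locked log_upd (used only by the
-- UPDATE trigger), so concurrent INSERTs and UPDATEs serialized on those locks.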