The problem is that constant parts of a HAVING condition are
evaluated even though the condition refers to aggregate functions.
Table t1 becomes a const table after make_join_statistics() and
table1.pk = 1, so the HAVING condition is transformed into
MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the constant parts of HAVING if
the condition refers to aggregate functions.
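A minimal standalone sketch of that rule (the Expr type and function
name below are illustrative, not the server's Item classes): constant
folding is simply skipped when the HAVING condition refers to
aggregate functions, since those can only be evaluated after grouping.

    #include <memory>
    #include <vector>

    // Illustrative expression node, not the server's Item hierarchy.
    struct Expr {
      bool is_const = false;
      bool refers_to_aggregate = false;   // e.g. contains MAX(), COUNT()
      std::vector<std::shared_ptr<Expr>> args;
    };

    // Fold constant sub-expressions of a HAVING condition -- unless the
    // condition refers to aggregate functions, which can only be
    // evaluated once grouping has happened, never at optimization time.
    void fold_having_consts(Expr &having) {
      if (having.refers_to_aggregate)
        return;                           // the fix: skip const evaluation
      for (auto &arg : having.args)
        fold_having_consts(*arg);
      // ... if every argument is now constant, `having` itself could be
      //     evaluated and replaced by its value here ...
    }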
Since the original fix for this bug lowercases the search pattern, it's
not a good idea to copy the search pattern to the output instead of the
real table name found (since, depending on the case mode, these two
names may differ in case).
Fixed the information_schema.test failure by making sure the actual
table name of an INFORMATION_SCHEMA table is passed instead of the
lookup pattern, even when the pattern doesn't contain wildcards.
The atomic operations implementation on 5.1 has a few problems,
which might cause tests to abort randomly. Since no code in 5.1
uses atomic operations, simply remove the code.
Fixed an incomplete historical ALTER TABLE MODIFY that trimmed the
trigger privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
returns nothing
When looking for table or database names inside INFORMATION_SCHEMA
we must convert the table and database names to lowercase (just as is
done in the rest of the server) when lower_case_table_names is non-zero.
This allows us to find the same tables that we would find if there
were no condition.
Fixed by converting to lowercase when extracting the database and
table name conditions.
Test case added.
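A standalone illustration of the normalization step (the function name
is made up, not from the server tree): a name extracted from the
condition is lowercased before it is compared against stored names
whenever lower_case_table_names is non-zero.

    #include <algorithm>
    #include <cctype>
    #include <string>

    // Lowercase a database or table name extracted from a WHERE
    // condition so it matches the case of stored names when the server
    // runs with lower_case_table_names != 0.
    std::string normalize_lookup_name(std::string name,
                                      int lower_case_table_names) {
      if (lower_case_table_names != 0)
        std::transform(name.begin(), name.end(), name.begin(),
                       [](unsigned char c) {
                         return static_cast<char>(std::tolower(c));
                       });
      return name;
    }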
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
When these statements are binlogged, CURRENT_USER() is written to the
binlog as 'CURRENT_USER()'; it is not expanded to the real user name.
When the slave executes the log event, 'CURRENT_USER()' is expanded to
the user of the slave SQL thread, but the SQL thread's user name is
always NULL. This breaks replication.
After this patch, the session's user is written into the query log
event if these statements call CURRENT_USER() or if ALTER EVENT does
not assign a definer.
Apart from strict-aliasing warnings, fix the remaining warnings
generated by the GCC 4.4.4 -Wall and -Wextra flags.
One major source of warnings was the in-house function my_bcmp,
which (unconventionally) took pointers to unsigned characters
as the byte sequences to be compared. Since my_bcmp and bcmp
are deprecated functions whose only difference from memcmp is
the return value, every use of the function is replaced with
memcmp, as the special return value wasn't actually being used
by any caller.
There were also various other warnings, mostly due to type
mismatches, missing return values, missing prototypes, dead
(unreachable) code and ignored return values.
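A sketch of the mechanical replacement (the wrapper below is
illustrative, not a function from the tree): since callers only ever
tested the result against zero, each my_bcmp()/bcmp() call maps
directly onto an equivalent memcmp() call.

    #include <cstddef>
    #include <cstring>

    // bcmp()-style callers only care whether two byte sequences differ,
    // not how they order, so memcmp() == 0 expresses the same test.
    bool bytes_equal(const unsigned char *a, const unsigned char *b,
                     size_t len) {
      return std::memcmp(a, b, len) == 0;
    }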
POSIX requires that a signal handler installed with sigaction()
is not reset when a signal is delivered unless SA_NODEFER or
SA_RESETHAND is set. It is therefore unnecessary to re-install
the handler on signal delivery on platforms where sigaction()
is used without those flags.
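A small POSIX example of the point (SIGUSR1 is chosen arbitrarily):
without SA_RESETHAND the disposition installed by sigaction() survives
delivery, so the handler does not need to re-install itself.

    #include <signal.h>   // sigaction(), SIGUSR1 (POSIX)
    #include <cstring>

    static volatile sig_atomic_t hits = 0;

    static void on_sigusr1(int) { ++hits; }  // no re-registration here

    int main() {
      struct sigaction sa;
      std::memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_sigusr1;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = 0;               // neither SA_RESETHAND nor SA_NODEFER
      sigaction(SIGUSR1, &sa, nullptr);

      raise(SIGUSR1);
      raise(SIGUSR1);                // still caught by on_sigusr1
      return hits == 2 ? 0 : 1;
    }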
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys.
But used_key_parts is overwritten with the best_key_parts
value, which prevents the correct index access method from
being selected. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
automatic reconnect
A client with automatic reconnect enabled will see the error
message "Lost connection to MySQL server during query" if the
connection is lost between mysql_stmt_prepare() and
mysql_stmt_execute(). The mysql_stmt_errno() number, however,
is 0 -- not the corresponding value 2013.
This patch checks for the case where the prepared statement
has been pruned due to a connection loss (i.e., stmt->mysql
has been set to NULL) during a call to cli_advanced_command(),
and avoids changing the last_errno to the result of the last
reconnect attempt.
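A minimal client-side sketch of the scenario (connection parameters
are placeholders, and my_bool is the 5.1-era client type): with
reconnect enabled, a connection lost between prepare and execute
should now surface as error 2013 (CR_SERVER_LOST) on the statement.

    #include <mysql.h>
    #include <errmsg.h>   // CR_SERVER_LOST (2013)
    #include <cstdio>

    int main() {
      MYSQL *conn = mysql_init(nullptr);
      my_bool reconnect = 1;
      mysql_options(conn, MYSQL_OPT_RECONNECT, &reconnect);
      if (!mysql_real_connect(conn, "localhost", "user", "password",
                              "test", 0, nullptr, 0))
        return 1;

      MYSQL_STMT *stmt = mysql_stmt_init(conn);
      const char query[] = "SELECT 1";
      mysql_stmt_prepare(stmt, query, sizeof(query) - 1);

      /* ... suppose the server connection is lost here ... */

      if (mysql_stmt_execute(stmt) != 0) {
        // With the fix the statement reports the real failure (2013)
        // instead of whatever the implicit reconnect left behind.
        std::printf("errno=%u (expected %d): %s\n",
                    mysql_stmt_errno(stmt), CR_SERVER_LOST,
                    mysql_stmt_error(stmt));
      }
      mysql_stmt_close(stmt);
      mysql_close(conn);
      return 0;
    }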
The default value of myisam_max_extra_sort_file_size could be
higher than the maximum accepted value, leading to warnings at
server start.
The solution is to simply set the value to the maximum value in a
32-bit build (2147483647, one less than the current). This should
be harmless as the option is currently unused in 5.1.
The problem was that a user could supply data in chunks
via the COM_STMT_SEND_LONG_DATA command to a prepared statement
parameter whose type is not TEXT or BLOB. This posed a problem
since other parameter types aren't set up to handle long data,
which would lead to a crash when attempting to use the supplied
data.
Given that long data can be supplied at any stage of a prepared
statement, coupled with the fact that the type of a parameter
marker might change between consecutive executions, the solution
is to validate at execution time each parameter marker for which
a data stream was provided. If the parameter type is not TEXT or
BLOB (that is, if the type is not able to handle a data stream),
an error is returned.
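A simplified sketch of that execution-time check (the types below are
stand-ins; the server keeps the parameter type in Item_param using
enum_field_types): a streamed value is only accepted for string/blob
parameters, anything else is rejected with an error instead of being
interpreted and crashing.

    #include <stdexcept>

    enum class ParamType { kLonglong, kDouble, kDatetime, kText, kBlob };

    struct Param {
      ParamType type;
      bool long_data_supplied;  // chunks arrived via COM_STMT_SEND_LONG_DATA
    };

    // Execution-time validation in the spirit of the fix.
    void validate_long_data(const Param &p) {
      if (p.long_data_supplied &&
          p.type != ParamType::kText && p.type != ParamType::kBlob)
        throw std::invalid_argument(
            "long data sent for a non-TEXT/BLOB parameter");
    }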
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
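In sketch form, and with made-up names, the ordering the patch chooses
looks like this: conflicting HANDLER tables are closed and marked for
re-open before the global mutex is taken, because holding the mutex
while waiting on a HANDLER owner is what allowed the original deadlock.

    #include <mutex>
    #include <vector>

    std::mutex LOCK_create_db_sketch;   // stands in for LOCK_mysql_create_db

    struct HandlerTable {
      bool open = true;
      bool conflicts_with_db_op = false;
      bool marked_for_reopen = false;
    };

    void drop_database_sketch(std::vector<HandlerTable> &handler_tables) {
      // 1) Close and mark for re-open any conflicting HANDLER tables.
      for (HandlerTable &t : handler_tables) {
        if (t.open && t.conflicts_with_db_op) {
          t.open = false;
          t.marked_for_reopen = true;
        }
      }
      // 2) Only now take the global mutex for the database operation.
      std::lock_guard<std::mutex> guard(LOCK_create_db_sketch);
      // ... perform the database operation while holding the mutex ...
    }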
During creation of the list of processed tables, the hidden I_S
table 'VARIABLES' is erroneously added to the table list.
This leads to an ER_UNKNOWN_TABLE error in the
TABLE_LIST::add_table_to_list() function.
The fix is to skip adding hidden I_S tables to the table list.
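The shape of the fix, as a standalone sketch with hypothetical names:
hidden I_S tables are filtered out while the list is built, so they can
never reach add_table_to_list() and trigger ER_UNKNOWN_TABLE.

    #include <string>
    #include <vector>

    struct CandidateTable {
      std::string name;
      bool hidden_information_schema;  // e.g. the hidden 'VARIABLES' table
    };

    std::vector<CandidateTable>
    build_table_list(const std::vector<CandidateTable> &candidates) {
      std::vector<CandidateTable> result;
      for (const CandidateTable &t : candidates)
        if (!t.hidden_information_schema)   // the fix: skip hidden I_S tables
          result.push_back(t);
      return result;
    }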
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
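A standalone sketch of that pattern (RowIdFilter stands in for the
Unique structure, IndexMergeScan for QUICK_INDEX_MERGE_SELECT): the
duplicate filter lives as a member of the scan object and is merely
reset on restart, instead of a fresh instance being allocated every
time, which is what made memory grow with the number of scans.

    #include <cstdint>
    #include <set>

    class RowIdFilter {
      std::set<uint64_t> seen_;
    public:
      bool insert(uint64_t row_id) { return seen_.insert(row_id).second; }
      void reset() { seen_.clear(); }
    };

    class IndexMergeScan {
      RowIdFilter filter_;   // one filter per scan object, reused
    public:
      void restart() { filter_.reset(); }
      bool offer_row(uint64_t row_id) { return filter_.insert(row_id); }
    };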
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard the sort order
for some zero-length tuples.
Fix: skip the sort order in such cases
(zero-length NOT NULL string functions).
Incorrect handling of NULL arguments could lead to a crash on
the IN or CASE operations when either NULL arguments were
passed explicitly as arguments (IN) or implicitly generated by
the WITH ROLLUP modifier (both IN and CASE).
Item_func_case::find_item() assumed all necessary comparators
to be instantiated in fix_length_and_dec(). However, in the
presence of WITH ROLLUP modifier, arguments could be
substituted with an Item_null leading to an "unexpected"
STRING_RESULT comparator being invoked.
In addition to a problem identical to the above,
Item_func_in::val_int() could crash even with explicitly passed
NULL arguments, due to an optimization in fix_length_and_dec()
that caused NULL arguments to be ignored during comparator
creation.
During the record search it is not taken into account
that the initial quick->file->ref value may not be applicable
to the range interval. After the proper row is found, this value
is stored into the record buffer, and the record is later
filtered out at the condition evaluation stage.
The fix is to store a reference to the found row in the handler's
ref field.
Problem: a flaw (dereferencing a NULL pointer) in the LIKE optimization
code may lead to a server crash in some rare cases.
Fix: check the pointer before dereferencing it.
mysql_client_binlog_statement
Problem: the server may read from unassigned memory when performing
"wrong" BINLOG queries.
Fix: never read from unassigned memory.
line exceeds the limit
The number and/or names of our files for the main test suite
(contents of "mysql-test/t/") now exceed the command line
length limit on AIX.
Solve the problem by using separate "cp" commands for the
various file name extensions.
This is the fix for 5.1, where only the behaviour on upgrade is changed:
If the server was stopped when the upgrade begins, we assume the
administrator is taking manual action, so we do not start the (new)
server at the end of the upgrade.
We still install the start/stop script, so it will be started on reboot.