Description: Fix for bug CVE-2012-5611 (bug 67685) is
incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and
check_grant_db() can be overflown by up to two bytes. That's
probably not enough to do anything more serious than crashing
mysqld.
Analysis: In acl_get(), when "copy_length" is calculated, it
simply adds the variable lengths. But when we use them
with strmov() we add +1 to each. This can lead to a
three-byte buffer overflow (i.e. two +1's at the strmov()
calls and one byte for the null terminator added by strmov()).
The same happens in the check_grant_db() function.
Fix: We need to add "+2" to "copy_length" in acl_get()
and "+1" to "copy_length" in check_grant_db().
REPEATED TWICE IN BINLOG
Problem:
=======
If LOAD DATA ... SET ... is used, the last argument of SET is
repeated twice in the replication binlog.
Analysis:
========
LOAD DATA statements are reconstructed once again before
they are written to the binary log. When SET clauses are
specified as part of a LOAD DATA statement, these SET clause
user command strings need to be stored in order to rebuild
the original user command. During parsing, each column and
its value in the SET clause are stored in two different
lists. All the values are stored in a string list.
When the SET expression has more than one value, as in the
following example:
SET a = @a, b = CONCAT(@b, '| 123456789');
the parser extracts values in the following manner, i.e.
item name, value string, and the actual length of the item's
value within the string.
Item a:
Value for a: "= @a, b = CONCAT(@b, '| 123456789')"
str_length = 4
Item b:
Value for b: "= CONCAT(@b, '| 123456789')"
str_length = 27
While reconstructing the LOAD DATA command, the above
strings are retrieved as-is and appended to the LOAD DATA
statement. Hence it becomes as shown below.
SET `a`= @a, b = CONCAT(@b, '| 123456789'),
`b`= CONCAT(@b, '| 123456789')
Fix:
===
During reconstruction of the SET clause, retrieve the exact item
value string rather than reading the entire remaining string.
sql/sql_load.cc:
Added code to extract the exact Item value.
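A small sketch of the idea behind the sql_load.cc change (hypothetical
helper, not the real code): when rebuilding the statement, keep only
str_length bytes of the stored value string.

  #include <algorithm>
  #include <iostream>
  #include <string>

  // The parser stored the remainder of the input for each SET item together
  // with the real length of that item's value; take only that many bytes.
  static std::string exact_value(const std::string &stored, size_t str_length)
  {
    return stored.substr(0, std::min(str_length, stored.size()));
  }

  int main()
  {
    // "= @a" is 4 bytes long inside "= @a, b = CONCAT(@b, '| 123456789')"
    std::cout << exact_value("= @a, b = CONCAT(@b, '| 123456789')", 4) << "\n";
    return 0;
  }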
AND 'KILL SESSION' LEAD TO CRASH
Analysis:
--------
This situation occurs when a connection executes the query
"show engine innodb status" and this connection is killed by
another connection executing the statement "kill <con>".
In the function "innodb_show_status", the function "stat_print"
is called to print the status, but the return value of that
function is not checked. After the connection is killed, if the
write to the connection fails, then an error is returned and is
also set in the Diagnostics Area. Since FALSE is still returned
from "innodb_show_status", the assert that checks that no error
is set in the function "set_eof_status" (called from
my_eof) fails.
Fix:
----
Changed code to check return value of function "stat_print"
in "innodb_show_status".
Description:
------------
There are 2 issues reported in the bug report:
1. One session runs a "long" select; then, from the other
session, you kill that first one while the select is
running, and it receives the message "Server shutdown in
progress".
Reported Date: 02-Apr-2006
=> Looks like this issue was already fixed in 2009 by the patch
pushed for bug28141.
2. Killing a query that goes to filesort logs error entries like:
120416 9:17:28 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
120416 9:18:48 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
120416 9:19:39 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
Reported Date: 16-Apr-2012
=> This issue was introduced in 5.5+ versions and is fixed by
this patch.
Analysis:
---------
In function "filesort()", on error we are logging error message.
To the error message, the message related THD::killed_errno is
also appeneded, if it is set.(THD::kill_errno value is obtained
by calling member function THD::killed_errno)
In the scenario mentioned in this bug report, when we kill the
connection, THD::kill_errno is set to the THD::KILL_CONNECTION.
Enum type THD::KILL_CONNECTION corressponds to value
ER_SERVER_SHUTDOWN. Because of this, "Server shutdown in ...." is
appended to the message logged.
Fix:
----
Modified the "filesort()" code to append the "KILL_QUERY"
status to the error message when the thread is killed and
server shutdown is not in progress.
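A rough sketch of the message selection (invented names and message
texts, not the real error-number mapping used by the server):

  #include <cstdio>

  enum killed_state { NOT_KILLED= 0, KILL_CONNECTION, KILL_QUERY };

  // A kill during shutdown keeps the shutdown message; a plain KILL
  // reports that the query was interrupted instead.
  static const char *sort_abort_reason(killed_state killed,
                                       bool shutdown_in_progress)
  {
    if (killed != NOT_KILLED && !shutdown_in_progress)
      return "Query execution was interrupted";   // ER_QUERY_INTERRUPTED-style text
    if (killed != NOT_KILLED)
      return "Server shutdown in progress";       // ER_SERVER_SHUTDOWN-style text
    return "Out of sort memory / other error";
  }

  int main()
  {
    printf("%s\n", sort_abort_reason(KILL_CONNECTION, false));
    return 0;
  }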
For queries like
update t1 set ... where <unique key> order by ... limit ...
we need to handle the fact that unique keys allow NULL
values, and hence can return more than one row.
sql/opt_range.cc:
When the unique key has multiple key parts,
check each key_part for nullability, rather than the first key part.
(s/key->part ==/key_tree->part ==/)
Also: revert the if() test, for better readability.
The dump thread may encounter an error when reading events from the active
binlog file. However, the error may be temporary, so the dump thread will
try to read the event again. But the dump thread seeked to a wrong position,
which caused some events to be sent twice.
To fix the bug, prev_pos is defined outside the while loop and is set to the
correct position after every event is read correctly.
This patch also makes binlog_can_be_corrupted more accurate: only binlogs
that were not closed normally are marked binlog_can_be_corrupted.
Finally, two warnings are added for when the dump thread encounters these
temporary errors.
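A simplified sketch of the retry-position handling (invented names,
not the actual dump thread code): the start position of the last
successfully read event is remembered outside the loop, so a temporary
read error re-reads from there rather than resending events.

  #include <cstdint>
  #include <cstdio>

  struct Reader { uint64_t pos; };

  static bool read_event(Reader *r, bool *temporary_error)
  {
    *temporary_error= false;
    r->pos+= 100;               // pretend each event is 100 bytes
    return true;
  }

  static void dump_loop(Reader *r)
  {
    uint64_t prev_pos= r->pos;              // defined once, outside the loop
    for (int i= 0; i < 10; i++)
    {
      bool tmp_err;
      if (!read_event(r, &tmp_err))
      {
        if (tmp_err) { r->pos= prev_pos; continue; }  // retry from a known-good point
        break;                                        // permanent error
      }
      prev_pos= r->pos;                     // advance only after a successful read
    }
  }

  int main()
  {
    Reader r= {4};              // small header-sized starting offset
    dump_loop(&r);
    printf("%llu\n", (unsigned long long) r.pos);
    return 0;
  }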
DO WHAT YOU EXPECT"
Fix info:
--------
Backport of the deprecation bug fix (WL#5265) for global variable
'THREAD_CONCURRENCY' from mysql-5.6 to mysql-5.5
Note: With this backport, certain additional deprecation warnings
would be reported under error conditions while setting the
global/session variables.
CONTAINING NULL
Problem:-
In MySQL, we can obtain the number of distinct expression
combinations that do not contain NULL by giving a list of
expressions in COUNT(DISTINCT).
However, rows with NULL values are
incorrectly included in the count when loose index scan is
used.
Analysis:-
In case of loose index scan, we check whether the field is NULL or
not and increase the count in Item_sum_count::add().
But there we are checking only the first field in COUNT(DISTINCT),
not every field. This causes an incorrect result.
Solution:-
Check all fields in Item_sum_count::add(), whether their values
are NULL or not. Only then increment the count.
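A minimal sketch of the corrected add() logic (invented types, not the
real Item_sum_count code): with loose index scan, every field listed in
COUNT(DISTINCT ...) must be checked for NULL before counting.

  #include <cstddef>

  struct FieldVal { bool is_null; };

  static void add_row(const FieldVal *fields, size_t arg_count, long long *count)
  {
    for (size_t i= 0; i < arg_count; i++)
      if (fields[i].is_null)
        return;                // any NULL in the combination: do not count it
    (*count)++;
  }

  int main()
  {
    FieldVal row[2]= { {false}, {true} };
    long long count= 0;
    add_row(row, 2, &count);   // not counted: the second field is NULL
    return (int) count;
  }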
******
Bug#17222452 - SELECT COUNT(DISTINCT A,B) INCORRECTLY COUNTS ROWS
CONTAINING NULL
Problem:-
Second execution of prepared statement for query with
parameter in limit clause, causes an assert when using
connectors (e.g., Connector C).
Analysis:-
In a prepared statement, LIMIT parameters can be
specified using '?' markers. The value for the parameter can
be supplied while executing the prepared statement.
Passing string, float or double values for the LIMIT clause
works well from the command-line client. That's because, while
setting the LIMIT parameter value from a user variable,
the value is converted to an integer value.
However, when the prepared statement is executed from other
interfaces such as Connector/J or C applications,
the values for the parameters are sent to the server
with the execute command. Each item in the command has a value
and a data TYPE. So, while setting parameter values
from this data, each parameter's value is set with the same
data type as passed.
Here, we have logic that converts the value and changes the
state and item_type if the parameter is part of a LIMIT clause
and its item_type is not INT.
But when we reset this parameter, we keep the item_type but
change the state. So on second execution we have the old
item_type but a changed state, which makes us use a string-type
variable in Item_param::query_str_val(). This causes an assert.
Fix:
Instead of checking the item_type of the parameter, check the
state of the parameter, as state values are reset every time
we execute the statement.
ER_LOCK_WAIT_TIMEOUT".
The problem was that after the changes caused by the fixes for
bug 14188793 "DEADLOCK CAUSED BY ALTER TABLE DOEN'T CLEAR STATUS OF
ROLLBACKED TRANSACTION" and bug 17054007 "TRANSACTION IS NOT FULLY
ROLLED BACK IN CASE OF INNODB DEADLOCK", the implicit rollback of a
transaction which occurred on ER_LOCK_DEADLOCK
(and ER_LOCK_WAIT_TIMEOUT if the innodb_rollback_on_timeout option was set)
didn't start a new transaction in @@autocommit=1 mode.
Such behavior, although consistent with the behavior of explicit ROLLBACK,
broke the expectations of users and backward compatibility assumptions.
This patch fixes the problem by reverting to starting a new transaction
in 5.5/5.6.
The plan is to keep the new behavior in trunk, so the code change from this
patch is to be null-merged there.
Problem:-
In a procedure, when we compare the value of a select query
with an IN clause and the two have different collations, it
causes an error on the first execution and an assert on the
second.
The procedure has a query like
set @x = ((select a from t1) in (select d from t2)); <--- proc1
(here (select a from t1) is sel1 and (select d from t2) is sel2)
Analysis:-
When we execute this proc1 (the first time),
while resolving the fields of the user variable, we call
Item_in_subselect::fix_fields, which will resolve sel2. There,
in Item_in_subselect::select_transformer, we evaluate the
left expression (sel1) and store it in an Item_cache_* object
(to avoid re-evaluating it many times during subquery execution)
by making an Item_in_optimizer object.
While evaluating the left expression, we prepare sel1.
After that, we put a new condition into sel2
in Item_in_subselect::select_transformer() which compares
t2.d and sel1 (which is cached in Item_in_optimizer).
Later, while checking the collation in agg_item_collations(),
we get an error and clean up the item. While cleaning up, we
also cleaned the cached value in the Item_in_optimizer object.
When we execute the procedure a second time, we have the
condition for sel2, and in setup_cond() we are not able to find
the reference item, as it was cleaned up during item cleanup.
So it asserts.
Solution:-
We should not clean up the cached value of the Item_in_optimizer
object if we have put the condition into the subselect.
"SHOW PROCESSLIST"
Analysis:
----------
The problem here is that if one connection changes its
default db and at the same time another connection executes
"SHOW PROCESSLIST", then when the latter wants to read the db
of the other connection there is a chance of accessing invalid
memory.
The db name stored in THD is not guarded while changing the user
DB and while reading the user DB in "SHOW PROCESSLIST".
So, if THD.db is freed by the thd "owner" thread and another
thread executing a "SHOW PROCESSLIST" statement tries to read
and copy THD.db at the same time, then we may end up in the
issue reported here.
Fix:
----------
Used the mutex "LOCK_thd_data" to guard THD.db while freeing it
and while copying it to the processlist.
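A simplified sketch of the locking discipline (standard C++ stand-ins
for THD and LOCK_thd_data, not the server code): the owner thread frees
or replaces its db name only under the per-THD lock, and the processlist
reader copies it under the same lock, so it never sees freed memory.

  #include <cstring>
  #include <mutex>
  #include <string>

  struct ThdLike
  {
    std::mutex lock_thd_data;   // plays the role of LOCK_thd_data
    char *db= nullptr;

    void set_db(const char *name)
    {
      std::lock_guard<std::mutex> g(lock_thd_data);
      delete[] db;
      db= new char[std::strlen(name) + 1];
      std::strcpy(db, name);
    }

    std::string copy_db_for_processlist()
    {
      std::lock_guard<std::mutex> g(lock_thd_data);
      return db ? std::string(db) : std::string("NULL");
    }

    ~ThdLike() { delete[] db; }
  };

  int main()
  {
    ThdLike thd;
    thd.set_db("test");
    return thd.copy_db_for_processlist() == "test" ? 0 : 1;
  }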
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
The problem in the first bug report was that although a deadlock involving
metadata locks was reported using the same error code and message as an
InnoDB deadlock, it didn't roll back the transaction like the latter does.
This caused confusion for users, as in some cases after ER_LOCK_DEADLOCK
the transaction could be restarted immediately and in other cases a
rollback was required.
The problem in the second bug report was that although an InnoDB deadlock
caused a transaction rollback in all storage engines, it didn't cause the
release of metadata locks. So concurrent DDL on the tables used in the
transaction was blocked until an implicit or explicit COMMIT or ROLLBACK
was issued in the connection which got the InnoDB deadlock.
The former issue stemmed from the fact that when support for detection
and reporting of metadata lock deadlocks was added, we erroneously assumed
that InnoDB doesn't roll back the transaction on deadlock but only the last
statement (while this is actually what happens on InnoDB lock timeout) and
so didn't implement rollback of transactions on MDL deadlocks.
The latter issue was caused by the fact that rollback of transaction due
to deadlock is carried out by setting THD::transaction_rollback_request
flag at the point where deadlock is detected and performing rollback
inside of trans_rollback_stmt() call when this flag is set. And
trans_rollback_stmt() is not aware of MDL locks, so no MDL locks are
released.
This patch solves these two problems in the following way:
- In case an MDL deadlock is detected, transaction rollback is requested
by setting the THD::transaction_rollback_request flag.
- The code performing rollback of the transaction when
THD::transaction_rollback_request is set is moved out of
trans_rollback_stmt(). Now we handle the rollback request on the same
level as we call trans_rollback_stmt() and release statement/
transaction MDL locks.
AND PARTITION VALUES IN (NULL)
The code assumed there was at least one list element
in a LIST partitioned table.
Fixed by checking the number of list elements.
IN STORED ROUTINE
Inside a loop in a stored procedure, we create a partitioned
table. The CREATE statement is thus treated as a prepared statement:
it is prepared once, and then executed by each iteration. Thus its Lex
is reused many times. This Lex contains a part_info member, which
describes how the partitions should be laid out, including the
partitioning function. Each execution of the CREATE does this, in
open_table_from_share():
  tmp= mysql_unpack_partition(thd, share->partition_info_str,
                              share->partition_info_str_len,
                              outparam, is_create_table,
                              share->default_part_db_type,
                              &work_part_info_used);
  ...
  tmp= fix_partition_func(thd, outparam, is_create_table);
The first line calls init_lex_with_single_table() which creates
a TABLE_LIST, necessary for the "field fixing" which will be
done by the second line; this is how it is created:
  if ((!(table_ident= new Table_ident(thd,
                                      table->s->db,
                                      table->s->table_name, TRUE))) ||
      (!(table_list= select_lex->add_table_to_list(thd,
                                                   table_ident,
                                                   NULL,
                                                   0))))
    return TRUE;
it is allocated in the execution memory root.
Then the partitioning function ("id", stored in Lex->part_info)
is fixed, which calls Item_ident::fix_fields(), which resolves
"id" to the table_list above, and stores in the item's
cached_table a pointer to this table_list.
The table is created, later it is dropped by another statement,
then we execute again the prepared CREATE. This reuses the Lex,
thus also its part_info, thus also the item representing the
partitioning function (part_info is cloned but it's a shallow
cloning); CREATE wants to fix the item again (which is
normal, every execution fixes items again), fix_fields()
sees that the cached_table pointer is set and picks up the
pointed table_list. But this last object does not exist
anymore (it was allocated in the execution memory root of
the previous execution, so it has been freed), so we access
invalid memory.
The solution: when creating the table_list, mark that it
cannot be cached.
OF OLD STYLE DECIMALS
Problem: In RBR, the slave is unable to read the row buffer
properly when the row event contains a MYSQL_TYPE_DECIMAL
(old-style decimal) data type column.
Analysis: In RBR, the slave assumes that the master sends
meta data information for all column types like
text, blob, varchar, old decimal, new decimal, float,
and a few other types along with the row buffer event.
But the master is not sending this meta data information
for old-style decimal columns. Hence the slave is crashing
due to an unknown precision value for these column types.
The master cannot send this precision value to the slave, as
that would break replication cross-version compatibility.
Fix: To fix the crash, the slave will now throw an error if it
receives the old-style decimal data type. The user should
consider changing the old-style decimal to the new-style
decimal data type by executing an "ALTER TABLE modify column"
query as mentioned in
http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-previous-series.html.
Description:
The original fix for Bug#11765744 changed a mutex to a read-write
lock to avoid recursive lock acquisition on the LOCK_status
mutex.
On Windows, locking read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses if they are supported by
Windows version, do not support recursion according to their
documentation. For our own implementation of read-write lock,
which is used in cases when Windows version doesn't support SRW,
recursive locking of read-write lock can easily lead to deadlock
if there are concurrent lock requests.
Fix:
This patch reverts the previous fix for bug#11765744 that used
read-write locks. Instead, the problem of recursive locking of the
LOCK_status mutex is solved by tracking the recursion level using
a counter in the THD object and acquiring the lock only once, when
we enter the fill_status() function the first time.
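A minimal sketch of the recursion-level idea (invented names, not the
actual fill_status() code): the lock is taken only when entering at
depth 0, so a nested call from the same thread never re-acquires it.

  #include <mutex>

  static std::mutex lock_status;                // plays the role of LOCK_status

  struct SessionLike { int fill_status_recursion_level= 0; };

  static void fill_status_like(SessionLike *thd)
  {
    bool outermost= (thd->fill_status_recursion_level++ == 0);
    if (outermost)
      lock_status.lock();

    // ... collect status variables here; a nested call to
    // fill_status_like(thd) from this point would see level > 0
    // and skip the lock instead of deadlocking ...

    if (outermost)
      lock_status.unlock();
    thd->fill_status_recursion_level--;
  }

  int main() { SessionLike thd; fill_status_like(&thd); return 0; }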
PARTITIONS.
ANALYSIS
--------
Whenever we query I_S.partitions,
ha_partition::get_dynamic_partition_info()
is called, which resets the cardinality
according to the number of rows in the last
partition.
Fix
---
When we call get_dynamic_partition_info(),
avoid passing the flag HA_STATUS_CONST
to info(), since HA_STATUS_CONST should
ideally not be requested per partition.
[Approved by mattiasj rb#2830 ]
IN TIME RECOVERY FAILURE ON SLAVES
Problem:
DROP TEMPORARY TABLE IF EXISTS commands can cause point-in-time
recovery (re-applying the binlog) failures.
Analysis:
In RBR, 'DROP TEMPORARY TABLE' commands are
always binlogged with 'IF EXISTS' clauses added.
Also, the slave SQL thread will not check replicate.* filter
rules for "DROP TEMPORARY TABLE IF EXISTS" queries.
If log-slave-updates is enabled on the slave, these queries
will be binlogged in the format "USE `db`;
DROP TEMPORARY TABLE IF EXISTS `t1`;" irrespective
of filtering rules and irrespective of the existence of `db`.
When users try to recover a slave from its own binlog, the
USE `db` command might fail if `db` is not present on the slave.
Fix:
At the time of writing the 'DROP TEMPORARY TABLE
IF EXISTS' query into the binlog, 'use `db`' will not be
present and the table name in the query will be a fully
qualified table name.
Eg:
'USE `db`; DROP TEMPORARY TABLE IF EXISTS `t1`;'
will be logged as
'DROP TEMPORARY TABLE IF EXISTS `db`.`t1`;'.
Since log_throttle is not available in 5.5, logging of the
error message for a failure of the thread to create a new
connection in "create_thread_to_handle_connection" is not
backported.
Since the function "my_plugin_log_message" is not available in
the 5.5 version, and since there is an incompatibility between
the sql_print_XXX functions (compiled with g++) and the audit
log plugin files (compiled with gcc) in using sql_print_error,
the changes related to the audit log plugin are not backported.
SERIALIZABLE
Problem:
The documentation claims that WITH CONSISTENT SNAPSHOT will work for both
REPEATABLE READ and SERIALIZABLE isolation levels. But it will work only
for REPEATABLE READ isolation level. Also, the clause WITH CONSISTENT
SNAPSHOT is silently ignored when it is not applicable to the given isolation
level.
Solution:
Generate a warning when the clause WITH CONSISTENT SNAPSHOT is ignored.
rb#2797 approved by Kevin.
Note: Support team wanted to push this to 5.5+.
ALTER TABLE ... ALGORITHM= ... STATEMENT
The problem was an intermediate buffer of smaller size,
which truncated the alter statement.
Solved by providing the size of the buffer to be allocated through
the function call, instead of using a one-size-fits-all stack
buffer inside the function.
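A sketch of the general shape of the change (hypothetical helper, not
the real interface): the copy is sized from the statement length the
caller supplies rather than from a fixed stack buffer.

  #include <cstring>
  #include <string>

  // Allocate exactly what the statement needs instead of truncating it
  // into a fixed-size intermediate buffer.
  static std::string copy_statement(const char *query, size_t query_length)
  {
    return std::string(query, query_length);
  }

  int main()
  {
    const char *q= "ALTER TABLE t1 ADD COLUMN c1 INT, ALGORITHM=COPY";
    return copy_statement(q, std::strlen(q)).size() > 0 ? 0 : 1;
  }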
LOAD DATA CAN CAUSE SQL INJECTION
Problem:
=======
A long SET expression in LOAD DATA is incorrectly truncated
when written to the binary log.
Analysis:
========
LOAD DATA statements are reconstructed once again before
they are written to the binary log. When SET clauses are
specified as part of a LOAD DATA statement, these SET clause
user command strings need to be stored as-is in order to
reconstruct the original user command. At present these
strings are stored as part of the SET clause item tree's
topmost Item node's name itself, which is incorrect, as an
Item::name can be at most MAX_ALIAS_NAME (256) in size. Hence
the name gets truncated to 255 characters.
Because of this, the rewritten LOAD DATA statement will be
terminated incorrectly. When this statement is read back by
the mysqlbinlog tool, it reads a starting single quote and
continues to read until it finds an ending quote. Hence any
statement written past the ending quote will be considered
a new statement.
Fix:
===
As the name field has a length restriction, the string value
should not be stored in Item::name. A new String list is
maintained to store the SET expression values, and this list
is read during reconstruction.
sql/sql_lex.cc:
Clear the load data set string list during each query
execution.
sql/sql_lex.h:
Added a new String list to store the load data operation's
SET clause user command strings.
sql/sql_load.cc:
Read the SET clause user command strings from load data
set string list.
sql/sql_yacc.yy:
Store the SET clause user command string as part of the load
data set string list.
Sys_var_keycache inherits from some variant of Sys_var_integer
Instances of Sys_var_keycache are initialized using the KEYCACHE_VAR macro,
which takes an offset within st_key_cache.
However, the Sys_var_integer CTOR treats the offset as if it were
within global_system_variables (hidden within some layers of macros
and function pointers).
The result is that we write arbitrary data to arbitrary locations in memory.
This all happens during static initialization of global objects,
i.e. before we have even entered the main() function.
Bug#12325449 TYPO IN CMAKE/DTRACE.CMAKE
Fix typo in dtrace.cmake
Backport to 5.5
(external Bug#69407 Build warnings with mysql)
support-files/build-tags:
Run etags on sql_yacc.yy, ignore other .yy files
unittest/mysys/explain_filename-t.cc:
NO_PLAN seems to fail on some platforms, use the actual number instead.
TO INCONSISTENCY
PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovering we cannot access the table.
We cannot even drop the table, because the drop algorithm
requires the par file to read the partition information.
FIX
---
1. We move the deletion of the par file to after deleting
all the table data from the storage engine.
2. During the drop operation, if we detect that the par
file is missing, then we delete the .frm file, since
there is no way of recovering without the par file.
[Approved by Mattias rb#2576 ]
CAN LEAD TO MISSING TABLES
Overview
--------
If the FOREIGN_KEY_CHECKS system variable is set to 0, it is
possible to break a foreign key constraint by changing the type
or character set of the foreign key column, or by dropping the
foreign key index (without carrying out corresponding changes on
another table in the relationship).
If we subsequently set FOREIGN_KEY_CHECKS to 1 and execute ALTER
TABLE involving the COPY algorithm on such a table, the following
happens:
1) If ALTER TABLE does not contain a RENAME clause, the attempt
to install the new version of the table instead of the old one
will fail due to the fact that the inconsistency will be
detected. An attempt to revert the partially executed alter
table operation by restoring the old table definition will
fail as well due to FOREIGN_KEY_CHECKS == 1. As a result, the
table being altered will be lost.
2) If ALTER TABLE contains the RENAME clause, the inconsistency
will not be detected (most probably due to other bugs). But if
an attempt to install the new version of the table fails (for
example, due to a failure when updating triggers associated
with the table), reverting the partially executed alter table
by restoring the old table definition will fail too. So the
table being altered might be lost as well.
Suggested fix
-------------
The suggested fix is to temporarily unset the option bit
representing FOREIGN_KEY_CHECKS when the old table definition is
restored while reverting the partially executed operation.
Bug#13116514 - CREATE LOGFILE GROUP INITIAL_SIZE & UNDO_BUFFER_SIZE FAILS
Fixed the parser to accept a size with the suffix 'M', e.g.
undo_buffer_size=10M (M for megabytes), in the 'CREATE LOGFILE
GROUP' command.
STRING CONVERSION FUNCTIONS
Problem:
While executing the prepared statement, a user variable is
set to point at memory which is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when accessing this pointer.
Analysis:
1. First time when Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result.
(In the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here memory allocated for "value" gets freed. And "value"
gets set to "result_field", with "str_length" being that of
result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
freed as this is allocated in a chunk as part of the
temporary table which is needed to execute the query.
5. Next time, when execute of the same statement is called,
"value" will be set to memory which is already freed.
A valgrind error occurs as "str_length" is positive
(set at Step 3).
Note that the user variables list is stored as part of the Lex
object in set_var_list; hence the persistence across executions.
Solution:
The patch for Bug#11764371 in mysql-5.6+ fixes this problem
as well, so the same is backported here.
In the solution for Bug#11764371, we create another user_var
object and repoint it to the temp_table's field. As a result,
while deleting the allocated buffer in Step 3, since the cloned
object does not own the buffer, deletion will not happen.
So at Step 5, when we execute the statement a second time, the
original object will be used, and since deletion did not happen,
valgrind will not complain about a dangling pointer.
sql/item_func.h:
Add constructors.
sql/sql_select.cc:
Change user variable assignment functions to read from fields after
tables have been unlocked.
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS
Backporting these two fixes to 5.1
Added a unit test to check the my_decimal constructor and assignment operators
sql/my_decimal.h:
Added constructor and assignment operators for my_decimal
unittest/my_decimal/my_decimal-t.cc:
Added test to check constructor and assignment operators for my_decimal
USING THE PLUGIN INTERFACE.
ISSUE: No support for floating-point plugin
system variables.
SOLUTION: Allowing plugins to define and expose floating-point
system variables of type double. MYSQL_SYSVAR_DOUBLE
and MYSQL_THDVAR_DOUBLE are added.
ISSUE: The fractional part of the def, min, max values of system
variables is ignored.
SOLUTION: Adding functions that are used to store the raw
representation of a double in the raw bits of unsigned
longlong in a way that the binary representation
remains the same.
The problem was in get_partition_id_cols_range_for_endpoint
and cmp_rec_and_tuple_prune, which stepped one partition too far.
Solution was to move a small portion of logic to cmp_rec_and_tuple_prune,
to simplify both get_partition_id_cols_range_for_endpoint and
get_partition_id_cols_list_for_endpoint.
In log_event.h
DESCRIPTION:
Due to inclusion of an implementation file, namely 'rpl_tblmap.cc'
in a header file, namely 'log_event.h'; linker errors occur if
log_event.h is included in an application containing multiple source
files, such as in the case of Binlog API.
Binlog API requires including log_event.h in its source files;
which leads to multiple definition errors, for functions defined
in rpl_tblmap.cc for class 'table_mapping'.
FIX:
Moved the inclusion from the header file (log_event.h) to the
source files that use this header and have the flag MYSQL_CLIENT
set. The only such file in the current server repository is
mysqlbinlog.cc.
client/mysqlbinlog.cc:
Pulled in code of rpl_tblmap.cc
sql/log_event.h:
Removed inclusion of the implementation file from this header file
WITH COMPOSITE KEY COLUMNS
Problem:-
While running a SELECT query with several AGGR(DISTINCT) functions
that refer to different fields of the same composite key,
an incorrect value is returned.
Analysis:-
In a table where we have a composite key like (a,b,c),
when we run a query like
select COUNT(DISTINCT b), SUM(DISTINCT a) from ....
we first make a list of the items in the AGGR(DISTINCT) functions
(which is a, b), where the order of the items doesn't matter,
and then we check whether we have a composite key where the prefix
of the index columns matches the items of the aggregation functions
(in this case we have a,b,c).
If yes, we can use loose index scan and need not perform
duplicate removal for the DISTINCT in our aggregate functions.
In our table, we traverse the columns marked with <-- and get the result as:
(a,b,c)       count(distinct b)     sum(distinct a)
              (treated as count b)  (treated as sum(a))
(1,1,2)<--    1                     1
(1,2,2)<--    1++=2                 1+1=2
(1,2,3)
(2,1,2)<--    2++=3                 1+1+2=4
(2,2,2)<--    3++=4                 1+1+2+2=6
(2,2,3)
The result will be (4,6), but it should be (2,3).
So in this case our assumption is incorrect. If we have a
query like
select count(distinct a,b), sum(distinct a,b) from ..
then we can use loose index scan.
Solution:-
In a query, when we have more than one AGGR(DISTINCT) function,
they should refer to the same fields, like
select count(distinct a,b), sum(distinct a,b) from ..
--> we can use loose index scan as both AGGR(DISTINCT) refer to the same fields a,b.
If they refer to different fields, like
select count(distinct a), sum(distinct b) from ..
--> we will not use loose index scan as the AGGR(DISTINCT) functions refer to different fields.
NUMBER ALREADY USED BY 5.6
The problem was that the patch for Bug#13004581 added a new error
message to 5.5. This causes it to use an error number already used
in 5.6 by ER_CANNOT_LOAD_FROM_TABLE_V2. Which means that error
message number stability between GA releases is broken.
This patch fixes the problem by removing the error message and
using ER_UNKNOWN_ERROR instead.
The problem happened due to a broken left expression in the Item_in_optimizer
object. In the case of this bug, the left expression is a runtime-created
Item_outer_ref item which is deleted at the end of the statement, so one of
the Item_in_optimizer arguments becomes invalid when re-executed. The fix is
to use real_item() instead of the original left expression. Note: It feels a
bit weird that after preparing, the field is directly part of the generated
Item_func_eq, whereas in execution it is replaced with an Item_outer_ref
wrapper object.
sql/item_subselect.cc:
use left_expr->real_item() instead of original left expression
because left_expr can be runtime created Ref item which is deleted
at the end of the statement. Thus one of 'substitution' arguments
can be broken in case of PS.
The problem was that if UPDATE with subselect caused a
deadlock inside InnoDB, this deadlock was not properly
handled by the SQL layer. This meant that the SQL layer
would try to unlock the row after InnoDB had rolled
back the transaction. This caused an assertion inside
InnoDB.
This patch fixes the problem by checking for errors
reported by SQL_SELECT::skip_record() and not calling
unlock_row() if any errors have been reported.
This bug is similar to Bug#13586591, but for UPDATE
rather than DELETE. Similar issues in filesort/opt_range/
sql_select will be investigated and handled in the scope
of Bug#16767929
GROUP BY, MYISAM
Problem:-
A query using the loose index scan optimization and MIN() causes a
segmentation fault (when the table row length is less than the
key_length).
Analysis:
While using loose index scan for MIN(), we call key_copy() to copy
the key data from the record.
This function uses a temporary record buffer to store the key data
from the record buffer. But in the case where the key length is
greater than the buffer length, this causes a segmentation fault.
Solution:
Give a proper buffer to store a key record.
sql/opt_range.cc:
We can't use the record buffer to store key data, so give a proper buffer to store the key record.
When logging to the binary log in row format, updates and deletes to a
BLACKHOLE engine table are skipped.
It is impossible to log updates and deletes to a BLACKHOLE engine table in
row format, as no row events can be generated in these cases.
After the fix, a warning is generated for UPDATE/DELETE statements that
modify a BLACKHOLE table, as row events are not logged in row format.
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.
Analysis:
In union-type queries like
(select * from ... order by a) order by b;
or
(select * from ... order by a) union (select * from ... order by b);
we skip resolving 'order by a' in the first query, and 'order by a'
and 'order by b' in the second query.
This means that when our order by contains an Item_func_match item,
we skip resolving it.
But we maintain an ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of that list, we call Item_func_match::init_search(), which causes a
debug assert as the item is not resolved.
Solution:
We will skip execution if the item is not fixed, and we will not
fix the index (Item_func_match::fix_index()) for items for which
Item_func_match::fix_field() has not been called, so that on later
changes we can check the dependency on fixed fields.
sql/item_func.cc:
Skipping execution if the item is not resolved.
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of a select query gets corrupted when a query
with group_concat having order by is present in an order by
clause of the select query. As a result, the server crashes with
an assert.
Analysis:
While parsing the order by for group_concat, it is presumed that
it is always present before the actual order by of the
select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
onto which select->order_list is copied. Once this is done,
it empties the select->order_list.
In the case presented in the bug page, as the order by is already
parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query
and creates fake_lex_unit, which results in the change of
context info.
Solution:
Make group_concat's order by parsing independent of the select
query's order by.
sql/item_sum.cc:
Changed the argument, as select->gorder_list is not a pointer anymore.
sql/item_sum.h:
Changed the argument, as select->gorder_list is not a pointer anymore.
sql/mysql_priv.h:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_lex.cc:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_lex.h:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_yacc.yy:
Make group_concat's order by parsing independent of the select
query's order by.
PARTIAL INDEX
Consider the following table definition:
CREATE TABLE t (
my_col CHAR(10),
...
INDEX my_idx (my_col(1))
)
The my_idx index is not able to distinguish between rows with
equal first-character my_col-values (e.g. "f", "foo", "fee").
Prior to this CS, the range optimizer would translate
"WHERE my_col NOT IN ('f', 'h')" into (optimizer trace syntax)
"ranges": [
"NULL < my_col < f",
"f < my_col"
]
But this was not correct because the rows with values "foo"
and "fee" would not belong to any of those ranges. However, the
predicate "my_col != 'f' AND my_col != 'h'" would translate
to
"ranges": [
"NULL < my_col"
]
because get_mm_leaf() changes from "<" to "<=" for partial
keyparts. This CS changes the range optimizer implementation
for NOT IN to behave like a conjunction of NOT EQUAL: it
replaces "<" with "<=" for all but the first range when the
keypart is partial.
BACKGROUND:
The testcase i_innodb.innodb_bug14036214 when run under valgrind
leaks memory.
ANALYSIS:
In the code path of mysql_update, a temporary file is opened
using open_cached_file().
When an error occurred in that code path, this temporary
file was not closed, since the call to close_cached_file() was
missing.
This problem exists in 5.5, but it does not exist in 5.6 and
trunk.
This is because in 5.6 and trunk, when we issue the update
statement in the test case, it does not take the same code path
as in 5.5. The code path is different because a different plan
is chosen by optimizer.
See Bug#14036214 for details.
However, the problem can still be examined in 5.6 and trunk
by code inspection.
FIX:
The file opened by open_cached_file() is now closed by calling
close_cached_file() when an error occurs, so that it does not
result in a memory leak.
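A minimal sketch of the cleanup rule (plain C-style stand-ins for
open_cached_file()/close_cached_file(), invented names): every exit
path that opened the cached temporary file must also close it,
including the error path.

  #include <cstdio>

  static int do_update_like(const char *tmp_name, bool simulate_error)
  {
    FILE *tempfile= std::fopen(tmp_name, "w+");   // stands in for open_cached_file()
    if (!tempfile)
      return 1;

    if (simulate_error)
    {
      std::fclose(tempfile);    // the previously missing close on the error path
      return 1;
    }

    std::fclose(tempfile);      // normal path
    return 0;
  }

  int main()
  {
    do_update_like("/tmp/bug_sketch_tmp", true);
    return 0;
  }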
Problem:
A select query inside a group_concat function having an
outer reference results in a crash.
Analysis:
In the function Item_func_group_concat::add, we do not check whether
the return value of get_tmp_table_field can be NULL for
a non-const item. This can happen for a query with an
outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, in the call to const_item() from
Item_func_group_concat::add, it returns false and execution goes
on without checking whether the field can be NULL, resulting in
the crash.
get_tmp_table_field does not return NULL for items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check the return value of get_tmp_table_field before we
access the field contents.
sql/item_sum.cc:
Check the return value of get_tmp_table_field before accessing
the field contents.
Problem:
An INSERT with 'ON DUPLICATE KEY UPDATE' on a view
crashes the server.
Analysis:
During an insert on to a view, we do the following:
For insert fields and values -
1. Resolve insert values.
2. Resolve insert fields.
3. Check if the fields and values are all from a
single table of a view in case of INSERT VALUES.
Do not check the same in case of INSERT SELECT,
as the values can be read from different table than
that of the view.
For the update fields (if DUP UPDATE is used)
1. Create a name resolution context with 'table_list' only.
2. Resolve update fields in this context.
3. Check if update fields and values are from the same
table as the insert fields.
4. Get the next name resolution context. Concatenate this
with the previous one.
5. Resolve update values in this context as we can refer
to other tables in the values clause.
Note that at step 3 (of the update fields), we check the
'used_tables map' of the update values without resolving them
first. Hence the crash.
Fix:
At step 3, do not pass the update values when checking whether it is a
single-table view update, as the update values can refer to other tables.
The code has been reorganized to work like check_insert_fields.
sql/sql_insert.cc:
Do not pass update_values as they are not resolved yet.
SCHEDULER DROPS EVENTS
Problem: On a semisync-enabled server (master/slave),
if the event scheduler drops an event after completion,
the server crashes.
Analysis: If an event is created with the "ON COMPLETION
NOT PRESERVE" clause, the event scheduler deletes the event
upon event completion (expiration) and the thread object
is destroyed. In the destructor of the thread object, the
mysys_var member is explicitly set to zero. Later, from
the same destructor call (same execution path),
in the case of a semisync-enabled server, while cleanup is
called, the THD::mysys_var member is accessed by the
THD::enter_cond() function, which causes the server to crash.
Fix: mysys_var should not be explicitly set to zero; it is
also not required.
sql/sql_class.cc:
mysys_var should not be explicitly set to zero.
Fixed the get_data_size() methods for multi-point features to check properly for end
of their respective data arrays.
Extended the point checking function to take an optional third argument so
cases where there's additional data in each array element (besides the point
data itself) can be covered by the helper function.
Fixed the 3 cases where such offset was present to use the proper checking helper
function.
Test cases added.
Fixed review comments.
REGULAR SQL VS PREPARED STATEMENT
Analysis:
---------
When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.
Consider the example:
SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1 | sqlif |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+
Executing a prepared statement where the parameters are
supplied:
PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ? | ps_if_fail |
+----------+------------+
| 0.038687 | 1 |
+----------+------------+
1 row in set (0.00 sec)
In the regular statement, or while executing the prepared
statement without passing parameters, the decimal
precision is set for the user variable of type string.
The comparison function used for evaluation considers
the precision while comparing the values.
But while executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one which looks at the absolute values for comparison.
Fix:
----
The fix is to set 'decimals' field of Item_param to the
default value which is nothing but the maximum number of
decimals(NOT_FIXED_DEC). This is set for cases where the
strings are converted to the numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: it can also indicate an error,
and the error is ignored. Other callers of my_b_fill don't
check for errors either: my_b_copy_to_file, and maybe my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During the my_b_fill() function call, when the cache read fails,
info->error= -1 is set. Hence a check for "info->error"
is added for the above two callers upon their return.
mysys/mf_iocache2.c:
Added a check for "cache->error" and simulation of cache read failure
mysys/my_read.c:
Simulation of read failure
sql/log_event.cc:
Added debug simulation
sql/sql_repl.cc:
Added a check for cache error
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (where
it could overflow) and only then comparing it to the
number of bytes available.
This can overflow and effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
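A minimal sketch of the overflow-safe form of such a check (assumed,
simplified types, not the actual GIS reader code): divide the available
bytes by the element size instead of multiplying the element count.

  #include <cstddef>
  #include <cstdint>

  // n_points * point_size can wrap around and defeat the check;
  // the division form cannot.
  static bool enough_data_for_points(size_t bytes_available,
                                     uint32_t n_points, size_t point_size)
  {
    return point_size != 0 && n_points <= bytes_available / point_size;
  }

  int main()
  {
    // A huge n_points that would wrap a multiplying check is rejected here.
    return enough_data_for_points(64, 0xFFFFFFFFu, 16) ? 1 : 0;
  }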
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the server
was started with "--binlog-ignore-db some database" and 'fully
qualified' table names are used in the ALTER TABLE statement,
altering a table different from the current database context.
Analysis:
========
The above mentioned problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the
current default database becomes "NULL", none of the
statements will be binlogged.
The current behaviour is such that if the user has specified
restrictions on which database needs to be replicated and the
default db is not specified, then we do not replicate.
This means that "NULL" is considered to be equivalent to
everything (default db = NULL implies: ignore, don't log the
statement).
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
mysql-test/suite/rpl/r/rpl_loaddata_m.result:
Earlier, when the default database was "NULL", DROP TABLE
was not getting logged. Post this fix it will be logged,
and the DROP will fail on the slave as the table creation
was skipped by the master because of --binlog-ignore-db=test.
mysql-test/suite/rpl/t/rpl_loaddata_m.test:
Earlier, when the default database was "NULL", DROP TABLE
was not getting logged. Post this fix it will be logged,
and the DROP will fail on the slave as the table creation
was skipped by the master because of --binlog-ignore-db=test.
sql/rpl_filter.cc:
Replaced DBUG_RETURN(0) with DBUG_RETURN(1).
At logging of a first Query event referring to a user var, the slave missed
logging the user var.
It appears that at execution of a User_var event the slave applier
thought of the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one that
the thread holds at User_var execution and the one that the thread sees at
the Query applying.
While the two are naturally different in the regular execution branch (as
the two computational events are separated as individual events), in the
deferred applying case the User_var execution effectively belongs to its
Query's processing.
Fixed by storing the User_var parsing-time query id (where the decision to
defer is taken) and temporarily substituting it for the actual query id at
User_var execution time (along with its query).
Such manipulation mimics the behaviour of the regular applying branch.
sql/log_event.cc:
Storing the User_var parsing-time query id into a new member of the event,
to temporarily substitute it for the actual query id at User_var
execution time.
sql/log_event.h:
Storage for keeping the query id in the User_var instance is added.
Problem - When the slave was disconnected from the master, under certain
          conditions, upon reconnect it would report that it received a
          packet larger than slave_max_allowed_packet, which causes
          replication to stop.
Analysis - The reason for this failure is that on reconnect
           the slave sets the max_allowed_packet from the master's mi->mysql
           object, which keeps max_allowed_packet as 1MB. This causes the
           slave to report such an error on receiving a packet bigger than
           1MB. START SLAVE on the slave fixes the problem, since it restarts
           the slave threads, which initialize max_allowed_packet to
           slave_max_allowed_packet.
Fix - The problem is fixed by some code refactoring and the introduction of
      a new function which updates the max_allowed_packet for the THD object
      of the slave thread and the mysql->options max_allowed_packet.
(Based on Sinisa's patch)
Added a version checking facility to mysql_upgrade.
The versions used for checking are the version of the
server that mysql_upgrade is going to upgrade and the
server version that mysql_upgrade was built/distributed
with.
Also added an option '--version-check' to enable/disable
the version checking.
SHOW ENGINE INNOD
Problem:
The purpose of explain_filename() is to provide useful additional
information regarding the partitions given the filename. This function
was returning an error when it was not able to parse the given filename.
For example, within InnoDB, temporary files are created with #sql-
prefix. But this function was not able to parse it correctly.
Solution:
It is not an error, if explain_filename() could not parse the given
filename. If there is no partition information to explain, then silently
return from the function.
rb#1940 approved by mattiasj
The range optimizer uses 'save_in_field_no_warnings()' to verify properties of
'value <cmp> field' expressions.
If this execution yields an error, it should abort.
RETURNS RANDOM DATA
MySQL 5.5 specific version of bugfix.
When Loose Index Scan Range access is used, MySQL execution needs
to copy non-aggregated fields. end_send() checked if this was
necessary by checking if join_tab->select->quick had type
QS_TYPE_GROUP_MIN_MAX.
In this bug, however, MySQL created a sort index to sort the rows
read from this range access method. create_sort_index() deletes
join_tab->select->quick which makes it impossible to inquire
the join_tab if LIS has been used.
The fix for MySQL 5.5 is to introduce a variable in JOIN_TAB
that stores whether or not LIS has been used. There is no need
for this variable in later MySQL versions because the relevant
code has been refactored.
Post push fix:
setup_ref_array() now uses n_sum_items to determine size of ref_pointer_array.
The problem was that n_sum_items kept growing, it wasn't reset for each query.
A similar memory leak was fixed with the patch for:
Bug 14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
sql/sql_yacc.yy:
Reset parsing_place when we're done parsing SHOW commands,
to prevent Item::Item incrementing select_n_having_items
(which is also used in setup_ref_array())
Item_func_group_concat::copy_or_same() creates a copy of original object.
It also creates a copy of ORDER structure because ORDER struct elements may
be modified in find_order_in_list() called from Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next elements point to
the original ORDER structs. Thus find_order_in_list() called from the EXECUTE
statement modifies the original ORDER item pointers so they point to runtime
items; these items are freed after execution, so the original ORDER structure
becomes invalid.
The fix is to properly update ORDER::next fields so that they point to
new ORDER elements.
sql/item_sum.cc:
update ORDER::next fields so that they point to new ORDER elements.
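A minimal sketch of the pointer fix-up after a shallow copy (a simplified
ORDER-like struct laid out as a contiguous array, which the real list is
not): memcpy copies the next pointers verbatim, so they must be redirected
into the copy.

  #include <cstdlib>
  #include <cstring>

  struct OrderLike { int item; OrderLike *next; };

  static OrderLike *copy_order_list(const OrderLike *src, size_t n)
  {
    OrderLike *dst= (OrderLike *) malloc(n * sizeof(OrderLike));
    memcpy(dst, src, n * sizeof(OrderLike));
    for (size_t i= 0; i < n; i++)
      dst[i].next= (i + 1 < n) ? &dst[i + 1] : nullptr;   // repoint into the copy
    return dst;
  }

  int main()
  {
    OrderLike orig[2]= { {1, nullptr}, {2, nullptr} };
    orig[0].next= &orig[1];
    OrderLike *copy= copy_order_list(orig, 2);
    int ok= (copy[0].next == &copy[1]);   // next no longer points at orig[1]
    free(copy);
    return ok ? 0 : 1;
  }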
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: When 'SET' type columns are used in a DML
inside a stored procedure and a NULL value is passed
to that column, replication breaks.
Analysis: All stored procedure variables used inside
a DML are substituted with NAME_CONST functions.
When NAME_CONST is used in this particular scenario,
i.e. when a NULL value is passed, the charset is copied
from the 'empty_set_string' member of the Field_set class.
The operator '=' overload method inside the 'String' class
is not copying str_charset from the R.H.S object to the
L.H.S object. Hence the charset is wrongly copied in the
string assignment.
Fix: Handle copying the str_charset member in the operator '='
overload method.
sql/sql_string.h:
Handled copying the str_charset member in the operator '='
overload method.
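A minimal sketch of the assignment-operator fix (a simplified String-like
class, not the real sql/sql_string.h code): the operator must copy every
member that describes the value, including the character set pointer.

  struct CharsetInfoLike { const char *name; };

  class StringLike
  {
  public:
    const char *ptr= nullptr;
    unsigned long str_length= 0;
    const CharsetInfoLike *str_charset= nullptr;

    StringLike &operator=(const StringLike &s)
    {
      if (this != &s)
      {
        ptr= s.ptr;
        str_length= s.str_length;
        str_charset= s.str_charset;   // the member that was previously not copied
      }
      return *this;
    }
  };

  int main()
  {
    CharsetInfoLike latin1= { "latin1" };
    StringLike a, b;
    b.ptr= ""; b.str_length= 0; b.str_charset= &latin1;
    a= b;
    return a.str_charset == &latin1 ? 0 : 1;
  }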
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method inside
the 'String' class is not copying the str_charset member from
the R.H.S object to the L.H.S object. Hence the charset is
wrongly set when using string assignments.
Analysis: The above mentioned problem was
identified while doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e. "the str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Handle copying the str_charset member in the operator '='
overload method.
sql/sql_string.h:
Handled copying the str_charset member in the operator '='
overload method.
PROBLEM: If multiple statements are sent by a single
         request, then only the last statement was
         getting logged. An attacker could bypass the
         audit log just by sending two consecutive
         statements in one request.
SOLUTION: Each statement from a single request is now
          logged.
PROBLEM AFTER MYSQL_HA_FIND
This problem occurred if a prepared statement tried to create a table
for which there already existed a view with the same name, while an
SQL handler was open.
Before DDL statements are executed, mysql_ha_rm_tables() is called
to remove any matching tables from the internal list of opened SQL
handler tables. This match was done on TABLE_LIST::db and
TABLE_LIST::table_name. This is problematic for views (which use
TABLE_LIST::view_db and TABLE_LIST::view_name) and anonymous
derived tables.
This patch fixes the problem by skipping TABLE_LISTs representing
anonymous derived tables and using get_db_name()/get_table_name()
which handles views when looking for SQL handler tables to remove.