send_result_set_metadata
Analysis
--------
A cursor inside a trigger accessing the NEW/OLD row leads to a
server exit.
The reason for the bug was that the implementation of
create_tmp_table() did not consider Item::TRIGGER_FIELD_ITEM
as a possible type of the item being instantiated.
This resulted in a mismatch between the number of columns
in the result list and the temp table definition. The mismatch
leads to the failure of the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata
in debug mode.
Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as a valid
type while creating fields.
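
A minimal sketch of the shape of the fix (toy types; only
Item::TRIGGER_FIELD_ITEM is taken from the commit above, the rest is
illustrative):

struct Field_sketch {};

struct Item_sketch
{
  enum Type { FIELD_ITEM, TRIGGER_FIELD_ITEM, FUNC_ITEM };
  virtual Type type() const = 0;
  virtual ~Item_sketch() {}
};

Field_sketch *create_tmp_field_sketch(Item_sketch *item)
{
  switch (item->type())
  {
  case Item_sketch::FIELD_ITEM:
  case Item_sketch::TRIGGER_FIELD_ITEM:  /* the added alternative: NEW/OLD
                                            row fields inside a trigger are
                                            now handled like plain fields */
    /* ... create the column from the underlying table field ... */
    return new Field_sketch();
  default:
    /* ... create the column from the item's result type ... */
    return new Field_sketch();
  }
}

With the trigger-field case missing, such items produced no matching
column, so the result list and the temp table definition disagreed.
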
Issue: A SELECT ... FOR UPDATE subquery in a HAVING clause
resulted in a deadlock and its transaction was rolled back
by InnoDB. The val_XXX interfaces do not handle errors and
do not propagate errors to their callers. sub_select
did not see this error when it called
evaluate_join_record and later made a call into InnoDB.
As the transaction had been rolled back, InnoDB asserted.
Fix: evaluate_join_record now checks whether an error has
been reported and, if so, returns it to its caller.
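
A hedged sketch of the pattern described above (names and signatures
are illustrative, not the actual server code):

enum nested_loop_state { NESTED_LOOP_OK, NESTED_LOOP_ERROR };

struct Session
{
  bool error_flag;
  bool is_error() const { return error_flag; }
};

nested_loop_state evaluate_join_record(Session *session)
{
  /* ... condition evaluation; a val_xxx() call may fail internally
     without returning any error code to us ... */
  if (session->is_error())        /* the added check */
    return NESTED_LOOP_ERROR;     /* propagate instead of continuing */
  return NESTED_LOOP_OK;
}

nested_loop_state sub_select(Session *session)
{
  nested_loop_state rc= evaluate_join_record(session);
  if (rc != NESTED_LOOP_OK)
    return rc;                    /* do not call back into the engine
                                     after its transaction rolled back */
  /* ... continue the join loop ... */
  return NESTED_LOOP_OK;
}
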
FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884
Issue:
-----
During partition pruning, we first identify the partition
in which the row can reside and then identify the subpartition.
If we find a partition but not the subpartition, we hit
a debug assert. While finding the subpartition, we check
the current thread's error status in the part_val_int()
function after some operation. In this case the thread's
error status is already set to an error (multiple rows
returned), so the function returns "no partition found",
resulting in incorrect behavior.
SOLUTION:
---------
Currently any error encountered in part_val_int is
considered a "partition not found" type error. Instead of
an assert, a check needs to be done and a valid error
returned.
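
A minimal sketch of the idea (toy types; in the server the check sits
in the pruning code around part_val_int()):

#include <cassert>

struct Session
{
  bool error_flag;
  bool is_error() const { return error_flag; }
};

/* Returns 0 on success, nonzero if a real error must be reported. */
int check_subpart_lookup(Session *session, int part_id, int sub_part_id)
{
  if (sub_part_id == -1)          /* "no subpartition found" */
  {
    if (session->is_error())
      return 1;                   /* an error such as "subquery returns
                                     more than one row" was already
                                     raised: report it, don't assert */
    assert(part_id == -1);        /* only now is "partition found but
                                     no subpartition" a genuine bug */
  }
  return 0;
}
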
HOST WHEN IT CONTAINS WILDCARD
Description :- Incorrect access privileges are provided to a
user due to wrong sorting of users when wildcard characters
are present in the hostname.
Analysis :- The function "get_sort()" is used to sort the
user name, hostname and database name strings. It is used
to arrange the users in access privilege matching order.
When a user connects, the server checks the sorted user
access privilege list and finds the matching entry for
the user. The algorithm used in "get_sort()" sorts the
strings inappropriately. As a result, when a user connects
to the server, it is mapped to incorrect access privileges.
The algorithm in "get_sort()" counts the number of
characters before the first occurrence of any of the
wildcard characters (the single-character wildcard '_' or
the multi-character wildcard '%') and sorts in that order.
As a result of this incorrect sorting it treats the
hostnames "%" and "%.mysql.com" as equally specific values,
and therefore the order is indeterminate.
Fix:- The "get_sort()" algorithm has been modified to treat
"%" separately. Now "get_sort()" returns a number which, if
sorted in descending order, puts strings in the following
order (sketched in code below):-
* strings with no wildcards
* strings containing wildcards and non-wildcard characters
* the single multi-wildcard character ('%')
* the empty string.
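
A simplified sketch of that ordering (not the real get_sort(), which
also weights the position of the first wildcard; names here are
illustrative):

#include <cstring>

static int specificity_rank(const char *s)
{
  if (*s == '\0')
    return 0;                      /* empty string: least specific */
  if (std::strcmp(s, "%") == 0)
    return 1;                      /* the lone multi-wildcard */
  if (std::strpbrk(s, "%_") != 0)
    return 2;                      /* wildcards mixed with literals */
  return 3;                        /* no wildcards: most specific */
}

For example, specificity_rank("%.mysql.com") is 2 while
specificity_rank("%") is 1, so "%.mysql.com" now sorts ahead of "%"
instead of being treated as equally specific.
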
Backport from mysql-5.5 to mysql-5.1
Bug# 19699237: UNINITIALIZED VARIABLE IN
ITEM_FIELD::STR_RESULT LEADS TO INCORRECT
BEHAVIOR
ISSUE:
------
When the following conditions are satisfied in a query, a
server crash occurs:
a) Two rows are compared using a NULL-safe equal-to operator.
b) Each of these rows belongs to a different charset.
SOLUTION:
---------
When one charset is converted to another for comparison,
the constructor of "Item_func_conv_charset" is called.
This will attempt to use the Item_cache if the string is a
constant. This check succeeds because the "used_table_map"
of the Item_cache class is never set to the correct value.
Since it is mistakenly assumed to be a constant, it tries
to fetch the relevant null value related fields which are
yet to be initialized. This results in valgrind issues
and wrong results.
The fix is to update the "used_table_map" of "Item_cache".
This will allow "Item_func_conv_charset" to realise that
this is not a constant.
Problem: A UDF doesn't handle string-type arguments properly,
due to a misplaced break.
The argument length is also not set properly
when the argument is NULL.
Solution: Fixed the code by putting the break in the right
place and setting the argument length to zero when the
argument is NULL.
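
A hedged sketch of both fixes in a typical argument loop (the struct
layout follows the UDF API, but the types and loop here are redefined
for illustration):

#include <cstddef>

enum Item_result_sketch { STRING_RESULT, INT_RESULT, REAL_RESULT };

struct UDF_ARGS_sketch
{
  unsigned int arg_count;
  Item_result_sketch *arg_type;
  char **args;
  unsigned long *lengths;
};

void handle_args(UDF_ARGS_sketch *args)
{
  for (unsigned int i= 0; i < args->arg_count; i++)
  {
    if (args->args[i] == NULL)
    {
      args->lengths[i]= 0;         /* fix 2: NULL argument => length 0 */
      continue;
    }
    switch (args->arg_type[i])
    {
    case STRING_RESULT:
      /* ... handle the string value ... */
      break;                       /* fix 1: break belongs here, so the
                                      string case no longer falls through */
    case INT_RESULT:
      /* ... handle the integer value ... */
      break;
    default:
      break;
    }
  }
}
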
Backport from mysql-5.5 to mysql-5.1
Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY
Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.
Solution:
Change the ref_by address for an order_by field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
Backport from mysql-5.5 to mysql-5.1
Bug #19612819 : FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0
Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder is using the real_item. As a result
the server fails later with an assert.
Solution:
Do not use real_item to get the temp table field.
Instead use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
Backport from mysql-5.5 to mysql-5.1 of:
Bug19770858: MYSQLD CAN BE DRIVEN TO OOM WITH TWO SIMPLE SESSION VARS
The problem was that the maximum value of the transaction_prealloc_size
session system variable was ULONG_MAX which meant that it was possible
to cause the server to allocate excessive amounts of memory.
This patch fixes the problem by reducing the maximum value of
transaction_prealloc_size and transaction_alloc_block_size down
to 128K.
Note that transactions will still be able to allocate more than
128K if needed, this patch just reduces the amount that can be
preallocated - as well as the maximum size of the incremental
allocation blocks.
(cherry picked from commit 540c9f7ebb428bbf9ec028feabe1f7f919fdefd9)
Conflicts:
mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic.result
mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic_64.result
mysql-test/suite/sys_vars/t/disabled.def
mysql-test/suite/sys_vars/t/transaction_alloc_block_size_basic.test
sql/sys_vars.cc
AS AN INNODB PARTITION.
PROBLEM
-------
The correct engine_type was not being set during
rebuild of the partition, due to which the handler
was always created with the default engine,
which is InnoDB for 5.5+. Therefore, even if the
table was MyISAM, after rebuilding, the partitions
ended up as InnoDB partitions.
FIX
---
Set the correct engine type during rebuild.
[Approved by mattiasj #rb3599]
REPLICATION FILTERS ARE USED.
Problem:
When a filtered slave applies an Int_var_log_event and
tries to write the event to its own binlog, the
LAST_INSERT_ID value is written incorrectly.
Analysis:
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is a variable which is set when LAST_INSERT_ID() is used by
a statement. If it is set, first_successful_insert_id_in_
prev_stmt_for_binlog will be stored in the statement-based
binlog. This variable is CUMULATIVE along the execution of
a stored function or trigger: if one substatement sets it
to 1 it will stay 1 until the function/trigger ends,
thus making sure that first_successful_insert_id_in_
prev_stmt_for_binlog does not change anymore and is
propagated to the caller for binlogging. This is achieved
using the following code:

if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
{
  /* It's the first time we read it */
  first_successful_insert_id_in_prev_stmt_for_binlog=
    first_successful_insert_id_in_prev_stmt;
  stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
}
The slave server, after receiving an Int_var_log_event from
the master, sets
stmt_depends_on_first_successful_insert_id_in_prev_stmt
to true (*which is wrong*) without setting
first_successful_insert_id_in_prev_stmt_for_binlog. Because
of this, when the actual DML statement with
LAST_INSERT_ID() is parsed by the slave SQL thread,
first_successful_insert_id_in_prev_stmt_for_binlog is not
set. Hence the value zero (the default) is written to the
slave's binlog.
Why only a *filtered slave* is affected when the code is
in a common place:
-------------------------------------------------------
In Query_log_event::do_apply_event,
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is reset to zero at the end of the function. In the case of
a normal slave (no filters), this variable is reset there.
On a filtered slave, the SQL thread defers execution of all
IRU events until the corresponding Query_log_event is
received. Once it receives the Query_log_event, it executes
all pending IRU events and then the Query_log_event itself.
Hence the variable does not get reset to 0, causing this bug.
Fix: As described above, the root cause was setting
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
when an Int_var_log_event was executed by the slave SQL
thread. Hence the problematic line is removed from the code.
Description: Fix for bug CVE-2012-5611 (bug 67685) is
incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and
check_grant_db() can be overflown by up to two bytes. That's
probably not enough to do anything more serious than crashing
mysqld.
Analysis: In acl_get(), "copy_length" is calculated by
just adding the variable lengths. But when we use them
with strmov() we add +1 to each. This can lead to a
three-byte buffer overflow (i.e., two +1's at the strmov()
calls and one byte for the NUL added by strmov()). The same
happens in the check_grant_db() function as well.
Fix: We need to add "+2" to "copy_length" in acl_get()
and "+1" to "copy_length" in check_grant_db().
The dump thread may encounter an error when reading events from the active
binlog file. Such errors may be temporary, so the dump thread tries to read
the event again. But the dump thread seeked to a wrong position, which caused
some events to be sent twice.
To fix the bug, prev_pos is defined outside the while loop and is set to the
correct position after every event that is read correctly.
This patch also makes binlog_can_be_corrupted more accurate: only binlogs
that were not closed normally are marked binlog_can_be_corrupted.
Finally, two warnings are added for when the dump thread encounters such
temporary errors.
Problem:-
In a procedure, when we compare the value of a select query
with an IN clause and the two have different collations, it
causes an error on first execution and an assert on the
second. The procedure has a query like:
set @x = ((select a from t1) in (select d from t2)); <---proc1
           sel1                  sel2
Analysis:-
When we execute proc1 the first time, while resolving the
fields of the user variable, we call
Item_in_subselect::fix_fields, which resolves sel2. There,
in Item_in_subselect::select_transformer, we evaluate the
left expression (sel1) and store it in an Item_cache_* object
(to avoid re-evaluating it many times during subquery
execution) by creating an Item_in_optimizer object.
While evaluating the left expression we prepare sel1.
After that, we put a new condition into sel2
in Item_in_subselect::select_transformer() which compares
t2.d and sel1 (which is cached in the Item_in_optimizer).
Later, while checking the collation in agg_item_collations(),
we get an error and clean up the item. During this cleanup we
also cleaned the cached value in the Item_in_optimizer object.
When we execute the procedure a second time, we have the
condition for sel2, but in setup_cond() we cannot find the
reference item, as it was destroyed during item cleanup, so
the assert fires.
Solution:-
We should not clean up the cached value in the Item_in_optimizer
object if we have put the condition into the subselect.
"SHOW PROCESSLIST"
Analysis:
----------
The problem here is that if one connection changes its
default db while another connection executes
"SHOW PROCESSLIST" and wants to read the db of that
connection, there is a chance of accessing invalid
memory.
The db name stored in THD is guarded neither while changing
the user DB nor while reading it in "SHOW PROCESSLIST".
So, if THD.db is freed by the THD "owner" thread while another
thread executing a "SHOW PROCESSLIST" statement tries to read
and copy THD.db at the same time, we may end up with the issue
reported here.
Fix:
----------
Used mutex "LOCK_thd_data" to guard THD.db while freeing it
and while copying it to processlist.
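
A minimal sketch of the locking pattern (std::mutex stands in for the
server's LOCK_thd_data; the THD here is a toy struct):

#include <mutex>
#include <string>
#include <cstring>

struct THD_sketch
{
  std::mutex lock_thd_data;            /* stands in for LOCK_thd_data */
  char *db;
};

/* Owner thread: replace the current database name under the lock. */
void set_db(THD_sketch *thd, const char *new_db)
{
  std::lock_guard<std::mutex> guard(thd->lock_thd_data);
  delete[] thd->db;                    /* free the old name safely */
  thd->db= 0;
  if (new_db)
  {
    thd->db= new char[std::strlen(new_db) + 1];
    std::strcpy(thd->db, new_db);
  }
}

/* "SHOW PROCESSLIST" thread: copy the name under the same lock. */
std::string read_db(THD_sketch *thd)
{
  std::lock_guard<std::mutex> guard(thd->lock_thd_data);
  return std::string(thd->db ? thd->db : "");
}
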
Description:
The original fix for Bug#11765744 changed a mutex into a
read-write lock to avoid recursive acquisition of the
LOCK_status mutex.
On Windows, locking a read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses if they are supported
by the Windows version, do not support recursion according to
their documentation. For our own implementation of read-write
lock, which is used when the Windows version doesn't support
SRW locks, recursive locking can easily lead to deadlock if
there are concurrent lock requests.
Fix:
This patch reverts the previous fix for bug#11765744 that used
read-write locks. Instead, the problem of recursive locking of
the LOCK_status mutex is solved by tracking the recursion level
with a counter in the THD object and acquiring the lock only
once, when we first enter the fill_status() function (see the
sketch below).
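
A sketch of the recursion-counter pattern (illustrative names; the
counter lives in the per-session object so only the outermost call
touches the mutex):

#include <mutex>

static std::mutex LOCK_status_sketch;   /* stand-in for LOCK_status */

struct Session
{
  int fill_status_recursion_level;
};

void fill_status(Session *session)
{
  if (session->fill_status_recursion_level++ == 0)
    LOCK_status_sketch.lock();          /* only the outermost call locks */

  /* ... gather status variables; this may re-enter fill_status()
     on the same thread without trying to re-acquire the mutex ... */

  if (--session->fill_status_recursion_level == 0)
    LOCK_status_sketch.unlock();        /* outermost call unlocks */
}
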
TO INCONSISTENCY
PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovery we cannot access the table.
We cannot even drop the table, because the drop algorithm
requires the .par file to read the partition information.
FIX
---
1. We move the deletion of the .par file to after the
deletion of all the table data from the storage engine.
2. During the drop operation, if we detect that the .par
file is missing, we delete the .frm file, since there is
no way of recovering without the .par file.
[Approved by Mattias rb#2576 ]
STRING CONVERSION FUNCTIONS
Problem:
While executing a prepared statement, a user variable is
set to memory which is freed at the end of
execution.
If the statement is executed again, valgrind reports an
error when this pointer is accessed.
Analysis:
1. First time when Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result.
(In the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and
"value" gets set to point at "result_field", with
"str_length" being that of result_field.
4. In the call to JOIN::cleanup, result_field's memory gets
freed as this is allocated in a chunk as part of the
temporary table which is needed to execute the query.
5. Next time, when execute of the same statement is called,
"value" will be set to memory which is already freed.
Valgrind error occurs as "str_length" is positive
(set at Step 3)
Note that the user variables list is stored as part of the Lex
object in set_var_list, hence the persistence across executions.
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this
problem as well, so it is backported here.
In the solution for Bug#11764371, we create another user_var
object and repoint it to the temp table's field. As a result,
while deleting the allocated buffer in step 3, since the cloned
object does not own the buffer, no deletion happens.
So at step 5, when we execute the statement a second time, the
original object is used, and since no deletion happened,
valgrind does not complain about a dangling pointer.
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS
Backporting these two fixes to 5.1
Added a unit test to test the my_decimal constructor and assignment operators.
The problem happened due to a broken left expression in an Item_in_optimizer
object. In the buggy case the left expression is a runtime-created
Item_outer_ref item which is deleted at the end of the statement, so one of
the Item_in_optimizer arguments becomes invalid on re-execution. The fix is
to use real_item() instead of the original left expression. Note: it feels a
bit weird that after preparing, the field is directly part of the generated
Item_func_eq, whereas in execution it is replaced with an Item_outer_ref
wrapper object.
GROUP BY, MYISAM
Problem:-
A query that uses the loose index scan optimization and
contains MIN() causes a segmentation fault (when the table row
length is less than the key length).
Analysis:
While using loose index scan for MIN(), we call key_copy() to
copy the key data from the record.
This function was using a temporary record buffer to store the
key data from the record buffer. But when the key length is
greater than the buffer length, this causes a segmentation fault.
Solution:
Provide a properly sized buffer to store the key data.
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.
Analysis:
In union-type queries like
(select * from .. order by a) order by b;
or
(select * from .. order by a) union (select * from .. order by b);
we skip resolving "order by a" for the first query, and
"order by a" and "order by b" in the second query.
This means that when our order by contains an Item_func_match item,
we skip resolving it.
But we maintain an ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of this list, we call Item_func_match::init_search(), which causes
the debug assert because the item is not resolved.
Solution:
We skip execution if the item is not fixed, and we do not
fix the index (Item_func_match::fix_index()) for items on which
Item_func_match::fix_fields() has not been called, so that later
changes can check the dependency on fixed fields.
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of a select query gets corrupted when a query
with group_concat having an order by is present in an order by
clause of the select query. As a result, the server crashes with
an assert.
Analysis:
While parsing the order by for group_concat, it is presumed that
it always appears before the actual order by of the
select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
onto which select->order_list is copied. Once this is done,
it empties select->order_list.
In the case presented in the bug page, as the order by is already
parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query
and creates a fake_lex_unit, which results in the change of
context info.
Solution:
Make group_concat's order by parsing independent of the select query.
Problem:
A select query inside a group_concat function having an
outer reference results in a crash.
Analysis:
In the function Item_func_group_concat::add, we do not
check whether the return value of get_tmp_table_field can
be NULL for a non-const item. This can happen for a query
with an outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, the call to const_item() from
Item_func_group_concat::add returns false, and we go on to
access the field, which can be NULL, resulting in the crash.
get_tmp_table_field does not return NULL for Items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check for the return value of get_tmp_table_field before we
access field contents.
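
A minimal sketch of the guard (toy types; in the server,
get_tmp_table_field() is the call whose result must be checked):

struct Field_sketch {};

struct Item_sketch
{
  virtual Field_sketch *get_tmp_table_field() { return 0; }
  virtual bool const_item() const { return false; }
  virtual ~Item_sketch() {}
};

bool add_to_group_concat(Item_sketch *item)
{
  if (!item->const_item())
  {
    Field_sketch *field= item->get_tmp_table_field();
    if (field == 0)                /* the added check */
      return false;                /* skip instead of dereferencing NULL */
    /* ... read the field contents as before ... */
  }
  return true;
}
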
Fixed the get_data_size() methods for multi-point features to check
properly for the end of their respective data arrays.
Extended the point-checking function to take an optional third argument
so cases where there's additional data in each array element (besides
the point data itself) can be covered by the helper function.
Fixed the 3 cases where such an offset was present to use the proper
checking helper function.
Test cases added.
Fixed review comments.
REGULAR SQL VS PREPARED STATEMENT
Analysis:
---------
When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.
Consider the example:
SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1 | sqlif |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+
Executing a prepared statement where the parameters are
supplied:
PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ? | ps_if_fail |
+----------+------------+
| 0.038687 | 1 |
+----------+------------+
1 row in set (0.00 sec)
In the regular statement, or while executing the prepared
statement without passing parameters, the decimal
precision is set for the user variable of string type.
The comparison function used for evaluation considered
the precision while comparing the values.
But while executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one which compared the absolute values.
Fix:
----
The fix is to set the 'decimals' field of Item_param to the
default value, which is the maximum number of
decimals (NOT_FIXED_DEC). This is set for cases where
strings are converted to numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that a 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: 0 can also indicate an
error, which is then ignored. Other callers of my_b_fill don't
check for errors either: my_b_copy_to_file, and possibly my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during the "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During a my_b_fill() call, when the cache read fails,
info->error= -1 is set. Hence a check of "info->error"
is added to the above two callers upon its return.
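
A sketch of the caller-side check (IO_CACHE-style, simplified: a 0
return can mean either end-of-cache or a failed read, and only
info->error distinguishes the two):

#include <cstddef>

struct IO_CACHE_sketch
{
  int error;                       /* set to -1 when a read fails */
};

/* Stand-in for my_b_fill(): returns 0 on EOF *or* on error. */
static size_t fill_sketch(IO_CACHE_sketch *info)
{
  info->error= 0;                  /* pretend: clean end-of-cache */
  return 0;
}

int copy_cache(IO_CACHE_sketch *info)
{
  size_t bytes;
  while ((bytes= fill_sketch(info)) != 0)
  {
    /* ... consume the bytes ... */
  }
  if (info->error == -1)           /* the added check in the callers */
    return 1;                      /* read failed: report the error */
  return 0;                        /* genuine end-of-cache */
}
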
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (where
the multiplication could overflow) and only then comparing
it to the number of bytes available.
The overflow can effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
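
A sketch of the overflow-safe form of the check (illustrative names):

#include <cstddef>
#include <cstdint>

bool enough_points(uint32_t n_points, size_t point_size, size_t bytes_left)
{
  /* Broken form: n_points * point_size <= bytes_left
     (the multiplication may wrap and silently pass the check). */
  if (point_size == 0)
    return false;
  return n_points <= bytes_left / point_size;  /* division cannot wrap */
}
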
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the
server is started with "--binlog-ignore-db some database" and
'fully qualified' table names are used in the ALTER TABLE
statement, altering a table outside the current database context.
Analysis:
========
The above problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the
current default database becomes "NULL", none of the
statements are binlogged.
The current behaviour is that if the user has specified
restrictions on which databases are to be replicated and the
default db is not specified, then do not replicate.
This means that "NULL" is considered to be equivalent to
everything (default db = NULL implies "ignore, don't log the
statement").
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
When logging the first Query that refers to a user variable, the slave
failed to log the user variable.
It appears that at execution of a User_var event the slave applier
considered the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one
that the thread holds at User_var execution and the one that the thread
sees when applying the Query. While the two naturally differ in the
regular execution branch (as the two computational events are handled
as individual events), in the deferred-applying case the User_var
execution effectively belongs to its Query's processing.
Fixed by storing the query id from User_var parsing time (where the
decision to defer is taken) and temporarily substituting it for the
actual query id at User_var execution time (along with its query).
This manipulation mimics the behaviour of the regular applying branch.
Item_func_group_concat::copy_or_same() creates a copy of the original
object. It also creates a copy of the ORDER structure, because ORDER
struct elements may be modified in find_order_in_list() called from
Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next elements
point to the original ORDER structs. Thus find_order_in_list(), called
from an EXECUTE statement, modifies the original ORDER item pointers so
that they point to runtime items; these items are freed after
execution, so the original ORDER structure becomes invalid.
The fix is to properly update the ORDER::next fields so that they point
to the new ORDER elements.
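
A sketch of the fix's effect on a toy ORDER list (shallow-copying the
elements like memcpy does, then rewiring next to point into the copy):

struct ORDER_sketch
{
  ORDER_sketch *next;
  /* ... item pointers ... */
};

/* Copy a linked ORDER list into preallocated storage. */
ORDER_sketch *copy_order_list(const ORDER_sketch *src,
                              ORDER_sketch *storage)
{
  ORDER_sketch *head= 0;
  ORDER_sketch **tail= &head;
  for (int i= 0; src != 0; src= src->next, i++)
  {
    storage[i]= *src;              /* element copy, as memcpy did */
    storage[i].next= 0;
    *tail= &storage[i];            /* the fix: next points to the copy,
                                      not back into the original list */
    tail= &storage[i].next;
  }
  return head;
}
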
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload in the
'String' class does not copy the str_charset member from
the R.H.S object to the L.H.S object. Hence the charset is
wrongly set when using string assignments.
Analysis: The above problem was identified while doing the
analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e., "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Copy the str_charset member in the operator '=' overload
method.
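
A sketch of the fix in a toy String class (only the member copy
matters here; the real class also manages its buffer):

struct CHARSET_INFO_sketch { const char *name; };

class String_sketch
{
public:
  const char *ptr;
  unsigned length;
  const CHARSET_INFO_sketch *str_charset;

  String_sketch &operator=(const String_sketch &s)
  {
    if (this != &s)
    {
      ptr= s.ptr;
      length= s.length;
      str_charset= s.str_charset;  /* the missing copy: without this the
                                      L.H.S keeps its old charset */
    }
    return *this;
  }
};
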
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
A coincidence of factors caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record
set. However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of the LEFT JOIN ... ON
FALSE of a const table row with a row of NULLs);
* we evaluate Item_is_not_null_test::val_int() as part
of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return the empty set);
* however, during the evaluation of
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag, which
forced the subquery to return UNKNOWN instead of the empty
set.
That caused a wrong result.
The problem is a regression from the small cleanup in
the fix for bug11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for bug11752543.
Before that regression the Item_is_not_null_test items
were never constants.
The fix is a rollback of the Item_is_not_null_test parts
of the bug11827369 fix.
GRANT STATEMENT
Description: A missing length check causes a problem while
copying source to destination when
lower_case_table_names is set to a value
other than 0. This patch fixes the issue
by ensuring that the required bound check is
performed.
Problem:
When the VALUES() function is inappropriately used in a SET statement, the
server exits:
set port = values(v);
This happens because values(v) is parsed as an Item_insert_value by
the parser. Both Item_field and Item_insert_value report their type as
FIELD_ITEM, but for Item_insert_value the field_name member is NULL. In the
set_var constructor, when the type of the item is FIELD_ITEM, we try to
access the non-existent field_name.
The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value
Item_ident::field_name is NULL for Item_insert_value.
Solution:
In the parsing stage, in the set_var constructor, if the item type is
FIELD_ITEM and the field_name is absent, then it is probably
an Item_insert_value, so leave it as-is for later evaluation.
rb://2004 approved by Roy and Norvald.
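
A minimal sketch of the parser-side guard (toy types; in the server the
check is done in the set_var constructor):

struct Item_sketch
{
  enum Type { FIELD_ITEM, OTHER_ITEM };
  Type item_type;
  const char *field_name;          /* NULL for Item_insert_value */
};

void set_var_init(Item_sketch *item)
{
  if (item->item_type == Item_sketch::FIELD_ITEM && item->field_name)
  {
    /* ... resolve the named variable as before ... */
  }
  /* else: probably an Item_insert_value; leave it for later
     evaluation instead of dereferencing a NULL field_name */
}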