Commit graph

26836 commits

Author SHA1 Message Date
Ajo Robert
f3dce250f4 Bug #20760261 mysqld crashed in materialized_cursor::
send_result_set_metadata

Analysis
--------
A cursor inside a trigger accessing the NEW/OLD row leads to a server exit.

The reason for the bug was that the implementation of
create_tmp_table() did not consider Item::TRIGGER_FIELD_ITEM
as a possible type for the class being instantiated.
This resulted in a mismatch between the number of columns
in the result list and the temporary table definition. The
mismatch caused the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata
to fail in debug mode.

Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as a valid
type while creating fields.
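
A minimal sketch of the reported scenario (table, trigger, and column
names are hypothetical, not from the original report): a cursor declared
inside a trigger whose SELECT list refers to the NEW row.

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
DELIMITER //
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
BEGIN
  DECLARE c CURSOR FOR SELECT NEW.a FROM t2;
  OPEN c;
  CLOSE c;
END//
DELIMITER ;
INSERT INTO t1 VALUES (1);  -- previously failed the assertion in debug builds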
2015-08-07 16:26:10 +05:30
Mithun C Y
c28626d0af Bug #21096444: MYSQL IS TRYING TO PERFORM A CONSISTENT READ BUT THE READ VIEW IS NOT ASSIGNED!
Issue: A SELECT ... FOR UPDATE subquery in a HAVING clause
resulted in a deadlock, and its transaction was rolled back
by InnoDB. The val_XXX interfaces do not handle errors and
do not propagate them to their callers. sub_select did not
see this error when it called evaluate_join_record and later
made another call into InnoDB. Since the transaction had
already been rolled back, InnoDB asserted.

Fix: evaluate_join_record now checks whether an error has
been reported and returns it to its caller.
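
An illustrative sketch of the query shape described above (table and
column names are hypothetical); a concurrent transaction holding
conflicting InnoDB locks is needed to actually provoke the deadlock and
rollback:

SELECT a, COUNT(*) FROM t1 GROUP BY a
HAVING a > (SELECT b FROM t2 WHERE b = 1 FOR UPDATE);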
2015-08-04 11:45:02 +05:30
Sreeharsha Ramanavarapu
9372c9ebd2 Bug #20909518: HANDLE_FATAL_SIGNAL (SIG=11) IN
FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884

Post-push fix.
2015-08-03 10:08:46 +05:30
Sreeharsha Ramanavarapu
8006ad8053 Bug #20909518: HANDLE_FATAL_SIGNAL (SIG=11) IN
FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884

Issue:
-----
During partition pruning, we first identify the partition
in which the row can reside and then identify the
subpartition. If we find a partition but not the
subpartition, we hit a debug assert. While finding the
subpartition we check the current thread's error status in
the part_val_int() function after some operation. In this
case the thread's error status is already set to an error
(multiple rows returned), so the function reports that no
partition was found, which results in incorrect behavior.

SOLUTION:
---------
Currently any error encountered in part_val_int is
considered a "partition not found" type error. Instead of
an assert, a check needs to be done and a valid error
returned.
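
A hypothetical sketch of the scenario (all names illustrative): a
subpartitioned table queried with a scalar subquery that returns more
than one row, so the thread's error status is already set when
subpartition pruning runs.

CREATE TABLE tp (a INT, b INT)
PARTITION BY RANGE (a) SUBPARTITION BY HASH (b) SUBPARTITIONS 2
(PARTITION p0 VALUES LESS THAN (10),
 PARTITION p1 VALUES LESS THAN (20));
CREATE TABLE ts (x INT);
INSERT INTO ts VALUES (1), (2);
SELECT * FROM tp WHERE a = 5 AND b = (SELECT x FROM ts);
-- ER_SUBQUERY_NO_1_ROW is expected; the debug assert used to fire instead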
2015-08-03 08:15:59 +05:30
Sreeharsha Ramanavarapu
33a2e5abd8 Bug #20238729: ILLEGALLY CRAFTED UTF8 SELECT PROVIDES NO
WARNINGS

Backporting to 5.1 and 5.5
2015-07-10 07:52:00 +05:30
Arun Kuruvila
fdae90dd11 Bug #20181776 :- ACCESS CONTROL DOESN'T MATCH MOST SPECIFIC
HOST WHEN IT CONTAINS WILDCARD

Description :- Incorrect access privileges are provided to a
user due to wrong sorting of users when wildcard characters
are present in the hostname.

Analysis :- The function "get_sort()" is used to sort the
user name, hostname and database name strings. It is used
to arrange the users in the access-privilege matching order.
When a user connects, the server searches the sorted user
access privilege list and finds the corresponding matching
entry for the user. The algorithm used in "get_sort()" sorts
the strings inappropriately. As a result, when a user
connects to the server, it is mapped to incorrect user
access privileges.
The algorithm used in "get_sort()" counts the number of
characters before the first occurrence of any one of the
wildcard characters (the single-wildcard character '_' or
the multi-wildcard character '%') and sorts in that order.
As a result of this incorrect sorting it treats the hostnames
"%" and "%.mysql.com" as equally specific values and therefore
the order is indeterminate.

Fix:- The "get_sort()" algorithm has been modified to treat
"%" separately. Now "get_sort()" returns a number which, if
sorted in descending order, puts strings in the following
order:-
* strings with no wildcards
* strings containing wildcards and non-wildcard characters
* a single multi-wildcard character ('%')
* the empty string.
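
An illustrative example of the resulting specificity order (account and
database names are hypothetical): after the fix, '%.mysql.com' sorts as
more specific than the lone multi-wildcard '%'.

CREATE USER 'app'@'%' IDENTIFIED BY 'pw1';
CREATE USER 'app'@'%.mysql.com' IDENTIFIED BY 'pw2';
GRANT SELECT ON db1.* TO 'app'@'%';
GRANT ALL PRIVILEGES ON db1.* TO 'app'@'%.mysql.com';
-- A client connecting from host1.mysql.com should match the more specific
-- 'app'@'%.mysql.com' entry before 'app'@'%'.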
2015-04-28 14:56:55 +05:30
V S Murthy Sidagam
c655515d1b Bug #20683237 BACKPORT 19817663 TO 5.1 and 5.5
Restrict when user table hashes can be viewed. Require SUPER privileges.
2015-04-27 14:33:25 +05:30
Nisha
e65f3f6f2e BUG#20754369: BACKPORT BUG#20007583 TO 5.1
Backporting the patch to 5.1 and 5.5
2015-04-06 14:12:15 +05:30
Sreeharsha Ramanavarapu
c788e693e6 Bug #20730155: BACKPORT BUG#19699237 TO 5.1
Backport from mysql-5.5 to mysql-5.1

Bug# 19699237: UNINITIALIZED VARIABLE IN
               ITEM_FIELD::STR_RESULT LEADS TO INCORRECT
               BEHAVIOR

ISSUE:
------
When the following conditions are satisfied in a query, a
server crash occurs:
a) Two rows are compared using the NULL-safe equal-to operator.
b) Each of these rows belongs to a different character set.

SOLUTION:
---------
When one character set is converted to another for
comparison, the constructor of "Item_func_conv_charset" is
called. This will attempt to use the Item_cache if the
string is a constant. This check succeeds because the
"used_table_map" of the Item_cache class is never set to
the correct value. Since the item is mistakenly assumed to
be a constant, it tries to fetch the relevant NULL-value
related fields, which are yet to be initialized. This
results in valgrind issues and wrong results.

The fix is to update the "used_table_map" of "Item_cache".
This will allow "Item_func_conv_charset" to realise that
this is not a constant.
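
A hypothetical repro sketch, assuming the row-comparison shape described
above (table and column names are illustrative):

CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
CREATE TABLE t2 (b VARCHAR(10) CHARACTER SET utf8);
INSERT INTO t1 VALUES ('x');
INSERT INTO t2 VALUES ('x');
SELECT ROW(a, a) <=> ROW(b, b) FROM t1, t2;  -- NULL-safe comparison across charsets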
2015-03-26 07:40:35 +05:30
Vamsikrishna Bhagi
3c02e6ec2e Bug# 20730103 BACKPORT 19688008 TO 5.1
Problem: The UDF code doesn't handle arguments properly when they
         are of string type, due to a misplaced break.
         The length of an argument is also not set properly
         when the argument is NULL.

Solution: Fixed the code by putting the break at the right place
          and setting the argument length to zero when the
          argument is NULL.
2015-03-25 15:28:55 +05:30
Chaithra Gopalareddy
044060fe16 Bug #20730220 : BACKPORT BUG#19880368 TO 5.1
Backport from mysql-5.5 to mysql-5.1

Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY

Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.

Solution:
Change the ref_by address for an ORDER BY field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
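
A hypothetical sketch of a query whose ORDER BY item is an aggregate
(SUM_FUNC_ITEM), the shape this fix addresses; the actual repro in the
bug report may differ:

SELECT GROUP_CONCAT(a) FROM t1 GROUP BY b ORDER BY SUM(c);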
2015-03-23 14:31:28 +05:30
Chaithra Gopalareddy
a2cd622f3a Bug #20730129: BACKPORT BUG#19612819 TO 5.1
Backport from mysql-5.5 to mysql-5.1

Bug #19612819 :  FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0

Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder uses the real_item. As a result the
server fails later with an assert.

Solution:
Do not use real_item to get the temp table field.
Instead use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
2015-03-23 12:05:55 +05:30
Jon Olav Hauglid
c7581bb5a1 Bug#20730053: BACKPORT BUG#19770858 TO 5.1
Backport from mysql-5.5 to mysql-5.1 of:

Bug#19770858: MYSQLD CAN BE DRIVEN TO OOM WITH TWO SIMPLE SESSION VARS

The problem was that the maximum value of the transaction_prealloc_size
session system variable was ULONG_MAX which meant that it was possible
to cause the server to allocate excessive amounts of memory.

This patch fixes the problem by reducing the maximum value of
transaction_prealloc_size and transaction_alloc_block_size down
to 128K.

Note that transactions will still be able to allocate more than
128K if needed; this patch just reduces the amount that can be
preallocated, as well as the maximum size of the incremental
allocation blocks.
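
For illustration, the affected session variables (requests above the new
128K maximum are expected to be truncated to it, with a warning):

SET SESSION transaction_prealloc_size    = 128 * 1024;
SET SESSION transaction_alloc_block_size = 128 * 1024;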

(cherry picked from commit 540c9f7ebb428bbf9ec028feabe1f7f919fdefd9)

Conflicts:
	mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic.result
	mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic_64.result
	mysql-test/suite/sys_vars/t/disabled.def
	mysql-test/suite/sys_vars/t/transaction_alloc_block_size_basic.test
	sql/sys_vars.cc
2015-03-19 11:29:13 +01:00
Tor Didriksen
a990d5c715 Bug#17617945 BUFFER OVERFLOW IN GET_MERGE_MANY_BUFFS_COST WITH SMALL SORT_BUFFER_SIZE
get_cost_calc_buff_size() could return a wrong value for the size of imerge_cost_buff.
2013-11-01 16:39:19 +01:00
Tor Didriksen
3b63182ec4 Bug#17326567 MYSQL SERVER FILESORT IMPLEMENTATION HAS A VERY SERIOUS BUG
The filesort implementation needs space for at least 15 records
(plus some internal overhead) in its main sort buffer.
2013-10-29 17:26:20 +01:00
Aditya A
df5018f2b1 Bug#17559867 AFTER REBUILDING, A MYISAM PARTITION ENDS UP
AS AN INNODB PARTITION.

PROBLEM
-------
The correct engine_type was not being set during the
rebuild of a partition, due to which the handler
was always created with the default engine,
which is InnoDB for 5.5+. Therefore, even if the
table was MyISAM, after rebuilding, the partitions
ended up as InnoDB partitions.

FIX
---
Set the correct engine type during rebuild.  

[Approved by mattiasj #rb3599]
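
An illustrative repro sketch (table name hypothetical): rebuilding a
partition of a MyISAM table; before the fix the rebuilt partition could
come back using the default engine instead.

CREATE TABLE tm (a INT) ENGINE=MyISAM
PARTITION BY HASH (a) PARTITIONS 2;
ALTER TABLE tm REBUILD PARTITION p0;
SHOW CREATE TABLE tm;  -- ENGINE should still read MyISAM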
2013-10-18 12:26:28 +05:30
Venkatesh Duggirala
29e45f155f Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF
REPLICATION FILTERS ARE USED.

Problem:
When a filtered slave applies an Int_var_log_event and then
tries to write the event to its own binlog, the LAST_INSERT_ID
value is written wrongly.

Analysis:
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is a variable which is set when LAST_INSERT_ID() is used by
a statement. If it is set, first_successful_insert_id_in_
prev_stmt_for_binlog will be stored in the statement-based
binlog. This variable is CUMULATIVE along the execution of
a stored function or trigger: if one substatement sets it
to 1 it will stay 1 until the function/trigger ends,
thus making sure that first_successful_insert_id_in_
prev_stmt_for_binlog does not change anymore and is
propagated to the caller for binlogging. This is achieved
using the following code
if(!stmt_depends_on_first_successful_insert_id_in_prev_stmt)               
{                                                                           
  /* It's the first time we read it */                                      
  first_successful_insert_id_in_prev_stmt_for_binlog=                       
  first_successful_insert_id_in_prev_stmt;                                
  stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;               
}

After receiving an Int_var_log_event from the master, the
slave server sets
stmt_depends_on_first_successful_insert_id_in_prev_stmt
to true (*which is wrong*) without setting
first_successful_insert_id_in_prev_stmt_for_binlog. Because
of this problem, when the actual DML statement with
LAST_INSERT_ID() is parsed by the slave SQL thread,
first_successful_insert_id_in_prev_stmt_for_binlog is not
set. Hence the value zero (the default) is written to the
slave's binlog.

Why only a *filtered slave* is affected when the code is
in a common place:
-------------------------------------------------------
In Query_log_event::do_apply_event,
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is reset to zero at the end of the function. In the case of
a normal slave (no filters), this variable gets reset.
On a filtered slave, the slave SQL thread defers execution
of all IRU events until the IRU's Query_log_event is
received. Once it receives the Query_log_event it executes
all pending IRU events and then executes the Query_log_event.
Hence the variable does not get reset to 0, causing this bug.

Fix: As described above, the root cause was setting
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
when an Int_var_log_event was executed by the SQL thread.
The problematic line has therefore been removed from the code.
2013-10-16 22:12:23 +05:30
Venkata Sidagam
9fc5122471 Bug#16900358 FIX FOR CVE-2012-5611 IS INCOMPLETE
Description: The fix for bug CVE-2012-5611 (bug 67685) is
incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and
check_grant_db() can be overflown by up to two bytes. That's
probably not enough to do anything more serious than crashing
mysqld.
Analysis: In acl_get(), "copy_length" is calculated by just
adding the variable lengths. But when they are used with
strmov(), +1 is added to each. This can lead to a three-byte
buffer overflow (i.e. two +1's at strmov() and one byte for
the null terminator added by the strmov() function). The same
happens in the check_grant_db() function as well.
Fix: We need to add "+2" to "copy_length" in acl_get()
and "+1" to "copy_length" in check_grant_db().
2013-10-16 14:14:44 +05:30
Libing Song
d5fdf9ef88 Bug#17402313 DUMP THREAD SENDS SOME EVENTS MORE THAN ONCE
The dump thread may encounter an error when reading events from the active
binlog file. Such errors may be temporary, so the dump thread tries to read
the event again. But the dump thread seeked to a wrong position, which caused
some events to be sent twice.

To fix the bug, prev_pos is defined outside the while loop and is set to the
correct position after every event is read correctly.

This patch also makes binlog_can_be_corrupted more accurate: only binlogs
that were not closed normally are marked binlog_can_be_corrupted.

Finally, two warnings are added for when the dump thread encounters such
temporary errors.
2013-09-10 09:35:49 +08:00
Neeraj Bisht
4f0e7c036d Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS
Problem:-
In a procedure, when we compare the value of a SELECT query
with an IN clause and the two have different collations, it
causes an error on the first execution and an assert on the
second. The procedure will have a query like
set @x = ((select a from t1) in (select d from t2));<---proc1
              sel1                   sel2

Analysis:-
When we execute this proc1 (the first time),
while resolving the fields of the user variable, we call
Item_in_subselect::fix_fields, which will resolve sel2. There,
in Item_in_subselect::select_transformer, we evaluate the
left expression (sel1) and store it in an Item_cache_* object
(to avoid re-evaluating it many times during subquery execution)
by making an Item_in_optimizer class.
While evaluating the left expression we prepare sel1.
After that, we put a new condition in sel2
in Item_in_subselect::select_transformer() which compares
t2.d and sel1 (which is cached in Item_in_optimizer).

Later, while checking the collation in agg_item_collations(),
we get an error and we clean up the item. While cleaning up, we
also cleaned the cached value in the Item_in_optimizer object.

When we execute the procedure a second time, we have the condition
for sel2, and in setup_cond() we cannot find the reference item
because it was cleaned up during item cleanup. So it asserts.


Solution:-
We should not clean up the cached value in the Item_in_optimizer
object if we have put the condition into the subselect.
2013-08-23 16:54:25 +05:30
Praveenkumar Hulakund
10a6aa256e Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Analysis:
----------
The problem here is that if one connection changes its
default db and at the same time another connection executes
"SHOW PROCESSLIST", then when the latter wants to read the
db of the other connection there is a chance of accessing
invalid memory.

The db name stored in THD is not guarded while the user
changes the DB nor while the user DB is read in "SHOW
PROCESSLIST". So, if THD.db is freed by the thd "owner"
thread and another thread executing a "SHOW PROCESSLIST"
statement tries to read and copy THD.db at the same time,
we may end up in the issue reported here.

Fix:
----------
Used the mutex "LOCK_thd_data" to guard THD.db while freeing it
and while copying it to the processlist.
2013-08-21 10:39:40 +05:30
prabakaran thirumalai
d95e57a328 Bug#17083851 BACKPORT BUG#11765744 TO 5.1, 5.5 AND 5.6
Description:
The original fix for Bug#11765744 changed a mutex to a
read-write lock to avoid multiple recursive lock acquisitions
on the LOCK_status mutex.
On Windows, locking a read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses if they are supported by
the Windows version, do not support recursion according to their
documentation. For our own implementation of read-write locks,
which is used when the Windows version doesn't support SRW,
recursive locking of a read-write lock can easily lead to deadlock
if there are concurrent lock requests.

Fix:
This patch reverts the previous fix for Bug#11765744 that used
read-write locks. Instead, the problem of recursive locking of the
LOCK_status mutex is solved by tracking the recursion level using a
counter in the THD object and acquiring the lock only once, when we
enter the fill_status() function for the first time.
2013-07-30 09:44:11 +05:30
Aditya A
5f3c0a451d Bug#13548704 ALGORITHM USED FOR DROPPING PARTITIONED TABLE CAN LEAD
TO INCONSISTENCY 

PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovery we cannot access the table.
We cannot even drop the table, because the drop algorithm
requires the .par file to read the partition information.


FIX
---
1. We move the deletion of the .par file to after deleting
   all the table data from the storage engine.
2. During the drop operation, if we detect that the .par
   file is missing, then we delete the .frm file, since
   there is no way of recovering without the .par file.
[Approved by Mattias rb#2576 ]
2013-06-14 11:22:05 +05:30
Murthy Narkedimilli
8325f2cf78 Bug 16919882 - WRONG FSF ADDRESS IN LICENSES HEADERS 2013-06-11 01:13:07 +05:30
Chaithra Gopalareddy
5bf9b7d0cb Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory which is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when this pointer is accessed.
                  
Analysis:
                
1. First time when Item_func_set_user_var::check is called,
   memory is allocated for "value" to store the result.
   (In the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
   is called again. But this time it is called with
   "use_result_field" set to true.
   As a result, we call result_field->val_str(&value).
3. Here memory allocated for "value" gets freed. And "value"
   gets set to "result_field", with "str_length" being that of
   result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
   freed as this is allocated in a chunk as part of the
   temporary table which is needed to execute the query.
5. Next time, when execute of the same statement is called,
   "value" will be set to memory which is already freed.
   Valgrind error occurs as "str_length" is positive 
   (set at Step 3)
                  
Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
            
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well. So backporting the same.
            
In the solution for Bug#11764371, we create another object of 
user_var and repoint it to temp_table's field. As a result, while
deleting the alloced buffer in Step 3, since the cloned object
does not own the buffer, the deletion will not happen.
So at Step 5, when we execute the statement a second time, the
original object is used, and since the deletion did not happen,
valgrind will not complain about a dangling pointer.
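
A hypothetical sketch of the pattern involved, assuming a string
conversion function feeding a user variable inside a repeatedly executed
prepared statement (names are illustrative):

CREATE TABLE t1 (a VARCHAR(10));
INSERT INTO t1 VALUES ('abc');
PREPARE ps FROM "SELECT @v := CONVERT(a USING utf8) FROM t1";
EXECUTE ps;
EXECUTE ps;  -- the second execution is where the freed memory was read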
2013-05-23 15:00:31 +05:30
Chaithra Gopalareddy
d0367abaff Bug#11766191:INVALID MEMORY READ IN DO_DIV_MOD WITH DOUBLY ASSIGNED VARIABLES
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS

Backporting these two fixes to 5.1.
Added a unit test to test the my_decimal constructor and assignment operators.
2013-05-22 14:36:43 +05:30
Chaithra Gopalareddy
4203b985e3 Bug#16119355:PREPARED STATEMENT: READ OF FREED MEMORY WITH STRING CONVERSION FUNCTIONS
Reverting fix for Bug#16119355 in 5.1 as this needs two patches 
from 5.5+ to work for a certain case
2013-05-10 19:18:21 +05:30
Chaithra Gopalareddy
12a26cd6e0 Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory which is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when this pointer is accessed.
            
Analysis:
            
1. First time when Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result.
(In the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here memory allocated for "value" gets freed. And "value"
gets set to "result_field", with "str_length" being that of
result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
freed as this is allocated in a chunk as part of the
temporary table which is needed to execute the query.
5. Next time, when execute of the same statement is called,
"value" will be set to memory which is already freed.
Valgrind error occurs as "str_length" is positive 
(set at Step 3)
            
Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
      
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well. So backporting the same.
      
In the solution for Bug#11764371, we create another object of 
user_var and repoint it to temp_table's field. As a result, while
deleting the alloced buffer in Step 3, since the cloned object
does not own the buffer, the deletion will not happen.
So at Step 5, when we execute the statement a second time, the
original object is used, and since the deletion did not happen,
valgrind will not complain about a dangling pointer.
2013-05-07 16:08:48 +05:30
Sergey Glukhov
a250331593 Bug#16095534 CRASH: PREPARED STATEMENT CRASHES IN ITEM_BOOL_FUNC2::FIX_LENGTH_AND_DEC
The problem happened due to a broken left expression in an Item_in_optimizer
object. In the case of this bug the left expression is a runtime-created
Item_outer_ref item which is deleted at the end of the statement, so one of the
Item_in_optimizer arguments becomes invalid when re-executed. The fix is to use
real_item() instead of the original left expression. Note: It feels a bit weird
that after preparing, the field is directly part of the generated Item_func_eq,
whereas in execution it is replaced with an Item_outer_ref wrapper object.
2013-05-07 13:10:58 +04:00
Neeraj Bisht
ed694b0c09 BUG#16222245 - CRASH WITH EXPLAIN FOR A QUERY WITH LOOSE SCAN FOR
GROUP BY, MYISAM 

Problem:-
A query that uses the loose index scan optimization and
contains MIN() causes a segmentation fault (where the table
row length is less than key_length).

Analysis:

While using the loose index scan for MIN(), we call key_copy() to copy
the key data from the record.
This function uses a temporary record buffer to store the key data
from the record buffer. But in the case where the key length is greater
than the buffer length, this causes a segmentation fault.


Solution:
Give a properly sized buffer to store the key record.
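
A hypothetical sketch of the query shape (names and key length are
illustrative): a GROUP BY on an indexed prefix with MIN() on the key
suffix, which can use the loose index scan.

CREATE TABLE tk (a INT, b VARCHAR(300), KEY (a, b)) ENGINE=MyISAM;
SELECT a, MIN(b) FROM tk GROUP BY a;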
2013-04-30 22:38:34 +05:30
Neeraj Bisht
c066c30822 Bug#16073689 : CRASH IN ITEM_FUNC_MATCH::INIT_SEARCH
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.

Analysis:
In union-type queries like

(select * from order by a) order by b;
or
(select * from order by a) union (select * from order by b);

we skip resolving "order by a" in the first query, and both
"order by a" and "order by b" in the second query.


This means that, when our ORDER BY contains an Item_func_match
item, we skip resolving it.
But we maintain a ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of this list, we call Item_func_match::init_search(), which causes a
debug assert because the item is not resolved.


Solution:
We skip execution if the item is not fixed, and we do not fix the
index (Item_func_match::fix_index()) for items for which
Item_func_match::fix_field() has not been called, so that later
changes can check the dependency on the fixed field.
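
A hypothetical sketch of a query shape in which the MATCH ... AGAINST
item in an ORDER BY is skipped during resolution (table and column names
are illustrative):

CREATE TABLE ft (a INT, b TEXT, FULLTEXT KEY (b)) ENGINE=MyISAM;
(SELECT a FROM ft ORDER BY MATCH (b) AGAINST ('word')) ORDER BY a;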
2013-04-20 12:28:22 +05:30
Chaithra Gopalareddy
4db726c0fa Bug#16347426:ASSERTION FAILED: (SELECT_INSERT &&
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
      
Problem:
The context info of the SELECT query gets corrupted when a
GROUP_CONCAT with an ORDER BY is present in an ORDER BY
clause of the SELECT query. As a result, the server crashes
with an assert.

Analysis:
While parsing the ORDER BY for GROUP_CONCAT, it is presumed that
it is always present before the actual ORDER BY of the
select query.
As a result, the parser uses select->order_list to populate the
ORDER BY items of GROUP_CONCAT and creates a select->gorder_list
to which select->order_list is copied. Once this is done,
it empties the select->order_list.
In the case presented in the bug page, as the ORDER BY is already
parsed when GROUP_CONCAT's ORDER BY is encountered, the parser
presumes that it is a second ORDER BY in the select query
and creates a fake_lex_unit, which results in the change of
context info.

Solution:
Make GROUP_CONCAT's ORDER BY parsing independent of the select query.
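
A hypothetical sketch of the shape described above, a GROUP_CONCAT with
its own ORDER BY appearing inside the ORDER BY clause of the outer
SELECT (names are illustrative):

SELECT a FROM t1
ORDER BY (SELECT GROUP_CONCAT(b ORDER BY b) FROM t2);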
2013-04-14 07:30:49 +05:30
Tor Didriksen
559af20ca4 Bug#14700180 CRASH IN COPY_FUNCS
This is a backport of the fix for
Bug#13966809 CRASH IN COPY_FUNCS WHEN GROUPING BY OUTER QUERY BLOB FIELD IN SUBQUERY
2013-04-02 16:05:10 +02:00
Chaithra Gopalareddy
94346a8b6c Bug #16347343 : CRASH, GROUP_CONCAT, DERIVED TABLES
Problem:
A SELECT query inside a GROUP_CONCAT function having an
outer reference results in a crash.

Analysis:
In the function Item_func_group_concat::add, we do not check whether
the return value of get_tmp_table_field can be NULL for
a non-const item. This can happen for a query with an
outer reference.
While resolving the outer reference in the query present
inside the GROUP_CONCAT function, we set "const_item_cache"
to false. As a result, the call to const_item() from
Item_func_group_concat::add returns false, and the code goes on
without checking whether the field can be NULL, resulting in the crash.
get_tmp_table_field does not return NULL for items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
     
Solution:
Check for the return value of get_tmp_table_field before we 
access field contents.
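
A hypothetical sketch of the shape described, a subquery with an outer
reference (t1.a) used inside GROUP_CONCAT (names are illustrative):

SELECT GROUP_CONCAT((SELECT t2.b FROM t2 WHERE t2.b = t1.a)) FROM t1;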
2013-03-31 06:48:30 +05:30
Georgi Kodinov
2739ee3848 Addendum #1 to the fix for bug #16451878 : GEOMETRY QUERY CRASHES SERVER
Fixed the get_data_size() methods for multi-point features to check properly for end 
of their respective data arrays.
Extended the point checking function to take a third optional argument so that
cases where there's additional data in each array element (besides the point
data itself) can be covered by the helper function.
Fixed the 3 cases where such an offset was present to use the proper checking
helper function.
Test cases added.
Fixed review comments.
2013-03-28 17:37:29 +02:00
Nisha Gopalakrishnan
0de3047952 BUG#11753852: IF() VALUES ARE EVALUATED DIFFERENTLY IN A
REGULAR SQL VS PREPARED STATEMENT

Analysis:
---------

When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.

Consider the example:

SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1    | sqlif    |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+

Executing a prepared statement where the parameters are
supplied:

PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ?        | ps_if_fail |
+----------+------------+
| 0.038687 | 1          |
+----------+------------+
1 row in set (0.00 sec)

In the regular statement or while executing the prepared
statements without passing parameters, the decimal
precision is set for the user variable of type string.
The comparison function used for evaluation considered
the precision while comparing the values.

But while executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one which looked at the absolute values for comparison.

Fix:
----

The fix is to set the 'decimals' field of Item_param to the
default value, which is nothing but the maximum number of
decimals (NOT_FIXED_DEC). This is set for cases where
strings are converted to the numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
2013-03-28 19:11:26 +05:30
Sujatha Sivakumar
c78c1fe52d Bug#14324766:PARTIALLY WRITTEN INSERT STATEMENT IN BINLOG
NO ERRORS REPORTED
      
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: it can also result from an
error, and the error is ignored. Other callers of my_b_fill don't
check for errors either: my_b_copy_to_file, and maybe my_b_gets.
      
Fix:
===
An error handler is already present to check the "cache"
error that is reported during the "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During the my_b_fill() function call, when the cache read fails,
info->error= -1 is set. Hence a check for "info->error"
is added for the above two callers upon their return.
2013-03-28 14:14:39 +05:30
Georgi Kodinov
0f31bfeaab Bug #16451878: GEOMETRY QUERY CRASHES SERVER
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (where
it could overflow) and only then comparing it to the
number of bytes available.
This can overflow and effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
2013-03-27 16:03:00 +02:00
Nuno Carvalho
daa3ab6ff8 BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
Fixed possible uninitialized variable.
2013-03-27 11:19:29 +00:00
Sujatha Sivakumar
5745b67e02 Bug#11829838: ALTER TABLE NOT BINLOGGED WITH
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
      
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the server
is started with "--binlog-ignore-db some database" and a 'fully
qualified' table name is used in the ALTER TABLE statement,
altering a table different from the current database context.
      
Analysis:
========
The above mentioned problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the
current default database becomes "NULL", none of the
statements will be binlogged.

The current behaviour is such that if the user has specified
restrictions on which database needs to be replicated and the
default db is not specified, then we do not replicate.
This means that "NULL" is considered to be equivalent to
everything (default db = NULL implies: ignore, don't log the
statement).
      
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
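
An illustrative sketch, assuming the server was started with
--binlog-ignore-db for some other database and the session has no
default database selected (names are hypothetical):

-- connect without selecting a default database, then:
ALTER TABLE other_db.t1 ADD COLUMN c INT;
-- before the fix this statement was missing from the binlog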
2013-03-27 11:53:01 +05:30
Andrei Elkin
0a31d4f411 Bug#16541422 LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
When logging the first Query event referring to a user variable, the slave
failed to log the user variable.
It appears that at execution of a User_var event the slave applier
regarded the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one that the
thread holds at User_var execution and the one that the thread sees when
applying the Query. While the two are naturally different in the regular
execution branch (as the two computational events are separated as individual
events), in the deferred-applying case the User_var execution effectively
belongs to its Query's processing.

Fixed by storing the query id from User_var parsing time (where the decision
to defer is taken) and temporarily substituting it for the actual query id at
User_var execution time (along with its query).
Such manipulation mimics the behaviour of the regular applying branch.
2013-03-26 19:24:01 +02:00
Murthy Narkedimilli
d20a70fb55 Bug 16395495 - OLD FSF ADDRESS IN GPL HEADER 2013-03-19 13:29:12 +01:00
Venkatesh Duggirala
bf064c5b1d Bug#16056813-MEMORY LEAK ON FILTERED SLAVE
Back porting fix from mysql-5.5
2013-03-15 08:56:20 +05:30
Sergey Glukhov
ca5caac14f Bug#16075310 SERVER CRASH OR VALGRIND ERRORS IN ITEM_FUNC_GROUP_CONCAT::SETUP AND ::ADD
Item_func_group_concat::copy_or_same() creates a copy of the original object.
It also creates a copy of the ORDER structure because the ORDER struct elements
may be modified in find_order_in_list() called from Item_func_group_concat::setup().
Because the ORDER copy is created using memcpy, the ORDER::next elements still
point to the original ORDER structs. Thus find_order_in_list(), called from an
EXECUTE statement, modifies the original ORDER item pointers so they point to
runtime items; these items are freed after execution, so the original ORDER
structure becomes invalid.
The fix is to properly update the ORDER::next fields so that they point to the
new ORDER elements.
2013-03-14 11:11:17 +03:00
Venkatesh Duggirala
5b523ee7fe BUG#14593883-REPLICATION BREAKS WHEN SET DATA TYPE
COLUMNS ARE USED INSIDE A STORED PROCEDURE                                      
                                                                                
Problem: The operator '=' overload method inside the
'String' class does not copy the str_charset member from the
R.H.S object to the L.H.S object. Hence the charset is wrongly
set when string assignments are used.

Analysis: The above mentioned problem was
identified while doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e. "the str_charset member is not copied", exists in the
mysql-5.1 code base.

Fix: Handle copying the str_charset member in the operator '='
overload method.
2013-03-12 22:36:13 +05:30
Gleb Shchepa
9e80a7891a Bug #16311231: MISSING DATA ON SUBQUERY WITH WHERE + XOR
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE

Backport from 5.6. Original message:

The following coincidences caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
  outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
  row with a row of NULLs; one of those NULL columns
  represents the select list of the subquery.

Normally, that inner subquery should return an empty record set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
  its underlying field is NULL (because of LEFT JOIN ... ON
  FALSE of a const table row with a row of nulls);
* we evaluate Item_is_not_null_test::val_int() as a part
  of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
  out the whole fake HAVING expression as FALSE, as well
  as the whole subquery, with a zero result:
  "Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
  clause (the WHERE expression is FALSE in our case, so
  the subquery should return empty set);
* however, during the evaluation of the 
  Item_is_not_null_test::val_int() in the optimizer,
  it marked its "owner" with the "was_null" flag -- that
  forced the subquery to return UNKNOWN instead of empty
  set.
That caused a wrong result.


The problem is a regression from the small cleanup in
the fix for bug 11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for bug 11752543.
Before that regression, Item_is_not_null_test items
were never constants.

The fix is to roll back the Item_is_not_null_test parts
of the bug 11827369 fix.
2013-02-27 23:21:34 +04:00
Harin Vadodaria
f032a9acf7 Bug#16372927: STACK OVERFLOW WITH LONG DATABASE NAME IN
GRANT STATEMENT

Description: A missing length check causes a problem while
             copying the source to the destination when
             lower_case_table_names is set to a value
             other than 0. This patch fixes the issue
             by ensuring that the required bounds check is
             performed.
2013-02-26 21:23:06 +05:30
Murthy Narkedimilli
69d8812a61 Updated/added copyright headers. 2013-02-25 15:26:00 +01:00
Annamalai Gurusami
15f14ff281 Bug #14211565 CRASH WHEN ATTEMPTING TO SET SYSTEM VARIABLE TO RESULT OF VALUES()
Problem:

When the VALUES() function is inappropriately used in a SET statement, the
server exits.

set port = values(v);

This happens because values(v) is parsed as an Item_insert_value by the
parser.  Both Item_field and Item_insert_value report their type as
FIELD_ITEM, but for Item_insert_value the field_name member is NULL.  In the
set_var constructor, when the type of the item is FIELD_ITEM, we try to
access the non-existent field_name.

The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value

The Item_ident::field_name is NULL for Item_insert_value.  

Solution:

In the parsing stage, in the set_var constructor, if the item type is
FIELD_ITEM and the field_name is non-existent, then it is probably
an Item_insert_value.  So leave it as it is for later evaluation.

rb://2004 approved by Roy and Norvald.
2013-02-22 14:56:17 +05:30
Pedro Gomes
7fc3efdc4f BUG#13545447: RPL_ROTATE_LOGS FAILS DUE TO CONCURRENCY ISSUES IN REP. CODE
Post-push fix, broken build:
sql/rpl_master.cc:1049:70: error: converting ‘false’ to pointer type ‘bool*’ [-Werror=conversion-null]
2013-02-18 17:02:26 +00:00