Commit graph

29253 commits

Author SHA1 Message Date
Aditya A
5f3c0a451d Bug#13548704 ALGORITHM USED FOR DROPPING PARTITIONED TABLE CAN LEAD
TO INCONSISTENCY 

PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovery we cannot access the table.
We cannot even drop the table, because the drop
algorithm requires the .par file to read the partition
information.


FIX
---
1. Delete the .par file only after all the table data has
   been deleted from the storage engine.
2. If, during the drop operation, we detect that the .par
   file is missing, delete the .frm file as well, since
   there is no way to recover without the .par file.
  
[Approved by Mattias rb#2576 ]
2013-06-14 11:22:05 +05:30
Sivert Sorumgard
e7d8f19bc1 Bug #14227431: CHARACTER SET MISMATCH WHEN ALTERING FOREIGN KEYS
CAN LEAD TO MISSING TABLES

Overview
--------
If the FOREIGN_KEY_CHECKS system variable is set to 0, it is
possible to break a foreign key constraint by changing the type
or character set of the foreign key column, or by dropping the
foreign key index (without carrying out corresponding changes on
another table in the relationship).

If we subsequently set FOREIGN_KEY_CHECKS to 1 and execute ALTER
TABLE involving the COPY algorithm on such a table, the following
happens:

1) If ALTER TABLE does not contain a RENAME clause, the attempt
   to install the new version of the table in place of the old
   one will fail because the inconsistency is detected. An
   attempt to revert the partially executed alter table
   operation by restoring the old table definition will fail as
   well due to FOREIGN_KEY_CHECKS == 1. As a result, the table
   being altered will be lost.
2) If ALTER TABLE contains the RENAME clause, the inconsistency 
   will not be detected (most probably due to other bugs). But if
   an attempt to install the new version of the table fails (for 
   example, due to a failure when updating triggers associated 
   with the table), reverting the partially executed alter table 
   by restoring the old table definition will fail too. So the 
   table being altered might be lost as well.


Suggested fix
-------------
The suggested fix is to temporarily unset the option bit
representing FOREIGN_KEY_CHECKS when the old table definition is
restored while reverting the partially executed operation.
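A minimal sketch of how such an inconsistency can be created (table and column names are hypothetical):

CREATE TABLE parent (id VARCHAR(10) PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (pid VARCHAR(10),
                    FOREIGN KEY (pid) REFERENCES parent(id)) ENGINE=InnoDB;
SET FOREIGN_KEY_CHECKS = 0;
ALTER TABLE child MODIFY pid VARCHAR(10) CHARACTER SET utf8;
-- the character set no longer matches parent.id
SET FOREIGN_KEY_CHECKS = 1;
ALTER TABLE child ADD COLUMN note CHAR(1);
-- a copying ALTER; before the fix, a failure here could lose the table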
2013-06-12 09:35:33 +02:00
Murthy Narkedimilli
cf2d852653 Fixing the bug 16919882 - WRONG FSF ADDRESS IN LICENSES HEADERS 2013-06-10 22:29:41 +02:00
Murthy Narkedimilli
8325f2cf78 Bug 16919882 - WRONG FSF ADDRESS IN LICENSES HEADERS 2013-06-11 01:13:07 +05:30
Maitrayi Sabaratnam
94a708f5cf Bug#13116514 - CREATE LOGFILE GROUP INITIAL_SIZE & UNDO_BUFFER_SIZE FAILS
      
      Fixed the parser to accept a size with the suffix 'M' (for megabytes), e.g. undo_buffer_size=10M, in the 'create logfile group' command.
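      A sketch of the syntax that is now accepted (undo file name and sizes are illustrative):

      CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE = 16M
        UNDO_BUFFER_SIZE = 10M
        ENGINE = NDB;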
2013-05-24 18:17:36 +02:00
Chaithra Gopalareddy
5bf9b7d0cb Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory that is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when this pointer is accessed.
                  
Analysis:
                
1. The first time Item_func_set_user_var::check is called,
   memory is allocated for "value" to store the result
   (in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
   is called again. But this time it is called with
   "use_result_field" set to true.
   As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value"
   gets set to "result_field", with "str_length" being that of
   the result_field.
4. In the call to JOIN::cleanup, the result_field's memory gets
   freed, as it is allocated in a chunk as part of the
   temporary table needed to execute the query.
5. The next time the same statement is executed,
   "value" is set to memory which has already been freed.
   A valgrind error occurs, as "str_length" is positive
   (set at Step 3).
                  
Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
            
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well, so the same is backported here.

In the solution for Bug#11764371, we create another object of
user_var and repoint it to temp_table's field. As a result, while
deleting the allocated buffer in Step 3, since the cloned object
does not own the buffer, no deletion happens.
So at Step 5, when we execute the statement a second time, the
original object is used, and since no deletion happened,
valgrind does not complain about a dangling pointer.
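A hedged repro sketch (table, column and variable names are hypothetical); the assignment runs through Item_func_set_user_var::check on every execution:

CREATE TABLE t1 (c1 VARCHAR(10));
INSERT INTO t1 VALUES ('abc');
PREPARE stmt FROM "SELECT @v := CONVERT(c1 USING utf8) FROM t1";
EXECUTE stmt; -- Steps 1-4: "value" is allocated, then repointed at result_field
EXECUTE stmt; -- Step 5: before the fix, "value" pointed at freed memory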
2013-05-23 15:00:31 +05:30
Chaithra Gopalareddy
d0367abaff Bug#11766191:INVALID MEMORY READ IN DO_DIV_MOD WITH DOUBLY ASSIGNED VARIABLES
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS

Backporting these two fixes to 5.1.
Added a unit test for the my_decimal constructor and assignment operators.
2013-05-22 14:36:43 +05:30
Ashish Agarwal
918b6a3e7a Bug#16194302: SUPPORT FOR FLOATING-POINT SYSTEM VARIABLES
USING THE PLUGIN INTERFACE.

ISSUE: No support for floating-point plugin
       system variables.

SOLUTION: Allowing plugins to define and expose floating-point
          system variables of type double. MYSQL_SYSVAR_DOUBLE
          and MYSQL_THDVAR_DOUBLE are added.

ISSUE: The fractional part of the def, min, and max values of
       system variables is ignored.

SOLUTION: Added functions that store the raw bit
          representation of a double in the bits of an unsigned
          longlong in such a way that the binary representation
          remains the same.
2013-05-19 23:38:06 +05:30
Mattias Jonsson
23c5840d52 Bug#16447483: PARTITION PRUNING IS NOT CORRECT FOR RANGE COLUMNS
The problem was in get_partition_id_cols_range_for_endpoint
and cmp_rec_and_tuple_prune, which stepped one partition too far.

The solution was to move a small portion of logic to cmp_rec_and_tuple_prune,
to simplify both get_partition_id_cols_range_for_endpoint and
get_partition_id_cols_list_for_endpoint.
2013-05-16 11:02:39 +02:00
Shubhangi Garg
1a613f89a8 Bug#16607258 :Linker Errors Due To Inclusion Of An Implementation File
In log_event.h
      
DESCRIPTION:
Due to the inclusion of an implementation file, namely 'rpl_tblmap.cc',
in a header file, namely 'log_event.h', linker errors occur if
log_event.h is included in an application containing multiple source
files, as in the case of the Binlog API.

The Binlog API requires including log_event.h in its source files,
which leads to multiple-definition errors for functions defined
in rpl_tblmap.cc for the class 'table_mapping'.
            
FIX:
Moved the inclusion from the header file (log_event.h) into the
source files that use this header and have the MYSQL_CLIENT flag
set. The only such file in the current server repository is
mysqlbinlog.cc.
2013-05-14 22:52:42 +05:30
Neeraj Bisht
2812634b6c Bug#12328597 - MULTIPLE COUNT(DISTINCT) IN SAME SELECT FALSE
WITH COMPOSITE KEY COLUMNS

Problem:-
A SELECT query with several AGGR(DISTINCT) functions, each
referring to a different field of the same composite key,
returned an incorrect value.

Analysis:-

In a table where we have a composite key like (a,b,c),
consider a query like

select COUNT(DISTINCT b), SUM(DISTINCT a) from ....

Here we first make a list of the items inside the AGGR(DISTINCT)
functions (which is a, b), where the order of the items does not
matter, and then check whether we have a composite index whose
prefix columns match the items of the aggregation functions
(in this case (a,b,c) does).

If so, we use a loose index scan and need not perform
duplicate removal for the DISTINCT in our aggregate functions.

In our table, the loose index scan visits only the rows marked
with <-- (one row per distinct (a,b) prefix) and gets the result as:

(a,b,c)        count(distinct b)       sum(distinct a)
               treated as count(b)     treated as sum(a)
(1,1,2) <--    1                       1
(1,2,2) <--    1+1=2                   1+1=2
(1,2,3)
(2,1,2) <--    2+1=3                   1+1+2=4
(2,2,2) <--    3+1=4                   1+1+2+2=6
(2,2,3)

The result will be (4,6), but it should be (2,3).

So in this case our assumption is incorrect. Only for a
query like

select count(distinct a,b), sum(distinct a,b) from ..

can we use a loose index scan.

Solution:-
When a query has more than one AGGR(DISTINCT) function,
they must all refer to the same fields:

select count(distinct a,b), sum(distinct a,b) from ..

--> we can use a loose index scan, as both AGGR(DISTINCT) refer to the same fields a,b.

If they refer to different fields, as in

select count(distinct a), sum(distinct b) from ..

--> we do not use a loose index scan, as the AGGR(DISTINCT) refer to different fields.
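An end-to-end sketch of the wrong result, using the data from the table above (table and index names are illustrative):

CREATE TABLE t1 (a INT, b INT, c INT, KEY k1 (a, b, c));
INSERT INTO t1 VALUES (1,1,2), (1,2,2), (1,2,3), (2,1,2), (2,2,2), (2,2,3);
SELECT COUNT(DISTINCT b), SUM(DISTINCT a) FROM t1;
-- expected (2, 3); before the fix the loose index scan returned (4, 6)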
2013-05-13 17:15:25 +05:30
Chaithra Gopalareddy
4203b985e3 Bug#16119355:PREPARED STATEMENT: READ OF FREED MEMORY WITH STRING CONVERSION FUNCTIONS
Reverting the fix for Bug#16119355 in 5.1, as it needs two patches
from 5.5+ to work for a certain case
2013-05-10 19:18:21 +05:30
Jon Olav Hauglid
4f858dcd0c Bug#16779374: NEW ERROR MESSAGE ADDED TO 5.5 AFTER 5.6 GA - REUSING
NUMBER ALREADY USED BY 5.6

The problem was that the patch for Bug#13004581 added a new error
message to 5.5, causing it to use an error number already used
in 5.6 by ER_CANNOT_LOAD_FROM_TABLE_V2. This broke error message
number stability between GA releases.

This patch fixes the problem by removing the error message and
using ER_UNKNOWN_ERROR instead.
2013-05-08 12:52:12 +02:00
Chaithra Gopalareddy
ff55c9da68 Merge from 5.1 to 5.5 2013-05-07 18:00:00 +05:30
Chaithra Gopalareddy
12a26cd6e0 Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory that is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when this pointer is accessed.
            
Analysis:
            
1. The first time Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result
(in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value"
gets set to "result_field", with "str_length" being that of
the result_field.
4. In the call to JOIN::cleanup, the result_field's memory gets
freed, as it is allocated in a chunk as part of the
temporary table needed to execute the query.
5. The next time the same statement is executed,
"value" is set to memory which has already been freed.
A valgrind error occurs, as "str_length" is positive
(set at Step 3).
            
Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
      
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well, so the same is backported here.

In the solution for Bug#11764371, we create another object of
user_var and repoint it to temp_table's field. As a result, while
deleting the allocated buffer in Step 3, since the cloned object
does not own the buffer, no deletion happens.
So at Step 5, when we execute the statement a second time, the
original object is used, and since no deletion happened,
valgrind does not complain about a dangling pointer.
2013-05-07 16:08:48 +05:30
Sergey Glukhov
1414a0ed7f 5.1 -> 5.5 merge 2013-05-07 13:14:01 +04:00
Sergey Glukhov
a250331593 Bug#16095534 CRASH: PREPARED STATEMENT CRASHES IN ITEM_BOOL_FUNC2::FIX_LENGTH_AND_DEC
The problem happened due to a broken left expression in the
Item_in_optimizer object. In this bug the left expression is a
runtime-created Item_outer_ref item, which is deleted at the end of
the statement, so one of the Item_in_optimizer arguments becomes
invalid on re-execution. The fix is to use real_item() instead of the
original left expression. Note: it feels a bit weird that after
preparing, the field is directly part of the generated Item_func_eq,
whereas in execution it is replaced with an Item_outer_ref wrapper
object.
2013-05-07 13:10:58 +04:00
Jon Olav Hauglid
db99fd7450 Bug#16757869: INNODB: POSSIBLE REGRESSION IN 5.5.31, BUG#16004999
The problem was that if UPDATE with subselect caused a
deadlock inside InnoDB, this deadlock was not properly
handled by the SQL layer. This meant that the SQL layer
would try to unlock the row after InnoDB had rolled
back the transaction. This caused an assertion inside
InnoDB.
  
This patch fixes the problem by checking for errors
reported by SQL_SELECT::skip_record() and not calling
unlock_row() if any errors have been reported.

This bug is similar to Bug#13586591, but for UPDATE
rather than DELETE. Similar issues in filesort/opt_range/
sql_select will be investigated and handled in the scope
of Bug#16767929.
2013-05-06 15:01:57 +02:00
Neeraj Bisht
84421e8e6c BUG#16222245 - CRASH WITH EXPLAIN FOR A QUERY WITH LOOSE SCAN FOR
GROUP BY, MYISAM 

Merge fix for Bug#16222245 from mysql-5.1 to mysql-5.5
2013-04-30 22:46:37 +05:30
Neeraj Bisht
ed694b0c09 BUG#16222245 - CRASH WITH EXPLAIN FOR A QUERY WITH LOOSE SCAN FOR
GROUP BY, MYISAM 

Problem:-
A query using the loose index scan optimization with MIN()
causes a segmentation fault when the table row length is
less than the key_length.

Analysis:

While using a loose index scan for MIN(), we call key_copy() to copy
the key data from the record.
This function uses a temporary record buffer to store the key data
from the record buffer. But when the key length is greater
than the buffer length, this causes a segmentation fault.


Solution:
Provide a buffer large enough to store the key record.
2013-04-30 22:38:34 +05:30
Bill Qu
0424897cff Bug #13004581 BLACKHOLE BINARY LOG WITH ROW IGNORES UPDATE AND DELETE STATEMENTS
When logging to the binary log in row format, updates and deletes
to a BLACKHOLE engine table are skipped.

It is impossible to log updates and deletes to a BLACKHOLE engine
table in row format, as no row events can be generated in these
cases. After the fix, a warning is generated for UPDATE/DELETE
statements that modify a BLACKHOLE table, as such row events are
not logged in row format.
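A hedged illustration (table name is hypothetical):

CREATE TABLE bh (a INT) ENGINE=BLACKHOLE;
SET SESSION binlog_format = 'ROW';
UPDATE bh SET a = 1; -- no row event can be generated; after the fix a warning is raised
DELETE FROM bh;      -- likewise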
2013-04-27 16:04:54 +08:00
Neeraj Bisht
bae6667d86 Bug#16073689 : CRASH IN ITEM_FUNC_MATCH::INIT_SEARCH
Problem:
A query like

select 1 from .. order by match .. against ...;

causes a debug assert failure.

Analysis:
In union-type queries like

(select * from .. order by a) order by b;
or
(select * from .. order by a) union (select * from .. order by b);

we skip resolving 'order by a' in the first query, and both
'order by a' and 'order by b' in the second.


This means that when such an order by contains an Item_func_match
item, we skip resolving it.
But we maintain a ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of that list, we call Item_func_match::init_search(), which causes a
debug assert as the item is not resolved.


Solution:
We skip execution if the item is not fixed, and we do not fix the
index (Item_func_match::fix_index()) for items on which
Item_func_match::fix_field() has not been called, so that later
changes can check the dependency on fix_field.
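A hedged sketch of the asserting query shape (table and column names are hypothetical):

CREATE TABLE t1 (a VARCHAR(100), FULLTEXT KEY (a)) ENGINE=MyISAM;
(SELECT 1 FROM t1) ORDER BY MATCH (a) AGAINST ('hello');
-- the trailing ORDER BY is skipped during resolution, so before the fix
-- Item_func_match::init_search() was called on an unresolved item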
2013-04-20 12:36:11 +05:30
Neeraj Bisht
c066c30822 Bug#16073689 : CRASH IN ITEM_FUNC_MATCH::INIT_SEARCH
Problem:
A query like

select 1 from .. order by match .. against ...;

causes a debug assert failure.

Analysis:
In union-type queries like

(select * from .. order by a) order by b;
or
(select * from .. order by a) union (select * from .. order by b);

we skip resolving 'order by a' in the first query, and both
'order by a' and 'order by b' in the second.


This means that when such an order by contains an Item_func_match
item, we skip resolving it.
But we maintain a ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of that list, we call Item_func_match::init_search(), which causes a
debug assert as the item is not resolved.


Solution:
We skip execution if the item is not fixed, and we do not fix the
index (Item_func_match::fix_index()) for items on which
Item_func_match::fix_field() has not been called, so that later
changes can check the dependency on fix_field.
2013-04-20 12:28:22 +05:30
Chaithra Gopalareddy
fcb0ecfae3 Merge from 5.1 to 5.5 2013-04-14 08:09:56 +05:30
Chaithra Gopalareddy
4db726c0fa Bug#16347426:ASSERTION FAILED: (SELECT_INSERT &&
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
      
Problem:
The context info of the select query gets corrupted when a
group_concat with its own order by appears in the order by
clause of the select query. As a result, the server crashes
with an assert.
      
Analysis:
While parsing the order by for group_concat, it is presumed that
it always appears before the actual order by of the
select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
onto which select->order_list is copied. Once this is done,
it empties select->order_list.
In the case presented in the bug page, as the order by is already
parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query
and creates a fake_lex_unit, which results in the corruption of
the context info.
      
Solution:
Make group_concat's order by parsing independent of the select query.
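A hedged sketch of the query shape described in the analysis (table and column names are hypothetical):

CREATE TABLE t1 (a INT, b INT);
SELECT a FROM t1
ORDER BY GROUP_CONCAT(b ORDER BY b);
-- group_concat's own ORDER BY is met while the query's ORDER BY is
-- being parsed; before the fix this corrupted the name-resolution context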
2013-04-14 07:30:49 +05:30
Jorgen Loland
d4dcaea072 Bug#16540042: WRONG QUERY RESULT WHEN USING RANGE OVER
PARTIAL INDEX

Consider the following table definition:

CREATE TABLE t (
  my_col CHAR(10),
  ...
  INDEX my_idx (my_col(1))
)

The my_idx index is not able to distinguish between rows with
equal first-character my_col-values (e.g. "f", "foo", "fee").

Prior to this CS, the range optimizer would translate

"WHERE my_col NOT IN ('f', 'h')" into (optimizer trace syntax)

"ranges": [
  "NULL < my_col < f",
  "f < my_col"
]

But this was not correct because the rows with values "foo" 
and "fee" would not belong to any of those ranges. However, the
predicate "my_col != 'f' AND my_col != 'h'" would translate
to 

"ranges": [
  "NULL < my_col"
]

because get_mm_leaf() changes from "<" to "<=" for partial
keyparts. This CS changes the range optimizer implementation 
for NOT IN to behave like a conjunction of NOT EQUAL: it 
replaces "<" with "<=" for all but the first range when the
keypart is partial.
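Under the old ranges, a query like this would silently lose rows (data values are illustrative):

CREATE TABLE t (my_col CHAR(10), INDEX my_idx (my_col(1)));
INSERT INTO t VALUES ('f'), ('foo'), ('fee'), ('h');
SELECT * FROM t WHERE my_col NOT IN ('f', 'h');
-- must return 'foo' and 'fee'; on the one-character prefix index both
-- look like 'f', so neither of the old ranges matched them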
2013-04-12 09:39:56 +02:00
Raghav Kapoor
b170dff8ba BUG#15978766 - TEST VALGRIND_REPORT FAILS INNODB TESTS
BACKGROUND:
The testcase i_innodb.innodb_bug14036214, when run under valgrind,
leaks memory.

ANALYSIS:
In the code path of mysql_update, a temporary file is opened
using open_cached_file().
When an error occurred in that code path, this temporary
file was not closed, since the call to close_cached_file() was
missing.
This problem exists in 5.5 but not in 5.6 and
trunk.
This is because in 5.6 and trunk, when we issue the update
statement in the test case, it does not take the same code path
as in 5.5. The code path is different because a different plan
is chosen by the optimizer.
See Bug#14036214 for details.
However, the problem can still be examined in 5.6 and trunk
by code inspection.

FIX:
The file opened by open_cached_file() is now closed by calling
close_cached_file() when an error occurs, so that it no longer
results in a memory leak.
2013-04-08 15:25:45 +05:30
Tor Didriksen
3b9185793d merge 5.1 => 5.5 2013-04-02 16:20:49 +02:00
Tor Didriksen
559af20ca4 Bug#14700180 CRASH IN COPY_FUNCS
This is a backport of the fix for
Bug#13966809 CRASH IN COPY_FUNCS WHEN GROUPING BY OUTER QUERY BLOB FIELD IN SUBQUERY
2013-04-02 16:05:10 +02:00
Chaithra Gopalareddy
260fce8f8c Merge from 5.1 to 5.5 2013-03-31 06:52:16 +05:30
Chaithra Gopalareddy
94346a8b6c Bug #16347343 : CRASH, GROUP_CONCAT, DERIVED TABLES
Problem:
A select query inside a group_concat function having an 
outer reference results in a crash.
      
Analysis:
In Item_func_group_concat::add, we do not check whether the
return value of get_tmp_table_field can be NULL for
a non-const item. This can happen for a query with an
outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, in the call to const_item() from
Item_func_group_concat::add, it returns false, and we go on
to use the field without checking whether it is NULL, which
results in the crash.
get_tmp_table_field does not return NULL for items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
     
Solution:
Check the return value of get_tmp_table_field before we
access the field contents.
2013-03-31 06:48:30 +05:30
Chaithra Gopalareddy
4a3708a4f6 Bug#14261010: ON DUPLICATE KEY UPDATE CRASHES THE SERVER
Problem:
An insert with 'on duplicate key update' on a view
crashes the server.
      
Analysis:
During an insert into a view, we do the following:
      
For insert fields and values -
1. Resolve insert values.
2. Resolve insert fields.
3. Check if the fields and values are all from a 
   single table of a view in case of INSERT VALUES.
   Do not check the same in case of INSERT SELECT,
   as the values can be read from different table than
   that of the view.
      
For the update fields (if DUP UPDATE is used)
1. Create a name resolution context with 'table_list' only.
2. Resolve update fields in this context.
3. Check if update fields and values are from the same
   table as the insert fields.
4. Get the next name resolution context. Concatenate this
   with the previous one.
5. Resolve update values in this context as we can refer
   to other tables in the values clause.
      
Note that at step 3 (of the update fields), we check the
'used_tables map' of the update values without resolving them
first. Hence the crash.
      
Fix:
At step 3, do not pass the update values when checking whether this
is a single-table view update, as the update values can refer to
other tables.
      
Code has been re-organized to function like check_insert_fields.
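A hedged sketch of the statement shape that exercises this path (table, view and column names are hypothetical):

CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
CREATE VIEW v1 AS SELECT a, b FROM t1;
INSERT INTO v1 (a, b) VALUES (1, 1)
  ON DUPLICATE KEY UPDATE b = VALUES(b) + 1;
-- before the fix, the unresolved update values were passed to the
-- single-table-view check, which read their used_tables map and crashed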
2013-03-30 19:24:54 +05:30
Venkatesh Duggirala
6f3e77e516 Bug#15948818-SEMI-SYNC ENABLED MASTER CRASHES WHEN EVENT
SCHEDULER DROPS EVENTS

Problem: On a semi-sync enabled server (master/slave),
if the event scheduler drops an event after completion,
the server crashes.

Analysis: If an event is created with the "ON COMPLETION
NOT PRESERVE" clause, the event scheduler deletes the event
upon its completion (expiration) and the thread object
is destroyed. In the destructor of the thread object, the
mysys_var member is explicitly set to zero. Later, in the
same destructor call (same execution path), on a semi-sync
enabled server, when cleanup is invoked, the THD::mysys_var
member is accessed by the THD::enter_cond() function, which
causes the server to crash.

Fix: mysys_var should not be explicitly set to zero; doing so
is also not required.
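A hedged sketch of the triggering scenario on a semi-sync enabled master (event and table names are illustrative; requires event_scheduler=ON):

CREATE TABLE t1 (a INT);
CREATE EVENT ev1
  ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 SECOND
  ON COMPLETION NOT PRESERVE
  DO INSERT INTO t1 VALUES (1);
-- once ev1 fires, the scheduler drops it; before the fix the cleanup
-- path then accessed the zeroed THD::mysys_var and crashed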
2013-03-29 09:28:31 +05:30
Georgi Kodinov
7c2a140911 merge 2013-03-28 17:41:22 +02:00
Georgi Kodinov
2739ee3848 Addendum #1 to the fix for bug #16451878 : GEOMETRY QUERY CRASHES SERVER
Fixed the get_data_size() methods for multi-point features to check
properly for the end of their respective data arrays.
Extended the point checking function to take a third, optional
argument, so that cases where there is additional data in each array
element (besides the point data itself) can be covered by the helper
function.
Fixed the 3 cases where such an offset was present to use the proper
checking helper function.
Test cases added.
Fixed review comments.
2013-03-28 17:37:29 +02:00
Nisha Gopalakrishnan
65b3449391 Merge from 5.1 to 5.5 2013-03-28 19:17:28 +05:30
Nisha Gopalakrishnan
0de3047952 BUG#11753852: IF() VALUES ARE EVALUATED DIFFERENTLY IN A
REGULAR SQL VS PREPARED STATEMENT

Analysis:
---------

When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.

Consider the example:

SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1    | sqlif    |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+

Executing a prepared statement where the parameters are
supplied:

PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ?        | ps_if_fail |
+----------+------------+
| 0.038687 | 1          |
+----------+------------+
1 row in set (0.00 sec)

In a regular statement, or when executing a prepared
statement without passing parameters, the decimal
precision is set for the user variable of type string.
The comparison function used for evaluation considered
the precision while comparing the values.

But when executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one that compared the absolute values.

Fix:
----

The fix is to set the 'decimals' field of Item_param to the
default value, which is nothing but the maximum number of
decimals (NOT_FIXED_DEC). This is set for cases where
strings are converted to numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
2013-03-28 19:11:26 +05:30
Sujatha Sivakumar
5c6611b546 Merge from mysql-5.1 to mysql-5.5 2013-03-28 14:18:51 +05:30
Sujatha Sivakumar
c78c1fe52d Bug#14324766:PARTIALLY WRITTEN INSERT STATEMENT IN BINLOG
NO ERRORS REPORTED
      
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: 0 can also indicate an
error, and that error is ignored. Other callers of my_b_fill do
not check for errors either: my_b_copy_to_file, and maybe my_b_gets.
      
Fix:
===
An error handler is already present to check the "cache"
error that is reported during the "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
When the cache read fails during a my_b_fill() call,
info->error= -1 is set. Hence a check for "info->error"
is added for the two callers above upon their return.
2013-03-28 14:14:39 +05:30
Georgi Kodinov
3c358724d6 merge 5.1->5.5 2013-03-27 16:06:33 +02:00
Georgi Kodinov
0f31bfeaab Bug #16451878: GEOMETRY QUERY CRASHES SERVER
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (where
it could overflow) and only then comparing it to the
number of bytes available.
This multiplication can overflow (e.g. a huge element count
times the per-element size can wrap around to a small value)
and effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
2013-03-27 16:03:00 +02:00
Nuno Carvalho
accc5d9274 BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
Merge from mysql-5.1 into mysql-5.5.
2013-03-27 11:22:25 +00:00
Nuno Carvalho
daa3ab6ff8 BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
Fixed possible uninitialized variable.
2013-03-27 11:19:29 +00:00
Sujatha Sivakumar
ad14564344 Merge from mysql-5.1 to mysql-5.5 2013-03-27 11:59:40 +05:30
Sujatha Sivakumar
5745b67e02 Bug#11829838: ALTER TABLE NOT BINLOGGED WITH
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
      
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the
server is started with "--binlog-ignore-db some database" and a
'fully qualified' table name is used in an ALTER TABLE statement
altering a table different from the current database context.
      
Analysis:
========
The above mentioned problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the
current default database becomes "NULL", none of the
statements will be binlogged.

The current behaviour is such that if the user has specified
restrictions on which databases need to be replicated and the
default db is not specified, then do not replicate.
This means that "NULL" is considered equivalent to
everything (default db = NULL implies: ignore, don't log the
statement).
      
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
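A hedged illustration (database and table names are illustrative), on a server started with --binlog-ignore-db=ignored_db:

-- no USE statement has been issued, so the default database is NULL
ALTER TABLE db1.t1 ADD COLUMN c2 INT;
-- before the fix, this statement was silently left out of the binlog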
2013-03-27 11:53:01 +05:30
Andrei Elkin
fd434bca5f merge from 5.1 2013-03-26 20:52:01 +02:00
Andrei Elkin
0a31d4f411 Bug#16541422 LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
When logging the first Query event referring to a user variable, the
slave missed logging the user variable.
It appears that at execution of a User_var event the slave applier
considered the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the
one the thread holds at User_var execution and the one the thread
sees when applying the Query. While the two are naturally different
in the regular execution branch (as the two computational events are
separated into individual events), in the deferred applying case the
User_var execution effectively belongs to its Query's processing.

Fixed by storing the query id from User_var parsing time (where the
decision to defer is taken) and temporarily substituting it for the
actual query id at User_var execution time (along with its query).
This manipulation mimics the behaviour of the regular applying branch.
2013-03-26 19:24:01 +02:00
Manish Kumar
142fbb9eaa BUG#16438800 - SLAVE_MAX_ALLOWED_PACKET NOT HONORED ON SLAVE IO CONNECT
Problem - When the slave was disconnected from the master, under certain
          conditions, upon reconnect it reported that it received a
          packet larger than slave_max_allowed_packet, which caused
          replication to stop.
 
Analysis - The reason for this failure is that on reconnect
          the slave sets max_allowed_packet from the master's mi->mysql
          object, which keeps max_allowed_packet at 1MB. This causes the
          slave to report such an error on receiving a packet bigger than 1MB.
          START SLAVE on the slave fixes the problem, since it restarts the
          slave threads, which initialize max_allowed_packet to
          slave_max_allowed_packet.
      
Fix - The problem is fixed by some code refactoring and the introduction
      of a new function which updates max_allowed_packet both for the THD
      object of the slave thread and for mysql->options.max_allowed_packet.
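The workaround noted in the analysis, run on the slave (a hedged illustration):

STOP SLAVE;
START SLAVE;
-- restarting the slave threads re-initializes max_allowed_packet
-- from slave_max_allowed_packet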
2013-03-25 11:27:12 +05:30
Nirbhay Choubey
f5b4c8f1e5 Bug#16500013 : ADD VERSION CHECK TO MYSQL_UPGRADE
(Based on Sinisa's patch)

Added a version checking facility to mysql_upgrade.
The versions used for checking are the version of the
server that mysql_upgrade is going to upgrade and the
server version that mysql_upgrade was built/distributed
with.
Also added an option '--version-check' to enable/disable
the version check.
2013-03-21 22:51:40 +05:30
Annamalai Gurusami
63dc91d7da Bug #16051728 SERVER CRASHES IN ADD_IDENTIFIER ON CONCURRENT ALTER TABLE AND
SHOW ENGINE INNOD

Problem:

The purpose of explain_filename() is to provide useful additional
information regarding the partitions, given the filename. This
function returned an error when it was not able to parse the given
filename. For example, within InnoDB, temporary files are created
with the #sql- prefix, and this function was not able to parse them
correctly.

Solution:

It is not an error if explain_filename() cannot parse the given
filename. If there is no partition information to explain, silently
return from the function.

rb#1940 approved by mattiasj
2013-03-21 11:40:43 +05:30