Commit graph

2748 commits

Author SHA1 Message Date
Knut Anders Hatlen
95825fa28a Bug#21682356: STOP INJECTING DATA ITEMS IN AN ERROR MESSAGE
GENERATED BY THE EXP() FUNCTION

When generating the error message for numeric overflow, pass a flag to
Item::print() that prevents it from expanding constant expressions and
parameters to the values they evaluate to.

For consistency, also pass the flag to Item::print() when
Item_func_spatial_collection::fix_length_and_dec() generates an error
message. It doesn't make any difference at the moment, since constant
expressions haven't been evaluated yet when this function is called.
2016-01-17 20:28:00 +01:00
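
A minimal standalone C++ sketch of the idea behind the commit above; the ConstExpr struct, print_expr() helper and its flag are hypothetical stand-ins, not the server's Item::print() API. The point is only that the error text is built from the expression as written, not from the value it evaluates to.

  #include <iostream>
  #include <string>

  // Hypothetical stand-in for a constant expression such as "exp(?)".
  struct ConstExpr {
    std::string source_text;   // how the user wrote it, e.g. "exp(?)"
    double      value;         // what it evaluates to (here: an overflowed double)
  };

  // Hypothetical print helper: the flag decides whether constants and
  // parameters are expanded to their evaluated values or kept as written.
  std::string print_expr(const ConstExpr &e, bool expand_constants) {
    return expand_constants ? std::to_string(e.value) : e.source_text;
  }

  int main() {
    ConstExpr e{"exp(?)", 1e308};
    // Error message built without expanding the constant, mirroring the fix.
    std::cout << "ERROR: DOUBLE value is out of range in '"
              << print_expr(e, /*expand_constants=*/false) << "'\n";
  }
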
Chaithra Gopalareddy
a7fb5aecfd Bug#19941403: FATAL_SIGNAL(SIG 6) IN BUILD_EQUAL_ITEMS_FOR_COND | IN SQL/SQL_OPTIMIZER.CC:1657
Problem:
At the end of the first execution, select_lex->prep_where points to
a runtime-created object (a temporary table field). As a result, the
server exits while trying to access an invalid pointer during the
second execution.

Analysis:
While optimizing the join conditions for the query, after the
permanent transformation, the optimizer makes a copy of the new
where conditions in select_lex->prep_where. "prep_where" is what
is used as the "where condition" for the query at the start of execution.
For the query in question, the "where" condition actually points
to a field in the temporary table. As a result, for the second
execution the pointer is no longer valid, resulting in a server exit.

Fix:
At the end of the first execution, select_lex->where holds the
original item of the where condition.
Make prep_where the new place to which the original item of
select->where is rolled back.
Fixed in 5.7 with WL#7082 - Move permanent transformations from
JOIN::optimize to JOIN::prepare.

Patch for 5.5 includes the following backports from 5.6:

Bugfix for Bug12603141 - This makes the first execute statement in the testcase
pass in 5.5.

However it was noted later, in Bug16163596, that the above bugfix needed to
be modified. Although Bug16163596 is reproducible only with the changes done for
Bug12582849, we have decided to include the fix.

Considering that Bug12582849 is related to Bug12603141, its fix is
also included here. However, this results in Bug16317817, Bug16317685 and
Bug16739050, so the fixes for these three bugs are also part of this patch.
2015-11-20 12:30:15 +05:30
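
A toy C++ sketch of the save/restore pattern that the fix above describes; the SelectLex struct and field names here only mimic select_lex->where / prep_where and are not the server's classes.

  #include <cassert>
  #include <memory>
  #include <string>

  // Hypothetical sketch: prep_where keeps the permanent original condition,
  // where is whatever the optimizer rewrote it into for this execution.
  struct SelectLex {
    std::string *where = nullptr;       // condition used during execution
    std::string *prep_where = nullptr;  // permanent copy of the original condition
  };

  int main() {
    auto original = std::make_unique<std::string>("t1.a = t2.b");
    SelectLex sl{original.get(), original.get()};

    {
      // First execution: the optimizer rewrites the condition into an object
      // that lives only for this execution (think: a temporary table field).
      auto runtime_only = std::make_unique<std::string>("tmp_field = t2.b");
      sl.where = runtime_only.get();

      // End of execution: roll back to the original item saved in prep_where
      // instead of leaving a pointer into soon-to-be-freed runtime memory.
      sl.where = sl.prep_where;
    } // runtime_only is freed here

    // Second execution starts from the original condition, not a dangling pointer.
    assert(*sl.where == "t1.a = t2.b");
  }
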
Sreeharsha Ramanavarapu
5e9a50efc3 Bug #22023218: MYSQL 5.5: MAIN.FULLTEXT HAS VALGRIND ISSUES.
Issue
-----
This problem occurs when varchar columns are used in an
internal temporary table. The type of the field is set
incorrectly to the generic FIELD_NORMAL type. This in turn
results in an inaccurate calculation of the record length.
Valgrind issues occur since some bytes have not been
initialized.

Fix
----
While creating the temporary table, the type of the field
needs to be set to FIELD_VARCHAR. This allows MyISAM
to calculate the record length accurately.

This fix is a backport of BUG#13350136.
2015-11-03 07:43:54 +05:30
Sreeharsha Ramanavarapu
415faa122b Bug #19434916: FATAL_SIGNAL IN ADD_KEY_EQUAL_FIELDS() WITH
UPDATE VIEW USING OUTER SUBQUERY

Issue:
-----
While resolving a column which refers to a table/view in an
outer query, its respective item object is marked with the
outer query's select_lex object. But when the column refers
to a view or is part of a subquery in the
HAVING clause, an Item_ref object is created. While the
reference to the outer query is stored by the Item_ref
object, the same is not stored in its real_item.

This creates a problem with the IN-TO-EXISTS optimization.
When there is an index over the column in the inner query,
it will be considered since the column's real_item object
will be mistaken for a local field. This leads to a
crash.

SOLUTION:
---------
Under the current design, the only way to fix this issue is
to check reginfo.join_tab for a NULL value. If it is NULL, the
key use should not be considered for the query.

The testcase and comments added as part of the fix for
Bug#17766653 have been backported.
2015-10-01 07:45:27 +05:30
Ajo Robert
552b1c8ab6 Merge branch 'mysql-5.1' into mysql-5.5 2015-08-07 16:27:48 +05:30
Ajo Robert
f3dce250f4 Bug #20760261 mysqld crashed in materialized_cursor::
send_result_set_metadata

Analysis
--------
A cursor inside a trigger accessing a NEW/OLD row leads to a server exit.

The reason for the bug was that the implementation of the function
create_tmp_table() did not consider Item::TRIGGER_FIELD_ITEM
as a possible alternative for the type of class being instantiated.
This resulted in a mismatch between the number of columns
in the result list and the temp table definition. This mismatch leads
to the failure of the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata
in debug mode.

Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as a valid
type while creating fields.
2015-08-07 16:26:10 +05:30
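
A self-contained C++ sketch of the column-count mismatch described above; ItemKind and count_tmp_table_columns() are hypothetical toy names, not the server's create_tmp_table() code.

  #include <cassert>
  #include <cstddef>
  #include <vector>

  // Hypothetical item kinds; TRIGGER_FIELD_ITEM stands in for Item::TRIGGER_FIELD_ITEM.
  enum ItemKind { FIELD_ITEM, FUNC_ITEM, TRIGGER_FIELD_ITEM };

  // Sketch of the idea behind create_tmp_table(): one column per supported item.
  // Before the fix, TRIGGER_FIELD_ITEM was not treated as a column-producing type.
  std::size_t count_tmp_table_columns(const std::vector<ItemKind> &items,
                                      bool include_trigger_fields) {
    std::size_t columns = 0;
    for (ItemKind kind : items) {
      switch (kind) {
        case FIELD_ITEM:
        case FUNC_ITEM:
          ++columns;
          break;
        case TRIGGER_FIELD_ITEM:
          if (include_trigger_fields) ++columns;  // the fix: treat it as a valid type
          break;
      }
    }
    return columns;
  }

  int main() {
    // A cursor inside a trigger selecting NEW.col produces a trigger field item.
    std::vector<ItemKind> select_list{FIELD_ITEM, TRIGGER_FIELD_ITEM};

    // Before the fix: 1 column vs. 2 result items, so the metadata assertion fires.
    assert(count_tmp_table_columns(select_list, false) != select_list.size());
    // After the fix: the column count matches the result list.
    assert(count_tmp_table_columns(select_list, true) == select_list.size());
  }
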
Mithun C Y
c20911dbe0 Merge branch 'mysql-5.1' into mysql-5.5 2015-08-04 12:28:56 +05:30
Mithun C Y
c28626d0af Bug #21096444: MYSQL IS TRYING TO PERFORM A CONSISTENT READ BUT THE READ VIEW IS NOT ASSIGNED!
Issue: A SELECT ... FOR UPDATE subquery in a HAVING clause
resulted in a deadlock, and its transaction was rolled back
by InnoDB. The val_XXX interfaces do not handle errors and
do not propagate errors to their callers. sub_select
did not see this error when it called
evaluate_join_record and later made a call to InnoDB.
As the transaction had been rolled back, InnoDB asserted.

Fix: Now evaluate_join_record checks whether an error has been
reported and, if so, returns it to its caller.
2015-08-04 11:45:02 +05:30
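
A toy C++ sketch of the error-propagation pattern in the fix above; Session, val_int_like() and evaluate_join_record_like() are hypothetical names that only mirror the roles of THD, val_XXX and evaluate_join_record.

  #include <iostream>

  // val_XXX-style functions return a value, not a status, so the only way the
  // caller can notice a failure is an error flag on the session object.
  struct Session { bool error_reported = false; };

  long long val_int_like(Session &session, bool fail) {
    if (fail) {
      session.error_reported = true;  // e.g. the transaction was rolled back
      return 0;                       // the return value alone cannot signal the error
    }
    return 42;
  }

  // Mirrors the fix: after evaluating a record, check whether an error was
  // reported and propagate it instead of calling into the storage engine again.
  int evaluate_join_record_like(Session &session, bool fail) {
    (void)val_int_like(session, fail);
    if (session.error_reported) return 1;  // stop: do not touch the engine again
    return 0;
  }

  int main() {
    Session session;
    std::cout << "ok run: " << evaluate_join_record_like(session, false) << "\n";
    std::cout << "failed run: " << evaluate_join_record_like(session, true) << "\n";
  }
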
Chaithra Gopalareddy
044060fe16 Bug #20730220 : BACKPORT BUG#19880368 TO 5.1
Backport from mysql-5.5 to mysql-5.1

Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY

Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.

Solution:
Change the ref_by address for an ORDER BY field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
2015-03-23 14:31:28 +05:30
Chaithra Gopalareddy
a2cd622f3a Bug #20730129: BACKPORT BUG#19612819 TO 5.1
Backport from mysql-5.5 to mysql-5.1

Bug #19612819 :  FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0

Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder uses the real_item. As a result the
server fails later with an assert.

Solution:
Do not use real_item to get the temp table field.
Instead, use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
2015-03-23 12:05:55 +05:30
Chaithra Gopalareddy
674367afd5 Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY
Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.

Solution:
Change the ref_by address for an ORDER BY field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
2015-02-20 11:04:43 +05:30
Chaithra Gopalareddy
1a5b8122b6 Bug #19612819 : FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0
Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder uses the real_item. As a result the
server fails later with an assert.

Solution:
Do not use real_item to get the temp table field.
Instead, use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
2015-02-18 22:28:03 +05:30
Nisha Gopalakrishnan
5a587b6d28 BUG#11747548: DETECT ORPHAN TEMP-POOL FILES, AND HANDLE GRACEFULLY
Analysis:
--------
Certain queries using intrinsic temporary tables may fail due to
name clashes in the file name for the temporary table when
'temp-pool' is enabled.

'temp-pool' tries to reduce the number of different filenames used for
temp tables by allocating them from a small pool, in order to avoid
problems in the Linux kernel, by using a three-part filename:
<tmp_file_prefix>_<pid>_<temp_pool_slot_num>.
The bit corresponding to the temp_pool_slot_num is set in the bit
map maintained for the temp-pool when it is used for the file name.
It is cleared after the temp table is deleted, for re-use.

The 'create_tmp_table()' function call, under an error condition,
tries to clear the same bit twice by calling 'free_tmp_table()'
and 'bitmap_lock_clear_bit()'. 'free_tmp_table()' does a delete
of the table/file and clears the bit by calling the same function
'bitmap_lock_clear_bit()'.

The issue reported can be triggered under the timing window mentioned
below, for an error condition while creating the temp table:
a) THD1: Due to an error, clears the temp pool slot number used by it
   by calling 'free_tmp_table'.
b) THD2: Is in the process of creating a temp table, using an unused
   slot number in the bit map.
c) THD1: Clears the slot number used by THD2 by calling
  'bitmap_lock_clear_bit()' after completing the call to 'free_tmp_table'.
d) THD3: Uses the slot number used by THD2 since it was freed by THD1.
   When it tries to create the temp file using that slot number,
   an error is reported since it is currently in use by THD2.
   [The error: Error 'Can't create/write to file
   '/tmp/#sql_277e_0.MYD' (Errcode: 17)']

Another issue which may occur in 5.6 and trunk is that
when opening the temporary table fails after its creation (due to a ulimit
or OOM error), the file is not deleted. Thus further attempts to use
the same slot number in the 'temp-pool' result in failure.

Fix:
---
a) Under the error condition, calling the 'bitmap_lock_clear_bit()'
   function to clear the bit is unnecessary since 'free_tmp_table()'
   deletes the table/file and clears the bit. Hence the
   redundant call to 'bitmap_lock_clear_bit()' in 'create_tmp_table()'
   has been removed. This prevents the timing window under which the
   reported issue can be seen.

b) If opening the temporary table fails, then the file is deleted,
   thus allowing the temp-pool slot number to be utilized for the
   subsequent temporary table creation.

c) Also, if the attempt to create the temp table fails because it already
   exists, the temp-pool slot for it is marked as used, to prevent
   the problem from re-appearing.
2014-11-24 20:24:18 +05:30
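
A single-threaded C++ walk-through of the timing window above, using a plain std::bitset as a stand-in for the temp-pool bit map; this is only an illustration of the race, not the server's bitmap code, and a real pool would of course guard the bitmap with a mutex.

  #include <bitset>
  #include <cassert>

  int main() {
    std::bitset<64> temp_pool;        // one bit per slot used in a temp file name

    temp_pool.set(0);                 // THD1 grabs slot 0 for its temporary table

    // THD1 hits an error: free_tmp_table() deletes the file and clears the bit.
    temp_pool.reset(0);

    // THD2 now legitimately allocates slot 0 for its own temp table.
    temp_pool.set(0);

    // Pre-fix behaviour: THD1's error path also calls bitmap_lock_clear_bit(),
    // clearing a bit that no longer belongs to it.
    temp_pool.reset(0);               // redundant second clear: slot looks free

    // THD3 picks the "free" slot 0 and fails to create the file, because it is
    // still in use by THD2 ("Can't create/write to file ... (Errcode: 17)").
    assert(!temp_pool.test(0));       // the bookkeeping no longer matches reality

    // The fix removes the redundant clear, so the slot stays marked as used
    // until THD2 really releases it.
  }
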
mithun
11f5d757d3 Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*)
WHERE ONE OF SELECT* IS DISTINCT FAILS.
ISSUE:
------
There are 2 issues related to explain of a union.
1. If we have a subquery with a union of selects, and one of
   the selects needs a temp table to materialize its results,
   then it will replace its query structure with a simple
   select from the temporary table. Trying to display this new
   internal temporary table scan resulted in a crash. But to
   display the query plan, we should save the original
   query structure.
2. Multiple executions of a prepared explain statement which
   has a union of subqueries resulted in a crash. If we have
   constant subqueries, the fake select used in the union operation
   will be evaluated once before using it for explain.
   During the first execution we set the fake select options to
   SELECT_DESCRIBE, but did not reset them after the explain.
   Hence, during the next execution of the prepared statement,
   the first-time evaluation of the fake select still had
   SELECT_DESCRIBE in its select options. This resulted in improperly
   initialized data structures and a crash.

SOLUTION:
---------
1. If called by explain, we now save the original query
   structure, and this is used for displaying.
2. Reset the fake select options after it is called for
   explain of the union.
2014-04-28 21:07:27 +05:30
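
A minimal C++ sketch of the second point of the fix above (save the options, set the describe flag for explain, restore afterwards); FakeSelect, explain_union() and the flag value are hypothetical, only SELECT_DESCRIBE is taken from the commit message.

  #include <cassert>
  #include <cstdint>

  // Stand-in flag value; an options bitmask shared across executions of a
  // prepared statement must be restored after EXPLAIN.
  constexpr std::uint64_t SELECT_DESCRIBE_FLAG = 1u << 0;

  struct FakeSelect { std::uint64_t options = 0; };

  void explain_union(FakeSelect &fake) {
    std::uint64_t saved = fake.options;
    fake.options |= SELECT_DESCRIBE_FLAG;   // evaluate the fake select for EXPLAIN
    // ... produce the query plan ...
    fake.options = saved;                   // the fix: reset after the explain
  }

  int main() {
    FakeSelect fake;
    explain_union(fake);                    // EXPLAIN of the prepared statement
    // The next execution of the prepared statement must not see SELECT_DESCRIBE,
    // otherwise its data structures are initialized for explain, not execution.
    assert((fake.options & SELECT_DESCRIBE_FLAG) == 0);
  }
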
Chaithra Gopalareddy
5bf9b7d0cb Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory which is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when accessing this pointer.
                  
Analysis:
                
1. The first time Item_func_set_user_var::check is called,
   memory is allocated for "value" to store the result
   (in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
   is called again. But this time it is called with
   "use_result_field" set to true.
   As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value"
   gets set to "result_field", with "str_length" being that of
   result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
   freed, as it is allocated in a chunk as part of the
   temporary table which is needed to execute the query.
5. The next time execute of the same statement is called,
   "value" will be set to memory which is already freed.
   A valgrind error occurs, as "str_length" is positive
   (set at step 3).

Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
            
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well, so it is backported here.

In the solution for Bug#11764371, we create another object of
user_var and repoint it to temp_table's field. As a result, while
deleting the allocated buffer in step 3, since the cloned object
does not own the buffer, deletion will not happen.
So at step 5, when we execute the statement a second time, the
original object will be used, and since deletion did not happen,
valgrind will not complain about a dangling pointer.
2013-05-23 15:00:31 +05:30
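
A conceptual C++ sketch of the ownership idea behind the backported fix above; UserVarValue and its fields are invented toy names. The persistent per-statement object keeps owning its own buffer, and only a per-execution clone is repointed at memory owned by the temporary table.

  #include <cassert>
  #include <string>

  struct UserVarValue {
    std::string owned;                       // storage the object itself owns
    const std::string *borrowed = nullptr;   // non-owning view into other storage

    const std::string &get() const { return borrowed ? *borrowed : owned; }
  };

  int main() {
    UserVarValue persistent;        // lives in the statement (Lex) across executions
    persistent.owned = "initial";

    {
      std::string temp_table_field = "result of execution 1";
      UserVarValue clone = persistent;       // per-execution clone (as in the fix)
      clone.borrowed = &temp_table_field;    // only the clone points at temp memory
      assert(clone.get() == "result of execution 1");
    } // the temporary table (and its field buffer) is freed at cleanup time

    // Second execution: the persistent object never pointed into freed memory.
    assert(persistent.get() == "initial");
  }
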
Neeraj Bisht
2812634b6c Bug#12328597 - MULTIPLE COUNT(DISTINCT) IN SAME SELECT FALSE
WITH COMPOSITE KEY COLUMNS

Problem:-
While running a SELECT query with several AGGR(DISTINCT) functions
which refer to different fields of the same composite key,
an incorrect value was returned.

Analysis:-

In a table where we have a composite key like (a,b,c),
when we give a query like

select COUNT(DISTINCT b), SUM(DISTINCT a) from ....

here, we first make a list of the items in the AGGR(DISTINCT) functions
(which is a, b), where the order of the items doesn't matter,
and then we check whether we have a composite key where the prefix
of the index columns matches the items of the aggregation functions
(in this case we have a,b,c).

If yes, we can use a loose index scan and need not perform
duplicate removal for the DISTINCT in our aggregate functions.

In our table, we traverse the rows marked with <-- and get the result as

(a,b,c)      count(distinct b)           sum(distinct a)
             treated as count(b)         treated as sum(a)
(1,1,2)<--              1                      1
(1,2,2)<--              1++=2                  1+1=2
(1,2,3)
(2,1,2)<--              2++=3                  1+1+2=4
(2,2,2)<--              3++=4                  1+1+2+2=6
(2,2,3)

The result will be (4,6), but it should be (2,3).

So in this case, our assumption is incorrect. If we have a
query like
select count(distinct a,b), sum(distinct a,b) from ..
then we can use a loose index scan.

Solution:-
When we have more than one AGGR(DISTINCT) function in a query,
they should refer to the same fields, as in

select count(distinct a,b), sum(distinct a,b) from ..

--> we can use a loose index scan, as both AGGR(DISTINCT) refer to the same fields a,b.

If they refer to different fields, as in

select count(distinct a), sum(distinct b) from ..

--> we will not use a loose index scan, as the AGGR(DISTINCT) refer to different fields.
2013-05-13 17:15:25 +05:30
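
A small standalone C++ sketch of the rule stated in the solution above: loose index scan is only allowed when every AGGR(DISTINCT) refers to the same list of fields. The function name and types are invented for illustration; this is not the optimizer's actual check.

  #include <cassert>
  #include <string>
  #include <vector>

  using FieldList = std::vector<std::string>;

  // Allow loose index scan only if all distinct-aggregate argument lists match.
  bool loose_index_scan_allowed(const std::vector<FieldList> &distinct_args) {
    if (distinct_args.empty()) return true;
    for (const FieldList &args : distinct_args)
      if (args != distinct_args.front()) return false;  // different field sets
    return true;
  }

  int main() {
    // SELECT COUNT(DISTINCT a,b), SUM(DISTINCT a,b) ... same fields, allowed.
    assert(loose_index_scan_allowed({{"a", "b"}, {"a", "b"}}));

    // SELECT COUNT(DISTINCT b), SUM(DISTINCT a) ... different fields: the scan
    // would skip rows needed by one of the aggregates, so it is rejected.
    assert(!loose_index_scan_allowed({{"b"}, {"a"}}));
  }
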
Chaithra Gopalareddy
4203b985e3 Bug#16119355:PREPARED STATEMENT: READ OF FREED MEMORY WITH STRING CONVERSION FUNCTIONS
Reverting fix for Bug#16119355 in 5.1 as this needs two patches 
from 5.5+ to work for a certain case
2013-05-10 19:18:21 +05:30
Chaithra Gopalareddy
ff55c9da68 Merge from 5.1 to 5.5 2013-05-07 18:00:00 +05:30
Chaithra Gopalareddy
12a26cd6e0 Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH
STRING CONVERSION FUNCTIONS
            
Problem:
While executing the prepared statement, a user variable is
set to memory which is freed at the end of
execution.
If the statement is executed again, valgrind throws an
error when accessing this pointer.
            
Analysis:
            
1. The first time Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result
(in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value"
gets set to "result_field", with "str_length" being that of
result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
freed, as it is allocated in a chunk as part of the
temporary table which is needed to execute the query.
5. The next time execute of the same statement is called,
"value" will be set to memory which is already freed.
A valgrind error occurs, as "str_length" is positive
(set at step 3).

Note that the user variables list is stored as part of the Lex object
in set_var_list. Hence the persistence across executions.
      
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well, so it is backported here.

In the solution for Bug#11764371, we create another object of
user_var and repoint it to temp_table's field. As a result, while
deleting the allocated buffer in step 3, since the cloned object
does not own the buffer, deletion will not happen.
So at step 5, when we execute the statement a second time, the
original object will be used, and since deletion did not happen,
valgrind will not complain about a dangling pointer.
2013-05-07 16:08:48 +05:30
Tor Didriksen
3b9185793d merge 5.1 => 5.5 2013-04-02 16:20:49 +02:00
Tor Didriksen
559af20ca4 Bug#14700180 CRASH IN COPY_FUNCS
This is a backport of the fix for
Bug#13966809 CRASH IN COPY_FUNCS WHEN GROUPING BY OUTER QUERY BLOB FIELD IN SUBQUERY
2013-04-02 16:05:10 +02:00
Jorgen Loland
015c320a75 Bug#16394084: LOOSE INDEX SCAN WITH QUOTED INT PREDICATE
RETURNS RANDOM DATA
                 
MySQL 5.5 specific version of bugfix.
      
When Loose Index Scan Range access is used, MySQL execution needs
to copy non-aggregated fields. end_send() checked if this was
necessary by checking if join_tab->select->quick had type
QS_TYPE_GROUP_MIN_MAX.
      
In this bug, however, MySQL created a sort index to sort the rows
read from this range access method. create_sort_index() deletes
join_tab->select->quick, which makes it impossible to ask
the join_tab whether LIS has been used.
      
The fix for MySQL 5.5 is to introduce a variable in JOIN_TAB
that stores whether or not LIS has been used. There is no need
for this variable in later MySQL versions because the relevant
code has been refactored.
2013-03-20 11:20:12 +01:00
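
A toy C++ sketch of the 5.5-specific fix above: remember in the join tab that loose index scan (LIS) was used before the quick object carrying that information is deleted. JoinTab, QuickSelect and the flag name are stand-ins, not the server's classes.

  #include <cassert>
  #include <memory>

  struct QuickSelect { bool is_group_min_max = true; };  // stand-in for QS_TYPE_GROUP_MIN_MAX

  struct JoinTab {
    std::unique_ptr<QuickSelect> quick;
    bool used_loose_index_scan = false;   // the new flag introduced by the fix
  };

  void create_sort_index_like(JoinTab &tab) {
    // Record the fact before the quick object goes away.
    tab.used_loose_index_scan = tab.quick && tab.quick->is_group_min_max;
    tab.quick.reset();                    // create_sort_index() deletes the quick select
  }

  int main() {
    JoinTab tab;
    tab.quick = std::make_unique<QuickSelect>();
    create_sort_index_like(tab);
    // end_send() can still tell that non-aggregated fields must be copied.
    assert(tab.used_loose_index_scan);
  }
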
Murthy Narkedimilli
d978016d93 Fix for Bug 16395495 - OLD FSF ADDRESS IN GPL HEADER 2013-03-19 15:53:48 +01:00
Murthy Narkedimilli
d20a70fb55 Bug 16395495 - OLD FSF ADDRESS IN GPL HEADER 2013-03-19 13:29:12 +01:00
Tor Didriksen
370ac0669e Bug#16192219 CRASH IN TEST_IF_SKIP_SORT_ORDER ON SELECT DISTINCT WITH ORDER BY
This is a backport of the fix for:

Bug#13633549 HANDLE_FATAL_SIGNAL IN TEST_IF_SKIP_SORT_ORDER/CREATE_SORT_INDEX
Don't invoke the range optimizer for a NULL select.
2013-02-07 17:05:07 +01:00
Tor Didriksen
84731d897c merge 5.1 => 5.5 2013-02-07 17:08:59 +01:00
Gleb Shchepa
4743ba00bb Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY
Manual up-merge from 5.1 to 5.5.
2013-01-31 09:13:42 +04:00
Gleb Shchepa
e53345f04b Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY
Some queries with the "SELECT ... FROM DUAL" nested subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.

There were a few different issues with similar assertion
failures on different queries:

1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).

Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they aren't treated
as constant from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.

Fix: the Item_ref::const_item() function has been overloaded to
take into account both the (*ref)->const_item() status and the tricky
Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there.

2. In some cases instead of the const_item() call we check a
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.

Fix: the Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.

3. The Item_func_regex class didn't propagate with_subselect
either, since it overloads the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.

Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.

4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's ok to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.

Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.

5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that, Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
  <  <is_not_null_test>(a),
  ---
  >  <cache>(<is_not_null_test>(a)),
or
  < having (<is_not_null_test>(a) and <is_not_null_test>(a))
  ---
  > having 1
etc.

Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
2013-01-23 09:51:50 +04:00
Nisha Gopalakrishnan
62e8f25677 Bug#11757464:SERVER CRASH IN RECURSIVE CALL WHEN OOM
Analysis:
---------

When the server is out of memory, an error is raised
to indicate this. Handling the error requires
more memory to be allocated, which fails, hence the
error handling recurses and causes the
server to crash.

Fix:
---
a) Prevents pushing the 'out of memory' error condition
to the diagnostic area as it requires memory allocation.
GET DIAGNOSTICS, SHOW WARNINGS and SHOW ERRORS statements
will not show information about this error. However the
'out of memory' error is returned to the client.
b) It sets the ME_FATALERROR flag when 'out of memory' errors
are reported (for places where the flag is not already set).
This flag prevents activation of SP error handlers which also
require memory allocation and therefore are likely to fail.
2013-01-15 15:30:26 +05:30
Olav Sandstaa
e810f7f4eb Fix for Bug#14636211 WRONG RESULT (EXTRA ROW) ON A FROM SUBQUERY
WITH A VARIABLE AND ORDER BY
        Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX
      
This is a fix for a regression introduced by Bug#12667154:
Bug#12667154 attempted to fix a performance problem with subqueries
that did filesort. For doing filesort, the optimizer creates a quick
select object to use when building the sort index. This quick select
object was deleted after the first call to create_sort_index(). Thus,
for queries where the subquery was executed multiple times, the quick
object was only used for the first execution. For all later executions
of the subquery, filesort used a complete table scan for building the
sort index. The fix for Bug#12667154 tried to fix this by not deleting
the quick object after the first execution of create_sort_index() so
that it would be re-used for building the sort index by the following
executions of the subquery.
      
The regression introduced by Bug#12667154 is that, due to not deleting
the quick select object after building the sort index, the quick
object could in some cases also be used during the second phase of the
execution of the subquery instead of using the created sort
index. This caused wrong results to be returned.
      
The fix for this issue is to delete the reference to the select object
after it has been used in create_sort_index(). In this way the select 
and quick objects will not be available when doing the second phase
of the execution of the select operation. To ensure that the select
object can be re-used for the following executions of the subquery
we make a copy of the select pointer. This is used for restoring the
select object after the select operation is completed.
2013-01-14 10:58:17 +01:00
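
A toy C++ sketch of the save, clear and restore pattern that the fix above describes; JoinTab, Select and execute_subquery_once() are invented stand-ins, not the server's classes.

  #include <cassert>
  #include <string>

  struct Select { std::string quick = "range scan on k1"; };

  struct JoinTab { Select *select = nullptr; };

  void execute_subquery_once(JoinTab &tab) {
    Select *saved_select = tab.select;   // copy of the pointer, not a deletion

    // Phase 1: create_sort_index() builds the sort index using tab.select,
    // then the reference is cleared so phase 2 cannot fall back to the
    // quick/range object instead of the sort index.
    tab.select = nullptr;

    // Phase 2: rows are fetched through the sort index; tab.select is unused.
    assert(tab.select == nullptr);

    // After the select operation completes, restore the pointer for reuse.
    tab.select = saved_select;
  }

  int main() {
    Select range_select;
    JoinTab tab{&range_select};
    execute_subquery_once(tab);
    execute_subquery_once(tab);          // later executions still see the object
    assert(tab.select == &range_select);
  }
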
Roy Lyseng
8b1d1cf5c0 Bug#15972635: Incorrect results returned in 32 table join with HAVING
The problem is a shift operation that is not 64-bit safe.
The consequence is that used tables information for a join with 32 tables
or more will be incorrect.

Fixed by adding a type cast in Item_sum::update_used_tables().

Also used the opportunity to fix some other potential bugs by adding an
explicit type-cast to an integer in a left-shift operation.
Some of them were quite harmless, but were fixed in order to get the same
signed-ness as the other operand of the operation they were used in.

sql/item_cmpfunc.cc
  Adjusted signed-ness for some integers in left-shift.

sql/item_subselect.cc
  Added type-cast to nesting_map (which is a 32/64 bit type, so
  potential bug for deeply nested queries).

sql/item_sum.cc
  Added type-cast to nesting_map (32/64-bit type) and table_map
  (64-bit type).

sql/opt_range.cc
  Added type-cast to ulonglong (which is a 64-bit type).

sql/sql_base.cc
  Added type-cast to nesting_map (which is a 32/64-bit type).

sql/sql_select.cc
  Added type-cast to nesting_map (32/64-bit type) and key_part_map
  (64-bit type).

sql/strfunc.cc
  Changed type-cast from longlong to ulonglong, to preserve signed-ness.
2012-12-21 09:53:42 +01:00
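
The core of the shift problem can be shown in a few lines of standalone C++; table_map here is just a local typedef standing in for the server's 64-bit table map, and the code is generic, not the patched MySQL source.

  #include <cassert>
  #include <cstdint>

  using table_map = std::uint64_t;           // 64-bit mask of tables used by an item

  int main() {
    int table_no = 33;                       // 33rd table in a join of more than 32 tables

    // Broken: the literal 1 is a 32-bit int, so 1 << 33 overflows the int type
    // (undefined behaviour) and does not set the intended bit.
    // table_map wrong = 1 << table_no;      // not 64-bit safe

    // Fixed: cast the operand to the 64-bit type before shifting, which is what
    // the patch does in Item_sum::update_used_tables() and the other listed files.
    table_map right = (table_map)1 << table_no;

    assert(right == 0x200000000ULL);         // bit 33 is set as intended
  }
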
Neeraj Bisht
becab17a87 Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB
Problem:-
When we execute a query which has a subquery with GROUP BY and ORDER BY
and a BLOB column, it results in a memory leak.

Analysis:-
In the case of a subquery which has GROUP BY on a BLOB and ORDER BY on another field,
where the BLOB is not a key, we allocate a tmp buffer for copy_field to take care of
the BLOB value. This copy_field value can have copies of itself in two join objects,
so while freeing this copy_field we have to take care that it is
not deleted twice.
The double deletion of tmp_table_param.copy_field is handled by two patches.

One by Kostja :
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.

and other by Oleksandr
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).

Both of these patches were committed in different branches, and while
merging they both got placed, but there is no need for Kostja's patch, as Oleksandr's
patch handles this.
2012-10-18 23:54:18 +05:30
Neeraj Bisht
4aaadc1236 Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB
Problem:-
When we execute a query which has a subquery with GROUP BY and ORDER BY
and a BLOB column, it results in a memory leak.

Analysis:-
In the case of a subquery which has GROUP BY on a BLOB and ORDER BY on another field,
where the BLOB is not a key, we allocate a tmp buffer for copy_field to take care of
the BLOB value. This copy_field value can have copies of itself in two join objects,
so while freeing this copy_field we have to take care that it is
not deleted twice.
The double deletion of tmp_table_param.copy_field is handled by two patches.

One by Kostja :
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.

and other by Oleksandr
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).

Both of these patches were committed in different branches, and while
merging they both got placed, but there is no need for Kostja's patch, as Oleksandr's
patch handles this.
2012-10-18 23:45:15 +05:30
Annamalai Gurusami
bd7c9815ce Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH
CONSISTENT SNAPSHOT OPTION

A transaction is started with a consistent snapshot.  After 
the transaction is started new indexes are added to the 
table.  Now when we issue an update statement, the optimizer
chooses an index.  When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports 
the error code HA_ERR_TABLE_DEF_CHANGED, with message 
stating that "insufficient history for index".

This error message is propagated up to the SQL layer.  But
the my_error() API is never called.  The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).

Hence the following check in the Protocol::end_statement()
fails.

 516   case Diagnostics_area::DA_EMPTY:
 517   default:
 518     DBUG_ASSERT(0);
 519     error= send_ok(thd->server_status, 0, 0, 0, NULL);
 520     break;

The fix is to backport the fix of bugs 14365043, 11761652 
and 11746399. 

14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED

rb://1227 approved by guilhem and mattiasj.
2012-10-08 19:40:30 +05:30
Tor Didriksen
06e4a9696b Backport Bug#13724099 2012-09-12 08:36:12 +02:00
Tor Didriksen
b044278020 Bug#14542543 FIX BUG #12694872 IN 5.5
Bug#14530242 CRASH / MEMORY CORRUPTION IN FILESORT_BUFFER::GET_RECORD_BUFFER WITH MYISAM

This is a backport of
Bug#12694872 - VALGRIND: 18,816 BYTES IN 196 BLOCKS ARE DEFINITELY LOST
Bug#13340270: assertion table->sort.record_pointers == __null
Bug#14536113 CRASH IN CLOSEFRM (TABLE.CC) OR UNPACK (FIELD.H) ON SUBQUERY WITH MYISAM TABLES

Also:
removed and re-added test files with file-ids from trunk.
2012-09-18 17:32:02 +02:00
Tor Didriksen
26269e0d4b merge 5.1 => 5.5 2012-09-12 08:59:44 +02:00
Tor Didriksen
8b0ae035b0 Bug#14463247 ORDER BY SUBQUERY REFERENCING OUTER ALIAS FAILS
Documentation for class Item_outer_ref was wrong:
(*ref) may point to Item_field as well
(see e.g. Item_outer_ref::fix_fields)

So this casting in get_store_key() was wrong:
(*(Item_ref**)((Item_ref*)keyuse->val)->ref)->ref_type()
2012-08-23 16:29:41 +02:00
Sergey Glukhov
ec766b5dab 5.1 -> 5.5 merge 2012-08-09 15:50:29 +04:00
Sergey Glukhov
af3fdefca5 Bug #14409015 MEMORY LEAK WHEN REFERENCING OUTER FIELD IN HAVING
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
2012-08-09 15:34:52 +04:00
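
A conceptual C++ sketch of the arena rule behind the fix above: objects created anew on every execution of a prepared statement must live in a per-execution pool, not the statement-lifetime pool. MemRoot here is a toy stand-in for the server's memory roots, not the real MEM_ROOT API.

  #include <cassert>
  #include <string>
  #include <vector>

  struct MemRoot {
    std::vector<std::string *> allocations;
    std::string *alloc(const std::string &s) {
      allocations.push_back(new std::string(s));
      return allocations.back();
    }
    void free_all() {
      for (auto *p : allocations) delete p;
      allocations.clear();
    }
  };

  int main() {
    MemRoot statement_root;   // lives as long as the prepared statement
    MemRoot runtime_root;     // reset after every EXECUTE

    for (int execution = 0; execution < 3; ++execution) {
      // The fix: the per-execution Item_ref-like object goes to the runtime root.
      runtime_root.alloc("Item_ref for outer field in HAVING");
      runtime_root.free_all();          // end of this execution
    }

    // Allocating in statement_root instead would have left one object per
    // execution alive here, which is exactly the reported leak.
    assert(statement_root.allocations.empty());
    statement_root.free_all();
  }
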
Chaithra Gopalareddy
0e729b5d53 Merge from 5.1 to 5.5 2012-07-18 15:18:15 +05:30
Chaithra Gopalareddy
a56c4692d4 Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
"ORDER BY" AND "LIMIT BY" CLAUSE

PROBLEM:
When a 'limit' clause is specified in a query along with
GROUP BY and ORDER BY, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.

ANALYSIS:
With respect to the query specified, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the 'group by' clause.

While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).

FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.

This fix is backported from 5.6. This problem is fixed in 5.6 as
part of the changes for worklog #5558.
2012-07-18 14:36:08 +05:30
Norvald H. Ryeng
0d5109883a Merge 5.1->5.5. 2012-06-18 09:45:42 +02:00
Norvald H. Ryeng
5f61cc438b Bug#13003736 CRASH IN ITEM_REF::WALK WITH SUBQUERIES
Problem: Some queries with subqueries and a HAVING clause that
consists only of a column not in the select or grouping lists causes
the server to crash.

During parsing, an Item_ref is constructed for the HAVING column. The
name of the column is resolved when JOIN::prepare calls fix_fields()
on its having clause. Since the column is not mentioned in the select
or grouping lists, a ref pointer is not found and a new Item_field is
created instead. The Item_ref is replaced by the Item_field in the
tree of HAVING clauses. Since the tree consists only of this item, the
pointer that is updated is JOIN::having. However,
st_select_lex::having still points to the Item_ref as the root of the
tree of HAVING clauses.

The bug is triggered when doing filesort for create_sort_index(). When
find_all_keys() calls select->cond->walk() it eventually reaches
Item_subselect::walk() where it continues to walk the having clauses
from lex->having. This means that it finds the Item_ref instead of the
new Item_field, and Item_ref::walk() tries to dereference the ref
pointer, which is still null.

The crash is reproducible only in 5.5, but the problem lies latent in
5.1 and trunk as well.

Fix: After calling fix_fields on the having clause in JOIN::prepare(),
set select_lex::having to point to the same item as JOIN::having.

This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
query is executed as a prepared statement. The Item_field is created
in the runtime arena when the query is prepared, and the pointer to
the item is saved by st_select_lex::fix_prepare_information() and
brought back as a dangling pointer when the query is executed, after
the runtime arena has been reclaimed.

Fix: Backport fix from trunk that switches to the permanent arena
before calling Item_ref::fix_fields() in JOIN::prepare().
2012-06-18 09:20:12 +02:00
Annamalai Gurusami
08f360703f Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a
user-requested interruption, no spurious error message must be logged
in the server log.  This patch removes the error message from
the log.

approved by joh and tatjana
2012-06-01 14:12:57 +05:30
Annamalai Gurusami
633fcac144 Merge from mysql-5.1 to mysql-5.5. 2012-06-01 11:49:07 +05:30
Annamalai Gurusami
9b92ce0dbd Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a
user-requested interruption, no spurious error message must be logged
in the server log.  This patch removes the error message from
the log.

approved by joh and tatjana
2012-05-30 10:05:04 +05:30
Olav Sandstaa
1e6966228d Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3

This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.

Using the query from bug report to explain what happens and causes
the wrong result from the query when ICP is enabled:

1. The t3 table contains four records. The outer query will read
   these and for each of these it will execute the subquery.

2. Before the first execution of the subquery it will be optimized. In
   this case the important thing is what happens to the first table t1:
   -make_join_select() will call the range optimizer, which decides
    that t1 should be accessed using a range scan on the k1 index.
    It creates a QUICK_RANGE_SELECT object for this.
   -As the last part of optimization, the ICP code pushes the
    condition down to the storage engine for table t1 on the k1 index.

   This produces the following information in the explain for this table:

     2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort

   Note the use of filesort.

3. The first execution of the subquery does the following (among other
   things) due to the need for sorting:
   a. Call create_sort_index() which again will call find_all_keys():
   b. find_all_keys() will read the required keys for all qualifying
      rows from the storage engine. To do this it checks if it has a
      quick-select for the table. It will use the quick-select for
      reading records. In this case it will read four records from the
      storage engine (based on the range criteria). The storage engine
      will evaluate the pushed index condition for each record.
   c. At the end of create_sort_index() there is code that cleans up a
      lot of stuff on the join tab. One of the things that is cleaned
      is the select object. The result of this is that the
      quick-select object created in make_join_select is deleted.

4. The second execution of the subquery does the same as the first but
   the result is different:
   a. Call create_sort_index() which again will call find_all_keys()
      (same as for the first execution)
   b. find_all_keys() will read the keys from the storage engine. To
      do this it checks if it has a quick-select for the table. Now
      there is NO quick-select object(!) (since it was deleted in
      step 3c). So find_all_keys defaults to read the table using a
      table scan instead. So instead of reading the four relevant records
      in the range it reads the entire table (6 records). It then
      evaluates the table's condition (and here it goes wrong). Since
      the entire condition has been pushed down to the storage engine
      using ICP all 6 records qualify. (Note that the storage engine
      will not evaluate the pushed index condition in this case since
      it was pushed for the k1 index and now we do a table scan
      without any index being used).
      The result is that here we return six qualifying key values
      instead of four due to not evaluating the table's condition.
   c. As above.

5. The last two executions of the subquery will also produce wrong results
   for the same reason.

Summary: The problem occurs because all but the first execution of the
subquery are done as a table scan without evaluating the table's
condition (which is pushed to the storage engine on a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.

Note that this bug, in addition to causing wrong results, can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.

The fix for this problem is to avoid that the Quick-select-object that
the optimizer created is deleted when create_sort_index() is doing
clean-up of the join-tab. This will ensure that the quick-select
object and the corresponding pushed index condition will be available
and used by all following executions of the subquery.
2012-05-16 09:49:23 +02:00
Sunanda Menon
d37a28c9b0 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Joerg Bruehe
ad1e123f47 Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00