Commit graph

2524 commits

Author SHA1 Message Date
Tor Didriksen
e908e9bc7e Bug#16192219 CRASH IN TEST_IF_SKIP_SORT_ORDER ON SELECT DISTINCT WITH ORDER BY
This is a backport of the fix for:

Bug#13633549 HANDLE_FATAL_SIGNAL IN TEST_IF_SKIP_SORT_ORDER/CREATE_SORT_INDEX
Don't invoke the range optimizer for a NULL select.
2013-02-07 17:05:07 +01:00
Gleb Shchepa
19ea7c031d Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY
Some queries with nested "SELECT ... FROM DUAL" subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.

There were a few different issues with similar assertion
failures on different queries:

1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).

Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they aren't
constant from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.

Fix: the Item_ref::const_item() function has been overloaded to
take into account both (*ref)->const_item() status and tricky
Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there.
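
For illustration, a hypothetical statement of the shape described above (a
nested "SELECT ... FROM DUAL" subquery reached during context analysis, e.g.
under PREPARE) could be:
  PREPARE stmt FROM
    'SELECT c1 FROM t1 WHERE c1 = (SELECT 1 FROM DUAL)';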

2. In some cases, instead of the const_item() call, we check the
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.

Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.

3. The Item_func_regex class didn't propagate with_subselect
either, since it overloads the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.

Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.

4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it is OK to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.

Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.

5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that, the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
  <  <is_not_null_test>(a),
  ---
  >  <cache>(<is_not_null_test>(a)),
or
  < having (<is_not_null_test>(a) and <is_not_null_test>(a))
  ---
  > having 1
etc.

Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
2013-01-23 09:51:50 +04:00
Neeraj Bisht
eef1a1957e Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB
Problem:-
When we execute a query that has a subquery with GROUP BY and ORDER BY and
contains a BLOB column, a memory leak results.

Analysis:-
Consider a subquery that has a GROUP BY on a BLOB column (which is not a key)
and an ORDER BY on another field. We allocate a tmp buffer to copy_field to
take care of the BLOB value. This copy_field value can have copies of itself
in two JOIN objects, so while freeing this copy_field we have to take care
that it is not deleted twice.
The double deletion of tmp_table_param.copy_field is handled by two patches.
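
A hypothetical query of the shape described in the analysis (a subquery that
groups on a non-key BLOB column and orders by another field) could be:
  CREATE TABLE t1 (c1 BLOB, c2 INT);
  SELECT 1 FROM t1
    WHERE 1 IN (SELECT c2 FROM t1 GROUP BY c1 ORDER BY c2);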

One by Kostja :
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.

and other by Oleksandr
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).

Both of these patches were committed in different branches, and while
merging they both got included, but there is no need for Kostja's patch as
Oleksandr's patch handles this.


sql/sql_select.cc:
  Bug#13726751: tmp_join cleanup is not necessary, as later in the code we take care of cleaning up the tmp_join copy_field.
2012-10-18 23:45:15 +05:30
Tor Didriksen
f0b52a9e7e Backport Bug#13724099 2012-09-12 08:36:12 +02:00
Sergey Glukhov
2f30b34095 Bug #14409015 MEMORY LEAK WHEN REFERENCING OUTER FIELD IN HAVING
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
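
A minimal hypothetical sketch of the scenario, where a subquery's HAVING
clause references an outer field and the statement is run repeatedly as a
prepared statement:
  PREPARE ps FROM
    'SELECT c1 FROM t1
       WHERE EXISTS (SELECT 1 FROM t2 GROUP BY t2.c1 HAVING t1.c1 = 1)';
  EXECUTE ps;
  EXECUTE ps;  -- before the fix, each execution leaked the new Item_refs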


sql/item_subselect.cc:
  Addon: fix for Bug#11829691; the item could be created in the
  runtime memroot, so we need to use real_item instead.
2012-08-09 15:34:52 +04:00
Chaithra Gopalareddy
ddcd6867e9 Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
"ORDER BY" AND "LIMIT BY" CLAUSE

PROBLEM:
When a LIMIT clause is specified in a query along with
GROUP BY and ORDER BY, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the LIMIT clause, the optimizer chooses
the right index.

ANALYSIS:
For the query in question, the range optimizer chooses
the first index as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the GROUP BY clause.

While checking, it chooses the second index (on 'c,b,a') based on
the LIMIT specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an ORDER BY clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).
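
For illustration only, a hypothetical schema and query matching the analysis
above (a range on 'a', an index on 'c,b,a' that matches the GROUP BY, an
ORDER BY on a different column, and a LIMIT):
  CREATE TABLE t1 (a INT, b INT, c INT, KEY k1 (a), KEY k2 (c, b, a));
  SELECT c, b FROM t1 WHERE a > 0 GROUP BY c, b ORDER BY a LIMIT 5;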

FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing and hence we will need all the rows.

This fix is backported from 5.6, where the problem was fixed as
part of the changes for WL#5558.


mysql-test/r/subselect.result:
  Changes for Bug#11762052 results in the correct number of rows.
sql/sql_select.cc:
  Do not pass the actual 'limit' value if 'need_tmp' is true.
2012-07-18 14:36:08 +05:30
Norvald H. Ryeng
cac1cd88ab Bug#13003736 CRASH IN ITEM_REF::WALK WITH SUBQUERIES
Problem: Some queries with subqueries and a HAVING clause that
consists only of a column not in the select or grouping lists cause
the server to crash.
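
A hypothetical query of the shape described (a bare HAVING column that
appears in neither the select list nor the grouping list, combined with a
subquery and a sort) could be:
  SELECT c1 FROM t1
    WHERE c1 IN (SELECT c1 FROM t2)
    HAVING c2 ORDER BY c3;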

During parsing, an Item_ref is constructed for the HAVING column. The
name of the column is resolved when JOIN::prepare calls fix_fields()
on its having clause. Since the column is not mentioned in the select
or grouping lists, a ref pointer is not found and a new Item_field is
created instead. The Item_ref is replaced by the Item_field in the
tree of HAVING clauses. Since the tree consists only of this item, the
pointer that is updated is JOIN::having. However,
st_select_lex::having still points to the Item_ref as the root of the
tree of HAVING clauses.

The bug is triggered when doing filesort for create_sort_index(). When
find_all_keys() calls select->cond->walk() it eventually reaches
Item_subselect::walk() where it continues to walk the having clauses
from lex->having. This means that it finds the Item_ref instead of the
new Item_field, and Item_ref::walk() tries to dereference the ref
pointer, which is still null.

The crash is reproducible only in 5.5, but the problem lies latent in
5.1 and trunk as well.

Fix: After calling fix_fields on the having clause in JOIN::prepare(),
set select_lex::having to point to the same item as JOIN::having.

This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
query is executed as a prepared statement. The Item_field is created
in the runtime arena when the query is prepared, and the pointer to
the item is saved by st_select_lex::fix_prepare_information() and
brought back as a dangling pointer when the query is executed, after
the runtime arena has been reclaimed.

Fix: Backport fix from trunk that switches to the permanent arena
before calling Item_ref::fix_fields() in JOIN::prepare().


sql/item.cc:
  Set context when creating Item_field.
sql/sql_select.cc:
  Switch to permanent arena and update select_lex->having.
2012-06-18 09:20:12 +02:00
Annamalai Gurusami
a28a2ca798 Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a user
requested interruption, no spurious error message must be logged
in the server log.  This patch will remove the error message from
the log.

approved by joh and tatjana
2012-06-01 14:12:57 +05:30
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Chaithra Gopalareddy
25f82f8a26 Bug#12713907:STRANGE OPTIMIZE & WRONG RESULT UNDER
ORDER BY COUNT(*) LIMIT.

PROBLEM:
With respect to the problem in the bug description, we
exhibit different behaviors for the two tables
presented, because InnoDB statistics (rec_per_key
in this case) are updated for the first table
but not for the second one. As a result the
query plan gets changed in test_if_skip_sort_order
to use an 'index' scan. Hence the difference in the
EXPLAIN output. (NOTE: We can reproduce the problem
with the first table by reducing the number of tuples
and changing the table structure.)

The varied output for the query on the second table
is a consequence of the query plan change.
When a query plan is changed to use an 'index' scan,
after the call to test_if_skip_sort_order we set
keyread to TRUE immediately. If for some reason
we drop this index scan for a filesort later on,
we fetch only the keys, not the entire tuple.
As a result we would see junk values in the result set.
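
A hypothetical query of the kind named in the bug title (grouping with
ORDER BY COUNT(*) and a LIMIT over an InnoDB table) might be:
  SELECT c1, COUNT(*) FROM t1 GROUP BY c1 ORDER BY COUNT(*) LIMIT 2;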

Following is the code flow:

Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan

Call test_if_skip_sort_order a second time
-Index is not chosen (note that we do not pass the
actual limit value the second time; hence we do not choose
an index scan the second time, which in itself is a bug fixed
in 5.6 with WL#5558)
-goto filesort

Call filesort
-Create quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of
the index
-As a result, the required columns are not fetched

FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.


mysql-test/r/func_group_innodb.result:
  Added test result for Bug#12713907
mysql-test/t/func_group_innodb.test:
  Added test case for Bug#12713907
sql/sql_select.cc:
  Remove the call to set_keyread as we do it from access
  functions 'join_read_first' and 'join_read_last'
2012-04-18 11:25:01 +05:30
Sergey Glukhov
b5c690aa54 Bug#11766300 59387: FAILING ASSERTION: CURSOR->POS_STATE == 1997660512 (BTR_PCUR_IS_POSITIONE
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation
of key length during creation of a reference for
a sort order index. The problem is that
keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled
but the used_tables parameter (of the create_ref_for_key() func) does
not have it. So key parts which have OUTER_REF_TABLE_BIT
are omitted, and this could lead to an incorrect key length
calculation (zero key length).


mysql-test/r/subselect_innodb.result:
  test result
mysql-test/t/subselect_innodb.test:
  test case
sql/sql_select.cc:
  added OUTER_REF_TABLE_BIT to the used_tables parameter
  for create_ref_for_key() function.
storage/innobase/handler/ha_innodb.cc:
  added assertion, request from Inno team
storage/innodb_plugin/handler/ha_innodb.cc:
  added assertion, request from Inno team
2012-04-04 13:29:45 +04:00
Tor Didriksen
f3eb021d5e Bug#13519724 63793: CRASH IN DTCOLLATION::SET(DTCOLLATION &SET)
Backport of fix for:
Bug#53236 Segfault in DTCollation::set(DTCollation&)
2012-02-22 11:17:50 +01:00
Martin Hansson
9ada2f8ec5 Bug #11765810 58813: SERVER THREAD HANGS WHEN JOIN + WHERE + GROUP BY
IS EXECUTED TWICE FROM P

This bug is a duplicate of bug 12567331, which was pushed to the
optimizer backporting tree on 2011-06-11. This is just a back-port of
the fix. Both test cases are included as they differ somewhat.
2012-02-07 14:16:09 +01:00
Jorgen Loland
523c849d14 Backmerge of BUG#12997905 2011-11-18 14:47:11 +01:00
Sergey Glukhov
14dc91ff83 Bug#11747970 34660: CRASH WHEN FEDERATED TABLE LOSES CONNECTION DURING INSERT ... SELECT
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from  `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it could lead to a crash.
The problem is the following:
An attempt to obtain table statistics for the subselect table in a killed
query fails with an error. So JOIN::optimize() for the subquery fails, but
this does not prevent further subquery evaluation.
At the first subquery execution JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE and it prevents
further subquery evaluation. At the second call
JOIN::optimize() does not happen as 'JOIN::optimized' is TRUE,
and in case of an uncacheable subquery the 'executed' flag is set
to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and for an
indication of subquery execution. And that seems wrong,
as the flag can be reset.


mysql-test/r/error_simulation.result:
  test case
mysql-test/t/error_simulation.test:
  test case
sql/item_subselect.cc:
  added new flag subselect_single_select_engine::optimize_error
  which is used for error detection which could happen at optimize
  stage.
sql/item_subselect.h:
  added new flag subselect_single_select_engine::optimize_error
sql/sql_select.cc:
  test case
2011-10-05 13:28:20 +04:00
Sergey Glukhov
3468b55a21 Bug#11766594 59736: SELECT DISTINCT.. INCORRECT RESULT WITH DETERMINISTIC FUNCTION IN WHERE C
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
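
A minimal hypothetical sketch of the scenario, with a deterministic stored
function that runs its own SELECT and is then used in the WHERE clause of a
SELECT DISTINCT:
  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC
    RETURN (SELECT MAX(c1) FROM t1);
  SELECT DISTINCT c2 FROM t2 WHERE c2 = f1();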


mysql-test/r/sp.result:
  test case
mysql-test/t/sp.test:
  test case
sql/sql_base.cc:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_class.cc:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_class.h:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_insert.cc:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.cc:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.h:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_prepare.cc:
  THD::used_tables is replaced with LEX::used_tables
sql/sql_select.cc:
  THD::used_tables is replaced with LEX::used_tables
2011-08-02 11:33:45 +04:00
Georgi Kodinov
d706f1a768 weave merge of mysql-5.1->mysql-5.1-security 2011-05-10 16:57:40 +03:00
Tor Didriksen
e257fb3319 merge 5.0 => 5.1 : Bug#12329653 2011-05-04 17:12:45 +02:00
Tor Didriksen
1cf483aa58 Bug#12329653 - EXPLAIN, UNION, PREPARED STATEMENT, CRASH, SQL_FULL_GROUP_BY
The query was re-written *after* we had tagged it with NON_AGG_FIELD_USED.
Remove the flag before continuing.


mysql-test/r/explain.result:
  Update test case for Bug#48295.
mysql-test/r/subselect.result:
  New test case.
mysql-test/t/explain.test:
  Update test case for Bug#48295.
mysql-test/t/subselect.test:
  New test case.
sql/item.cc:
  Use accessor functions for non_agg_field_used/agg_func_used.
sql/item_subselect.cc:
  Remove non_agg_field_used when we rewrite query '1 < some (...)' => '1 < max(...)'
sql/item_sum.cc:
  Use accessor functions for non_agg_field_used/agg_func_used.
sql/mysql_priv.h:
  Remove unused #defines.
sql/sql_lex.cc:
  Initialize new member variables.
sql/sql_lex.h:
  Replace full_group_by_flag with two boolean flags,
  and introduce accessors for manipulating them.
sql/sql_select.cc:
  Use accessor functions for non_agg_field_used/agg_func_used.
2011-05-04 16:18:21 +02:00
Sergey Glukhov
a5e8d9029b Bug#11756928 48916: SERVER INCORRECTLY PROCESSING HAVING CLAUSES WITH AN ORDER BY CLAUSE
Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest is the
HAVING part. Extraction of the HAVING part does not take into account
the fact that some of the conditions might be non-const but
have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by the
make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument to the make_cond_for_table() function.
This allows extracting elements which belong to the sorted
table and, in addition, elements which are independent
subqueries.
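
For illustration, a hypothetical query where the HAVING clause mixes a
condition on the sorted table with an independent subquery (a non-const
condition whose used_tables is 0) might be:
  SELECT c1, COUNT(*) FROM t1 GROUP BY c1
    HAVING c1 > 0 AND (SELECT MAX(c2) FROM t2) > 0
    ORDER BY c1;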


mysql-test/r/having.result:
  test case
mysql-test/t/having.test:
  test case
sql/sql_select.cc:
  The fix is to use (table_map) 0 instead of used_tables as the
  third argument to the make_cond_for_table() function.
  This allows extracting elements which belong to the sorted
  table and, in addition, elements which are independent
  subqueries.
2011-04-22 11:20:55 +04:00
Tor Didriksen
dd3d9477b2 Bug#11765713 58705: OPTIMIZER LET ENGINE DEPEND ON UNINITIALIZED VALUES CREATED BY OPT_SUM_QU
Valgrind warnings were caused by comparing index values to an uninitialized field.


mysql-test/r/subselect.result:
  New test cases.
mysql-test/t/subselect.test:
  New test cases.
sql/opt_sum.cc:
  Add thd to opt_sum_query enabling it to test for errors.
  If we have a non-nullable index, we cannot use it to match null values,
  since set_null() will be ignored, and we might compare uninitialized data.
sql/sql_select.cc:
  Add thd to opt_sum_query, enabling it to test for errors.
sql/sql_select.h:
  Add thd to opt_sum_query, enabling it to test for errors.
2011-04-14 16:35:24 +02:00
Sergey Glukhov
3abe56f31d Bug#11756242 48137: PROCEDURE ANALYSE() LEAKS MEMORY WHEN RETURNING NULL
There are two problems with ANALYSE():

1. Memory leak
   It happens because do_select() can overwrite the
   JOIN::procedure field (with a zero value in our case) and the
   JOIN destructor doesn't free the memory allocated for
   JOIN::procedure. The fix is to save the original JOIN::procedure
   before the do_select() call and restore it after do_select
   execution.

2. Wrong result
   If the ANALYSE() procedure is used for a statement with a LIMIT clause,
   it could return an empty result set. This happens because of a missing
   analyse::end_of_records() call. The first end_send() function call
   returns NESTED_LOOP_QUERY_LIMIT and the second call of end_send() with
   the end_of_records flag enabled does not happen. The fix is to return
   NESTED_LOOP_OK from end_send() if the procedure is active.
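
For illustration of the second problem, a statement of this shape
(hypothetical table) could come back empty before the fix:
  SELECT c1 FROM t1 LIMIT 1 PROCEDURE ANALYSE();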


mysql-test/r/analyse.result:
  test case
mysql-test/t/analyse.test:
  test case
sql/sql_select.cc:
  --save original JOIN::procedure before do_select() call and
    restore it after do_select execution.
  --return NESTED_LOOP_OK from end_send() if procedure is active
2011-04-14 12:11:57 +04:00
Martin Hansson
61b256177b Bug#11766675 - 59839: Aggregation followed by subquery yields wrong result
The loop that was looping over subqueries' references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementor failed to reset the variable after each iteration. Thus a field
that was not directly aggregated appeared to be.

Fixed by resetting the variable upon each new iteration.
2011-02-18 11:50:06 +01:00
unknown
17fe23e46c Merge from mysql-5.1.55-release 2011-02-08 12:52:33 +01:00
Ole John Aske
221ce9223d Fix for bug#59308: Incorrect result for SELECT DISTINCT <col>... ORDER BY <col> DESC.
Also fix bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.

The root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had been updated to reflect a 'skip' of the sort order.

This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.

The original 'save_quick' was then restored after the QEP had been modified,
which caused:
      
  - An incorrect 'precomputed_group_by= TRUE' may have been set,
    and not reverted, as part of the already modified QEP (Bug#59308)
  - A 'select->quick' might have been created which we fail to delete (bug#59110).
      
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes')
is moved to the end of test_if_skip_sort_order(), and done after *all*
'test_if_skip' checks have been performed - including the
'check_reverse_order:' checks.

The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
      
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
'test_if_skip_sort_order()' (save_quick) - and
new select->quick's created during test_if_skip_sort_order():

  - Before a new select->quick may be created by calling ::test_quick_select(), we
    set 'select->quick= 0' to prevent ::test_quick_select() from prematurely
    deleting the save_quick. (After this call we may have both a 'save_quick'
    and a 'select->quick'.)

  - All returns from ::test_if_skip_sort_order() where we may have both a
    'save_quick' and a 'select->quick' have been changed to gotos to the
    exit points 'skiped_sort_order:' or 'need_filesort:' where we
    decide which of the QUICK_SELECTs to keep, and delete the other.
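
For illustration, the kind of statement named in the bug report, where the
descending sort may force test_if_skip_sort_order() to revert its earlier
decision:
  SELECT DISTINCT c1 FROM t1 ORDER BY c1 DESC;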
2011-02-07 10:36:21 +01:00
Ole John Aske
59269b1da1 Fix for bug#57030: ('BETWEEN' evaluation is incorrect)
The root cause for this bug is that the optimizer tries to detect and
optimize the special case:

'<field> BETWEEN c1 AND c1' and handle this as the condition '<field> = c1'

This was implemented inside add_key_field(.. *field, *value[]...)
which assumed 'field' to refer to a key Field, and value[] to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.

In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.

However, if the BETWEEN predicate specified:

 1) '<const1> BETWEEN <const2> AND <field>'

the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to trick add_key_field() into handling it like:

 2) '<const1> GE <const2> AND <const1> LE <field>'

As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:

 3) '<field> EQ <const1>'

This fix moves this optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields(), which then calls add_key_equal_fields() to collect
key equality / comparison conditions for the key fields in the BETWEEN condition.
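
For illustration, consider a predicate of form 1) where the two constants
are equal (hypothetical table):
  SELECT * FROM t1 WHERE 0 BETWEEN 0 AND c1;
Before the fix this was treated as if it were 'c1 = 0', although its correct
meaning is 'c1 >= 0'.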
2011-02-01 13:20:16 +01:00
Karen Langford
de3c4428b8 Updating header copyright/README in source for 2011 2011-01-25 15:42:40 +01:00
Martin Hansson
3c5662c195 Bug#58207: invalid memory reads when using default column value and
tmptable needed

The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields().) Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.

Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
2011-01-12 09:55:31 +01:00
Jan Wedvik
0a7cfad080 Fix for bug#58553, "Queries with pushed conditions causes 'explain extended'
to crash mysqld". 
      
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.
      
handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
2011-01-11 12:09:54 +01:00
Georgi Kodinov
b831c6dbbb automerge 2011-01-07 15:28:36 +02:00
Kent Boortz
4acfdb9df1 Merge 2010-12-29 00:47:05 +01:00
Kent Boortz
85323eda8a - Added/updated copyright headers
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
2010-12-28 19:57:23 +01:00
Sergey Glukhov
fcb83cbf15 Fixed following problems:
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem when a nested materialized derived table is used
  before being populated, which leads to an incorrect result

We have several modes in which we should disable subquery evaluation.
The reasons for disabling are different. It could be
the uselessness of the evaluation, as in the case of 'CREATE VIEW'
or 'PREPARE stmt'; or we should disable subquery evaluation
because tables are not locked yet, as happens in Bug#54475; or
because too-early evaluation of subqueries can lead to a wrong result,
as happened in Bug#19077.
The main problem is that if subquery items are treated as const,
they are evaluated in ::fix_fields(), ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:

1. PREPARE stmt;
We use UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set
subquery becomes non-const and evaluation does not happen.

2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
   process FRM files
We use LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields(), ::fix_length_and_dec()
where we use Item::val_...() calls for const items.

3. Derived tables with a subquery = wrong result (Bug#19077)
The reason for this bug is too-early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
The check of this field in appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.

Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;

4. Derived tables with subquery where subquery
   is evaluated before table locking (Bug#54475, Bug#52157)

The suggested solution is the following:

-Introduce new field LEX::context_analysis_only with the following
 possible flags:
 #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
 #define CONTEXT_ANALYSIS_ONLY_VIEW    2
 #define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform
 a context analysis operation
-Item_subselect::const_item() returns
 a result depending on LEX::context_analysis_only.
 If context_analysis_only is set then we return
 FALSE, which means that the subquery is non-const.
 As all subquery types are wrapped by Item_subselect,
 this allows us to make a subquery non-const when
 it's necessary.


mysql-test/r/derived.result:
  test case
mysql-test/r/multi_update.result:
  test case
mysql-test/r/view.result:
  test case
mysql-test/suite/innodb/r/innodb_multi_update.result:
  test case
mysql-test/suite/innodb/t/innodb_multi_update.test:
  test case
mysql-test/suite/innodb_plugin/r/innodb_multi_update.result:
  test case
mysql-test/suite/innodb_plugin/t/innodb_multi_update.test:
  test case
mysql-test/t/derived.test:
  test case
mysql-test/t/multi_update.test:
  test case
mysql-test/t/view.test:
  test case
sql/item.cc:
  --removed unnecessary code
sql/item_cmpfunc.cc:
  --removed unnecessary checks
  --THD::is_context_analysis_only() is replaced with LEX::is_ps_or_view_context_analysis()
sql/item_func.cc:
  --refactored context analysis checks
sql/item_row.cc:
  --removed unnecessary checks
sql/item_subselect.cc:
  --removed unnecessary code
  --added DBUG_ASSERT into Item_subselect::exec()
    which asserts that subquery execution can not happen
    if LEX::context_analysis_only is set, i.e. at context
    analysis stage.
  --Item_subselect::const_item()
    Return FALSE if LEX::context_analysis_only is set.
    It prevents subquery evaluation in ::fix_fields &
    ::fix_length_and_dec at context analysis stage.
sql/item_subselect.h:
  --removed unnecessary code
sql/mysql_priv.h:
  --Added new set of flags.
sql/sql_class.h:
  --removed unnecessary code
sql/sql_derived.cc:
  --added LEX::context_analysis_only analysis initialization/cleanup
sql/sql_lex.cc:
  --init LEX::context_analysis_only field
sql/sql_lex.h:
  --New LEX::context_analysis_only field
sql/sql_parse.cc:
  --removed unnecessary code
sql/sql_prepare.cc:
  --removed unnecessary code
  --added LEX::context_analysis_only analysis initialization/cleanup
sql/sql_select.cc:
  --refactored context analysis checks
sql/sql_show.cc:
  --added LEX::context_analysis_only analysis initialization/cleanup
sql/sql_view.cc:
  --added LEX::context_analysis_only analysis initialization/cleanup
2010-12-14 12:33:03 +03:00
Georgi Kodinov
db8bd7beb8 merge 2010-11-26 14:51:48 +02:00
Gleb Shchepa
47bb750c9d backport: Bug #55568 from 5.1-security to 5.0-security
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
>   Bug #55568: user variable assignments crash server when used
>               within query
>   
>   The server could crash after materializing a derived table
>   which requires a temporary table for grouping.
>   
>   When destroying the temporary table used to execute a query for
>   a derived table, JOIN::destroy() did not clean up Item_fields
>   pointing to fields in the temporary table. This led to
>   dereferencing a dangling pointer when printing out the items
>   tree later in the outer SELECT.
>   
>   The solution is an addendum to the patch for bug37362: in
>   addition to cleaning up items in tmp_all_fields3, do the same
>   for items in tmp_all_fields1, since now we have an example
>   where this is necessary.


sql/field.cc:
  Make sure field->table_name is not set to NULL in
  Field::make_field() to avoid assertion failure in 
  Item_field::make_field() after cleaning up items
  (the assertion fired in udf.test when running
  the test suite with the patch applied).
sql/sql_select.cc:
  In addition to cleaning up items in tmp_all_fields3, do the
  same for items in tmp_all_fields1.
  Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
  Introduce a new helper function to avoid code duplication in
  JOIN::destroy().
2010-11-23 00:29:47 +03:00
Guilhem Bichot
96b0404940 Fix for Bug#56138 "valgrind errors about overlapping memory when double-assigning same variable",
and related small fixes.

mysql-test/t/user_var.test:
  test for bug
sql/field_conv.cc:
  From the C standard, memcpy() has undefined behaviour if to->ptr==from->ptr
sql/item_func.cc:
  In the case of BUG#56138, entry->value==ptr in which case memcpy()
  has undefined results per the C standard.
sql/sql_select.cc:
  Work around a bug in old gcc
2010-11-22 09:57:59 +01:00
Sergey Glukhov
50a3c55ee7 Bug#52711 Segfault when doing EXPLAIN SELECT with union...order by (select... where...)
backport from 5.1


mysql-test/r/subselect.result:
  backport from 5.1
mysql-test/t/subselect.test:
  backport from 5.1
sql/sql_select.cc:
  backport from 5.1
2010-11-08 13:51:39 +03:00
Gleb Shchepa
20d704978d Bug #52160: crash and inconsistent results when grouping
by a function and column

The bug report reveals two different bugs about grouping
on a function:

1) grouping by the TIME_TO_SEC function result caused
   a server crash or wrong results and
2) grouping by the function returning a blob caused
   an unexpected "Duplicate entry" error and wrong
   result.

Details for the 1st bug:

TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
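
A hypothetical query hitting the first problem (grouping on a TIME_TO_SEC()
result whose argument may be invalid and therefore NULL) could be:
  SELECT TIME_TO_SEC(c1) AS s, COUNT(*) FROM t1 GROUP BY s;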

Details for the 2nd bug:

The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.


mysql-test/r/func_time.result:
  Test case for bug #52160.
mysql-test/r/type_blob.result:
  Test case for bug #52160.
mysql-test/t/func_time.test:
  Test case for bug #52160.
mysql-test/t/type_blob.test:
  Test case for bug #52160.
sql/item_timefunc.h:
  Bug #52160: crash and inconsistent results when grouping
              by a function and column
  
  TIME_TO_SEC() returns NULL if its argument is invalid (empty
  string for example). Thus its nullability depends not only
  on the nullability of its arguments but also on their values.
  Fixed by (overoptimistically) setting TIME_TO_SEC() to be
  nullable regardless of the nullability of its arguments.
sql/sql_select.cc:
  Bug #52160: crash and inconsistent results when grouping
              by a function and column
  
  The server is unable to create indices on blobs without
  explicit blob key part length. However, this fact was
  ignored for blob function result fields of GROUP BY
  intermediate tables.
  Fixed by disabling GROUP BY index creation for blob
  function result fields like regular blob fields.
2010-10-31 19:04:38 +03:00
Sergey Glukhov
4a23ac20d9 Bug#57688 Assertion `!table || (!table->write_set || bitmap_is_set(table->write_set, field
The lines below, which were added in the patch for Bug#56814, cause this crash:

+      if (table->table)
+        table->table->maybe_null= FALSE;

Consider following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);

SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;

DROP TABLE t1;
--

We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the appropriate tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we have a
LONGLONG field. As we have a GROUP BY clause, we calculate the
group buffer length, see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real tables and has LONG type.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and can not be used for common grouping,
as there could be an incompatibility between tmp
table fields and the group buffer length.
We can not remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we can not use this
function for common grouping. So we should remove setting
TABLE::maybe_null to FALSE from simplify_joins().
In this case we'll get the wrong behaviour of
list_contains_unique_index() back. To fix it we
could use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.


mysql-test/r/group_by.result:
  test case
mysql-test/r/join_outer.result:
  test case
mysql-test/t/group_by.test:
  test case
mysql-test/t/join_outer.test:
  test case
sql/sql_select.cc:
  --remove wrong code
  --use Field::real_maybe_null() check instead of
    Field::maybe_null() and add an additional check of
    TABLE_LIST::outer_join
2010-10-29 12:23:06 +04:00
Sergey Glukhov
d0ac4e2c5a Bug#56814 Explain + subselect + fulltext crashes server
The create_sort_index() function overwrites the original JOIN_TAB::type field.
At re-execution of the subquery the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. It misleads test_if_skip_sort_order() and
the function tries to find a suitable key for the order, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on
re-execution for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value (maybe_null==TRUE) based on the
assumption that this join is outer
(see the setup_table_map() func).


mysql-test/r/explain.result:
  test case
mysql-test/t/explain.test:
  test case
sql/item_subselect.cc:
  Make subquery uncacheable in case of EXPLAIN. It allows to keep
  original JOIN_TAB::type(see JOIN::save_join_tab) and restore it
  on re-execution.
sql/sql_select.cc:
  -restore JOIN_TAB structures for subselect on re-execution for EXPLAIN
  -Update TABLE::maybe_null field as it could have
   the value(maybe_null==TRUE) based on the assumption
   that this join is outer(see setup_table_map() func).
   This change is not related to the crash problem but
   affects EXPLAIN results in the test case.
2010-10-18 16:12:27 +04:00
Sergey Glukhov
127c721cef Bug#54484 explain + prepared statement: crash and Got error -1 from storage engine
The subquery executes twice, at the top-level JOIN::optimize and ::execute stages.
At the first execution the create_sort_index() function is called and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, and at the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinit the HANDLER::ft_handler field before re-execution of the subquery.


mysql-test/r/fulltext.result:
  test case
mysql-test/t/fulltext.test:
  test case
sql/item_func.cc:
  reinit ft_handler before re-execution of subquery
sql/item_func.h:
  Fixed method name
sql/sql_select.cc:
  reinit ft_handler before re-execution of subquery
2010-10-18 14:47:26 +04:00
Georgi Kodinov
292a72a043 merged mysql-5.1 into mysql-5.1-bugteam 2010-10-05 11:11:56 +03:00
Mattias Jonsson
9d1ed095f5 merge 2010-09-13 16:07:50 +02:00
Martin Hansson
3beeb5d045 Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and
ORDER BY computed col
      
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.

There may be a separate ORDER BY clause however, which mandates reading the
whole 'sort table' anyway. But the previous estimate was left untouched.

Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
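
A hypothetical query of the kind described (GROUP BY satisfied by an index,
plus a LIMIT and an ORDER BY on a computed column that mandates a temporary
table) might be:
  SELECT c1, COUNT(*) AS cnt FROM t1 GROUP BY c1 ORDER BY cnt LIMIT 10;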
2010-09-13 13:33:19 +02:00
Mattias Jonsson
2ac69d648a merge 2010-09-10 11:50:38 +02:00
Martin Hansson
446cc653c0 Bug#54543: update ignore with incorrect subquery leads to assertion failure:
inited==INDEX

When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion in the case when different
access methods were used for populating the table vs. retrieving the data from
the table if IGNORE was specified and sql_safe_updates = 0. In this case
execution continues, but the handler expects to continue with the access
method used for row retrieval.

Fixed by doing the cleanup even if errors occur.
2010-09-07 09:58:05 +02:00
Jimmy Yang
9b3a3944e4 Merge from mysql-5.1-bugteam to mysql-5.1-security 2010-09-01 17:43:02 -07:00
Mattias Jonsson
e5bab33a2a Bug#53806: Wrong estimates for range query in partitioned MyISAM table
Bug#46754: 'rows' field doesn't reflect partition pruning

The EXPLAIN result's 'rows' field
was evaluated to the number of rows when the table was opened
(not from the table cache), and only the partitions left
after pruning were updated with their correct number
of rows.

The evaluation of the 'rows' field was using handler::records(),
which is a potentially expensive call and ignores partition
pruning.

The fix was to use the handler's stats.records after updating it
with ::info(HA_STATUS_VARIABLE) instead.

mysql-test/r/partition_pruning.result:
  updated result
mysql-test/t/partition_pruning.test:
  Added test.
sql/sql_select.cc:
  Use ::info + stats.records instead of ::records().
2010-08-26 17:14:18 +02:00
Alexey Kopytov
6cf49743e8 Bug #53544: Server hangs during JOIN query in stored procedure
called twice in a row

Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.

When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration leading
to unnecessary "fixing" of the name resolution list which in
turn could lead to a loop (i.e. circularly-linked part) in that
list. This was causing problems on subsequent execution when
used together with stored procedures or prepared statements.

Fixed by making sure fix_name_res is reset on every loop
iteration.
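
A minimal hypothetical sketch of the scenario, with a nested join executed
twice through a stored procedure:
  CREATE PROCEDURE p1()
    SELECT * FROM t1 LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a;
  CALL p1();
  CALL p1();  -- the second call could hang before the fix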

mysql-test/r/join.result:
  Added a test case for bug #53544.
mysql-test/t/join.test:
  Added a test case for bug #53544.
sql/sql_select.cc:
  Make sure fix_name_res is reset on every loop iteration.
2010-08-26 14:13:02 +04:00
Evgeny Potemkin
b4dc600af9 Bug #55656: mysqldump can be slower after bug 39653 fix.
After the fix for bug 39653 the shortest available secondary index was used for
a full table scan. The primary clustered key was used only if no secondary index
could be used. However, when the chosen secondary index includes all fields of the
table being scanned, it's better to use the primary index since the amount of
data to scan is the same but the primary index is clustered.
Now the find_shortest_key function takes this into account.
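
A hypothetical table where the new behaviour applies: the secondary index
covers every column of the table, but a full scan now prefers the clustered
primary key:
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY k_ab (a, b)) ENGINE=InnoDB;
  SELECT * FROM t1;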


mysql-test/suite/innodb/r/innodb_mysql.result:
  Added a test case for the bug#55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
  Added a test case for the bug#55656.
sql/sql_select.cc:
  Bug #55656: mysqldump can be slower after bug #39653 fix.
  The find_shortest_key function now prefers clustered primary key
  if found secondary key includes all fields of the table.
2010-08-26 13:31:04 +04:00