Commit graph

64 commits

Martin Hansson
d03133dccf Post-push fix to disable a subset of the test case for Bug#47762.
This has been back-ported from 6.0 as the problems proved to afflict 
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
      
Bug no 52174: Sometimes wrong plan when reading a MAX value 
from non-NULL index
      
Bug no 52173: Reading NULL value from non-NULL index gives wrong 
result in embedded server 
      
Both bugs taken together affect a much smaller class of queries than #47762, 
so the fix stays for now.
2010-03-19 09:23:44 +01:00
Martin Hansson
7cb796717e Bug#47762: Incorrect result from MIN() when WHERE tests NOT
NULL column for NULL

The optimization to read MIN() and MAX() values from an
index did not properly handle comparisons with NULL
values. Fixed by giving up the particular optimization step
if there are non-NULL-safe comparisons with NULL values, as
the result is NULL anyway.
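
A minimal illustration of the affected query shape (hypothetical table and
column names):

    CREATE TABLE t1 (a INT NOT NULL, KEY (a));
    -- the non-NULL-safe comparison with NULL matches no rows,
    -- so MIN(a) must be NULL rather than a value read from the index
    SELECT MIN(a) FROM t1 WHERE a = NULL;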

Also, the Oracle copyright notice was added to all files.
2010-03-16 15:51:00 +01:00
Sergey Vojtovich
06fb46a029 BUG#49902 - SELECT returns incorrect results
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD
optimization properly. As a result, subsequent queries could return
incomplete rows (fields initialized to their default values).
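
A hypothetical sequence illustrating the symptom (table and column names are
illustrative only):

    CREATE TABLE t1 (a INT, b INT, c INT, KEY (a,b));
    -- this query can be optimized with GROUP_MIN_MAX and switches KEYREAD on
    SELECT a, MIN(b) FROM t1 GROUP BY a;
    -- if KEYREAD is not switched off again, a following query may see only
    -- the indexed columns, with the remaining fields left at default values
    SELECT * FROM t1;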
2010-02-09 12:53:13 +04:00
Alexey Kopytov
eaf20c303f Automerge. 2009-11-23 13:04:17 +03:00
Alexey Kopytov
7f2ba28ef9 Bug #48472: Loose index scan inappropriately chosen for some
WHERE conditions 
 
check_group_min_max() checks if the loose index scan
optimization is applicable for a given WHERE condition, that is,
whether the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
 
The problem was that it considered the whole predicate suitable 
for the loose index scan optimization as soon as it encountered 
a constant as a predicate argument. This is obviously wrong for 
cases when a constant is the first argument of a predicate 
which does not satisfy the above condition. 
 
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test, even
if a constant has already been encountered.
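
One hypothetical predicate of the problematic shape: the constant appears as
the first argument, but the predicate is not a simple range over the MIN()
column alone, so loose index scan must not be chosen:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 WHERE 0 < b + a GROUP BY a;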
2009-11-17 17:07:14 +03:00
Alexey Kopytov
62b95b90df Bug #46607: Assertion failed: (cond_type == Item::FUNC_ITEM)
results in server crash 
 
check_group_min_max_predicates() assumed the input condition 
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM. 
Since a condition of the form "field" is also a valid condition 
equivalent to "field <> 0", using such a condition in a query 
where the loose index scan was chosen resulted in a debug 
assertion failure. 
 
Fixed by handling conditions of the FIELD_ITEM type in 
check_group_min_max_predicates().
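
A sketch of the described condition shape (hypothetical names): the bare
column reference is a FIELD_ITEM equivalent to "b <> 0":

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 WHERE b GROUP BY a;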
2009-08-30 11:03:37 +04:00
Georgi Kodinov
80dd3a593a Bug #40113: Embedded SELECT inside UPDATE or DELETE can timeout
without error

When using quick access methods for searching rows in UPDATE or
DELETE there was no check whether a fatal error had already been sent
to the client while evaluating the quick condition.
As a result a false OK (following the error) was sent to the
client and the error was thus transformed into a warning.

Fixed by checking for errors sent to the client during 
SQL_SELECT::check_quick() and treating them as real errors.

Fixed a wrong test case in group_min_max.test
Fixed a wrong return code in mysql_update() and mysql_delete()
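
A hypothetical statement of the kind named in the bug title; if evaluating
the condition in SQL_SELECT::check_quick() raises a fatal error (e.g. a
timeout while the embedded SELECT is evaluated), that error must reach the
client rather than be downgraded to a warning:

    UPDATE t1 SET b = 0 WHERE a IN (SELECT a FROM t2);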
2009-07-13 18:11:16 +03:00
Georgi Kodinov
de713f7e1b Addendum to the fix for bug #44821: move partition dependent test
to a test file that guarantees the presence of partition code
2009-06-16 12:59:57 +03:00
Martin Hansson
4c4c7ccc24 Merge 2009-06-16 10:34:32 +02:00
Georgi Kodinov
34ec15724f automerge 2009-06-12 16:58:48 +03:00
Georgi Kodinov
1f2b5b3037 Bug #45386: Wrong query result with MIN function in field list,
WHERE and GROUP BY clause

Loose index scan may use range conditions on the argument of 
the MIN/MAX aggregate functions to find the beginning/end of 
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum 
values. When the comparison returns 0 (equal) the code should 
check the type of the min/max values of the current interval 
and accept or reject the row based on whether the limit is 
open or not.
The composite condition that checked this was wrong and did not work in
all cases.
Fixed by simplifying the conditions and reversing the logic.
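
An illustrative query (hypothetical names) with an open lower bound on the
MIN() argument; rows equal to the bound must be rejected when the comparison
returns 0:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 WHERE b > 5 GROUP BY a;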
2009-06-12 15:38:55 +03:00
Martin Hansson
f7ae038230 Bug#44821: select distinct on partitioned table returns wrong results
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, so some values were
missed. Fixed by always requesting sorted order during range analysis.
This fix was introduced in 6.0 by the fix for bug no 41136.
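
A hypothetical setup matching the bug title:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b))
      PARTITION BY HASH (a) PARTITIONS 4;
    -- without a sorted read, each partition delivered its key prefixes
    -- separately and some distinct values were missed
    SELECT DISTINCT a FROM t1;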
2009-06-10 11:56:00 +02:00
Georgi Kodinov
f075494e00 merged the fix for bug 41610 to 5.1-bugteam 2009-02-27 17:07:27 +02:00
Georgi Kodinov
4d2f047e95 Bug #41610: key_infix_len can be overwritten causing some group by queries to
return no rows

The algorithm that determines the best key for loose index scan loops over
the available indexes and selects the one with the best cost.
It retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best cost so far it 
copies these variables into another set of variables that contain the 
information for the best index so far.
After having checked all the indexes it uses these variables (outside of the 
index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were used
outside of the loop without being preserved in the loop for the best index
so far.
This caused these variables to be overwritten by the next index(es) checked.
Fixed by adding variables to hold the data for the current index, passing 
the new variables to the function that assigns values to them and copying 
the new variables into the existing ones when selecting a new current best 
index.
To avoid further such problems, the declarations of the variables used
to keep information about the current index were moved inside the loop's
compound statement.
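
A sketch of the notion of a key infix (hypothetical schema): the equality on
b sits between the GROUP BY prefix (a) and the MIN() argument (c), and its
length must be preserved for the index that is finally chosen:

    CREATE TABLE t1 (a INT, b INT, c INT, KEY k1 (a,b,c), KEY k2 (a,c,b));
    SELECT a, MIN(c) FROM t1 WHERE b = 2 GROUP BY a;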
2009-02-27 15:25:06 +02:00
Georgi Kodinov
4c318bf6e8 merge 5.0-bugteam -> 5.1-bugteam 2008-08-28 12:54:50 +03:00
Evgeny Potemkin
1f28ee8875 Bug#38195: Incorrect handling of aggregate functions when loose index scan is
used causes server crash.
      
When the loose index scan access method is used, the values of aggregate
functions are precomputed by it. Aggregation of such functions shouldn't be
performed in this case and the functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account, which led to
a crash if a query had MIN/MAX aggregate functions and employed both a
temporary table and loose index scan.
Now the JOIN::exec and the create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when loose index scan is used.
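
One query shape (hypothetical names) that combines loose index scan with a
temporary table: the ORDER BY on the aggregate forces the temporary table,
and MIN(b) must be stored there as a plain column, not re-aggregated:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 GROUP BY a ORDER BY MIN(b) DESC;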
2008-08-27 17:03:17 +04:00
Georgi Kodinov
4b2dd02552 Bug#38195: Incorrect handling of aggregate functions when loose index scan
is used causes server crash.
  Revert the fix: unstable test case revealed by pushbuild
2008-08-19 13:36:24 +03:00
Evgeny Potemkin
1c42e93fe9 Fixed failing test case for the bug#38195. 2008-08-14 23:55:18 +04:00
Evgeny Potemkin
cf28ff2616 Bug#38195: Incorrect handling of aggregate functions when loose index scan is
used causes server crash.

When the loose index scan access method is used, the values of aggregate
functions are precomputed by it. Aggregation of such functions shouldn't be
performed in this case and the functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account, which led to
a crash if a query had MIN/MAX aggregate functions and employed both a
temporary table and loose index scan.
Now the JOIN::exec and the create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when loose index scan is used.
2008-08-13 22:24:55 +04:00
gkodinov/kgeorge@magare.gmz
c7f0e08d2b Merge magare.gmz:/home/kgeorge/mysql/work/B32268-5.0-opt
into  magare.gmz:/home/kgeorge/mysql/work/B32268-5.1-opt
2007-11-26 13:33:36 +02:00
gkodinov/kgeorge@magare.gmz
50d8511136 Bug #32268: Indexed queries give bogus MIN and MAX results
Loose index scan does the grouping, so the temp table does
not need to do it, even when sorting.
Fixed by checking whether the grouping is already done before
doing sorting and grouping in a temp table, and doing only the
sorting in that case.
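
An illustrative query (hypothetical names): loose index scan already groups
by a, so the temporary table used for the ORDER BY only needs to sort:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MAX(b) FROM t1 GROUP BY a ORDER BY MAX(b);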
2007-11-20 16:07:24 +02:00
kostja@bodhi.(none)
2ce9194411 A fix for Bug#32030 "DELETE does not return an error and deletes rows if
error evaluating WHERE"

DELETE with a subquery in the WHERE clause would sometimes ignore a
subquery evaluation error and proceed with the deletion.

The fix is to check for an error after evaluation of the WHERE clause
in DELETE.

Addressed review comments.
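
One hypothetical way the WHERE clause can fail during evaluation: the scalar
subquery returns more than one row, and the resulting error must abort the
DELETE instead of being ignored:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1), (2);
    DELETE FROM t1 WHERE a = (SELECT a FROM t2);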
2007-11-02 02:36:12 +03:00
igor@olga.mysql.com
31bd1715db Merge olga.mysql.com:/home/igor/mysql-5.0-opt
into  olga.mysql.com:/home/igor/mysql-5.1-opt
2007-06-24 19:06:09 -07:00
igor@olga.mysql.com
59b9077ce4 Fixed bug #25602. A query with DISTINCT in the select list to which
the loose scan optimization for grouping queries was applied returned 
a wrong result set when the query was used with the SQL_BIG_RESULT
option.

The SQL_BIG_RESULT option forces a sorting algorithm to be used for grouping
queries instead of employing a suitable index. The current loose scan
optimization is applied only to single-table queries when the suitable
index is covering, and it makes no sense to use a sort algorithm in this
case. However, the create_sort_index function did not take into account
that the loose scan may have been chosen to implement the DISTINCT operator,
which makes sorting unnecessary. Moreover, the current implementation of
the loose scan for queries with DISTINCT assumes that sorting will
never happen. Thus in this case create_sort_index should not call
the function filesort.
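
An illustrative query of the affected shape (hypothetical names):

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    -- loose index scan implements the DISTINCT, so the sort requested by
    -- SQL_BIG_RESULT is unnecessary and filesort must not be called
    SELECT SQL_BIG_RESULT DISTINCT a FROM t1;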
2007-06-23 23:33:55 -07:00
gkodinov@dl145s.mysql.com
b58b551810 Merge dl145s.mysql.com:/data0/bk/team_tree_merge/mysql-5.0-opt
into  dl145s.mysql.com:/data0/bk/team_tree_merge/MERGE2/mysql-5.1-opt
2006-11-29 11:48:44 +01:00
gkodinov@dl145s.mysql.com
43c3dfeb27 Merge dl145s.mysql.com:/data0/bk/team_tree_merge/mysql-5.0
into  dl145s.mysql.com:/data0/bk/team_tree_merge/MERGE2/mysql-5.0-opt
2006-11-29 11:25:22 +01:00
gkodinov/kgeorge@macbook.gmz
a6574ac693 Bug#24156: Loose index scan not used with CREATE TABLE ...SELECT and similar
statements
Currently the optimizer evaluates loose index scan only for top-level SELECT
statements.
Extend loose index scan applicability by:
 - Testing the applicability of loose scan for each sub-select, instead of the
   whole query. This change enables loose index scan for sub-queries.
 - Allowing non-SELECT statements with SELECT parts (e.g.
   CREATE TABLE .. SELECT ...) to use loose index scan.
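
Two hypothetical statements that become eligible with this change:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    -- a SELECT part inside a non-SELECT statement
    CREATE TABLE t2 SELECT a, MIN(b) FROM t1 GROUP BY a;
    -- a sub-query
    SELECT * FROM t1 WHERE b > (SELECT MIN(b) FROM t1 GROUP BY a LIMIT 1);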
2006-11-28 18:06:47 +02:00
monty@nosik.monty.fi
89570bf966 Merge mysql.com:/home/my/mysql-5.0
into  mysql.com:/home/my/mysql-5.1
2006-11-22 14:11:36 +02:00
monty@mysql.com/nosik.monty.fi
e825879800 Remove compiler warnings
(Mostly in DBUG_PRINT() and unused arguments)
Fixed bug in query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
2006-11-20 22:42:06 +02:00
gkodinov@dl145s.mysql.com
aaed398254 Merge dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.1-opt
2006-10-19 16:43:46 +02:00
igor@rurik.mysql.com
3a4a9521d8 Changed test case for bug 22342 to make it platform independent. 2006-10-18 17:24:33 -07:00
gkodinov/kgeorge@macbook.gmz
0e954d2c1a Bug #22342: No results returned for query using max and group by
When using an index for group by and range access the server isolates
a set of ranges based on the conditions over the key parts of the
index used. Then it uses only the ranges over the GROUP BY fields to
jump from one group to another. Since the GROUP BY fields may form a
prefix over the index, we may use only a prefix of the ranges produced
by the range optimizer.
Each range carries a notion of whether it includes its border values.
The problem is that when using a range prefix, the last range is open
because it assumes that there is a range on the next keypart. Thus when
we use a prefix range as it is, it excludes all border values.
The solution is that, when ignoring the suffix of the range conditions
(to jump over the GROUP BY prefix only), the server must change the
remaining intervals so that they always contain their borders, e.g.
if the whole range was:
(1,-inf) <= (<group_by_col>,<min_max_arg_col>) < (1, 3) we must make
(1) <= (<group_by_col>) <= (1) because (a,b) < (c1,c2) means:
a < c1 OR (a = c1 AND b < c2).
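
A query of roughly the shape described above (hypothetical names), where the
whole range over (a,b) has the form (1,-inf) <= (a,b) < (1,3) and the prefix
used for jumping between groups must become the closed interval [1,1]:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MAX(b) FROM t1 WHERE a = 1 AND b < 3 GROUP BY a;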
2006-10-16 19:30:19 +03:00
evgen@sunlight.local
dda7a95c59 Merge epotemkin@bk-internal.mysql.com:/home/bk/mysql-5.1-opt
into  sunlight.local:/local_work/tmp_merge-5.1-opt-mysql
2006-08-01 09:24:19 +04:00
evgen@sunlight.local
ef4f149536 Merge sunlight.local:/local_work/tmp_merge-5.0-opt-mysql
into  sunlight.local:/local_work/tmp_merge-5.1-opt-mysql
2006-07-30 00:33:24 +04:00
sergefp@mysql.com
699291a8e6 BUG#14940 "MySQL choose wrong index", v.2
- Make the range-et-al optimizer produce E(#table records after table 
                                           condition is applied),
- Make the join optimizer use this value,
- Add "filtered" column to EXPLAIN EXTENDED to show 
  fraction of records left after table condition is applied
- Adjust test results, add comments
2006-07-28 21:27:01 +04:00
timour/timka@lamia.home
39ada21138 Fix for BUG#21007.
The problem was that store_top_level_join_columns() incorrectly assumed
that the left/right neighbor of a nested join table reference could only be
at the same level in the join tree.

The fix checks if the current nested join table reference has no immediate
left/right neighbor, and if so chooses the left/right neighbors of the
nearest upper level, where these references are != NULL.
2006-07-21 11:59:46 +03:00
msvensson@neptunus.(none)
82103ed9b3 Update test results
Fix merge problem
2006-06-11 09:04:23 +02:00
msvensson@neptunus.(none)
2c538f6cde Merge bk-internal:/home/bk/mysql-5.1
into  neptunus.(none):/home/msvensson/mysql/mysql-5.1-new-maint
2006-06-10 20:33:50 +02:00
gkodinov@mysql.com
d81a8437e4 Bug #18742: Test 'group_min_max' fails if "classic" configuration in 5.0
Moved the InnoDB related tests to innodb_mysql
2006-05-23 16:43:01 +03:00
msvensson@neptunus.(none)
592a2a7510 Move innodb dependent tests to group_min_max_innodb 2006-05-22 14:10:02 +02:00
msvensson@shellback.(none)
ed349f7c69 Tests use innodb, add test to check if innodb is available 2006-05-18 20:07:35 +02:00
gkodinov@mysql.com
7bae0de398 BUG#18068: SELECT DISTINCT (with duplicates and covering index)
When converting DISTINCT to GROUP BY, where the columns are from the covering
index and are listed twice in the SELECT list, the optimizer created an
improper processing sequence. This is because the columns
of the covering index were not recognized as such and were treated as
non-index columns.

Generally speaking duplicate columns can safely be removed from the GROUP
BY/DISTINCT list because this will not add or remove new rows in the
resulting set. Duplicates can be removed even if they are not consecutive
(as is the case for ORDER BY, where the duplicate columns can be removed
only if they are consecutive).

So we can safely transform "SELECT DISTINCT a,a FROM ... ORDER BY a" to
"SELECT a,a FROM ... GROUP BY a ORDER BY a" instead of 
"SELECT a,a FROM .. GROUP BY a,a ORDER BY a". We can even transform 
"SELECT DISTINCT a,b,a FROM ... ORDER BY a,b" to
"SELECT a,b,a FROM ... GROUP BY a,b ORDER BY a,b".

The fix to this bug consists of checking for duplicate columns in the SELECT
list when constructing the GROUP BY list while transforming DISTINCT to GROUP
BY, and skipping the ones that are already present.
2006-05-09 18:13:01 +03:00
timour@mysql.com
b85bd1e835 Merge mysql.com:/home/timka/mysql/src/5.0-virgin
into  mysql.com:/home/timka/mysql/src/5.0-bug-16710
2006-03-31 12:39:33 +03:00
timour@mysql.com
eed7cf09dd Fix for BUG#16710.
The bug was due to a missed case in the detection of whether an index
can be used for loose scan. More precisely, the range optimizer chose
to use loose index scan for queries for which the condition(s) over
an index key part could not be pushed to the index together with the
loose scan.

As a result, loose index scan was selecting the first row in the
index with a given GROUP BY prefix, and was applying the WHERE
clause after that, while it should have inspected all rows with
the given prefix, and apply the WHERE clause to all of them.

The fix detects and skips such cases.
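
A hypothetical analogue of the described case: the OR across key parts cannot
be pushed into the index range together with the loose scan, so every row
within a group prefix must be inspected:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 WHERE a > 3 OR b < 10 GROUP BY a;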
2006-03-31 12:34:28 +03:00
gkodinov@mysql.com
b35d97d02a Test case for BUG#15102: select distinct returns empty result, select count distinct > 0 (correct)
Reproduced in 5.0.16-nt. Tested to work in 5.0.20
2006-03-27 12:20:25 +03:00
igor@rurik.mysql.com
3af0eabc7f Fixed bug #16203.
If check_quick_select returns a non-empty range then the function cost_group_min_max
cannot return 0 as an estimate of the number of retrieved records.
Yet the function erroneously returned 0 as the estimate in some situations.
2006-02-06 11:35:13 -08:00
timour@mysql.com
d80feb9e21 Merge mysql.com:/home/timka/mysql/src/5.0-virgin
into  mysql.com:/home/timka/mysql/src/5.0-bug-14920
2005-12-01 09:26:17 +02:00
timour@mysql.com
999a73ace5 Fix for BUG#14920 Ordering aggregated result sets corrupts resultset.
The cause of the bug was the use of end_write_group instead of end_write
in the case when ORDER BY required a temporary table, which didn't take
into account the fact that loose index scan already computes the result
of MIN/MAX aggregate functions (and performs grouping).

The solution is to call end_write instead of end_write_group and to add
the MIN/MAX functions to the list of regular functions so that their
values are inserted into the temporary table.
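
An illustrative query (hypothetical names) where the ORDER BY requires a
temporary table on top of the loose index scan; the precomputed MIN(b) must
be written into that table as a regular value:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT a, MIN(b) FROM t1 GROUP BY a ORDER BY MIN(b);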
2005-11-30 12:52:12 +02:00
evgen@moonbone.local
e093fb1fd7 Fix bug#13293 Wrongly used index results in endless loop.
Loose index scan using only the second part of a multipart index was chosen,
which resulted in wrong keys being created and an endless loop.

get_best_group_min_max() now allows loose index scan for distinct only if the
used key parts form a prefix of the index.
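
An illustrative query (hypothetical names) whose used key part is not a
prefix of the index, so loose index scan must not be chosen:

    CREATE TABLE t1 (a INT, b INT, KEY (a,b));
    SELECT DISTINCT b FROM t1;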
2005-11-24 19:54:02 +03:00
serg@serg.mylan
86ad035270 test case fixed to pass w/o innodb 2005-09-25 15:44:05 +02:00