FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884
Issue:
-----
During partition pruning, we first identify the partition
in which the row can reside, and then identify the
subpartition. If we find a partition but not the
subpartition, we hit a debug assert. While finding the
subpartition, the part_val_int() function checks the
current thread's error status after some operations. In
this case the thread's error status is already set to an
error (multiple rows returned), so the function reports
that no partition was found, which results in incorrect
behavior.
SOLUTION:
---------
Currently, any error encountered in part_val_int() is
treated as a "partition not found" type error. Instead of
an assert, the error status needs to be checked and a valid
error returned.
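A minimal sketch of the intended control flow, with made-up
names standing in for THD and the partitioning code:

    enum part_result { PART_FOUND, PART_NOT_FOUND, PART_ERROR };

    struct thread_ctx { bool error_set; };  // stand-in for THD status

    part_result find_subpartition(thread_ctx *thd, bool subpart_found)
    {
      // The fix: a pending thread error (e.g. "subquery returns more
      // than one row") is reported as an error, not as "no partition
      // found", and no debug assert fires.
      if (thd->error_set)
        return PART_ERROR;
      return subpart_found ? PART_FOUND : PART_NOT_FOUND;
    }
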
GROUP BY, MYISAM
Problem:-
A query using the loose index scan optimization with MIN()
causes a segmentation fault when the table row length
is less than the key length.
Analysis:
While using loose index scan for MIN(), we call key_copy() to copy
the key data from the record.
This function uses a temporary record buffer to store the key data
from the record buffer. But when the key length is greater
than the buffer length, this causes a segmentation fault.
Solution:
Provide a properly sized buffer to store the key.
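A hedged sketch of the buffer fix; names and signatures are
illustrative, not the real key_copy() interface:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // The destination buffer must be sized from the key length, not
    // from the (possibly shorter) table row length.
    void save_min_key(const unsigned char *key_data,
                      std::size_t rec_length, std::size_t key_length)
    {
      std::vector<unsigned char> key_buf(std::max(rec_length, key_length));
      std::memcpy(key_buf.data(), key_data, key_length);  // no overflow
    }
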
ON COL WITH COMPOSITE INDEX
This problem was introduced by the patch for bug#11751794.
While checking for a keypart covering a non-grouping attribute, we
were not checking whether the root node of the SEL_ARG* tree for the
index has any value.
Consider the following query:
SELECT f_1,..,f_m, AGGREGATE_FN(C)
FROM t1
WHERE ...
GROUP BY ...
Loose index scan ("Using index for group-by") can be used for
this query if there is an index 'i' covering all fields in the
select list, and the GROUP BY clause makes up a prefix f_1,...,f_n
of 'i'. Furthermore, according to rule NGA2 of
get_best_group_min_max(), the WHERE clause must contain a
conjunction of equality predicates for all fields f_n+1,...,f_m.
The problem in this bug was that a query with a WHERE clause that
broke NGA2 was not detected and therefore used loose index scan.
This led to wrong results. The query had an index
covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
or
"WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of
equality predicates.
The solution is to introduce another rule, NGA3, that complements
NGA2. NGA3 says that if a gap field (field between those
listed in GROUP BY and C in the index) has a predicate, then
there can only be one range in the query. This requirement is
more strict than it has to be in theory. BUG 15947433 will deal
with that.
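A simplified sketch of the NGA3 test, with a flat range list
standing in for the real SEL_ARG tree:

    struct Range { Range *next; };  // one disjoint interval

    bool nga3_allows_loose_scan(const Range *gap_field_ranges)
    {
      // No predicate on the gap field at all is fine; otherwise
      // require exactly one range, per the conservative rule above.
      if (gap_field_ranges == nullptr)
        return true;
      return gap_field_ranges->next == nullptr;
    }
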
When executing row-ordered-retrieval index merge,
the handler was cloned, but it used the wrong
memory root: instead of allocating memory
on the thread/query's mem_root, it used the table's
mem_root, resulting in memory kept in the
table object that was not freed until the table was
closed.
The solution was to ensure that memory used during cloning
of a handler is allocated from the correct memory root.
This was implemented by changing handler::clone() to also
take a name argument, so that it can be used with partitioning,
and by having ha_partition allocate only the ha_partition's ref,
then call clone() on each of the original ha_partition's
partitions and set the results as the cloned partitions.
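A simplified sketch of the interface change; MemRoot and Handler
here are toy stand-ins for the server's MEM_ROOT and handler
classes:

    #include <cstddef>
    #include <new>

    struct MemRoot {                // toy arena
      void *alloc(std::size_t size) { return ::operator new(size); }
    };

    struct Handler {
      const char *name;
      explicit Handler(const char *n) : name(n) {}
      // The clone is told explicitly which root to allocate from: the
      // query's root means the memory goes away with the query, not
      // with the table. The name argument lets partitioning reuse it.
      Handler *clone(const char *new_name, MemRoot *root) const
      {
        return new (root->alloc(sizeof(Handler))) Handler(new_name);
      }
    };
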
Fix of .bzrignore on Windows with VS 2010
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
A subselect executes twice, at the JOIN::optimize stage
and at the JOIN::execute stage. At the optimize stage the
InnoDB prebuilt struct, which is used for the
retrieval of column values, is initialized in
ha_innobase::index_read(), and prebuilt->sql_stat_start is true.
After QUICK_ROR_INTERSECT_SELECT has finished its job, it
restores the read_set/write_set bitmaps to their initial values
and deactivates one of the handlers used by
QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup
(this is the case when we reuse the original handler as one of the
handlers required by the QUICK_ROR_INTERSECT_SELECT object).
On the second subselect execution the inactive handler is activated
in QUICK_RANGE_SELECT::reset, by file->ha_index_init().
In ha_index_init() the InnoDB prebuilt struct is reinitialized
with inappropriate read_set/write_set bitmaps. Further
reinitialization in ha_innobase::index_read() does not
happen, as prebuilt->sql_stat_start is false.
This leads to partial retrieval of the required field values,
and we get a mix of field values from different records
in the record buffer.
The fix is to reset the
read_set/write_set bitmaps, as these values
are required for proper initialization of the
internal InnoDB struct which is used for
the retrieval of column values
(see build_template(), ha_innodb.cc).
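A toy illustration (not server code) of the ordering requirement:
the engine snapshots the column bitmaps at index-init time, so they
must be restored before the handler is re-initialized:

    #include <bitset>

    struct Table  { std::bitset<16> read_set; };
    struct Engine {
      std::bitset<16> tmpl;   // like InnoDB's prebuilt template
      void index_init(const Table &t) { tmpl = t.read_set; }
    };

    void reactivate(Table &t, Engine &e, const std::bitset<16> &full_set)
    {
      t.read_set = full_set;  // the fix: reset bitmaps first ...
      e.index_init(t);        // ... so the template covers all columns
    }
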
Queries involving predicates of the form "const NOT BETWEEN
not_indexed_column AND indexed_column" could return wrong data
due to incorrect handling by the range optimizer.
For "c NOT BETWEEN f1 AND f2" predicates, get_mm_tree()
produces a disjunction of the SEL_ARG trees for "f1 > c" and
"f2 < c". If one of the trees is empty (i.e. one of the
arguments is not sargable) the resulting tree should be empty
as well, since the whole expression in this case is not
sargable.
The above logic is implemented in get_mm_tree() as follows. The
initial state of the resulting tree is NULL (aka empty). We
then iterate through arguments and compute the corresponding
SEL_ARG tree (either "f1 > c" or "f2 < c"). If the resulting
tree is NULL, it is simply replaced by the generated
tree. Otherwise it is replaced by a disjunction of itself and
the generated tree. The obvious flaw in this implementation is
that if the first argument is not sargable and thus produces a
NULL tree, the resulting tree will simply be replaced by the
tree for the second argument. As a result, "c NOT BETWEEN f1
AND f2" will end up as just "f2 < c".
Fixed by adding a check so that when the first argument
produces an empty tree for the NOT BETWEEN case, the loop is
aborted with an empty tree as a result. The whole idea of using
a loop for 2 arguments does not make much sense, but it was
probably used to avoid code duplication for several BETWEEN
variants.
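A sketch of the corrected loop on a toy tree type; tree_or() and the
operand builder are trivial placeholders:

    struct SelTree { };  // stand-in for a SEL_ARG tree

    static SelTree *tree_or(SelTree *a, SelTree *) { return a; }  // placeholder

    static SelTree *operand_tree(int i)   // "f1 > c" or "f2 < c"
    {
      static SelTree t;
      return i == 0 ? nullptr : &t;       // demo: first operand not sargable
    }

    SelTree *not_between_tree()
    {
      SelTree *result = nullptr;
      for (int i = 0; i < 2; i++)
      {
        SelTree *t = operand_tree(i);
        if (t == nullptr)      // non-sargable operand: the fix aborts
          return nullptr;      // with an empty tree for the whole OR
        result = result ? tree_or(result, t) : t;
      }
      return result;
    }
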
Calculating the estimated number of records for a range scan
may take a significant time, and it was impossible for a user
to interrupt that process by killing the connection or the
query.
Fixed by checking the thread's 'killed' status in
check_quick_keys() and interrupting the calculation process if
it is set to a non-zero value.
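A sketch of the pattern, with a plain atomic standing in for
THD::killed:

    #include <atomic>

    static std::atomic<int> killed{0};  // set from another thread by KILL

    long estimate_records_in_range(long ranges_to_check)
    {
      long rows = 0;
      for (long i = 0; i < ranges_to_check; i++)
      {
        if (killed.load(std::memory_order_relaxed) != 0)
          return -1;        // abort early: the query was killed
        rows += 1;          // stand-in for the per-range cost work
      }
      return rows;
    }
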
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
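A sketch of the reuse scheme, with std::set standing in for the
server's Unique class:

    #include <memory>
    #include <set>

    class IndexMergeSelect {
      std::unique_ptr<std::set<long>> unique_;  // member: allocated once
    public:
      void reset()      // must be called before rows are added
      {
        if (!unique_)
          unique_.reset(new std::set<long>()); // first use only
        else
          unique_->clear();                    // restarts reuse the object
      }
      bool put_row_id(long row_id) { return unique_->insert(row_id).second; }
    };
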
In the process of record search it was not taken into account
that the initial quick->file->ref value could be inapplicable
to the range interval. After the proper row is found, this stale
value is stored into the record buffer, and later the record is
filtered out at the condition evaluation stage.
The fix is to store a reference to the found row in the handler's
ref field.
remember range endpoints
The Loose Index Scan optimization keeps track of a sequence
of intervals. For the current interval it maintains the
current interval's endpoints. But the maximum endpoint was
not stored in the SQL layer; rather, it relied on the
storage engine to retain this value in-between reads. By
coincidence this holds for MyISAM and InnoDB. Not for the
partitioning engine, however.
Fixed by making the key values iterator
(QUICK_RANGE_SELECT) keep track of the current maximum endpoint.
This is also more efficient as we save a call through the
handler API in case of open-ended intervals.
The code to calculate endpoints was extracted into
separate methods in QUICK_RANGE_SELECT, and it was possible to
get rid of some code duplication as part of the fix.
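A sketch of keeping the maximum endpoint in the SQL layer; the
types are simplified stand-ins for QUICK_RANGE_SELECT's internals:

    struct KeyRange { long min_key, max_key; bool open_ended; };

    class RangeIterator {
      KeyRange cur_range_{};   // the fix: SQL layer owns the endpoints
    public:
      void start_range(const KeyRange &r) { cur_range_ = r; }
      // No handler call is needed for open-ended intervals: the
      // end-of-range check is purely local now.
      bool in_range(long key) const
      {
        return cur_range_.open_ended || key <= cur_range_.max_key;
      }
    };
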
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD
optimization properly. As a result, subsequent queries could
return incomplete rows (fields initialized to default
values).
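A minimal illustration of the missing symmetric cleanup (names are
illustrative):

    struct Handler {
      bool keyread_enabled = false;  // index-only reads
    };

    void end_group_min_max_scan(Handler *h)
    {
      // The fix: a scan that enabled KEYREAD must disable it when it
      // finishes; otherwise later queries read only indexed columns
      // and the remaining fields keep their default values.
      h->keyread_enabled = false;
    }
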
for each record'
There was an error in an internal structure in the range
optimizer (SEL_ARG). Bad design causes parts of a data
structure not to be initialized when it is in a certain
state. All client code must check that this state is not
present before trying to access the structure's data. Fixed
by
- Checking the state before trying to access data (in
several places, most of which are not covered by the test case.)
- Copying the keypart id when cloning SEL_ARGs
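A sketch of the two defensive measures on a simplified node type:

    struct Node {
      bool fields_valid;   // the "certain state" mentioned above
      int keypart_id;
      int data;
    };

    int read_data(const Node &n)
    {
      if (!n.fields_valid)  // check the state before accessing data
        return 0;
      return n.data;
    }

    Node clone_node(const Node &n)
    {
      Node copy = n;        // copies keypart_id too, unlike the old code
      return copy;
    }
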
When merging ranges during calculation of the result of OR
applied to two range sets, the current range may be obsoleted
by the resulting merged range.
The first overlapping range can be obsoleted as well.
Fixed by moving the pointer to the first overlapping range to the
pointer of the resulting union range.
Added a few comments at key places in key_or().
WHERE conditions
check_group_min_max() checks if the loose index scan
optimization is applicable for a given WHERE condition, that is
if the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
The problem was that it considered the whole predicate suitable
for the loose index scan optimization as soon as it encountered
a constant as a predicate argument. This is obviously wrong for
cases when a constant is the first argument of a predicate
which does not satisfy the above condition.
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the
test, even if a constant has already been encountered.
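A sketch of the corrected argument walk on a simplified argument
classification (an assumption, not the real item types):

    #include <vector>

    enum ArgKind { CONSTANT, MIN_MAX_FIELD, OTHER_FIELD };

    bool predicate_is_loose_scan_safe(const std::vector<ArgKind> &args)
    {
      bool has_min_max = false;
      for (ArgKind a : args)
      {
        if (a == OTHER_FIELD)
          return false;       // any unsuitable argument disqualifies it
        if (a == MIN_MAX_FIELD)
          has_min_max = true; // a constant alone is not enough
      }
      return has_min_max;
    }
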
When a query used a DATE or DATETIME value formatted
with any separator character other than hyphen '-', and had
a greater-or-equal '>=' condition matching only
the greatest value in an indexed column, the result was
empty if an index range scan was employed.
The range optimizer got a new feature between 5.1.38 and
5.1.39 that changes a greater-or-equal condition to a
greater-than condition if the value in the query is not
present in the table. But the value comparison function
compared the dates as strings instead of as dates.
The bug was fixed by splitting the function
get_date_from_str in two: one part that parses and does
error checking, and is now visible outside the
module. The old get_date_from_str now calls the new
function.
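A hedged sketch of the shape of the split; the signatures are
illustrative and the real functions handle many more formats:

    #include <cstdio>
    #include <string>

    struct Date { int y, m, d; };

    // New helper: parses and error-checks; visible outside the module.
    bool parse_date(const std::string &s, Date *out)
    {
      return std::sscanf(s.c_str(), "%d-%d-%d",
                         &out->y, &out->m, &out->d) == 3;
    }

    // Old entry point now delegates to the helper.
    bool get_date_from_str(const std::string &s, Date *out)
    {
      return parse_date(s, out);
    }
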
Problem: using a spatial index for "non-spatial" queries
(that don't contain MBRXXX() functions) may lead to a failed assert.
Fix: don't use spatial indexes in such cases.
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
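A sketch of the rule on simplified types, where 'impossible' marks
the empty SEL_ARG; only the '<' shape of the normal case is shown:

    #include <climits>

    struct Leaf { bool impossible; long min_val, max_val; };
    enum Cmp { LT, NULL_SAFE_EQ /* <=> */ };

    Leaf make_leaf(Cmp op, bool const_is_null, long value)
    {
      if (const_is_null && op != NULL_SAFE_EQ)
        return Leaf{true, 0, 0};           // always false: empty range
      return Leaf{false, LONG_MIN, value}; // "field < value"
    }
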
covering index
When two range predicates were combined under an OR
predicate, the algorithm tried to merge overlapping ranges
into one. But the case when a range overlapped several other
ranges was not handled. This led to
1) ranges overlapping, which gave repeated results and
2) a range that overlapped several other ranges was cut off.
Fixed by
1) Making sure that a range got an upper bound equal to the
next range with a greater minimum.
2) Removing a continue statement
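A toy version of the repaired union computation: a range that
overlaps several successors keeps folding them in instead of
stopping after the first overlap:

    #include <algorithm>
    #include <vector>

    struct Range { int min, max; };

    std::vector<Range> merge_or(std::vector<Range> r)
    {
      std::sort(r.begin(), r.end(),
                [](const Range &a, const Range &b) { return a.min < b.min; });
      std::vector<Range> out;
      for (const Range &cur : r)
      {
        if (!out.empty() && cur.min <= out.back().max)
          out.back().max = std::max(out.back().max, cur.max);  // keep folding
        else
          out.push_back(cur);
      }
      return out;
    }
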
can lead to bad memory access
Problem: Field_bit is the only field which returns INT_RESULT
and doesn't have an unsigned flag. As it is not a descendant of
Field_num, using ((Field_num *) field_bit)->unsigned_flag may lead
to unpredictable results.
Fix: check the field type before casting.
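A sketch of the fix pattern with simplified stand-ins for the
field classes:

    struct Field {
      virtual bool is_num() const { return false; }
      virtual ~Field() = default;
    };
    struct Field_num : Field {
      bool unsigned_flag = false;
      bool is_num() const override { return true; }
    };
    struct Field_bit : Field { };  // INT_RESULT, but not a Field_num

    bool field_is_unsigned(const Field *f)
    {
      if (!f->is_num())
        return false;              // never cast Field_bit to Field_num
      return static_cast<const Field_num *>(f)->unsigned_flag;
    }
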
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
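A sketch of the extended dispatch; the enum mirrors the item kinds
named above, and the real check does more than classify:

    enum ItemType { COND_ITEM, SUBSELECT_ITEM, FUNC_ITEM,
                    FIELD_ITEM, OTHER };

    bool item_type_is_handled(ItemType t)
    {
      switch (t)
      {
      case COND_ITEM:
      case SUBSELECT_ITEM:
      case FUNC_ITEM:
        return true;   // handled as before
      case FIELD_ITEM:
        return true;   // the fix: bare "field" means "field <> 0"
      default:
        return false;  // previously: debug assertion fired here
      }
    }
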
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before initialized. A new macro UNINIT_VAR() is introduced for
use in the variable declaration, and LINT_INIT() usage will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
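One plausible shape for such a macro, shown as an assumption rather
than the exact definition used:

    // Assumed sketch: self-assignment silences gcc's "may be used
    // uninitialized" warning without changing behaviour; a real
    // zero-initialisation is kept as a fallback.
    #if defined(FORCE_INIT_OF_VARS)
    #define UNINIT_VAR(x) x = 0   /* really initialize */
    #else
    #define UNINIT_VAR(x) x = x   /* suppress the warning only */
    #endif

    int pick(bool have_value, int v)
    {
      int UNINIT_VAR(result);     // gcc no longer warns here
      if (have_value) result = v; else result = -1;
      return result;
    }
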
Problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
Fixed by also scanning the NULL partition for RANGE/LIST
partitioning on the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
There was a problem because pruning uses the field
for comparison (while evaluate_join_record uses longlong),
resulting in pruning failures when comparing DATE to DATETIME.
The fix was to always compare DATE vs DATETIME as DATETIME,
by adding ' 00:00:00' to the DATE string,
and to add an optimization for comparison with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' becomes
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead
of '>='.
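A sketch of the normalization; with the DATE lifted to DATETIME,
the 23:59:59 special case lets the day-level comparison stay strict:

    #include <string>

    // Lift a DATE literal into the DATETIME domain so pruning and
    // row comparison agree.
    std::string date_as_datetime(const std::string &date)
    {
      return date + " 00:00:00";  // '2001-02-03' -> '2001-02-03 00:00:00'
    }
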
The code was using a special global buffer for the value of IS NULL
ranges. This buffer was not always long enough to be copied with a
regular memcpy. As a result, read buffer overflows could occur.
Fixed by setting the null byte to 1 and setting the rest of the
field's disk image to NULL with bzero (instead of relying on the
buffer and memcpy()).
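A sketch of the safer construction of the NULL field image:

    #include <cstddef>
    #include <cstring>

    // Set the null indicator byte and zero-fill the value bytes in
    // place, instead of memcpy'ing from a shared global buffer of
    // possibly insufficient size.
    void store_null_key_image(unsigned char *image,
                              std::size_t field_disk_length)
    {
      image[0] = 1;                                  // value IS NULL
      std::memset(image + 1, 0, field_disk_length);  // bzero the image
    }
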
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.
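A sketch of the arithmetic with illustrative names and sizes:

    #include <cstddef>

    struct PARAM { char payload[512]; };  // illustrative size only

    // Trivial placeholder for the real free-space check.
    static bool space_has(std::size_t needed) { return needed < 64 * 1024; }

    bool enough_space_for_range_analysis()
    {
      const std::size_t base_margin = 8 * 1024;  // illustrative constant
      // The fix: account for the PARAM about to be stored there.
      return space_has(base_margin + sizeof(PARAM));
    }
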
When the function exited with an error, it was not
freeing the local Unique class instance.
Fixed by making sure that all the places where the function
returns free the Unique instance.
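The same guarantee expressed with RAII in a sketch (the actual
patch frees the instance explicitly on each return path):

    #include <memory>

    struct Unique { };  // duplicate-eliminating container

    bool read_keys_and_merge(bool step1_ok, bool step2_ok)
    {
      std::unique_ptr<Unique> unique(new Unique());
      if (!step1_ok)
        return false;  // unique freed here too
      if (!step2_ok)
        return false;  // ... and here
      return true;     // ... and on success
    }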