too much memory. Instead, either create the equivalent SEL_TREE manually, or create only two ranges that
strictly include the area to scan.
(Note: just to reiterate: increasing NOT_IN_IGNORE_THRESHOLD will make optimization run slower for big
IN-lists, but the server will not run out of memory. The O(N^2) memory use has been eliminated.)
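One way to read "two ranges that strictly include the area to scan", shown as a minimal standalone sketch (hypothetical names and a simplified Range type, not the real SEL_TREE/opt_range.cc code): for a huge col NOT IN (c1, ..., cN) list, build just two intervals around a single list value. They over-cover the exact N+1 disjoint intervals, so no rows are lost and memory stays constant; the remaining filtering happens when the rows are read.

  // Hypothetical, simplified illustration -- not the real SEL_TREE/opt_range.cc code.
  #include <algorithm>
  #include <cstdio>
  #include <limits>
  #include <vector>

  struct Range { long min, max; };   // open interval (min, max), end points excluded

  // For "col NOT IN (values)": the exact answer is N+1 disjoint ranges, but for a
  // huge list we can return just two ranges built around a single list value.
  // They strictly include the exact ranges, so rows are never lost; the remaining
  // NOT IN filtering happens later, when the rows are actually read.
  std::vector<Range> ranges_for_not_in(std::vector<long> values, size_t threshold) {
    std::sort(values.begin(), values.end());
    const long NEG_INF = std::numeric_limits<long>::min();
    const long POS_INF = std::numeric_limits<long>::max();
    std::vector<Range> out;
    if (values.size() <= threshold) {
      long prev = NEG_INF;                     // exact: (-inf, v0), (v0, v1), ...
      for (long v : values) { out.push_back({prev, v}); prev = v; }
      out.push_back({prev, POS_INF});
    } else {
      out.push_back({NEG_INF, values.front()});  // coarse: just two ranges around
      out.push_back({values.front(), POS_INF});  // the smallest list value
    }
    return out;
  }

  int main() {
    std::vector<long> big_list(10000);
    for (size_t i = 0; i < big_list.size(); i++) big_list[i] = long(i) * 3;
    // 1000 plays the role of a NOT_IN_IGNORE_THRESHOLD-like limit here.
    std::printf("ranges produced: %zu\n", ranges_for_not_in(big_list, 1000).size()); // 2
  }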
* Produce correct results for conditions that were transformed to "(partitioning_range) AND
(list_of_subpartitioning_ranges)": make each partition id set iterator auto-reset itself
after it has returned all partition ids in the sequence (see the iterator sketch after this list)
* Fix "Range mapping" and "Range mapping" partitioning interval analysis functions to
correctly deal with NULL values.
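A minimal sketch of the auto-reset behaviour described in the first bullet (hypothetical types, not the server's partition iterator API): the iterator signals end-of-sequence once and then starts over, so the same subpartition id set can be walked once per partitioning range without an explicit reset call.

  // Hypothetical sketch of an iterator over a set of partition ids that resets
  // itself after returning the end marker, so the caller can walk the same set
  // again (once per partitioning range) without calling any reset function.
  #include <cstdio>
  #include <vector>

  const unsigned NOT_A_PARTITION_ID = ~0u;   // assumed end-of-sequence marker

  struct PartIdIterator {
    std::vector<unsigned> ids;   // the partition ids in the set
    size_t pos;

    unsigned get_next() {
      if (pos == ids.size()) {
        pos = 0;                       // auto-reset: the next call starts over
        return NOT_A_PARTITION_ID;     // signal "no more ids" exactly once
      }
      return ids[pos++];
    }
  };

  int main() {
    PartIdIterator it{{1u, 4u, 7u}, 0};
    for (int pass = 0; pass < 2; pass++) {     // two full passes, no manual reset
      for (unsigned id; (id = it.get_next()) != NOT_A_PARTITION_ID; )
        std::printf("pass %d: partition %u\n", pass, id);
    }
  }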
The bug was due to a missed case in the detection of whether an index
can be used for loose scan. More precisely, the range optimizer chose
to use loose index scan for queries for which the condition(s) over
an index key part could not be pushed to the index together with the
loose scan.
As a result, loose index scan was selecting the first row in the
index with a given GROUP BY prefix, and was applying the WHERE
clause after that, while it should have inspected all rows with
the given prefix and applied the WHERE clause to each of them.
The fix detects and skips such cases.
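To make the difference concrete, a small standalone simulation with made-up data and a made-up WHERE predicate (not server code): filtering only the first row of each group prefix loses groups whose first row fails the predicate, while inspecting every row of the prefix and filtering each one finds them.

  // Tiny standalone simulation (made-up data and predicate, not server code) of
  // MIN(val) per group with a WHERE clause that cannot be pushed into the index.
  #include <cstdio>
  #include <map>
  #include <vector>

  struct Row { int grp; int val; };                             // index order: (grp, val)
  static bool where_ok(const Row& r) { return r.val % 2 == 0; } // made-up WHERE clause

  int main() {
    std::vector<Row> index_rows = {{1, 1}, {1, 2}, {2, 3}, {2, 4}};

    // Buggy loose scan: take only the FIRST row of each group, apply the WHERE after.
    std::map<int, int> buggy;
    std::map<int, bool> seen;
    for (const Row& r : index_rows) {
      if (seen[r.grp]) continue;             // loose scan skips the rest of the group
      seen[r.grp] = true;
      if (where_ok(r)) buggy[r.grp] = r.val; // groups whose first row fails are lost
    }

    // Correct: inspect ALL rows of each group and apply the WHERE to each of them.
    std::map<int, int> correct;
    for (const Row& r : index_rows)
      if (where_ok(r) && !correct.count(r.grp)) correct[r.grp] = r.val;

    std::printf("groups found: buggy=%zu correct=%zu\n",
                buggy.size(), correct.size());   // 0 vs 2 with this data
  }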
Added missing DBUG_xxx_RETURN statements
Fixed some usage of uninitialized variables (as found by valgrind)
Ensure that we don't remove locked tables used as name locks from the open table cache until unlock_table_names() is called.
This was fixed by having drop_locked_name() return any table used as a name lock so that we can free it in unlock_table_names().
This will allow Tomas to continue with his work to use name locks to synchronize things.
Note: valgrind still produces a lot of warnings about use of uninitialized data and reports memory leaks when running the ndb tests
Bug #17894 - Comparison with "less than" operator fails with range partition
The problem here was that on queries with a condition such as "< 3", the range given is NULL < n < 3.
The NULL part works correctly: the NULL value is stored in rec[0] and the
field is marked as being null. However, when the 3 is processed, the 3 is placed
in rec[0] but the null flag is left uncleared (a sketch of this stale null-flag pattern follows the file list below).
partition_range.result:
Results block for bug #17894
partition_range.test:
Test block for bug #17894
partition_list.result:
Results block for bug #17173
partition_list.test:
Test block for bug #17173
opt_range.cc:
call set_notnull to clear any null flag that may have been set
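A standalone miniature of the stale null-flag pattern described above (a hypothetical MiniField type, not the server's Field class): storing the constant 3 into the record buffer must also clear the null flag left behind by the NULL end point, which is what the added set_notnull() call does.

  // Hypothetical miniature of a record-buffer field with a separate null flag --
  // not the server's Field class, just the shape of the bug and of the fix.
  #include <cassert>
  #include <cstdio>

  struct MiniField {
    long value = 0;
    bool is_null = false;

    void store_null()  { is_null = true; }
    void store(long v) { value = v; }        // storing a value does not touch the flag
    void set_notnull() { is_null = false; }  // the fix: clear the flag explicitly
  };

  int main() {
    MiniField key_end;          // plays the role of rec[0] for the upper end point
    key_end.store_null();       // first the NULL end point of "NULL < n < 3"
    key_end.store(3);           // then the constant 3 is stored ...
    std::printf("without the fix: is_null=%d\n", key_end.is_null);  // still 1
    key_end.set_notnull();      // what the opt_range.cc change adds
    assert(!key_end.is_null);
    std::printf("with the fix: value=%ld is_null=%d\n", key_end.value, key_end.is_null);
  }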
- Added empty constructors and virtual destructors to many classes and structs
- Replaced some usage of the offsetof() macro with C++ class pointers
Also, moved some of the code out of handler.h and into partition-specific files for better
separation.
Also, moved some of the C functions into partition_info as formal C++ methods
1. dbug state is now local to a thread
2. new macros: DBUG_EXPLAIN, DBUG_EXPLAIN_INITIAL,
DBUG_SET, DBUG_SET_INITIAL, DBUG_EVALUATE, DBUG_EVALUATE_IF
3. macros are do{}while(0) wrapped (see the sketch after this list)
4. incremental modifications to the dbug state (e.g. "+d,info:-t")
5. dbug code cleanup, style fixes
6. _db_on_ and DEBUGGER_ON/OFF removed
7. the rest of the MySQL code was fixed as required by items 3 (missing ';') and 6
8. dbug manual updated
9. server variable @@debug (global and local) to control dbug from SQL!
a. -#T to print timestamps in the log
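Item 3 is the standard C idiom; a minimal standalone example (hypothetical macro names, not the actual dbug.h definitions) shows why multi-statement debug macros are wrapped in do{}while(0) and why that surfaces the missing-';' fixes of item 7.

  #include <cstdio>

  // Unwrapped multi-statement macro: only the first statement is governed by an
  // enclosing if, and an else branch after it will not even compile.
  #define LOG_TWICE_BAD(msg)  std::printf("%s\n", msg); std::printf("%s\n", msg)

  // do{}while(0) wrapping turns the expansion into a single statement and makes
  // the caller supply the trailing ';' -- hence the "missing ;" fixes in item 7.
  #define LOG_TWICE(msg)  do { std::printf("%s\n", msg); std::printf("%s\n", msg); } while (0)

  int main(int argc, char**) {
    if (argc > 1)
      LOG_TWICE("have arguments");   // expands to one statement, if/else stays valid
    else
      LOG_TWICE("no arguments");
    return 0;
  }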
If check_quick_select() returns a non-empty range, then the function cost_group_min_max()
cannot return 0 as an estimate of the number of retrieved records.
Yet the function erroneously returned 0 as the estimate in some situations.
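A hypothetical sketch of the invariant (not the real cost_group_min_max() code): when the range check reports a non-empty range, the returned estimate is clamped to at least one record.

  // Hypothetical sketch -- not the real cost_group_min_max() code.
  #include <algorithm>
  #include <cstdio>

  typedef unsigned long long ha_rows_t;   // stand-in for the server's ha_rows type

  ha_rows_t group_records_estimate(bool range_is_nonempty, ha_rows_t raw_estimate) {
    if (!range_is_nonempty)
      return 0;                                    // truly empty range: 0 is correct
    return std::max<ha_rows_t>(raw_estimate, 1);   // non-empty range: at least 1 record
  }

  int main() {
    std::printf("%llu\n", group_records_estimate(true, 0));   // prints 1, never 0
    std::printf("%llu\n", group_records_estimate(false, 0));  // prints 0
  }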
Final patch
-----------
This WL is about using this bitmap in all parts of the partition handler.
Thus for:
rnd_init/rnd_next
index_init/index_next and all other variants of index scans
read_range_... the various range scans implemented in the partition handler.
Also use those bitmaps in the various other calls that currently loop over all
partitions.
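A simplified, hypothetical sketch of the idea (not the real ha_partition code, with the bitmap reduced to std::bitset): the scan entry points consult the bitmap of used partitions instead of looping over every partition unconditionally.

  // Simplified, hypothetical sketch -- not the real ha_partition code.
  #include <bitset>
  #include <cstdio>

  const size_t MAX_PARTS = 64;

  struct FakePartitionHandler {
    size_t num_parts = 8;
    std::bitset<MAX_PARTS> used_partitions;   // filled in by partition pruning

    void rnd_init_all() {                     // old style: touch every partition
      for (size_t p = 0; p < num_parts; p++)
        std::printf("init partition %zu\n", p);
    }
    void rnd_init_pruned() {                  // new style: only the used partitions
      for (size_t p = 0; p < num_parts; p++)
        if (used_partitions.test(p))
          std::printf("init partition %zu\n", p);
    }
  };

  int main() {
    FakePartitionHandler h;
    h.used_partitions.set(2);
    h.used_partitions.set(5);
    h.rnd_init_pruned();        // touches only partitions 2 and 5
  }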
- Fixed tests
- Optimized new code
- Fixed some unlikely core dumps
- Better bug fixes for:
- #14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
- #14850 - ERROR 1062 when querying a view using a GROUP BY on a column that can be NULL
- post-...-post review fixes
- Added "integer range walking" that allows to do partition pruning for "a <=? t.field <=? b"
by finding used partitions for a, a+1, a+2, ..., b-1, b.
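A hypothetical sketch of integer range walking (the partitioning function and all names are made up): evaluate the partitioning function for every integer from a to b and collect the partitions that can contain matching rows.

  // Hypothetical sketch of integer range walking -- names and the partitioning
  // function are made up; values are assumed non-negative for simplicity.
  #include <bitset>
  #include <cstdio>

  const size_t NUM_PARTS = 4;

  // Stand-in partitioning function, e.g. something like PARTITION BY HASH(field).
  size_t part_for_value(long v) { return static_cast<size_t>(v) % NUM_PARTS; }

  std::bitset<NUM_PARTS> walk_int_range(long a, long b) {
    std::bitset<NUM_PARTS> used;
    for (long v = a; v <= b; v++) {      // walk a, a+1, ..., b
      used.set(part_for_value(v));
      if (used.all()) break;             // every partition already needed: stop early
    }
    return used;
  }

  int main() {
    std::bitset<NUM_PARTS> used = walk_int_range(10, 12);   // WHERE 10 <= t.field <= 12
    for (size_t p = 0; p < NUM_PARTS; p++)
      if (used.test(p)) std::printf("scan partition %zu\n", p);
  }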