- Added PARAM::alloced_sel_args, which counts the number of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any further cloning
once MAX_SEL_ARGS SEL_ARG objects have been created by cloning.
- Added comments about space complexity of SEL_ARG-graph
representation.
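Roughly, the counting and the shortcut work as in the sketch below; only the
names PARAM::alloced_sel_args and MAX_SEL_ARGS come from this change, the rest
(including the limit value) is illustrative and not the actual server code:

  struct SEL_ARG { SEL_ARG *left, *right, *next_key_part; /* ... */ };

  static const unsigned MAX_SEL_ARGS= 16000;     // assumed limit value

  struct PARAM
  {
    unsigned alloced_sel_args;   // # of SEL_ARGs created by tree cloning
    // ... other range-analyzer state ...
  };

  // Clone one node of the SEL_ARG graph, but give up (return NULL, i.e.
  // "do not use range access for this sub-condition") once the limit has
  // been reached, so cloning cannot blow up the graph exponentially.
  static SEL_ARG *clone_node(PARAM *param, const SEL_ARG *src)
  {
    if (++param->alloced_sel_args > MAX_SEL_ARGS)
      return NULL;
    return new SEL_ARG(*src);
  }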
To correctly decide which predicates can be evaluated with a given table,
the optimizer must know the exact set of tables that a predicate depends
on. If that mask is too wide (refers to non-existent tables), the optimizer
can erroneously skip a predicate.
One such case of a wrong table usage mask was the aggregate functions:
they had an all-1 mask (meaning they depended on all tables, including
non-existent ones).
Fixed by making a real used_tables mask for the aggregates. The mask is
constructed in the following way (see the sketch below):
1. OR the table dependency masks of all the arguments of the aggregate.
2. If all the arguments of the function come from the local name resolution
   context and the function is evaluated in the same name resolution
   context where it is referenced, OR all the tables from that name
   resolution context into the dependency mask. This denotes that an
   aggregate function depends on the number of rows it processes.
3. Handle correctly the case where an aggregate function is optimized away
   (pre-calculated and made a constant).
Made sure that an aggregate function is never a constant (unless it is the
subject of such a specific optimization and pre-calculation).
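A rough sketch of this mask construction; the class layout below is a
simplification, not the real Item_sum hierarchy, and the condition in step 2
is reduced to a single flag:

  #include <vector>

  typedef unsigned long long table_map;

  struct Item
  {
    virtual table_map used_tables() const = 0;
    virtual ~Item() {}
  };

  struct Name_resolution_context { table_map all_tables_map; };

  struct Item_sum_sketch : Item
  {
    std::vector<Item*> args;
    Name_resolution_context *context;   // context the aggregate is aggregated in
    bool aggregated_in_local_context;   // simplified form of the step 2 condition
    bool forced_const;                  // set by the pre-calculation optimization
    table_map used_tables_cache;

    void update_used_tables()
    {
      used_tables_cache= 0;
      for (size_t i= 0; i < args.size(); i++)          // step 1
        used_tables_cache|= args[i]->used_tables();
      if (aggregated_in_local_context)                 // step 2
        used_tables_cache|= context->all_tables_map;   // depends on the rows it processes
      // step 3: unless the aggregate was pre-calculated into a constant,
      // used_tables() must never report it as constant (i.e. return 0)
    }

    table_map used_tables() const
    { return forced_const ? 0 : used_tables_cache; }
  };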
One other flaw was revealed and fixed in the process: references were
not calling the used_tables recalculation method of their targets.
Currently MySQL allows one to specify which indexes to ignore during
join optimization. The scope of the current USE/FORCE/IGNORE INDEX
hints is only the FROM clause; all other clauses are not affected.
However, in certain cases the optimizer may incorrectly choose an
index for sorting and/or grouping, and produce an inefficient query
plan.
This task provides the means to specify which indexes are
ignored/used for which operation in a more fine-grained manner, thus
making it possible to manually force a better plan. We do this
by extending the current IGNORE/USE/FORCE INDEX syntax to:
  IGNORE/USE/FORCE INDEX [FOR {JOIN | ORDER BY | GROUP BY}]
so that:
- if no FOR is specified, the index hint applies everywhere.
- if MySQL is started with the compatibility option --old_mode, then
  an index hint without a FOR clause works as in 5.0 (i.e., the
  index will only be ignored for JOINs, but can still be used to
  compute ORDER BY).
See WL#3527 for further details.
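The per-clause effect of the hints can be pictured with the sketch below; the
bitmaps and the function are illustrative only and are not the real
parser/optimizer code:

  typedef unsigned long long key_set;   // bit i set => index i may be used

  enum hint_clause
  { HINT_FOR_JOIN, HINT_FOR_ORDER_BY, HINT_FOR_GROUP_BY, HINT_NO_FOR };

  struct table_key_sets
  {
    key_set usable_for_join;
    key_set usable_for_order_by;
    key_set usable_for_group_by;
  };

  // Apply "IGNORE INDEX (idx) [FOR clause]" by clearing bit idx in the
  // affected per-clause sets.
  static void apply_ignore_index(table_key_sets *t, unsigned idx,
                                 hint_clause clause, bool old_mode)
  {
    key_set mask= ~(key_set(1) << idx);
    if (clause == HINT_FOR_JOIN || clause == HINT_NO_FOR)
      t->usable_for_join&= mask;
    if (clause == HINT_NO_FOR && old_mode)
      return;                        // 5.0 behaviour: only joins are affected
    if (clause == HINT_FOR_ORDER_BY || clause == HINT_NO_FOR)
      t->usable_for_order_by&= mask;
    if (clause == HINT_FOR_GROUP_BY || clause == HINT_NO_FOR)
      t->usable_for_group_by&= mask;
  }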
When checking whether an IN predicate can be evaluated using a key,
the optimizer makes sure that all the arguments of IN are of
the same result type. To verify that, it checked whether
Item_func_in::array is filled in.
However, Item_func_in::array is set only if the types are
the same AND all the arguments are compile-time constants.
Fixed by introducing the Item_func_in::arg_types_compatible
flag to allow correct checking of the desired condition.
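Schematically the corrected test looks as below; only the member names array
and arg_types_compatible come from the fix, the rest is a simplification of
the range-analysis code:

  struct in_vector;                   // lookup array of constant IN values

  struct Item_func_in_sketch
  {
    in_vector *array;                 // set only when the result types match
                                      // AND all arguments are constants
    bool arg_types_compatible;        // set whenever the result types match
  };

  // Can this IN predicate be evaluated over a key by the range optimizer?
  static bool in_usable_for_key(const Item_func_in_sketch *in)
  {
    // Before the fix the test was effectively "in->array != NULL", which
    // conflates type compatibility with the all-constant-arguments case.
    return in->arg_types_compatible;
  }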
A wrong order of statements in QUICK_GROUP_MIN_MAX_SELECT::reset
caused a crash when a query with DISTINCT was executed by a loose scan
for an InnoDB table that had been emptied.
This performance degradation for UPDATEs could be observed in update
statements for which the search key cannot be converted to any valid
value of the type of the search column, as for the condition
int_fld=99999999999999999999999999, even though it is guaranteed
that there is no row with such a key value.
The bug could cause a sub-optimal execution plan to be chosen for
a single-table query if a unique index with many null keys was
defined for the table.
It happened because the code of the check_quick_keys function
assumed that any key may occur in a unique index only once. Yet
this is not true for keys with nulls, which may occur in the
index multiple times.
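The corrected assumption can be sketched as follows (illustrative helper, not
the real check_quick_keys() code):

  // A unique index guarantees at most one row per key value only for keys
  // without NULLs; a key containing NULL may occur in the index many times.
  struct range_sketch
  {
    bool is_equality;          // min key == max key, closed interval
    bool covers_unique_index;  // range spans all parts of a UNIQUE index
    bool has_null_part;        // at least one key part is NULL
  };

  static bool at_most_one_row(const range_sketch &r)
  {
    // The old code effectively ignored has_null_part, so e.g. a
    // "keypart IS NULL" range on a unique index was costed as one row.
    return r.is_equality && r.covers_unique_index && !r.has_null_part;
  }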
index_read(), index_read_idx(), index_read_last(), and
records_in_range() now take a 'ulonglong keypart_map' argument
instead of 'uint keylen' - a bitmap showing which keyparts are
present in the key value.
A fallback method is provided for handlers that are lagging behind.
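The bitmap convention and the fallback can be sketched as below; the helper
names are assumptions, only the keypart_map idea itself comes from this change:

  typedef unsigned long long keypart_map_t;  // bit N set => key part N present

  // A prefix of the first n_parts key parts, e.g. 3 parts -> binary 111.
  static keypart_map_t prefix_keypart_map(unsigned n_parts)
  {
    return (keypart_map_t(1) << n_parts) - 1;
  }

  // Fallback for handlers that still think in key lengths: convert the
  // bitmap back into the byte length of the used key prefix, given the
  // stored length of each key part.
  static unsigned keypart_map_to_keylen(keypart_map_t map,
                                        const unsigned *part_store_length,
                                        unsigned total_parts)
  {
    unsigned len= 0;
    for (unsigned i= 0;
         i < total_parts && (map & (keypart_map_t(1) << i));
         i++)
      len+= part_store_length[i];
    return len;
  }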
Removed a lot of compiler warnings
Removed unused variables, functions and labels
Initialized some variables that could be used uninitialized (fatal bugs)
%ll -> %l
Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
A multi-table delete that is optimized with QUICK_RANGE reports table
corruption.
The DELETE statement must not use the KEYREAD optimization and sets
table->no_keyread to 1. This was ignored by the QUICK_RANGE optimization.
With this fix the QUICK_RANGE optimization honors the table->no_keyread
value and does not enable KEYREAD when the flag is set.
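The intended behaviour, sketched (TABLE::no_keyread is the real flag named
above, everything else is simplified):

  struct TABLE_sketch
  {
    bool no_keyread;   // set e.g. by multi-table DELETE: rows must be read
                       // in full, not reconstructed from the index alone
    bool key_read;     // whether index-only (KEYREAD) access is active
  };

  // Called when a range quick select would like to switch to index-only reads.
  static void enable_keyread_if_allowed(TABLE_sketch *table)
  {
    if (table->no_keyread)
      return;                 // honor the flag: keep reading full rows
    table->key_read= true;    // safe to serve the columns from the index
  }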
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
Ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes