with temporary tables
The test case for this bug triggered two problems:
1. JOIN::rollup_init() was supposed to wrap all constant Items
into another object for queries with the WITH ROLLUP modifier,
to ensure that they are never considered constants and are
therefore written into temporary tables if the optimizer chooses
to employ them for DISTINCT/GROUP BY handling.
However, JOIN::rollup_init() was called before
make_join_statistics(), so Items corresponding to fields in
const tables could not be handled as intended, which was
causing all kinds of problems later in the query execution. In
particular, create_tmp_table() assumed all constant items
except "hidden" ones to be removed earlier by remove_const()
which led to improperly initialized Field objects for the
temporary table being created. This is what was causing crashes
and valgrind errors in storage engines.
2. Even when the above problem had been fixed, the query from
the test case produced incorrect results due to some
DISTINCT/GROUP BY optimizations being performed by the
optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling inapplicable DISTINCT/GROUP BY optimizations
when the WITH ROLLUP modifier is present, and splitting the
const-wrapping part of JOIN::rollup_init() into a separate
method which is now invoked after make_join_statistics() when
the const tables are already known.
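A minimal self-contained sketch of the wrapping idea, using simplified
stand-in classes rather than the server's real Item hierarchy (all class and
method names below are illustrative): a wrapper whose const_item() always
returns false keeps the optimizer from treating the value as a constant, so
a real Field is materialized for it in the temporary table. The wrapping
pass runs only once the const tables are known.

  #include <iostream>
  #include <memory>
  #include <vector>

  struct Item {
    virtual ~Item() = default;
    virtual bool const_item() const = 0;   // may the optimizer treat it as a constant?
    virtual long val_int() const = 0;
  };

  struct Item_int : Item {
    long v;
    explicit Item_int(long v) : v(v) {}
    bool const_item() const override { return true; }
    long val_int() const override { return v; }
  };

  // Wrapper used for WITH ROLLUP: same value, but never reported as constant,
  // so it is written into the temporary table instead of being optimized away.
  struct Item_rollup_const_wrap : Item {
    std::unique_ptr<Item> inner;
    explicit Item_rollup_const_wrap(std::unique_ptr<Item> i) : inner(std::move(i)) {}
    bool const_item() const override { return false; }
    long val_int() const override { return inner->val_int(); }
  };

  int main() {
    std::vector<std::unique_ptr<Item>> select_list;
    select_list.push_back(std::make_unique<Item_int>(42));

    // The wrapping pass, done after make_join_statistics() in the fixed code
    // path, once fields of const tables are also known to be constant.
    for (auto &item : select_list)
      if (item->const_item())
        item = std::make_unique<Item_rollup_const_wrap>(std::move(item));

    std::cout << select_list[0]->const_item() << ' '
              << select_list[0]->val_int() << '\n';   // prints: 0 42
  }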
subquery returning multiple rows
Error handling was missing when evaluating subqueries in a WHERE
clause and when assigning a SELECT result to a @variable, which
caused crashes.
Fixed by adding error handling code to both the WHERE condition
evaluation and the assignment to a @variable.
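A minimal sketch of the added error-handling pattern, with simplified
stand-ins (Session, eval_condition) instead of the server's THD and Item
classes: after evaluating an expression that may run a subquery, the caller
checks the session error state rather than trusting the returned value alone.

  #include <iostream>
  #include <optional>

  struct Session { bool error_raised = false; };

  // Stand-in for evaluating a WHERE condition containing a subquery; a
  // subquery returning more than one row raises an error on the session.
  long eval_condition(Session &s, int subquery_rows) {
    if (subquery_rows > 1) { s.error_raised = true; return 0; }
    return 1;
  }

  // Before the fix the zero return was treated as "condition is false" and
  // execution continued; after the fix the caller checks the error state
  // and aborts the statement instead.
  std::optional<long> eval_where(Session &s, int subquery_rows) {
    long v = eval_condition(s, subquery_rows);
    if (s.error_raised)
      return std::nullopt;              // propagate the error
    return v;
  }

  int main() {
    Session s1, s2;
    std::cout << eval_where(s1, 1).has_value() << ' '
              << eval_where(s2, 3).has_value() << '\n';   // prints: 1 0
  }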
having clause...
The fix for bug 46184 was incomplete: it did not cover views
using temporary tables or multiple tables in a FROM clause.
Fixed by reverting the fix for bug 46184 and adding a more
general check that runs at the right execution stage and covers
all of the unsupported cases.
Now PROCEDURE ANALYSE on a non-top-level SELECT is also forbidden.
Updated analyse.test and subselect.test accordingly.
Queries with nested outer joins may lead to crashes or
bad results because an internal data structure is not handled
correctly.
The optimizer uses bitmaps of nested JOINs to determine whether
a given table can be placed at a given position in the JOIN
order.
It maintains a bitmap describing in which JOINs the last placed
table is nested.
When it places a table it makes sure the bit of every JOIN that
contains the table in question is set (because JOINs can be
nested). It does that by recursively setting the bit for the
next enclosing JOIN when this is the first table placed in that
JOIN, and recursively resetting the bit when it is the last one.
When it removes a table from the join order it should do the
opposite: recursively unset the bit if it is the only remaining
table in this JOIN, and recursively set the bit if it is
removing the last table of a JOIN.
There was an error in how the bits were set for the upper
levels: when removing a table, the bit was set for all enclosing
nested JOINs even if more tables were left in the current JOIN
(in which case the upper nested JOINs are in fact not affected).
Fixed by stopping the recursion at the relevant level.
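A self-contained sketch of the bookkeeping described above, using simplified
data structures rather than the server's internals (NestedJoin, add_table and
remove_table are illustrative names). Each nested JOIN owns one bit that is
set while the JOIN is "open" (some but not all of its direct members placed);
the fix is the early break in remove_table(): when the JOIN being left was not
fully covered, the enclosing JOINs are unaffected and the recursion stops.

  #include <cassert>
  #include <cstdint>
  #include <iostream>

  struct NestedJoin {
    uint64_t    bit;        // this JOIN's bit in the embedding map
    int         n_members;  // direct members: base tables and inner nested JOINs
    int         counter;    // direct members already fully placed
    NestedJoin *embedding;  // enclosing nested JOIN, or nullptr
  };

  // Called when a table belonging to 'nest' is appended to the join order.
  void add_table(uint64_t &embedding_map, NestedJoin *nest) {
    for (; nest; nest = nest->embedding) {
      embedding_map |= nest->bit;              // we are now inside this JOIN
      if (++nest->counter < nest->n_members)
        break;                                 // JOIN not complete: outer JOINs unchanged
      embedding_map &= ~nest->bit;             // JOIN fully covered: close it, walk up
    }
  }

  // Called when the last table is removed from the join order (backtracking).
  void remove_table(uint64_t &embedding_map, NestedJoin *nest) {
    for (; nest; nest = nest->embedding) {
      assert(nest->counter > 0);
      bool was_fully_covered = (nest->counter == nest->n_members);
      if (--nest->counter == 0)
        embedding_map &= ~nest->bit;           // left the JOIN entirely
      if (!was_fully_covered)
        break;                                 // the fix: do not touch enclosing JOINs
      embedding_map |= nest->bit;              // JOIN was closed, is open again
    }
  }

  int main() {
    // The outer JOIN has two direct members: the inner nested JOIN and one table.
    NestedJoin outer{0x1, 2, 0, nullptr};
    NestedJoin inner{0x2, 2, 0, &outer};
    uint64_t map = 0;

    add_table(map, &inner);      // t1: inner opens                 -> map == 0x2
    add_table(map, &inner);      // t2: inner closes, outer opens   -> map == 0x1
    remove_table(map, &inner);   // backtrack t2: inner reopens,
                                 // outer closes again              -> map == 0x2
    std::cout << std::hex << map << '\n';      // prints: 2
  }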
Item_sum::set_aggregator() may be called multiple times during query preparation.
On subsequent calls: verify that the aggregator type is the same,
and re-use the existing Aggregator.
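A minimal sketch of that contract with simplified stand-ins for Item_sum and
Aggregator (not the server's classes): the first call allocates, later calls
only assert that the requested type has not changed and keep the existing
object.

  #include <cassert>
  #include <memory>

  enum class AggregatorType { SIMPLE, DISTINCT };

  struct Aggregator {
    AggregatorType type;
    explicit Aggregator(AggregatorType t) : type(t) {}
  };

  struct ItemSum {
    std::unique_ptr<Aggregator> aggr;

    void set_aggregator(AggregatorType t) {
      if (aggr) {                  // subsequent call during re-preparation
        assert(aggr->type == t);   // the type must not change between calls
        return;                    // re-use the existing aggregator
      }
      aggr = std::make_unique<Aggregator>(t);
    }
  };

  int main() {
    ItemSum sum;
    sum.set_aggregator(AggregatorType::DISTINCT);
    Aggregator *first = sum.aggr.get();
    sum.set_aggregator(AggregatorType::DISTINCT);   // same object kept
    assert(sum.aggr.get() == first);
  }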
2630.39.1, 2630.28.29, 2630.34.3, 2630.34.2, 2630.34.1, 2630.29.29,
2630.29.28, 2630.31.1, 2630.28.13, 2630.28.10, 2617.23.14 and
some other minor revisions.
This patch implements:
WL#4264 "Backup: Stabilize Service Interface" -- all the
server prerequisites except si_objects.{h,cc} themselves (they can
be just copied over, when needed).
WL#4435: Support OUT-parameters in prepared statements.
(and fixes for all issues in the initial patches for these two
tasks that were discovered in pushbuild and during testing).
Bug#39519: mysql_stmt_close() should flush all data
associated with the statement.
After execution of a prepared statement, send OUT parameters of the invoked
stored procedure, if any, to the client.
When using the binary protocol, send the parameters in an additional result
set over the wire. When using the text protocol, assign out parameters to
the user variables from the CALL(@var1, @var2, ...) specification.
The following refactoring has been made:
- Protocol::send_fields() was renamed to Protocol::send_result_set_metadata();
- A new Protocol::send_result_set_row() was introduced to encapsulate
common functionality for sending row data.
- The signature of Protocol::prepare_for_send() was changed: this operation
does not need a list of items, the number of items is sufficient.
The following backward incompatible changes have been made:
- CLIENT_MULTI_RESULTS is now enabled by default in the client;
- CLIENT_PS_MULTI_RESULTS is now enabled by default in the client.
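A hedged client-side sketch of the text-protocol behaviour described above,
using standard C API calls (mysql_query, mysql_store_result). The procedure
p1 and its two OUT parameters are hypothetical, its body is assumed to return
no result sets of its own, and connection setup and error handling are
omitted; with the binary protocol the same values would instead arrive as one
extra result set (not shown here).

  #include <mysql.h>
  #include <cstdio>

  void call_and_read_out_params(MYSQL *con)
  {
    /* p1(OUT a INT, OUT b INT): with the text protocol the OUT values are
       assigned to the user variables named in the CALL. */
    mysql_query(con, "CALL p1(@a, @b)");

    /* Read the OUT values back through the user variables. */
    mysql_query(con, "SELECT @a, @b");
    MYSQL_RES *res = mysql_store_result(con);
    if (MYSQL_ROW row = mysql_fetch_row(res))
      std::printf("a=%s b=%s\n", row[0] ? row[0] : "NULL",
                                 row[1] ? row[1] : "NULL");
    mysql_free_result(res);
  }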
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
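A minimal sketch of the kind of guard the fix describes (all names below are
hypothetical, not the server's key-selection code): a SPATIAL key is only
considered usable when the predicate is an MBR-style function, never for
plain = or <=> comparisons.

  #include <iostream>

  enum class KeyAlgorithm  { BTREE, SPATIAL };
  enum class PredicateType { EQUAL, NULL_SAFE_EQUAL, MBR_CONTAINS, MBR_WITHIN };

  bool key_usable_for_predicate(KeyAlgorithm alg, PredicateType pred) {
    if (alg != KeyAlgorithm::SPATIAL)
      return true;                                   // ordinary keys: unchanged
    return pred == PredicateType::MBR_CONTAINS ||    // spatial keys only for
           pred == PredicateType::MBR_WITHIN;        // MBR-style predicates
  }

  int main() {
    std::cout << key_usable_for_predicate(KeyAlgorithm::SPATIAL,
                                          PredicateType::MBR_WITHIN)      // 1
              << key_usable_for_predicate(KeyAlgorithm::SPATIAL,
                                          PredicateType::EQUAL) << '\n';  // 0
  }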
BUG#38049 "incorrect rows estimations with references from preceding table"
(from revid:sergefp@mysql.com-20090126194259-ue20il3qro529l4d).
Compared to 6.0, where EXPLAIN indicates "Using index condition", here in
join_optimizer.result we see "Using where"; this is expected, and 6.0 shows
the same if Index Condition Pushdown is disabled.
EXPLAIN EXTENDED warning.
The query optimizer searches for constant tables and optimizes them away:
fields of such tables are replaced with their values and in later phases
are treated as constants, after which the constant tables are removed from
the query execution plan. Nevertheless, constant tables were still shown
in the EXPLAIN EXTENDED warning, producing a query that might not be
equivalent to the original one.
Now the print_join function skips all tables that were optimized away when
printing the EXPLAIN EXTENDED warning. If all tables were optimized away,
it produces a 'FROM dual' clause.
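A minimal sketch of the new printing rule with a hypothetical helper (not the
server's print_join): tables optimized away as constants are skipped, and
'FROM dual' is printed when nothing remains.

  #include <iostream>
  #include <string>
  #include <vector>

  struct TableRef { std::string name; bool optimized_away; };

  std::string print_from_clause(const std::vector<TableRef> &tables) {
    std::string list;
    for (const TableRef &t : tables) {
      if (t.optimized_away)
        continue;                       // const tables removed from the plan
      if (!list.empty()) list += ",";
      list += t.name;
    }
    return list.empty() ? "FROM dual" : "FROM " + list;
  }

  int main() {
    std::cout << print_from_clause({{"t1", true}, {"t2", false}}) << '\n';  // FROM t2
    std::cout << print_from_clause({{"t1", true}}) << '\n';                 // FROM dual
  }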
----------------------------------------------------------
revno: 2630.13.6
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-3288
timestamp: Fri 2008-07-11 20:22:44 +0400
message:
WL#3288, step 1: ensure that the SQL layer always closes an open
cursor (rnd or index read) before closing a handler.
Temporary tables may set join->group to 0 even though there is grouping.
We also need to test whether sum_func_count > 0 when JOIN::exec() decides
whether to present results in a grouped manner.
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided whether execution
needed to handle grouping, it assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing the variable "bool implicit_grouping" in
the JOIN object.
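A hedged sketch of the decision described above, with a simplified JoinState
standing in for the JOIN object (names illustrative): the flag is computed
before aggregates can be optimized away, so the execution phase no longer
relies on sum_func_count alone.

  #include <iostream>

  struct JoinState {
    bool has_group_by;       // explicit GROUP BY present
    int  sum_func_count;     // aggregate functions found in the SELECT list
    bool implicit_grouping;  // the flag introduced by the fix
  };

  void optimize(JoinState &j) {
    // Implicit grouping: aggregates without GROUP BY => exactly one result row.
    j.implicit_grouping = (!j.has_group_by && j.sum_func_count > 0);
    // Later optimizations may zero sum_func_count, but the flag survives.
    j.sum_func_count = 0;
  }

  bool present_results_grouped(const JoinState &j) {
    // Before the fix only the group list and sum_func_count were consulted,
    // which missed implicit grouping once the aggregates were optimized away.
    return j.has_group_by || j.implicit_grouping;
  }

  int main() {
    JoinState j{false, 1, false};   // e.g. SELECT MAX(a) FROM t1 ORDER BY MAX(a)
    optimize(j);
    std::cout << present_results_grouped(j) << '\n';   // prints: 1 (single row)
  }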
local storage for query cache).
We need more than one pointer per thread to represent the query
cache, and net->query_cache_query cannot be used any more (due to
ABI compatibility issues and the different lifetimes of NET and
THD).
This is a backport of the following patch from 6.0:
----------------------------------------------------------
revno: 2476.1157.2
committer: kostja@bodhi.(none)
timestamp: Sat 2007-06-16 13:29:24 +0400
buffering is used
FORCE INDEX FOR ORDER BY now prevents the optimizer from
using join buffering. As a result the optimizer can use
indexed access on the first table and does not need to
sort the complete result set at the end of the statement.
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and replaced it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because neither
indicator is set in all cases, and the two checks do not
contradict each other).
Fixed by restoring the old condition as a complement to the new one.
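A minimal sketch of the combined test, with simplified stand-ins for the
Item_ident fields and the table-map bit (names illustrative): a reference is
treated as outer if either indicator says so.

  #include <cstdint>
  #include <iostream>

  constexpr uint64_t OUTER_REF_TABLE_BIT = 1ULL << 63;

  struct ItemIdent {
    uint64_t    used_tables;     // table map, may contain OUTER_REF_TABLE_BIT
    const void *depended_from;   // non-null if resolved in an outer SELECT
  };

  bool is_outer_reference(const ItemIdent &item) {
    // The old code checked only the bit, the fix for bug 46749 checked only
    // depended_from; the final fix keeps both, since neither is always set.
    return (item.used_tables & OUTER_REF_TABLE_BIT) != 0 ||
           item.depended_from != nullptr;
  }

  int main() {
    int outer_select_marker = 0;
    ItemIdent a{OUTER_REF_TABLE_BIT, nullptr};   // only the bit is set
    ItemIdent b{0, &outer_select_marker};        // only depended_from is set
    std::cout << is_outer_reference(a) << is_outer_reference(b) << '\n';  // 11
  }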
The outer 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by starting over without advancing to
the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
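A minimal sketch of the fixed loop shape, with a simplified cursor standing in
for the handler interface (HA_ERR_RECORD_DELETED is modelled by a
RECORD_DELETED status; all names are illustrative): the deleted record is
skipped and the scan keeps advancing instead of restarting.

  #include <iostream>
  #include <vector>

  enum ScanStatus { SCAN_OK, RECORD_DELETED, END_OF_FILE };

  struct Cursor {
    const std::vector<int> *rows;   // -1 marks a deleted record
    size_t pos = 0;
    ScanStatus rnd_next(int *out) {
      if (pos >= rows->size()) return END_OF_FILE;
      int v = (*rows)[pos++];
      if (v < 0) return RECORD_DELETED;
      *out = v;
      return SCAN_OK;
    }
  };

  int count_live_rows(Cursor c) {
    int n = 0, v = 0;
    for (;;) {
      ScanStatus err = c.rnd_next(&v);
      if (err == END_OF_FILE) break;
      if (err == RECORD_DELETED)
        continue;        // the fix: skip the deleted record and keep advancing
      ++n;
    }
    return n;
  }

  int main() {
    std::vector<int> rows = {1, -1, 2};
    std::cout << count_live_rows(Cursor{&rows}) << '\n';   // prints: 2
  }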
function, file sql_base.cc
When uncacheable queries are written to a temporary table, the optimizer
must preserve the original JOIN structure, because that structure is
re-used to read from the resulting temporary table.
This was done only for uncacheable subqueries.
But top-level queries can also benefit from this mechanism, especially if
they use index access and need a reset.
Fixed by not limiting the saving of the JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
field references
This error requires a combination of factors:
1. An "impossible WHERE" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top-level WHERE sargable predicate
When JOIN::optimize() detects an "impossible WHERE" it bails out
without doing the rest of the work and initializations; in
particular, it does not call make_join_statistics(), which fills
in various structures for each referenced table.
When processing the result of the "impossible WHERE" the query must
still send a single row of data if there are aggregate functions in
it. In this case the server marks all the aggregates as having
received no rows and calls the relevant Item::val_xxx() method on
the SELECT list. However, if the SELECT list happens to contain a
correlated subquery, this subquery is evaluated in normal
evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() mistakenly
considers the outer field reference a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is
not completely initialized, due to the "impossible WHERE" at that
level, we get a NULL pointer dereference.
Fixed by using a better condition to determine whether a field is
"local" to the SELECT level being processed.
It is not enough to look for OUTER_REF_TABLE_BIT in this case,
since for outer references to constant tables
Item_field::used_tables() returns 0 regardless of whether the
field reference is from the local SELECT or not.
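A minimal sketch of the improved "is local" test, with simplified stand-ins
(Select, FieldRef and is_local_field are illustrative names): the field is
local only if the SELECT it was resolved against is the one currently being
processed; an empty table map is not evidence either way, because outer
references to const tables also report a map of 0.

  #include <cstdint>
  #include <iostream>

  struct Select { int level; };

  struct FieldRef {
    uint64_t      used_tables;    // 0 for fields of const tables
    const Select *resolved_in;    // SELECT the field was resolved against
  };

  // Old test: rely on the table map (outer references to const tables slip
  // through because their map is 0). New test: compare the resolution scope
  // with the SELECT currently being processed.
  bool is_local_field(const FieldRef &f, const Select *current) {
    return f.resolved_in == current;
  }

  int main() {
    Select outer{1}, inner{2};
    FieldRef outer_const_ref{0, &outer};   // outer reference to a const table
    std::cout << is_local_field(outer_const_ref, &inner) << '\n';  // prints: 0
  }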