Bug#41756 "Strange error messages about locks from InnoDB".
In JT_EQ_REF (join_read_key()) access method,
don't try to unlock rows in the handler, unless certain that
a) they were locked
b) they are not used.
Unlocking of rows is done by the logic of the nested join loop,
and is unaware of the possible caching that the access method may
have. This could lead to double unlocking, when a row
was unlocked first after reading into the cache, and then
when taken from cache, as well as to unlocking of rows which
were actually used (but taken from cache).
Delegate part of the unlocking logic to the access method,
and in JT_EQ_REF count how many times a record was actually
used in the join. Unlock it only if its usage count is 0.
Implemented review comments.
Bug#41756 "Strange error messages about locks from InnoDB".
In JT_EQ_REF (join_read_key()) access method,
don't try to unlock rows in the handler, unless certain that
a) they were locked
b) they are not used.
Unlocking of rows is done by the logic of the nested join loop,
and is unaware of the possible caching that the access method may
have. This could lead to double unlocking, when a row
was unlocked first after reading into the cache, and then
when taken from cache, as well as to unlocking of rows which
were actually used (but taken from cache).
Delegate part of the unlocking logic to the access method,
and in JT_EQ_REF count how many times a record was actually
used in the join. Unlock it only if it's usage count is 0.
Implemented review comments.
For application compatibility reasons MySQL converts "<autoincrement_column> IS NULL"
predicates to "<autoincrement_column> = LAST_INSERT_ID()" in the first SELECT following an
INSERT regardless of whether they're top level predicates or not. This causes wrong and
obscure results when these predicates are combined with others on the same columns. Fixed
by only doing the transformation on a single top-level predicate if a special SQL mode is
turned on (sql_auto_is_null).
Also made sql_auto_is_null off by default.
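For illustration, a minimal sketch of the (now optional) behavior; table t1
and its AUTO_INCREMENT column id are hypothetical:
SET SESSION sql_auto_is_null = 1;   -- now required, since the default is OFF
INSERT INTO t1 (id) VALUES (NULL);
SELECT * FROM t1 WHERE id IS NULL;  -- rewritten to: id = LAST_INSERT_ID()
A compound condition such as "id IS NOT NULL AND id IS NULL" is no longer
transformed.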
per-file comments:
mysql-test/r/func_isnull.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
test result updated
mysql-test/t/func_isnull.test
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
test case added
sql/mysqld.cc
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
sql_auto_is_null now is OFF by default.
sql/sql_select.cc
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
remove_eq_conds() was split into two parts: one checks only the top-level
condition, while req_remove_eq_conds() recursively checks the whole
condition tree.
mysql-test/extra/rpl_tests/rpl_insert_id.test
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
test fixed (set the sql_auto_is_null variable)
mysql-test/r/mysqlbinlog.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/r/mysqlbinlog2.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/r/odbc.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/r/query_cache.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/r/user_var-binlog.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/suite/binlog/r/binlog_row_ctype_ucs.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/suite/binlog/r/binlog_stm_ctype_ucs.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/suite/rpl/r/rpl_insert_id.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/suite/rpl/r/rpl_sp.result
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
result updated
mysql-test/t/odbc.test
Bug#41371 Select returns 1 row with condition "col is not null and col is null"
test fixed (set the sql_auto_is_null variable)
with temporary tables
There were two problems the test case from this bug was
triggering:
1. JOIN::rollup_init() was supposed to wrap all constant Items
into another object for queries with the WITH ROLLUP modifier
to ensure they are never considered as constants and therefore
are written into temporary tables if the optimizer chooses to
employ them for DISTINCT/GROUP BY handling.
However, JOIN::rollup_init() was called before
make_join_statistics(), so Items corresponding to fields in
const tables could not be handled as intended, which was
causing all kinds of problems later in the query execution. In
particular, create_tmp_table() assumed all constant items
except "hidden" ones to be removed earlier by remove_const()
which led to improperly initialized Field objects for the
temporary table being created. This is what was causing crashes
and valgrind errors in storage engines.
2. Even when the above problem had been fixed, the query from
the test case produced incorrect results due to some
DISTINCT/GROUP BY optimizations being performed by the
optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling inapplicable DISTINCT/GROUP BY optimizations
when the WITH ROLLUP modifier is present, and splitting the
const-wrapping part of JOIN::rollup_init() into a separate
method which is now invoked after make_join_statistics() when
the const tables are already known.
subquery returning multiple rows
Error handling was missing when handling subqueries in WHERE
and when assigning a SELECT result to a @variable.
This caused crashes.
Fixed by adding error handling code to both the WHERE
condition evaluation and the assignment to a @variable.
having clause...
The fix for bug 46184 was not complete: it did not cover
views using temporary tables and multiple tables in a FROM clause.
Fixed by reverting the fix for 46184 and adding a more general
check that runs at the right execution stage and covers all
of the non-supported cases.
Now PROCEDURE ANALYSE() on a non-top-level SELECT is also forbidden.
Updated analyse.test and subselect.test accordingly.
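A sketch of a derived-table query that is now rejected (t1 is a
hypothetical table):
SELECT 1 FROM (SELECT 1 FROM t1 PROCEDURE ANALYSE()) AS derived;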
Queries with nested outer joins may lead to crashes or
bad results because an internal data structure is not handled
correctly.
The optimizer uses bitmaps of nested JOINs to determine
if certain table can be placed at a certain place in the
JOIN order.
It maintains a bitmap describing in which JOINs
the last-placed table is nested.
When it puts a table it makes sure the bit of every JOIN that
contains the table in question is set (because JOINs can be nested).
It does that by recursively setting the bit for the next enclosing
JOIN when this is the first table in the JOIN and recursively
resetting the bit if it's the last table in the JOIN.
When it removes a table from the join order it should do the
opposite: recursively unset the bit if it's the only remaining
table in this join, and recursively set the bit when removing
the last table of a JOIN.
There was an error in how the bits were set for the upper levels:
when removing a table, the bit was set for all the enclosing
nested JOINs even if there were more tables left in the current JOIN
(which practically means that the upper nested JOINs were not affected).
Fixed by stopping the recursion at the relevant level.
Item_sum::set_aggregator() may be called multiple times during query preparation.
On subsequent calls: verify that the aggregator type is the same,
and re-use the existing Aggregator.
2630.39.1, 2630.28.29, 2630.34.3, 2630.34.2, 2630.34.1, 2630.29.29,
2630.29.28, 2630.31.1, 2630.28.13, 2630.28.10, 2617.23.14 and
some other minor revisions.
This patch implements:
WL#4264 "Backup: Stabilize Service Interface" -- all the
server prerequisites except si_objects.{h,cc} themselves (they can
be just copied over, when needed).
WL#4435: Support OUT-parameters in prepared statements.
(and all issues in the initial patches for these two
tasks, that were discovered in pushbuild and during testing).
Bug#39519: mysql_stmt_close() should flush all data
associated with the statement.
After execution of a prepared statement, send OUT parameters of the invoked
stored procedure, if any, to the client.
When using the binary protocol, send the parameters in an additional result
set over the wire. When using the text protocol, assign out parameters to
the user variables from the CALL(@var1, @var2, ...) specification.
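A sketch of the text-protocol behavior (procedure and variable names are
illustrative):
CREATE PROCEDURE p1(OUT v INT) SET v = 10;
PREPARE stmt FROM 'CALL p1(@result)';
EXECUTE stmt;
SELECT @result;  -- returns 10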
The following refactoring has been made:
- Protocol::send_fields() was renamed to Protocol::send_result_set_metadata();
- A new Protocol::send_result_set_row() was introduced to encapsulate
common functionality for sending row data.
- Signature of Protocol::prepare_for_send() was changed: this operation
does not need a list of items, the number of items is fully sufficient.
The following backward incompatible changes have been made:
- CLIENT_MULTI_RESULTS is now enabled by default in the client;
- CLIENT_PS_MULTI_RESULTS is now enabled by default in the client.
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
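A sketch of the kind of query that may no longer use the spatial index
(table, column and index names are hypothetical):
SELECT COUNT(*) FROM t1 FORCE INDEX (sp_idx)
WHERE g = GeomFromText('POINT(1 1)');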
BUG#38049 "incorrect rows estimations with references from preceding table"
(from revid:sergefp@mysql.com-20090126194259-ue20il3qro529l4d).
Compared to 6.0 where EXPLAIN indicates "Using index condition", here in join_optimizer.result
we see "Using where"; it's normal; 6.0 shows the same if disabling Index Condition Pushdown.
EXPLAIN EXTENDED warning.
The query optimizer searches for constant tables and optimizes them away. This
means that fields of such tables are replaced with their values, and in later
phases they are treated as constants. After this, constant tables are removed
from the query execution plan. Nevertheless, constant tables were shown in
the EXPLAIN EXTENDED warning, thus producing a query that might not be
equivalent to the original one.
Now the print_join function skips all tables that were optimized away when
printing the EXPLAIN EXTENDED warning. If all tables were optimized away it
produces a 'FROM dual' clause.
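For example, assuming t1 is a hypothetical single-row table that the
optimizer treats as constant:
EXPLAIN EXTENDED SELECT a FROM t1;
SHOW WARNINGS;  -- the rewritten query in the warning now reads: ... from dual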
----------------------------------------------------------
revno: 2630.13.6
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-3288
timestamp: Fri 2008-07-11 20:22:44 +0400
message:
WL#3288, step 1: ensure that the SQL layer always closes an open
cursor (rnd or index read) before closing a handler.
Temporary tables may set join->group to 0 even though there is
grouping. Also need to test if sum_func_count>0 when JOIN::exec()
decides whether to present results in a grouped manner.
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided if execution needed
to handle grouping, it was assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing variable "bool implicit_grouping" in the
JOIN object.
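A sketch of the affected query shape (t1 hypothetical):
SELECT MAX(a) FROM t1 ORDER BY MAX(a);  -- implicit grouping: exactly one row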
local storage for query cache).
We need more than one pointer in a thread to
represent the query cache, and net->query_cache_query cannot be used
any more (due to ABI compatibility issues and the different
lifetimes of NET and THD).
This is a backport of the following patch from 6.0:
----------------------------------------------------------
revno: 2476.1157.2
committer: kostja@bodhi.(none)
timestamp: Sat 2007-06-16 13:29:24 +0400
buffering is used
FORCE INDEX FOR ORDER BY now prevents the optimizer from
using join buffering. As a result the optimizer can use
indexed access on the first table and doesn't need to
sort the complete resultset at the end of the statement.
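A sketch (table and index names are hypothetical):
SELECT t1.a, t2.b FROM t1 FORCE INDEX FOR ORDER BY (idx_a)
JOIN t2 ON t1.id = t2.id ORDER BY t1.a;
-- join buffering is now disabled, so rows leave t1 in index order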
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and replaced it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it's not
set in all cases and doesn't contradict the check of depended_from).
Fixed by bringing the old condition back as a complement to the
new one.
The external 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
function,file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top-level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
field references
This error requires a combination of factors :
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. It will not
call make_join_statistics() as well. And make_join_statistics fills
in various structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery this subquery is evaluated in a normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT where the outer field reference refers to is not
completely initialized due to the "impossible WHERE" in this level
we'll get a NULL pointer reference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
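A hedged sketch of the triggering combination (all names illustrative):
SELECT MAX(t1.a),
       (SELECT t2.b FROM t2 WHERE t2.b = t1.b)  -- outer field in a sargable predicate
FROM t1 WHERE 1 = 0;                            -- "impossible WHERE"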
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
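A sketch of the deduction (column alias hypothetical):
CREATE TABLE t1 SELECT 0.12345678901234567890123456789012345 AS c1;
-- the 35-digit scale exceeds the 30-digit maximum; the fractional part
-- is now truncated instead of triggering the assertion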
Problem 1:
When the 'Using index' optimization is used, the optimizer may still - after
cost-based optimization - decide to use another index in order to avoid using
a temporary table. But when this happens, the flag to the storage engine to
read index only (not table) was still set. Fixed by resetting the flag in the
storage engine and TABLE structure in the above scenario, unless the new index
allows for the same optimization.
Problem 2:
When a 'ref' access method was chosen by the cost-based optimizer (when the
column is non-NULLable), it was assumed that 'quick' access methods needed no
initialization (since they are based on range scans). When the ORDER BY
optimization overrides the decision, however, it expects 'quick' to be
initialized and hence crashes.
Fixed in 5.1 (was fixed in 6.0 already) by initializing 'quick' even when
'ref' access is used.
bzr branch mysql-5.1-performance-version mysql-trunk # Summit
cd mysql-trunk
bzr merge mysql-5.1-innodb_plugin # which is 5.1 + Innodb plugin
bzr rm innobase # remove the builtin
Next step: build, test fixes.
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in a case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to IGNORE clause, so neither 'ok'
nor 'error' packet could be sent back to the client. This
condition led to hanging client when using 5.0 server, or
assertion failure in 5.1.
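A hedged sketch of the scenario (names hypothetical; the internal HEAP
table for the GROUP BY is assumed to overflow and be converted to MyISAM,
which then hits its MAX_ROWS limit):
INSERT IGNORE INTO t1 (b, cnt) SELECT b, COUNT(*) FROM t2 GROUP BY b;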
use partial primary key if another index can prevent filesort
The fix for bug #28404 causes the covering ordering indexes to be
preferred unconditionally over non-covering and ref indexes.
Fixed by comparing the cost of using a covering index to the cost of
using a ref index even for covering ordering indexes.
Added an assertion to clarify the condition the local variables should
be in.
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was in inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up having length less
than precision or decimals which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
object which is indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
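A sketch of a statement that used to fail (column alias hypothetical):
CREATE TABLE t1 SELECT
123456789012345678901234567890123456789012345678901234567890123456 AS c1;
-- a 66-digit DECIMAL constant, wider than the 65-digit column maximum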
The TABLE::reginfo.impossible_range is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening and this might lead to an empty result on complex queries:
a query might set the impossible_range flag on a table and when the query finishes,
all tables are returned back to the table cache. The next query that uses the table
with the impossible_range flag set and an index over the table will see the flag
and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (and thus closing the cursor etc),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure in an automatic
variable and restore it after it's done.
As a result at exiting filesort() we have a sort.io_cache filled in and
nothing else (as a result of close of the cursors at end of reading data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it's been used already by filesort() reading the data in). While doing that
a special case in the index merge destructor will clean up the sort.io_cache,
assuming it's an output of the index merge method and is not needed anymore.
As a result the code that tries to read the data back from the filesort output
will get no data in both memory and disk and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is a safe thing to do because all the structures are already cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
and the cleanup code being written in a way that tolerates repeating cleanups.
uninitialized variable used as subscript
Grouping select from a "constant" InnoDB table (a table
of a single row) joined with other tables caused a crash.
Holding on to the temporary inno hash index latch is an optimization in
many cases, but a pessimization in some others.
Release temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.
HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
it causes the type calculations in the IN function done at run time
to get unexpected results different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the set of
types of the IN/CASE arguments at compile time, so if any of the
arguments of the CASE function changes its type to a string it will
still be covered by the information prepared at compile time.
SQL_SELECT::test_quick_select
The crash was caused by an incomplete cleanup of JOIN_TAB::select
during the filesort of rows for GROUP BY clause inside a subquery.
Queries where a quick index access is replaced with filesort were
affected. For example:
SELECT 1 FROM
(SELECT COUNT(DISTINCT c1) FROM t1
WHERE c2 IN (1, 1) AND c3 = 2 GROUP BY c2) x
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after an incomplete cleanup.
The cleanup has been made complete to prevent crashes in the
SQL_SELECT::test_quick_select function.
'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. This may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking if an error occurred before trying to send EOF
to the client.
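A hedged sketch (t1.a is assumed NOT NULL and UNIQUE under non-strict SQL
mode, so the NULL produced by the aggregate-only row is coerced to a value
that can collide with an existing row):
INSERT INTO t1 (a) SELECT MAX(b) FROM t2 WHERE 1 = 0;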
EXPLAIN EXTENDED of a nested query containing an error:
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery.
That JOIN::destroy function closes and frees temporary
tables. However, temporary fields of these tables
may be listed in the st_select_lex::group_list of the outer
query, and that st_select_lex may not clean them
up properly. So, after the JOIN::destroy call, that
st_select_lex::group_list may contain Item_field
objects with dangling pointers to freed temporary
table Field objects. That caused a crash.
Original commentary:
Bug #37348: Crash in or immediately after JOIN::make_sum_func_list
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the singular case
in which we eliminated all the predicates at the AND level of the
WHERE clause. That could lead to wrong results.
Fix: replace (a=a AND a=a...) with TRUE if we eliminated all the
predicates.
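A sketch of the query shape (names hypothetical):
SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 2;
-- once every predicate at the AND level is eliminated, the level is
-- replaced with TRUE rather than dropped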
mysqld is optimized for the default
case (up to 64 indices); for a greater
number of indices it goes through a
different code path. As that code path
depends on a compile-time option and
cannot easily be covered in standard
tests, bitrot occurred. Key fields need
an explicit initialization in the non-
optimized case; this setup was
presumably not added when a new key
vector was added.
Changeset adds the necessary
initialisations.
No test case added due to dependence
on compile-time option.
connections
The problem is that tables can enter open table cache for a thread without
being properly cleaned up. This can happen if make_join_statistics() fails
to read a const table because of e.g. a deadlock. It does set a member of
TABLE structure to a value it allocates, but doesn't clean-up this setting
on error nor does it set the rest of the members in JOIN to allow for
automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open tables cache, it will get it with this
TABLE::reginfo.join_tab pointing to a memory area that's freed.
Fixed by making sure make_join_statistics() cleans up TABLE::reginfo.join_tab
on error.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
ORDER BY could cause a server crash
Dependent subqueries like
SELECT COUNT(*) FROM t1, t2 WHERE t2.b
IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the
number of outer rows.
The make_simple_join() function has been converted to a
JOIN class method in order to store the join_tab_reexec and
table_reexec values in the parent join only
(make_simple_join of tmp_join may access these values
via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is
an "out of memory" bug). See the bug #42037 page for test cases.
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery from the DUAL pseudotable, the resulting HAVING
condition is an expression on constant values, so further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING
clause, and that caused the bug.
To distinguish an optimized out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
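A minimal illustration of the wrong result:
SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL);  -- returned 1, now correctly 0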
The greedy optimizer tracks the current level of nested joins and the position
inside these by setting and maintaining a state that's global for the whole FROM
clause.
This state was correctly maintained inside the selection of the next partial plan
table (in best_extension_by_limited_search()).
greedy_search() also moves the current position by adding the last partial match
table when there are not enough tables in the partial plan found by
best_extension_by_limited_search().
This may require update of the global state variables that describe the current
position in the plan if the last table placed by greedy_search is not a top-level
join table.
Fixed by updating the state after placing the partial plan table in greedy_search()
in the same way this is done on entering the best_extension_by_limited_search().
Fixed the signature of the function called to update the state:
check_interleaving_with_nj
Table could be marked dependent because it is
either 1) an inner table of an outer join, or 2) it is a part of
STRAIGHT_JOIN. In case of STRAIGHT_JOIN table->maybe_null should not
be assigned. The fix is to set st_table::maybe_null to 'true' only
for those tables which are used in outer join.
Bug#37671 crash on prepared statement + cursor + geometry + too many open files!
If mysql_execute_command() returns an error, the materialized_cursor
object is freed.
is_rnd_inited is added to satisfy the rnd_end() assertion
(the handler may be uninitialized in some cases).
ONLY_FULL_GROUP_BY
The check for non-aggregated columns in queries with aggregate functions but
without GROUP BY was treating all the parts of the query as if they were in
the SELECT list.
Fixed by ignoring the non-aggregated fields in the WHERE clause.
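A sketch of a query that is accepted after the fix (t1 hypothetical):
SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
SELECT COUNT(*) FROM t1 WHERE a > 0;  -- a appears only in the WHERE clause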
fails after the first time
Two separate problems :
1. When flattening joins, the linked list used for name resolution
(next_name_resolution_table) was not updated.
Fixed by updating the pointers when extending the table list.
2. The items created by expanding a * (star) as a column reference
were marked as fixed, but no cached table was assigned to them
(unlike what Item_field::fix_fields does).
Fixed by assigning a cached table (so the re-preparation is done
faster).
Note that the fix for #2 hides the fix for #1 in most cases
(except when a table reference cannot be cached).
Server crashed during a sort order optimization
of a dependent subquery:
SELECT
(SELECT t1.a FROM t1, t2
WHERE t1.a = t2.b AND t2.a = t3.c
ORDER BY t1.a)
FROM t3;
The bitmap of tables used by a reference to an outer table
column has, in addition to the regular table bit,
the OUTER_REF_TABLE_BIT bit set.
The only_eq_ref_tables function traverses this map
bit by bit simultaneously with join->map2table list.
Obviously join->map2table never contains an entry
for the OUTER_REF_TABLE_BIT pseudo-table, so the
server crashed there.
The only_eq_ref_tables function has been modified
to traverse regular table bits only, like the
update_depend_map function does (resetting only the
OUTER_REF_TABLE_BIT would be enough, but the whole
set of PSEUDO_TABLE_BITS is reset there to be safe).
with COALESCE and JOIN
The server returned to a client the VARBINARY column type
instead of the DATE type for a result of the COALESCE,
IFNULL, IF, CASE, GREATEST or LEAST functions if that result
was filesorted in an anonymous temporary table during
the query execution.
For example:
SELECT COALESCE(t1.date1, t2.date2) AS result
FROM t1 JOIN t2 ON t1.id = t2.id ORDER BY result;
To create a column of various date/time types in a
temporary table the create_tmp_field_from_item() function
uses the Item::tmp_table_field_from_field_type() method
call. However, fields of the MYSQL_TYPE_NEWDATE type were
missed there, and the VARBINARY columns were created
by default.
The necessary condition has been added.
crashes server
When creating temporary table that contains aggregate functions a
non-reversible source transformation was performed to redirect aggregate
function arguments towards temporary table columns.
This caused EXPLAIN EXTENDED to fail because it was trying to resolve
references to the (freed) temporary table.
Fixed by preserving the original aggregate function arguments and
using them (instead of the transformed ones) for EXPLAIN EXTENDED.
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
used causes server crash.
When the loose index scan access method is used, the values of aggregated
functions are precomputed by it. Aggregation of such functions shouldn't
be performed
in this case and functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account and this led to
a crash if a query has MIN/MAX aggregate functions and employs temporary table
and loose index scan.
Now the JOIN::exec and the create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
Calling List<Cached_item>::delete_elements for the same list twice
caused a crash of the server in the function JOIN::cleanup.
Ensured that delete_elements() in JOIN::cleanup would be called only once.
Range scan in descending order for c <= <col> <= c type of
ranges was ignoring the DESC flag.
However some engines like InnoDB have the primary key parts
as a suffix for every secondary key.
When such a primary key suffix is used for ordering,
ignoring the DESC flag is not valid.
But we generally would like to do this because it's faster.
Fixed by performing only a reverse scan if the primary key is used.
Removed some dead code in the process.
- In QUICK_INDEX_MERGE_SELECT::read_keys_and_merge: when we got table->sort from Unique,
tell init_read_record() not to use rr_from_cache() because a) rowids are already sorted
and b) it might be that the data is used by filesort(), which will need record rowids
(which rr_from_cache() cannot provide).
- Fully de-initialize the table->sort read in QUICK_INDEX_MERGE_SELECT::get_next(). This fixes BUG#35477.
(bk trigger: file as fix for BUG#35478).
InnoDB table, where all selected columns
belong to the same unique index key, returns
incorrect results
Server executes some queries via QUICK_GROUP_MIN_MAX_SELECT
(MIN/MAX optimization for queries with GROUP BY or DISTINCT
clause) and that optimization implies loose index scan, so all
grouping is done by the QUICK_GROUP_MIN_MAX_SELECT::get_next
method.
The server does not set the precomputed_group_by flag for some
QUICK_GROUP_MIN_MAX_SELECT queries and duplicates the grouping with a
call to the end_send_group function.
Fix: when the test_if_skip_sort_order function selects loose
index scan as the best way to satisfy an ORDER BY/GROUP BY type
of query, the precomputed_group_by flag is set to use
end_send/end_write functions instead of end_send_group/
end_write_group functions.
with dependent subqueries
An IN subquery is executed on EXPLAIN when it's not correlated.
If the subquery required a temporary table for its execution
not all the internal structures were restored from pointing to
the items of the temporary table to point back to the items of
the subquery.
Fixed by restoring the ref array when temporary tables were used in
executing the IN subquery during EXPLAIN EXTENDED.
- Disable the "prefer full scan on clustered primary key over full scan
of any secondary key" rule introduced by BUG#35850.
- Update test results accordingly
(bk trigger: file this for BUG#35850)
The function test_if_skip_sort_order ignored any covering index used for ref
access of a table in a query with ORDER BY if this index was incompatible
with the ORDER BY list and there was another covering index compatible with
this list.
As a result sub-optimal execution plans were chosen for some queries with
ORDER BY clause.
impossible WHERE/HAVING clause
(subselect_single_select_engine::exec).
Allocation and initialization of the joined table list t1, t2, ... of
subqueries like:
NOT IN (SELECT ... FROM t1,t2,... WHERE 0)
is optimized out; however, the server still tries to traverse this list.
The code for executing an indexed ORDER BY was not setting all the
internal fields correctly when choosing to execute the ORDER BY over
an index.
Fixed by changing the access method to one that will use the
quick indexed access if one is selected when indexed ORDER BY
is chosen.
Mixing aggregate functions and non-grouping columns is not allowed in the
ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't thrown
because of an insufficient check.
In order to check more thoroughly, the new algorithm employs a list of outer
fields used in a sum function and a SELECT_LEX::full_group_by_flag.
Each non-outer field is checked to find out whether it's aggregated or not,
and the current select is marked accordingly.
All outer fields that are used under an aggregate function are added to the
Item_sum::outer_fields list and later checked by the Item_sum::check_sum_func
function.
View definition as SELECT ... FROM DUAL WHERE ... has
valid syntax, but use of such view in SELECT or
SHOW CREATE VIEW syntax causes unexpected syntax error.
The server omits the FROM DUAL clause when storing the view body
string in a .frm file for further evaluation.
However, the syntax of a SELECT-without-FROM query is more
restrictive than the SELECT FROM DUAL syntax, and doesn't
allow the WHERE clause.
NOTE: this syntax difference is not documented.
View registration procedure has been modified to
preserve original structure of view's body.
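A sketch of the affected definition (view name hypothetical):
CREATE VIEW v1 AS SELECT 1 FROM DUAL WHERE 1;
SHOW CREATE VIEW v1;  -- used to raise a syntax error; now works
SELECT * FROM v1;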
The bug is a regression introduced in 5.1 by the patch for bug28404.
Under some circumstances test_if_skip_sort_order() could leave some
data structures in an inconsistent state so that some parts of code
could assume the selected execution strategy for GROUP BY/DISTINCT as
a loose index scan (e.g. JOIN_TAB::is_using_loose_index_scan()), while
the actual strategy chosen was an ordered index scan, which led to
wrong data being returned.
Fixed test_if_skip_sort_order() so that when changing the type for a
join table, select->quick is reset not only for EXPLAIN, but for the
actual join execution as well, to not confuse code that depends on its
value to determine the chosen GROUP BY/DISTINCT strategy.
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
and Item_direct_ref constructor calls.
The order of the ref->field_name and ref->table_name arguments
of the Item_ref and Item_direct_ref constructor calls in the
fix_inner_refs function was inverted.
between 5.0 and 5.1.
The problem was that in the patch for Bug#11986 it was decided
to store original query in UTF8 encoding for the INFORMATION_SCHEMA.
This approach however turned out to be quite difficult to implement
properly. The main problem is to preserve the same IS-output after
dump/restore.
So, the fix is to roll back to the previous functionality, but also
to fix it to support multi-character-set queries properly. The idea
is to generate INFORMATION_SCHEMA-query from the item-tree after
parsing view declaration. The IS-query should:
- be completely in UTF8;
- not contain character set introducers.
For more information, see WL4052.