field references
This error requires a combination of factors:
1. An "impossible WHERE" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top-level sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. In particular,
it will not call make_join_statistics(), which fills in various
structures for each table referenced.
When processing the result of the "impossible WHERE", the query must
still send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However, if this SELECT list happens to contain a correlated
subquery, this subquery is evaluated in the normal evaluation mode.
And if this correlated subquery references a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
treat the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at that level,
this results in a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since,
for outer references to constant tables, Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
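A minimal sketch of a query combining the three factors above (hypothetical
tables and columns; the committed test case is the authoritative repro):
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT, KEY (a));
  -- Impossible WHERE in the outermost SELECT, an aggregate in its SELECT
  -- list, and a correlated subquery whose WHERE uses the outer field t1.a
  -- as a top-level sargable predicate against the indexed column t2.a.
  SELECT MAX(t1.b),
         (SELECT COUNT(*) FROM t2 WHERE t2.a = t1.a) AS c
    FROM t1
   WHERE 1 = 0;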
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion, as decimal values can
have a higher precision than is permitted for a table column. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
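A minimal illustration of the scenario (hypothetical column name): deducing a
DECIMAL column from a decimal value whose scale exceeds what a table column
can hold used to hit the assertion.
  -- 40 fractional digits: a scale larger than the 30 allowed for DECIMAL columns.
  CREATE TABLE t1 SELECT
    1.1111111111111111111111111111111111111111 AS c1;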
mysql-test/r/type_newdecimal.result:
Add test case result for Bug#45261. Also, update test case to
reflect that an additive operation increases the precision of
the resulting type by 1.
mysql-test/t/type_newdecimal.test:
Add test case for Bug#45261
sql/field.cc:
Added DBUG_ASSERT to ensure object's invariant is maintained.
Implement method to create a field to hold a decimal value
from an item.
sql/field.h:
Explain member variable. Add method to create a new decimal field.
sql/item.cc:
The precision should only be capped when storing the value
in a table. Also, this makes it impossible to calculate the
integer part if Item::decimals (the scale) is larger than the
precision.
sql/item.h:
Simplify calculation of integer part.
sql/item_cmpfunc.cc:
Do not limit the precision. It will be capped later.
sql/item_func.cc:
Use new method for allocating a new decimal field.
Add a specialized method for retrieving the precision
of a user variable item.
sql/item_func.h:
Add method to return the precision of a user variable.
sql/item_sum.cc:
Use new method for allocating a new decimal field.
sql/my_decimal.h:
The integer part could be improperly calculated for a decimal
with 31 digits in the fractional part.
sql/sql_select.cc:
Use the new method, which truncates the integer or fractional parts
as needed.
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an 'ok'
nor an 'error' packet could be sent back to the client. This
condition led to a hanging client with a 5.0 server, or an
assertion failure in 5.1.
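Roughly the shape of the failing statement (hypothetical tables t1 and t2;
illustrative only, since reproducing it requires the implicit temporary table
to overflow its MAX_ROWS limit, e.g. with an artificially small
myisam_data_pointer_size):
  -- GROUP BY forces materialization into a temporary table; IGNORE turns the
  -- resulting 'table is full' error into a warning.
  INSERT IGNORE INTO t2 SELECT a, COUNT(*) FROM t1 GROUP BY a;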
mysql-test/r/insert_select.result:
Added a test case for bug #46075.
mysql-test/t/insert_select.test:
Added a test case for bug #46075.
sql/sql_select.cc:
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Problem 1:
When the 'Using index' optimization is used, the optimizer may still decide,
after cost-based optimization, to use another index in order to avoid using
a temporary table. But when this happened, the flag telling the storage engine
to read the index only (not the table) was left set. Fixed by resetting the
flag in the storage engine and the TABLE structure in the above scenario,
unless the new index allows for the same optimization.
Problem 2:
When a 'ref' access method was chosen by the cost-based optimizer (for a
non-NULLable column), it was assumed that no initialization of 'quick' access
methods was needed (since they are based on range scans). When the ORDER BY
optimization overrides that decision, however, it expects 'quick' to be
initialized and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when there is
'ref' access.
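A rough sketch of the kind of query involved (hypothetical table and index
names; the committed test case in mysql-test/t/order_by.test is the
authoritative repro):
  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL, KEY k_a (a), KEY k_b (b));
  -- 'ref' access on k_a may be chosen first; the ORDER BY optimization may
  -- then switch to k_b to avoid sorting, which is where the stale
  -- read-index-only flag (problem 1) or the uninitialized 'quick' (problem 2)
  -- could bite.
  SELECT a FROM t1 WHERE a = 1 ORDER BY b;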
mysql-test/r/order_by.result:
Bug#46454: Test result.
mysql-test/t/order_by.test:
Bug#46454: Test case.
sql/sql_select.cc:
Bug#46454:
Problem 1 fixed in make_join_select()
Problem 2 fixed in test_if_skip_sort_order()
sql/table.h:
Bug#46454: Added comment to field.
use partial primary key if another index can prevent filesort
The fix for bug #28404 caused covering ordering indexes to be
preferred unconditionally over non-covering and ref indexes.
Fixed by comparing the cost of using a covering index to the cost of
using a ref index, even for covering ordering indexes.
Added an assertion to clarify the condition the local variables should
be in.
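A sketch of the access-method choice in question (hypothetical schema,
illustrative only; the committed test cases are the authoritative repro):
  CREATE TABLE t1 (
    a INT NOT NULL, b INT NOT NULL, c INT NOT NULL,
    PRIMARY KEY (a, b), KEY k_c (c)
  ) ENGINE=InnoDB;
  -- The primary key is a covering ordering index for this query; before the
  -- fix it could be preferred unconditionally over the (possibly much
  -- cheaper) ref access on k_c.
  SELECT a, b FROM t1 WHERE c = 5 ORDER BY a, b;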
mysql-test/include/mix1.inc:
Bug #36259: fixed a non-stable test case
mysql-test/r/innodb_mysql.result:
Bug #36259 and #45828 : test case
mysql-test/t/innodb_mysql.test:
Bug #36259 and #45828 : test case
sql/sql_select.cc:
Bug #36259 and #45828 : don't consider covering indexes superior to
ref keys.
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was in inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up having a length less
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
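A minimal illustration of the two cases (hypothetical column names): a long
DECIMAL constant and a long DECIMAL expression in CREATE TABLE ... SELECT,
both of which used to misbehave as described above.
  -- A DECIMAL constant with more than 65 digits (69 here).
  CREATE TABLE t1 SELECT
    123456789012345678901234567890123456789012345678901234567890123456789.0
      AS c1;
  -- An additive expression involving a long DECIMAL constant, handled by
  -- Item_func::tmp_table_field().
  CREATE TABLE t2 SELECT
    1 + 123456789012345678901234567890123456789012345678901234567890123456789.0
      AS c1;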
mysql-test/r/type_newdecimal.result:
Added a test case for bug #45262.
mysql-test/t/type_newdecimal.test:
Added a test case for bug #45262.
sql/item.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_cmpfunc.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_func.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Do not truncate decimal precision to DECIMAL_MAX_PRECISION
for additive expressions involving long DECIMAL constants.
3. Fixed an inconsistency in how DECIMAL constants and
expressions are handled for CREATE ... SELECT.
sql/item_func.h:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_sum.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/my_decimal.h:
Do not truncate precision to DECIMAL_MAX_PRECISION
when calculating length in
my_decimal_precision_to_length() if 'truncate' parameter
is FALSE.
sql/sql_select.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Use more correct logic when adjusting the value's length.
The TABLE::reginfo.impossible_range is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening, which might lead to an empty result for complex queries:
a query might set the impossible_range flag on a table, and when the query
finishes, all tables are returned to the table cache. The next query that uses
the table (with the impossible_range flag still set) and an index over the
table will see the flag and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
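An illustrative two-statement sketch of the failure mode (hypothetical table;
the committed test case in mysql-test/t/select.test is the authoritative
repro):
  CREATE TABLE t1 (a INT, KEY (a));
  -- This condition can be detected as an impossible range and may leave the
  -- stale flag behind when the TABLE object returns to the table cache.
  SELECT * FROM t1 WHERE a = 1 AND a = 2;
  -- A later query using the same cached TABLE and an index over it could
  -- then wrongly return an empty result.
  SELECT * FROM t1 WHERE a = 1;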
mysql-test/r/select.result:
A test case for bug #45266: Uninitialized variable lead to an empty result.
mysql-test/t/select.test:
A test case for bug #45266: Uninitialized variable lead to an empty result.
sql/sql_base.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/sql_select.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/structs.h:
Bug#45266: Uninitialized variable lead to an empty result.
A comment is added.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursor, etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure into an automatic
variable and restore it after it's done.
As a result, when exiting filesort() we have sort.io_cache filled in and
nothing else (because the cursors were closed at the end of reading data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing
that, a special case in the index merge destructor will clean up sort.io_cache,
assuming it's an output of the index merge method and is not needed anymore.
As a result, the code that tries to read the data back from the filesort output
will find no data either in memory or on disk, and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is safe because all the structures are already cleaned up upon
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and the cleanup code is written in a way that tolerates repeated cleanups.
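The scenario is roughly the following (hypothetical schema and buffer size; the
committed test case in mysql-test/t/index_merge.test is the authoritative
repro): an index_merge plan (OR over two indexed columns) followed by a
filesort whose sort buffer is too small to hold all the sort keys.
  CREATE TABLE t1 (a INT, b INT, c INT, KEY (a), KEY (b));
  -- A small sort buffer forces filesort to spill into sort.io_cache.
  SET SESSION sort_buffer_size = 32768;
  SELECT * FROM t1 WHERE a = 1 OR b = 1 ORDER BY c;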
mysql-test/r/index_merge.result:
Bug #44810: test case
mysql-test/t/index_merge.test:
Bug #44810: test case
sql/sql_select.cc:
Bug #44810: preserve the io_cache produced by filesort while cleaning up
the index merge quick access method (QUICK_INDEX_MERGE_SELECT).
uninitialized variable used as subscript
Grouping select from a "constant" InnoDB table (a table
of a single row) joined with other tables caused a crash.
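Roughly the shape of the crashing query (hypothetical tables, illustrative
only; the committed test case is the authoritative repro):
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;  -- single row
  CREATE TABLE t2 (a INT, b INT, KEY (a)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 1);
  -- t1 is treated as a "constant" table; the grouped join could crash in
  -- test_if_skip_sort_order() before the fix.
  SELECT t2.a, SUM(t2.b) FROM t1 JOIN t2 ON t2.a = t1.a GROUP BY t2.a;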
mysql-test/r/innodb_mysql.result:
Added test case for bug #44886.
mysql-test/t/innodb_mysql.test:
Added test case for bug #44886.
sql/sql_select.cc:
Bug #44886: SIGSEGV in test_if_skip_sort_order() -
uninitialized variable used as subscript
1. The test_if_order_by_key function returned an uninitialized
used_key_parts parameter in the case of a "constant" InnoDB
table. The calling function uses this parameter's value as
an array index, so it sometimes caused a crash.
The test_if_order_by_key function has been modified
to set used_key_parts to 0 (no need for ordering).
2. The test_if_skip_sort_order function has been
modified to accept a zero used_key_parts value and
to prevent array access by a negative index.
Holding on to the temporary inno hash index latch is an optimization in
many cases, but a pessimization in some others.
Release temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.
sql/ha_myisam.cc:
Let go of (inno, for now) latch when doing MyISAM-repair.
(optimize passes through repair.) ("Stuck" in "Repair with
keycache".)
sql/sql_insert.cc:
Let go of (inno, for now) latch when doing CREATE...SELECT
in select_insert::send_data() -- it might take a while.
("stuck" in "Sending data")
sql/sql_select.cc:
Release temporary (inno, for now) latch on
- free_tmp_table() (this can take surprisingly long, "removing tmp table")
- create_myisam_from_heap() (HEAP table overflowing onto disk as MyISAM,
"converting HEAP to MyISAM")
HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
the type calculations done by the IN function at run time
can get unexpected results, different from what was prepared at compile time.
The CASE ... WHEN ... THEN ... statement had a similar problem,
which was solved by artificially adding a STRING argument to the set of
types of the IN/CASE arguments at compile time, so that if any of the
arguments of the CASE function changes its type to a string it is
still covered by the information prepared at compile time.
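A sketch of the kind of statement involved (hypothetical table, illustrative
only; the committed test cases cover the different affected types):
  CREATE TABLE t1 (a DATETIME, b INT);
  -- The grouped DATETIME expression can be cached as a string for GROUP BY,
  -- while IN prepared its comparison types at compile time.
  SELECT a, SUM(b) FROM t1 GROUP BY a
  HAVING a IN ('2009-01-01 00:00:00', '2009-01-02 00:00:00');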
mysql-test/include/mix1.inc:
Bug #44399: extended the test to cover the different types
mysql-test/r/func_in.result:
Bug #44399: test case
mysql-test/r/innodb_mysql.result:
Bug #44399: extended the test to cover the different types
mysql-test/t/func_in.test:
Bug #44399: test case
sql/item.cc:
Bug #44399: Implement typed caching for GROUP BY
sql/item.h:
Bug #44399: Implement typed caching for GROUP BY
sql/item_cmpfunc.cc:
Bug #44399: remove the special case
sql/sql_select.cc:
Bug #44399: Implement typed caching for GROUP BY
SQL_SELECT::test_quick_select
The crash was caused by an incomplete cleanup of JOIN_TAB::select
during the filesort of rows for a GROUP BY clause inside a subquery.
Queries where a quick index access is replaced with filesort were
affected. For example:
SELECT 1 FROM
(SELECT COUNT(DISTINCT c1) FROM t1
WHERE c2 IN (1, 1) AND c3 = 2 GROUP BY c2) x
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after an incomplete cleanup.
The cleanup has been made complete to prevent crashes in the
SQL_SELECT::test_quick_select function.
mysql-test/include/mix1.inc:
Add test case for bug #44290.
mysql-test/r/innodb_mysql.result:
Add test case for bug #44290.
sql/sql_select.cc:
Bug #44290: explain crashes for subquery with distinct in
SQL_SELECT::test_quick_select
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after an incomplete cleanup.
The cleanup has been made complete to prevent crashes in the
SQL_SELECT::test_quick_select function.
'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. This may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking whether an error occurred before trying to send EOF
to the client.
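A minimal sketch of the scenario (hypothetical tables): the SELECT list
contains only an aggregate, so a row is produced even though the WHERE clause
matches nothing, and a repeated insert can then hit a duplicate-key error.
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (k INT, UNIQUE KEY (k));
  -- COUNT(*) over zero rows still yields one row containing 0.
  INSERT INTO t2 SELECT COUNT(*) FROM t1 WHERE 1 = 0;
  -- The second insert produces the same row and raises a duplicate-key error.
  INSERT INTO t2 SELECT COUNT(*) FROM t1 WHERE 1 = 0;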
mysql-test/r/insert_select.result:
Bug#44306: Test result
mysql-test/t/insert_select.test:
Bug#44306: Test case
sql/sql_select.cc:
Bug#44306: Fix
EXPLAIN EXTENDED of a nested query containing the error
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery.
That JOIN::destroy function closes and frees temporary
tables. However, temporary fields of these tables
may be listed in the st_select_lex::group_list of the outer
query, and that st_select_lex may not clean them up
properly. So, after the JOIN::destroy call, that
st_select_lex::group_list may contain Item_field
objects with dangling pointers to freed temporary
table Field objects. That caused a crash.
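Roughly the shape of the failing statement (hypothetical tables and column
names, including the deliberately non-existent column; the committed test case
in mysql-test/t/subselect3.test is the authoritative repro):
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT);
  -- The unknown column in the subquery raises error 1054 and forces
  -- JOIN::destroy() on the malformed subquery.
  EXPLAIN EXTENDED
  SELECT a, (SELECT no_such_column FROM t2) AS s
  FROM t1 GROUP BY b;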
mysql-test/r/subselect3.result:
Added test case for bug #37362.
mysql-test/t/subselect3.test:
Added test case for bug #37362.
sql/sql_select.cc:
Bug #37362: Crash in do_field_eq
The JOIN::destroy function has been modified to
clean up temporary table column items.
Original commentary:
Bug #37348: Crash in or immediately after JOIN::make_sum_func_list
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
mysql-test/r/func_group.result:
Backport bug #37348 fix 5.1 --> 5.0.
mysql-test/t/func_group.test:
Backport bug #37348 fix 5.1 --> 5.0.
sql/item.cc:
Backport bug #37348 fix 5.1 --> 5.0.
sql/item.h:
Backport bug #37348 fix 5.1 --> 5.0.
sql/sql_select.cc:
Backport bug #37348 fix 5.1 --> 5.0.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the special case
where all the predicates at the AND level of the WHERE clause are eliminated.
That may lead to wrong results.
Fix: replace (a=a AND a=a...) with TRUE if all the predicates have been
eliminated.
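A minimal sketch of the affected query shape (hypothetical table): every
predicate at the AND level is a trivially true equality that equality
propagation removes, and the remaining branch of the OR is false.
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1);
  -- Before the fix this could return no rows instead of the row (1, 1).
  SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 0;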
mysql-test/r/select.result:
Fix for bug #42957: no results from
select where .. (col=col and col=col) or ... (false expression)
- test result.
mysql-test/t/select.test:
Fix for bug #42957: no results from
select where .. (col=col and col=col) or ... (false expression)
- test case.
sql/sql_select.cc:
Fix for bug #42957: no results from
select where .. (col=col and col=col) or ... (false expression)
- when replacing equality predicates with multiple equality items, check
whether all the predicates at the AND level have been eliminated and
replace them with TRUE if so.
mysqld is optimized for the default
case (up to 64 indices); for a greater
number of indices it goes through a
different code path. As that code path
is a compile-time option and cannot
easily be covered in standard tests,
bitrot occurred. Key fields need an
explicit initialization in the non-
optimized case; this setup was
presumably not added when a new key
vector was added.
The changeset adds the necessary
initialisations.
No test case is added due to the
dependence on a compile-time option.
sql/sql_select.cc:
Init merge_keys as well. If we don't,
things blow up badly outside of the
optimized-for-64-keys case!
sql/table.cc:
Init merge_keys as well. If we don't,
things blow up badly outside of the
optimized-for-64-keys case!
connections
The problem is that tables can enter the open table cache for a thread without
being properly cleaned up. This can happen if make_join_statistics() fails
to read a const table because of e.g. a deadlock. It does set a member of the
TABLE structure to a value it allocates, but it neither cleans up this setting
on error nor sets the rest of the members in JOIN to allow for
automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open table cache, it will get the table with
TABLE::reginfo.join_tab pointing to a memory area that has been freed.
Fixed by making sure make_join_statistics() cleans up TABLE::reginfo.join_tab
on error.
mysql-test/r/innodb_mysql.result:
Bug #42419: test case
mysql-test/t/innodb_mysql-master.opt:
Bug #42419: increase the timeout so it covers the conservative
sleep 3 in the test
mysql-test/t/innodb_mysql.test:
Bug #42419: test case
sql/sql_select.cc:
Bug #42419: clean up the members of TABLE on failure in
make_join_statistics()
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
ORDER BY could cause a server crash
Dependent subqueries like
SELECT COUNT(*) FROM t1, t2 WHERE t2.b
IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the
number of outer rows.
The make_simple_join() function has been converted into a
JOIN class method so that it stores the join_tab_reexec and
table_reexec values in the parent join only
(make_simple_join of tmp_join may access these values
via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is
an "out of memory" bug). See the bug #42037 page for test cases.
sql/sql_select.cc:
Bug #42037: Queries containing a subquery with DISTINCT and
ORDER BY could cause a server crash
The make_simple_join() function has been converted into a
JOIN class method so that it stores the join_tab_reexec and
table_reexec values in the parent join only.
sql/sql_select.h:
Bug #42037: Queries containing a subquery with DISTINCT and
ORDER BY could cause a server crash
1. The make_simple_join() function has been converted into a
JOIN class method.
2. The type of the JOIN::table_reexec field has been changed from
TABLE** to TABLE *table_reexec[1]: this field was always
NULL or a pointer to a one-element array of pointers, so
the pointer to a pointer has been replaced with a single pointer
and an unnecessary memory allocation has been eliminated.
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery over the DUAL pseudo-table, the resulting HAVING
condition is an expression on constant values, so further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING,
and that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
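Per the description above, a statement of this shape always evaluated to TRUE
before the fix (values are illustrative):
  -- Should return 0, since ROW(1, 2) does not match (3, 4).
  SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL) AS should_be_0;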
mysql-test/r/subselect3.result:
Added test case for bug #39069.
mysql-test/t/subselect3.test:
Added test case for bug #39069.
sql/sql_select.cc:
Bug #39069: <row constructor> IN <table-subquery> seriously
messed up
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.