Problem:
When we execute a query that has a subquery with GROUP BY and ORDER BY and
selects a BLOB column, the result is a memory leak.
Analysis:
In the case of a subquery that has a GROUP BY on a BLOB and an ORDER BY on
another field, where the BLOB is not a key, we allocate a temporary buffer in
copy_field to hold the BLOB value. This copy_field can be referenced by two
JOIN objects, so while freeing it we have to make sure it is
not deleted twice.
The double deletion of tmp_table_param.copy_field was addressed by two patches.
One by Kostja:
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.
and the other by Oleksandr:
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).
Both patches were committed in different branches, and after merging both
ended up in place, but Kostja's patch is no longer needed because Oleksandr's
patch handles the double deletion.
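As an illustration only, a minimal sketch of the query shape involved
(hypothetical table and column names, not the original test case):
  CREATE TABLE t1 (a INT, b BLOB);
  INSERT INTO t1 VALUES (1, 'x'), (2, 'y');
  -- The subquery groups on the BLOB (which is not a key) and orders by
  -- another field, so a copy_field buffer is allocated for the BLOB value.
  SELECT (SELECT MAX(a) FROM t1 GROUP BY b ORDER BY MAX(a) LIMIT 1) FROM t1;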
sql/sql_select.cc:
Bug#13726751: tmp_join cleanup is not necessary, as later in the code we take care of cleaning up tmp_join's copy_field.
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
sql/item_subselect.cc:
Addon: fix for Bug#11829691; the item could be created in the
runtime memroot, so we need to use real_item() instead.
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.
ANALYSIS:
For the query in question, the range optimizer chooses
the first index because a range is present (on 'a'). The optimizer
then checks for an index that would return records in sorted
order for the 'group by' clause.
While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the number of rows estimated above is wrong (which
results in choosing the second index).
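A rough sketch of the schema and query shape described above (hypothetical
names and values; the real test case may differ):
  CREATE TABLE t1 (
    a INT,
    b INT,
    c INT,
    KEY k1 (a),       -- the range optimizer picks this for the range on 'a'
    KEY k2 (c, b, a)  -- considered for sorted 'group by' output
  );
  -- With a LIMIT, test_if_skip_sort_order prefers k2 based on
  -- quick_condition_rows, ignoring that ordering by the aggregate still
  -- requires a temporary table and a scan of the whole index.
  SELECT c, b, SUM(a) FROM t1 WHERE a > 10
  GROUP BY c, b ORDER BY SUM(a) LIMIT 5;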
FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.
This fix is backported from 5.6, where the problem was fixed as
part of the changes for WL#5558.
mysql-test/r/subselect.result:
Changes for Bug#11762052 result in the correct number of rows.
sql/sql_select.cc:
Do not pass the actual 'limit' value if 'need_tmp' is true.
Problem: Some queries with subqueries and a HAVING clause that
consists only of a column not in the select or grouping lists cause
the server to crash.
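Roughly the query shape that triggers the crash (hypothetical names, not the
original test case):
  CREATE TABLE t1 (f1 INT, f2 INT);
  -- The subquery's HAVING consists only of a column that appears in
  -- neither its select list nor its GROUP BY list; the outer ORDER BY
  -- leads to filesort, which walks the subquery's HAVING tree.
  SELECT f1 FROM t1
  WHERE f1 IN (SELECT f1 FROM t1 GROUP BY f1 HAVING f2)
  ORDER BY f2;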
During parsing, an Item_ref is constructed for the HAVING column. The
name of the column is resolved when JOIN::prepare calls fix_fields()
on its having clause. Since the column is not mentioned in the select
or grouping lists, a ref pointer is not found and a new Item_field is
created instead. The Item_ref is replaced by the Item_field in the
tree of HAVING clauses. Since the tree consists only of this item, the
pointer that is updated is JOIN::having. However,
st_select_lex::having still points to the Item_ref as the root of the
tree of HAVING clauses.
The bug is triggered when doing filesort for create_sort_index(). When
find_all_keys() calls select->cond->walk() it eventually reaches
Item_subselect::walk() where it continues to walk the having clauses
from lex->having. This means that it finds the Item_ref instead of the
new Item_field, and Item_ref::walk() tries to dereference the ref
pointer, which is still null.
The crash is reproducible only in 5.5, but the problem lies latent in
5.1 and trunk as well.
Fix: After calling fix_fields on the having clause in JOIN::prepare(),
set select_lex::having to point to the same item as JOIN::having.
This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
query is executed as a prepared statement. The Item_field is created
in the runtime arena when the query is prepared, and the pointer to
the item is saved by st_select_lex::fix_prepare_information() and
brought back as a dangling pointer when the query is executed, after
the runtime arena has been reclaimed.
Fix: Backport fix from trunk that switches to the permanent arena
before calling Item_ref::fix_fields() in JOIN::prepare().
sql/item.cc:
Set context when creating Item_field.
sql/sql_select.cc:
Switch to permanent arena and update select_lex->having.
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file. Since this is a
user-requested interruption, no spurious error message should be logged
in the server log. This patch removes the error message from
the log.
approved by joh and tatjana
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.
Analysis:
When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.
The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check,
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately a use of
its copy_field.
However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.
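A rough sketch of the query shape involved (hypothetical; not the original
test query from bug #57197):
  CREATE TABLE t1 (a INT, b INT);
  -- The outer query needs a second temporary table for DISTINCT after
  -- GROUP BY; both user-variable assignments end up in items_to_copy, and
  -- re-evaluating them re-runs the subquery after its cleanup.
  SELECT DISTINCT
    @v1 := MIN(a),
    @v2 := (SELECT COUNT(*) FROM t1 GROUP BY b LIMIT 1)
  FROM t1 GROUP BY b;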
Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
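A sketch of the scenario (hypothetical schema):
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));
  -- Loose index scan ("Using index for group-by") resolves the grouping
  -- while reading into the explicitly requested temporary table, so the
  -- temp table must not attempt to group again.
  SELECT SQL_BUFFER_RESULT a, MIN(b) FROM t1 GROUP BY a;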
The problem was in the code (update_const_equal_items()) that marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.
The solution is to mark as constant only top-level equalities connected by AND.
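For illustration, a hypothetical condition of the kind involved:
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT, PRIMARY KEY (c));
  -- t2.c = 1 sits under OR, so it must not mark the key part on t2.c as
  -- constant; only top-level equalities connected by AND may do that.
  SELECT * FROM t1, t2
  WHERE t1.a = t2.c AND (t2.c = 1 OR t1.b = 2);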
ORDER BY COUNT(*) LIMIT.
PROBLEM:
With respect to the problem in the bug description, we
exhibit different behavior for the two tables
presented, because innodb statistics (rec_per_key
in this case) are updated for the first table
and not for the second one. As a result the
query plan gets changed in test_if_skip_sort_order
to use an 'index' scan. Hence the difference in the
explain output. (NOTE: We can reproduce the problem
with the first table by reducing the number of tuples
and changing the table structure.)
The varied output for the query on the second table
is a consequence of the query plan change.
When a query plan is changed to use an 'index' scan,
after the call to test_if_skip_sort_order, we set
keyread to TRUE immediately. If for some reason
we drop this index scan for a filesort later on,
we fetch only the keys, not the entire tuples.
As a result we would see junk values in the result set.
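A sketch of a query of this kind (hypothetical schema; the actual test case
is in func_group_innodb):
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a), KEY k2 (a, b)) ENGINE=InnoDB;
  -- test_if_skip_sort_order may first pick the covering index k2 and set
  -- keyread; if the plan later falls back to a filesort using a quick
  -- range on k1, columns outside the used index would come back as junk.
  SELECT a, COUNT(*) FROM t1 WHERE a > 0
  GROUP BY a ORDER BY COUNT(*) LIMIT 2;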
Following is the code flow:
Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan
Call test_if_skip_sort_order second time
-Index is not chosen (note that we do not pass the
actual limit value second time. Hence we do not choose
index scan second time which in itself is a bug fixed
in 5.6 with WL#5558)
-goto filesort
Call filesort
-Create quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of
the index
-as a result, the required columns are not fetched
FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.
mysql-test/r/func_group_innodb.result:
Added test result for Bug#12713907
mysql-test/t/func_group_innodb.test:
Added test case for Bug#12713907
sql/sql_select.cc:
Remove the call to set_keyread as we do it from access
functions 'join_read_first' and 'join_read_last'
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation
of key length during creation of a reference for
a sort order index. The problem is that
keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled,
but the used_tables parameter (of the create_ref_for_key() function) does
not have it. So key parts which have OUTER_REF_TABLE_BIT
are omitted, and this could lead to an incorrect key length
calculation (zero key length).
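As an illustration, a hypothetical query of the shape involved:
  CREATE TABLE t1 (a INT, KEY k1 (a)) ENGINE=InnoDB;  -- non-unique index
  CREATE TABLE t2 (b INT);
  -- The ref for the subquery's sort order index is built from the outer
  -- reference t2.b, whose used_tables carries OUTER_REF_TABLE_BIT.
  SELECT (SELECT a FROM t1 WHERE t1.a = t2.b ORDER BY a LIMIT 1)
  FROM t2;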
mysql-test/r/subselect_innodb.result:
test result
mysql-test/t/subselect_innodb.test:
test case
sql/sql_select.cc:
added OUTER_REF_TABLE_BIT to the used_tables parameter
for create_ref_for_key() function.
storage/innobase/handler/ha_innodb.cc:
added assertion, request from Inno team
storage/innodb_plugin/handler/ha_innodb.cc:
added assertion, request from Inno team
The main problem was a bug in CSV where it provided wrong statistics (it
claimed the table was empty when it wasn't).
I also fixed wrong freeing of blobs in the CSV handler. (Any call to
handler::read_first_row() on a CSV table with blobs would fail.)
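A minimal sketch of the failing pattern (hypothetical table):
  CREATE TABLE t1 (a BLOB NOT NULL) ENGINE=CSV;
  INSERT INTO t1 VALUES ('x'), ('y');
  -- Before the fix, CSV could claim the table was empty, letting the
  -- optimizer treat it as a const table and read its first row via
  -- handler::read_first_row(), which then failed for blobs.
  SELECT * FROM t1;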
mysql-test/r/csv.result:
Added new test case
mysql-test/r/partition_innodb.result:
Updated test results after fixing bug with impossible partitions and const tables
mysql-test/t/csv.test:
Added new test case
sql/sql_select.cc:
Cleaned up code for handling of partitions.
Fixed also a bug where we didn't treat a table with impossible partitions as a const table.
storage/csv/ha_tina.cc:
Allocate blobroot once.
IS EXECUTED TWICE FROM P
This bug is a duplicate of bug 12567331, which was pushed to the
optimizer backporting tree on 2011-06-11. This is just a back-port of
the fix. Both test cases are included as they differ somewhat.
If the sorted table belongs to a dependent subquery then the function
create_sort_index() should not clear TABLE::select and TABLE::select->quick
for this table after the sort of the table has been performed, because
these members are needed for the second execution of the subquery.
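A sketch of the shape in question (hypothetical names; the actual test case
is for lp:780425):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- The dependent subquery is sorted once per outer row; clearing
  -- TABLE::select after the first sort broke the later executions.
  SELECT a FROM t1
  WHERE a = (SELECT b FROM t2 WHERE b >= t1.a ORDER BY b LIMIT 1);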
mysql-test/r/select.result:
Test case for lp:780425
mysql-test/r/select_pkeycache.result:
lp:780425
mysql-test/t/select.test:
lp:780425
sql/sql_select.cc:
Added DBUG_ASSERTs to prove some logic, so that later the code can be simplified.
Set implicit_grouping if we delete a GROUP BY to signal do_select() that a grouping needs to be done.
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it can lead to a crash.
The problem is the following:
An attempt to obtain table statistics for a subselect table in a killed query
fails with an error. So JOIN::optimize() for the subquery fails, but this
does not prevent further subquery evaluation.
At the first subquery execution JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE and this prevents
further subquery evaluation. At the second call
JOIN::optimize() does not happen, as 'JOIN::optimized' is TRUE,
and in the case of an uncacheable subquery the 'executed' flag is set
to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and as an
indication of subquery execution. This seems wrong,
as the flag can be reset.
mysql-test/r/error_simulation.result:
test case
mysql-test/t/error_simulation.test:
test case
sql/item_subselect.cc:
added new flag subselect_single_select_engine::optimize_error
which is used for error detection which could happen at optimize
stage.
sql/item_subselect.h:
added new flag subselect_single_select_engine::optimize_error
sql/sql_select.cc:
test case
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
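A rough sketch of the scenario (hypothetical procedure and tables):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT PRIMARY KEY);
  DELIMITER |
  CREATE PROCEDURE p1()
  BEGIN
    -- The first SELECT reset THD::used_tables in mysql_select(), so the
    -- DISTINCT optimization in the second SELECT saw a stale value.
    SELECT a FROM t1;
    SELECT DISTINCT t2.b FROM t1, t2 WHERE t1.a = t2.b;
  END|
  DELIMITER ;
  CALL p1();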
mysql-test/r/sp.result:
test case
mysql-test/t/sp.test:
test case
sql/sql_base.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_insert.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_prepare.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_select.cc:
THD::used_tables is replaced with LEX::used_tables