The query was re-written *after* we had tagged it with NON_AGG_FIELD_USED.
Remove the flag before continuing.
mysql-test/r/explain.result:
Update test case for Bug#48295.
mysql-test/r/subselect.result:
New test case.
mysql-test/t/explain.test:
Update test case for Bug#48295.
mysql-test/t/subselect.test:
New test case.
sql/item.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
sql/item_subselect.cc:
Remove non_agg_field_used when we rewrite query '1 < some (...)' => '1 < max(...)'
sql/item_sum.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
sql/mysql_priv.h:
Remove unused #defines.
sql/sql_lex.cc:
Initialize new member variables.
sql/sql_lex.h:
Replace full_group_by_flag with two boolean flags,
and introduce accessors for manipulating them.
sql/sql_select.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
The loop that iterated over a subquery's references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementor failed to reset the variable after each iteration, so a field
that was not directly aggregated appeared to be.
Fixed by resetting the variable upon each new iteration.
Also fix bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.
The root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had already been updated to reflect a 'skip' of the sort order.
This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.
The original 'save_quick' was then restored after the QEP had been modified,
which caused:
- An incorrect 'precomputed_group_by= TRUE' may have been set,
  and not reverted, as part of the already modified QEP (Bug#59308)
- A 'select->quick' might have been created which we fail to delete (bug#59110).
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes')
is moved to the end of test_if_skip_sort_order(), and done after *all* 'test_if_skip'
checks have been performed - including the 'check_reverse_order:' checks.
The refactoring contains no intentional changes to the logic which
has been moved to the end of the function.
Furthermore, a smaller part of the fix addresses the handling of the
select->quick object which may already exist when we call
test_if_skip_sort_order() (save_quick), and of new select->quick's
created during test_if_skip_sort_order():
- Before a new select->quick may be created by calling ::test_quick_select(), we
  set 'select->quick= 0' to avoid that ::test_quick_select() prematurely
  deletes the save_quick. (After this call we may have both a 'save_quick'
  and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a
  'save_quick' and a 'select->quick' have been changed to goto's to the
  exit points 'skiped_sort_order:' or 'need_filesort:' where we
  decide which of the QUICK_SELECTs to keep, and delete the other.
The root cause for this bug is that the optimizer tries to detect and
optimize the special case:
'<field> BETWEEN c1 AND c1' and handle this as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to a key Field, and value[] to refer to a [low...high]
constant pair. value[0] and value[1] were then compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to cheat add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves the optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields(), which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
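To make the affected predicate shape concrete, here is a minimal, purely
hypothetical illustration (table name, key and data are made up). With the
old code, the predicate '1 BETWEEN 1 AND f1' could be collapsed to 'f1 = 1'
and lose the row with f1 = 2, although it should match every row with f1 >= 1:
CREATE TABLE t1 (f1 INT, KEY (f1));
INSERT INTO t1 VALUES (0), (1), (2);
SELECT * FROM t1 WHERE 1 BETWEEN 1 AND f1;
DROP TABLE t1;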
tmptable needed
The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.
Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
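A minimal, hypothetical sketch (table and data are made up) of a query shape
where DEFAULT() meets a temporary table, here forced by the GROUP BY:
CREATE TABLE t1 (a INT DEFAULT 42, b INT);
INSERT INTO t1 VALUES (1, 1), (2, 2);
SELECT DEFAULT(a), b FROM t1 GROUP BY b;
DROP TABLE t1;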
to crash mysqld".
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.
handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem where a nested materialized derived table is used
before being populated, leading to an incorrect result
We have several modes in which we should disable subquery evaluation.
The reasons for disabling are different. It could be
uselessness of the evaluation, as in the case of 'CREATE VIEW'
or 'PREPARE stmt'; or we should disable subquery evaluation
because tables are not locked yet, as happens in bug#54475; or
too early evaluation of subqueries can lead to a wrong result,
as happened in Bug#19077.
The main problem is that if subquery items are treated as const
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:
1. PREPARE stmt;
We use UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set the
subquery becomes non-const and evaluation does not happen.
2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
process FRM files
We use LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields() and ::fix_length_and_dec()
methods where we use Item::val_...() calls for const items.
3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
The check of this field in appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.
Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;
4. Derived tables with subquery where the subquery
is evaluated before table locking (Bug#54475, Bug#52157)
The suggested solution is the following:
-Introduce a new field LEX::context_analysis_only with the following
possible flags:
#define CONTEXT_ANALYSIS_ONLY_PREPARE 1
#define CONTEXT_ANALYSIS_ONLY_VIEW 2
#define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
context analysis operation
-Item_subselect::const_item() returns a
result depending on LEX::context_analysis_only.
If context_analysis_only is set then we return
FALSE, which means that the subquery is non-const.
As all subquery types are wrapped by Item_subselect,
this allows us to make a subquery non-const when
necessary.
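As a purely illustrative example of mode 2 above (table and view names are
made up): CREATE VIEW should only perform context analysis, so the seemingly
constant subquery below must not be evaluated while the view is being created:
CREATE TABLE t1 (i INT);
CREATE VIEW v1 AS SELECT i FROM t1 WHERE i = (SELECT MAX(i) FROM t1);
DROP VIEW v1;
DROP TABLE t1;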
mysql-test/r/derived.result:
test case
mysql-test/r/multi_update.result:
test case
mysql-test/r/view.result:
test case
mysql-test/suite/innodb/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb/t/innodb_multi_update.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_multi_update.test:
test case
mysql-test/t/derived.test:
test case
mysql-test/t/multi_update.test:
test case
mysql-test/t/view.test:
test case
sql/item.cc:
--removed unnecessary code
sql/item_cmpfunc.cc:
--removed unnecessary checks
--THD::is_context_analysis_only() is replaced with LEX::is_ps_or_view_context_analysis()
sql/item_func.cc:
--refactored context analysis checks
sql/item_row.cc:
--removed unnecessary checks
sql/item_subselect.cc:
--removed unnecessary code
--added DBUG_ASSERT into Item_subselect::exec()
which asserts that subquery execution can not happen
if LEX::context_analysis_only is set, i.e. at context
analysis stage.
--Item_subselect::const_item()
Return FALSE if LEX::context_analysis_only is set.
It prevents subquery evaluation in ::fix_fields &
::fix_length_and_dec at context analysis stage.
sql/item_subselect.h:
--removed unnecessary code
sql/mysql_priv.h:
--Added new set of flags.
sql/sql_class.h:
--removed unnecessary code
sql/sql_derived.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_lex.cc:
--init LEX::context_analysis_only field
sql/sql_lex.h:
--New LEX::context_analysis_only field
sql/sql_parse.cc:
--removed unnecessary code
sql/sql_prepare.cc:
--removed unnecessary code
--added LEX::context_analysis_only initialization/cleanup
sql/sql_select.cc:
--refactored context analysis checks
sql/sql_show.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_view.cc:
--added LEX::context_analysis_only initialization/cleanup
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
> Bug #55568: user variable assignments crash server when used
> within query
>
> The server could crash after materializing a derived table
> which requires a temporary table for grouping.
>
> When destroying the temporary table used to execute a query for
> a derived table, JOIN::destroy() did not clean up Item_fields
> pointing to fields in the temporary table. This led to
> dereferencing a dangling pointer when printing out the items
> tree later in the outer SELECT.
>
> The solution is an addendum to the patch for bug37362: in
> addition to cleaning up items in tmp_all_fields3, do the same
> for items in tmp_all_fields1, since now we have an example
> where this is necessary.
sql/field.cc:
Make sure field->table_name is not set to NULL in
Field::make_field() to avoid assertion failure in
Item_field::make_field() after cleaning up items
(the assertion fired in udf.test when running
the test suite with the patch applied).
sql/sql_select.cc:
In addition to cleaning up items in tmp_all_fields3, do the
same for items in tmp_all_fields1.
Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
Introduce a new helper function to avoid code duplication in
JOIN::destroy().
and related small fixes.
mysql-test/t/user_var.test:
test for bug
sql/field_conv.cc:
Per the C standard, memcpy() has undefined behaviour if the source and
destination overlap, which is the case when to->ptr == from->ptr.
sql/item_func.cc:
In the case of BUG#56138, entry->value==ptr in which case memcpy()
has undefined results per the C standard.
sql/sql_select.cc:
Work around a bug in old gcc
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results and
2) grouping by the function returning a blob caused
an unexpected "Duplicate entry" error and wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
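Hypothetical sketches of the two query shapes described above (the data is
made up; the actual regression tests are in func_time.test and type_blob.test):
CREATE TABLE t1 (a VARCHAR(10) NOT NULL, b TEXT);
INSERT INTO t1 VALUES ('10:00:00', 'x'), ('not a time', 'x');
-- 1st bug: TIME_TO_SEC() yields NULL for the invalid value, so the result
-- field must be nullable even though column a is not.
SELECT TIME_TO_SEC(a) FROM t1 GROUP BY TIME_TO_SEC(a);
-- 2nd bug: the function result is a blob-typed field, so no index may be
-- created for it in the GROUP BY intermediate table.
SELECT CONCAT(b, 'y') FROM t1 GROUP BY CONCAT(b, 'y');
DROP TABLE t1;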
mysql-test/r/func_time.result:
Test case for bug #52160.
mysql-test/r/type_blob.result:
Test case for bug #52160.
mysql-test/t/func_time.test:
Test case for bug #52160.
mysql-test/t/type_blob.test:
Test case for bug #52160.
sql/item_timefunc.h:
Bug #52160: crash and inconsistent results when grouping
by a function and column
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
sql/sql_select.cc:
Bug #52160: crash and inconsistent results when grouping
by a function and column
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the corresponding tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause we calculate the
group buffer length, see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real table and has LONG type.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting a wrong memory
area in the do_field_int() function which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and can not be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We can not remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we can not use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
can use a Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
--remove wrong code
--use a Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join
The create_sort_index() function overwrites the original JOIN_TAB::type field.
At re-execution of the subquery the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. It misleads test_if_skip_sort_order() and
the function tries to find a suitable key for the order, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on re-execution
for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is outer
(see the setup_table_map() function).
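A purely illustrative sketch of such a query (table, data and query shape are
made up; the real test case is in explain.test): a fulltext-driven subquery
with an ORDER BY, examined under EXPLAIN:
CREATE TABLE t1 (a INT, b TEXT, FULLTEXT KEY (b)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1, 'some text'), (2, 'more text');
EXPLAIN SELECT (SELECT a FROM t1 t2
                WHERE MATCH(t2.b) AGAINST ('text')
                ORDER BY t2.a LIMIT 1) AS sub
FROM t1;
DROP TABLE t1;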
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make the subquery uncacheable in case of EXPLAIN. This allows keeping the
original JOIN_TAB::type (see JOIN::save_join_tab) and restoring it
on re-execution.
sql/sql_select.cc:
-restore JOIN_TAB structures for the subselect on re-execution for EXPLAIN
-update the TABLE::maybe_null field as it could have
the value maybe_null==TRUE based on the assumption
that this join is outer (see the setup_table_map() function).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
The subquery executes twice, at the top-level JOIN::optimize and ::execute stages.
At the first execution the create_sort_index() function is called and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, and at the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinit the HANDLER::ft_handler field before re-execution of the subquery.
mysql-test/r/fulltext.result:
test case
mysql-test/t/fulltext.test:
test case
sql/item_func.cc:
reinit ft_handler before re-execution of subquery
sql/item_func.h:
Fixed method name
sql/sql_select.cc:
reinit ft_handler before re-execution of subquery
ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
There may be a separate ORDER BY clause however, which mandates reading the
whole 'sort table' anyway. But the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
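A hypothetical sketch of the query shape in question (table, index and data
are made up): GROUP BY can be satisfied by the index on a, but the ORDER BY on
the computed column forces a temporary table, so a LIMIT-based row estimate
for the 'sort table' would be misleading:
CREATE TABLE t1 (a INT, b INT, KEY (a));
INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
EXPLAIN SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a ORDER BY cnt LIMIT 1;
DROP TABLE t1;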
inited==INDEX
When an error occurs while sending the data in a temporary table there was no
cleanup performed. This caused a failed assertion in the case when different
access methods were used for populating the table vs. retrieving the data from
the table if IGNORE was specified and sql_safe_updates = 0. In this case
execution continues, but the handler expects to continue with the access
method used for row retrieval.
Fixed by doing the cleanup even if errors occur.
Bug#46754: 'rows' field doesn't reflect partition pruning
The 'rows' field in the EXPLAIN result
was evaluated to the number of rows when the table was opened
(not from the table cache), and only the partitions left
after pruning were updated with their correct number
of rows.
The evaluation of the 'rows' field was using handler::records(),
which is a potentially expensive call and ignores the partition
pruning.
The fix was to use the handler's stats.records after updating it
with ::info(HA_STATUS_VARIABLE) instead.
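A hypothetical sketch (partition layout and data are made up) of the kind of
statement affected; with the fix the 'rows' column of EXPLAIN PARTITIONS
should reflect only the partitions that survive pruning:
CREATE TABLE t1 (a INT)
PARTITION BY RANGE (a)
(PARTITION p0 VALUES LESS THAN (10),
 PARTITION p1 VALUES LESS THAN (20));
INSERT INTO t1 VALUES (1), (2), (3), (11), (12);
EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 10;
DROP TABLE t1;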
mysql-test/r/partition_pruning.result:
updated result
mysql-test/t/partition_pruning.test:
Added test.
sql/sql_select.cc:
Use ::info + stats.records instead of ::records().
called twice in a row
Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.
When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration leading
to unnecessary "fixing" of the name resolution list which in
turn could lead to a loop (i.e. circularly-linked part) in that
list. This was causing problems on subsequent execution when
used together with stored procedures or prepared statements.
Fixed by making sure fix_name_res is reset on every loop
iteration.
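One possible shape of such a statement, purely illustrative (tables and join
conditions are made up): a nested join executed repeatedly through a prepared
statement, which is where a circular name-resolution list would bite:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE TABLE t3 (a INT);
PREPARE stmt FROM
  'SELECT * FROM t1 LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a';
EXECUTE stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
DROP TABLE t1, t2, t3;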
mysql-test/r/join.result:
Added a test case for bug #53544.
mysql-test/t/join.test:
Added a test case for bug #53544.
sql/sql_select.cc:
Make sure fix_name_res is reset on every loop iteration.
After the fix for bug 39653 the shortest available secondary index was used for
a full table scan. The primary clustered key was used only if no secondary index
could be used. However, when the chosen secondary index includes all fields of the
table being scanned, it's better to use the primary index, since the amount of
data to scan is the same but the primary index is clustered.
Now the find_shortest_key function takes this into account.
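A hypothetical sketch (table and data are made up; the real test case is in
innodb_mysql.test): on InnoDB the secondary key (b) implicitly includes the
primary key, so it covers every column, and a plain table dump like the one
below can now prefer the clustered primary key for its full scan:
CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY (b)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, 1), (2, 2);
EXPLAIN SELECT * FROM t1;
DROP TABLE t1;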
mysql-test/suite/innodb/r/innodb_mysql.result:
Added a test case for the bug#55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added a test case for the bug#55656.
sql/sql_select.cc:
Bug #55656: mysqldump can be slower after bug #39653 fix.
The find_shortest_key function now prefers the clustered primary key
if the chosen secondary key includes all fields of the table.
within query
The server could crash after materializing a derived table
which requires a temporary table for grouping.
When destroying the temporary table used to execute a query for
a derived table, JOIN::destroy() did not clean up Item_fields
pointing to fields in the temporary table. This led to
dereferencing a dangling pointer when printing out the items
tree later in the outer SELECT.
The solution is an addendum to the patch for bug37362: in
addition to cleaning up items in tmp_all_fields3, do the same
for items in tmp_all_fields1, since now we have an example
where this is necessary.
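A purely illustrative sketch of the query shape described above (table, data
and variable name are made up; the real test cases were added to join.test):
a user-variable assignment on top of a derived table that needs a grouping
temporary table:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (2);
SELECT @v := a FROM (SELECT a FROM t1 GROUP BY a) AS dt;
DROP TABLE t1;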
mysql-test/r/join.result:
Added test cases for bug#55568 and a duplicate bug #54468.
mysql-test/t/join.test:
Added test cases for bug#55568 and a duplicate bug #54468.
sql/field.cc:
Make sure field->table_name is not set to NULL in
Field::make_field() to avoid assertion failure in
Item_field::make_field() after cleaning up items
(the assertion fired in udf.test when running
the test suite with the patch applied).
sql/sql_select.cc:
In addition to cleaning up items in tmp_all_fields3, do the
same for items in tmp_all_fields1.
Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
Introduce a new helper function to avoid code duplication in
JOIN::destroy().
The server was not checking for errors generated during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code in the places where
copy_funcs() is called.
Test case added.
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered, was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This causes all errors to be ignored.
However, not all errors can be ignored. Some, such as ER_FIELD_SPECIFIED_TWICE
will cause the INSERT to fail no matter what. But since lex->no_error was set,
the critical errors were ignored, the INSERT failed and neither OK nor the
error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
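A hypothetical sketch of the statement shape (table and values are made up):
the duplicated column list must still fail with ER_FIELD_SPECIFIED_TWICE,
IGNORE notwithstanding:
CREATE TABLE t1 (a INT, b INT);
INSERT IGNORE INTO t1 (a, a) SELECT 1, 2 UNION SELECT 3, 4;
DROP TABLE t1;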
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
The problem is that the HAVING condition evaluates const
parts of the condition even though the condition references
aggregate functions. Table t1 becomes a const table
after make_join_statistics() because of table1.pk = 1; HAVING is
then transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of const parts of HAVING if the
HAVING condition references aggregate functions.
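A hypothetical sketch loosely following the description above (table and data
are made up): pk = 1 makes t1 a const table, yet the aggregate in HAVING must
not be folded away and evaluated during optimization:
CREATE TABLE t1 (pk INT PRIMARY KEY, a INT);
INSERT INTO t1 VALUES (1, 10);
SELECT MAX(a) FROM t1 WHERE pk = 1 HAVING MAX(a) < 7;
DROP TABLE t1;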
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
skip evaluation of const parts of HAVING if the
HAVING condition references aggregate functions.
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys.
But used_key_parts is overwritten with the best_key_parts
value, which prevents correct selection of the index
access method. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
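A purely illustrative sketch (table, key layout and data are made up; the
real test case is in innodb_mysql.test): on InnoDB the secondary key (a) is
extended with the primary key, so a descending range scan may use more key
parts than best_key_parts suggests:
CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY (a)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, 1), (2, 1), (3, 2);
SELECT * FROM t1 WHERE a = 1 ORDER BY a DESC, pk DESC;
DROP TABLE t1;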
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
preserve used_key_parts value for further use in QUICK_SELECT_DESC
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard sort order
for some zero length tuples.
Fix: skip sort order in such a case
(zero length NOT NULL string functions).
mysql-test/r/select.result:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test result.
mysql-test/t/select.test:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test case.
sql/sql_select.cc:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- disregard sort order for zero length NOT NULL string functions
along with zero length NOT NULL fields.
strict aliasing violations.
One somewhat major source of strict-aliasing violations and
related warnings is the SQL_LIST structure. For example,
consider its member function `link_in_list`, which takes
a pointer to a pointer of type T (any type) as a pointer to
a pointer to unsigned char. Dereferencing this pointer, which
is done to reset the next field, violates strict-aliasing
rules and might cause problems for surrounding code that
uses the next field of the object being added to the list.
The solution is to use templates to parametrize the SQL_LIST
structure in order to dereference the pointers with compatible
types. As a side bonus, it becomes possible to remove quite
a few casts related to accessing data members of SQL_LIST.
sql/handler.h:
Use the appropriate template type argument.
sql/item.cc:
Remove now-unnecessary cast.
sql/item_subselect.cc:
Remove now-unnecessary casts.
sql/item_sum.cc:
Use the appropriate template type argument.
Remove now-unnecessary cast.
sql/mysql_priv.h:
Move SQL_LIST structure to sql_list.h
Use the appropriate template type argument.
sql/sp.cc:
Remove now-unnecessary casts.
sql/sql_delete.cc:
Use the appropriate template type argument.
Remove now-unnecessary casts.
sql/sql_derived.cc:
Remove now-unnecessary casts.
sql/sql_lex.cc:
Remove now-unnecessary casts.
sql/sql_lex.h:
SQL_LIST now takes a template type argument which must
match the type of the elements of the list. Use forward
declaration when the type is not available, it is used
in pointers anyway.
sql/sql_list.h:
Rename SQL_LIST to SQL_I_List. The template parameter is
the type of object that is stored in the list.
sql/sql_olap.cc:
Remove now-unnecessary casts.
sql/sql_parse.cc:
Remove now-unnecessary casts.
sql/sql_prepare.cc:
Remove now-unnecessary casts.
sql/sql_select.cc:
Remove now-unnecessary casts.
sql/sql_show.cc:
Remove now-unnecessary casts.
sql/sql_table.cc:
Remove now-unnecessary casts.
sql/sql_trigger.cc:
Remove now-unnecessary casts.
sql/sql_union.cc:
Remove now-unnecessary casts.
sql/sql_update.cc:
Remove now-unnecessary casts.
sql/sql_view.cc:
Remove now-unnecessary casts.
sql/sql_yacc.yy:
Remove now-unnecessary casts.
storage/myisammrg/ha_myisammrg.cc:
Remove now-unnecessary casts.
There are two problems:
1. In the simplify_joins function we calculate table dependencies. If the STRAIGHT_JOIN
hint is used for the whole SELECT we do not take it into account, and as a result some
dependencies might be lost. This leads to an incorrect table order being returned by
the join_tab_cmp_straight() function.
2. make_join_statistics() calculates the transitive closure for the relations a particular
JOIN_TAB is 'dependent on'.
We aggregate the dependent table_map of a JOIN_TAB by adding dependencies from other
tables which we depend on. However, this may also cause new dependencies to become
available after we have completed processing a certain JOIN_TAB.
Both these problems affect condition pushdown; as a result a condition might be pushed
into the wrong table, which leads to a crash, or even be omitted, which leads to a wrong result.
The fix:
1. Use the modified 'transitive closure' algorithm provided by Ole John Aske
2. Update table dependencies in simplify_joins according to the
global STRAIGHT_JOIN hint.
Note: the patch also fixes bugs 46091 & 51492
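One possible shape of an affected statement, purely illustrative (tables,
data and conditions are made up): a global STRAIGHT_JOIN combined with an
outer join over a nested join, where a lost dependency could let the WHERE
condition be pushed to a table that is read too early:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE TABLE t3 (a INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (1);
INSERT INTO t3 VALUES (1);
SELECT STRAIGHT_JOIN * FROM t2
  LEFT JOIN (t1 JOIN t3 ON t1.a = t3.a) ON t2.a = t1.a
WHERE t3.a IS NULL OR t3.a = 1;
DROP TABLE t1, t2, t3;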
mysql-test/r/join_outer.result:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
1. Use the modified 'transitive closure' algorithm provided by Ole John Aske
2. Update table dependencies in simplify_joins according to the
global STRAIGHT_JOIN hint.
The fix actually reverts the change introduced
by the patch for bug 51494.
The fact is that the patches for bugs 52177 & 48419
fix bugs 51194 & 50575 as well.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
reverted wrong fix for bug 51494
greedy_search optimizer_search_depth=0
The algorithm inside restore_prev_nj_state failed to
properly update the counters within the NESTED_JOIN
tree. The counter was decremented each time a table in the
node was removed from the QEP, the correct thing to do being
only to decrement it when the last table in the child node
was removed from the plan. This led to node counters
getting negative values, and the plan thus appeared
impossible. An assertion caught this.
Fixed by not recursing up the tree unless the last table in
the join nest node is removed from the plan.
WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as top
level SELECTs.
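A minimal, hypothetical sketch (tables and data are made up; the real test
case is in explain.test): t2 is empty and accessed as a const table inside the
subquery, so its WHERE is impossible after reading const tables, and EXPLAIN
must stop there just as a top-level SELECT would:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT PRIMARY KEY);
INSERT INTO t1 VALUES (1);
EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT a FROM t2 WHERE t2.a = 1);
DROP TABLE t1, t2;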
mysql-test/r/explain.result:
Added a test case for bug #48419.
mysql-test/r/ps.result:
Updated test results to reflect the new (and more correct)
"Extra" comments in execution plans.
mysql-test/t/explain.test:
Added a test case for bug #48419.
sql/sql_select.cc:
There is no point in excluding subqueries from checking
for identically false WHERE conditions.
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense,
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
mysql-test/r/join.result:
Added a test case for bug #50335.
mysql-test/t/join.test:
Added a test case for bug #50335.
sql/sql_select.cc:
Removing the assertion in eq_ref_table() as it does not make
any sense. We also have to take care of the 'found' counter so
that we count multiple references only once. We rely on this
fact later in eq_ref_table().
union...order by (select... where...)
The problem is that mysql tries to materialize and
cache the scalar sub-queries at JOIN::optimize
even for EXPLAIN, where the number of columns is
totally different from what's expected.
Fixed by not executing the scalar subqueries
for EXPLAIN.
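One possible shape of such a statement, purely illustrative (table and data
are made up): a UNION whose global ORDER BY is a scalar subquery, run under
EXPLAIN:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2);
EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t1
ORDER BY (SELECT a FROM t1 WHERE a = 1);
DROP TABLE t1;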
The problem is that we can not use make_cond_for_table().
This function relies on the used_tables() condition,
which is not set properly for subqueries.
As a result the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The 'remove_eq_conds()'
algorithm relies on the const_item() value, and it allows
handling subqueries in the right way.
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use remove_eq_conds() function instead
of make_cond_for_table() function.