Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest remains
the HAVING part. Extraction of the HAVING part does not take into
account the fact that some conditions might be non-const but
have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by
the make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument to make_cond_for_table().
This allows extracting the elements which belong to the sorted
table and, in addition, the elements which are independent
subqueries.
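For illustration only (not the test case added in having.test), a query of the
following shape shows the situation: the HAVING clause combines a condition on
the sorted table with an independent subquery whose used_tables map is 0:
CREATE TABLE t1 (f1 INT, f2 INT);
CREATE TABLE t2 (f3 INT);
-- The second HAVING predicate is an independent subquery (used_tables == 0);
-- with used_tables as the third argument it was cut off before sorting.
SELECT f1, MAX(f2) FROM t1
GROUP BY f1
HAVING f1 > 0 AND (SELECT MAX(f3) FROM t2) > 0
ORDER BY f1 DESC;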
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use (table_map) 0 instead of used_tables as the
third argument to make_cond_for_table().
This allows extracting the elements which belong to the sorted
table and, in addition, the elements which are independent
subqueries.
Valgrind warnings were caused by comparing index values to an uninitialized field.
mysql-test/r/subselect.result:
New test cases.
mysql-test/t/subselect.test:
New test cases.
sql/opt_sum.cc:
Add thd to opt_sum_query, enabling it to test for errors.
If we have a non-nullable index, we cannot use it to match NULL values,
since set_null() will be ignored and we might compare uninitialized data
(see the illustrative query after this file list).
sql/sql_select.cc:
Add thd to opt_sum_query, enabling it to test for errors.
sql/sql_select.h:
Add thd to opt_sum_query, enabling it to test for errors.
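An illustrative query (hypothetical table, not one of the new test cases in
subselect.test) for the opt_sum.cc change: the MIN/MAX optimization must not use
a NOT NULL index to look up a NULL value, because set_null() on the key field is
ignored:
CREATE TABLE t1 (a INT NOT NULL, KEY(a));
INSERT INTO t1 VALUES (1), (2);
-- Resolving MIN(a) through the index while matching "a IS NULL" against the
-- NOT NULL key would end up comparing uninitialized data.
SELECT MIN(a) FROM t1 WHERE a IS NULL;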
There are two problems with ANALYSE():
1. Memory leak
It happens because do_select() can overwrite the
JOIN::procedure field (with a zero value in our case) and the
JOIN destructor does not free the memory allocated for
JOIN::procedure. The fix is to save the original JOIN::procedure
before the do_select() call and restore it after do_select()
execution.
2. Wrong result
If the ANALYSE() procedure is used for a statement with a LIMIT clause,
it could return an empty result set. This happens because of a missing
analyse::end_of_records() call. The first end_send() call
returns NESTED_LOOP_QUERY_LIMIT and the second call of end_send() with
the end_of_records flag enabled does not happen. The fix is to return
NESTED_LOOP_OK from end_send() if a procedure is active.
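An illustrative statement (hypothetical table, not the test case added in
analyse.test) for the second problem:
CREATE TABLE t1 (i INT);
INSERT INTO t1 VALUES (1), (2), (3);
-- LIMIT makes end_send() return NESTED_LOOP_QUERY_LIMIT after the first row,
-- so analyse::end_of_records() was never reached and the result came back empty.
SELECT i FROM t1 LIMIT 1 PROCEDURE ANALYSE();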
mysql-test/r/analyse.result:
test case
mysql-test/t/analyse.test:
test case
sql/sql_select.cc:
--save the original JOIN::procedure before the do_select() call and
restore it after do_select() execution.
--return NESTED_LOOP_OK from end_send() if a procedure is active
mysql-test/r/union.result:
Added test for lp:732124
mysql-test/t/union.test:
Added test for lp:732124
sql/sp_rcontext.cc:
Updated function definition for ::send_data()
sql/sp_rcontext.h:
Updated function definition for ::send_data()
sql/sql_analyse.cc:
Test if send_data() returned an error
sql/sql_class.cc:
Updated function definition for ::send_data()
sql/sql_class.h:
Changed select_result::send_data(List<Item> &items) to return -1 in the case of a duplicate row that should not be counted as part of LIMIT
sql/sql_cursor.cc:
Check if send_data returned error
sql/sql_delete.cc:
Updated function definition for ::send_data()
sql/sql_insert.cc:
Updated function definition for ::send_data()
sql/sql_select.cc:
Don't count rows which send_data() tells you to ignore
sql/sql_union.cc:
Inform caller that the row should be ignored. This is the real bug fix for lp:732124
sql/sql_update.cc:
Updated function definition for ::send_data()
Changed some String.ptr() -> String.c_ptr() for Strings that are not guaranteed to end with \0
Removed some c_ptr() usage from parameters to functions that take ptr & length
Use preallocated buffers to avoid calling malloc() for most operations.
sql/event_db_repository.cc:
alias is now a String
sql/event_scheduler.cc:
c_ptr -> c_ptr_safe() to avoid warnings from valgrind.
sql/events.cc:
c_ptr -> c_ptr_safe() to avoid warnings from valgrind.
c_ptr -> ptr() as function takes ptr & length
sql/field.cc:
alias is now a String
sql/field.h:
alias is now a String
sql/ha_partition.cc:
alias is now a String
sql/handler.cc:
alias is now a String
ptr() -> c_ptr() as string is not guaranteed to be \0 terminated
sql/item.cc:
Store error parameter in a separate buffer to ensure a correct error message
sql/item_func.cc:
ptr() -> c_ptr_safe() as string is not guaranteed to be \0 terminated
sql/item_sum.h:
Use my_strtod() instead of my_atof() to not have to make string \0 terminated
sql/lock.cc:
alias is now a String
sql/log.cc:
c_ptr() -> ptr() as function takes ptr & length
sql/log_event.cc:
c_ptr_quick() -> ptr() as we only want to get the pointer to String buffer
sql/opt_range.cc:
ptr() -> c_ptr() as string is not guaranteed to be \0 terminated
sql/opt_table_elimination.cc:
alias is now a String
sql/set_var.cc:
ptr() -> c_ptr() as string is not guaranteed to be \0 terminated
c_ptr() -> c_ptr_safe() to avoid warnings from valgrind.
c_ptr() -> ptr() as function takes ptr & length
Simplify some code.
sql/sp.cc:
c_ptr() -> ptr() as function takes ptr & length
sql/sp_rcontext.cc:
alias is now a String
sql/sql_base.cc:
alias is now a String.
Here we save a realloc() for most alias usage.
sql/sql_class.cc:
Use size descriptor for printf() to avoid accessing bytes outside of buffer
sql/sql_insert.cc:
Change allocation of TABLE as it now contains a String
_ptr() -> ptr() as function takes ptr & length
sql/sql_load.cc:
Use preallocated buffers to avoid calling malloc() for most operations.
sql/sql_parse.cc:
Use c_ptr_safe() to ensure string is \0 terminated.
sql/sql_plugin.cc:
c_ptr_quick() -> ptr() as function takes ptr & length
sql/sql_select.cc:
alias is now a String
sql/sql_show.cc:
alias is now a String
sql/sql_string.h:
Added move() function to change who owns the string (owner does the free)
sql/sql_table.cc:
alias is now a String
c_ptr() -> c_ptr_safe() to avoid warnings from valgrind.
sql/sql_test.cc:
c_ptr() -> c_ptr_safe() to avoid warnings from valgrind.
alias is now a String
sql/sql_trigger.cc:
c_ptr() -> c_ptr_safe() to avoid warnings from valgrind.
Use field->init() to set up pointers to the alias.
sql/sql_update.cc:
alias is now a String
sql/sql_view.cc:
ptr() -> c_ptr_safe() as string is not guaranteed to be \0 terminated
sql/sql_yacc.yy:
ptr() -> c_ptr() as string is not guaranteed to be \0 terminated
sql/table.cc:
alias is now a String
sql/table.h:
alias is now a String
storage/federatedx/ha_federatedx.cc:
Remove extra 1 byte alloc that is automatically done by strmake()
Ensure that error message ends with \0
storage/maria/ha_maria.cc:
alias is now a String
storage/myisam/ha_myisam.cc:
alias is now a String
- Fixed some issues with partitions and connection_string, which also fixed lp:716890 "Pre- and post-recovery crash in Aria"
- Fixed wrong assert in Aria
Now need to merge with latest xtradb before pushing
sql/ha_partition.cc:
Ensure that m_ordered_rec_buffer is not freed before close.
sql/mysqld.cc:
Changed to use opt_stack_trace instead of opt_pstack.
Removed references to pstack
sql/partition_element.h:
Ensure that connect_string is initialized
storage/maria/ma_key_recover.c:
Fixed wrong assert
The loop that iterated over the subqueries' references to an outer field used a
local boolean variable to tell whether the field was grouped or not. However, the
implementation failed to reset the variable after each iteration, so a field
that was not directly aggregated appeared to be.
Fixed by resetting the variable upon each new iteration.
Also fix bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.
Root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had been updated to reflect a 'skip' of the sort order.
This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.
The original 'save_quick' was then restored after the QEP had been modified,
which caused:
- An incorrect 'precomputed_group_by= TRUE' may have been set,
and not reverted, as part of the already modified QEP (Bug#59308)
- A 'select->quick' might have been created which we failed to delete (Bug#59110).
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes')
is moved to the end of test_if_skip_sort_order(), and done after *all* 'test_if_skip'
checks have been performed - including the 'check_reverse_order:' checks.
The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
test_if_skip_sort_order() (save_quick) - and the
new select->quick's created during test_if_skip_sort_order():
- Before a new select->quick may be created by calling ::test_quick_select(), we
set 'select->quick= 0' to prevent ::test_quick_select() from prematurely
deleting the save_quick. (After this call we may have both a 'save_quick'
and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a
'save_quick' and a 'select->quick' have been changed to goto's to the
exit points 'skiped_sort_order:' or 'need_filesort:' where we
decide which of the QUICK_SELECTs to keep and delete the other.
Root cause for this bug is that the optimizer tries to detect and
optimize the special case:
'<field> BETWEEN c1 AND c1' and handle it as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to a key Field and 'value[]' to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to cheat add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves the optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields(), which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
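Illustrative predicates (hypothetical table and column names) for the two cases
discussed above:
-- Normal special case: both constants are equal, so this may be treated as f1 = 10.
SELECT * FROM t1 WHERE f1 BETWEEN 10 AND 10;
-- Swapped-argument case 1): before the fix, because the two (swapped) constants
-- compared equal, this was incorrectly optimized as if it were f1 = 10.
SELECT * FROM t1 WHERE 10 BETWEEN 10 AND f1;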
Added logging of all possible fatal table errors if --log-warnings is set to > 1
mysql-test/extra/rpl_tests/rpl_EE_err.test:
Safety fix
mysql-test/extra/rpl_tests/rpl_row_basic.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/r/archive.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/r/csv.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/maria/r/maria-autozerofill.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/maria/t/maria-autozerofill.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/maria/t/maria-recover.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/parts/t/partition_recover_myisam.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_bug38694.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_idempotency.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_ignore_table.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_row_basic_11bugs.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_row_conflicts.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/r/rpl_temporary_errors.result:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_bug38694.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_idempotency.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_ignore_table.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_row_basic_11bugs.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_row_conflicts.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/suite/rpl/t/rpl_temporary_errors.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/t/archive.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
mysql-test/t/csv.test:
Added suppression of possible error message (so that one can run test with --log-warnings=2)
sql/handler.cc:
If running with --assert-of-crashed-table or --log-warnings > 1 then print engine error to log
sql/sql_select.cc:
Disable not initialized warning from gcc
strings/Makefile.am:
Fixed compiler error on Solaris 10 (duplicate strmov() function)
An assertion failure was triggered for a 6-way join query that uses two
join buffers.
The failure happened because every call of the function flush_cached_records()
saved and restored the status of all tables preceding the table join_tab. It
must do this only for those of them that follow the last table that uses
a join buffer.
tmptable needed
The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.
Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
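A sketch of the kind of statement affected (hypothetical table, not the actual
test case): a query that both uses DEFAULT() and needs a temporary table, so the
record-buffer pointers would have been redirected twice:
CREATE TABLE t1 (a INT DEFAULT 42, b INT);
INSERT INTO t1 VALUES (1, 1), (2, 2);
-- DISTINCT forces a temporary table; DEFAULT(a) must keep pointing at t1's
-- record buffer while the default value is propagated to the tmp table field.
SELECT DISTINCT a, DEFAULT(a) FROM t1;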
to crash mysqld".
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.
handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem where a nested materialized derived table is used
before being populated, which leads to an incorrect result
We have several modes in which we should disable subquery evaluation.
The reasons for disabling it differ. It could be
uselessness of the evaluation, as in the case of 'CREATE VIEW'
or 'PREPARE stmt'; or we should disable subquery evaluation
if tables are not locked yet, as happens in Bug#54475; or
too early evaluation of subqueries can lead to a wrong result,
as it happened in Bug#19077.
The main problem is that if subquery items are treated as const,
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:
1. PREPARE stmt;
We use the UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select(), for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set, the
subquery becomes non-const and evaluation does not happen.
2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
process FRM files
We use the LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields(), ::fix_length_and_dec()
implementations where we use Item::val_...() calls for const items.
3. Derived tables with a subquery = wrong result (Bug#19077)
The reason for this bug is too early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
Checking this field in appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.
Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;
4. Derived tables with a subquery where the subquery
is evaluated before table locking (Bug#54475, Bug#52157)
The suggested solution is the following:
-Introduce a new field LEX::context_analysis_only with the following
possible flags:
#define CONTEXT_ANALYSIS_ONLY_PREPARE 1
#define CONTEXT_ANALYSIS_ONLY_VIEW 2
#define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
context analysis operation
-Item_subselect::const_item() returns a
result depending on LEX::context_analysis_only.
If context_analysis_only is set, we return
FALSE, which means that the subquery is non-const.
As all subquery types are wrapped by Item_subselect,
this allows us to make a subquery non-const when
necessary.
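As an illustration of the CONTEXT_ANALYSIS_ONLY_VIEW case (hypothetical objects,
not from the added test files): while the view is being created, the subquery
below must only be name-resolved, not evaluated, which is what the non-const
Item_subselect::const_item() result guarantees:
CREATE TABLE t1 (i INT, j INT);
-- During CREATE VIEW the subquery goes through ::fix_fields() /
-- ::fix_length_and_dec() but must not be executed.
CREATE VIEW v1 AS SELECT i FROM t1 WHERE j = (SELECT MAX(j) FROM t1);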
mysql-test/r/derived.result:
test case
mysql-test/r/multi_update.result:
test case
mysql-test/r/view.result:
test case
mysql-test/suite/innodb/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb/t/innodb_multi_update.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_multi_update.test:
test case
mysql-test/t/derived.test:
test case
mysql-test/t/multi_update.test:
test case
mysql-test/t/view.test:
test case
sql/item.cc:
--removed unnecessary code
sql/item_cmpfunc.cc:
--removed unnecessary checks
--THD::is_context_analysis_only() is replaced with LEX::is_ps_or_view_context_analysis()
sql/item_func.cc:
--refactored context analysis checks
sql/item_row.cc:
--removed unnecessary checks
sql/item_subselect.cc:
--removed unnecessary code
--added DBUG_ASSERT into Item_subselect::exec()
which asserts that subquery execution cannot happen
if LEX::context_analysis_only is set, i.e. at the context
analysis stage.
--Item_subselect::const_item()
Return FALSE if LEX::context_analysis_only is set.
It prevents subquery evaluation in ::fix_fields() &
::fix_length_and_dec() at the context analysis stage.
sql/item_subselect.h:
--removed unnecessary code
sql/mysql_priv.h:
--Added new set of flags.
sql/sql_class.h:
--removed unnecessary code
sql/sql_derived.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_lex.cc:
--init LEX::context_analysis_only field
sql/sql_lex.h:
--New LEX::context_analysis_only field
sql/sql_parse.cc:
--removed unnecessary code
sql/sql_prepare.cc:
--removed unnecessary code
--added LEX::context_analysis_only initialization/cleanup
sql/sql_select.cc:
--refactored context analysis checks
sql/sql_show.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_view.cc:
--added LEX::context_analysis_only initialization/cleanup
Open issues:
- A better fix for #57688; Igor is working on this
- Test failure in index_merge_innodb.test; Igor promised to look at this
- Some InnoDB tests fail (need to merge with latest xtradb); Kristian promised to look at this.
- Failing tests: innodb_plugin.innodb_bug56143 innodb_plugin.innodb_bug56632 innodb_plugin.innodb_bug56680 innodb_plugin.innodb_bug57255
- Werror is disabled; Should be enabled after merge with xtradb.
and related small fixes.
mysql-test/t/user_var.test:
test for bug
sql/field_conv.cc:
From the C standard, memcpy() has undefined behaviour if to->ptr==from->ptr
sql/item_func.cc:
In the case of BUG#56138, entry->value==ptr in which case memcpy()
has undefined results per the C standard.
sql/sql_select.cc:
Work around a bug in old gcc
Avoid doing reallocs
Prealloc some strings / provide extension allocation size to some strings
This gave a 25 % speedup in some mysql-test-run tests.
mysys/safemalloc.c:
More DBUG_PRINT
sql/net_serv.cc:
Make all mallocs() look similar (just-for-safety fix).
sql/protocol.cc:
Ensure that the communication packet buffer is allocated.
(It's freed by stored procedures and some DDL statements.)
sql/sp.cc:
Fixed valgrind warning
sql/sql_select.cc:
Set extent allocation for buffer that has a lot of append() calls.
sql/sql_show.cc:
Fixed wrong usage of a string buffer. The old code worked in the test suite 'just by chance'.
sql/sql_string.cc:
Call realloc_with_extra_if_needed() in append() functions.
sql/sql_string.h:
Added an 'extra_alloc' member to specify the chunk size for realloc().
extra_alloc is adaptive to catch cases where preallocation of buffers is not done properly.
Simplified free() to allow compiler to optimize things better (and to keep things consistent).
Fixed shrink() to take into account the extra memory added to the Alloced_length in realloc(). This saves us a realloc() per query.
sql/sql_test.cc:
Set extent allocation for buffer that has a lot of append() calls.
sql/table.cc:
Set extent allocation for buffer that has a lot of append() calls.
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC() function result caused
a server crash or wrong results, and
2) grouping by a function returning a blob caused
an unexpected "Duplicate entry" error and a wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable despite the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
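An illustrative query (hypothetical table, not the test case from func_time.test)
for the 1st bug: the argument column is NOT NULL, yet TIME_TO_SEC() can still
produce NULL, so the grouping field must be created as nullable:
CREATE TABLE t1 (c1 VARCHAR(10) NOT NULL);
INSERT INTO t1 VALUES ('01:00:00'), ('');
-- TIME_TO_SEC('') is NULL although c1 can never be NULL, so the result
-- field used for GROUP BY must be nullable.
SELECT TIME_TO_SEC(c1), COUNT(*) FROM t1 GROUP BY TIME_TO_SEC(c1);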
mysql-test/r/func_time.result:
Test case for bug #52160.
mysql-test/r/type_blob.result:
Test case for bug #52160.
mysql-test/t/func_time.test:
Test case for bug #52160.
mysql-test/t/type_blob.test:
Test case for bug #52160.
sql/item_timefunc.h:
Bug #52160: crash and inconsistent results when grouping
by a function and column
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable despite the nullability of its arguments.
sql/sql_select.cc:
Bug #52160: crash and inconsistent results when grouping
by a function and column
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the appropriate tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause, we calculate the
group buffer length, see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real tables and has LONG type.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting a wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove setting
TABLE::maybe_null to FALSE from simplify_joins().
In this case we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
--remove wrong code
--use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join
BUG#26447 prefer a clustered key for an index scan, as secondary index is always slower
... which was fixed to cause
BUG#35850 queries take 50% longer
... and was reverted
and
BUG#39653 prefer a secondary index for an index scan, as clustered key is always slower
... which was fixed to cause
BUG#55656 mysqldump takes six days instead of half an hour
... and was amended with a special case workaround
sql/opt_range.cc:
move get_index_only_read_time() into the handler class
sql/sql_select.cc:
use cost, not index length, when choosing a cheaper index
The create_sort_index() function overwrites the original JOIN_TAB::type field.
At re-execution of the subquery the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order() and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on re-execution
for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is outer
(see the setup_table_map() function).
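An illustrative statement (hypothetical tables, not the test case from
explain.test): EXPLAIN re-executes the subquery, and after create_sort_index()
had replaced JT_FT with JT_ALL the ordering code tried to pick a key for the
FULLTEXT table:
CREATE TABLE t1 (id INT);
CREATE TABLE t2 (id INT, txt TEXT, FULLTEXT KEY(txt)) ENGINE=MyISAM;
-- The FULLTEXT subquery uses JT_FT; the ORDER BY triggers create_sort_index(),
-- which overwrote JOIN_TAB::type before the EXPLAIN re-execution.
EXPLAIN SELECT * FROM t1
WHERE t1.id = (SELECT t2.id FROM t2
               WHERE MATCH(t2.txt) AGAINST ('word')
               ORDER BY t2.id LIMIT 1);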
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make the subquery uncacheable in case of EXPLAIN. This allows keeping the
original JOIN_TAB::type (see JOIN::save_join_tab) and restoring it
on re-execution.
sql/sql_select.cc:
-restore the JOIN_TAB structures for the subselect on re-execution for EXPLAIN
-Update the TABLE::maybe_null field as it could have
the value maybe_null==TRUE based on the assumption
that this join is outer (see the setup_table_map() function).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
The subquery executes twice, during the top-level JOIN::optimize and ::execute stages.
During the first execution the create_sort_index() function is called, and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, so during the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinitialize the HANDLER::ft_handler field before re-execution of the subquery.
mysql-test/r/fulltext.result:
test case
mysql-test/t/fulltext.test:
test case
sql/item_func.cc:
reinit ft_handler before re-execution of subquery
sql/item_func.h:
Fixed method name
sql/sql_select.cc:
reinit ft_handler before re-execution of subquery
The conditions over the outer tables are now extracted from
the ON condition of any outer join. Such a condition is
saved in a special field of the JOIN_TAB structure for
the first inner table of the outer join. The condition
is checked before the first inner table is accessed. If
it turns out to be false, the table is not accessed at all
and a null-complemented row is generated immediately.
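An illustrative query (hypothetical tables): the part of the ON condition that
refers only to the outer table, t1.a > 5, is saved in the JOIN_TAB of the first
inner table and checked before t2 is read; when it is false, a NULL-complemented
row for t2 is produced at once:
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (b INT, c INT);
-- "t1.a > 5" depends only on the outer table and can be evaluated before
-- any row of the inner table t2 is accessed.
SELECT * FROM t1 LEFT JOIN t2 ON t2.b = t1.b AND t1.a > 5;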
sql/field.cc:
Remove a feature of 'new_mode' that was never implemented in a newer MySQL version.
sql/item_cmpfunc.cc:
Boyer-Moore is stable; doesn't have to be protected by --skip-new anymore
sql/mysqld.cc:
Don't disable some proven stable functions with --skip-new
sql/records.cc:
Don't disable record caching with --safe-mode anymore
sql/sql_delete.cc:
Do fast truncate even if --skip-new or --safe is used
sql/sql_parse.cc:
Always use mysql_optimizer() (instead of mysql_recreate_table() in case of --safe or --skip-new)
sql/sql_select.cc:
Don't disable 'only_eq_ref_tables' if --safe is used.
sql/sql_yacc.yy:
Removed a meaningless test of --old
ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
There may, however, be a separate ORDER BY clause, which mandates reading the
whole 'sort table' anyway; but the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
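An illustrative query (hypothetical table and index): the GROUP BY can be
resolved through the index on a, but the ORDER BY on the computed column forces
a temporary table and a full read of the 'sort table', so a row estimate that
assumed LIMIT-based early termination no longer applies:
CREATE TABLE t1 (a INT, b INT, KEY(a));
-- GROUP BY a is satisfied by the index order, but ORDER BY cnt needs a
-- temporary table, so the whole index has to be scanned despite LIMIT 1.
EXPLAIN SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a ORDER BY cnt LIMIT 1;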