The problem was that the maybe_null flags of Item_row and its components were out of sync after update_used_tables() (so pushed_cond_guards was not initialized but was requested later).
The fix updates Item_row::maybe_null in update_used_tables().
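As an illustration of the idea only (simplified stand-in types, not the actual Item/Item_row code): the row wrapper recomputes its own maybe_null flag from its components whenever their used-table information is refreshed, so the two can no longer get out of sync.

  #include <memory>
  #include <vector>

  // Simplified stand-ins for Item and Item_row.
  struct Expr {
    bool maybe_null = false;
    virtual void update_used_tables() {}
    virtual ~Expr() = default;
  };

  struct RowExpr : Expr {
    std::vector<std::unique_ptr<Expr>> items;

    // The fix in spirit: recompute maybe_null from the components every time
    // update_used_tables() runs, instead of leaving a stale value behind.
    void update_used_tables() override {
      maybe_null = false;
      for (auto &item : items) {
        item->update_used_tables();
        maybe_null = maybe_null || item->maybe_null;  // row is nullable if any component is
      }
    }
  };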
Analysis
The reason for the less efficient plan was a prior design decision:
to limit the evaluation of constant expressions during optimization to only
non-expensive ones. With this approach all stored procedures were considered
expensive and were not evaluated during optimization. As a result, SPs did not
participate in range optimization, which resulted in a plan with a table scan
rather than an index range scan.
Solution
Instead of considering all SPs expensive, consider expensive only those SPs
that are non-deterministic. If an SP is deterministic, the optimizer will
check whether it is constant, and may eventually evaluate it during optimization.
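A hedged sketch of the new rule (toy types, not the server's real classes): only a non-deterministic stored function is treated as expensive, so a deterministic one with constant arguments may be folded during optimization and can then participate in range optimization.

  #include <iostream>

  // Toy model of a stored-function call site as seen by the optimizer.
  struct StoredFunctionCall {
    bool deterministic;    // declared DETERMINISTIC in CREATE FUNCTION
    bool const_arguments;  // all arguments are constants

    // Old rule: every SP is expensive.
    // New rule: only non-deterministic SPs are expensive.
    bool is_expensive() const { return !deterministic; }

    bool can_evaluate_during_optimization() const {
      return !is_expensive() && const_arguments;
    }
  };

  int main() {
    StoredFunctionCall f{/*deterministic=*/true, /*const_arguments=*/true};
    std::cout << std::boolalpha << f.can_evaluate_during_optimization() << "\n";  // true
  }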
from a MERGE view.
The problem was that a table's ability to be NULL-complemented in a left join was
lost if that table is a view or derived table.
This happened because setup_table_map() was called before the view or derived
table was merged.
Fixed by propagating the new maybe_null flag during Item::update_used_tables().
The changes in join_outer.test and join_outer_jcl6.test appeared because
IS NULL reported no used tables (i.e. constant) for an argument which could not be
NULL, and the new maybe_null flag is now propagated to the IS NULL argument (Item_field)
because the table the Item_field belongs to changed its maybe_null status.
Analysis:
The following call stack shows that it is possible to set Item_cache::value_cached and the cached value
without setting Item_cache::example.
#0 Item_cache_temporal::store_packed at item.cc:8395
#1 get_datetime_value at item_cmpfunc.cc:915
#2 resolve_const_item at item.cc:7987
#3 propagate_cond_constants at sql_select.cc:12264
#4 propagate_cond_constants at sql_select.cc:12227
#5 optimize_cond at sql_select.cc:13026
#6 JOIN::optimize at sql_select.cc:1016
#7 st_select_lex::optimize_unflattened_subqueries at sql_lex.cc:3161
#8 JOIN::optimize_unflattened_subqueries at opt_subselect.cc:4880
#9 JOIN::optimize at sql_select.cc:1554
The fix is to set Item_cache_temporal::example even when the value is
set directly by Item_cache_temporal::store_packed. This makes the
Item_cache_temporal object consistent.
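To illustrate the consistency rule (a hypothetical ValueCache type, not the real Item_cache hierarchy): storing a value directly also records the expression it came from, so no code path can observe value_cached == true with example left unset.

  #include <cassert>

  struct Expr { long long value; };

  // Toy model of a value cache that must stay internally consistent.
  struct ValueCache {
    const Expr *example = nullptr;  // the expression the cached value came from
    bool value_cached = false;
    long long value = 0;

    // The fix in spirit: storing a packed value also records its source,
    // keeping (value_cached, value, example) consistent as one unit.
    void store_packed(long long packed, const Expr *source) {
      value = packed;
      value_cached = true;
      example = source;  // previously left unset on this code path
    }
  };

  int main() {
    Expr e{20240101LL};
    ValueCache cache;
    cache.store_packed(e.value, &e);
    assert(cache.value_cached && cache.example != nullptr);  // consistent state
    return 0;
  }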
If the system variable settings do not allow a join buffer to be used
for a join query with GROUP BY <f1,...> / ORDER BY <f1,...>, then
filesort is not needed if the first joined table is scanned in
an order compatible with the order specified by the list <f1,...>.
fix: don't call field->val_decimal() if field->is_null(),
because the buffer at field->ptr might not hold a valid decimal value
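A minimal sketch of the guard (toy field type, not the real Field API): test is_null() first, because for a NULL field the value buffer may contain garbage.

  #include <optional>

  // Toy stand-in for a table field whose value buffer is only valid when not NULL.
  struct DecimalField {
    bool null_flag = true;
    double buffer = 0.0;  // undefined contents when null_flag is true

    bool is_null() const { return null_flag; }
    double val_decimal() const { return buffer; }
  };

  // The fix in spirit: never read the decimal buffer of a NULL field.
  std::optional<double> read_decimal(const DecimalField &f) {
    if (f.is_null())
      return std::nullopt;  // skip val_decimal() entirely
    return f.val_decimal();
  }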
sql/item_sum.cc:
do not call field->val_decimal() if the field->is_null()
storage/maria/ma_blockrec.c:
cleanup
storage/maria/ma_rrnd.c:
cleanup
strings/decimal.c:
typo
If, when executing a query with ORDER BY col LIMIT n, the optimizer chose
an index-merge scan to access the table containing col while there existed
an index defined over col, the optimizer did not consider the possibility
of using an alternative range scan on this index to avoid filesort. This
could cause a performance degradation if the optimizer switch index_merge was
set to 'on'.
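Purely as an illustration of the missed trade-off (a made-up cost model, not the optimizer's): with ORDER BY col LIMIT n, a range scan over an index on col can stop after n rows and needs no filesort, while the index-merge plan fetches every matching row and then sorts them.

  #include <iostream>

  // Made-up, purely illustrative costs.
  double index_merge_plus_filesort_cost(double matching_rows) {
    return matching_rows + 2.0 * matching_rows;  // fetch all rows, then sort them
  }

  double ordered_range_scan_cost(double limit_rows) {
    return limit_rows;  // stop after LIMIT rows, no sort needed
  }

  int main() {
    double matching_rows = 100000, limit = 20;
    // The missed consideration: the ordered range scan should be compared
    // against the index-merge plan instead of being ignored outright.
    bool prefer_range = ordered_range_scan_cost(limit)
                        < index_merge_plus_filesort_cost(matching_rows);
    std::cout << (prefer_range ? "range scan avoids filesort\n"
                               : "keep index-merge plan\n");
  }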
mysql-test/r/partition.result:
Added test case
mysql-test/t/partition.test:
Added test case
sql/ha_partition.cc:
Removed printing of an uninitialized variable
storage/maria/ha_maria.cc:
Don't copy variables that are not initialized
main.mysqlbinlog_row_innodb are skipped by mtr
=== Problem ===
The following tests are wrongly placed in the main suite and as a
result they are not run with the proper binlog format combinations.
Some are always skipped by mtr.
1) mysqlbinlog_row_myisam
2) mysqlbinlog_row_innodb
3) mysqlbinlog_row.test
4) mysqlbinlog_row_trans.test
5) mysqlbinlog-cp932
6) mysqlbinlog2
7) mysqlbinlog_base64
=== Background ===
mtr runs the tests placed in the main suite with binlog format=stmt.
Those that need to be tested against binlog format=row or mixed,
or against more than one binlog format, and require only one mysql server
are placed in the binlog suite. mtr runs tests in the binlog suite with
all three binlog formats (stmt, row and mixed).
=== Fix ===
1) Moved the tests listed in the problem section above to the binlog suite.
2) Added the prefix "binlog_" to the name of each test case moved.
Renamed the corresponding result files and option files accordingly.
mysql-test/extra/binlog_tests/mysqlbinlog_row_engine.inc:
include file for mysqlbinlog_row_myisam.test and
mysqlbinlog_row_innodb.test, which are being moved to the
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog-cp932.result:
result file for mysqlbinlog-cp932.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog2.result:
result file for mysqlbinlog2.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_base64.result:
result file for mysqlbinlog_base64.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row.result:
result file for mysqlbinlog_row.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_innodb.result:
result file for mysqlbinlog_row_innodb.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_myisam.result:
result file for mysqlbinlog_row_myisam.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_trans.result:
result file for mysqlbinlog_row_trans.test which is being moved to
binlog suite.
mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932-master.opt:
option file for mysqlbinlog-cp932.test which is being moved to
binlog suite.
mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932.test:
the test requires binlog format=stmt or mixed. Since it was placed in
the main suite earlier, it was only run with binlog format=stmt, and hence
this test was never run with binlog format=mixed.
mysql-test/suite/binlog/t/binlog_mysqlbinlog2.test:
the test requires binlog format=stmt or mixed. Since it was placed in
the main suite earlier, it was only run with binlog format=stmt, and hence
this test was never run with binlog format=mixed.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_base64.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt, and hence
this test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt, and hence
this test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_innodb.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt, and hence
this test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_myisam.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt, and hence
this test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_trans.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt, and hence
this test was always skipped by mtr.
.. into MariaDB 5.3
Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3
This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.
Using the query from the bug report to explain what happens and what causes
the wrong result from the query when ICP is enabled:
1. The t3 table contains four records. The outer query will read
these and for each of these it will execute the subquery.
2. Before the first execution of the subquery it will be optimized. In
this case the important part is what happens to the first table, t1:
-make_join_select() will call the range optimizer, which decides
that t1 should be accessed using a range scan on the k1 index.
It creates a QUICK_RANGE_SELECT object for this.
-As the last part of optimization the ICP code pushes the
condition down to the storage engine for table t1 on the k1 index.
This produces the following information in the explain for this table:
2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
Note the use of filesort.
3. Due to the need for sorting, the first execution of the subquery
does (among other things) the following:
a. Call create_sort_index() which again will call find_all_keys():
b. find_all_keys() will read the required keys for all qualifying
rows from the storage engine. To do this it checks if it has a
quick-select for the table. It will use the quick-select for
reading records. In this case it will read four records from the
storage engine (based on the range criteria). The storage engine
will evaluate the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a
lot of stuff on the join tab. One of the things that is cleaned
is the select object. The result of this is that the
quick-select object created in make_join_select is deleted.
4. The second execution of the subquery does the same as the first but
the result is different:
a. Call create_sort_index() which again will call find_all_keys()
(same as for the first execution)
b. find_all_keys() will read the keys from the storage engine. To
do this it checks if it has a quick-select for the table. Now
there is NO quick-select object(!) (since it was deleted in
step 3c). So find_all_keys() defaults to reading the table using a
table scan instead. So instead of reading the four relevant records
in the range it reads the entire table (6 records). It then
evaluates the table's condition (and here it goes wrong). Since
the entire condition has been pushed down to the storage engine
using ICP, all 6 records qualify. (Note that the storage engine
will not evaluate the pushed index condition in this case since
it was pushed for the k1 index and now we do a table scan
without any index being used).
The result is that here we return six qualifying key values
instead of four due to not evaluating the table's condition.
c. As above.
5. The last two executions of the subquery will also produce wrong results
for the same reason.
Summary: The problem occurs because all executions of the subquery after
the first are done as a table scan without evaluating the table's
condition (which is pushed to the storage engine on a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.
Note that, in addition to causing wrong results, this bug can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.
The fix for this problem is to prevent the quick-select object that
the optimizer created from being deleted when create_sort_index() is doing
clean-up of the join tab. This ensures that the quick-select
object and the corresponding pushed index condition remain available
and are used by all subsequent executions of the subquery.
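Schematically (toy types only; the real change is in create_sort_index() and the JOIN_TAB cleanup): the sort borrows the quick-select but leaves ownership with the join tab, so later executions of the subquery still run the range scan with the pushed index condition.

  #include <memory>

  // Toy stand-ins for the optimizer-created range access object and the join tab.
  struct QuickRangeSelect { /* index id, range boundaries, ... */ };

  struct JoinTab {
    std::shared_ptr<QuickRangeSelect> quick;  // created by the range optimizer
  };

  // Old behaviour (schematically): the per-execution sort cleanup dropped the
  // quick-select, so re-executions fell back to a table scan.
  void create_sort_index_old(JoinTab &tab) {
    tab.quick.reset();  // quick-select lost for the next execution
  }

  // Fixed behaviour: the sort only borrows the quick-select; the join tab keeps
  // it, so every execution of the subquery can reuse the range scan.
  void create_sort_index_fixed(JoinTab &tab) {
    std::shared_ptr<QuickRangeSelect> borrowed = tab.quick;
    (void)borrowed;  // ... read the qualifying rows and sort them ...
  }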
Check the ability of the index to hold NULLs, as is done in MyISAM. A UNIQUE index with NULL can have several NULL entries, so we have to continue even if we have found a row.
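A toy sketch of the lookup rule (a hypothetical in-memory index, not the engine's code): for a NULL search key a UNIQUE index may hold many entries, so the scan continues after the first hit instead of stopping as it would for a unique non-NULL key.

  #include <optional>
  #include <vector>

  // Toy unique index: NULL keys (nullopt) may repeat, non-NULL keys may not.
  using Key = std::optional<int>;

  std::vector<size_t> lookup(const std::vector<Key> &index, const Key &key) {
    std::vector<size_t> rows;
    for (size_t i = 0; i < index.size(); ++i) {
      if (index[i] == key) {
        rows.push_back(i);
        // A non-NULL key has at most one entry in a UNIQUE index, so we may
        // stop; for a NULL key we must continue, since NULLs may repeat.
        if (key.has_value())
          break;
      }
    }
    return rows;
  }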
In some rare cases, when the value of the system variable join_buffer_size
was set to a number less than 256, the function JOIN_CACHE::set_constants
determined the size of an offset in the join buffer to be 1, even though
the minimal join buffer required more than 256 bytes. This could cause
a server crash when records were read from the join buffer.
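A small sketch of the sizing rule (toy functions, not JOIN_CACHE::set_constants itself): the number of bytes needed for an offset has to be derived from the buffer size that will actually be allocated, i.e. after raising join_buffer_size to the required minimum, not from the raw user setting.

  #include <algorithm>
  #include <cstddef>

  // Bytes needed to address any position inside a buffer of the given size.
  size_t offset_size(size_t buffer_size) {
    size_t bytes = 1;
    for (size_t limit = 256; buffer_size > limit && bytes < sizeof(size_t); limit <<= 8)
      ++bytes;
    return bytes;
  }

  // The fix in spirit: clamp to the minimal buffer first, then size the offsets.
  size_t record_offset_size(size_t join_buffer_size, size_t min_buffer_size) {
    size_t actual_size = std::max(join_buffer_size, min_buffer_size);
    return offset_size(actual_size);  // 2+ bytes whenever the real buffer exceeds 256
  }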
The feature was backported from MySQL 5.6.
Some code was added to make commands such as
SELECT * FROM ignored_db.t1;
CALL ignored_db.proc();
USE ignored_db;
take that option into account.
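A hedged sketch of the check (a hash-set based toy, not the server's implementation; db_name_is_in_ignore_list() is the interface named in the per-file comments below): commands that name a database first ask whether its directory is ignored and fail early if so.

  #include <string>
  #include <unordered_set>

  // Toy registry populated from the --ignore-db-dir options.
  static std::unordered_set<std::string> ignored_db_dirs;

  void add_ignored_db_dir(const std::string &dir) { ignored_db_dirs.insert(dir); }

  // Toy counterpart of the db_name_is_in_ignore_list() interface added by the patch.
  bool db_name_is_ignored(const std::string &db) {
    return ignored_db_dirs.count(db) != 0;
  }

  // Schematic use: SELECT / CALL / USE on an ignored database is rejected.
  bool check_db_access(const std::string &db) {
    if (db_name_is_ignored(db))
      return false;  // e.g. report an "unknown database" style error
    return true;
  }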
per-file comments:
mysql-test/r/ignore_db_dirs_basic.result
test result added.
mysql-test/t/ignore_db_dirs_basic-master.opt
options for the test,
actually the set of --ignore-db-dir lines.
mysql-test/t/ignore_db_dirs_basic.test
test for the feature.
The same test from 5.6 was taken as a basis,
then tests for SELECT, CALL, etc. were added.
per-file comments:
sql/mysql_priv.h
MDEV-495 backport --ignore-db-dir.
interface for db_name_is_in_ignore_list() added.
sql/mysqld.cc
MDEV-495 backport --ignore-db-dir.
--ignore-db-dir handling.
sql/set_var.cc
MDEV-495 backport --ignore-db-dir.
the @@ignore_db_dirs variable added.
sql/sql_show.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
sql/sql_show.h
MDEV-495 backport --ignore-db-dir.
interface added for opt_ignored_db_dirs.
sql/table.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
-----------
After compiling from source, during make test I got the following error:
test main.loaddata failed with error
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
I noticed other tests are skipped because ucs2 is not available:
main.mix2_myisam_ucs2 [ skipped ] Test requires: 'have_ucs2'
Should main.loaddata be skipped if there is no ucs2?
How To Repeat:
-------------
Run make test on compiled source that doesn't have ucs2
Suggested fix:
-------------
the failing piece of the test should be moved from mysql-test/t/loaddata.test to
mysql-test/t/ctype_ucs.test.
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key(). The
way create_ref_for_key() works is that it doesn't store in ref.key_copy[]
store_key elements that represent constants. In particular it doesn't store
the store_key for NULL constants.
The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key, which, in addition to copying
the left IN argument into an index lookup key, is supposed to detect if
the left IN argument contains NULLs. Since the store_key for the NULL
constant is not copied into the key array, the NULL is not detected, and
execution erroneously proceeds as if it should look for a complete match.
Solution:
The solution (unlike MySQL) is to reuse already computed information about
NULL presence. Item_in_optimizer::val_int already finds out if the left IN
operand contains NULLs. The fix propagates this to the execution methods
subselect_[unique | index]subquery_engine::exec so they know if there were
NULL values independently of the presence of keys.
In addition, the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
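A simplified sketch of the data flow (toy types, not the real subselect engine classes, and with grossly simplified NULL semantics): the NULL check already done by Item_in_optimizer::val_int is handed to the subquery execution, so the engine does not have to re-derive it from the copied key parts, where NULL constants are not even present.

  #include <optional>

  // Toy model: what is known about the left IN operand before the index lookup.
  struct LeftOperand {
    bool has_null;  // computed once, e.g. by the IN optimizer wrapper
  };

  // Toy subquery engine: receives the precomputed NULL information instead of
  // trying to rediscover it from the key buffer.
  std::optional<bool> exec_unique_subquery(const LeftOperand &left,
                                           bool index_lookup_found_match) {
    // Simplified: with a NULL in the left operand the lookup result is not a
    // definite TRUE, so report UNKNOWN here.
    if (left.has_null)
      return std::nullopt;
    return index_lookup_found_match;
  }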
Link view/derived table fields to a real table in order to check whether the table record has been turned into a null row.
The Item_direct_view_ref wrapper now checks if the table has been turned into a null row.
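A toy sketch of the wrapper behaviour (hypothetical types, not Item_direct_view_ref itself): before returning the referenced field's value, the wrapper checks whether the underlying real table has been turned into a null row, and reports NULL if so.

  #include <optional>

  // Toy stand-ins for a base table and a field referenced through a merged view.
  struct BaseTable {
    bool null_row = false;  // set when the table is NULL-complemented in a join
  };

  struct ViewFieldRef {
    const BaseTable *table;  // the real table the view field is linked to
    int stored_value;

    // The wrapper consults the real table's null-row state, so a field read
    // through the view returns NULL when the whole row is NULL-complemented.
    std::optional<int> value() const {
      if (table->null_row)
        return std::nullopt;
      return stored_value;
    }
  };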
Autointersections of an object were treated as nodes, hence the wrong result.
per-file comments:
mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
test result updated.
mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
test case added.
sql/item.cc
small fix to make compilers happy.
sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
Skip intersection points when calculating the distance.