.. into MariaDB 5.3
Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3
This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.
Using the query from the bug report to explain what happens and what
causes the wrong result when ICP is enabled:
1. The t3 table contains four records. The outer query will read
these and for each of these it will execute the subquery.
2. Before the first execution of the subquery it will be optimized. In
this case the important part is what happens to the first table, t1:
- make_join_select() will call the range optimizer, which decides
that t1 should be accessed using a range scan on the k1 index.
It creates a QUICK_RANGE_SELECT object for this.
- As the last part of optimization, the ICP code pushes the
condition down to the storage engine for table t1 on the k1 index.
This produces the following information in the explain for this table:
2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
Note the use of filesort.
3. Due to the need for sorting, the first execution of the subquery does
(among other things) the following:
a. Call create_sort_index(), which in turn will call find_all_keys().
b. find_all_keys() will read the required keys for all qualifying
rows from the storage engine. To do this it checks if it has a
quick-select for the table. It will use the quick-select for
reading records. In this case it will read four records from the
storage engine (based on the range criteria). The storage engine
will evaluate the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a
lot of stuff on the join tab. One of the things that is cleaned
is the select object. The result of this is that the
quick-select object created in make_join_select is deleted.
4. The second execution of the subquery does the same as the first but
the result is different:
a. Call create_sort_index() which again will call find_all_keys()
(same as for the first execution)
b. find_all_keys() will read the keys from the storage engine. To
do this it checks if it has a quick-select for the table. Now
there is NO quick-select object(!) (since it was deleted in
step 3c). So find_all_keys() falls back to reading the table using a
table scan instead. Instead of reading the four relevant records
in the range it reads the entire table (6 records). It then
evaluates the table's condition (and here it goes wrong). Since
the entire condition has been pushed down to the storage engine
using ICP, all 6 records qualify. (Note that the storage engine
will not evaluate the pushed index condition in this case since
it was pushed for the k1 index and now we do a table scan
without any index being used).
The result is that here we return six qualifying key values
instead of four due to not evaluating the table's condition.
c. As above.
5. The two last executions of the subquery will also produce wrong results
for the same reason.
Summary: The problem occurs because all but the first execution of the
subquery are done as a table scan without evaluating the table's
condition (which is pushed to the storage engine on a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.
Note that in addition to causing wrong results, this bug can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is the issue seen in MySQL 5.5.
The fix for this problem is to ensure that the quick-select object
created by the optimizer is not deleted when create_sort_index() does
its clean-up of the join tab. This ensures that the quick-select
object and the corresponding pushed index condition remain available
and are used by all following executions of the subquery.
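As an illustration of the ownership problem only (a stand-alone sketch
with simplified stand-in types, not the actual server code or patch),
the point is that the clean-up done after sorting must leave the
quick-select owned by the join tab in place instead of destroying it:

  #include <cstdio>
  #include <memory>

  /* Simplified stand-ins for QUICK_RANGE_SELECT and JOIN_TAB. */
  struct QuickRangeSelect { int rows_in_range; };

  struct JoinTab
  {
    std::unique_ptr<QuickRangeSelect> quick;  // created by the range optimizer
  };

  /* Buggy clean-up: destroying the quick select here means the next
     execution of the subquery has to fall back to a full table scan. */
  static void cleanup_after_sort_buggy(JoinTab *tab)
  {
    tab->quick.reset();
  }

  /* Fixed clean-up: leave the quick select alone so every execution of
     the subquery can reuse the range scan (and the condition pushed to
     the storage engine for that index). */
  static void cleanup_after_sort_fixed(JoinTab *tab)
  {
    (void) tab;
  }

  int main()
  {
    JoinTab t1, t2;
    t1.quick.reset(new QuickRangeSelect{4});
    t2.quick.reset(new QuickRangeSelect{4});

    cleanup_after_sort_buggy(&t1);            // quick select is gone
    cleanup_after_sort_fixed(&t2);            // quick select survives

    std::printf("buggy: %s, fixed: %s\n",
                t1.quick ? "range scan" : "table scan",
                t2.quick ? "range scan" : "table scan");
    return 0;
  }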
In some rare cases, when the value of the system variable join_buffer_size
was set to a number less than 256, the function JOIN_CACHE::set_constants
set the size of an offset in the join buffer to 1 byte, even though the
minimal join buffer required more than 256 bytes. This could cause a crash
of the server when records from the join buffer were read.
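For illustration only (hypothetical numbers and names, not the real
JOIN_CACHE code), the offset width has to be derived from the buffer size
that will actually be used, i.e. after the setting has been rounded up to
the minimal buffer size:

  #include <cstddef>
  #include <cstdio>

  /* Smallest number of bytes able to hold an offset into a buffer of the
     given size. */
  static unsigned offset_size(std::size_t buffer_size)
  {
    if (buffer_size < 256)
      return 1;
    if (buffer_size < 65536)
      return 2;
    return 4;
  }

  int main()
  {
    const std::size_t join_buffer_size= 128;   // user setting below 256
    const std::size_t min_buffer_size= 4096;   // assumed minimal buffer size

    // Buggy: width taken from the raw setting; 1 byte cannot address the
    // buffer that is really allocated.
    unsigned buggy= offset_size(join_buffer_size);

    // Fixed: clamp to the size that will actually be allocated first.
    std::size_t used_size= join_buffer_size < min_buffer_size ?
                           min_buffer_size : join_buffer_size;
    unsigned fixed= offset_size(used_size);

    std::printf("offset size: buggy=%u fixed=%u\n", buggy, fixed);
    return 0;
  }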
The feature was backported from MySQL 5.6.
Some code was added to make commands such as
SELECT * FROM ignored_db.t1;
CALL ignored_db.proc();
USE ignored_db;
take that option into account.
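A minimal sketch of the kind of check involved (hypothetical names, not
the real db_name_is_in_ignore_list() code): the database name referenced
by such a statement is looked up in the set built from the --ignore-db-dir
options, and the statement is rejected as if the database did not exist:

  #include <cstdio>
  #include <set>
  #include <string>

  // Hypothetical stand-in for the list built from --ignore-db-dir options.
  static const std::set<std::string> ignored_db_dirs= { "ignored_db",
                                                        "lost+found" };

  // Returns true if the database name refers to an ignored directory.
  static bool db_is_ignored(const std::string &db_name)
  {
    return ignored_db_dirs.count(db_name) != 0;
  }

  int main()
  {
    // e.g. SELECT * FROM ignored_db.t1 should fail with "unknown database".
    std::printf("ignored_db: %s\n", db_is_ignored("ignored_db") ? "reject" : "ok");
    std::printf("test:       %s\n", db_is_ignored("test") ? "reject" : "ok");
    return 0;
  }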
per-file comments:
mysql-test/r/ignore_db_dirs_basic.result
test result added.
mysql-test/t/ignore_db_dirs_basic-master.opt
options for the test,
actually the set of --ignore-db-dir lines.
mysql-test/t/ignore_db_dirs_basic.test
test for the feature.
Same test from 5.6 was taken as a basis,
then tests for SELECT, CALL etc were added.
per-file comments:
sql/mysql_priv.h
MDEV-495 backport --ignore-db-dir.
interface for db_name_is_in_ignore_list() added.
sql/mysqld.cc
MDEV-495 backport --ignore-db-dir.
--ignore-db-dir handling.
sql/set_var.cc
MDEV-495 backport --ignore-db-dir.
the @@ignore_db_dirs variable added.
sql/sql_show.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
sql/sql_show.h
MDEV-495 backport --ignore-db-dir.
interface added for opt_ignored_db_dirs.
sql/table.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key(). The
way create_ref_for_key() works is that it doesn't store in ref.key_copy[]
store_key elements that represent constants. In particular it doesn't store
the store_key for NULL constants.
The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key, which, in addition to
copying the left IN argument into an index lookup key, is supposed to
detect if the left IN argument contains NULLs. Since the store_key for the NULL
constant is not copied into the key array, the null is not detected, and
execution erroneously proceeds as if it should look for a complete match.
Solution:
The solution (unlike MySQL) is to reuse already computed information about
NULL presence. Item_in_optimizer::val_int already finds out if the left IN
operand contains NULLs. The fix propagates this to the execution methods
subselect_[unique | index]subquery_engine::exec so they know if there were
NULL values independently of the presence of keys.
In addition the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
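A simplified stand-alone sketch of the idea (stand-in types, not the real
subselect engine code): instead of trying to rediscover NULLs while copying
the key, the engine is handed a flag that the caller has already computed
for the left IN operand:

  #include <cstdio>
  #include <vector>

  // Simplified model of one part of the left IN operand.
  struct KeyPart { bool is_null; long value; };

  /* The caller (Item_in_optimizer::val_int in the real code) already knows
     whether the left operand contained a NULL; the flag is simply passed
     down instead of being re-derived from the copied key parts. */
  static bool index_lookup(const std::vector<KeyPart> &key, bool left_has_null)
  {
    if (left_has_null)
      return false;              // no complete match possible, result is UNKNOWN
    (void) key;                  // ... the actual index lookup would go here ...
    return true;
  }

  int main()
  {
    std::vector<KeyPart> key= { {false, 1}, {true, 0} };  // left operand (1, NULL)
    bool left_has_null= true;    // computed once, before building the key
    std::printf("lookup attempted: %s\n",
                index_lookup(key, left_has_null) ? "yes" : "no");
    return 0;
  }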
Link view/derived table fields to a real table so that it is possible to
check whether the table record has been turned into a null row.
The Item_direct_view_ref wrapper now checks if the table has been turned
into a null row.
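A rough stand-alone sketch of the wrapper check (stand-in types, not the
actual Item_direct_view_ref code): the wrapper holds a link to the real
underlying table and reports NULL when that table has been turned into a
null row, e.g. by an outer join:

  #include <cstdio>

  // Stand-in for the real table the view/derived field is linked to.
  struct RealTable { bool null_row; };

  // Stand-in for the Item_direct_view_ref wrapper around a field.
  struct ViewFieldRef
  {
    const RealTable *table;      // link to the real table
    long value;

    // The wrapper now consults the real table's null-row state.
    bool is_null() const { return table->null_row; }
  };

  int main()
  {
    RealTable t= { true };       // record turned into a null row (outer join)
    ViewFieldRef ref= { &t, 42 };
    std::printf("view field is %s\n", ref.is_null() ? "NULL" : "NOT NULL");
    return 0;
  }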
Autointersections of an object were treated as nodes, which led to the wrong result.
per-file comments:
mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
test result updated.
mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
test case added.
sql/item.cc
small fix to make compilers happy.
sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
Skip intersection points when calculating the distance.
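A loose illustration of the idea (a simplified, point-based sketch rather
than the real Gcalc scan code): nodes that are only auto-intersection
points of the object are skipped when searching for the minimal distance:

  #include <cfloat>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct Node { double x, y; bool is_autointersection; };

  /* Minimal distance from point (px,py) to the object's nodes, ignoring
     nodes that only mark auto-intersections of the object itself. */
  static double min_distance(double px, double py, const std::vector<Node> &nodes)
  {
    double best= DBL_MAX;
    for (const Node &n : nodes)
    {
      if (n.is_autointersection)
        continue;                              // the essence of the fix
      double d= std::hypot(px - n.x, py - n.y);
      if (d < best)
        best= d;
    }
    return best;
  }

  int main()
  {
    std::vector<Node> nodes= { {0.0, 0.0, false}, {5.0, 5.0, false},
                               {1.0, 1.0, true} };
    std::printf("distance = %f\n", min_distance(2.0, 2.0, nodes));
    return 0;
  }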
When we append data to the binlog file, we use fdatasync() to ensure
the data gets to disk so that crash recovery can work.
Unfortunately there seems to be a bug in ext3/ext4 on Linux, so that
fdatasync() does not correctly sync all data when the size of a file
is increased. This causes crash recovery to not work correctly (it
loses transactions from the binlog).
As a work-around, use fsync() for the binlog, not fdatasync(). Since
we are increasing the file size, (correct) fdatasync() will most
likely not be faster than fsync() on any file system, and fsync()
does work correctly on ext3/ext4. This avoids the need to try to
detect if we are running on buggy ext3/ext4.
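For illustration, the workaround boils down to which sync call is issued
after appending to the binlog file; a POSIX-level sketch (not the actual
binlog code):

  #include <cstdio>
  #include <fcntl.h>
  #include <unistd.h>

  int main()
  {
    int fd= open("binlog-example.bin", O_WRONLY | O_CREAT | O_APPEND, 0640);
    if (fd < 0)
    {
      perror("open");
      return 1;
    }

    const char event[]= "binlog event";
    if (write(fd, event, sizeof(event)) < 0)
      perror("write");

    /* fdatasync(fd) may skip metadata updates; since appending grows the
       file, use fsync() so the new file size is made durable as well
       (and to avoid the ext3/ext4 problem described above). */
    if (fsync(fd))
      perror("fsync");

    close(fd);
    return 0;
  }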
1. Field_newdate::get_date should refuse to return a date with zeros when
TIME_NO_ZERO_IN_DATE is set, not when TIME_FUZZY_DATE is unset.
2. Item_func_to_days and Item_date_add_interval can only work with valid dates,
no zeros allowed.
Fix Item_func_add_time::get_date() to generate valid dates.
Move the validity check inside get_date_from_daynr()
instead of relying on callers
(5 that had it, and 2 that did not, but should've)
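A minimal sketch of moving the check into the callee (the bound below is
an assumption and the conversion body is omitted, so this is not the real
get_date_from_daynr()): the function rejects an out-of-range day number
itself instead of trusting every caller to do it:

  #include <cstdio>

  struct Date { unsigned year, month, day; };

  // Assumed upper bound on supported day numbers, for illustration only.
  static const long MAX_DAY_NUMBER= 3652424;

  /* The callee validates the day number itself, so none of its callers
     can forget the check.  Returns true on error. */
  static bool get_date_from_daynr(long daynr, Date *out)
  {
    if (daynr <= 0 || daynr > MAX_DAY_NUMBER)
      return true;                             // invalid day number
    /* ... convert daynr into out->year / out->month / out->day ... */
    out->year= 2000; out->month= 1; out->day= 1;  // placeholder, not a real conversion
    return false;
  }

  int main()
  {
    Date d;
    std::printf("daynr 0:      %s\n", get_date_from_daynr(0, &d) ? "error" : "ok");
    std::printf("daynr 738000: %s\n", get_date_from_daynr(738000, &d) ? "error" : "ok");
    return 0;
  }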
The problem was that the was_null and null_value variables were reset on each
re-execution of the IN subquery, but the engine was rerun only for non-constant
subqueries.
Fixed the constant check in the Item_equal sort.
Fixed constant reporting in Item_subselect.