Check the ability of an index field to be NULL, as is done in MyISAM.
A UNIQUE index with NULL can have several NULL entries, so we have to
continue even if we have found a row.
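A minimal illustration of the semantics being checked (the table name is
illustrative): a UNIQUE key rejects duplicate non-NULL values but accepts
any number of NULLs, so a lookup must continue past the first NULL entry.
CREATE TABLE t1 (a INT, UNIQUE KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (NULL), (NULL), (1); -- several NULL entries are allowed
INSERT INTO t1 VALUES (1);                 -- fails: duplicate non-NULL value
SELECT COUNT(*) FROM t1 WHERE a IS NULL;   -- returns 2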
In some rare cases, when the value of the system variable join_buffer_size
was set to a number less than 256, the function JOIN_CACHE::set_constants
determined the size of an offset in the join buffer to be 1, even though
the minimal join buffer required more than 256 bytes. This could cause
a crash of the server when records from the join buffer were read.
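A hypothetical repro sketch of the trigger condition (table names and the
join are illustrative; the join must actually go through the join buffer):
-- A join buffer far below 256 bytes; with a 1-byte offset size the
-- server could then misread records from the buffer.
SET SESSION join_buffer_size = 128;
SELECT t1.a, t2.b FROM t1 JOIN t2 ON t1.a < t2.a;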
The feature was backported from MySQL 5.6.
Some code was added to make commands such as
SELECT * FROM ignored_db.t1;
CALL ignored_db.proc();
USE ignored_db;
take that option into account.
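A hedged sketch of the resulting behavior, assuming the server was started
with --ignore-db-dir=ignored_db (the exact errors are illustrative):
SELECT @@ignore_db_dirs;      -- shows the ignored directory list
USE ignored_db;               -- rejected: the database is treated as unknown
SELECT * FROM ignored_db.t1;  -- likewise rejected
CALL ignored_db.proc();       -- likewise rejected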
per-file comments:
mysql-test/r/ignore_db_dirs_basic.result
test result added.
mysql-test/t/ignore_db_dirs_basic-master.opt
options for the test,
actually the set of --ignore-db-dir lines.
mysql-test/t/ignore_db_dirs_basic.test
test for the feature.
The same test from 5.6 was taken as a basis;
then tests for SELECT, CALL, etc. were added.
sql/mysql_priv.h
MDEV-495 backport --ignore-db-dir.
interface for db_name_is_in_ignore_list() added.
sql/mysqld.cc
MDEV-495 backport --ignore-db-dir.
--ignore-db-dir handling.
sql/set_var.cc
MDEV-495 backport --ignore-db-dir.
the @@ignore_db_dirs variable added.
sql/sql_show.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
sql/sql_show.h
MDEV-495 backport --ignore-db-dir.
interface added for opt_ignored_db_dirs.
sql/table.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key(). The
way create_ref_for_key() works, it does not store in ref.key_copy[] the
store_key elements that represent constants; in particular, it does not
store the store_key for NULL constants.
The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key, which, in addition to copying
the left IN argument into an index lookup key, is supposed to detect whether
the left IN argument contains NULLs. Since the store_key for the NULL
constant is not copied into the key array, the NULL is not detected, and
execution erroneously proceeds as if it should look for a complete match.
Solution:
The solution (unlike MySQL's) is to reuse the already computed information
about the presence of NULLs. Item_in_optimizer::val_int already finds out
whether the left IN operand contains NULLs. The fix propagates this to the
execution methods subselect_[unique | index]subquery_engine::exec so that
they know whether there were NULL values independently of the presence of
keys.
In addition, the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
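A hypothetical repro in the spirit of this analysis (table and data are
illustrative): a NULL constant on the left of IN over a non-empty indexed
subquery must evaluate to NULL (unknown), not FALSE.
CREATE TABLE t1 (a INT PRIMARY KEY);
INSERT INTO t1 VALUES (1), (2);
-- The [unique | index]_subquery lookup must detect the NULL left argument
-- and return NULL rather than 0 (a complete-match lookup finds nothing).
SELECT NULL IN (SELECT a FROM t1);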
Link view/derived table fields to a real table so that turning the table
record into a null row can be checked.
The Item_direct_view_ref wrapper now checks whether the table has been
turned into a null row.
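A minimal sketch of the situation with hypothetical tables and a view (the
ON 0 condition merely forces an unmatched outer join):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT NOT NULL);
CREATE VIEW v2 AS SELECT b FROM t2;
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (2);
-- With no join match, v2's record is a null row: the view field must read
-- as NULL even though the underlying column is declared NOT NULL.
SELECT a, v2.b, v2.b IS NULL FROM t1 LEFT JOIN v2 ON 0;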
In fill_schema_table_by_open(): free the item list before restoring the active arena.
sql/sql_show.cc:
Replaced i_s_arena.free_items with DBUG_ASSERT(i_s_arena.free_list == NULL)
(there's nothing to free in that list)
Autointersections of an object were treated as nodes, which led to wrong results.
per-file comments:
mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
test result updated.
mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
test case added.
sql/item.cc
small fix to make compilers happy.
sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
Skip intersection points when calculating the distance.
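A hedged illustration of the symptom (coordinates are made up; the
linestring crosses itself): the distance must not depend on argument order.
SET @pt = GeomFromText('POINT(5 0)');
SET @ls = GeomFromText('LINESTRING(0 0, 4 4, 4 0, 0 4)'); -- self-intersecting
-- Both calls must return the same value; with the bug, treating the
-- autointersection as a node could make them differ.
SELECT ST_Distance(@pt, @ls), ST_Distance(@ls, @pt);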
The following error message is misleading because it claims
that the BLOB space is not counted.
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows depending on
the row format used:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
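For illustration, a table definition that can trigger this error under the
default limit of about 8126 bytes (column count and sizes are arbitrary):
-- Five latin1 VARCHAR(2000) columns need up to ~10000 bytes inline,
-- which exceeds the limit, so this table is rejected with the message above.
CREATE TABLE big_row (
  c1 VARCHAR(2000), c2 VARCHAR(2000), c3 VARCHAR(2000),
  c4 VARCHAR(2000), c5 VARCHAR(2000)
) ENGINE=InnoDB ROW_FORMAT=COMPACT CHARACTER SET latin1;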
rb://1252 approved by Marko Makela
page_rec_get_nth_const(): Map nth==0 to the page infimum.
btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
should never be positioned on the page infimum.
btr_index_page_validate(): Add test instrumentation for checking the
return values of page_rec_get_nth_const() during CHECK TABLE, and for
checking that the page directory slot 0 always contains only one
record, the predefined page infimum record.
page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
assertions guarding against accessing the page slot 0.
page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
rb:1248 approved by Jimmy Yang
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
The patch by Jorgen Loland only removed the assertion from the
built-in InnoDB, not from the InnoDB Plugin.
When we append data to the binlog file, we use fdatasync() to ensure
the data gets to disk so that crash recovery can work.
Unfortunately there seems to be a bug in ext3/ext4 on Linux, such that
fdatasync() does not correctly sync all data when the size of a file
is increased. This causes crash recovery to not work correctly (it
loses transactions from the binlog).
As a work-around, use fsync() for the binlog, not fdatasync(). Since
we are increasing the file size, (correct) fdatasync() will most
likely not be faster than fsync() on any file system, and fsync()
does work correctly on ext3/ext4. This avoids the need to try to
detect if we are running on buggy ext3/ext4.
1. Field_newdate::get_date should refuse to return a date with zeros when
TIME_NO_ZERO_IN_DATE is set, not when TIME_FUZZY_DATE is unset.
2. Item_func_to_days and Item_date_add_interval can only work with valid
dates; no zeros allowed.
Fix Item_func_add_time::get_date() to generate valid dates.
Move the validity check inside get_date_from_daynr()
instead of relying on callers
(5 that had it, and 2 that did not, but should have).
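The intended behavior after the fix, sketched on the zero date (warnings
aside, day-number arithmetic must refuse invalid dates and return NULL):
SELECT TO_DAYS('0000-00-00');                  -- NULL: no valid day number
SELECT DATE_ADD('0000-00-00', INTERVAL 1 DAY); -- NULL: cannot add to a zero date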
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
This assertion catches unnecessary calls to this method,
but such calls do not harm correctness.
The problem was that the was_null and null_value variables were reset on
each re-execution of the IN subquery, but the engine was re-run only for
non-constant subqueries.
Fixed the constant check in Item_equal sorting.
Fixed constant reporting in Item_subselect.