- make multi_delete::initialize_tables() take into account that the JOIN structure may have
semi-join nests (which are not fully initialized when this function is called:
they have tab->table=NULL, which caused the crash; see the example below)
- Also checked multi_update::initialize_tables(): it uses different logic and needed no fix.
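A hypothetical query shape for the multi_delete path (table and column names
are assumptions): a multi-table DELETE whose IN subquery is converted to a
semi-join, so the join contains a semi-join nest when initialize_tables() runs:

  DELETE t1 FROM t1, t2
  WHERE t1.a = t2.a AND t1.b IN (SELECT c FROM t3);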
This bug was the result of incompleteness of the patch for bug mdev-4177.
When an OR condition is simplified to a single conjunct it is merged
into the embedding AND condition. Multiple equalities are also merged,
and any field item involved in those equalities should acquire a pointer
to the multiple equality formed by this merge.
The function make_join_statistics checks whether eq_ref access uses only
constant expressions, and, if this is the case, the function performs
constant row substitution. The code of this check must take into account
hidden components of extended secondary keys.
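A hypothetical InnoDB shape (names are assumptions) where the hidden components
matter: with extended keys, a secondary index k(a) is effectively (a, pk), so
the check must verify that the hidden pk part is bound to a constant too before
substituting the row:

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY k(a)) ENGINE=InnoDB;
  SELECT * FROM t1 WHERE t1.a = 1 AND t1.pk = 2;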
WITH A PORT NUMBER ENCLOSED IN QUOTES
Problem: mysqldump --dump-slave --include-master-host-port
prints the CHANGE MASTER command in the generated logical
backup. The port number generated with this command
is quoted as a string but should be an integer.
Fix: Remove the enclosing quotes from the port number.
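The difference in the generated statement (host and port values here are
placeholders):

  -- Before the fix:
  CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_PORT='3306';
  -- After the fix (MASTER_PORT takes an integer, not a string):
  CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_PORT=3306;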
Fulltext search was initialized for all MATCH ... AGAINST items
at the end of JOIN::optimize(). But since 5.3, derived tables
are initialized lazily, on first use, very late in sub_select().
Skip Item_func_match::init_search initialization if the corresponding
table isn't open yet; repeat fulltext initialization for all
not-yet-initialized MATCH ... AGAINST items after creating derived tables.
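A hypothetical repro shape (the table, its columns, and the FULLTEXT index on
them are assumptions): MATCH ... AGAINST over a derived table, which is only
created and opened on first use, after JOIN::optimize() has already run:

  SELECT * FROM (SELECT * FROM articles) AS d
  WHERE MATCH (d.title, d.body) AGAINST ('optimizer');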
Init join->top_join_tab_count to be in sync with join->join_tab=stat;
otherwise a query can be killed in between and the join_tabs won't be deleted
(JOIN::cleanup won't call JOIN_TAB::cleanup).
Analysis:
The reason for the inefficient plan was that Item_subselect::is_expensive()
didn't detect the special case when a subquery was optimized but has no
join plan, because it either has no tables, or its tables have been optimized
away, or the optimizer detected that the result set is empty.
Solution:
Identify the special cases above in Item_subselect::is_expensive(),
and consider such degenerate subqueries inexpensive.
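Hypothetical examples of degenerate subqueries (table names are assumptions);
each one is optimized yet ends up with no join plan, so it should be treated
as inexpensive:

  SELECT * FROM t1 WHERE t1.a = (SELECT 1);                     -- no tables
  SELECT * FROM t1 WHERE t1.a = (SELECT MIN(pk) FROM t2);       -- table optimized away
  SELECT * FROM t1 WHERE t1.a = (SELECT b FROM t2 WHERE 1 = 0); -- provably empty result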
This bug was introduced by the patch for WL#3220.
If the memory allocated for the tree to store unique elements
to be counted is not big enough to include all of them, then
an external file is used to store the elements.
The unique elements are guaranteed not to be nulls. So, when
reading them from the file, we don't have to care about the null
flags of the read values. However, we should remove the flag
at the very beginning of the process. If we don't do it, and
if the last value written into the record buffer for the field
whose distinct values need to be counted happens to be null,
then all values read from the file are considered to be nulls
and are not counted.
The fix does not remove a possible null flag from the read values.
Rather, it just counts the values in the same way it was done
before WL#3220.
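A hypothetical repro shape (names, data, and memory limits are assumptions):
enough distinct values that the in-memory tree spills to the external file,
plus a NULL that can remain as the last value in the field's record buffer:

  CREATE TABLE t1 (a INT);
  -- load many distinct values plus at least one NULL into t1, with the
  -- tree's memory limit small enough to force the external file
  SELECT COUNT(DISTINCT a) FROM t1;  -- was undercounted: values were seen as nulls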
In some cases, when using views, the optimizer incorrectly determined
possible join orders for queries with nested outer and inner joins.
This could lead to invalid execution plans for such queries.
Additional fixes for possible overflows in length-related
calculations in 'spatial' implementations.
Checks added to the ::get_data_size() methods.
max_n_points decreased so that the maximal object occupies less than 2GB. An
object of that size is practically inoperable anyway.
Item_func_make_set wasn't taking into account the first argument when
calculating maybe_null.
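A minimal illustration of the nullability rule (standard MAKE_SET behavior):

  SELECT MAKE_SET(1 | 4, 'x', 'y', 'z');  -- 'x,z'
  SELECT MAKE_SET(NULL, 'a', 'b');        -- NULL: the first (bits) argument is NULL,
                                          -- so the item must be nullable even when
                                          -- all the string arguments are NOT NULL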
sql/item_strfunc.cc:
rewrite Item_func_make_set, removing separate storage of the first argument
sql/item_strfunc.h:
rewrite Item_func_make_set, removing separate storage of the first argument
With decimals=NOT_FIXED_DEC it is possible to have 'decimals' larger
than 'max_length'; this is not an error for temporal functions.
But when Item_func_numhybrid converts the value to DECIMAL_RESULT,
it must limit 'decimals' to be a valid scale of a decimal number.
Flip the switch and create Item_cache based on the argument's cmp_type, not the argument's result_type().
Fix subselect_engine to calculate cmp_type correctly.
sql/item_subselect.h:
mdev:4284
mysql-test/r/keywords.result:
Test that option works as table/column/variable
mysql-test/suite/funcs_1/r/storedproc.result:
OPTION is now a valid identifier
mysql-test/suite/funcs_1/t/storedproc.test:
OPTION is now a valid identifier
mysql-test/t/keywords.test:
Test that option works as table/column/variable
sql/sql_yacc.yy:
OPTION is now a valid identifier
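With this change, OPTION works as an ordinary identifier, e.g.:

  CREATE TABLE option (option INT);
  SELECT option FROM option;
  DROP TABLE option;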
get_datetime_value() should not double-cache its own Item_cache_temporal items,
but it *should* cache other Item_cache items, such as Item_cache_str.
sql/item.h:
shortcut, to avoid going through the switch in Item::cmp_type()
sql/item_cmpfunc.cc:
even if the item is Item_cache_str, it still needs to be converted and cached.
sql/item_timefunc.h:
all descendants of Item_temporal_func always have cmp_type==TIME_RESULT,
even Item_date_add_interval, which might have field_type == MYSQL_TYPE_STRING.
The bug was found by Alyssa Milburn.
If the number of points of a geometry feature read from the
binary representation is greater than 0x10000000, then
(uint32) (num_points * 16) cuts off the high bits (for example,
num_points = 0x10000001 gives 0x100000010, which is truncated
to 0x10), leading to various errors.
Fixed by an additional check for num_points > max_n_points.
This is a bug in the legacy code. It did not manifest itself because
it was masked by other bugs that were fixed by the patches for
mdev-4172 and mdev-4177.
This bug is a regression introduced by the patch for mdev-3851,
which tried to weaken the condition under which a ref access
with an extended key can be converted to an eq_ref access.
The patch formed this condition incorrectly. As a result,
while improving performance for some queries, it caused
worse performance for others.
The issue was that SHOW commands could open the table in the storage engine, even in cases
where this should not be allowed (i.e., when the storage engine's metadata for that table was undergoing major changes).
The cases where this should not be allowed are:
- ALTER TABLE DISABLE KEYS
- ALTER TABLE ENABLE KEYS
- REPAIR TABLE
- OPTIMIZE TABLE
- DROP TABLE
This patch adds a new mode, protected_against_usage(). If this is used, then the SHOW command will wait until the table
is accessible. This is implemented by reusing the already existing 'version' flag for TABLE_SHARE.
The patch also adds functions for changing TABLE_SHARE->version instead of changing it directly.
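A hypothetical two-session illustration (table name is an assumption):

  -- Session 1:
  REPAIR TABLE t1;
  -- Session 2, concurrently: previously this could open t1 while its
  -- storage-engine metadata was being rebuilt; now it waits until t1
  -- is usable again:
  SHOW TABLE STATUS LIKE 't1';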
mysql-test/r/myisam-metadata.result:
Added test case
mysql-test/t/myisam-metadata.test:
Added test case
sql/mysqld.cc:
Start from refresh_version 2 as 0 and 1 are reserved.
sql/sql_admin.cc:
Added MYSQL_OPEN_FOR_REPAIR
Updated call to wait_while_table_is_used()
sql/sql_base.cc:
Updated call to wait_while_table_is_used()
- Allow one to specify how the table should be removed (for all commands except SHOW, or for all commands).
- Don't allow one to reopen the table if one has called share->protect_against_usage().
sql/sql_base.h:
Added TDC_RT_REMOVE_NOT_OWN_AND_MARK_NOT_USABLE, which is used to mark that no one can reopen this table, except with MYSQL_OPEN_FOR_REPAIR.
- Added MYSQL_OPEN_FOR_REPAIR
- Updated prototype for wait_while_table_is_used()
sql/sql_table.cc:
Updated call to wait_while_table_is_used()
Use MYSQL_OPEN_FOR_REPAIR for opening tables that were repaired.
sql/sql_truncate.cc:
Updated call to wait_while_table_is_used()
sql/table.cc:
Use set_refresh_version()
sql/table.h:
Added functions to be used to change TABLE_SHARE->version instead of changing it directly