Additional fixes for possible overflows in length-related
calculations in 'spatial' implementations.
Checks added to the ::get_data_size() methods.
max_n_points decreased so that objects occupy less than 2GB. An
object of that size is practically inoperable anyway.
Item_func_make_set wasn't taking into account the first argument when
calculating maybe_null.
sql/item_strfunc.cc:
rewrite Item_func_make_set, removing separate storage of the first argument
sql/item_strfunc.h:
rewrite Item_func_make_set, removing separate storage of the first argument
With decimals=NOT_FIXED_DEC it is possible to have 'decimals' larger
than 'max_length'; this is not an error for temporal functions.
But when Item_func_numhybrid converts the value to DECIMAL_RESULT,
it must limit 'decimals' to a valid scale of a decimal number.
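For illustration, a minimal standalone sketch of the clamp (the constants and the helper name here are assumptions, not the server's exact code):

  #include <algorithm>
  #include <cstdint>

  // NOT_FIXED_DEC marks "no fixed number of decimals"; it is larger than
  // any valid scale of a decimal number (values assumed for illustration).
  static const uint8_t NOT_FIXED_DEC     = 31;
  static const uint8_t DECIMAL_MAX_SCALE = 30;

  // When a hybrid item switches to DECIMAL_RESULT, 'decimals' must be
  // clamped down to a valid decimal scale.
  uint8_t decimal_scale_from(uint8_t decimals)
  {
    return std::min(decimals, DECIMAL_MAX_SCALE);
  }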
Flip the switch and create Item_cache based on the argument's cmp_type, not the argument's result_type().
Fix subselect_engine to calculate cmp_type correctly.
sql/item_subselect.h:
mdev:4284
mysql-test/r/keywords.result:
Test that option works as table/column/variable
mysql-test/suite/funcs_1/r/storedproc.result:
OPTION is now a valid identifier
mysql-test/suite/funcs_1/t/storedproc.test:
OPTION is now a valid identifier
mysql-test/t/keywords.test:
Test that option works as table/column/variable
sql/sql_yacc.yy:
OPTION is now a valid identifier
get_datetime_value() should not double-cache its own Item_cache_temporal items,
but it *should* cache other Item_cache items, such as Item_cache_str.
sql/item.h:
shortcut, to avoid going through the switch in Item::cmp_type()
sql/item_cmpfunc.cc:
even if the item is Item_cache_str - it still needs to be converted and cached.
sql/item_timefunc.h:
all descendants of Item_temporal_func always have cmp_type==TIME_RESULT,
even Item_date_add_interval, which might have field_type == MYSQL_TYPE_STRING.
The reason for the problem was the negation of the signed longlong value
LONGLONG_MIN in Item_func_neg::int_op() - the result of this operation is
undefined in the C/C++ standard.
With this patch, LONGLONG_MIN is handled as a special value, and the negation
is avoided.
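A minimal standalone illustration of the pattern (the helper is hypothetical, not the server's code):

  #include <climits>

  // -LLONG_MIN is undefined behaviour in C/C++: the mathematical result
  // cannot be represented in a signed 64-bit integer, so negation must
  // special-case this value instead of computing it.
  bool safe_negate(long long v, long long *out)
  {
    if (v == LLONG_MIN)
      return true;   // report "value out of range" to the caller
    *out = -v;
    return false;    // success
  }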
The bug was found by Alyssa Milburn.
If the number of points of a geometry feature read from
binary representation is greater than 0x10000000, then
the (uint32) (num_points * 16) calculation overflows and loses the high bits,
which leads to various errors.
Fixed by an additional check: if (num_points > max_n_points).
This is a bug in the legacy code. It did not manifest itself because
it was masked by other bugs that were fixed by the patches for
mdev-4172 and mdev-4177.
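A standalone sketch of the guard (POINT_SIZE and the helper name are illustrative, not the actual spatial code):

  #include <cstdint>

  static const uint32_t POINT_SIZE   = 16;  // bytes per point, assumed
  static const uint32_t MAX_N_POINTS = 0x7FFFFFFF / POINT_SIZE;

  // Without the check, num_points > 0x10000000 makes the 32-bit
  // multiplication wrap around and report a far too small size.
  bool checked_points_size(uint32_t num_points, uint32_t *size)
  {
    if (num_points > MAX_N_POINTS)
      return true;                    // error: size would overflow
    *size = num_points * POINT_SIZE;  // now provably within range
    return false;                     // success
  }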
sql/sql_table.cc:
Don't call allow_access_to_protected_table() if we haven't protected the table against usage.
A table is mainly protected against usage when one disables keys with ALTER TABLE.
sql/sql_table.cc:
Remove version protection from the share when repair has been done.
Without this, one can't run SHOW commands on a locked table until it has been unlocked.
sql/table.h:
Allow one to remove version protection with allow_access_to_protected_table()
This bug is a regression bug. The regression was introduced by
the patch for mdev-3851, that tried to weaken the condition when
a ref access with an extended key can be converted to an eq_ref
access. The patch formed this condition incorrectly. As a result,
while improving performance for some queries, the patch caused
worse performance for other queries.
The issue was that SHOW commands could open the table in the storage engine, even in cases
where this should not be allowed (i.e., while the storage engine's metadata for that table was undergoing big changes).
The cases where this should not be allowed are:
- ALTER TABLE DISABLE KEYS
- ALTER TABLE ENABLE KEYS
- REPAIR TABLE
- OPTIMIZE TABLE
- DROP TABLE
This patch adds a new mode, protected_against_usage(). If this is used then the SHOW command will wait until the table
is accessible. This is implemented by re-using the already existing 'version' flag for TABLE_SHARE.
It also adds functions to be used to change TABLE_SHARE->version instead of changing it directly.
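A hypothetical mirror of the scheme (the struct is illustrative, not the actual TABLE_SHARE layout):

  #include <cstdint>

  // 'version' doubles as a usability flag: values 0 and 1 are reserved
  // sentinels, so the global refresh version starts at 2.
  struct Share
  {
    uint64_t version;

    void protect_against_usage()            { version = 0; }  // block reopen
    bool is_protected_against_usage() const { return version == 0; }
    void allow_access_to_protected_table(uint64_t refresh_version)
    { version = refresh_version; }          // lift the protection
  };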
mysql-test/r/myisam-metadata.result:
Added test case
mysql-test/t/myisam-metadata.test:
Added test case
sql/mysqld.cc:
Start from refresh_version 2 as 0 and 1 are reserved.
sql/sql_admin.cc:
Added MYSQL_OPEN_FOR_REPAIR
Updated call to wait_while_table_is_used()
sql/sql_base.cc:
Updated call to wait_while_table_is_used()
- Allow one to specify how the table should be removed (for all commands except show or for all commands).
- Don't allow one to reopen the table if one has called share->protect_against_usage()
sql/sql_base.h:
Added TDC_RT_REMOVE_NOT_OWN_AND_MARK_NOT_USABLE, which is used to mark that no one can reopen this table, except with MYSQL_OPEN_FOR_REPAIR.
- Added MYSQL_OPEN_FOR_REPAIR
- Updated prototype for wait_while_table_is_used()
sql/sql_table.cc:
Updated call to wait_while_table_is_used()
Use MYSQL_OPEN_FOR_REPAIR when opening tables that were repaired.
sql/sql_truncate.cc:
Updated call to wait_while_table_is_used()
sql/table.cc:
Use set_refresh_version()
sql/table.h:
Added functions to be used to change TABLE_SHARE->version instead of changing it directly
Do not include BLOB fields into the key to access the temporary
table created for a materialized view/derived table.
BLOB components are not allowed in keys.
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef
committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
timestamp: Fri 2012-03-09 15:04:49 +0200
message:
Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO it will be properly marked as non-const
for this query.
Test case added.
Used proper fast iterator.
a better fix (much smaller and without regressions) is coming from 5.1
PROBLEM AFTER MYSQL_HA_FIND
This problem occurred if a prepared statement tried to create a table
for which there already existed a view with the same name while an
SQL handler was open.
Before DDL statements are executed, mysql_ha_rm_tables() is called
to remove any matching tables from the internal list of opened SQL
handler tables. This match was done on TABLE_LIST::db and
TABLE_LIST::table_name. This is problematic for views (which use
TABLE_LIST::view_db and TABLE_LIST::view_name) and anonymous
derived tables.
This patch fixes the problem by skipping TABLE_LISTs representing
anonymous derived tables and using get_db_name()/get_table_name()
which handles views when looking for SQL handler tables to remove.
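A simplified sketch of the matching rule (the struct and helper are illustrative stand-ins for TABLE_LIST, not the real API):

  #include <cstring>

  struct TableRef
  {
    const char *db, *table_name;      // base-table names
    const char *view_db, *view_name;  // set only for views
    bool is_anonymous_derived;

    const char *get_db_name() const    { return view_db ? view_db : db; }
    const char *get_table_name() const { return view_name ? view_name : table_name; }
  };

  // SQL handler cleanup skips anonymous derived tables and matches views
  // by their view names, not by the underlying base-table names.
  bool handler_matches(const TableRef &t, const char *db, const char *name)
  {
    if (t.is_anonymous_derived)
      return false;
    return strcmp(t.get_db_name(), db) == 0 &&
           strcmp(t.get_table_name(), name) == 0;
  }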
MySQL Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH
and
MySQL Bug#14664077 SEVERE PERFORMANCE DEGRADATION IN SOME CASES WHEN USER VARIABLES ARE USED
sql/item_func.cc:
don't use anything from Item_func_set_user_var::fix_fields()
in Item_func_set_user_var::save_item_result()
sql/sql_class.cc:
Call suv->save_item_result(item) *before* doing suv->fix_fields(), because
the former evaluates the item (and caches its value), while the latter marks
the user variable as non-const. The problem is that the item was fix_field'ed
when the user variable was const, and it doesn't expect it to change to non-const
in the middle of the execution.
mysys/errors.c:
revert upstream's fix. use a much simpler one
mysys/my_write.c:
revert upstream's fix. use a simpler one
sql/item_xmlfunc.cc:
useless, but ok
sql/mysqld.cc:
simplify upstream's fix
storage/heap/hp_delete.c:
remove upstream's fix.
we'll use a much less expensive approach.
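A toy model of the call-order constraint described under sql/sql_class.cc above (all names hypothetical):

  // Evaluate and cache the value while the variable is still "const";
  // only then run the fixup that flips it to non-const.
  struct UserVar
  {
    bool is_const = true;
    long long cached = 0;

    void save_item_result(long long item_value) { cached = item_value; }
    void fix_fields()                           { is_const = false; }
  };

  void set_user_var(UserVar &v, long long item_value)
  {
    v.save_item_result(item_value);  // first: evaluate and cache
    v.fix_fields();                  // then: mark the variable non-const
  }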
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
A coincidence of several conditions caused data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of LEFT JOIN ... ON
FALSE of const table row with a row of nulls);
* we evaluate Item_is_not_null_test::val_int() as a part
of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return empty set);
* however, during the evaluation of the
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag -- that
forced the subquery to return UNKNOWN instead of empty
set.
That caused a wrong result.
The problem is a regression of the small cleanup in
the fix for the bug11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for the bug11752543.
Before that regression, Item_is_not_null_test items
were never constants.
The fix is the rollback of Item_is_not_null_test parts
of the bug11827369 fix.
GRANT STATEMENT
Description: A missing length check causes a problem while
copying source to destination when
lower_case_table_names is set to a value
other than 0. This patch fixes the issue
by ensuring that the required bounds check is
performed.
ANALYSIS
--------
When we open the view using open_new_frm(), it doesn't set the
table_list->table variable, and any access to table_list->table
will cause a crash.
FIX
---
Added a check during execution of the alter partition command to return an
error if the table is a view.
[http://rb.no.oracle.com/rb/r/2001/ Approved by Mattias J ]
Problem:
When a system variable is being set to the DEFAULT value, the server
segfaults if there is no 'default' defined for that system variable.
For example, the server segfaults for the following statements:
set session rand_seed1=DEFAULT;
set session rand_seed2=DEFAULT;
Analysis:
The class sys_var represents one system variable. The class set_var represents
one system variable that is to be updated. The class set_var contains two
pieces of information: the system variable object (set_var::var)
and the value to be updated (set_var::value).
When the given value is 'default', the set_var::value will be NULL.
To update a system variable the member set_var::update() will be called,
which in turn will call sys_var::update() or sys_var::set_default() depending
on whether a value has been provided or not.
If the sys_var::set_default() is called, then the default value is obtained
either from the session scope or the global scope. This default value is
stored in a local temporary set_var object and then passed on to the
sys_var::update() call. A local temporary set_var object is needed because
sys_var::set_default() does not take set_var as an argument.
In the given scenario, the set_var::update() called sys_var::set_default().
And this sys_var::set_default() obtains the default value and then calls
sys_var::update(). To pass this value to sys_var::update() a local set_var
object is being created. While creating this local set_var object, its member
set_var::var was incorrectly left as 0.
Solution:
Instead of creating a local set_var object, the sys_var::set_default() can take
the set_var object as an argument just like sys_var::update().
rb://1996 approved by Nirbhay and Ramil.
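A hypothetical sketch of the interface change (simplified types, not the real sys_var/set_var classes):

  // set_default() now receives the set_var being updated, just like
  // update(), so no half-initialized local set_var (var == 0) is needed.
  struct sys_var;

  struct set_var
  {
    sys_var  *var;        // stays correctly set throughout the update
    long long value;
    bool      has_value;  // false means "SET ... = DEFAULT"
  };

  struct sys_var
  {
    long long global_default = 0;
    long long session_value  = 0;

    void update(set_var *v) { session_value = v->value; }

    void set_default(set_var *v)   // was: set_default() with no argument
    {
      v->value = global_default;   // fill in the default value
      update(v);                   // v->var is never left as 0
    }
  };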
The function remove_eq_cond removes the parts of a disjunction
for which it has been proved that they are always true. In the
result of this removal the disjunction may be converted into a
formula without OR that must be merged into the AND formula
that contains the disjunction.
The merging of two AND conditions must take into account the
multiple equalities that may be part of each of them.
These multiple equalities must be merged and become part of the
AND object built as the result of the merge of the AND conditions.
Erroneously the function remove_eq_cond lacked the code that
would merge multiple equalities of the merged AND conditions.
This could lead to confusing situations when at the same AND
level there were two multiple equalities with common members
and the list of equal items contained only some of these
multiple equalities.
This, in turn, could lead to incorrect work of the
function substitute_for_best_equal_field when it tried to optimize
ref accesses. This resulted in forming invalid TABLE_REF objects
that were used to build look-up keys when materialized subqueries
were exploited.
Problem:
When the VALUES() function is inappropriately used in a SET statement, the server
exits.
set port = values(v);
This happens because the values(v) will be parsed as an Item_insert_value by
the parser. Both Item_field and Item_insert_value return the type as
FIELD_ITEM. But for Item_insert_value the field_name member is NULL. In
the set_var constructor, when the type of the item is FIELD_ITEM, we try to
access the non-existent field_name.
The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value
The Item_ident::field_name is NULL for Item_insert_value.
Solution:
In the parsing stage, in the set_var constructor, if the item type is
FIELD_ITEM and the field_name is non-existent, then the item is probably
an Item_insert_value. So leave it as it is for later evaluation.
rb://2004 approved by Roy and Norvald.
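A simplified sketch of the guard (illustrative types, not the real Item hierarchy):

  // Both Item_field and Item_insert_value report type FIELD_ITEM, but
  // only Item_field carries a usable field_name; check it before use.
  struct Item
  {
    enum Type { FIELD_ITEM, OTHER_ITEM } type;
    const char *field_name;  // nullptr for Item_insert_value
  };

  bool is_system_variable_candidate(const Item &it)
  {
    // anything failing this test is left as-is for later evaluation
    return it.type == Item::FIELD_ITEM && it.field_name != nullptr;
  }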
This bug in the legacy code could manifest itself in queries with
semi-join materialized subqueries.
When a subquery is materialized all conditions that are imposed
only on the columns belonging to the tables from the subquery
are taken into account. The code responsible for subquery optimizations
that employes subquery materialization makes sure to remove these
conditions from the WHERE conditions of the query obtained after
it has transformed the original query into a query with a semi-join.
If the condition to be removed is an equality condition it could
be added to ON expressions and/or conditions from disjunctive branches
(parts of OR conditions) in an attempt to generate better access keys
to the tables of the query. Such equalities are supposed to be removed
later from all the formulas to which they have been added.
However, erroneously, this was not done in some cases when an ON
expression and/or a disjunctive part of the OR condition could
be converted into one multiple equality. As a result some equality
predicates over columns belonging to the tables of the materialized
subquery remained in the ON condition and/or a disjunctive
part of the OR condition, and the executor later, when trying to
evaluate them, returned wrong answers as the values of the fields
from these equalities were not valid.
This happened because any standalone multiple equality (a multiple
equality that is not ANDed with any other predicates) lacked
the information about equality predicates inherited from upper
levels (in particular, inherited from the WHERE condition).
The fix adds a reference to such information to any standalone
multiple equality.
The wrong result set returned by the left join query from
the bug test case happened due to several inconsistencies
and bugs of the legacy mysql code.
The bug test case uses an execution plan that employs a scan
of a materialized IN subquery from the WHERE condition.
When materializing such an IN subquery the optimizer injects
additional equalities into the WHERE clause. These equalities
express the constraints imposed by the subquery predicate.
The injected equality of the query in the test case happens
to belong to the same equality class, and a new equality
imposing a condition on the rows of the materialized subquery
is inferred from this class. Simultaneously the multiple
equality is added to the ON expression of the LEFT JOIN
used in the main query.
The inferred equality of the form f1=f2 is taken into account
when optimizing the scan of the rows of the temporary table
that is the result of the subquery materialization: only the
values of the field f1 are read from the table into the record
buffer. Meanwhile the inferred equality is removed from the
WHERE conditions altogether as a constraint on the fields
of the temporary table that has been used when filling this table.
This equality is supposed to be removed from the ON expression
when the multiple equalities of the ON expression are converted
into an optimal set of equality predicates. It is supposed to be
removed from the ON expression as an equality inferred only from
equalities of the WHERE condition. Yet, this did not happen
due to the following bug in the code.
Erroneously the code tried to build multiple equalities for the ON
expression twice: the first time when it called optimize_cond()
for the WHERE condition, and the second time when it called
this function for the HAVING condition. When executing
optimize_cond() for the WHERE condition, a reference
to the multiple equality of the WHERE condition is set
in the multiple equality of the ON expression. This reference
would later allow converting multiple equalities of the
ON expression into equality predicates. However, the
second call of build_equal_items() for the ON expression,
which happened when optimize_cond() was called for the
HAVING condition, reset this reference to NULL.
This bug fix blocks calling build_equal_items() for ON
expressions for the second time. In general, it will be
beneficial for many queries as it removes from ON
expressions any equalities that are to be checked for the
WHERE condition.
The patch also fixes two bugs in the list manipulation
operations and a bug in the function
substitute_for_best_equal_field() that resulted
in passing a wrong reference to the multiple equalities
of WHERE conditions when processing multiple
equalities of ON expressions.
The code of substitute_for_best_equal_field() and
the code of the helper function eliminate_item_equal()
were also streamlined and cleaned up.
Now the conversion of the multiple equalities into
an optimal set of equality predicates first produces
the sequence of all the equalities, processing multiple
equalities one by one, and only after this inserts
the equalities at the beginning of the other conditions.
The multiple changes in the output of EXPLAIN
EXTENDED are mainly the result of this streamlining,
but in some cases they are the result of the removal of
unneeded equalities from ON expressions. In
some test cases this removal was reflected in the
output of EXPLAIN and resulted in the disappearance of
"Using where" in some rows of the execution plans.
Problem:
===========================================================
If the mysqld daemon is started without a --datadir option,
and we issue the SHOW VARIABLES LIKE 'DATADIR'; SQL command
at the client, it returns an empty path. This is because
mysql_real_data_home_ptr is being reset to NULL by Sys_var_charptr
constructor call when the datadir is not given either through
configuration file (no-defaults) or through mysqld parameters.
Solution:
===========================================================
mysql_real_data_home is an array which stores the path of the datadir
and mysql_real_data_home_ptr is the pointer to it. The pointer was
being set to NULL in the constructor call of Sys_datadir, which is of
type Sys_var_charptr. This is because at the Sys_datadir call the def_val
parameter was being passed as DEFAULT(0), which is now replaced with
DEFAULT(mysql_real_data_home). The patch has been tested manually as it
is not possible to start mtr without a default config file.
In the method mysql_binlog_send, right after detecting an EOF in the
read event loop, and before deciding if we should change to a new
binlog file, there is an execution window where new events can be
written to the binlog and a rotation can happen. When reaching
the test, the function will then change to a new binlog file
ignoring all the events written in this window. This will result
in events not being replicated.
Only when the binlog is detected as deactivated in the event loop
of the dump thread, can we really know that no more events
remain. For this reason, this test is now made under the log lock
in the beginning of the event loop when reading the events.
The technical problem was that THD::user_var_events_alloc was reset to NULL
from a valid value when a stored program is executed during the PREPARE statement.
The user visible problem was that the server crashed if user issued a PREPARE
statement using some combination of stored functions and user variables.
The fix is to restore THD::user_var_events_alloc to the original value.
This is a minimal fix for 5.5.
A more proper patch has already been implemented for 5.6+. It avoids
evaluation of stored functions during the PREPARE phase.
From the user point of view, this bug is a regression, introduced by the patch for WL2649
(Number-to-string conversions), revid: bar@mysql.com-20100211041725-ijbox021olab82nv
However, the code resetting THD::user_var_events_alloc exists even in 5.1.
The WL just changed the way arguments are converted to strings and the bug became visible.
DOWNGRADED FROM 5.6.11 TO 5.6.10
The problem was new syntax not being accepted by the previous version.
Fixed by adding a version comment of /*!50531 around the
new syntax.
Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also changing the output from SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
It will always add the ALGORITHM into the .frm for KEY [sub]partitioned
tables, but for SHOW CREATE TABLE it will only add it in case it is the non
default ALGORITHM = 1.
Also notice that for 5.5, it will say /*!50531 instead of /*!50611, which
will make upgrade from 5.5 > 5.5.31 to 5.6 < 5.6.11 fail!
If one downgrades a fixed version to the same major version (5.5 or 5.6),
bug 14521864 will be visible again, but unless the .frm is updated, it will
work again when upgrading again.
Also fixed so that the .frm version does not get updated
if a single partition check passes.
Analysis:
Range analysis detects that the subquery is expensive and doesn't
build a range access method. Later, the applicability test for loose
scan doesn't take that into account, and builds a loose scan method
without a range scan on the min/max column. As a result loose scan
fetches the first key in each group, rather than the first key that
satisfies the condition on the min/max column.
Solution:
Since there is no SEL_ARG tree to be used for the min/max column,
it is not possible to use loose scan if the min/max column is compared
with an expensive scalar subquery. Make the test for loose scan
applicability be in sync with the range analysis code by testing whether
the min/max argument is compared with an expensive predicate.
This bug happened because the executor tried to use a wrong
TABLE_REF object when building access keys. It constructed
keys from fields of a materialized table using a ref object
created to construct keys from the fields of the underlying
base table. This could happen only when the materialized table
was created for a non-correlated IN subquery and only
when the materialized table was used for lookups.
In this case we are guaranteed to be able to construct the
keys from the fields of tables that would be outer tables
for the tables of the IN subquery.
The patch makes sure that no ref objects constructed from
fields of materialized lookup tables are used.
5.1 SERVER
The problem was caused by the COM_CHANGE_USER parsing code. That code ignored
the character set number passed in the COM_CHANGE_USER packet. Instead,
the character_set_client value was used. The user name was not converted at all.
Fixed by using the passed character set number to convert both db and user names.
If COM_CHANGE_USER does not contain a character set number, then
character_set_client is used to convert both names.
This is a backport of the fix for:
Bug#13633549 HANDLE_FATAL_SIGNAL IN TEST_IF_SKIP_SORT_ORDER/CREATE_SORT_INDEX
Don't invoke the range optimizer for a NULL select.
Analysis:
The cause for the wrong result was that the optimizer
incorrectly chose min/max loose scan when it is not
applicable. The applicability test missed the case when
a condition on the MIN/MAX argument was OR-ed with a
condition on some other field. In this case, the MIN/MAX
condition cannot be used for loose scan.
Solution:
Extend the test check_group_min_max_predicates() to check
that the WHERE clause is of the form: "cond1 AND cond2"
where
cond1 - does not use min_max_column at all,
cond2 - is an AND/OR tree with leaves of the form "min_max_column $CMP$ const",
where $CMP$ is a comparison operator or one of the functions BETWEEN and IS [NOT] NULL.
PROBLEM:
When a large number of connections are continuously made
with a wait_timeout of 600 seconds for some hours, some
connections remain after the wait_timeout has expired, and
new connections get stuck under the configuration and
scenario reported in bug#16196591.
FIX:
The cause of this bug is the issue identified and fixed in
BUG#16088658 in 5.6. Also, the LOCK_thread_count contention
issue fixed in BUG#15921866 in 5.6 needs to be in 5.5 as
well. Since the issue is not reproducible, it has been
verified at the customer configuration that the issue could not
be reproduced after a 48-hour test with a non-debug build
which includes the above two fixes backported.
Analysis:
--------
As part of the fix for Bug#11757464, the 'out of memory' error
condition was not pushed to the diagnostic area as it requires
memory allocation. However in cases of SIGNAL/RESIGNAL 'out of
memory' error, the server may not be out of memory. Hence it
would be good to report the error in such cases.
Fix:
---
Push only non-fatal 'out of memory' errors to the diagnostic area.
Since a SIGNAL/RESIGNAL of an 'out of memory' error may not be fatal,
the error is reported.
Analysis:
Range analysis discovers that the query can be executed via loose index scan for GROUP BY.
Later, GROUP BY analysis fails to confirm that the GROUP operation can be computed via an
index because there is no logic to handle duplicate field references in the GROUP clause.
As a result the optimizer produces an inconsistent plan. It constructs a temporary table,
but on the other hand the group fields are not set to point there.
Solution:
Make loose scan analysis work in sync with order by analysis. In the case of duplicate
columns loose scan will not be applicable. This limitation will be lifted in 10.0 by
removing duplicate columns.
Some queries with nested "SELECT ... FROM DUAL" subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.
There were a few different issues with similar assertion
failures on different queries:
1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).
Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they are
not constant from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.
Fix: the Item_ref::const_item() function has been overloaded to
take into account both the (*ref)->const_item() status and the tricky
Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there.
2. In some cases instead of the const_item() call we check a
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.
Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.
3. The Item_func_regex class didn't propagate with_subselect
as well, since it overloads the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.
Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.
4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of
the "args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's ok to call Item_func_isnull::val_int() etc from outer
items on preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.
Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.
5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
< <is_not_null_test>(a),
---
> <cache>(<is_not_null_test>(a)),
or
< having (<is_not_null_test>(a) and <is_not_null_test>(a))
---
> having 1
etc.
Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
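A standalone sketch of keeping the two methods in sync (illustrative types, not the real Item_func_isnull/Item_is_not_null_test classes):

  // Once update_used_tables() folds the item to a constant, val_int()
  // must return the cached value instead of touching the argument.
  struct NullTest
  {
    bool arg_maybe_null   = true;
    bool const_item_cache = false;
    bool cached_value     = false;

    void update_used_tables()
    {
      if (!arg_maybe_null)         // the argument can never be NULL
      {
        const_item_cache = true;
        cached_value     = false;  // "arg IS NULL" is known to be false
      }
    }

    bool val_int()
    {
      if (const_item_cache)
        return cached_value;       // do not evaluate the argument
      return evaluate_argument_is_null();
    }

    bool evaluate_argument_is_null() { return false; }  // stub
  };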
Backport of fix for Bug#13581962
mysql-test/r/cast.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/r/ctype_utf8mb4.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/t/cast.test:
Added test case for Bug#13581962,Bug#14096619
mysql-test/t/ctype_utf8mb4.test:
Added test case for Bug#13581962,Bug#14096619
sql/item_func.h:
limit max length by MY_INT64_NUM_DECIMAL_DIGITS
Backport of Bug#13581962
mysql-test/r/cast.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/t/cast.test:
Added test case for Bug#13581962,Bug#14096619
sql/item_func.h:
limit max length by MY_INT64_NUM_DECIMAL_DIGITS
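For illustration, a standalone sketch of the cap (the helper name is hypothetical; MY_INT64_NUM_DECIMAL_DIGITS is 21 in the sources, enough for any 64-bit integer plus a sign):

  #include <algorithm>
  #include <cstdint>

  static const uint32_t MY_INT64_NUM_DECIMAL_DIGITS = 21;

  // A CAST to SIGNED/UNSIGNED can never print more characters than a
  // 64-bit integer needs, so max_length is capped accordingly.
  uint32_t int_cast_max_length(uint32_t arg_max_length)
  {
    return std::min(arg_max_length, MY_INT64_NUM_DECIMAL_DIGITS);
  }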
Due to an internal change in the server code in between 5.1 and 5.5
(wl#2649) the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character based hash calculation).
Also enum/set changed from latin1 ci based hash calculation to
binary hash between 5.1 and 5.5. (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of partition id differs.
Also since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that delete/update will go to the
partition the row was read from and give an error otherwise, consisting
of the row's partitioning fields. This will avoid asserts in InnoDB and
also alert the user that there is a misplaced row. A detailed error
message will be given, including an entry to the error log consisting
of both table name, partition and row content (PK if exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
Where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it will
default to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding scanning through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before, which allows manually
upgrading such table by something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows, and every misplaced row is moved into its correct
partition.
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version of >= 5.5.3 and ALGORITHM
is not set, it is not possible to know if it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N, without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
The assertion happens in a replication thread during THD destruction, when the thread calls my_sync(), which in turn calls the thd_wait_begin() callback. The connection count can be 0, because the counter was decremented before the THD destructor.
This assertion is currently reproducible only in Percona Server and not in MariaDB, due to differences in replication code.
Fixed by moving the code that decrements the connection counter to after the THD destructor.
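A toy model of the ordering fix (names hypothetical):

  #include <atomic>

  std::atomic<int> connection_count{1};

  struct THD
  {
    ~THD()
    {
      // may call my_sync() -> thd_wait_begin(), which expects the
      // connection count to still be consistent
    }
  };

  void end_connection(THD *thd)
  {
    delete thd;          // run the destructor first...
    --connection_count;  // ...and only then drop the counter
  }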
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794.
While checking for the keypart covering a non-grouping attribute, we were not
checking whether the root node of the SEL_ARG* tree for the index has any
cvalue or not.
sql/opt_range.cc:
check whether the keypart_tree has any range tree.
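A minimal sketch of the guard (illustrative types, not the real SEL_ARG):

  // Before treating a keypart as covered by a range, make sure the
  // SEL_ARG root actually carries a value (a "cvalue").
  struct SelArg
  {
    bool has_value;  // false for a root without any cvalue
  };

  bool keypart_has_range(const SelArg *root)
  {
    return root != nullptr && root->has_value;
  }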
In a previous fix, user variables with a zero-length name were incorrectly
considered as event corruption, even though they are allowed by the server.
Fix this wrong assumption by again allowing user variables with a zero-length
name in the binary log.
from MariaDB 10.0.
The bug in mdev-3948 was an instance of the problem fixed by Sergey's patch
in 10.0 - namely that the range optimizer could change table->[read | write]_set,
and not restore it.
revno: 3471
committer: Sergey Petrunya <psergey@askmonty.org>
branch nick: 10.0-serg-fix-imerge
timestamp: Sat 2012-11-03 12:24:36 +0400
message:
# MDEV-3817: Wrong result with index_merge+index_merge_intersection, InnoDB table, join, AND and OR conditions
Reconcile the fixes from:
#
# guilhem.bichot@oracle.com-20110805143029-ywrzuz15uzgontr0
# Fix for BUG#12698916 - "JOIN QUERY GIVES WRONG RESULT AT 2ND EXEC. OR
# AFTER FLUSH TABLES [-INT VS NULL]"
#
# guilhem.bichot@oracle.com-20111209150650-tzx3ldzxe1yfwji6
# Fix for BUG#12912171 - ASSERTION FAILED: QUICK->HEAD->READ_SET == SAVE_READ_SET
# and
#
and related fixes from: BUG#1006164, MDEV-376:
Now, ROR-merged QUICK_RANGE_SELECT objects make no assumptions about the values
of table->read_set and table->write_set.
Each QUICK_ROR_SELECT has (and had before) its own column bitmap, but now, all
QUICK_ROR_SELECT's functions that care: reset(), init_ror_merged_scan(), and
get_next() will set table->read_set when invoked and restore it back to what
it was before the call when they return.
This avoids the mess that arises when somebody else modifies table->read_set for
some reason.
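A simplified sketch of the save/restore pattern (illustrative types, not the real QUICK_ROR_SELECT):

  #include <cstdint>

  struct Table { const uint64_t *read_set; };

  // Each merged scan owns its column bitmap, installs it on entry and
  // restores the caller's bitmap on exit, so nothing depends on the
  // bitmap's prior state.
  struct RorScan
  {
    uint64_t own_columns = 0;

    void get_next(Table *t)
    {
      const uint64_t *save = t->read_set;  // remember caller's bitmap
      t->read_set = &own_columns;          // install our own
      // ... fetch the next row ...
      t->read_set = save;                  // restore before returning
    }
  };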
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA INFILE query, user variables are allowed
inside "Into_list" and "Set_list". The user variables used
inside these two lists were not properly guarded with backticks
while the server was writing into the binlog. Hence user variable names
like a` could not be used in this context.
Fix: Properly quote these variables while
writing into the binlog.
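A standalone sketch of the quoting rule (hypothetical helper, not the server's actual identifier-quoting code):

  #include <string>

  // Wrap an identifier in backticks, doubling any embedded backtick, so
  // a name like  a`  round-trips through the binlog as  `a``` .
  std::string quote_identifier(const std::string &name)
  {
    std::string out = "`";
    for (char c : name)
    {
      if (c == '`')
        out += '`';  // escape by doubling
      out += c;
    }
    out += '`';
    return out;
  }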
mysql-test/r/func_compress.result:
changing result file
mysql-test/r/variables.result:
changing result file
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
changing result file
sql/item_func.cc:
Quote the user variable items
Due to not resetting a member (last_added) of
the Deferred events class inside a cleanup function
(Deferred_log_events::rewind), there is a memory
leak on filtered slaves.
Fix:
Resetting last_added to NULL in the rewind() function.
sql/rpl_utility.cc:
Resetting last_added to NULL to avoid memory leak
The problem was that a temporary table was re-created as a non-temporary table.
mysql-test/suite/maria/truncate.result:
Added test cases
mysql-test/suite/maria/truncate.test:
Added test cases
sql/sql_truncate.cc:
Mark that table to be created is a temporary table
storage/maria/ha_maria.cc:
Ensure that temporary tables are not transactional.