Issue:
------
VALUES() does not have a type() function and is considered an
Item_field.
Solution for 5.7:
-----------------
Add a new type() function for Item_values_insert.
On 8.0 and trunk it was fixed by Mithun's Bug#19601973.
Solution for 5.6:
-----------------
Additionally Bug#17458914 is backported.
This will address the problem of using VALUES() in
INSERT ... ON DUPLICATE KEY UPDATE. Create a field object
only if it is in the UPDATE clause, else return a NULL
item.
This will also address the problems mentioned in
Bug#14789787 and Bug#16756402.
Solution for 5.5:
-----------------
As mentioned above Bug#17458914 is backported.
Additionally Bug#14786324 is also backported.
When VALUES() is detected outside its meaningful place,
it should be treated as NULL and is thus replaced with a
Field_null object, with the same name as the original
field.
Fields with type NULL are generally not handled well inside
the server (e.g. InnoDB will not accept them, and it is
impossible to create them in regular tables). So create a
new const NULL item instead.
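As a minimal illustration (table and column names are hypothetical):

CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
-- Meaningful place: VALUES(b) refers to the value that would have been inserted.
INSERT INTO t1 (a,b) VALUES (1,2) ON DUPLICATE KEY UPDATE b = VALUES(b);
-- Outside its meaningful place: VALUES(b) is treated as NULL.
SELECT VALUES(b) FROM t1;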
uses alias in HAVING when sql_mode = 'ONLY_FULL_GROUP_BY'
This patch corrects the fix for bug#18739 (non-standard
HAVING extension was allowed in strict ANSI SQL mode),
added in 2006 by commit 4b7c4cd27f.
As a result of the incompleteness of the fix in the above commit,
if a query with GROUP BY contained an aggregate function with an
alias and this alias was used in the HAVING clause of the query,
the server reported an error when sql_mode was set to
'ONLY_FULL_GROUP_BY'.
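For example (hypothetical table and column names), with this sql_mode the
following query used to be rejected even though the alias only names an
aggregate:

SET sql_mode = 'ONLY_FULL_GROUP_BY';
SELECT a, COUNT(b) AS cnt FROM t1 GROUP BY a HAVING cnt > 0;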
Differentiate between pullout for merge and pullout of an outer field during the exists-to-in transformation.
In the latter case the field was an outer reference, so we can safely start from the name resolution context of the SELECT from which it was pulled.
The old behavior led to an inconsistency between the list of tables and the outer name resolution context (which skips one SELECT for merge purposes), which caused problems for name resolution.
based on:
commit f7316aa0c9
Author: Ajo Robert <ajo.robert@oracle.com>
Date: Thu Aug 24 17:03:21 2017 +0530
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST,
COL), NAME_CONST('NAME', NULL))
Backport of Bug#19143243 fix.
The NAME_CONST item can return the NULL_ITEM type in case of incorrect arguments.
NULL_ITEM has special processing in the Item_func_in function.
In Item_func_in::fix_length_and_dec an array of possible comparators is
created. Since the NAME_CONST function has the NULL_ITEM type, the corresponding
array element is empty. Then NAME_CONST is wrapped in an ITEM_CACHE.
The ITEM_CACHE cannot return the proper type (NULL_ITEM) in Item_func_in::val_int(),
so the NULL_ITEM ends up being compared with an empty comparator.
The fix is to disable caching of the Item_name_const item.
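The crashing pattern from the bug title roughly corresponds to a query of
this shape (table and column names are hypothetical):

SELECT * FROM t1 WHERE a IN (IFNULL(1, a), NAME_CONST('name', NULL));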
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and long casts in DBUG)
- Also fix printf-format warnings
Make the above-mentioned warnings fatal.
- Fix pthread_join on Windows to set the return value.
This should also fix the MariaDB 10.2.2 bug
MDEV-13826 CREATE FULLTEXT INDEX on encrypted table fails.
MDEV-12634 FIXME: Modify innodb-index-online, innodb-table-online
so that they will write and read merge sort files. InnoDB 5.7
introduced some optimizations to avoid using the files for small tables.
Many collation test results have been adjusted for MDEV-10191.
The problem was introduced by the patch for MDEV-7661,
which (in addition to the fix itself) included an attempt to make
CONVERT/CAST work in the same way as with fields
(i.e. return NULL in strict mode if a non-convertible character is found).
This turned out to be a bad idea and some users were affected by the
behavior change. Change CONVERT/CAST so that it does not depend on sql_mode
(restoring the pre-10.1.4 behavior).
During statement preparation st_order::item gets set to a value in
ref_ptr_array. During statement execution we were overriding that value,
causing subsequent checks for window functions to return true.
Whenever we do any setting from ref_ptr_array, make sure to always
store the value in all_fields as well.
For function items containing window functions, as MDEV-12336 has
discovered, we don't need to create a separate Item_direct_ref or
Item_aggregate_ref as they will be computed directly from the top-level
item once the window function argument columns are computed.
This patch fixes a serious flaw in the
code that supports condition pushdown into
materialized views / derived tables.
If a predicate happened to contain a reference
to a mergeable view / derived table and it did
not depend directly on the target materialized
view / derived table, then the predicate was not
considered a subject for pushdown into this view
/ derived table.
This is another attempt to fix the bug mdev-12992.
This patch introduces st_select_lex::context_analysis_place for
the place in SELECT where context analysis is currently performed.
It's similar to st_select_lex::parsing_place, but it is used at
the preparation stage.
The problem lies in how CURRENT_ROLE is defined. The
Item_func_current_role inherits from Item_func_sysconst, which defines
a safe_charset_converter to be a const_charset_converter.
During view creation, if there is no role previously set, the current_role()
function returns NULL.
This is captured on item instantiation and the
const_charset_converter call subsequently returns an Item_null.
In turn, the function is replaced with Item_null and the view is
then created with an Item_null instead of Item_func_current_role.
Without this patch, the first SHOW CREATE VIEW from the testcase would
have a where clause of WHERE role_name = NULL, while the second SHOW
CREATE VIEW would show a correctly created view.
The same applies for the DATABASE function, as it can change as well.
There is an additional problem with CURRENT_ROLE() when used in a
prepared statement. During prepared statement creation we used to set
the string_value of the function to the current role as well as the
null_value flag. During execution, if CURRENT_ROLE was not null, the
null_value flag was never set to not-null during fix_fields.
Item_func_current_user, however, can never be NULL, so it did not show this
problem in a view before. At the same time, CURRENT_USER() cannot
change between prepared statement creation and execution, so the
implementation where the value is stored during fix_fields is
sufficient.
Note also that the DATABASE() function behaves differently in prepared
statements. See bug 25843 or commit 7e0ad09edf for details.
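A rough reconstruction of the view scenario (table, column and role names
are hypothetical; the actual testcase may differ):

CREATE VIEW v1 AS SELECT * FROM t1 WHERE role_name = CURRENT_ROLE();
SHOW CREATE VIEW v1;  -- without this patch: WHERE role_name = NULL
SET ROLE r1;
CREATE VIEW v2 AS SELECT * FROM t1 WHERE role_name = CURRENT_ROLE();
SHOW CREATE VIEW v2;  -- shows a correctly created view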
When the SELECT query from a trigger that used a subquery
in its SELECT list was prepared, the counter select_n_having_items
was incremented in the constructor Item::Item(THD *thd).
As a result, each invocation of the trigger required more and more
memory for the ref_pointer_array of this SELECT.
Made sure that the counter st_select_lex::select_n_having_items
is incremented only at the first execution of such a trigger.
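A minimal shape of such a trigger (tables and columns are hypothetical):

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE TABLE t3 (c INT);
CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW
  INSERT INTO t2 SELECT (SELECT MAX(c) FROM t3);
-- Before the fix, every row inserted into t1 (i.e. every trigger
-- invocation) grew select_n_having_items and hence the memory needed
-- for the ref_pointer_array of that SELECT.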
GROUP BY
Issue 1:
--------
This problem occurs in the following conditions:
1) A UNION is present in the subquery of select list and
handles multiple columns.
2) Query has a GROUP BY.
A temporary table is created to handle the UNION.
Item_field objects are based on the expressions of the
result of the UNION (i.e. the fake_select_lex). While
checking the validity of the columns in the GROUP BY list, the
columns of the temporary table are checked in
Item_ident::local_column. But the Item_field objects
created for the temporary table don't have information such as
the Name_resolution_context that they belong to or whether
they are dependent on an outer query. Since these members
are null, incorrect behavior results.
This can happen when such Item objects are cached to apply
the IN-to-EXISTS transform for Item_row.
Solution to Issue 1:
--------------------
Context information of the first select in the UNION will
be assigned to the new Item_field objects.
Issue 2:
--------
This problem occurs in the following conditions:
1) A UNION is present in the subquery of select list.
2) A column in the UNION's first SELECT refers to a table
in the outer-query making it a dependent union.
3) GROUP BY column refers to the outer-referencing column.
While resolving the select list with an outer-reference, an
Item_outer_ref object is created to handle the
outer-query's GROUP BY list. The Item_outer_ref object
replaces the Item_field object in the item tree.
Item_outer_ref::fix_fields will be called only while fixing
the inner references of the outer query.
Before resolving the outer-query, an Item_type_holder
object needs to be created to handle the UNION. But as
explained above, the Item_outer_ref object has not been
fixed yet. Having a fixed Item object is a pre-condition
for creating an Item_type_holder.
Solution to Issue 2:
--------------------
Use the reference (real_item()) of an Item_outer_ref object
instead of the object itself while creating an
Item_type_holder.
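Rough sketches of the two issues (tables and columns are hypothetical):

-- Issue 1: a multi-column UNION in the select list together with GROUP BY;
-- the row comparison goes through the IN-to-EXISTS transform for Item_row.
SELECT (t1.a, t1.b) IN (SELECT 1, 2 UNION SELECT 3, 4) FROM t1 GROUP BY t1.a;

-- Issue 2: the UNION's first SELECT refers to the outer table, and the
-- outer GROUP BY uses that same outer-referenced column.
SELECT (SELECT t1.a UNION SELECT t1.a) FROM t1 GROUP BY t1.a;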
Do not silence uncertain cases, or fix any bugs.
The only functional change should be that ha_federated::extra()
is not calling DBUG_PRINT to report an unhandled case for
HA_EXTRA_PREPARE_FOR_DROP.
Fixed handling of default values with cached temporal functions so that the
CREATE TABLE statement now succeeds.
Fixed virtual column session cleanup.
Fixed the error message.
Added quoting of date/time values in cases when this was omitted.
Added a test case in default.test.
Updated test result files.
Fixed the bug by failing the statement with an error message that explains
that an auto-increment column may not be used in an expression for a
check constraint.
Added a test case in check_constraint.test.
Updated existing tests and results.
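For example (hypothetical table), the following statement is now rejected
with that error message:

CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT, CHECK (a > 0));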
An attempt to mark a reference as dependent led to transferring this property to the
original view field and, through it, to other references to this field which
cannot be dependent.
The function Item::split_sum_func2() incorrectly processed function
items that contained window functions, but were not window functions themselves,
and were used as arguments of other functions.
Optionally do table->update_default_fields() even for an INSERT
that supposedly provides values for all columns. Because these
"values" might be DEFAULT, which would need table->update_default_fields()
at the end.
Also set Item_default_value::used_tables() from the default expression.
Non-zero used_field() means that mysql_insert() will initialize all
fields to their default values (with restore_record()) even if
all columns are later provided with values. Because default expressions
may refer to other columns and they must be initialized.
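A small sketch of why this is needed (hypothetical table; the default
expression for b refers to another column):

CREATE TABLE t1 (a INT, b INT DEFAULT (a+1));
-- Both columns are listed, yet the "value" for b is DEFAULT, so the default
-- expression still has to be evaluated, and a must be initialized first.
INSERT INTO t1 (a, b) VALUES (10, DEFAULT);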
Before "MDEV-10709 Expressions as parameters to Dynamic SQL" only
user variables were syntactically allowed as EXECUTE parameters.
User variables were OK as both IN and OUT parameters.
When Item_param was bound to an actual parameter (a user variable),
it automatically meant that the bound Item was settable.
The DBUG_ASSERT() in Protocol_text::send_out_parameters() guarded that
the actual parameter is really settable.
After MDEV-10709, any kind of expression is allowed as an EXECUTE IN parameter.
But the patch for MDEV-10709 forgot to check that only descendants of
Settable_routine_parameter should be allowed as OUT parameters.
So an attempt to pass a non-settable parameter as an OUT parameter
made the server crash on the above-mentioned DBUG_ASSERT.
This patch changes Item_param::get_settable_routine_parameter(),
which previously always returned "this". Now, when Item_param is bound
to some Item, it caches if the bound Item is settable.
Item_param::get_settable_routine_parameter() now returns "this" only
if the bound actual parameter is settable, and returns NULL otherwise.
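A sketch of the OUT-parameter case (procedure and variable names are
hypothetical):

CREATE PROCEDURE p1(OUT v INT) SET v = 10;
PREPARE stmt FROM 'CALL p1(?)';
EXECUTE stmt USING @a;     -- user variable: settable, works as before
EXECUTE stmt USING 1 + 1;  -- arbitrary expression: not settable; this used to
                           -- hit the DBUG_ASSERT and now fails gracefully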
Problem: Item_param::basic_const_item() returned true when fixed==false.
This unexpected combination made Item::const_charset_converter() fail
on assertions.
Fix:
- Changing all Item_param::set_xxx() to set "fixed" to true.
This fixes the problem.
- Additionally, changing all Item_param::set_xxx() to set
Item_param::item_type, to avoid duplicate code, and for consistency,
to make the code symmetric between different constant types.
Before this patch only set_null() set item_type.
- Moving Item_param::state and Item_param::item_type from public to private,
to make it easier to keep these members in sync with "fixed" and with
each other.
- Adding a new argument "unsigned_arg" to Item::set_decimal(),
and reusing it in two places instead of duplicate code.
- Adding a new method Item_param::fix_temporal() and reusing it in two places.
- Adding methods has_no_value(), has_long_data_value(), has_int_value(),
instead of direct access to Item_param::state.
Most notably, this includes MDEV-11623, which includes a fix and
an upgrade procedure for the InnoDB file format incompatibility
that is present in MariaDB Server 10.1.0 through 10.1.20.
In other words, this merge should address
MDEV-11202 InnoDB 10.1 -> 10.2 migration does not work
* Remove duplicate lines from tests
* Use thd instead of current_thd
* Remove extra wsrep_binlog_format_names
* Correctly merge union patch from 5.5 wrt duplicate rows.
* Correctly merge SELinux changes into 10.1
Fixing Item::decimal_precision() to return at least one digit.
This fixes the problem reported in MDEV.
Also, fixing Item_func_signed::fix_length_and_dec() to reserve
space for at least one digit (plus one character for an optional sign).
This is needed to have CONVERT(expr,SIGNED) and CONVERT(expr,UNSIGNED)
create correct string fields when they appear in string context, e.g.:
CREATE TABLE t1 AS SELECT CONCAT(CONVERT('',SIGNED));
The patch b96c196f1c added a new call to
safe_charset_converter() without a corresponding fix_fields().
In the case of a sub-query the created Item remained in a non-fixed state.
The problem did not show up with literal derived expressions, only
subselects were affected. This patch adds a corresponding fix_fields()
to the previously added safe_charset_converter().
The fix for bug mdev-11488 introduced the virtual method
convert_to_basic_const_item for the class Item_cache.
The implementation of this method for the class Item_cache_str
was not quite correct: the server could crash if the cached item
was null.
A similar problem could appear for the implementation of
this method for the class Item_cache_decimal. Although I could not
reproduce the problem I decided to change the code appropriately.
The bug occurred when a subquery
- has a reference to outside, to grand-parent query or further up
- is converted to a semi-join (i.e. merged into its parent).
Then the reference to the outside had the form Item_ref(Item_field(...)).
- Conversion to semi-join would call item->fix_after_pullout() for the
outside reference.
- Item_ref::fix_after_pullout would call Item_field->fix_after_pullout
- The Item_field would construct a new Name_resolution_context object
This process ignored the fact that the Item_field does not belong to
any of the subselects being flattened.
The result was crash in the next call to Item_field::fix_fields(), where
we would try to use an invalid Name_resolution_context object.
Fixed by not creating Name_resolution_context object if the Item_field's
context does not belong to the subselect(s) that were flattened.
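The general shape of an affected query (hypothetical tables):

SELECT * FROM t1
WHERE t1.a IN (SELECT t2.a FROM t2
               WHERE t2.b IN (SELECT t3.b FROM t3 WHERE t3.c = t1.c));
-- The innermost subquery references t1 (its grand-parent) and is merged
-- into its parent as a semi-join, which triggered the fix_after_pullout
-- problem described above.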
The patch for bug mdev-10882 tried to fix the problem by providing an
implementation of the virtual method build_clone for the class
Item_cache. It turned out that it is not easy to provide a valid
implementation for Item_cache::build_clone(). At the same time,
if the condition that can be pushed into a materialized view
contains a cached item, this item can be substituted with a basic
constant of the same value. In this way we can avoid building
proper clones of Item_cache objects when constructing pushdown
conditions.
otherwise we'd need to store sql_mode *per vcol*
(consider CREATE INDEX...), and how would SHOW CREATE TABLE
support that?
Additionally, get rid of vcol::expr_str, just to make sure
the string is always generated and never leaked in the
original form.
The problem was that null_value was not set to "false" on a well-formed row.
If an ill-formed row was followed by a well-formed row, null_value remained
"true" in the call of Item::send() for the well-formed row.
This patch adds DEFAULT as a possible dynamic SQL parameter, e.g.:
EXECUTE IMMEDIATE 'INSERT INTO t1 (column) VALUES(?)' USING DEFAULT;
EXECUTE IMMEDIATE 'UPDATE t1 SET column=?' USING DEFAULT;
and for similar PREPARE..EXECUTE queries.
This is done for symmetry with the STMT_INDICATOR_DEFAULT indicator in
the client-server PS protocol.
The changes include:
- Allowing DEFAULT as a possible option in execute USING clause (sql_yacc.yy)
- Adding "virtual bool Item::save_in_param(THD *thd, Item_param *param)",
because "normal" items (that have real values) and Item_default_value
have now different actions when assigning itself as an Item_param value.
- Fixing switch() statements in a few Item_param methods not to have "default",
because it was easy to forget to add a new "case" when adding a new XXX_VALUE
value into the enum Item_param::enum_item_param_state.
This is important, as we'll be adding new values soon, e.g. for MDEV-11359.
Removing "default" helped to find and report bugs MDEV-11361 and MDEV-11362,
because DECIMAL_VALUE is obviously not properly handled in some cases.
This is similar to MySQL Worklog 3253, but with
a different implementation. The disk format and
SQL syntax are identical with MySQL 5.7.
Features supported:
- Any number of triggers for the same action time and event
- Supports FOLLOWS and PRECEDES to be
able to put triggers in a certain execution order (see the example below).
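For example (hypothetical table and trigger names), two triggers with the
same action time and event can now be ordered explicitly:

CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
CREATE TRIGGER tr2 BEFORE INSERT ON t1 FOR EACH ROW FOLLOWS tr1 SET NEW.b = 2;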
Implementation details:
- Class Trigger added to hold information about a trigger.
Before this, trigger information was stored in a set of lists in
Table_triggers_list and in Table_triggers_list::bodies
- Each Trigger has a next field that points to the next Trigger with the
same action and time.
- When accessing a trigger, we now always access all linked triggers
- The lists are now only used to load and save trigger files.
- The MySQL trigger test case (trigger_wl3253) was added and we execute it
identically.
- Even more graceful handling of wrong trigger files than before. This
is useful if a trigger file uses functions or syntax not provided by
the server.
- Each trigger now has a "Created" field that shows when the trigger was
created, with 2 decimals.
Other comments:
- Many of the changes in test files were made because of the new "Created"
field in the trigger file. This shows up in SHOW ... TRIGGER and when
using information_schema.triggers.
- Don't check if all memory is released if one uses --gdb; this is needed
to be able to get a list from safemalloc of not-freed memory while
debugging.
- Added an option to trim_whitespace() to report how many prefix characters
were skipped.
- Changed a few ulonglong sql_mode to sql_mode_t, to find some wrong usage
of sql_mode.
Fix window function expressions such as win_func() <operator> expr.
The problem was found in 2 places.
First, when we have complex expressions containing window functions, we
can only compute their final value _after_ we have computed the window
function's values. These values must be stored within the temporary
table that we are using, before sending them off.
This is done by performing an extra copy_funcs call before the final
end_send() call.
Second, such expressions need to have their inner arguments
changed so that the references within those arguments point to fields within
the temporary table.
Ex: sum(t.a) over (order by t.b) + sum(t.a) over (order by t.b)
Before this fix, t.a pointed to the original table's a field. In order
to compute the sum function's value correctly, it needs to point to the
copy of this field inside the temp table.
This is done by calling split_sum_func for each argument in the
expression in turn.
The win.test results have also been updated as they contained wrong
values for such a use case.
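The expression from the description, as a complete (hypothetical) query:

CREATE TABLE t (a INT, b INT);
SELECT SUM(t.a) OVER (ORDER BY t.b) + SUM(t.a) OVER (ORDER BY t.b) FROM t;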
This bug in the code of Item_ref::build_clone could
cause corruption of items in where conditions.
Also made sure that equality predicates extracted
from multiple equality items for pushdown into
materialized views were cloned.
for materialized views and derived tables: there was no
push-down if the view was defined as a union of selects
without aggregation. Added test cases with such unions.
Adjusted result files after the merge of the code for mdev-9197.
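A sketch of the newly covered case (hypothetical tables):

CREATE VIEW v1 AS SELECT a FROM t1 UNION SELECT a FROM t2;
-- The condition a > 10 can now be pushed into both parts of the union.
SELECT * FROM v1 WHERE a > 10;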
FROM I_S
Issue:
------
There is a difference in the field type created when the
following DDLs are used:
1) CREATE TABLE t0 AS SELECT NULL;
2) CREATE TABLE t0 AS SELECT GREATEST(NULL,NULL);
The first statement creates field of type Field_string and
the second one creates a field of type Field_null.
This creates a problem when the query mentioned in this bug
is used, since the null_ptr is calculated differently for
Field_null.
Solution:
---------
When there is a function returning null in the select list
as mentioned above, the field should be of type
Field_string.
This was fixed in 5.6+ as part of Bug#14021323. This is a
backport to mysql-5.5.
An incorrect comment in innodb_bug54044.test has been
corrected in all versions.
Adding Converter_double_to_longlong and reusing it in:
1. Field_longlong::store(double nr)
2. Field_double::val_int()
3. Item::val_int_from_real()
4. Item_dyncol_get::val_int()
As a good side effect, overflow during conversion in the mentioned
val_xxx() methods now returns exactly the same warning.
* remove a confusing method name - Field::set_default_expression()
* remove handler::register_columns_for_write()
* rename stuff
* add asserts
* remove unlikely unlikely
* remove redundant if() conditions
* fix mark_unsupported_function() to report the most important violation
* don't scan vfield list for default values (vfields don't have defaults)
* move handling for DROP CONSTRAINT IF EXIST where it belongs
* don't protect engines from Alter_inplace_info::ALTER_ADD_CONSTRAINT
* comments
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blobs can now have default values
- Added a new system variable "check_constraint_checks", to turn off
CHECK constraint checking if needed.
- Removed some engine independent tests in suite vcol to only test myisam
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has got a value, instead of
setting HAS_EXPLICIT_VALUE bit in field flags
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated their cached value in fix_fields to do
this in val() or get_date() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
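A small sketch combining several of the items above (all names are
hypothetical):

CREATE TABLE t1 (
  a INT DEFAULT 1+1 NOT NULL,          -- expression default followed by NOT NULL
  b BLOB DEFAULT 'hello',              -- blobs can now have default values
  c INT,
  CONSTRAINT c_positive CHECK (c > 0)  -- named CHECK constraint
);
ALTER TABLE t1 DROP CONSTRAINT c_positive;
SET check_constraint_checks = 0;       -- turn off CHECK constraint checking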
Use Item->neg to generate negative Item_num items
instead of wrapping them in Item_func_neg(Item_num).
Based on the following commit:
Author: Monty <monty@mariadb.org>
Date: Mon May 30 22:44:00 2016 +0300
Make negative number their own token
The negation (-) operator will call Item->neg() on the underlying numeric constants
and remove itself (like the NOT() function does today for other NOT functions).
This simplifies things:
- -1 is no longer an expression but a basic_const_item
- improves the optimizer
- DEFAULT -1 doesn't need special handling anymore
- When we add DEFAULT expressions, -1 will be treated exactly like 1
- printing of items no longer puts braces around all negative numbers
Other things fixed:
- Fixed that a longlong converted to a decimal gets a more appropriate size
- Fixed that "-0.0" read into a decimal is interpreted as 0.0
This is a preparatory patch to make the fixing of several bugs easier (MDEV-8919, MDEV-10304, MDEV-10305, MDEV-10307):
- Adding Item::push_note_converted_to_negative_complement() and
Item::push_note_converted_to_positive_complement()
- Adding virtual methods Item::val_int_signed_typecast() and
Item::val_int_unsigned_typecast()
- Moving COLUMN_GET() related code from
Item_func_signed::val_int() and Item_func_unsigned::val_int() to
Item_dyncol_get::val_int_signed_typecast() and
Item_dyncol_get::val_int_unsigned_typecast()
- Moving Item_func_signed::val_int_from_str() to Item::val_int_from_str()
and changing it to get the value from "this" instead of args[0].
The patch does not change behaviour. It's only to simplify fixing of the
mentioned bugs. It will also make it easier to switch the CAST-related code to
the type handler infrastructure (soon).
NAME_CONST QUERY
ISSUE:
------
Using NAME_CONST with a non-constant negated expression as
value can result in incorrect behavior.
SOLUTION:
---------
The problem can be avoided by checking whether the argument
is a constant value.
The fix is a backport of Bug#12735545.
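For example (hypothetical table and column names):

SELECT NAME_CONST('n', -1);            -- constant negated value: allowed
SELECT NAME_CONST('n', -a) FROM t1;    -- non-constant negated value: now
                                       -- rejected as incorrect arguments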
Window functions need to have their own column in the work (temp) table,
like aggregate functions do.
They don't need val_int() -> val_int_result() conversion though, so they
should be wrapped with Item_direct_ref, not Item_aggregate_ref.
Item_func_or_sum.
Implemented the method update_used_tables for the class Item_window_func.
Added the flag Item::with_window_func.
Made sure that window functions could be used only in SELECT list
and ORDER BY clause.
Added test cases that checked different illegal placements of
window functions.
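For example (hypothetical table):

SELECT a, ROW_NUMBER() OVER (ORDER BY b) AS rn FROM t1 ORDER BY rn;  -- allowed
SELECT a FROM t1 WHERE ROW_NUMBER() OVER (ORDER BY b) = 1;           -- rejected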
1. Fixing Field_time::get_equal_const_item() to pass TIME_FUZZY_DATES
and TIME_INVALID_DATES to get_time_with_conversion().
This is needed to make the recursively called Item::get_date() return
non-NULL values on garbage input. This makes Field_time::get_equal_const_item()
work consistently with how Item::val_time_packed() works.
2. Fixing Item::get_date() to return TIME'00:00:00' rather than
DATE'0000-00-00' on empty or garbage input when:
- TIME_FUZZY_DATES is enabled
- The caller requested a TIME value (by passing TIME_TIME_ONLY).
This is needed to avoid conversion of DATE'0000-00-00' to TIME
in get_time_with_conversion(), which would erroneously try to subtract
CURRENT_DATE from DATE'0000-00-00' and return TIME'-838:59:59' rather than
the desired zero value TIME'00:00:00'.
#1 and #2 fix this type of script to return one row with both
MyISAM and InnoDB, with and without an index on t1.b:
CREATE TABLE t1 (a ENUM('a'), b TIME, c INT, KEY(b));
INSERT INTO t1 VALUES ('','00:00:00',0);
SELECT * FROM t1 WHERE b='';
SELECT * FROM t1 WHERE a=b;
SELECT * FROM t1 IGNORE INDEX(b) WHERE b='';
SELECT * FROM t1 IGNORE INDEX(b) WHERE a=b;
Additionally, #1 and #2 fix the crash originally reported in MDEV-9604
in Item::save_in_field(), because now execution goes through a different
path, so save_in_field() is called for an Item_time_literal instance
(which is non-NULL) rather than an Item_cache_str instance (which could
return NULL without setting null_value).
3. Fixing Field_temporal::get_equal_const_item_datetime() to enable
equal field propagation for DATETIME and TIMESTAMP in case of
comparison (e.g. when ANY_SUBST), for symmetry with
Field_newdate::get_equal_const_item(). This fixes a number of problems
where an empty set was returned on comparison to empty/garbage input.
Now all SELECT queries in this script return one row for MyISAM and InnoDB,
with and without an index on t1.b:
CREATE TABLE t1 (a ENUM('a'), b DATETIME, c INT, KEY(b));
INSERT INTO t1 VALUES ('','0000-00-00 00:00:00',0);
SELECT * FROM t1 WHERE b='';
SELECT * FROM t1 WHERE a=b;
SELECT * FROM t1 IGNORE INDEX(b) WHERE b='';
SELECT * FROM t1 IGNORE INDEX(b) WHERE a=b;
INTERVALS
ISSUE:
------
Some string functions return one of the parameters, or a combination of
the parameters, as their result. Here the resultant string's
charset could be incorrectly set to that of the chosen
parameter.
This results in incorrect behavior when an ASCII string is
expected.
SOLUTION:
---------
Since an ASCII string is expected, val_str_ascii should
explicitly convert the string.
Part of the fix is a backport of Bug#22340858 for mysql-5.5
and mysql-5.6.
When doing set_field_to_new_field (from switch_to_nullable_trigger_fields()),
make sure that the field we're about to change actually belongs
to the right table (otherwise we cannot dereference the new_field[]
array, as the wrong table might have more fields than
new_field[] has elements).
Case: table with a NOT NULL field, BEFORE UPDATE trigger,
and UPDATE with a subquery that uses GROUP BY on that
NOT NULL field, and needs a temporary table for it.
Because of the BEFORE trigger, the field becomes nullable
temporarily. But its Item_field (used in GROUP BY) doesn't.
When working with the temptable, some code looked at
item->maybe_null and some at field->null_ptr.
The fix: make Item_field nullable when its field is.
This triggers an assert. The group key size is calculated
before the item is made nullable, so the group key doesn't
have a null byte. The fix: make fields/items nullable
before the group key size is calculated.
"Re-factor the code for post-join operations".
The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske.
It makes it possible to take all decisions on the ORDER BY operation at the
optimization stage.
The second task, implemented for mysql-5.6 by Evgeny Potemkin, adds JOIN_TAB
nodes for post-join operations that require temporary tables. It makes it
possible to execute these operations within the nested-loops algorithm, which
before this task was used only for join queries. Besides this, the task moves
all planning of the execution of these operations from the execution phase
to the optimization phase.
Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.