The problem is a shift operation that is not 64-bit safe.
The consequence is that the used-tables information for a join with 32 or
more tables will be incorrect.
Fixed by adding a type cast in Item_sum::update_used_tables().
Also took the opportunity to fix some other potential bugs by adding an
explicit type cast to the integer operand of a left-shift operation.
Some of them were quite harmless, but were fixed in order to get the same
signedness as the other operand of the operation they were used in.
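A minimal self-contained sketch of the class of bug being fixed (the names
below are illustrative, not the actual server code):

  #include <stdint.h>

  typedef uint64_t table_map;            /* in the server: a 64-bit ulonglong */

  table_map table_bit(unsigned int tableno)
  {
    /* BUG: the literal 1 is a 32-bit int, so the shift is wrong for
       tableno >= 32 even though table_map itself is 64 bits wide:
         return 1 << tableno;
       FIX: cast the shifted operand to the 64-bit type first. */
    return (table_map) 1 << tableno;
  }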
sql/item_cmpfunc.cc
Adjusted signedness for some integers in left-shifts.
sql/item_subselect.cc
Added type-cast to nesting_map (which is a 32/64-bit type, so a
potential bug for deeply nested queries).
sql/item_sum.cc
Added type-cast to nesting_map (32/64-bit type) and table_map
(64-bit type).
sql/opt_range.cc
Added type-cast to ulonglong (which is a 64-bit type).
sql/sql_base.cc
Added type-cast to nesting_map (which is a 32/64-bit type).
sql/sql_select.cc
Added type-cast to nesting_map (32/64-bit type) and key_part_map
(64-bit type).
sql/strfunc.cc
Changed type-cast from longlong to ulonglong, to preserve signedness.
This task fixes an inefficiency that is a leftover from MySQL 5.0/5.1. There,
subqueries were optimized lazily, when executed for the first time. During this
lazy optimization the server may find a more efficient subquery engine and
substitute the current engine of the query being executed with the new one.
This required a re-execution of the engine.
MariaDB 5.3 pre-optimizes subqueries in almost all cases, and in most cases the
engine is chosen up front, except when subquery materialization finds that it
must use partial matching. Yet the current code performed one extra
re-execution even though it was not needed at all. The patch performs the
re-execution only if the engine was changed while executing.
In addition, the patch performs a small cleanup by removing "enum
store_key_result", because it is essentially a boolean and the code that uses
it already maps it to a boolean.
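A minimal sketch of the changed control flow, with stand-in names for the
subquery engine classes (not the actual patch):

  /* Stand-in for MariaDB's subselect_engine hierarchy. */
  struct engine_sketch
  {
    virtual ~engine_sketch() {}
    virtual int exec() = 0;
  };

  int exec_subquery(engine_sketch *&engine)
  {
    engine_sketch *org_engine= engine;   /* engine chosen at optimization */
    int res= engine->exec();
    /* Partial-match materialization may substitute the engine during
       execution; only then is a re-execution with the new engine needed. */
    if (engine != org_engine)
      res= engine->exec();
    return res;
  }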
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now, when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails:
  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;
The fix is to backport the fixes for bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
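The essence of those fixes is to stop ignoring such return codes and to
report them, so the diagnostics area is always set. A sketch of the pattern,
assuming the usual handler API (not the literal backported code):

  int init_index_scan(TABLE *table, uint keyno, bool sorted)
  {
    int error;
    if ((error= table->file->ha_index_init(keyno, sorted)))
    {
      /* print_error() raises the error through my_error(), so the
         diagnostics area leaves DA_EMPTY and Protocol::end_statement()
         no longer trips DBUG_ASSERT(0). */
      table->file->print_error(error, MYF(0));
      return error;
    }
    return 0;
  }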
- Wrong thd use in Item_subselect could lead to a crash
- Initialize an uninitialized variable in the new auto-increment handling code
sql/handler.cc:
More DBUG_PRINT
sql/item_subselect.cc:
Wrong thd use in Item_subselect could lead to a crash
storage/innobase/handler/ha_innodb.cc:
Initialize a variable needed by the upper level. This only happens when the auto-increment value wraps around.
storage/xtradb/handler/ha_innodb.cc:
Initialize a variable needed by the upper level. This only happens when the auto-increment value wraps around.
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key(). The
way create_ref_for_key() works, it does not store in ref.key_copy[] the
store_key elements that represent constants. In particular, it does not
store the store_key for NULL constants.
The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key(), which, in addition to
copying the left IN argument into an index lookup key, is supposed to
detect whether the left IN argument contains NULLs. Since the store_key for
the NULL constant is not copied into the key array, the NULL is not
detected, and execution erroneously proceeds as if it should look for a
complete match.
Solution:
The solution (unlike MySQL's) is to reuse already computed information about
NULL presence. Item_in_optimizer::val_int() already finds out whether the
left IN operand contains NULLs. The fix propagates this to the execution
methods subselect_[unique | index]subquery_engine::exec(), so they know
whether there were NULL values independently of the presence of keys.
In addition, the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
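A small self-contained sketch of the changed decision, with hypothetical
names (the real change is in subselect_[unique | index]subquery_engine::exec()):

  enum lookup_plan { RESULT_IS_NULL, DO_INDEX_LOOKUP };

  /* left_operand_has_null is the information Item_in_optimizer::val_int()
     has already computed about the left IN operand. */
  lookup_plan plan_subquery_lookup(bool left_operand_has_null)
  {
    if (left_operand_has_null)
      return RESULT_IS_NULL;     /* no complete match is possible */
    return DO_INDEX_LOOKUP;      /* copy_ref_key() and probe the index */
  }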
Added new status variables: feature_dynamic_columns, feature_fulltext,
feature_gis, feature_locale, feature_subquery, feature_timezone,
feature_trigger, feature_xml, Opened_views, Executed_triggers,
Executed_events.
Added new process status 'updating status' as part of 'freeing items'.
mysql-test/r/features.result:
Test of feature_xxx status variables
mysql-test/r/mysqld--help.result:
Removed duplicated 'language' variable.
mysql-test/r/view.result:
Test of opened_views
mysql-test/suite/rpl/t/rpl_start_stop_slave.test:
Write more information on failure
mysql-test/t/features.test:
Test of feature_xxx status variables
mysql-test/t/view.test:
Test of opened_views
sql/event_scheduler.cc:
Increment executed_events status variable
sql/field.cc:
Increment status variable
sql/item_func.cc:
Increment status variable
sql/item_strfunc.cc:
Increment status variable
sql/item_subselect.cc:
Increment status variable
sql/item_xmlfunc.cc:
Increment status variable
sql/mysqld.cc:
Add new status variables to 'show status'
sql/mysqld.h:
Added executed_events
sql/sql_base.cc:
Increment status variable
sql/sql_class.h:
Add new status variables
sql/sql_parse.cc:
Added new process status 'updating status' as part of 'freeing items'
sql/sql_trigger.cc:
Increment status variable
sql/sys_vars.cc:
Increment status variable
sql/tztime.cc:
Increment status variable
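For reference, a sketch of the usual way such per-connection counters are
wired into SHOW STATUS (the general pattern, not the literal patch):

  /* sql/sql_class.h: add a slot to the per-thread STATUS_VAR struct
       ulong feature_subquery;

     sql/mysqld.cc: expose it in the status_vars[] array
       {"Feature_subquery", (char*) offsetof(STATUS_VAR, feature_subquery),
        SHOW_LONG_STATUS},

     at the point of use, e.g. where a subquery Item is set up:
       status_var_increment(thd->status_var.feature_subquery);           */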
The problem was that the was_null and null_value variables were reset on each re-execution of the IN subquery, but the engine was rerun only for non-constant subqueries.
Fixed the constant check in Item_equal sorting.
Fixed constant reporting in Item_subselect.
sql/item_subselect.cc:
Added purecov info
sql/sql_select.cc:
Added cast
storage/innobase/handler/ha_innodb.cc:
Added cast
storage/xtradb/btr/btr0btr.c:
Added buf_block_get_frame_fast() to avoid compiler warning
storage/xtradb/handler/ha_innodb.cc:
Added cast
storage/xtradb/include/buf0buf.h:
InnoDB has buf_block_get_frame(block) defined as (block)->frame.
Didn't want to make a big change that might break XtraDB, as it may use buf_block_get_frame() differently, so I made this quick hack to patch one compiler warning.
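A sketch of what that hack presumably amounts to (the exact XtraDB
definition may differ):

  /* A plain accessor mirroring what the InnoDB macro expands to, used only
     to silence the warning at that one call site. */
  #define buf_block_get_frame_fast(block) ((block)->frame)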
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
sql/item_subselect.cc:
Addon fix for 11829691: the item could be created in the
runtime memroot, so we need to use real_item instead.
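For context, a sketch of the arena (memroot) switching pattern involved; the
THD methods named below exist in the server, but their placement here is
illustrative only:

  void resolve_with_proper_memroots(THD *thd)
  {
    Query_arena *arena, backup;

    /* Switch to the statement (permanent) memroot only around work whose
       results must survive across prepared-statement executions ... */
    arena= thd->activate_stmt_arena_if_needed(&backup);

    /* ... permanent allocations go here ... */

    /* ... and switch back, so the per-execution Item_refs created while
       resolving outer fields land in the runtime memroot and are freed
       after each execution instead of accumulating. */
    if (arena)
      thd->restore_active_arena(arena, &backup);
  }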
Analysis:
The fix for bug lp:985667 implements the method Item_subselect::no_rows_in_result()
for all main kinds of subqueries. The purpose of this method is to be called from
return_zero_rows() and set Items to some default value in the case when a query
returns no rows. Aggregates and subqueries require special treatment in this case.
Every implementation of Item_subselect::no_rows_in_result() called
Item_subselect::make_const() to set the subquery predicate to its default value
irrespective of where the predicate was located in the query. Once the predicate
was set to a constant it was never executed.
At the same time, the JOIN object of the fake select for UNIONs (the one
used for the final result of the UNION) was created only after all
subqueries in the union were executed. Since we set the subquery to a
constant, it was never executed, and the corresponding JOIN was never
created.
In order to decide whether the result of NOT IN is NULL or FALSE, Item_in_optimizer
needs to check if the subquery result was empty or not. This is where we got the
crash, because subselect_union_engine::no_rows() checks for
unit->fake_select_lex->join->send_records, and the join object was NULL.
Solution:
If a subquery is in the HAVING clause it must be evaluated in order to know its
result, so that we can properly filter the result records. Once subqueries in the
HAVING clause are executed even in the case of no result rows, this specific
crash will be solved, because the UNION will be executed, and its JOIN will be
constructed. Therefore the fix for this crash is to narrow the fix for lp:985667,
and to apply Item_subselect::no_rows_in_result() only when the subquery predicate
is in the SELECT clause.
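A sketch of the narrowed rule; the condition shown is hypothetical, the real
patch derives it from where the predicate occurs:

  void no_rows_in_result_sketch(bool predicate_is_in_select_list)
  {
    if (!predicate_is_in_select_list)
    {
      /* HAVING-clause predicates are left alone, so they are still
         executed; for a UNION subquery this also builds
         fake_select_lex->join, and subselect_union_engine::no_rows()
         no longer sees a NULL join. */
      return;
    }
    /* Only SELECT-list predicates are frozen to their "no rows" default
       (make_const() in the real code). */
  }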
Analysis:
Queries with implicit grouping (there is an aggregate, but no GROUP BY)
follow some non-obvious semantics in the case of an empty result set.
Aggregate functions produce some special "natural" value depending on
the function. For instance MIN/MAX return NULL, COUNT returns 0.
The complexity comes from non-aggregate expressions in the select list.
If the non-aggregate expression is a constant, it can be computed, so
we should return its value, however if the expression is non-constant,
and depends on columns from the empty result set, then the only meaningful
value is NULL.
The cause of the wrong result was that, for subqueries, the optimizer did
not distinguish between constant and non-constant ones in the case of an
empty result with implicit grouping.
Solution:
In all implementations of Item_subselect::no_rows_in_result() check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping, instead let it be evaluated.
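A sketch of the added constancy check (hypothetical shape, relying on the
standard Item::const_item() predicate):

  void maybe_apply_no_rows_default(Item_subselect *predicate)
  {
    if (predicate->const_item())
    {
      /* A constant subquery predicate can simply be evaluated; that gives
         the correct value even for an empty, implicitly grouped result. */
      return;
    }
    /* Non-constant predicates get the "no rows" default as before. */
  }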
Analysis:
The optimizer detects an empty result through constant table optimization.
Then it calls return_zero_rows(), which in turn indirectly calls
Item_maxmin_subselect::no_rows_in_result(). The latter method sets
"value= 0"; however, "value" is a pointer to Item_cache, not just an
integer value.
All of the Item_[maxmin | singlerow]_subselect::val_XXX methods do:
  if (forced_const)
    return value->val_real();
which of course crashes when value is a NULL pointer.
Solution:
When the optimizer discovers an empty result set, set
Item_singlerow_subselect::value to a FALSE constant Item instead of NULL.
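A sketch of the shape of the fix (hypothetical code; Item_cache::get_cache()
is assumed to be the usual way to build such a cache):

  void set_no_rows_value(Item_cache *&value)
  {
    /* Never leave "value" as a NULL pointer: point it at a cached FALSE
       constant, so the forced_const branches of val_XXX() have a real
       Item_cache to call. */
    Item *false_item= new Item_int((longlong) 0);
    value= Item_cache::get_cache(false_item);
  }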