CAUSES RESTORE PROBLEM
Merging the fix from mysql-5.1 to mysql-5.5
mysql-test/t/mysqldump.test:
The test case added as part of this fix differs from the one in
mysql-5.1: in mysql-5.5 and mysql-5.6, "DROP mysql database" fails
when logging is enabled, hence those lines were removed.
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump statements for the general_log and
slow_log tables; that is a consequence of the fix for Bug#26121. Hence,
after dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.
Analysis:
---------
As part of the fix for Bug#26121, the dumping of the general_log and
slow_log tables was skipped, because dumping the data of those tables
takes LOCKs, which is not allowed for log tables.
Fix:
----
The approach taken is, instead of dumping both the metadata and the
data for those tables, to take only the metadata dump, which doesn't
need LOCKs.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables such as db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and
do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database has tables such as db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and
do 'show create table'.
5) Release the lock.
While taking the metadata dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because "DROP TABLE" is skipped for those tables:
"DROP TABLE" fails for them if logging is enabled, and since the
customer applies the dump with logging enabled, a dump containing
"DROP TABLE" would fail. Hence the "DROP TABLE" statements for
those tables were removed.
After the fix, "Table 'mysql.general_log' doesn't exist" errors may
still be observed initially; that is because in the customer scenario
the mysql database is dropped with logging enabled, so those errors
are expected. Once the dump taken before the "drop database mysql"
is applied, the errors no longer occur.
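For illustration, after the fix the dump contains statements of
roughly this shape for the two log tables (a sketch; the exact column
list and table comment come from the server's own log table
definitions):
  -- No DROP TABLE is emitted for the log tables; the CREATE is guarded:
  CREATE TABLE IF NOT EXISTS `general_log` (
    `event_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP,
    ...
  ) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='General log';
  -- No INSERT statements follow: the data of the log tables is not dumped.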
client/mysqldump.c:
In get_table_structure(), added code to skip the DROP TABLE statements
for the general_log and slow_log tables, because those statements fail
when logging is enabled, and replaced CREATE TABLE with CREATE TABLE
IF NOT EXISTS for those tables, to make sure the CREATE statement
doesn't fail now that the DROP statements are gone.
In dump_all_tables_in_db() added code to call get_table_structure() for
general_log and slow_log tables.
mysql-test/r/mysqldump.result:
Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
Added a test as part of fix for Bug #11754178
Optimization of aggregate functions detected a constant under max() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account
The patch backports two patches from MySQL 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
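As a sketch of a query shape that hits this exception (hypothetical
table; the composite index lets the optimizer resolve the grouping
with loose index scan while SQL_BUFFER_RESULT still forces a
temporary table):
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1), (2, 2);
  -- EXPLAIN shows "Using index for group-by": grouping is resolved by
  -- the access method, so the temporary table must not group again and
  -- JOIN::group may be set to false here.
  EXPLAIN SELECT SQL_BUFFER_RESULT a, MIN(b) FROM t1 GROUP BY a;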
This is a backport of the fix for MySQL bug #13723054 in 5.6.
Original comment:
The crash is caused by overwriting an arbitrary memory area when a
BLOB field's key image is copied into the record buffer (the record
buffer is too small to hold the BLOB key part image).
note:
QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
because it uses the record buffer as a temporary buffer for key
values. However, this case is filtered out by the covering_keys()
check in get_best_group_min_max(), as BLOBs always require a key
length modifier in the key declaration, and if the key has a BLOB
then it cannot be a covering key.
The fix is to use the 'max_used_key_length' key length instead of 0.
Analysis:
Specifically, the crash in this bug resulted from the call to
key_copy() that copied the whole key, including the BLOB field, which
is not used for index access. Copying the BLOB field overwrote memory
as far as the function parameter 'key_info'. As a result the contents
of key_info were all 0, which led to a crash when key_info was
accessed a few lines below in key_cmp().
The problem was an increment of the aborted_threads variable due to
thd->killed, which was set when a threadpool connection was terminated.
The fix is to not set thd->killed anymore; there is no real reason for
doing it.
Added a test that checks that the status variable aborted_clients does
not grow for ordinary disconnects, and that a successful KILL increments
this variable.
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was
used. In the test suite it marked the t2_1.c part constant despite the
fact that it was connected by OR with another expression.
The solution is to mark as constant only top-level equalities connected
with AND.
Create an Item_cache based on the item's cmp_type, not its result_type,
in subselect_engine.
Use result_field in Item_cache_temporal::cache_value(),
just as all other Item_cache*::cache_value() methods do.
Points and lines should disappear if we get a negative D.
To make this work properly inside a GEOMETRYCOLLECTION,
we add the empty operation there.
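As a sketch of the intended behaviour (hypothetical geometries;
ST_BUFFER with a negative distance):
  -- The point and the line disappear under a negative buffer distance;
  -- only the shrunk polygon contributes to the result.
  SELECT AsText(ST_Buffer(GeomFromText(
    'GEOMETRYCOLLECTION(POINT(1 1), LINESTRING(0 0, 2 2),
                        POLYGON((0 0, 6 0, 6 6, 0 6, 0 0)))'), -1));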
bug #986977 Assertion `!cur_p->event' failed in Gcalc_scan_iterator::arrange_event(int, int).
The double->internal coord conversion produced -0 (minus zero) on some data.
That minus zero produces invalid comparison results when compared against
plus zero.
So gcalc_set_double() was fixed to avoid producing it.
per-file comments:
mysql-test/r/gis-precise.result
result updated.
mysql-test/t/gis-precise.test
tests for #977021 and #986977 added.
sql/gcalc_slicescan.cc
bug #986977. The gcalc_set_double fixed to not produce minus-zero.
sql/item_geofunc.cc
bug #977021. Add the NOOP for the disappearing features.
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.
The original patch removed default value of the bind-address option.
So, the default value became NULL. By coincidence, NULL resolves
to 0.0.0.0 and ::, and since the server chooses the first IPv4 address,
0.0.0.0 is chosen. So there was no change in behaviour.
This patch restores default value of the bind-address option to "0.0.0.0".
Analysis:
The reason for the wrong result is the interaction between constant
optimization (in this case 1-row table) and subquery optimization.
- First the outer query is optimized, and 'make_join_statistics' finds that
table t2 has one row, reads that row, and marks the whole table as constant.
This also means that all fields of t2 are constant.
- Next, we optimize the subquery at the end of the outer 'make_join_statistics'.
The field 'f2' is considered constant, with value '3'. The subquery predicate
is rewritten as the constant TRUE.
- The outer query execution detects early that the whole query result is empty
and calls 'return_zero_rows'. Since the query is with implicit grouping, we
have to produce one row with special values for the aggregates (depending on
each aggregate function), and NULL values for all non-aggregate fields. This
function calls 'no_rows_in_result' to set each aggregate function to the
default value when it aggregates over an empty result, and then calls
'send_data', which in turn evaluates each Item in the SELECT list.
- When evaluation reaches the subquery predicate, it executes the subquery
with field 'f2' having a constant value '3', and the subquery produces the
incorrect result '7'.
Solution:
Implement Item::no_rows_in_result for all subquery predicates. In order to
make this work, it is also necessary to make all val_* methods of all
subquery predicates respect the Item_subselect::forced_const flag.
Otherwise subqueries are executed anyway and override the default value
set by no_rows_in_result with whatever result the subquery evaluation
produces.
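A minimal shape of the reported interaction (a sketch, not the
original test case; t2 has exactly one row, so it is marked constant,
and the impossible WHERE forces the empty-result path of the
implicitly grouped query):
  CREATE TABLE t1 (f1 INT);
  CREATE TABLE t2 (f2 INT);
  INSERT INTO t2 VALUES (3);
  -- One row must still be produced: MAX(f1) is NULL, and the subquery
  -- predicate must also be NULL instead of being evaluated with the
  -- constant-substituted value of f2.
  SELECT MAX(f1), f2 IN (SELECT f1 FROM t1) FROM t1, t2 WHERE 1 = 0;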
Details:
- The test case bug12427262.test was failing on Windows because
'/' was not recognized there, and it was used in the
LIKE clause of the query run by this test case.
Fix:
- Windows needs '\\\\' as the path separator in MySQL. Since there
is no clean way to keep a single query with two platform-dependent
syntaxes, the query was modified so that it runs correctly on both
platforms.
Fixed an incorrect type cast which made changes to all fields (except the very first) of a materialized table incorrect.
Saved the list of view/derived table used items after expanding '*'.
Part#1: make EXPLAIN's plan match the one used by actual execution:
Item_subselect::used_tables() should return the same value irrespective
of whether we're running an EXPLAIN or a SELECT.
mysql-test/r/select.result:
Added test result for Bug#12713907
mysql-test/t/select.test:
Added test case for Bug#12713907
sql/sql_select.cc:
Remove the call to set_keyread as we do it from access
functions 'join_read_first' and 'join_read_last'
ORDER BY COUNT(*) LIMIT.
PROBLEM:
With respect to the problem in the bug description, we
exhibit different behaviors for the two tables
presented, because InnoDB statistics (rec_per_key
in this case) are updated for the first table
and not for the second one. As a result the
query plan gets changed in test_if_skip_sort_order
to use an 'index' scan. Hence the difference in the
explain output. (NOTE: We can reproduce the problem
with the first table by reducing the number of tuples
and changing the table structure.)
The varied output w.r.t. the query on the second table
is a result of the query plan change.
When a query plan is changed to use an 'index' scan,
after the call to test_if_skip_sort_order, we set
keyread to TRUE immediately. If for some reason
we drop this index scan for a filesort later on,
we fetch only the keys, not the entire tuple.
As a result we would see junk values in the result set.
Following is the code flow:
Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan
Call test_if_skip_sort_order second time
-Index is not chosen (note that we do not pass the
actual limit value second time. Hence we do not choose
index scan second time which in itself is a bug fixed
in 5.6 with WL#5558)
-goto filesort
Call filesort
-Create quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of
the index
-as a result, the required columns are not fetched
FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.
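The plan switch can be sketched with a statement of this shape (a
simplified, hypothetical InnoDB table, not the original test case; the
first index covers the query, the second serves the range condition):
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b), KEY k2 (b)) ENGINE=InnoDB;
  -- test_if_skip_sort_order may first pick covering index k1 for the
  -- ordering (setting keyread), then fall back to a filesort over a
  -- quick range on k2; with keyread still set, only index columns were
  -- fetched and the remaining columns came back as junk.
  SELECT a, b FROM t1 WHERE b > 0 ORDER BY a LIMIT 10;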
mysql-test/r/func_group_innodb.result:
Added test result for Bug#12713907
mysql-test/t/func_group_innodb.test:
Added test case for Bug#12713907
sql/sql_select.cc:
Remove the call to set_keyread as we do it from access
functions 'join_read_first' and 'join_read_last'
TABLES IN INCORRECT ENGINE
PROBLEM:
CREATE/ALTER TABLE can currently move system tables like
mysql.db, user, host etc. to engines other than MyISAM. This is not
completely supported by mysqld as of now. When some of the system
tables, like plugin, servers, event, func, *_priv, time_zone*, are
moved to InnoDB, a mysqld restart crashes. Currently system tables
can even be moved to BLACKHOLE!
ANALYSIS:
The problem is that there is no check before creating or moving
a system table to some particular engine.
System tables are supposed to reside in MyISAM. We could restrict
system tables to exist only in MyISAM, but these system tables may
need to be part of other engines by design in the future. For
example, NDB cluster expects some tables to be in the innodb or ndb
engine. This calls for a solution by which system tables can be
supported by any desired engine with minimal effort.
FIX:
The solution provides a handlerton interface through which the
mysqld server can query a particular storage engine's handlerton for
the system tables it supports. This way each storage engine
layer can define its own system database and system tables.
The check_engine() function uses the new handlerton function
ha_check_if_supported_system_table() to check whether the db.tablename
provided in the DDL is supported by the SE.
Note: This fix has modified a test in help.test, which was moving
mysql.help_* to innodb. The primary intention of the test was not
to move them between engines.
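With the fix, a statement like this sketch is rejected (the error text
comes from the new check; wording approximate):
  -- Moving a core system table out of MyISAM is now refused:
  ALTER TABLE mysql.user ENGINE = InnoDB;
  -- ERROR (sketch): Storage engine 'InnoDB' does not support system
  -- tables. [mysql.user]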
This bug is a duplicate of Bug #31173, which was pushed to the
mysql-trunk 5.6 on 4th Aug, 2010. This is just a back-port of
the fix
mysql-test/r/mysqlslap.result:
A test added as part of the fix for Bug #59107
mysql-test/t/mysqlslap.test:
A test added as part of the fix for Bug #59107
When the function free_tmp_table deletes the handler object for
a temporary table the field TABLE::file for this table should be
set to NULL. Otherwise an assertion failure may occur.
This bug happened because the function find_field_in_view formed
autogenerated names of view columns without any possibility of rolling
them back. In some situations this could cause memory misuse reported
by valgrind, or even crashes.
When a view/derived table is converted from merged to materialized, the
items from the used_item lists are substituted with items referring to
the fields of the result of the materialization. The problem appeared
with queries employing natural joins. Since the resolution of a natural
join was performed only once, the used_item list formed at the second
execution of the query lacked the references to the fields that were
used only in the equality predicates generated for the natural join.
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong key length calculation
during the creation of a reference for the
sort order index. The problem is that
keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled
while the used_tables parameter (of the create_ref_for_key()
function) does not have it. So key parts which have
OUTER_REF_TABLE_BIT are omitted, which could lead to an incorrect
key length calculation (a zero key length).
mysql-test/r/subselect_innodb.result:
test result
mysql-test/t/subselect_innodb.test:
test case
sql/sql_select.cc:
added OUTER_REF_TABLE_BIT to the used_tables parameter
for create_ref_for_key() function.
storage/innobase/handler/ha_innodb.cc:
added assertion, request from Inno team
storage/innodb_plugin/handler/ha_innodb.cc:
added assertion, request from Inno team
Moved tests from the main suite to the new suites.
Moved tests from maria/t and maria/r to maria.
mysql-test/mysql-test-run.pl:
Added support for the new suites
The main problem was a bug in CSV where it provided wrong statistics (it claimed the table was empty when it wasn't).
Also fixed wrong freeing of blobs in the CSV handler (any call to handler::read_first_row() on a CSV table with blobs would fail).
mysql-test/r/csv.result:
Added new test case
mysql-test/r/partition_innodb.result:
Updated test results after fixing bug with impossible partitions and const tables
mysql-test/t/csv.test:
Added new test case
sql/sql_select.cc:
Cleaned up code for handling of partitions.
Also fixed a bug where a table with impossible partitions was not treated as a const table.
storage/csv/ha_tina.cc:
Allocate blobroot once.
- When doing join optimization, pre-sort the tables so that they mimic the execution
order we've had with 'semijoin=off'.
- That way, we will not get regressions when there are two query plans (the old and the
new) that have identical costs but different execution times (because of factors that
the optimizer was not able to take into account).
Background:
- as described in MySQL Internals Prepared Stored
(http://forge.mysql.com/wiki/MySQL_Internals_Prepared_Stored),
the Optimizer sometimes does destructive changes to the parsed
LEX-object (Item-tree), which makes it impossible to re-use
that tree for PS/SP re-execution.
- in order to be able to re-use the Item-tree, the destructive
changes are remembered and rolled back after the statement execution.
The problem, discovered by this bug, was that the objects representing
the GROUP-BY clause were not restored after query execution. So the
GROUP-BY part of the statement could not be properly re-initialized for
re-execution after destructive changes.
Those objects do not take part in the Item-tree, so they cannot be
saved using the Item-tree approach.
The fix is as follows:
- introduce a new array in st_select_lex to store the original
ORDER pointers representing the GROUP-BY clause;
- initialize this array in fix_prepare_information();
- restore the list of GROUP-BY items in reinit_stmt_before_use().
HANG IN PREPARING WITH 100% CPU USAGE
An infinite loop in the subselect_indexsubquery_engine::exec()
function caused a server hang with 100% CPU usage.
The BLACKHOLE storage engine didn't update the handler's
table->status variable after index operations, which
caused an infinite "while (!table->status)" execution.
Index access methods of the BLACKHOLE engine handler
have been updated to set table->status variable to
STATUS_NOT_FOUND or 0 when such a method returns a
HA_ERR_END_OF_FILE error or 0 respectively.
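A repro sketch (hypothetical tables; the IN predicate is resolved via
an index subquery over the BLACKHOLE index):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, KEY (a)) ENGINE=BLACKHOLE;
  INSERT INTO t1 VALUES (1);
  -- Before the fix, index reads on t2 returned HA_ERR_END_OF_FILE
  -- without setting table->status, so "while (!table->status)" spun
  -- forever; now the query simply finds no matches.
  SELECT a FROM t1 WHERE a IN (SELECT a FROM t2);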
mysql-test/r/blackhole.result:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
New test file for the BLACKHOLE engine.
mysql-test/t/blackhole.test:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
New test file for the BLACKHOLE engine.
storage/blackhole/ha_blackhole.cc:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
Index access methods of the BLACKHOLE engine handler
have been updated to set table->status variable to
STATUS_NOT_FOUND or 0 when such a method returns a
HA_ERR_END_OF_FILE error or 0 respectively, like
we do in MyISAM storage engine.
Analysis:
-------------------------------
According to the Manual
(http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
"Column, index, stored routine, and event names are not case sensitive on any
platform, nor are column aliases."
In other words, 'lower_case_table_names' does not affect the behaviour of
those identifiers.
On the other hand, trigger names are case sensitive on some platforms,
and case insensitive on others. 'lower_case_table_names' does not affect
the behaviour of trigger names either.
The bug was that SHOW statements did case sensitive comparison
for stored procedure / stored function / event names.
Fix:
Modified the code so that the comparison is case insensitive for
routines and events in "SHOW" operations.
This commit only fixes the test failures caused by the actual code fix.
mysql-test/suite/innodb/t/group_commit_crash.test:
remove autoincrement to avoid rbr being used for insert ... select
mysql-test/suite/innodb/t/group_commit_crash_no_optimize_thread.test:
remove autoincrement to avoid rbr being used for insert ... select
mysys/my_addr_resolve.c:
a pointer to a buffer is returned to the caller -> the buffer cannot be on the stack
mysys/stacktrace.c:
my_vsnprintf() is ok here, in 5.5
- This is a regression introduced by the fix for BUG#951937
- The problem was that there were scenarios where check_simple_equality() would create an
Item_equal object but would not call item_equal->set_context_field() on it.
- The fix was to add the missing calls.
- The problem was with the execution strategy for cases where FirstMatch's inner tables
were interleaved with outer-uncorrelated tables.
- I was unable to find any cases where such join orders would be practically useful,
so the fix disables them.
- Fix equality propagation to work with SJM nests and OR clauses (full description of the
problem and solution in the comment in the patch)
(The second commit with post-review fixes)
- The problem was that
= we picked a LooseScan that used a full index scan (tab->type==JT_ALL) on a certain index.
= there was also a quick select (tab->quick!=NULL) that used other indexes.
= some old code assumes that (tab->type==JT_ALL && tab->quick) means that the
quick select should be used, which is not true.
Fixed by discarding the quick select as soon as we know we're using LooseScan
without using the quick select.
If the first component of a ref key happens to be a constant that
appeared after constant row substitution, then no store_key element
should be created for such a component. Yet create_ref_for_key() could
erroneously create such an element, which caused the construction of
invalid ref keys and wrong results for some joins.
Problem:
For an empty SET value, Field_set::val_str
returned a String with str_length==0 and Ptr==0,
which some pieces of the code do not expect.
Fix:
Return an empty string with str_length==0 and Ptr=="",
like Field_enum does.
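A minimal illustration (hypothetical table): the empty SET value must
come back as a real empty string rather than a String with a NULL
pointer:
  CREATE TABLE t1 (s SET('a','b') NOT NULL);
  INSERT INTO t1 VALUES ('');
  -- val_str() now returns ("", length 0) instead of (NULL, length 0),
  -- so consumers of the value no longer dereference a null Ptr.
  SELECT CONCAT('<', s, '>') FROM t1;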
- The problem was that convert_subq_to_jtbm() attached the semi-join
TABLE_LIST object to the wrong list: it used to be attached to the
end of parent_lex->leaf_tables.head()->next_local->...->next_local.
This was incorrect, as one can construct an example where the
JTBM nest is attached to a table that is inside some mergeable VIEW,
which breaks name resolution (causes a crash) on the subsequent
statement re-execution.
- Solution: Attach to the "right" list. The "wording" was copied from
st_select_lex::handle_derived.
AND SAVEPOINT.
The bug was introduced by the patch for bug#11766752. That patch set
too strong a condition on the XA state for the SAVEPOINT statement,
disallowing its execution during an XA transaction. But since the
SAVEPOINT statement doesn't imply an implicit commit, we can allow its
handling during an XA transaction.
The patch explicitly checks the transaction state against the states
XA_NOTR and XA_ACTIVE, for which handling of the SAVEPOINT statement
in an XA transaction is allowed.
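After the patch, a sequence like this sketch is accepted, since
SAVEPOINT implies no implicit commit:
  XA START 'trx1';
  INSERT INTO t1 VALUES (1);
  SAVEPOINT sv1;    -- allowed again while the XA state is ACTIVE
  XA END 'trx1';
  XA PREPARE 'trx1';
  -- a SAVEPOINT here, in the IDLE/PREPARED states, is still rejected
  XA COMMIT 'trx1';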
mysql-test/t/xa.test:
The test case was adjusted for bug#13737343. Now SAVEPOINT is allowed
for XA transactions in the ACTIVE state.
The table contains one time value: '00:00:32'.
This value is converted to a timestamp by a subquery.
In convert_constant_item we call (*item)->is_null(),
which triggers execution of the Item_singlerow_subselect subquery,
and the string "0000-00-00 00:00:32" is cached
by Item_cache_datetime.
We continue execution and call update_null_value, which calls
val_int() on the cached item, which converts the time value to
((longlong) 32).
Then we continue to do (*item)->save_in_field(),
which ends up in Item_cache_datetime::val_str(), which fails,
since (32 < 101) in number_to_datetime, and val_str() returns NULL.
Item_singlerow_subselect::val_str isn't prepared for this:
if exec() succeeds and returns !null_value, then val_str()
*must* succeed.
Solution: refuse to cache strings like "0000-00-00 00:00:32"
in Item_cache_datetime::cache_value, and return NULL instead.
This is similar to the solution for
Bug#11766860 - 60085: CRASH IN ITEM::SAVE_IN_FIELD() WITH TIME DATA TYPE
This patch is for 5.5 only.
The issue is not present after WL#946, since a time value
will be converted to a proper timestamp, with the current date
rather than "0000-00-00"
mysql-test/r/subselect.result:
New test case.
mysql-test/t/subselect.test:
New test case.
sql/item.cc:
Verify proper date format before caching timestamps.
sql/item_timefunc.cc:
Use named constant for readability.
We are trying to sort a lot of text/blob fields,
so the buffer is indeed too small.
Memory available = thd->variables.sortbuff_size = 262144
min_sort_memory = param.sort_length*MERGEBUFF2 = 292245
So the decision to abort the query is correct.
filesort() calls my_error(), the error is reported.
But, since we have DELETE IGNORE ... the error is converted to a warning by
THD::raise_condition
filesort currently expects an error to be recorded in the THD diagnostics
area.
If we lift this restriction (remove the assert) we end up in the
familiar
  void Protocol::end_statement()
    ...
    default:
      DBUG_ASSERT(0);
The solution seems to be to call my_error(ME_FATALERROR) in filesort,
so that the error is propagated as an error rather than a warning.
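A sketch of the failing statement shape (hypothetical table; the sort
buffer is lowered so that min_sort_memory exceeds it):
  SET SESSION sort_buffer_size = 32768;
  CREATE TABLE t1 (a TEXT, b TEXT, c TEXT);
  -- With ME_FATALERROR the failure stays an error even under IGNORE,
  -- so the diagnostics area holds what Protocol::end_statement expects.
  DELETE IGNORE FROM t1 ORDER BY a, b, c LIMIT 1;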
mysql-test/r/filesort_debug.result:
New test case.
mysql-test/t/filesort_debug.test:
New test case.
ENOUGH - CONCAT() HACKS. ALSO WRONG
ERROR MESSAGE WHILE TRYING TO CREATE
A VIEW ON A NON EXISTING DATABASE
PROBLEM:
The first part of the problem is concluded to be not a
bug, as 'concat' is not a reserved word and it is
completely valid to create a view with the name
'concat'.
The second issue is that, while trying to create a view
on a non-existing database, no proper error message
was given.
FIX:
Added a check for database existence while
trying to create a view. This check gives an
'unknown database' error when the database does not exist.
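After the fix, the sketch below yields a clear error instead of a
misleading one:
  -- database `nodb` does not exist
  CREATE VIEW nodb.v1 AS SELECT 1;
  -- ERROR 1049 (42000): Unknown database 'nodb'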
This patch is a backport of the patch for Bug#13601606
mysql-test/r/view.result:
Added test case result of Bug#12626844
mysql-test/t/view.test:
Added test case for Bug#12626844
sql/sql_view.cc:
Added a check for database existence in mysql_create_view
Do not call, directly or indirectly, SQL_SELECT::test_quick_select()
for derived materialized tables / views when optimizing joins referring
to these tables / views to get cost estimates of materialization.
The current code does not create B-tree indexes for materialized
derived tables / views. So now it's not possible to get any estimates
for range conditions over the results of the materialization.
The function mysql_derived_create() must take into account the fact
that array of the KEY structures specifying the keys over a derived
table / view may be moved after the optimization phase if the
derived table / view is materialized.
Added 'from_end' as extra parameter to Field::unpack() to detect wrong from data.
Change ha_archive::unpack_row() to detect wrong field lengths.
Replication code changed to detect wrong field information in events.
mysql-test/r/archive.result:
Added test case for lp:917689
sql/field.cc:
Added 'from_end' as extra parameter to Field::unpack() to detect wrong from data.
Removed not used 'unpack_key' functions.
sql/field.h:
Added 'from_end' as extra parameter to Field::unpack() to detect wrong from data.
Removed not used 'unpack_key' functions.
Removed some not needed unpack() functions.
sql/filesort.cc:
Added buffer end parameter to unpack_addon_fields()
sql/log_event.h:
Added end of buffer argument to unpack_row()
sql/log_event_old.cc:
Added end of buffer argument to unpack_row()
sql/log_event_old.h:
Added end of buffer argument to unpack_row()
sql/records.cc:
Added buffer end parameter to unpack_addon_fields()
sql/rpl_record.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record.h:
Added end of buffer argument to unpack_row()
sql/rpl_record_old.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record_old.h:
Added end of buffer argument to unpack_row()
sql/table.h:
Added buffer end parameter to unpack()
storage/archive/ha_archive.cc:
Change ha_archive::unpack_row() to detect wrong field lengths.
This fixes lp:917689
- The bug would show up
- when using PS (so that we get re-execution)
- when the left_expr of the subquery is a reference to viewname.column_name, so that it
crashes when one tries to use it without having called fix_fields for it
- when using SJ-Materialization, which makes use of the sj_subq_pred->left_expr expression
- The fix is to have setup_conds() fix sj_subq_pred->left_expr for semi-join nests it finds.
Problem: Grouping results by VALUES(alias for string literal) causes
the server to crash.
Item_insert_values is not constructed to handle argument types other
than a field or a reference to a field. In this case, the
argument is an Item_string, and this causes
Item_insert_values::fix_fields() to crash.
Fix: Issue an error message when the argument to Item_insert_values is
not a field or a reference to a field.
This is slightly at odds with the documentation, which states that
VALUES should return NULL, but the error message is only issued in
cases where the server would otherwise crash, so there is no change in
behavior for queries that already work. Future versions will restrict
the syntax so that using VALUES in this way is illegal.
mysql-test/r/errors.result:
Add test case for bug #13031606.
mysql-test/t/errors.test:
Add test case for bug #13031606.
sql/item.cc:
Issue error message if argument is not field or reference to field.
https://mariadb.atlassian.net/browse/MDEV-28
This task implements a new clause, LIMIT ROWS EXAMINED <num>,
as an extension to the ANSI LIMIT clause. This extension
allows limiting the number of rows and/or keys a query
accesses (reads and/or writes) during query execution.
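Usage sketch of the new clause (hypothetical table):
  -- Stop once 10000 rows have been examined in total, returning the
  -- partial result accumulated so far together with a warning.
  SELECT * FROM t1 WHERE a > 5 LIMIT 10 ROWS EXAMINED 10000;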
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO, it will be properly marked as non-const
for this query.
Test case added.
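The crashing shape can be sketched as follows (hypothetical table; the
same user variable appears on both sides of INTO):
  SET @v = 0;
  -- @v is now marked non-const at prepare time, so reading and
  -- assigning it in one statement no longer crashes the server.
  SELECT @v + 1 INTO @v FROM t1 LIMIT 1;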
Used proper fast iterator.
If, in the WHERE clause of a query, some comparison conditions on the
field under a MIN/MAX aggregate function contained constants whose
sizes exceeded the size of the field, then the query could return a
wrong result when the optimizer had chosen to apply the MIN/MAX
optimization.
With such conditions the MIN/MAX optimization could still be applied,
yet it would require a more thorough analysis of the keys built to
find the value of the MIN/MAX aggregate functions with index look-ups.
The current patch simply prohibits the MIN/MAX optimization in this
situation.
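A sketch of the affected shape (hypothetical table; the comparison
constant is longer than the indexed column):
  CREATE TABLE t1 (a VARCHAR(5), KEY (a));
  INSERT INTO t1 VALUES ('aaaaa'), ('bbbbb');
  -- The constant exceeds the length of `a`; an index look-up with the
  -- truncated key could pick the wrong row, so the MIN/MAX shortcut is
  -- no longer used for this query.
  SELECT MAX(a) FROM t1 WHERE a < 'bbbbb-and-more';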
The function create_hj_key_for_table() that builds the descriptor of
the hash join key to access a table of a materialized subquery must
ignore any equi-join predicate depending on the tables not belonging
to the subquery.
A defect in the subquery substitution code could lead to a server
crash: setting a substitution's name should be followed by setting its
length (to keep them in sync).
mysql-test/r/gis.result:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
test result.
mysql-test/t/gis.test:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
test case.
sql/item_subselect.cc:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
set substitution's name length as well as the name itself (to keep them in sync).
Problem:
lack of incoming geometry data validation could
lead to a server crash when the ISCLOSED() function was called.
Solution:
added the necessary check of the incoming data.
mysql-test/r/gis.result:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
test result.
mysql-test/t/gis.test:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
test case.
sql/spatial.cc:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
check if a LINESTRING has at least one point as we
rely on that further.
This bug in the function JOIN::drop_unused_derived_keys() could
leave the internal structures for a materialized derived table
in an inconsistent state. This led to a not quite correct EXPLAIN
output when no additional key had been created to access the table.
It could also lead to more serious consequences: the test case
added with this fix caused a crash in mariadb-5.5.20.
This bug appeared after the patch for bug 939009, which, in the
function merge_key_fields, forgot to reset a proper value for
the val field in the result of the merge operation of the key
field created for regular key access and the key field
created to look for a NULL key.
Adjusted the results of the test case for bug 939009, which
actually were incorrect.
Analysis:
========================
sql_mode "NO_BACKSLASH_ESCAPES": When user want to use backslash as character input,
instead of escape character in a string literal then sql_mode can be set to
"NO_BACKSLASH_ESCAPES". With this mode enabled, backslash becomes an ordinary
character like any other.
The SQL_MODE setting applies to the current client session. When
creating a stored procedure, MySQL stores the current sql_mode and
always executes the stored procedure in the sql_mode stored with the
procedure, regardless of the server SQL mode in effect when the routine
is invoked.
In the scenario for which the bug is reported, the routine is created
with sql_mode=NO_BACKSLASH_ESCAPES, and the routine is executed with
the invoker sql_mode being "" (NOT SET), by executing the statement
"call testp('Axel\'s')".
Since the invoker sql_mode is "" (NOT SET), the '\' in 'Axel\'s' (the
argument to the function) is treated as an escape character, and the
values of column "a" (of table "t1") are updated with "Axel's". The
binary log generated for the above update operation is as below:
  set sql_mode=XXXXXX (for no_backslash_escapes)
  update test.t1 set a= NAME_CONST('var',_latin1'Axel\'s' COLLATE 'latin1_swedish_ci');
While logging stored procedure statements, the local variables (params) used in
statements are replaced with the NAME_CONST(var_name, var_value) (Internal function)
(http://dev.mysql.com/doc/refman/5.6/en/miscellaneous-functions.html#function_name-const)
On the slave, these logs are applied. NAME_CONST is parsed to get the
variable and its value. Since the stored procedure was created with
sql_mode="NO_BACKSLASH_ESCAPES", the sql_mode is also logged, so that
on the slave this sql_mode is set before executing the statements of
the routine. So on the slave, sql_mode is set to
"NO_BACKSLASH_ESCAPES", and then, while parsing the NAME_CONST of the
string variable, '\' is treated as a NON-ESCAPE character, and parsing
reports an error for "'" (as we have only one "'", no backslash).
The slave's parsing behaved correctly for sql_mode
"NO_BACKSLASH_ESCAPES"; the error above was reported because, when
writing the binlog, the "'" (of Axel's) was escaped with a "\"
character. Actually, all special characters (n, r, ', ", \, 0, ...)
are escaped while writing NAME_CONST for string variables (params,
local variables) in the binlog, irrespective of the
"NO_BACKSLASH_ESCAPES" sql_mode. So, basically, the problem is that
logging a string parameter does not take the sql_mode value into
account.
Fix:
========================
So when sql_mode is set to "NO_BACKSLASH_ESCAPES", escaping
characters such as (n, r, ', ", \, 0, ...) should be avoided. To do
so, added a check so that such characters are not escaped while
writing NAME_CONST for string variables in the binlog.
Also, when sql_mode is set to NO_BACKSLASH_ESCAPES, the quote
character "'" is represented as ''.
See http://dev.mysql.com/doc/refman/5.6/en/string-literals.html
("There are several ways to include quote characters within a
string").
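End-to-end sketch of the scenario (condensed from the description
above; a table t1 with column a is assumed):
  SET sql_mode = 'NO_BACKSLASH_ESCAPES';
  CREATE PROCEDURE testp(var VARCHAR(50)) UPDATE test.t1 SET a = var;
  SET sql_mode = '';
  -- The invoker's mode treats \' as an escaped quote, so var holds
  -- "Axel's"; the binlogged NAME_CONST is now written as 'Axel''s'
  -- (doubled quote, no backslash), which the slave parses correctly
  -- under the routine's NO_BACKSLASH_ESCAPES mode.
  CALL testp('Axel\'s');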
mysql-test/r/sql_mode.result:
Added test case for Bug#12601974.
mysql-test/suite/binlog/r/binlog_sql_mode.result:
Appended result of test cases added for Bug#12601974.
mysql-test/suite/binlog/t/binlog_sql_mode.test:
Added test case for Bug#12601974.
mysql-test/t/sql_mode.test:
Added test cases for Bug#12601974.
Fixed a wrong mutex order bug in Aria when flush_log_for_bitmap() was called while the table was not yet marked for change.
include/my_base.h:
Added flag that table is opened only for status
mysql-test/r/myisam-big.result:
Test case for lp:925377
mysql-test/t/myisam-big.test:
Test case for lp:925377
sql/sql_base.cc:
If thd->version == 0 (happens only when we are opening a table that is flushed under MYSQL_LOCK_IGNORE_FLUSH), open the table in HA_OPEN_FOR_STATUS mode
storage/maria/ma_bitmap.c:
Fixed a wrong mutex order bug in Aria when flush_log_for_bitmap() was called while the table was not yet marked for change.
storage/maria/ma_dbug.c:
Ignore last_version <= 1 as these are either flushed or only opened for status
storage/maria/ma_open.c:
Use last_version=1 as a marker that table was opened with HA_OPEN_FOR_STATUS.
In this case we just open a new version of the table in read only mode.
storage/myisam/mi_create.c:
Update prototype
storage/myisam/mi_dbug.c:
Ignore last_version <= 1 as these are either flushed or only opened for status
storage/myisam/mi_open.c:
Use last_version=1 as a marker that table was opened with HA_OPEN_FOR_STATUS.
If HA_OPEN_FOR_STATUS is used, we will not assert if there is an old not-to-be-used version of the table existing.
In this case we just open a new version of the table in read only mode.
storage/myisam/myisamdef.h:
Updated prototype
Make sure that stored routines are evaluated (that is, de facto cached) in convert_const_to_int().
Revert the fix for lp:806943 because it cannot be repeated anymore.
Add a few tests for convert_const_to_int().
Problem - The default port number shown in SHOW SLAVE HOSTS is always 3306,
though the slave is actually listening on a different port number.
This is a problem, as the user cannot be sure whether this port
value can be trusted, and so a client trying to read the replication
topology can get confused.
Fix - 3306 ceases to be the default value of report-port. Moreover, report-port
no longer has a static default.
Instead we initialize report-port to 0 as the new default value and change
it based on two checks:
1) If report-port is not set, the slave reports the port number it is listening
on (i.e. if report-port is not set we get the actual value of the slave's
port number).
2) If report-port is set, we show the value report-port is set to as the slave's
port number.
mysql-test/include/show_slave_hosts.inc:
A .inc file is added to use show slave hosts in the new test added.
mysql-test/r/mysqld--help-notwin.result:
Updated the result file to show the default value passed for report-port.
mysql-test/suite/rpl/r/rpl_report_port.result:
The result file for the new test that is added.
mysql-test/suite/rpl/r/rpl_show_slave_hosts.result:
Updated the result file to show the default value passed for report-port.
mysql-test/suite/rpl/t/rpl_report_port-slave.opt:
Option file for the new test added.
mysql-test/suite/rpl/t/rpl_report_port.test:
Added a test to check the correct functionality of report-port.
We check this by running the replication twice.
In the first run we do not set the value of report-port through the opt file
and get the actual port number of the slave's port.
We then restart the server with report-port set to some value (in this case 9000)
and check the value reported for the slave's port number.
mysql-test/suite/sys_vars/t/report_port_basic.test:
Update the test file to show the value for report-port. It is replaced with
SLAVE_PORT as the actual value of the report-port will change with each run.
sql/mysqld.cc:
Changed the value reported by report port :
1. If the value for report-port is not set we assign report-port to be the
actual port number of the slave (mysqld_port).
2. If report-port is set we get the value set for the report-port.
sql/sys_vars.cc:
Passed 0 as the default value of the report-port.
The result of materialization of the right part of an IN subquery predicate
is placed into a temporary table. Each row of the materialized table is
distinct. A unique key over all fields of the temporary table is defined and
created. It allows performing key look-ups into the table.
The table created for a materialized subquery can be accessed by key as
any other table. The function best_access_path searches for the best
access method to join a table to a given partial join. With some WHERE
conditions this function considers the possibility of ref_or_null
access. If such an access employs the unique key on the temporary
table, then when estimating the cost of this access the function tries
to use the rec_per_key array. Yet no such array is built for this
unique key. This causes a server crash.
Rows returned by the subquery that contain nulls don't have to be placed
into the temporary table, as they cannot match any row produced by the
left part of the subquery predicate. So all fields of the temporary table
can be defined as non-nullable. In this case any ref_or_null access
to the temporary table does not make any sense, and it does not make sense
to estimate such an access.
The fix makes sure that the temporary table for a materialized IN subquery
is defined with columns that are all non-nullable. It also ensures that
any row with nulls returned by the subquery is not placed into the
temporary table.
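The crashing access pattern can be sketched as follows (hypothetical
tables; the subquery column is nullable, which used to invite a
ref_or_null plan over the materialized table's unique key):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);            -- b is nullable
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1), (NULL);
  -- NULL rows are now filtered out before filling the temporary table,
  -- its columns are non-nullable, and no ref_or_null access is costed.
  SELECT * FROM t1 WHERE a IN (SELECT b FROM t2);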
- After the exec_const_cond->val_int() call, check for error and return.
(if we don't do it, we will eventually hit an error when trying to set status OK in
the diagnostics area, which already has an error status).
CHECK_SIMPLE_EQUALITY
PROBLEM:
Crash in "check_simple_equality" when using a subquery with "IN" and
"ALL" in prepare.
ANALYSIS:
Crash can be reproduced using a simplified query like this one:
prepare s from "select 1 from g1 where 1 < all (
select @:=(1 in (select 1 from g1)) from g1)";
This bug is currently present only in 5.5 and 5.1. It is fixed as part
of worklog #1110 in 5.6. We are taking one change from there to fix
this in 5.5 and 5.1.
The problem seems to be present because we are trying to evaluate
"is_null" on an argument which is part of a subquery
(in Item_is_not_null_test::update_used_tables()).
But that evaluation should happen only when we do not have a subquery
present, which is to say when "with_subselect" is not set.
With respect to the above query, we create an object of type
"Item_in_optimizer", which by definition is always associated with a
subquery. While in 5.6 we set "with_subselect" to true for
"Item_in_optimizer" objects, we do not do the same in 5.5. This results
in the evaluation of "is_null" producing a core dump.
So, we are now setting "with_subselect" to true for "Item_in_optimizer"
in 5.1 and 5.5.
mysql-test/r/func_in.result:
Result file changes for the test case added
mysql-test/t/func_in.test:
Test case added for Bug#13012483
sql/item_cmpfunc.h:
Changed Item_in_optimizer::Item_in_optimizer( ) to set "with_subselect"
to true
BUILD/SETUP.sh:
By default, build also with innodb-plugin
mysql-test/mysql-test-run.pl:
Also search in lib64 directory for plugins (This is used at least on OpenSuse 12.1 when using default build scripts)
mysql-test/r/lock_multi.result:
Allow test to be re-run even if it crashed.
mysql-test/t/lock_multi.test:
Allow test to be re-run even if it crashed.
scripts/make_binary_distribution.sh:
Ensure that libexecdir is named libexec (was not on OpenSuse 12.1)
scripts/mysql_config.sh:
Fixed detection of when lib64 is used.
mysql-test-run auto-disables all optional plugins.
mysql-test/include/default_client.cnf:
no @OPT.plugindir anymore
mysql-test/include/default_mysqld.cnf:
don't disable plugins manually - mtr can do it better
mysql-test/suite/innodb/t/innodb_bug47167.test:
mtr now uses suite-dir as an include path
mysql-test/suite/innodb/t/innodb_file_format.test:
mtr now uses suite-dir as an include path
mysql-test/t/partition_binlog.test:
this test uses partitions
storage/example/mysql-test/mtr/t/source.result:
Updated results, as mysqltest includes the correct overlayed include.
storage/innobase/handler/ha_innodb.cc:
the assert is wrong
PARTITION STATISTICS
The problem was the fix for bug#11756867: it always used the first
partitions, and stopped after it had checked 10 [sub]partitions
(or until it found a partition which would contain a match).
This results in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (such as when the first 10
partitions contain only a few rows in total).
The solution is to take statistics from the partitions containing
the most rows instead:
Added an array of partition ids sorted by number of records
in descending order.
This array is used in records_in_range to cover as many records as
possible in as few calls as possible.
Also changed the limit of how many partitions to use for the statistics
from a static max of 10 partitions, into a dynamic model:
Maximum number of partitions is now log2(total number of partitions)
taken from the ordered array.
It will continue calling the partitions' records_in_range until it has
checked:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
Also reverted the changes for ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to before
the fix of bug#11756867, since they are not as slow as
records_in_range.
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with
the partitioning engine.
There is no good way to implement support for QC due to:
1) a static callback for ha_partition would need to have access
to all partition names and call the underlying callback for each
[sub]partition with the correct name.
2) pruning would be impossible, even if one used the ulonglong
engine_data, because if engine_data is changed, the table is
invalidated by the QC.
So the only viable solution to avoid incorrect data is to not allow
caching of queries using partitioned tables.
(There are some extra changes, due to removal of \r as line break)
- MySQL 5.5 introduced caching of constant items by means of
wrapping them in Item_cache_XXX objects. If a subquery was wrapped
in this cache, it could end up being pushed down by ICP.
- The fix is to add Item_cache::walk(), which lets ICP see that
the cache item has a subquery inside it, and to disable pushdown for this case.
- In mi_rkey(), correctly handle the case where mi_yield_and_check_if_killed()
detects that the thread was killed (all other similar functions in MyISAM/Aria have
slightly different code and do not have this problem).
- Also fixed an assignment inside DBUG_ASSERT.
- this is 2nd variant of the fix:
= make .result file smaller
= run KILLable statements in a separate connection, otherwise we could end up trying to
KILL the final "DROP TABLE" statement
Fixed README with link to source
Merged InnoDB change to XtraDB
README:
Added information on where to find the MariaDB code.
storage/archive/ha_archive.cc:
Removed memset() of rows; MariaDB's checksum doesn't touch unused data.
- In return_zero_rows(), don't call mark_as_null_row() for semi-join
materialized tables, because 1) they may have been already freed, and
2) there is no real need to call mark_as_null_row() for them.
This bug is the result of an incomplete/inconsistent change introduced into
the 5.3 code when the cond_equal parameter was added to the function
optimize_cond. The change was made during a merge from 5.2 in October 2010.
The bug could affect only queries with HAVING.
An outer join query with a semi-join subquery could return a wrong result
if the optimizer chose to materialize the subquery.
It happened because, when substituting the best field into a ref item
used to build access keys, not all COND_EQUAL objects that could be
employed in the substitution were checked.
Also refined some code in the function check_join_cache_usage to make it
safer.