Analysis:
Constant table optimization of the outer query finds that
the right side of the equality is a constant that can
be used for an eq_ref access to fetch one row from t1,
and to substitute t1 with a constant. Thus constant optimization
triggers evaluation of the subquery during the optimize
phase of the outer query.
The innermost subquery requires a plan with a temporary
table because with InnoDB tables the exact count of rows
is not known, and the empty tables cannot be optimized
away. JOIN::exec for the innermost subquery substitutes
the subquery tables with a temporary table.
When EXPLAIN gets to print the tables in the innermost
subquery, EXPLAIN needs to print the name of each table
through the corresponding TABLE_LIST object. However,
the temporary table created during execution doesn't
have a corresponding TABLE_LIST, so we dereference a
null pointer and crash.
Solution:
The solution is to forbid using expensive constant
expressions for eq_ref access in constant table
optimization. Notice that eq_ref with a subquery
providing the value is still possible during regular
execution.
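As a rough sketch of the rule (the struct and function below are
illustrative stand-ins, not the actual optimizer code): a ref value that
is expensive to evaluate disqualifies the table from constant table
optimization.

  // Hypothetical, simplified model of the check added by the fix.
  struct RefValue {
    bool is_expensive;   // e.g. the value comes from a subquery
  };

  struct Table {
    RefValue eq_ref_value;
    bool is_const = false;
  };

  // Constant table detection: only read the single row at optimize time
  // when producing the lookup value is cheap; otherwise leave the eq_ref
  // access to be done during regular execution.
  static void try_mark_const(Table &tab) {
    if (tab.eq_ref_value.is_expensive)
      return;
    tab.is_const = true;
  }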
Analysis:
During the first execution of the query through the stored
procedure, the optimization phase calls
substitute_for_best_equal_field(), which calls
Item_in_optimizer::transform(). The latter replaces
Item_in_subselect::left_expr with args[0] via assignment.
In this test case args[0] is an Item_outer_ref which is
created/deallocated for each re-execution. As a result,
during the second execution Item_in_subselect::left_expr
pointed to freed memory, which resulted in a crash.
Solution:
The solution is to use change_item_tree(), so that the
original left expression is restored after each execution.
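As a minimal sketch of the register-and-restore pattern that
change_item_tree() provides (the types below are simplified stand-ins for
the server's Item tree and per-statement rollback list):

  #include <vector>

  struct Item {};

  struct ItemTreeChange {
    Item **place;       // location in the item tree that was modified
    Item  *old_value;   // pointer to restore before the next execution
  };

  struct Execution {
    std::vector<ItemTreeChange> rollback_list;

    // Instead of the plain assignment "*place = new_value", remember the
    // old pointer so the original tree can be restored.
    void change_item_tree(Item **place, Item *new_value) {
      rollback_list.push_back({place, *place});
      *place = new_value;
    }

    // Called at the end of each execution of the stored procedure /
    // prepared statement.
    void rollback_item_tree_changes() {
      for (auto it = rollback_list.rbegin(); it != rollback_list.rend(); ++it)
        *it->place = it->old_value;
      rollback_list.clear();
    }
  };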
Analysis:
Partial matching is used even when there are no NULLs in
a materialized subquery, as long as the left NOT IN operand
may contain NULL values.
This case was not handled correctly in two different places.
First, the implementation of partial matching did not clear
the set of matching columns when the merge process advanced
to the next row.
Second, there is no need to perform partial matching at all
when the left operand has no NULLs.
Solution:
First, fix subselect_rowid_merge_engine::partial_match() to
properly clean up the bitmap of matching keys when advancing
to the next row.
Second, change subselect_partial_match_engine::exec() so
that when the materialized subquery doesn't contain any
NULLs, and the left operand of [NOT] IN doesn't contain
NULLs either, the method returns without doing any
unnecessary partial matching. The correct result in this
case is in Item::in_value.
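A minimal sketch of the first fix (std::bitset and the toy loop below
stand in for MY_BITMAP and the real rowid-merge code):

  #include <bitset>
  #include <cstddef>
  #include <functional>

  constexpr std::size_t MAX_KEYS = 64;

  // matcher(row, keys) sets a bit for every key that matches the given row.
  bool partial_match(
      std::size_t row_count,
      const std::function<void(std::size_t, std::bitset<MAX_KEYS>&)> &matcher,
      std::size_t keys_needed) {
    std::bitset<MAX_KEYS> matching_keys;
    for (std::size_t row = 0; row < row_count; ++row) {
      matching_keys.reset();          // the missing cleanup: forget the
                                      // matches found for the previous row
      matcher(row, matching_keys);
      if (matching_keys.count() >= keys_needed)
        return true;
    }
    return false;
  }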
When the WHERE/HAVING condition of a subquery has been transformed
by the optimizer, the pointer stored in the 'where'/'having' field
of the SELECT_LEX structure used for the subquery must be updated
accordingly. Otherwise the pointer may refer to an invalid item.
This can lead to the reported assertion failure for some queries
with correlated subqueries.
The bug is a duplicate of MySQL's Bug#11764086;
however, MySQL's fix is incomplete for MariaDB, so
this fix is slightly different.
In addition, this patch renames
Item_func_not_all::top_level() to is_top_level_item()
to bring it in line with the analogous methods of
Item_in_optimizer and Item_subselect.
Analysis:
It is possible to determine whether a predicate is
NULL-rejecting only if it is a top-level one. However,
this was not taken into account for Item_in_optimizer.
As a result, a NOT IN predicate was erroneously
considered as NULL-rejecting, and the NULL-complemented
rows generated by the outer join were rejected before
being checked by the NOT IN predicate.
Solution:
Change Item_in_optimizer to be considered as
NULL-rejecting only if it is a top-level predicate.
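In sketch form (the fields below are illustrative, not the actual Item
hierarchy):

  // A predicate may only be treated as NULL-rejecting when it is a
  // top-level one; otherwise the NULL-complemented rows produced by the
  // outer join must still reach it (e.g. the NOT IN case above).
  struct Predicate {
    bool is_top_level;    // not under NOT, IS TRUE, OR, ...
    bool rejects_null;    // FALSE/UNKNOWN when its argument is NULL
  };

  static bool null_rejecting(const Predicate &p) {
    return p.is_top_level && p.rejects_null;
  }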
opt_range.cc: modified print_key() so that it doesn't do memory re-allocs when
printing multipart keys over varchar columns. When it did, key printout in
debug trace was interrupted with my_malloc/free printouts.
- add_ref_to_table_cond() should not just overwrite pre_idx_push_select_cond
with the contents of tab->select_cond.
pre_idx_push_select_cond exists precisely because it may contain
a condition that is a strict superset of what is in tab->select_cond.
The fix is to inject the generated equality into pre_idx_push_select_cond
as well (see the sketch below this list).
- Make simplify_joins() set maybe_null=FALSE for tables that were on the
inner sides of inner joins and then were moved to the inner sides of semi-joins.
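The add_ref_to_table_cond() fix in sketch form (std::string conditions and
the and_conds() helper below are toy stand-ins for the server's Item trees):

  #include <string>

  using Cond = std::string;   // toy condition representation

  static Cond and_conds(const Cond &a, const Cond &b) {
    if (a.empty()) return b;
    if (b.empty()) return a;
    return "(" + a + ") AND (" + b + ")";
  }

  struct JoinTab {
    Cond select_cond;
    Cond pre_idx_push_select_cond;   // superset saved before index pushdown
  };

  static void add_ref_to_table_cond(JoinTab &tab, const Cond &generated_eq) {
    tab.select_cond = and_conds(tab.select_cond, generated_eq);
    // The fix: extend the saved superset as well, instead of overwriting it
    // with tab.select_cond (which may lack the index-pushed part).
    tab.pre_idx_push_select_cond =
        and_conds(tab.pre_idx_push_select_cond, generated_eq);
  }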
This is useful when trying to find out why an automatic MyISAM repair fails.
storage/myisam/ha_myisam.cc:
If mysqld --log-warnings=3 or higher, then print all check and repair warnings for MyISAM tables to the error log.
thd->user_connect is now handled in thd->cleanup(), which ensures that it works in all contexts (including slaves).
I also added some DBUG_ASSERT() calls to ensure that things work correctly.
sql/sql_acl.cc:
Reset thd->user_connect on failed check_for_max_user_connections() to ensure we don't decrement value twice.
Removed the no longer needed call to decrease_user_connections(), as thd->cleanup() will now do it.
sql/sql_class.cc:
Call decrease_user_connections() in thd->cleanup()
sql/sql_connect.cc:
Ensure we don't allocate thd->user_connect twice.
Simplify check_for_max_user_connections().
sql/sql_parse.cc:
Ensure that thd->user_connect is handled properly for the 'change_user' command.
When merging a view / derived table the function SELECT_LEX::merge_subquery
incorrectly updated the list SELECT_LEX::leaf_tables. Erroneously it
appended the leaf_tables list L of the merged object T at the end and then
removed the reference to T from the SELECT_LEX::leaf_tables list.
A correct implementation should insert the list L into the
SELECT_LEX::leaf_tables list in place of the element of the list that
refers to T.
The bug could lead to wrong results or even crashes for queries with
nested outer joins over views / derived tables.
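In terms of std::list (a sketch of just the list manipulation, not the
actual SELECT_LEX::merge_subquery code):

  #include <list>
  #include <string>

  // Replace the element that refers to the merged object T with the merged
  // object's own leaf_tables list L, keeping the position in the outer list.
  static void merge_leaf_tables(std::list<std::string> &leaf_tables,
                                std::list<std::string>::iterator ref_to_T,
                                std::list<std::string> &list_L) {
    leaf_tables.splice(ref_to_T, list_L);   // insert L before the element
    leaf_tables.erase(ref_to_T);            // drop the reference to T
  }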
The bug was that when using bulk insert combined with LOCK TABLE, we initialized the IO cache with the wrong file position.
This fixed a bug where MariaDB could not read in a table dump done with mysqldump.
mysql-test/suite/maria/r/locking.result:
Test case for locking + write cache bug
mysql-test/suite/maria/t/locking.test:
Test case for locking + write cache bug
storage/maria/ma_extra.c:
Initialize write cache used with bulk insert to correct file length.
(The old code didn't work if one was using LOCK TABLE for the given table).
The issues were:
- For some tables with many non-packed fields, we didn't allocate enough memory in the head page, which caused DBUG_ASSERT failures.
- Removed a wrong DBUG_ASSERT().
- Fixed a problem with underflow() where it could generate a key page on which not all keys fit.
- Max key length is now limited to block_size/3 (was block_size/2). This is required for underflow() to work with packed keys.
mysql-test/lib/v1/mysql-test-run.pl:
Remove --alignment=8 as this doesn't work on 64 bit systems
mysql-test/suite/maria/r/small_blocksize.result:
Test case for Aria bug
mysql-test/suite/maria/t/small_blocksize-master.opt:
Test case for Aria bug
mysql-test/suite/maria/t/small_blocksize.test:
Test case for Aria bug
storage/maria/ha_maria.cc:
Fixed comment
storage/maria/ma_bitmap.c:
Fixed wrong variable usage in find_where_to_split_row() where we allocated too little memory for the head page.
We did not take into account the space for head extents (long VARCHAR) when trying to split a row on the head page. This caused us to allocate too little space from the bitmap, which led to ASSERT failures later.
storage/maria/ma_blockrec.c:
Made some arguments const (to ensure they are not accidentally changed)
Removed wrong DBUG_ASSERT()
storage/maria/ma_blockrec.h:
Removed an unused variable
storage/maria/ma_delete.c:
Added my_afree() in case of error
More comments and DBUG_ASSERT() for underflow()
storage/maria/ma_open.c:
Make keyinfo->underflow_block_length smaller for packed keys. This has to be done because, for long packed keys, underflow() otherwise generates a key page on which not all keys fit.
storage/maria/ma_page.c:
New DBUG_ASSERT()
storage/maria/ma_write.c:
Fixed comment
storage/maria/maria_def.h:
We have to have space for at least 3 keys on a key page.
(Otherwise the underflow() code doesn't work for packed keys, even when we have an underflow() for an empty key page)
The bug was a wrong check in aria_chk; the table was fine.
storage/maria/ma_bitmap.c:
Print whole bitmap to find errors in last bitmap
storage/maria/ma_check.c:
Fixed a wrong test of whether the bitmap was overallocated.
sql/sql_expression_cache.cc:
Early check of subquery cache hit rate added to limit its performance impact in the worst case.
Disabling cache moved to method.
sql/sql_expression_cache.h:
Disabling cache moved to method.
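The idea, in sketch form (the threshold names and the class layout are
illustrative, not the actual sql_expression_cache.cc code):

  struct ExpressionCache {
    unsigned long hits = 0;
    unsigned long misses = 0;
    bool disabled = false;

    void disable_cache() {          // disabling is now a single method
      disabled = true;
      // ... free the underlying temporary table here ...
    }

    // Early check after each lookup: once enough lookups have been made,
    // drop the cache if it is not paying for itself.
    void check_hit_rate(unsigned long min_lookups, double min_hit_rate) {
      unsigned long lookups = hits + misses;
      if (!disabled && lookups >= min_lookups &&
          (double)hits / lookups < min_hit_rate)
        disable_cache();
    }
  };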
Identified all test cases in the MySQL file subquery.inc that are
not present in MariaDB. This patch adds the test cases that are:
- not present in MySQL 5.5, and
- already fixed in MariaDB 5.3
The patch adds test cases for the following mysql-trunk bugs:
- BUG#12763207 - not a bug, mysql-trunk, added test case
- BUG#50257 - not a bug, mysql-trunk, added test case
- BUG#11765699 - not a bug, mysql-trunk, added test case
- BUG#12616253 - not a bug, mysql-trunk, added test case
The comparison was based on the following version of
mysql-trunk:
revno: 3350 [merge]
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: mysql-trunk
timestamp: Mon 2011-08-08 12:42:09 +0300
message:
Merge mysql-5.5 to mysql-trunk.
The method Item_ref::not_null_tables() returned incorrect bitmap
for outer references to view columns. This could cause an invalid
conversion of an outer join into an inner join that could lead
to a wrong result set for a query with a correlated subquery over
an outer join whose where condition had an outer reference to a view.
The method Item_func_isnull::update_used_tables() erroneously did not
update cached values stored in the fields used_tables_cache and
const_item_cache of the Item_func_isnull objects. As a result the
Item_func_isnull::used_tables() returned wrong bitmaps and, as a
consequence, push-down predicates could be attached to wrong tables.
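Conceptually, the fix makes update_used_tables() recompute the cached
values (a simplified sketch; table_map and the classes below stand in for
the real Item hierarchy):

  #include <cstdint>

  using table_map = std::uint64_t;  // bitmap of tables a condition depends on

  struct Arg {
    table_map used_tables;
  };

  struct IsNullPredicate {
    Arg *arg;
    table_map used_tables_cache = 0;
    bool const_item_cache = false;

    // Must refresh the cached bitmap; leaving it stale is exactly the bug.
    void update_used_tables() {
      used_tables_cache = arg->used_tables;
      const_item_cache = (used_tables_cache == 0);
    }

    table_map used_tables() const { return used_tables_cache; }
  };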
- Replaced old DBUG_ASSERT with a new correct one + a comment.
storage/maria/ma_pagecache.c:
Replaced old DBUG_ASSERT with a new correct one + a comment.
This fixes a bug where, if you use mysqldump --no-create-info to generate a dump that you want to merge with an existing table,
you can get an InnoDB table with duplicated unique keys.
Patch originally by Eric Bergen.
client/mysqldump.c:
Only use UNIQUE_CHECKS=0 for tables that are created.
This ensures that you can't get duplicate unique keys when merging two dumps.
mysql-test/r/mysqldump.result:
Test for mysqldump --no-create-info
Identified all test cases in the MySQL file subquery_mat.inc that are
not present in MariaDB. In total found 8 test cases for the following
MySQL bugs:
* BUG#49630 - not a bug in MariaDB, added test case
* BUG#52538 - not a bug in MariaDB, added test case (checked with VG)
* BUG#53103 - not a bug in MariaDB, added test case
* BUG#54511 - not a bug in MariaDB, added test case
* BUG#56367 - not a bug in MariaDB, added test case
* BUG#59833 - not a bug in MariaDB, added test case
* BUG#11852644 - not a bug in MariaDB, added test case
* BUG#12668294 - not a bug in MariaDB, added test case
None of these MySQL bugs are present in MariaDB 5.3.
The comparison was based on the following version of
mysql-trunk:
revno: 3350 [merge]
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: mysql-trunk
timestamp: Mon 2011-08-08 12:42:09 +0300
message:
Merge mysql-5.5 to mysql-trunk.
This bug is a special case of lp:813447.
Analysis:
Constant optimization finds that the condition t2.a = 1
can be used to access the primary key of table 't2'. As
a result both outer tables t1 and t2 are considered constant
when we reach the execution phase. At the same time, during
constant optimization, the IN predicate is not evaluated
because it is expensive.
When execution of the outer query reaches do_select(),
control flow enters the branch:
if (join->table_count == join->const_tables)
{ ... }
This branch checks only the WHERE and HAVING clauses,
but doesn't check the ON clauses of the query. Since the
IN predicate was not evaluated during optimization, it is
not evaluated at all, thus execution doesn't detect that
the ON clause is FALSE.
Solution:
Similar to the patch for bug lp:813447, exclude system
tables from constant substitution based on unique key
lookups if there is an expensive ON condition on the
inner table.
- create_ref_for_key() has the code that walks KEYUSE array and tries to use
maximum number of keyparts for ref (and eq_ref and ref_or_null) access.
When one constructs ref access for a table that is inside a SJ-Materialization
nest, it is not possible to use tables that are outside the nest (because
materialization is performed before they have any "current value").
The bug was caused by this function not taking this into account.
The reason for the long shutdown is that the I/O threads hang. It appears
that just closing the completion port on XP does not necessarily wake up a
thread waiting in GetQueuedCompletionStatus() (even though this works fine
on later Windows versions).
The fix is to wake up background threads using PostQueuedCompletionStatus()
with a special 'key' parameter indicating shutdown.
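A Win32 sketch of the wakeup technique (SHUTDOWN_KEY and the function names
below are illustrative; only the two completion-port APIs are real):

  #include <windows.h>

  static const ULONG_PTR SHUTDOWN_KEY = (ULONG_PTR)-1;  // illustrative value

  void io_thread_loop(HANDLE completion_port) {
    for (;;) {
      DWORD bytes;
      ULONG_PTR key;
      OVERLAPPED *overlapped;
      BOOL ok = GetQueuedCompletionStatus(completion_port, &bytes, &key,
                                          &overlapped, INFINITE);
      if (ok && key == SHUTDOWN_KEY)
        break;                    // explicit wakeup: exit instead of waiting
      // ... handle the completed I/O request ...
    }
  }

  void request_shutdown(HANDLE completion_port, int io_thread_count) {
    // One packet per waiting thread; works on XP, where closing the port
    // does not reliably wake the waiters.
    for (int i = 0; i < io_thread_count; i++)
      PostQueuedCompletionStatus(completion_port, 0, SHUTDOWN_KEY, NULL);
  }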
The reason for the crash is an InnoDB assertion after trying to load the
condition variable functions dynamically and not finding them.
The fix is to skip dynamic loading if srv_use_native_conditions is FALSE. srv_use_native_conditions
is derived from Windows version and would be FALSE on XP and TRUE on later Windows.
This is the same handling as in MySQL 5.. In MariaDB 5.3 the srv_use_native_conditions check was
presumably lost in the downporting.
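A sketch of the guarded dynamic lookup (the flag, structure and variable
names are illustrative; InitializeConditionVariable is a real Vista+ API):

  #include <windows.h>

  typedef VOID (WINAPI *init_cond_fn)(PVOID);

  static BOOL srv_use_native_conditions;   // FALSE on XP, TRUE on Vista+
  static init_cond_fn my_InitializeConditionVariable;

  static void load_condition_variable_functions(void) {
    if (!srv_use_native_conditions)
      return;   // XP: fall back to events instead of asserting on failure

    HMODULE kernel32 = GetModuleHandle(TEXT("kernel32.dll"));
    my_InitializeConditionVariable = (init_cond_fn)
        GetProcAddress(kernel32, "InitializeConditionVariable");
  }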
storage/maria/ma_blockrec.c:
Unlock bitmaps earlier (no reason to have them locked over _ma_write_clr())
storage/maria/ma_extra.c:
Don't lock THR_LOCK_maria for HA_EXTRA_PREPARE_FOR_RENAME (upper level ensures that we are not opening the same table during this call)
We don't need to have share->intern_lock locked over _ma_flush_table_files()
storage/maria/ma_open.c:
Update comment