Analysis:
Partial matching is used even when there are no NULLs in
a materialized subquery, as long as the left NOT IN operand
may contain NULL values.
This case was not handled correctly in two different places.
First, the implementation of partial matching did not clear
the set of matching columns when the merge process advanced
to the next row.
Second, partial matching was performed even though it is not
needed at all when the left operand contains no NULLs.
Solution:
First, fix subselect_rowid_merge_engine::partial_match() to
properly clean up the bitmap of matching keys when advancing
to the next row.
Second, change subselect_partial_match_engine::exec() so
that when the materialized subquery doesn't contain any
NULLs, and the left operand of [NOT] IN doesn't contain
NULLs either, the method returns without doing any
unnecessary partial matching. The correct result in this
case is in Item::in_value.
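A minimal sketch of the first fix, with hypothetical names and data structures (the real code is subselect_rowid_merge_engine::partial_match() operating on rowid streams, not vectors):
#include <bitset>
#include <cstddef>
#include <vector>
// Each inner row carries per-column match flags; a row is a full match only
// if every required column matches.  Without the reset at the top of the
// outer loop, columns matched on a previous row leak into the current one
// and a match can be reported spuriously.
static const std::size_t MAX_KEYS= 64;
bool partial_match(const std::vector<std::vector<bool> > &rows,
                   std::size_t n_keys)
{
  std::bitset<MAX_KEYS> matching_keys;
  for (std::size_t r= 0; r < rows.size(); r++)
  {
    matching_keys.reset();                  // the fix: clear per row
    for (std::size_t k= 0; k < n_keys; k++)
      if (rows[r][k])
        matching_keys.set(k);               // column k matched on this row
    if (matching_keys.count() == n_keys)
      return true;                          // all columns matched
  }
  return false;
}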
When the WHERE/HAVING condition of a subquery has been transformed
by the optimizer, the pointer stored in the 'where'/'having' field
of the SELECT_LEX structure used for the subquery must be updated
accordingly. Otherwise the pointer may refer to an invalid item.
This can lead to the reported assertion failure for some queries
with correlated subqueries.
The bug is a duplicate of MySQL's Bug#11764086,
however MySQL's fix is incomplete for MariaDB, so
this fix is slightly different.
In addition, this patch renames
Item_func_not_all::top_level() to is_top_level_item()
to bring it in line with the analogous methods of
Item_in_optimizer and Item_subselect.
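A hypothetical, heavily simplified sketch of the WHERE/HAVING pointer update described above (stand-in types, not the real SELECT_LEX/Item classes):
// A transformation may return a different root node; the query block must
// then store the new root, otherwise later phases dereference a stale item.
struct Item
{
  Item *arg;
  explicit Item(Item *a= 0) : arg(a) {}
};
struct Select_block                      // stand-in for SELECT_LEX
{
  Item *where;
  Item *having;
};
// Assumed transformation: may wrap the condition in a new node.
static Item *transform(Item *cond)
{
  return new Item(cond);                 // returns a different root
}
static void optimize_block(Select_block *sel)
{
  if (sel->where)
    sel->where= transform(sel->where);   // keep the stored pointer in sync
  if (sel->having)
    sel->having= transform(sel->having); // same for HAVING
}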
Analysis:
It is possible to determine whether a predicate is
NULL-rejecting only if it is a top-level one. However,
this was not taken into account for Item_in_optimizer.
As a result, a NOT IN predicate was erroneously
considered as NULL-rejecting, and the NULL-complemented
rows generated by the outer join were rejected before
being checked by the NOT IN predicate.
Solution:
Change Item_in_optimizer to be considered as
NULL-rejecting only if it is a top-level predicate.
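A hedged sketch of the rule being enforced, with a stand-in struct instead of the real Item_in_optimizer:
// A predicate may be treated as NULL-rejecting only when it is a top-level
// predicate of WHERE/ON.  Under NOT, a NULL result is not equivalent to
// FALSE, so NULL-complemented rows from an outer join must still reach the
// predicate instead of being filtered out early.
struct Predicate_info
{
  bool top_level;         // directly under WHERE/ON, not negated or wrapped
  bool rejects_null_arg;  // a NULL argument makes the result FALSE
};
static bool is_null_rejecting(const Predicate_info &p)
{
  return p.top_level && p.rejects_null_arg;   // the fix: require top-level
}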
opt_range.cc: modified print_key() so that it doesn't do memory re-allocs when
printing multipart keys over varchar columns. When it did, the key printout in
the debug trace was interrupted by my_malloc/free printouts.
- add_ref_to_table_cond() should not just overwrite pre_idx_push_select_cond
with the contents of tab->select_cond.
pre_idx_push_select_cond exists precisely because it may contain
a condition that is a strict superset of what is in tab->select_cond.
The fix is to inject the generated equality into pre_idx_push_select_cond
(see the sketch after this list).
- Make simplify_joins() set maybe_null=FALSE for tables that were on the
inner sides of inner joins and then were moved to the inner sides of semi-joins.
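A hypothetical sketch of the add_ref_to_table_cond() fix above, using toy condition nodes instead of the real Item/JOIN_TAB API:
// pre_idx_push_select_cond can be a strict superset of select_cond, so a
// generated equality must be AND-ed into it rather than having the whole
// field overwritten with tab->select_cond.
struct Cond
{
  Cond *left, *right;                       // toy AND node
  Cond() : left(0), right(0) {}
};
static Cond *make_and(Cond *a, Cond *b)
{
  if (!a) return b;
  if (!b) return a;
  Cond *res= new Cond();
  res->left= a;
  res->right= b;
  return res;
}
struct Join_tab                             // stand-in for JOIN_TAB
{
  Cond *select_cond;
  Cond *pre_idx_push_select_cond;
};
static void add_generated_equality(Join_tab *tab, Cond *eq)
{
  tab->select_cond= make_and(tab->select_cond, eq);
  // The fix: inject the equality, do not overwrite with tab->select_cond.
  tab->pre_idx_push_select_cond= make_and(tab->pre_idx_push_select_cond, eq);
}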
This is useful when trying to find out why an automatic MyISAM repair fails.
storage/myisam/ha_myisam.cc:
If mysqld is run with --log-warnings=3 or higher, print all check and repair warnings for MyISAM tables to the error log.
thd->user_connect is now handled in thd->cleanup(), which ensures that it works in all contexts (including slaves).
I also added some DBUG_ASSERT() calls to ensure that things work correctly.
sql/sql_acl.cc:
Reset thd->user_connect on failed check_for_max_user_connections() to ensure we don't decrement the value twice.
Removed the no-longer-needed call to decrease_user_connections(), as thd->cleanup() now does it.
sql/sql_class.cc:
Call decrease_user_connections() in thd->cleanup()
sql/sql_connect.cc:
Ensure we don't allocate thd->user_connect twice.
Simplify check_for_max_user_connections().
sql/sql_parse.cc:
Ensure that thd->user_connect is handled properly for the 'change_user' command.
The bug was that when using bulk insert combined with LOCK TABLE, we initialized the IO cache with the wrong file position.
This fixed a bug where MariaDB could not read in a table dump done with mysqldump.
mysql-test/suite/maria/r/locking.result:
Test case for locking + write cache bug
mysql-test/suite/maria/t/locking.test:
Test case for locking + write cache bug
storage/maria/ma_extra.c:
Initialize write cache used with bulk insert to correct file length.
(The old code didn't work if one was using LOCK TABLE for the given table).
The issues were:
- For some tables with a lot of not-packed fields, we didn't allocate enough memory for the head page, which caused DBUG_ASSERT failures.
- Removed a wrong DBUG_ASSERT().
- Fixed a problem with underflow() where it could generate a key page where not all keys fit.
- Max key length is now limited to block_size/3 (was block_size/2). This is required for underflow() to work with packed keys.
mysql-test/lib/v1/mysql-test-run.pl:
Remove --alignment=8 as this doesn't work on 64 bit systems
mysql-test/suite/maria/r/small_blocksize.result:
Test case for Aria bug
mysql-test/suite/maria/t/small_blocksize-master.opt:
Test case for Aria bug
mysql-test/suite/maria/t/small_blocksize.test:
Test case for Aria bug
storage/maria/ha_maria.cc:
Fixed comment
storage/maria/ma_bitmap.c:
Fixed wrong variable usage in find_where_to_split_row() where we allocated too little memory for the head page.
We did not take into account space for head extents (long VARCHAR) when trying to split a row on the head page. This caused us to allocate too little space from the bitmap, which led to ASSERT failures later.
storage/maria/ma_blockrec.c:
Made some arguments const (to ensure they are not accidentally changed)
Removed wrong DBUG_ASSERT()
storage/maria/ma_blockrec.h:
Removed an unused variable
storage/maria/ma_delete.c:
Added my_afree() in case of error
More comments and DBUG_ASSERT() for underflow()
storage/maria/ma_open.c:
Make keyinfo->underflow_block_length smaller for packed keys. This has to be done because, for long packed keys, underflow() otherwise generates a key page where not all keys fit.
storage/maria/ma_page.c:
New DBUG_ASSERT()
storage/maria/ma_write.c:
Fixed comment
storage/maria/maria_def.h:
We have to have space for at least 3 keys on a key page.
(Otherwise the underflow() code doesn't work for packed keys, even when underflow() is called for an empty key page.)
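Illustrative arithmetic only for the block_size/3 limit above (the real code also subtracts page overheads):
// With the maximum key length capped at block_size/3, a key page can always
// hold at least three full-length keys, which the underflow() logic for
// packed keys relies on.
static inline unsigned int max_key_length(unsigned int block_size)
{
  return block_size / 3;          // was block_size / 2 before the fix
}
// e.g. for an 8192-byte key page: 8192 / 3 = 2730, so three maximum-length
// keys (3 * 2730 = 8190 bytes) still fit on one page.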
The bug was a wrong check in aria_chk; the table itself was fine.
storage/maria/ma_bitmap.c:
Print whole bitmap to find errors in last bitmap
storage/maria/ma_check.c:
Fixed a wrong test of whether the bitmap was overallocated.
sql/sql_expression_cache.cc:
Added an early check of the subquery cache hit rate to limit its performance impact in the worst case.
Moved disabling of the cache into a method.
sql/sql_expression_cache.h:
Moved disabling of the cache into a method.
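A hypothetical sketch of the early hit-rate check; the thresholds and names are illustrative, not the actual sql_expression_cache.cc values:
// Track lookups; once enough of them have been seen, give up on the cache
// if the hit rate is too low to pay for the cost of maintaining it.
class Expression_cache_tracker
{
public:
  Expression_cache_tracker() : hits(0), misses(0), disabled(false) {}
  void on_hit()  { hits++;  check_hit_rate(); }
  void on_miss() { misses++; check_hit_rate(); }
  bool is_disabled() const { return disabled; }
private:
  void disable_cache()                      // "disabling cache moved to method"
  {
    disabled= true;
    // ... free the cache's temporary table here ...
  }
  void check_hit_rate()
  {
    const unsigned long min_lookups= 200;   // assumed threshold
    const double min_hit_rate= 0.2;         // assumed threshold
    unsigned long lookups= hits + misses;
    if (!disabled && lookups >= min_lookups &&
        (double) hits / (double) lookups < min_hit_rate)
      disable_cache();
  }
  unsigned long hits, misses;
  bool disabled;
};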
Identified all test cases in the MySQL file subquery.inc that are
not present in MariaDB. This patch adds the test cases that are:
- not present in MySQL 5.5, and
- already fixed in MariaDB 5.3
The patch adds test cases for the following mysql-trunk bugs:
- BUG#12763207 - not a bug, mysql-trunk, added test case
- BUG#50257 - not a bug, mysql-trunk, added test case
- BUG#11765699 - not a bug, mysql-trunk, added test case
- BUG#12616253 - not a bug, mysql-trunk, added test case
The comparison was based on the following version of
mysql-trunk:
revno: 3350 [merge]
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: mysql-trunk
timestamp: Mon 2011-08-08 12:42:09 +0300
message:
Merge mysql-5.5 to mysql-trunk.
The method Item_ref::not_null_tables() returned incorrect bitmap
for outer references to view columns. This could cause an invalid
conversion of an outer join into an inner join that could lead
to a wrong result set for a query with a correlated subquery over
an outer join whose where condition had an outer reference to a view.
The method Item_func_isnull::update_used_tables() erroneously did not
update cached values stored in the fields used_tables_cache and
const_item_cache of the Item_func_isnull objects. As a result,
Item_func_isnull::used_tables() returned wrong bitmaps and, as a
consequence, push-down predicates could be attached to the wrong tables.
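A hypothetical, much simplified sketch of the caching pattern involved (stand-in types, not the real Item_func_isnull code):
typedef unsigned long long table_map;
struct Arg_item                       // stand-in for the function's argument
{
  table_map used_tables;
  bool      is_const;
};
struct Isnull_like                    // stand-in for Item_func_isnull
{
  Arg_item *arg;
  table_map used_tables_cache;
  bool      const_item_cache;
  // The fix: refresh the cached values, otherwise used_tables() keeps
  // returning stale bitmaps and predicates get pushed to the wrong tables.
  void update_used_tables()
  {
    used_tables_cache= arg->used_tables;
    const_item_cache=  arg->is_const;
  }
  table_map used_tables() const { return used_tables_cache; }
};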
- Replaced old DBUG_ASSERT with a new correct one + a comment.
storage/maria/ma_pagecache.c:
Replaced old DBUG_ASSERT with a new correct one + a comment.
This fixes a bug where, when you use mysqldump --no-create-info to generate a dump that you want to merge with an existing table,
you could get an InnoDB table with duplicated unique keys.
Patch originally by Eric Bergen.
client/mysqldump.c:
Only use UNIQUE_CHECKS=0 for tables that the dump itself creates.
This ensures that you can't get duplicate unique keys when merging two dumps.
mysql-test/r/mysqldump.result:
Test for mysqldump --no-create-info
Identified all test cases in the MySQL file subquery_mat.inc that are
not present in MariaDB. In total found 8 test cases for the following
MySQL bugs:
* BUG#49630 - not a bug in MariaDB, added test case
* BUG#52538 - not a bug in MariaDB, added test case (checked with VG)
* BUG#53103 - not a bug in MariaDB, added test case
* BUG#54511 - not a bug in MariaDB, added test case
* BUG#56367 - not a bug in MariaDB, added test case
* BUG#59833 - not a bug in MariaDB, added test case
* BUG#11852644 - not a bug in MariaDB, added test case
* BUG#12668294 - not a bug in MariaDB, added test case
None of these MySQL bugs are present in MariaDB 5.3.
The comparison was based on the following version of
mysql-trunk:
revno: 3350 [merge]
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: mysql-trunk
timestamp: Mon 2011-08-08 12:42:09 +0300
message:
Merge mysql-5.5 to mysql-trunk.
This bug is a special case of lp:813447.
Analysis:
Constant optimization finds that the condition t2.a = 1
can be used to access the primary key of table 't2'. As
a result both outer tables t1 and t2 are considered constant
when we reach the execution phase. At the same time, during
constant optimization, the IN predicate is not evaluated
because it is expensive.
When execution of the outer query reaches do_select(),
control flow enters the branch:
if (join->table_count == join->const_tables)
{ ... }
This branch checks only the WHERE and HAVING clauses,
but doesn't check the ON clauses of the query. Since the
IN predicate was not evaluated during optimization, it is
not evaluated at all, thus execution doesn't detect that
the ON clause is FALSE.
Solution:
Similar to the patch for bug lp:813447, exclude system
tables from constant substitution based on unique key
lookups if there is an expensive ON condition on the
inner table.
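A hedged sketch of the added rule, with simplified stand-in fields:
// A table should not be read as a constant via a unique-key lookup if an
// expensive predicate (e.g. an uncached subquery) is attached to the ON
// clause of its join nest: substituting the table early would mean the ON
// predicate is never evaluated.
struct Table_access_info
{
  bool unique_key_lookup;   // eq_ref on a unique/primary key with constants
  bool expensive_on_cond;   // ON clause contains an expensive predicate
};
static bool can_read_as_constant(const Table_access_info &t)
{
  return t.unique_key_lookup && !t.expensive_on_cond;
}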
- create_ref_for_key() has the code that walks the KEYUSE array and tries to use the
maximum number of keyparts for ref (and eq_ref and ref_or_null) access.
When one constructs ref access for a table that is inside an SJ-Materialization
nest, it is not possible to use tables that are outside the nest (because
materialization is performed before they have any "current value").
The bug was caused by this function not taking this into account.
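A hypothetical sketch of the added check, using a plain table bitmap instead of the real KEYUSE/semi-join structures:
typedef unsigned long long table_map;
struct Keyuse_info                 // stand-in for one KEYUSE element
{
  unsigned int keypart;
  table_map    used_tables;        // tables the key value depends on
};
// When building ref access for a table inside an SJ-Materialization nest,
// key parts whose values depend on tables outside the nest cannot be used:
// those tables have no "current row" when the nest is materialized.
static bool usable_inside_sjm_nest(const Keyuse_info &k, table_map nest_tables)
{
  return (k.used_tables & ~nest_tables) == 0;
}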
The reason for the long shutdown is hanging I/O threads. It appears
that just closing the completion port on XP does not necessarily wake up a
thread waiting in GetQueuedCompletionStatus() (even though this works fine
on later Windows versions).
The fix is to wake up background threads using PostQueuedCompletionStatus()
with a special 'key' parameter indicating shutdown.
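A sketch of the wakeup technique, assuming one completion port shared by the background I/O threads (the shutdown key constant is hypothetical):
#include <windows.h>
static const ULONG_PTR SHUTDOWN_KEY= (ULONG_PTR) -1;
// Worker loop: exits cleanly when the shutdown key is posted, instead of
// relying on closing the port to wake it up (which XP may not do).
DWORD WINAPI io_worker(LPVOID arg)
{
  HANDLE port= (HANDLE) arg;
  for (;;)
  {
    DWORD       bytes;
    ULONG_PTR   key;
    OVERLAPPED *ov;
    BOOL ok= GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE);
    if (ok && key == SHUTDOWN_KEY)
      break;                        // explicit shutdown signal
    // ... otherwise handle the completed (or failed) I/O request ...
  }
  return 0;
}
// Shutdown path: post one dummy completion per worker thread.
void wake_io_workers(HANDLE port, int n_threads)
{
  for (int i= 0; i < n_threads; i++)
    PostQueuedCompletionStatus(port, 0, SHUTDOWN_KEY, NULL);
}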
The reason for the crash is an InnoDB assertion after trying to load the condition variable functions
dynamically and not finding them.
The fix is to skip dynamic loading if srv_use_native_conditions is FALSE. srv_use_native_conditions
is derived from the Windows version and is FALSE on XP and TRUE on later Windows versions.
This is the same handling as in MySQL 5.5. In Maria 5.3 the srv_use_native_conditions check was
presumably lost in the downporting.
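A hedged sketch of the guard; the loader details are illustrative and simplified, not the actual InnoDB code:
#include <windows.h>
typedef VOID (WINAPI *init_cond_fn)(PVOID);     // PCONDITION_VARIABLE in reality
static BOOL         srv_use_native_conditions;  // FALSE on XP, TRUE on Vista+
static init_cond_fn my_init_condition_variable;
static void load_condition_variable_functions(void)
{
  if (!srv_use_native_conditions)
    return;                                     // the fix: skip loading on XP
  HMODULE kernel32= GetModuleHandleA("kernel32.dll");
  my_init_condition_variable= (init_cond_fn)
      GetProcAddress(kernel32, "InitializeConditionVariable");
  // only assert that loading succeeded when native support is expected
}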
storage/maria/ma_blockrec.c:
Unlock bitmaps earlier (no reason to keep them locked over _ma_write_clr())
storage/maria/ma_extra.c:
Don't lock THR_LOCK_maria for HA_EXTRA_PREPARE_FOR_RENAME (upper level ensures that we are not opening the same table during this call)
We don't need to have share->intern_lock locked over _ma_flush_table_files()
storage/maria/ma_open.c:
Update comment
revno: 2876.47.174
revision-id: jorgen.loland@oracle.com-20110519120355-qn7eprkad9jqwu5j
parent: mayank.prasad@oracle.com-20110518143645-bdxv4udzrmqsjmhq
committer: Jorgen Loland <jorgen.loland@oracle.com>
branch nick: mysql-trunk-11765831
timestamp: Thu 2011-05-19 14:03:55 +0200
message:
BUG#11765831: 'RANGE ACCESS' MAY INCORRECTLY FILTER
AWAY QUALIFYING ROWS
The problem was that the ranges created when OR'ing two
conditions could be incorrect. Without the bugfix,
"I <> 6 OR (I <> 8 AND J = 5)" would create these ranges:
"NULL < I < 6",
"6 <= I <= 6 AND 5 <= J <= 5",
"6 < I < 8",
"8 <= I <= 8 AND 5 <= J <= 5",
"8 < I"
While the correct ranges are
"NULL < I < 6",
"6 <= I <= 6 AND 5 <= J <= 5",
"6 < I"
The problem occurs when key_or() ORs
(1) "NULL < I < 6, 6 <= I <= 6 AND 5 <= J <= 5, 6 < I" with
(2) "8 < I AND 5 <= J <= 5"
The reason for the bug is that in key_or(), SEL_ARG *tmp is
used to point to the range in (1) above that is merged with
(2) while key1 points to the root of the red-black tree of
(1). When merging (1) and (2), tmp refers to the "6 < I"
part whereas the root is the "6 <= ... AND 5 <= J <= 5" part.
key_or() decides that the tmp range needs to be split into
"6 < I < 8, 8 <= I <= 8, 8 < I", in which next_key_part of the
second range should be that of tmp. However, next_key_part is
set to key1->next_key_part ("5 <= J <= 5") instead of
tmp->next_key_part (empty). Fixing this gives the correct but
not optimal ranges:
"NULL < I < 6",
"6 <= I <= 6 AND 5 <= J <= 5",
"6 < I < 8",
"8 <= I <= 8",
"8 < I"
A second problem can be seen above: key_or() may create
adjacent ranges that could be replaced with a single range.
Fixes for this are also included in the patch so that the ranges
above become correct AND optimal:
"NULL < I < 6",
"6 <= I <= 6 AND 5 <= J <= 5",
"6 < I"
Merging adjacent ranges like this gives a slightly lower cost
estimate for the range access.
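A hypothetical sketch of the adjacent-range merge, with integer endpoints instead of real key images and SEL_ARG flags:
// Two ranges over the same key part can be collapsed into one when they are
// adjacent and impose the same condition on the remaining key parts, e.g.
// "6 < I < 8", "8 <= I <= 8" and "8 < I" above collapse into "6 < I".
struct Simple_range
{
  int  min, max;                    // inclusive endpoints, for illustration
  const void *next_key_part;        // condition on the following key part
};
static bool can_merge_adjacent(const Simple_range &a, const Simple_range &b)
{
  return a.max + 1 == b.min && a.next_key_part == b.next_key_part;
}
static Simple_range merge_adjacent(const Simple_range &a, const Simple_range &b)
{
  Simple_range res= { a.min, b.max, a.next_key_part };
  return res;
}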