The result of materialization of the right part of an IN subquery predicate
is placed into a temporary table. Each row of the materialized table is
distinct. A unique key over all fields of the temporary table is defined and
created. This makes it possible to perform key look-ups into the table.
The table created for a materialized subquery can be accessed by key like
any other table. The function best_access_path searches for the best access
path to join a table to a given partial join. With some WHERE conditions this
function considers the possibility of ref_or_null access. If such an access
employs the unique key on the temporary table, then when estimating
the cost of this access the function tries to use the array rec_per_key. Yet,
such an array is not built for this unique key. This causes a crash of the server.
Rows returned by the subquery that contain NULLs don't have to be placed
into the temporary table, as they cannot match any row produced by the
left part of the subquery predicate. So all fields of the temporary table
can be defined as non-nullable. In this case any ref_or_null access
to the temporary table makes no sense, and there is no point in estimating
such an access.
The fix makes sure that the temporary table for a materialized IN subquery
is defined with columns that are all non-nullable. It also ensures that
any row with NULLs returned by the subquery is not placed into the
temporary table.
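As an illustration, a hypothetical query shape of the kind affected (table
and column names are made up):

  SELECT * FROM t1
  WHERE (t1.a, t1.b) IN (SELECT t2.x, t2.y FROM t2);

  -- With materialization, any (x, y) row where x or y is NULL can never
  -- match a row of t1, so such rows need not be stored in the temporary
  -- table, and all of its columns can be declared non-nullable.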
- After the exec_const_cond->val_int() call, check for error and return.
(If we don't, we will eventually hit an error when trying to set status OK in
the diagnostics area, which already has an error status.)
CHECK_SIMPLE_EQUALITY
PROBLEM:
Crash in "check_simple_equality" when using a subquery with "IN" and
"ALL" in prepare.
ANALYSIS:
Crash can be reproduced using a simplified query like this one:
prepare s from "select 1 from g1 where 1 < all (
select @:=(1 in (select 1 from g1)) from g1)";
This bug is currently present only on 5.5 and 5.1. It is fixed as part
of worklog (#1110) in 5.6. We are taking one change to fix this
in 5.5 and 5.1.
The problem seems to be present because we are trying to evaluate "is_null"
on an argument which is part of a subquery
(in Item_is_not_null_test::update_used_tables()).
But this evaluation is valid only when we do not have a subquery
present, which is to say that "with_subselect" is not set.
With respect to the above query, we create an object of type
"Item_in_optimizer" which by definition is always associated with a
subquery. While in 5.6 we set "with_subselect" to true for
"Item_in_optimizer" object, we do not do the same in 5.5. This results in
the evaluation for "is_null" resulting in a coredump.
So, we are now setting "with_subselect" to true for "Item_in_optimizer"
in 5.1 and 5.5.
mysql-test/r/func_in.result:
Result file changes for the test case added
mysql-test/t/func_in.test:
Test case added for Bug#13012483
sql/item_cmpfunc.h:
Changed Item_in_optimizer::Item_in_optimizer( ) to set "with_subselect"
to true
BUILD/SETUP.sh:
By default, build also with innodb-plugin
mysql-test/mysql-test-run.pl:
Also search in lib64 directory for plugins (This is used at least on OpenSuse 12.1 when using default build scripts)
mysql-test/r/lock_multi.result:
Allow test to be re-run even if it crashed.
mysql-test/t/lock_multi.test:
Allow test to be re-run even if it crashed.
scripts/make_binary_distribution.sh:
Ensure that libexecdir is named libexec (was not on OpenSuse 12.1)
scripts/mysql_config.sh:
Fixed detection of whether lib64 was used.
mysql-test-run auto-disables all optional plugins.
mysql-test/include/default_client.cnf:
no @OPT.plugindir anymore
mysql-test/include/default_mysqld.cnf:
don't disable plugins manually - mtr can do it better
mysql-test/suite/innodb/t/innodb_bug47167.test:
mtr now uses suite-dir as an include path
mysql-test/suite/innodb/t/innodb_file_format.test:
mtr now uses suite-dir as an include path
mysql-test/t/partition_binlog.test:
this test uses partitions
storage/example/mysql-test/mtr/t/source.result:
Update results, as mysqltest includes the correct overlayed include.
storage/innobase/handler/ha_innodb.cc:
the assert is wrong
PARTITION STATISTICS
Problem was the fix for bug#11756867; It always used the first
partitions, and stopped after it checked 10 [sub]partitions.
(or until it found a partition which would contain a match).
This results in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (like when the first 10
partitions only contained a few rows in total).
The solution was to take statistics from the partitions containing
the most rows instead:
Added an array of partition ids which is sorted by number of records
in descending order.
This array is used in records_in_range to cover as many records as
possible in as few calls as possible.
Also changed the limit of how many partitions to use for the statistics
from a static max of 10 partitions, into a dynamic model:
Maximum number of partitions is now log2(total number of partitions)
taken from the ordered array.
It will continue calling records_in_range on partitions until it has
checked:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
Also reverted the changes for ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to before
the fix of bug#11756867, since they are not as slow as
records_in_range.
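As an illustrative (made-up) example of the new model: for a table where 128
[sub]partitions match the query, at most log2(128) = 7 partitions are taken
from the ordered array for the records_in_range calls, starting with the
partition holding the most rows, instead of the first 10 partitions in
definition order.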
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with
the partitioning engine.
There is no good way to implement support for QC due to:
1) a static callback for ha_partition would need to have access
to all partition names and call the underlying callback for each
[sub]partition with the correct name.
2) pruning would be impossible, even if one used the ulonglong
engine_data, because if engine_data is changed, the table is
invalidated by the QC.
So the only viable solution to avoid incorrect data is to not allow
caching of queries using partitioned tables.
(There are some extra changes, due to removal of \r as line break)
- MySQL 5.5 introduced caching of constant items by means of
wrapping them in Item_cache_XXX objects. If a subquery was wrapped
in this cache, it could end up being pushed down by ICP.
- The fix is to add Item_cache::walk(), which lets ICP see that
the cache item has a subquery inside it, and disable pushdown for this case.
- In mi_rkey(), correctly handle the case where mi_yield_and_check_if_killed()
detects that the thread was killed (all other similar functions in MyISAM/Aria have
slightly different code and do not have this problem).
- Also fixed assignment in DBUG_ASSERT
- this is 2nd variant of the fix:
= make .result file smaller
= run KILLable statements in a separate connection, otherwise we could end up trying to
KILL the final "DROP TABLE" statement
Fixed README with link to source
Merged InnoDB change to XtraDB
README:
Added information of where to find MariaDB code
storage/archive/ha_archive.cc:
Removed memset() of rows; MariaDB's checksum doesn't touch unused data.
- In return_zero_rows(), don't call mark_as_null_row() for semi-join
materialized tables, because 1) they may have been already freed, and
2) there is no real need to call mark_as_null_row() for them.
This bug is the result of an incomplete/inconsistent change introduced into
the 5.3 code when the cond_equal parameter was added to the function optimize_cond.
The change was made during a merge from 5.2 in October 2010.
The bug could affect only queries with HAVING.
An outer join query with a semi-join subquery could return a wrong result
if the optimizer chose to materialize the subquery.
It happened because, when substituting the best field into a ref item
used to build access keys, not all COND_EQUAL objects that could be employed
in the substitution were checked.
Also refined some code in the function check_join_cache_usage to make it
safer.
timestamp: Thu 2011-12-01 15:12:10 +0100
Fix for Bug#13430436 PERFORMANCE DEGRADATION IN SYSBENCH ON INNODB DUE TO ICP
When running sysbench on InnoDB there is a performance degradation due
to index condition pushdown (ICP). Several of the queries in sysbench
have a WHERE condition that the optimizer uses for executing these
queries as range scans. The upper and lower limit of the range scan
will ensure that the WHERE condition is fulfilled. Still, the WHERE
condition is part of the queries' condition and if ICP is enabled the
condition will be pushed down to InnoDB as an index condition.
Because the range scan's upper and lower limits ensure that the WHERE
condition is fulfilled, the pushed index condition will not filter out
any records. As a result, the use of ICP for these queries results in a
performance overhead for sysbench. This overhead comes from using
resources for determining the part of the condition that can be pushed
down to InnoDB and overhead in InnoDB for executing the pushed index
condition.
With the default configuration for sysbench the range scans will use
the primary key. This is a clustered index in InnoDB. Using ICP on a
clustered index provides the lowest performance benefit since the
entire record is part of the clustered index and in InnoDB it has the
highest relative overhead for executing the pushed index condition.
The fix for removing the overhead ICP introduces when running sysbench
is to disable use of ICP when the index used by the query is a
clustered index.
When WL#6061 is implemented this change should be re-evaluated.
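A sysbench-style query shape that illustrates the overhead (the schema is
illustrative, not taken from the actual benchmark files):

  -- Range scan over the clustered PRIMARY KEY: the range bounds already
  -- guarantee the WHERE condition, so a pushed index condition filters
  -- out nothing and only adds per-row evaluation cost inside InnoDB.
  SELECT c FROM sbtest WHERE id BETWEEN 100 AND 200;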
Problem was that now we can merge a derived table (a subquery in the FROM clause).
Fix: in case of a detected conflict and the presence of a derived table "over" the table which caused the conflict, try the materialization strategy.
If the flag 'optimize_join_buffer_size' is set to 'off' and the value
of the system variable 'join_buffer_size' is greater than the value of
the system variable 'join_buffer_space_limit', then no join cache can
be employed to join tables of the executed query.
A bug in the function JOIN_CACHE::alloc_buffer allowed a join buffer
to be used even in this case, while another bug in the function
revise_cache_usage could cause a crash of the server in this case if the
chosen execution plan for the query contained an outer join or semi-join
operation.
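A minimal sketch of the affected configuration (assuming the MariaDB names
'optimize_join_buffer_size' in @@optimizer_switch and the
'join_buffer_space_limit' system variable):

  SET SESSION optimizer_switch = 'optimize_join_buffer_size=off';
  SET SESSION join_buffer_space_limit = 131072;  -- 128K of total space
  SET SESSION join_buffer_size = 1048576;        -- 1M per buffer exceeds the
                                                 -- space limit, so no join
                                                 -- cache may be employed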
This patch is a backport of some of the cleanups/refactorings that were done
as part of WL#1393 Optimizing filesort with small limit.
mysql-test/t/bug13633383.test:
New test case.
mysql-test/valgrind.supp:
Changed signature for find_all_keys
Fixed a typo in the comment.
Fixing test cases which were previously not throwing due to the
disable_warnings macro.
sql/sql_base.cc:
Change in indentation and fixing a typo in the comment.
Problem: Statements that write to tables with auto_increment columns
based on the selection from another table, may lead to master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark writing to a table with an auto_increment column
based on the rows selected from another table as unsafe. This
will cause the execution of such statements to throw a warning
and forces the statement to be logged in ROW if the logging
format is mixed.
Changes:
1. All the statements that write to a table with auto_increment
column(s) based on the rows fetched from another table will now
be unsafe (see the sketch below).
2. CREATE TABLE with SELECT will now be unsafe.
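A minimal sketch of statements that are now marked unsafe (table names are
illustrative):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  -- Unsafe: the order of rows selected from t2 may differ between
  -- master and slave, producing different auto_increment values:
  INSERT INTO t1 (a) SELECT a FROM t2;
  -- Also unsafe: CREATE TABLE ... SELECT into an auto_increment table:
  CREATE TABLE t3 (id INT AUTO_INCREMENT PRIMARY KEY, a INT)
    SELECT a FROM t2;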
sql/share/errmsg-utf8.txt:
Added new warning messages.
sql/sql_base.cc:
- Created a function to check for statements that write to
tables with an auto_increment column based on a SELECT.
- Marked all the statements that write to a table
with an auto_increment column based on rows fetched
from other table(s) as unsafe.
sql/sql_table.cc:
Mark CREATE TABLE [with an auto_increment column] as unsafe.
- mysql-test-run.pl --valgrind complains when all tests succeed.
- perfschema.all_instances fail on non-linux, where ENABLE_TEMP_POOL
is not set and therefore BITMAP mutex is not used.
- MDEV-132: main.mysqldump fails because it depends on exact size of stdio
buffers.
- MDEV-99: rpl.rpl_cant_read_event_incident fails due to a race where the
slave manages to connect while the test case is in the middle of setting up
the master, causing the slave to replicate extra/wrong events.
- MDEV-133: rpl.rpl_rotate_purge_deadlock fails because it issues a
DEBUG_SYNC SIGNAL immediately followed by RESET; this means that sometimes
the intended recipient has no time to see the signal before it is cleared
by the RESET, causing the wait to time out.
Problem: Statements that write to tables with auto_increment columns
based on the selection from another table, may lead to master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark writing to a table with an auto_increment column
as unsafe. This will cause the execution of such statements to
throw a warning and forces the statement to be logged in ROW if
the logging format is mixed.
Changes:
1. All the statements that write to a table with auto_increment
column(s) based on the rows fetched from another table will now
be unsafe.
2. CREATE TABLE with SELECT will now be unsafe.
sql/share/errmsg-utf8.txt:
Added new Warning messages
sql/sql_base.cc:
Created a new function that checks for SELECT + write on an autoinc table
and made all such statements unsafe.
sql/sql_parse.cc:
Made CREATE TABLE [with an auto_increment column] + SELECT unsafe.
storage/innobase/handler/handler0alter.cc:
for NEWDATE key_type says unsigned, thus col->prtype says unsigned,
but field->flags says signed. Use the same flag for value retrieval
that was used for value storage.
IS EXECUTED TWICE FROM P
This bug is a duplicate of bug 12567331, which was pushed to the
optimizer backporting tree on 2011-06-11. This is just a back-port of
the fix. Both test cases are included as they differ somewhat.
For single-table UPDATE/INSERT, added a deep check of single tables (single_table_updatable()).
For multi-table view INSERT, added an additional check of the target table (check_view_single_update).
Multi-update was already correct.
A test suite for all cases was added.
ALTER TABLE AFTER DROP PARTITION
Bug#13608188 - 64038: CRASH IN HANDLER::HA_THD ON ALTER TABLE AFTER
REPAIR NON-EXISTING PARTITION
Backport of bug#13357766 from -trunk to -5.5.
The state of some partitions was not reset on failure, leading
to invalid states of partitions in subsequent statements.
Fixed by reverting back to the original state for all partitions
if not all partition names were resolved.
Also adding extra security by forcing tables to be reopened
in case of error in mysql_alter_table.
(There is also removal of \r at the end of some lines.)
CASES RESETS DATA POINTER TO SMAL
ISSUE: myisamchk doing sort recover
on a table reduces data_file_length.
The maximum size of the data file decreases,
so fewer rows can be stored.
SOLUTION: data_file_length is
restored to the original length.
- If LooseScan is used with quick select, require that quick select produces
data in key order (this disables use of MRR, which can return data in arbitrary order).
KEY HANDLING ON SUBSEQUENT CREATE TABLE IF NOT EXISTS
PROBLEM:
--------
Consider a SP routine which does CREATE TABLE
with REFERENCES clause. The first call to this routine
invokes parser and the parsed items are cached, so as
to avoid parsing for the second execution of the routine.
It is observed that valgrind reports a warning
upon read of the thd->lex->alter_info->key_list->Foreign_key object,
which seems to be pointing to an invalid memory address
during the second execution of the routine. Accessing this object
could theoretically cause a crash.
ANALYSIS:
---------
The problem stems from the fact that for some reason
elements of ref_columns list in thd->lex->alter_info->
key_list->Foreign_key object are changed to point to
objects allocated on runtime memory root.
During the first execution of the routine we create
a copy of the thd->lex->alter_info object.
As part of this process we create clones of the objects in
Alter_info::key_list, and of the Foreign_key object in particular.
When the Foreign_key object is cloned, we
perform shallow copies of both the Foreign_key::ref_columns
and Foreign_key::columns lists. So the new instance of the
Foreign_key object starts to SHARE the contents of the ref_columns
and columns lists with the original instance.
After that, as part of the cloning process, we call
list_copy_and_replace_each_value() for the elements of the
ref_columns list. As a result, the ref_columns lists in both the
original and the cloned Foreign_key object start to contain
pointers to Key_part_spec objects allocated on the runtime
memory root, because of the shallow copy.
So when we start copying the thd->lex->alter_info object
during the second execution of the stored routine, we indeed
encounter a pointer to a Key_part_spec object allocated
on the runtime mem-root which was cleared at the end
of the previous execution. This is done in sp_head::execute(),
by a call to free_root(&execute_mem_root, MYF(0)).
As a result we get valgrind warnings about accessing
unreferenced memory.
FIX:
----
The safest solution to this problem is to
fix Foreign_key(Foreign_key, MEM_ROOT) constructor to do
a deep copy of columns lists, similar to Key(Key, MEM_ROOT)
constructor.
When working on MWL#247 I forgot to adjust the function create_hj_key_for_table()
that created a key definition for hash join keys. The modified function must
set the values of the fields ext_key_parts, ext_key_flags, ext_key_part_map
added to the key definition structure in MWL#247.
not be reproduced in the latest release of mariadb-5.3, as it was fixed
by Sergey Petrunia when working on the problems concerning outer joins
within subqueries converted to semi-joins.
Fixed Item* Item_equal::get_first(JOIN_TAB *context, Item *field_item) to work correctly in the case where:
- context!= NO_PARTICULAR_TAB, it points to a table within SJ-Materialization nest
- field_item points to an item_equal that has a constant Item_field but does not have any fields
from tables that are within semi-join nests.
Bug#13011410 CRASH IN FILESORT CODE WITH GROUP BY/ROLLUP
The assert in 13580775 is visible in 5.6 only,
but shows that all versions are vulnerable.
13011410 crashes in all versions.
filesort tries to re-use the sort buffer between invocations in order to save
malloc/free overhead.
The fix for Bug 11748783 - 37359: FILESORT CAN BE MORE EFFICIENT.
added an assert that buffer properties (num_records, record_length) are
consistent between invocations. Indeed, they are not necessarily consistent.
Fix: re-allocate the sort buffer if properties change.
mysql-test/r/partition.result:
New tests.
mysql-test/t/partition.test:
New tests.
sql/filesort.cc:
If we already have allocated a sort buffer in a previous execution,
then verify that it is big enough for the current one.
sql/table.h:
Add sort_keys_size; Number of bytes allocated for the sort_keys buffer.
BUG#13519696 - 62940: SELECT RESULTS VARY WITH VERSION AND
WITH/WITHOUT INDEX RANGE SCAN
BUG#13453382 - REGRESSION SINCE 5.1.39, RANGE OPTIMIZER WRONG
RESULTS WITH DECIMAL CONVERSION
BUG#13463488 - 63437: CHAR & BETWEEN WITH INDEX RETURNS WRONG
RESULT AFTER MYSQL 5.1.
Those are all cases where the range optimizer got it wrong
with > and >=.
mysql-test/r/range.result:
Without the code fix for DECIMAL, "select count(val) from t2 where val > 0.1155"
(which uses a range scan) returned 127 instead of 128.
Moreover, both
select * from t1 force index (primary) where a=1 and c>= 2.9;
and
select * from t1 force index (primary) where a=1 and c> 2.9;
would miss "1 1 3".
Without the code fix for strings, both
SELECT * FROM t1 WHERE F1 >= 'A ';
and
SELECT * FROM t1 WHERE F1 BETWEEN 'A ' AND 'AAAAA';
would miss "A A A".
sql/item.cc:
Preamble to the explanations below: opt_range.cc:get_mm_leaf() does
this (this is not changed by the patch): changes
column > value
to
column OP V
where:
* V is what is in "column" after we stored "value" in it
(such store operation may have done rounding...)
* OP is > or >=, depending on what's correct.
For example, if c is an INT column,
c > 2.9 is changed to
c OP 3
where OP is >= ('>' would not be correct).
The bugs below are cases where we chose OP wrongly.
Note that such transformations are visible in the optimizer trace.
1) Fix for STRING. In the scenario with CHAR(5) in range.test, this happens,
in get_mm_tree(), for the condition F1>='A ':
* value->save_in_field_no_warnings(field, 1) wants to store the right argument
(named 'item') into the CHAR(5) field; this stores 'A ' (the item's value)
padded with spaces (which changes nothing: still 'A ')
* we come to
case Item_func::GE_FUNC:
/* Don't use open ranges for partial key_segments */
if ((!(key_part->flag & HA_PART_KEY_SEG)) &&
(stored_field_cmp_to_item(param->thd, field, value) < 0))
tree->min_flag= NEAR_MIN;
tree->max_flag=NO_MAX_RANGE;
What this wants to do is: if the field's value is strictly smaller
than the item's, then ">=" can be changed to ">" (this is an optimization,
it can help pruning one useless partition).
* stored_field_cmp_to_item() is called; it compares the field's
and item's values: the item's value (Item_string::val_str()) is
'A ', and the field's value (Field_string::val_str()) is
'A' (yes, val_str() removes end spaces unless sql_mode='PAD_CHAR_TO_FULL_LENGTH');
and the comparison is done with stringcmp() which considers
end spaces as relevant; as end spaces differ, function returns a
negative number, and ">='A '" becomes ">'A'" (i.e. the NEAR_MIN
flag is turned on).
During execution the index range scan code will search for "A", find
a match, but exclude it (because of ">"), wrongly.
The badness is the string comparison done by stored_field_cmp_to_item():
we use the result of this function to determine where the index search
should start, so it should do comparisons the way the index search does
comparisons; index search comparisons are done by ha_key_cmp(), which uses
a collation-aware comparison (in our case, my_strnncollsp_simple(),
which ignores end spaces); so stored_field_cmp_to_item()
needs to do the same. When this is fixed, condition becomes
">='A '".
2) Fix for DECIMAL: just like in other comparisons in stored_field_cmp_to_item(),
we must first pass the field and then the item; otherwise expectations
on what <0 and >0 mean (inferiority, superiority) get violated.
In the test in range.test about c>2.9: c is an INT column, so 2.9
gets stored as 3, then stored_field_cmp_to_item() compares 3
and 2.9; because of the wrong order of arguments passed
to my_decimal_cmp(), range optimizer
thinks that 3 is < 2.9 and thus changes "c> 2.9" to "c> 3".
After fixing the order, it changes to the correct "c>= 3".
In the test in range.inc for val > 0.1155, it was changed to
val > 0.116, now it is changed to val >= 0.116.
- Disable use of join cache when we're using FirstMatch strategy, and the join
order is such that subquery's inner tables are interleaved with outer. Join
buffering code is incapable of handling such join orders.
- The testcase requires use of @@debug_optimizer_prefer_join_prefix to hit the bug,
but I'm pushing it anyway (including the mention of the variable in .test file),
so that it can be found and enabled when/if we get something comparable in the
main tree.
Bug#12985021 SIMPLE QUERY WITH DECIMAL NUMBERS TAKE AN
When parsing the fractional part of a string which
is to be converted to double, we can stop after a few digits:
the extra digits will not contribute to the actual result anyway.
mysql-test/r/func_str.result:
New tests.
mysql-test/t/func_str.test:
New tests.
strings/dtoa.c:
The problem was s2b() multiplying and adding hundreds-of-thousands
of ever smaller fractions.
The problem was that the LooseScan execution code assumed that tab->key holds
the index used for LooseScan. This is only true when a range or full index
scan is used. In case of ref access, the index is in tab->ref.key (and
tab->index==0, which explains how LooseScan passed tests with ref access: they
used one index).
Fixed by setting/using loosescan_key, which always holds the correct index#.
Bug#11758543 50756: BIGINT '100' MATCHES 1.001E2
Expressions of the form
BIGINT_COL <compare> <non-integer constant>
should be done either as decimal, or float.
Currently however, such comparisons are done as int,
which means that the constant may be truncated,
and yield false positives/negatives for all queries
where compare is '>' '<' '>=' '<=' '=' '!='.
BIGINT_COL IN <list of constants>
and
BIGINT_COL BETWEEN <constant> AND <constant>
are also affected.
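A minimal illustration of such a false positive (the table is hypothetical):

  CREATE TABLE t1 (a BIGINT);
  INSERT INTO t1 VALUES (100);
  -- 1.001e2 = 100.1, so no row should match; with the comparison done
  -- as int, the constant was truncated to 100 and the row matched:
  SELECT * FROM t1 WHERE a = 1.001e2;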
mysql-test/r/bigint.result:
New tests.
mysql-test/r/func_in.result:
BIGINT <=> string comparison should be done as float,
so a warning for the value 'abc' is appropriate.
mysql-test/t/bigint.test:
New tests.
sql/item_cmpfunc.cc:
In convert_constant_item() we verify that the constant item
can be stored in the given field.
For BIGINT columns (MYSQL_TYPE_LONGLONG) we must verify that the
stored constant value is actually comparable as int,
i.e. that the value was not truncated.
For between: compare as int only if both arguments convert correctly to int.
MEMORY LEAK.
Background:
- There are caches for stored functions and stored procedures (SP-cache);
- There is no similar cache for events;
- Triggers are cached together with TABLE objects;
- Those SP-caches are per-session (i.e. specific to each session);
- A stored routine is represented by a sp_head-instance internally;
- SP-cache basically contains sp_head-objects of stored routines, which
have been executed in a session;
- sp_head-object is added into the SP-cache before the corresponding
stored routine is executed;
- SP-cache is flushed at the end of the session.
The problem was that the SP-cache might grow without any limit. Although this
was not a pure memory leak (the SP-cache is flushed when the session is closed),
it is still a problem, because a user might consume a lot of memory by
executing many stored routines.
The patch fixes this problem in the least-intrusive way. A soft limit
(similar to the size of table definition cache) is introduced. To represent
such limit the new runtime configuration parameter 'stored_program_cache'
is introduced. The value of this parameter is stored in the new global
variable stored_program_cache_size, which is used to check the size of the
SP-cache for overflow.
The parameter 'stored_program_cache' limits the number of cached routines for
each thread. It has the following min/default/max values:
min = 256, default = 256, max = 512 * 1024.
Also it should be noted that this parameter limits the size of
each cache (for stored procedures and for stored functions) separately.
The SP-cache size is checked after a top-level statement is parsed.
If the SP-cache size exceeds the limit specified by the parameter
'stored_program_cache', then the SP-cache is flushed and the memory allocated
for cache objects is freed. This approach allows the cache to be flushed
safely when there are dependencies among stored routines.
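For example, the new limit can be inspected and adjusted at runtime:

  SHOW VARIABLES LIKE 'stored_program_cache';
  SET GLOBAL stored_program_cache = 1024;  -- per-thread limit, applied
                                           -- separately to the procedure
                                           -- cache and the function cache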
sql/mysqld.cc:
Added global variable stored_program_cache_size to store value of
configuration parameter 'stored-program-cache'.
sql/mysqld.h:
Added declaration of global variable stored_program_cache_size.
sql/sp_cache.cc:
Extended the sp_cache interface by adding the helper routine
sp_cache_enforce_limit to check the size of the stored-routine cache for
overflow. Also added the method enforce_limit to class sp_cache, which
implements the check of the cache size for overflow.
sql/sp_cache.h:
Extended the sp_cache interface by adding the standalone routine
sp_cache_enforce_limit to check the size of the stored-routine cache
for overflow.
sql/sql_parse.cc:
Added a flush of the sp_cache after processing each SQL statement
received from a client.
sql/sql_prepare.cc:
Added a flush of the sp_cache after preparation/execution of each prepared
SQL statement received from a client.
sql/sys_vars.cc:
Added support for configuration parameter stored-program-cache.
The fields ext_key_flags and ext_key_part_map must be initialized for any
key, even for a MyISAM key that is never considered by the optimizer as one
extended by hidden components.
This patch changes the mechanism by which the client enables a
plugin. Instead of using INSERT IGNORE to reload a plugin library,
it now uses REPLACE INTO. This allows users to load a library
multiple times, replacing the existing values in the mysql.plugin
table. This allows users to replace the symbol reference with a
different dl name in the table, thus permitting multiple
versions of the same library to be enabled without first disabling
the old version.
A regression test was added to ensure this feature works.
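Conceptually, the client's reload of a plugin library changes from something
like

  INSERT IGNORE INTO mysql.plugin (name, dl)
    VALUES ('my_plugin', 'my_plugin.so');

to (the plugin name and dl values are illustrative)

  REPLACE INTO mysql.plugin (name, dl)
    VALUES ('my_plugin', 'my_plugin_v2.so');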
- Reverting the patch for Bug # 12584302
The patch will be reverted in 5.1 and 5.5.
The patch will not be reverted in 5.6, the change will
be properly documented in 5.6.
- Backporting DBUG_ASSERT not to crash on '0000-01-00'
(already fixed in mysql-trunk (5.6))
Introducing new collations:
utf8_general_mysql500_ci and ucs2_general_mysql500_ci,
to reproduce behaviour of utf8_general_ci and ucs2_general_ci
from mysql-5.1.23 (and earlier).
The collations are added to simplify upgrade from mysql-5.1.23 and earlier.
Note: The patch does not make new server start over old data automatically.
Some manual upgrade procedures are assumed.
Paul: please get in touch with me to discuss upgrade procedures
when documenting this bug.
modified:
include/m_ctype.h
mysql-test/r/ctype_utf8.result
mysql-test/t/ctype_utf8.test
mysys/charset-def.c
strings/ctype-ucs2.c
strings/ctype-utf8.c
of mysql-5.6 code line. The bugs could not be reproduced in the latest release
of mariadb-5.3 as they were fixed either when the code of subquery optimization
was back-ported from mysql-6.0 or later when some other bugs were fixed.
The function subselect_uniquesubquery_engine::copy_ref_key has to take into
account that when EXPLAIN is processed the array of store_key objects created
for any TABLE_REF may contain elements for constant items. These items should
be ignored by the function.
The issue is that xa.test failed sporadically on some platforms.
The reason for the test failure is a race condition in xa.test.
The race condition occurs between a connection that executes the statement
INSERT INTO t2 SELECT FROM t1 and another connection that tries to run the
statements DELETE FROM t1 and COMMIT. If the COMMIT statement was executed
before the statement INSERT INTO t2 SELECT FROM t1 was blocked by the lock
on table t1 (taken as a result of the query from table t1), then the INSERT
statement executed successfully and the following test for deadlock would fail.
This patch fixes this race condition by moving the COMMIT statement to after
the commit of the distributed transaction from the concurrent session.
- The equality substitution code was geared towards processing WHERE/ON clauses.
That is, it assumed that it was doing substitutions on code that
= wasn't attached to any particular join_tab yet
= was going to be fed to make_join_select(), which would take the condition
apart and attach various parts of it to tables inside/outside semi-joins.
- However, somebody added equality substitution for ref access. That is, if
we have a ref access on TBL.key=expr, they would do equality substitution in
'expr'. This possibility wasn't accounted for.
- Fixed the equality substitution code by adding a mode that does equality
substitution under the assumption that the processed expression will be
attached to a certain particular table TBL.
If the expression for a derived table of a query contained a LIMIT
clause the estimate of the number of rows in this derived table
returned by the EXPLAIN command could be badly off since the
optimizer ignored the limit number from the LIMIT clause when
getting the estimate.
The call of the method SELECT_LEX_UNIT::set_limit added in the code
of mysql_derived_optimize() will also be needed in maria-5.5, where
parameters in the LIMIT clause are supported.
The patch for MWL #247 forgot to initialize the TABLE::ext_key_parts and
TABLE::ext_key_flags fields of the temporary tables created by a query. This
could cause crashes for queries whose execution needed the creation of
temporary tables.
Fixing the 5.5 part (the 5.6 part will go in a separate commit soon).
Problem:
Item_direct_ref::get_date() incorrectly calculated its "null_value",
which made UNIX_TIMESTAMP(view_column) incorrectly return NULL
for a NOT NULL view_column.
Fix:
Make Item_direct_ref::get_date() calculate null_value
in a similar way to the other methods
(val_real, val_str, val_int, val_decimal):
copy null_value from the referenced Item.
modified:
mysql-test/r/func_time.result
mysql-test/t/func_time.test
sql/item.cc
Problem: When building the condition for JOIN::outer_ref_cond the optimizer forgot to take into account
that this condition could depend on constant tables as well.
routines.
mysqldump in xml mode did not dump routines, events or
triggers.
This patch fixes this issue by fixing the if-conditions
that disallowed the dump of the above-mentioned objects in
XML mode, and adds the required code to enable the dump
in XML format.
client/mysqldump.c:
BUG#11760384 - 52792: mysqldump in XML mode does not dump
routines.
Fixed some if conditions to allow execution of dump methods
for xml and further added the relevant code at places to produce
the dump in xml format.
mysql-test/r/mysqldump.result:
Added a test case for Bug#11760384.
mysql-test/t/mysqldump.test:
Added a test case for Bug#11760384.
mysql-test/r/information_schema_all_engines.result:
Update result
mysql-test/t/information_schema_all_engines.test:
Added --sorted-results as tables in information_schema are not sorted.
------------------------------------------------------------
revno: 3258
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug12663165
timestamp: Thu 2011-07-14 10:05:12 +0200
message:
Bug#12663165 SP DEAD CODE REMOVAL DOESN'T UNDERSTAND CONTINUE HANDLERS
When stored routines are loaded, a simple optimizer tries to locate
and remove dead code. The problem was that this dead code removal
did not work correctly with CONTINUE handlers.
If a statement triggers a CONTINUE handler, the following statement
will be executed after the handler statement has completed. This
means that the following statement is not dead code even if the
previous statement unconditionally alters control flow. This fact
was lost on the dead code removal routine, which ended up with
removing instructions that could have been executed. This could
then lead to assertions, crashes and generally bad behavior when
the stored routine was executed.
This patch fixes the problem by marking as live code all stored
routine instructions that are in the same scope as a CONTINUE handler.
Test case added to sp.test.
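A sketch of the kind of routine affected (not the actual test case):

  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @handled = 1;
    -- If this RETURN raises an error (e.g. the table is missing), the
    -- CONTINUE handler fires and execution resumes at the next statement,
    RETURN (SELECT 1 FROM no_such_table);
    -- so this statement is live code and must not be removed:
    RETURN 0;
  END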
- Create/use a do_copy_nullable_row_to_notnull() function for ref access, which is used
when copying from a not-NULL field in a table that can be NULL-complemented to a not-NULL field.
Main change is that non-blocking operation is now an option that must be
explicitly enabled with mysql_option(mysql, MYSQL_OPT_NONBLOCK, ...)
before any non-blocking operation can be used.
Also the CLIENT_REMEMBER_OPTIONS flag is now always enabled and thus
effectively ignored (it was not really useful anyway, and this simplifies
things when non-blocking mysql_real_connect() fails).
If init_command was incorrect, we couldn't let users execute
queries, but we couldn't report the issue to the client either
as it does not expect error messages before even sending a
command. Thus, we simply disconnected them without throwing
a clear error.
We now go through the proper sequence once (without executing
any user statements) so we can report back what the problem
is. Only then do we disconnect the user.
As always, root remains unaffected by this as init_command is
(still) not executed for them.
mysql-test/r/init_connect.result:
We now report a proper error if init_command fails.
Expect as much.
mysql-test/t/init_connect.test:
We now report a proper error if init_command fails.
Expect as much.
sql/sql_connect.cc:
If init_command fails, throw an error explaining this to
the user.
Completed the fix for this bug.
Note: in 5.3 the affected 'if' statement in Item_in_subselect::single_value_transformer()
starting with the condition (thd->variables.sql_mode & MODE_ONLY_FULL_GROUP_BY)
should be removed altogether. The change from table.cc is not needed either.
This is because in 5.3
- min/max transformations for subqueries are done at the optimization phase
- evaluation of the expensive subqueries is done at the execution phase.
Added an EXPLAIN EXTENDED to the test case for bug #12329653.
from a heap temptable, which uses pointers to records (that is, byte*
pointers) as rowids.
This meant that for rows with the same sort key value, the order
was determined by memory layout.
The MIN/MAX optimizer code from the function opt_sum_query erroneously
did not take into account conjunctive conditions that did not depend on
any table, yet were not identified as constant items. These could be
items containing rand() or PS/SP parameters. These items are supposed
to be evaluated at the execution phase. That's why if such conditions
can be extracted from the WHERE condition the MIN/MAX optimization is
not applied as currently it is always done at the optimization phase.
(In 5.3 expensive subqueries are also evaluated only at the execution
phase. So, if a constant condition with such subquery can be extracted
from the WHERE clause the MIN/MAX optimization should not be applied
in 5.3.)
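An illustrative query shape (the table is made up):

  -- The conjunct rand() < 0.5 depends on no table, but it is not a
  -- constant item: it must be evaluated at execution time, so the
  -- MIN/MAX optimization, done at the optimization phase, must not apply:
  SELECT MIN(a) FROM t1 WHERE a > 10 AND rand() < 0.5;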
If an IN/ALL/SOME predicate with a constant left part is transformed
into an EXISTS subquery, the resulting subquery should not be considered
uncacheable if the right part of the predicate is not uncacheable.
Backported the function dbug_print_item() from 5.3. The function is used
only for debugging.
fixed several defects in the greedy optimization:
1) The greedy optimizer calculated the 'compare-cost' (CPU-cost)
for iterating over the partial plan result at each level in
the query plan as 'record_count / (double) TIME_FOR_COMPARE'.
This cost was only used locally for 'best' calculation at each
level, and *not* accumulated into the total cost for the query plan.
This fix added the 'CPU-cost' of processing 'current_record_count'
records at each level to 'current_read_time' *before* it is used as
'accumulated cost' argument to recursive
best_extension_by_limited_search() calls. This ensured that the
cost of a huge join-fanout early in the QEP was correctly
reflected in the cost of the final QEP.
To get identical cost for a 'best' optimized query and a
straight_join with the same join order, the same change was also
applied to optimize_straight_join() and get_partial_join_cost().
2) Furthermore, to get equal cost for a 'best' optimized query and a
straight_join, the new code subtracted the same '0.001' in
optimize_straight_join() as had already been done in
best_extension_by_limited_search().
3) When best_extension_by_limited_search() aggregated the 'best' plan, a
plan was 'best' by the check:
'if ((search_depth == 1) || (current_read_time < join->best_read))'
The term '(search_depth == 1)' incorrectly caused a new best plan to be
collected whenever the specified 'search_depth' was reached - even if
this partial query plan was more expensive than what we had already
found.
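In pseudo-formula form (a sketch, not the exact code), the fix in 1) is:

  current_read_time += current_record_count / TIME_FOR_COMPARE

before passing current_read_time as the accumulated-cost argument to the
recursive best_extension_by_limited_search() call.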
--FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when flush-log is specified along with
single-transaction option, and hence can break
consistency.
This is because, COM_REFRESH is executed in order
to flush logs and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction has started and not after it.
Note : This patch triggers following behavioral
changes in mysqldump :
1) After this patch we no longer flush logs before
dumping each database if the --single-transaction
option is given, as was done before (in the
absence of the --lock-all-tables and --master-data
options).
2) Also, after this patch, we start acquiring
FTWRL before flushing logs in cases when only
--single-transaction and --flush-logs are given.
It becomes safe to use mysqldump with these two
options and without --master-data parameter for
backups.
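Roughly, the statement sequence the client now aims for when both
--single-transaction and --flush-logs are given (a simplified sketch, not the
exact client code):

  FLUSH TABLES WITH READ LOCK;   -- FTWRL is acquired first
  FLUSH LOGS;                    -- its implicit commit is harmless here
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  UNLOCK TABLES;                 -- the dump proceeds inside the snapshot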
client/mysqldump.c:
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION
--FLUSH-LOG BREAKS CONSISTENCY
Added logic to make sure that, if flush-log option
is specified, mysql_refresh() is never executed after
the transaction has started.
Added verbose messages for all the executions of
mysql_refresh() in order to track its invocation.
mysql-test/r/mysqldump.result:
Added test case for Bug#12809202.
mysql-test/t/mysqldump.test:
Added test case for Bug#12809202.
unix_timestamp() is used in this part of the code in place of current_time().
Also, since the pb2 machines may be extremely fast, instead of looping through the code,
we use sleep(1.1) so that the variables t0 and t1 have different values.
The time comparison using current_time() stored in an int variable was giving wrong results, as
the int representation of current_time() has been changed in mysql-trunk but not in mysql-5.5.
The time is stored in the format hh:mm:ss as the 'time' datatype. But as an int, it is stored as hhmmss,
though only on the trunk. On mysql-5.5, as an int, it is stored as hh.
Hence, the current_time() function has been changed to the unix_timestamp() function.
The function st_table::mark_virtual_columns_for_write() did not take into
account the fact that for any table the value of st_table::vfield is 0
when there are no virtual columns in the table definition.
If the sorted table belongs to a dependent subquery then the function
create_sort_index() should not clear TABLE::select
for this table after the sort of the table has been performed, because
this member is needed for the second execution of the subquery.
The patch differs from the original MySQL patch as follows:
- All test case differences have been reviewed one by one, and
care has been taken to restore the original plan so that each
test case executes the code path it was designed for.
- A bug was found and fixed in MariaDB 5.3 in
Item_allany_subselect::cleanup().
- ORDER BY is not removed because we are unsure of all effects,
and it would prevent enabling ORDER BY ... LIMIT subqueries.
- ref_pointer_array.m_size is not adjusted because we don't do
array bounds checking, and because it looks risky.
Original comment by Jorgen Loland:
-------------------------------------------------------------
WL#5953 - Optimize away useless subquery clauses
For IN/ALL/ANY/SOME/EXISTS subqueries, the following clauses are
meaningless:
* ORDER BY (since we don't support LIMIT in these subqueries)
* DISTINCT
* GROUP BY if there is no HAVING clause and no aggregate
functions
This WL detects and optimizes away these useless parts of the
query during JOIN::prepare()
- The problem was that the const-table-reading code would try to evaluate MATCH()
before init_ftfuncs() was called.
- Fixed by making the MATCH function "expensive" so that nobody tries to evaluate it
at the optimization phase.
- Correctly handle the plan refinement stage for LooseScan plans: run create_ref_for_key() if the LooseScan
plan includes a ref access, and if we don't have any fixed key components, switch to a full index scan.
Problem description:
When a view is created using the FORMAT function, and FORMAT uses the locale
option, the definition of the view saved into the server doesn't contain that locale information.
Ex:
create table test2 (bb decimal (10,2));
insert into test2 values (10.32),(10009.2),(12345678.21);
create view test3 as select format(bb,1,'sk_SK') as cc from test2;
select * from test3;
+--------------+
| cc |
+--------------+
| 10.3 |
| 10,009.2 |
| 12,345,678.2 |
+--------------+
3 rows in set (0.02 sec)
show create view test3
View: test3
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1) AS `cc`
from `test2`
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.02 sec)
Problem Analysis:
The function Item_func_format::print(), which prints the query string used to create
the view, does not print the third argument (i.e. the locale information). Hence the
view is created without the locale information.
Problem Solution:
If the argument count is more than 2, we now print the third argument onto the query string.
Files changed:
sql/item_strfunc.cc
Function call changes: Item_func_format::print()
mysql-test/t/select.test
Added test case to test the bug
mysql-test/r/select.result
Result of the test case appended here
- Let JTBM optimization code handle the case where the subquery is degenerate and doesn't have a
join query plan. Regular materialization would fall back to IN->EXISTS for such cases. Semi-join
materialization does not have that option; instead we introduce and use "constant JTBM join tabs".
- Do a "more thorough" cleanup of SJ-Materialization join tab in JOIN_TAB::cleanup. The bug
was due to the fact that JOIN_TAB::cleanup() may be called multiple times for the same tab
if the join has grouping.
- Changed storage to be 2 bytes instead of sizeof(size_t) (simple optimization)
- Fixed a bug when using query_cache_strip_comments and a query that started with '('
- Fixed DBUG_PRINT() calls that used wrong (not initialized) variables.
mysql-test/mysql-test-run.pl:
Added some space to make output more readable.
mysql-test/r/query_cache.result:
Updated test results
mysql-test/t/query_cache.test:
Added test with query_cache_strip_comments
sql/mysql_priv.h:
Added QUERY_CACHE_DB_LENGTH_SIZE
sql/sql_cache.cc:
Fixed a bug when using query_cache_strip_comments and a query that started with '('.
Store the db length in 2 characters instead of size_t.
Get the db length from the correct position (earlier we had an error when the query started with ' ').
Fixed DBUG_PRINT() calls that used wrong (not initialized) variables.
* rename all debugging-related command-line options
and variables to start with "debug-", and make them all
OFF by default.
* replace "MySQL" with "MariaDB" in error messages
* "Cast ... converted ... integer to it's ... complement"
is now a note, not a warning
* @@query_cache_strip_comments now has a session scope,
not global.
SMALL KEY CACHE
The server crashed on division by zero because the key cache was not
initialized and the block length was 0, which was used in a division.
The fix was to not allow CACHE INDEX if the key cache was not initialized,
and thus never try LOAD INDEX INTO CACHE for an uninitialized key cache.
Also added some Windows files/directories to .bzrignore.
The range optimizer incorrectly chose a loose scan for GROUP BY
when there is a correlated WHERE condition. This range access
method cannot be executed for correlated conditions, not even with
"range checked for each record", because in general the range access
method can change for each outer record. Loose scan destructively
changes the query plan and removes the GROUP operation, which will
result in wrong query plans if another range access is chosen
dynamically.
If the duplicate elimination strategy is used for a semi-join and potentially
one of the block-based join algorithms can be employed to join the inner
tables of the semi-join then sorting of the head (first non-constant) table
for a query with ORDER BY / GROUP BY cannot be used.
The function setup_sj_materialization_part1() forgot to set the value
of TABLE::map for any materialized IN subquery.
This could lead to wrong results for queries with subqueries that were
converted to queries with semijoins.
Analysis:
The class member QUICK_GROUP_MIN_MAX_SELECT::seen_first_key
was not reset between subquery re-executions. Thus each
subsequent execution continued from the group that was
reached by the previous subquery execution. As a result
loose scan reached end of file much earlier, and returned
empty result where it shouldn't.
Solution:
Reset seen_first_key before each re-execution of the
loose scan.
- if we're considering FirstMatch access with one inner table, and
@@optimizer_switch has semijoin_with_cache flag, calculate costs
as if we used join cache (because we will be able to do so)
The execution plan cannot use sorting on the first table from the
sequence of the joined tables if it plans to employ the block-based
hash join algorithm.
- Part 1 of the fix: for semi-join merged subqueries, calling child_join->optimize() until we're done with all
PS-lifetime optimizations in the parent.
- Make create_tmp_table() set KEY_PART_INFO attributes for the keys it creates.
This wasn't needed before but is needed now, when temp. tables that are
results of SJ-Materialization are being used for joins.
This particular bug depended on HA_VAR_LENGTH_PART being set;
the patch also added code to set HA_BLOB_PART and HA_NULL_PART when appropriate.
KEYUSE elements for a possible hash join key are not sorted by the field
numbers of the second table T of the hash join operation. Besides,
some of these KEYUSE elements cannot be used to build any key, as their
key expressions depend on the tables that are planned to be accessed
after the table T.
The code before the patch did not take this into account and, as a result,
execution of a query employing the block-based hash join algorithm could
cause a crash or return a wrong result set.
The predicate is re-written from
((`test`.`g1`.`a` = geometryfromtext('')) or ...
to
((`test`.`g1`.`a` = <cache>(geometryfromtext(''))) or ...
The range optimizer calls save_in_field_no_warnings, in order to fetch keys.
save_in_field_no_warnings returns 0 because of the cache wrapper,
and get_mm_leaf() proceeded to call Field_blob::get_key_image()
which accesses un-initialized data.
mysql-test/r/gis.result:
New test case.
mysql-test/t/gis.test:
New test case.
sql/item.cc:
If we have cached a null_value, then verify that the Field can accept it.
in EXPLAIN as select_type==MATERIALIZED.
Before, we had select_type==SUBQUERY and it was difficult to tell materialized
subqueries from uncorrelated scalar-context subqueries.
If it has been decided that the first-match strategy is to be used to join table T
from a semi-join nest, while no buffer can be employed to join this table,
then no join buffer can be used to join any table in the join sequence between
the first one belonging to the semi-join nest and table T.
2. dialog plugin now always returns mysql->password if non-empty and the first question is of password type
3. split get_tty_password into get_tty_password_buff and strdup.
4. dialog plugin now uses get_tty_password by default
5. dialog.test
6. moved small tests of individual plugins into a dedicated suite
After sending a packet that is too large, the client can get either an error packet with
ER_NET_PACKET_TOO_LARGE or a socket error. Both cases are valid, since the
server does not ensure the reply was fully read by the client before shutting down and closing
the socket.
The tables from the same semi-join or outer join nest cannot use
join buffers if in the join sequence of the query execution plan
they are separated by a table that is planned to be joined without
usage of a join buffer.
client/mysqltest.cc:
Free mutex after usage (fixes valgrind warnings in embedded server)
mysql-test/include/gis_keys.inc:
Fixed failure in innodb.gis_test
mysql-test/r/gis.result:
Updated result
mysql-test/suite/innodb/r/innodb_gis.result:
Updated results
mysql-test/suite/innodb/t/innodb_bug38231.test:
Added handling of timeouts (happened on some servers in buildbot)
mysql-test/suite/innodb_plugin/r/innodb_gis.result:
Updated results
mysql-test/suite/innodb_plugin/t/innodb.test:
Use error names instead of numbers
mysql-test/suite/innodb_plugin/t/innodb_misc1.test:
This test requires utf8
mysql-test/suite/innodb_plugin/t/innodb_mysql.test:
This test requires Xtradb
sql/sql_base.cc:
Don't print table names for placeholders.
sql/sql_show.cc:
Temporary fix:
Save and restore db and table_name in mysqld_show_create (to get rid of a valgrind warning).
A better solution that needs to be investigated is to not change these fields in mysql_derived_prepare().
sql/sql_view.cc:
Fixed valgrind warning
storage/xtradb/handler/ha_innodb.cc:
Don't access THD directly
The cause of the wrong result was that Item_ref_null_helper::get_date()
didn't use a method of the *_result() family, and fetched the data
for the field from the current row instead of result_field. Changed it to
use the correct *_result() method, like all other similar methods
of Item_ref_null_helper.
Analysis:
lp:894397 was a consequence of a prior incorrect fix of lp:833777
which didn't take into account that even when all tables are
constant there may be correlated conditions, and the where clause
is not equivalent to the constant conditions.
Solution:
When there are constant tables only, evaluate only the conditions
that reference outer fields, because the constant conditions are
already checked, and the WHERE clause doesn't have conditions other
than constant ones and outer-referencing ones. The fix for
lp:894397 also fixes lp:833777.
The problem was that when we have a single-row subquery with no rows,
the Item_cache(es) which represent the result row were not null and, when
requested via element_index(), returned a random value.
The fix is to set all Item_cache(es) to NULL before executing the
query (the reset() method), which guarantees a NULL value for the whole query
or for its elements requested in any way if no rows were found.
A set_null() method was added to Item_cache to guarantee a correct NULL
value in case of resetting the cache.
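An illustrative case (the table is made up):

  -- If t1 is empty, the row subquery produces no rows; every element of
  -- the result row must then read as NULL, so the whole comparison is
  -- NULL rather than a leftover value from the caches:
  SELECT ROW(1, 2) = (SELECT a, b FROM t1 WHERE 1 = 0);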
- Make EXPLAIN display "Start temporary" at the start of the fanout (it used to display
at the first table whose rowid gets into the temp. table, which is not that useful for
the user)
- Updated test results (all checked)
mysql-test/mysql-test-run.pl:
Rename MYSQLD -> MYSQLD_SIMPLE_CMD to avoid conflict with new MYSQLD variable from MySQL 5.1
mysql-test/r/innodb_file_format.result:
Remove old duplicated test
mysql-test/suite/pbxt/r/endspace.result:
Update test to last version
mysql-test/suite/pbxt/r/heap.result:
Removed heap test (not part of pbxt)
mysql-test/suite/pbxt/r/select_safe.result:
Updated results after error message change
mysql-test/suite/pbxt/r/view_grant.result:
Removed view test (not part of pbxt)
mysql-test/suite/pbxt/t/endspace.test:
Update test to last version
mysql-test/suite/pbxt/t/heap.test:
Removed heap test (not part of pbxt)
mysql-test/suite/pbxt/t/view_grant.test:
Removed view test (not part of pbxt)
mysql-test/t/innodb_file_format.test:
Remove old duplicated test
mysql-test/t/mysqld_option_err.test:
Use renamed variable
sql/my_decimal.h:
Fixed wrong define
storage/maria/ma_loghandler.c:
Fixed compiler warning
Stop attempts to apply IN/ALL/ANY optimizations to the so-called "fake_select"
(used for ordering and filtering the results of a union) in union subquery execution.
Analysis:
The bug is a result of an incomplete fix for bug lp:869036.
That fix didn't take into account that there may be a case
when there are no NULLs in the materialized subquery, however
all columns without NULLs may not be grouped in the only
non-null index. This is the case when the left subquery expression
has nullable columns.
Solution:
The patch handles two missing sub-cases of the case when there is
no value (non-NULL) match for any outer expression, and there are
both NULLs and non-NULL values in the outer reference.
a) If the materialized subquery contains no NULLs there cannot be a
partial match, because there are no NULLs in those columns where
the outer reference has no NULLs.
b) If the materialized subquery contains NULLs, but there exists a
column, such that its corresponding outer expression has no NULL,
and this column also has no NULL. Then there cannot be a partial
match either.
AND HANG IN SHOW TABLE STATUS.
ISSUE: Table corruption due to concurrent queries.
Different threads running insert and check
queries lead to table corruption. Without proper locking,
rows are inserted in the middle of the check query.
SOLUTION: In the check query, the mutex lock is acquired
for a longer time to handle concurrent
insert and check queries.
NOTE: Additionally we backported the fix for CHECKSUM
issue(bug#11758979).
- Break down POSITION/advance_sj_state() into four classes
representing potential semi-join strategies.
- Treat all strategies uniformly (before, DuplicateWeedout
was special as it was the catch-all strategy. Now, we're
still relying on it to be the catch-all, but are able to
function, e.g. with firstmatch=on,duplicate_weedout=off).
- Update test results (checked)
This bug in the function Loose_scan_opt::check_ref_access_part1 could lead
to choosing an invalid execution plan employing a loose scan access to a
semi-join table even in the cases when such access could not be used at all.
This could result in wrong answers for some queries with IN subqueries.
Analysis:
The optimizer distinguishes two kinds of 'constant' conditions:
expensive ones and non-expensive ones. The non-expensive conditions
are evaluated inside make_join_select(), and if one of them is false,
the optimizer already detects an empty query result.
In order to avoid arbitrarily expensive optimization, the evaluation of
expensive constant conditions is delayed until execution. These conditions
are attached to JOIN::exec_const_cond and evaluated in the beginning of
JOIN::exec. The relevant execution logic is:
JOIN::exec()
{
  if (!join->exec_const_cond->val_int())
  {
    produce an empty result;
    stop execution;
  }
  continue execution;
  execute the original WHERE clause (that contains exec_const_cond)
  ...
}
As a result, when an expensive constant condition is
TRUE, it is evaluated twice - once through
JOIN::exec_const_cond, and once through JOIN::cond.
When the expensive constant condition is a subquery
predicate, the subquery is evaluated twice. If we have
many levels of subqueries, this logic results in a chain
of recursive subquery executions that walk a perfect
binary tree. The result is that for subqueries of depth N,
JOIN::exec is executed O(2^N) times.
Solution:
Notice that the second execution of the constant condition
happens inside do_select(), in the branch:
if (join->table_count == join->const_tables) { ... }
In this case exec_const_cond is equivalent to the whole WHERE
clause, therefore the WHERE clause has already been checked at
the beginning of JOIN::exec and has been found to be true.
The bug is addressed by not evaluating the WHERE clause if
exec_const_cond was present and evaluated to TRUE.
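A minimal sketch of the resulting control flow; has_exec_const_cond and
const_cond_result are invented stand-ins, not the actual JOIN members:
  // Stand-in for the relevant JOIN state.
  struct Join
  {
    bool has_exec_const_cond;   // an expensive constant condition exists
    bool const_cond_result;     // ... and JOIN::exec saw it evaluate to TRUE

    // Placeholder: this is what would re-run the expensive subquery.
    bool evaluate_where() { return true; }

    bool where_is_satisfied()
    {
      if (has_exec_const_cond && const_cond_result)
        return true;            // already checked once; skip re-evaluation
      return evaluate_where();
    }
  };
With this guard, each JOIN::exec evaluates the expensive condition once,
so the O(2^N) chain of recursive executions collapses to O(N).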
A patch for alter_table-big.test has been committed earlier.
This is a patch for create-big.test:
The test used to time out after 900 seconds.
It relied on debug sleeps that are no longer present in the
code. Since the sleeps are long gone, fixing the problem didn't
involve just updating the result file or using the macro
"show_binlog_events2.inc" instead of the "show binlog events"
statement. The test needed to be rewritten using debug sync
points, and the result then needed to be updated.
So, the sleeps have been replaced by debug_sync points, and the test
execution time has been reduced significantly.
A non-first execution of a prepared statement missed a call of the
TABLE_LIST::process_index_hints() method in the code of the function
setup_tables().
In some scenarios this could lead to the choice of a quite inefficient
execution plan for the base query of the prepared statement.
This bug in the function setup_semijoin_dups_elimination() could
lead to invalid choice of the sequence of tables for which semi-join
duplicate elimination was applied.
Due to this bug the function SEL_IMERGE::or_sel_tree_with_checks()
could build an inconsistent merge tree if one of the SEL_TREEs in the
resulting index merge happened to contain a full key range.
This could trigger an assertion failure.
The function key_and() erroneously called SEL_ARG::increment_use_count()
when SEL_ARG::incr_refs() should have been called. This could lead to
wrong values of use_count for some SEL_ARG trees.
Apart from the fix, the patch also adds a few more unrelated test
cases for partial matching, and fixes a few typos.
Analysis:
This bug uncovered that partial matching via rowid intersection
didn't handle the case when:
- the left IN argument has some NULLs,
- there are no non-null value matches and there is no non-null
column, and
- the subquery columns that are not covered by the NULLs in
the left IN argument contain at least one row such that it
has NULL values in all columns where the left IN operand has
no NULLs.
In this case there is a partial match.
In addition, the analysis of the related code uncovered incorrect
handling of a few other related cases.
Solution:
The solution for the bug is to check whether there exists a row with
NULLs in all columns other than the ones having a NULL in the
left IN operand.
The check is implemented by testing whether the bitmaps that
store NULL information in class Ordered_key have a non-empty
intersection for the relevant columns.
The intersection itself is implemented via the function
bitmap_exists_intersection() in my_bitmap.c.
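As an illustration of the bitmap check, a simplified C++ sketch follows;
it is a stand-in for, not a copy of, bitmap_exists_intersection():
  #include <cstdint>
  #include <cstddef>

  // Test whether several NULL bitmaps (one per Ordered_key) share at
  // least one common set bit within [start_bit, end_bit].  Each bitmap
  // is assumed to have enough 32-bit words to cover end_bit.
  bool bitmaps_have_common_bit(const uint32_t *const *maps, size_t n_maps,
                               size_t start_bit, size_t end_bit)
  {
    for (size_t bit= start_bit; bit <= end_bit; bit++)
    {
      bool set_in_all= true;
      for (size_t m= 0; m < n_maps && set_in_all; m++)
        if (!(maps[m][bit / 32] & (1u << (bit % 32))))
          set_in_all= false;
      if (set_in_all)
        return true;            // a row that is NULL in every relevant key
    }
    return false;
  }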
The function setup_semijoin_dups_elimination erroneously assumed
that if join_cache_level is set to 3 or 4 then the type of the
access to a table cannot be JT_REF or JT_EQ_REF. This could lead
to wrong query result sets.
If the optimizer switch 'semijoin_with_cache' is set to 'off' then
join cache cannot be used to join inner tables of a semijoin.
Also fixed a bug in the function check_join_cache_usage() that led
to wrong output of the EXPLAIN command for some test cases.
BY CACHING OR REDUCING CREATEEVENT CALLS".
5.5 versions of MySQL server performed worse than 5.1 versions
under single-connection workload in autocommit mode on Windows XP.
Part of this slowdown can be attributed to overhead associated
with constant creation/destruction of MDL_lock objects in the MDL
subsystem. The problem is that creation/destruction of these
objects causes creation and destruction of associated
synchronization primitives, which are expensive on Windows XP.
This patch alleviates the problem by introducing a cache
of unused MDL_object_lock objects. Instead of destroying such
objects, we put them into the cache and then reuse them with a new
key when creation of a new object is requested.
To limit the size of this cache, a new --metadata-locks-cache-size
start-up parameter was introduced.
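The following simplified C++ sketch shows the caching scheme; all types
and members here are illustrative stand-ins for the MDL classes:
  #include <list>
  #include <cstddef>

  struct Object_lock
  {
    unsigned long m_version= 0; // bumped when the lock is recycled, so
                                // stale references can detect the reuse
    // ... synchronization primitives, expensive to create on Windows XP
  };

  struct Lock_map
  {
    std::list<Object_lock*> unused;  // cache of parked, reusable objects
    std::size_t cache_size_limit;    // from --metadata-locks-cache-size

    void release(Object_lock *lock)
    {
      if (unused.size() < cache_size_limit)
      {
        lock->m_version++;           // invalidate outstanding references
        unused.push_back(lock);      // park instead of destroying
      }
      else
        delete lock;                 // cache full: destroy as before
    }

    Object_lock *acquire()           // called when a new key needs a lock
    {
      if (unused.empty())
        return new Object_lock();
      Object_lock *lock= unused.front();
      unused.pop_front();            // reuse a parked instance
      return lock;
    }
  };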
mysql-test/r/mysqld--help-notwin.result:
Updated test after adding --metadata-locks-cache-size
parameter.
mysql-test/r/mysqld--help-win.result:
Updated test after adding --metadata-locks-cache-size
parameter.
mysql-test/suite/sys_vars/r/metadata_locks_cache_size_basic.result:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
mysql-test/suite/sys_vars/t/metadata_locks_cache_size_basic-master.opt:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
mysql-test/suite/sys_vars/t/metadata_locks_cache_size_basic.test:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
sql/mdl.cc:
Introduced caching of unused MDL_object_lock objects, in order to
avoid costs associated with constant creation and destruction of
such objects in single-connection workloads run in autocommit mode.
Such costs can be pretty high on systems where creation and
destruction of synchronization primitives require a system call
(e.g. Windows XP).
To implement this cache, a list of unused MDL_object_lock instances
was added to MDL_map object. Instead of being destroyed
MDL_object_lock instances are put into this list and re-used later
when creation of a new instance is required. Also added
MDL_lock::m_version counter to allow threads having outstanding
references to an MDL_object_lock instance to notice that it has
been moved to the unused objects list.
Added a global variable for a start-up parameter that limits
the size of the unused objects list.
Note that we don't cache MDL_scoped_lock objects since they
are supposed to be created only during execution of DDL
statements and therefore should not affect performance much.
sql/mdl.h:
Added a global variable for start-up parameter that limits the
size of the unused MDL_object_lock objects list and constant
for its default value.
sql/sql_plist.h:
Added I_P_List<>::pop_front() function.
sql/sys_vars.cc:
Introduced --metadata-locks-cache-size start-up parameter
for specifying size of the cache of unused MDL_object_lock
objects.
OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with the --skip-write-binlog
option. (Same for --write-binlog.) Also, with
this option, the mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade makes use of the mysql client tool in
order to run the upgrade scripts; while doing so it
passes some of the command-line options (used to
start mysql_upgrade) directly to the mysql client.
The reason behind this bug is that some options,
like skip-write-binlog and upgrade-system-tables,
were being passed to the mysql tool along with the
other options, and hence mysql execution failed due
to the presence of these invalid options.
Fixed this issue by filtering out the above-mentioned
options from the list of options that will be passed to
the mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, this option is used
explicitly while running mysqlcheck. (not part of this patch,
already there)
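A simplified sketch of the filtering step; the helper names are invented,
and only the option names come from the description above:
  #include <cstring>

  // Return true for options that must not be forwarded to the child
  // mysql / mysqlcheck processes.
  static bool is_upgrade_only_option(const char *opt)
  {
    return strcmp(opt, "--upgrade-system-tables") == 0 ||
           strcmp(opt, "--write-binlog") == 0 ||
           strcmp(opt, "--skip-write-binlog") == 0;
  }

  // Copy argv into out, skipping upgrade-only options; returns the new
  // argument count.
  static int filter_client_options(int argc, char **argv, char **out)
  {
    int n= 0;
    for (int i= 0; i < argc; i++)
      if (!is_upgrade_only_option(argv[i]))
        out[n++]= argv[i];
    return n;
  }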
Checking the contents of the general log after the upgrade
is not doable via an mtr test, so a manual test was performed.
Added a test to verify the creation of mysql_upgrade_info.
client/mysql_upgrade.c:
Bug#11827359 60223: MYSQL_UPGRADE PROBLEM WITH
OPTION SKIP-WRITE-BINLOG
With this patch, --upgrade-system-tables and
--write-binlog options will not be added to the
list of options, used to start mysql and mysqlcheck
tools.
mysql-test/r/mysql_upgrade.result:
Added a testcase for Bug#11827359.
mysql-test/t/mysql_upgrade.test:
Added a testcase for Bug#11827359.
mysql-test/r/select.result:
Test case for lp:780425
mysql-test/r/select_pkeycache.result:
lp:780425
mysql-test/t/select.test:
lp:780425
sql/sql_select.cc:
Added DBUG_ASSERT to prove some logic and to be able to simplify the code later
Set implicit_grouping if we delete a GROUP BY to signal do_select() that a grouping needs to be done.
A bug in the code of the function key_or could lead to a situation
where performing an OR operation for one index changed the result of
the operation for another index. This bug is fixed with this patch.
Also corrected the specification and the code of the function
or_sel_tree_with_checks.
In MariaDB, when running in ONLY_FULL_GROUP_BY mode,
the server produced an incorrect error message saying there
is an aggregate function without GROUP BY, for artificially
created MIN/MAX functions during subquery MIN/MAX optimization.
The fix introduces a way to distinguish between artificially
created MIN/MAX functions that are the result of a rewrite, and normal
ones present in the query. The test for the ONLY_FULL_GROUP_BY violation
now additionally tests whether a MIN/MAX function was part of a MIN/MAX
subquery rewrite.
In order to be able to distinguish these MIN/MAX functions, the
patch introduces an additional flag in Item_in_subselect::in_strategy -
SUBS_STRATEGY_CHOSEN. This flag is set when the optimizer makes its
final choice of a subquery strategy. In order to make the choice
consistent, access to Item_in_subselect::in_strategy is provided
via new class methods.
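A minimal sketch of the accessor idea; the class name, the flag values,
and the method names here are simplified placeholders:
  // in_strategy is no longer modified directly: the optimizer commits to
  // its final choice through a method that also raises
  // SUBS_STRATEGY_CHOSEN, and checks such as the ONLY_FULL_GROUP_BY test
  // can then ask whether a MIN/MAX item came from the subquery rewrite.
  class In_subselect
  {
    unsigned int in_strategy= 0;

  public:
    static const unsigned int SUBS_MAXMIN_INJECTED= 1u << 0;
    static const unsigned int SUBS_STRATEGY_CHOSEN= 1u << 7;

    void set_strategy(unsigned int strategy)
    { in_strategy= strategy | SUBS_STRATEGY_CHOSEN; }

    bool test_strategy(unsigned int strategy) const
    { return (in_strategy & strategy) != 0; }

    bool is_strategy_chosen() const      // has the optimizer committed?
    { return (in_strategy & SUBS_STRATEGY_CHOSEN) != 0; }
  };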
******
Fix MySQL BUG#12329653.
The function add_ref_to_table_cond missed updating the value of
join_tab->pre_idx_push_select_cond after having updated the value
of join_tab->select->pre_idx_push_select_cond.
alter_table-big.test was failing due to the use of the RAND() function,
which is no longer replication-safe.
This has been modified to use static values.
Also, 'sleep' has been replaced by 'debug_sync', and the execution time of the
test has been reduced significantly.
This test is now taken out of the disabled.def file and is being enabled.
A patch for this bug has already been pushed; a minor change is made here.
The database to be used after re-enabling the disabled code is 'TEST',
but 'MYSQL' was being used instead.
This is the minor change being made here.
Don't do this for echo, instead:
1) Enable replacements also for assignment from backquoted SQL
2) Allow replace_regex to take a variable for the *entire* argument list
With this, the test can be amended, but only in its version in trunk
The bug happened because in some cases the function JOIN::exec
did not save the value of TABLE::pre_idx_push_select_cond in
TABLE::select->pre_idx_push_select_cond for the sort table.
Noticed and fixed a bug in the function make_cond_remainder
that builds the remainder condition after extraction of an index
pushdown condition from the where condition. The code
erroneously assumed that the function make_cond_for_table left
the value of ICP_COND_USES_INDEX_ONLY in sub-condition markers.
Adjusted many result files from the regression test suite
after this fix.
The call of the virtual function cancel_pushed_idx_cond in the code of
the function test_if_skip_sort_order was misplaced when backporting the
fix for bug 58816.
Analysis:
Equality propagation propagated the constant '7' into
args[0] of the Item_in_optimizer that stands for the
"< ANY" predicate. At the same time, the min/max subquery
rewrite swapped the order of the left and right operands
of the "<" predicate, but used Item_in_subselect::left_expr.
As a result, when the < ANY predicate was executed early in the
execution phase as a constant condition, instead of a constant
right (swapped) argument of the < predicate there was a field
(t3.a). This field had no data, since the whole predicate is
considered constant and is evaluated before any tables are
read. Having junk in the field's row buffer produced a wrong result.
Solution:
Fix create_swap to pick the correct left argument for the
Item_in_optimizer.
The problem was that merged views have their own nest_level numbering,
so when we compare nest levels we should take into consideration their basis
(i.e. level 0); if it differs, the nest levels are not comparable.
- Make eliminate_tables_for_list() take into account that it is not possible
to eliminate a table if it is used in the upper-side ON expressions. Example:
xxx JOIN (t1 LEFT JOIN t2 ON cond) ON func(t2.columns)
Here it would eliminate t2, which is not possible because of the use of t2.columns.
- The bug was caused by the following scenario:
= a quick select is created with get_quick_select_for_ref. The quick
select refers to a temporary (derived) table. It saves table->file, which
refers to a ha_heap object.
= When the temp table is populated, ha_heap reaches its maximum size and is
converted to a ha_myisam. However, quick->file remains pointing to where
the ha_heap was.
= An attempt to use the quick select causes a crash.
- Fixed by introducing QUICK_SELECT_I::replace_handler(), sketched below.
Note that it will not work for index_merge quick selects, which is fine,
because these quick selects are never created for derived tables.
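The sketch mentioned above, with heavily simplified stand-in types:
  // The quick select caches a handler pointer at creation time; when the
  // temporary HEAP table is converted to MyISAM, the cached pointer must
  // be redirected instead of left dangling.
  struct handler { /* storage-engine access methods */ };

  struct Quick_select
  {
    handler *file;               // saved at creation time (was ha_heap)

    // Called after the temp table has been converted to another engine.
    void replace_handler(handler *new_file) { file= new_file; }
  };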
The function Item_direct_view_ref::fix_fields erroneously did not correct
the value of the maybe_null flag when the view for which the item was
being fixed happened to be an inner table of an outer join.
Honor unique/not unique when creating keys for internal temporary tables.
Added new variables to be used to limit how keys are created for internal temporary tables.
include/maria.h:
Added maria_max_key_length() and maria_max_key_segments()
include/myisam.h:
Added myisam_max_key_length() and myisam_max_key_segments()
mysql-test/r/mysql.result:
Drop all used tables
mysql-test/r/subselect4.result:
Added test case for lp:879939
mysql-test/t/mysql.test:
Drop all used tables
mysql-test/t/subselect4.test:
Added test case for lp:879939
sql/mysql_priv.h:
Added internal_tmp_table_max_key_length and internal_tmp_table_max_key_segments to be used to limit how keys for derived tables are created.
sql/mysqld.cc:
Added internal_tmp_table_max_key_length and internal_tmp_table_max_key_segments to be used to limit how keys for derived tables are created.
sql/share/errmsg.txt:
Added new error message for internal errors
sql/sql_select.cc:
Give an error if we try to create a wrong key (this error should never happen)
Honor unique/not unique when creating keys for internal temporary tables.
storage/maria/ha_maria.cc:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
(Not having this caused an assert in the included test)
storage/maria/ha_maria.h:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
storage/maria/ma_check.c:
Fixed a bug in duplicate-key error printing (the row position is now printed correctly)
storage/maria/ma_create.c:
maria_max_key_length() -> _ma_max_key_length()
storage/maria/ma_info.c:
Added extern function maria_max_key_length() to calculate the max key length based on current block size.
storage/maria/ma_open.c:
maria_max_key_length() -> _ma_max_key_length()
storage/maria/maria_def.h:
maria_max_key_length() -> _ma_max_key_length()
storage/myisam/ha_myisam.cc:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
(Not having this caused an assert in the included test)
storage/myisam/ha_myisam.h:
Added change_table_ptr() to ensure that external_ref always points to the correct table.
The function SELECT_LEX::update_used_tables must first clean up all
bitmaps that are to be recalculated for all tables that require it,
and only after that walk through the conditions attached to the
tables to update these bitmaps.
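A minimal two-phase sketch of the required order, with invented types:
  #include <cstdint>
  #include <vector>

  struct Table_ctx { uint64_t used_fields_bitmap; };
  struct Cond { void update_used_tables(std::vector<Table_ctx>&) {} };

  // Interleaving the two phases would let a condition on an early table
  // set bits that a later "clear" step wipes out again.
  void update_used_tables(std::vector<Table_ctx> &tables,
                          std::vector<Cond> &conds)
  {
    for (Table_ctx &t : tables)  // phase 1: reset every bitmap first
      t.used_fields_bitmap= 0;
    for (Cond &c : conds)        // phase 2: recompute from the conditions
      c.update_used_tables(tables);
  }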