WITH --SKIP-INNODB
Description
-----------
If the server is started with skip-innodb or InnoDB otherwise fails to
start, any one of these queries will crash the server:
In 5.5:
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
In 5.6+, the following queries will also crash the server:
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_COLUMNS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FIELDS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;
FIX
----
When InnoDB is not active we must prevent it from processing
these tables, so we return a warning saying that InnoDB is not
active.
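A minimal standalone sketch of the guard, not the actual server code (innodb_is_active, push_warning() and fill_innodb_is_table() here are illustrative stand-ins): when the engine is not active, the INFORMATION_SCHEMA fill function warns and returns an empty result instead of touching uninitialized InnoDB structures.

#include <cstdio>

// Illustrative stand-ins for the server-side objects involved.
struct THD {};
static bool innodb_is_active = false;   // false when started with --skip-innodb

static void push_warning(THD *, const char *msg)
{
    std::fprintf(stderr, "Warning: %s\n", msg);
}

// Sketch of an INFORMATION_SCHEMA fill function guarded against an inactive
// storage engine: return early with a warning instead of touching
// InnoDB-internal data structures that were never initialized.
static int fill_innodb_is_table(THD *thd)
{
    if (!innodb_is_active)
    {
        push_warning(thd, "InnoDB: SELECTing from INFORMATION_SCHEMA.INNODB_* "
                          "but the InnoDB storage engine is not active");
        return 0;               // empty result set, no crash
    }
    /* ... normal path: walk buffer pool / data dictionary ... */
    return 0;
}

int main()
{
    THD thd;
    return fill_innodb_is_table(&thd);
}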
Approved by marko (http://rb.no.oracle.com/rb/r/1891)
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for Bug#11751794.
While checking for the keypart covering the non-grouping attribute, we are not
checking whether the root node of the SEL_ARG* tree for the index has any
cvalue or not.
sql/opt_range.cc:
Check whether the keypart_tree has any range tree.
In a previous fix, user variables with zero-length names were incorrectly
treated as event corruption, even though they are allowed by the server.
Fix this wrong assumption by once again allowing user variables with
zero-length names in the binary log.
With innodb_change_buffering enabled, InnoDB buffers
all modifications to secondary index leaf pages when
the leaf pages are not in the buffer pool.
Crash InnoDB while an IBUF_OP_DELETE is being applied.
Restart, and note that the same record can be applied
again, which may lead to a crash.
Mark the change buffer record processed, so that it will
not be merged again in case the server crashes between
the following mtr_commit() and the subsequent mtr_commit()
of deleting the change buffer record.
Testcase: No testcase, because it is difficult to get the
timing right with the two asynchronous tasks, purge and change
buffering.
Approved by Marko. rb#1893
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA INFILE query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists are not properly guarded with backticks
while the server is writing to the binlog. Hence user variable names
like a` cannot be used in this context.
Fix: Properly quote these variables while
writing to the binlog.
mysql-test/r/func_compress.result:
changing result file
mysql-test/r/variables.result:
changing result file
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
changing result file
sql/item_func.cc:
Quote the user variable items
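As an illustration of the quoting rule applied by the fix, a standalone sketch (quote_identifier() is a hypothetical helper, not the server's own quoting function): the name is wrapped in backticks and any embedded backtick is doubled, so a variable named a` can be written back safely.

#include <iostream>
#include <string>

// Hypothetical helper mirroring the quoting rule: wrap the name in backticks
// and double any backtick inside it, so a variable named  a`  is written as
// @`a```  in the binlogged statement.
static std::string quote_identifier(const std::string &name)
{
    std::string out = "`";
    for (char c : name)
    {
        if (c == '`')
            out += "``";        // escape an embedded backtick by doubling it
        else
            out += c;
    }
    out += "`";
    return out;
}

int main()
{
    std::cout << "SET @" << quote_identifier("a`") << " := ...;\n";
    // prints: SET @`a``` := ...;
    return 0;
}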
Due to not resetting a member (last_added) of the
Deferred events class inside a cleanup function
(Deferred_log_events::rewind), there is a memory
leak on filtered slaves.
Fix:
Resetting last_added to NULL in rewind() function.
sql/rpl_utility.cc:
Resetting last_added to NULL to avoid memory leak
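A reduced standalone sketch of the idea (the classes here are simplified stand-ins for the real Deferred_log_events and Log_event): rewind() must clear last_added together with the array, so no stale reference to the last event survives the cleanup.

#include <vector>

struct Log_event {};                       // stand-in for the real event class

// Simplified stand-in for Deferred_log_events.
class Deferred_log_events
{
    std::vector<Log_event *> m_array;
    Log_event *last_added;                 // member that leaked before the fix
public:
    Deferred_log_events() : last_added(nullptr) {}

    void add(Log_event *ev)
    {
        m_array.push_back(ev);
        last_added = ev;
    }

    void rewind()
    {
        for (Log_event *ev : m_array)
            delete ev;
        m_array.clear();
        last_added = nullptr;              // the fix: reset alongside the array
    }
};

int main()
{
    Deferred_log_events deferred;
    deferred.add(new Log_event);
    deferred.rewind();                     // no stale last_added afterwards
    return 0;
}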
CERTAIN LEVEL
Problem description: mysqld crashes when we update the max_connections
variable to a value lower than the number of currently open connections.
Analysis: The "alarm_queue.max_elements" size is decided at
server start time and gets modified if we change the max_connections
value. In the current scenario the value of "alarm_queue.max_elements"
is decremented when max_connections is set to 2. When updating the
"alarm_queue.max_elements" value we do not update the "max_used_alarms"
value. Hence, instead of producing the warning "thr_alarm queue is full",
the server asserts at the time of inserting new
elements into the queue.
Fix: the fix is to dynamically increase the size of the alarm_queue.
In order to do that, queue_insert_safe() should be used instead of
queue_insert().
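A standalone sketch of the difference, assuming a simple queue with a soft capacity (the names are illustrative, not the mysys API): insert() asserts when the queue is full, while insert_safe() grows the storage first, which is the behaviour the fix relies on.

#include <cassert>
#include <vector>

// Illustrative queue with a soft capacity, mirroring alarm_queue usage.
struct Queue
{
    std::vector<int> elements;
    size_t max_elements;
};

// Pre-fix behaviour: asserts when the queue is already full.
void queue_insert(Queue &q, int element)
{
    assert(q.elements.size() < q.max_elements);   // fires after max_connections shrank
    q.elements.push_back(element);
}

// Post-fix behaviour: grow the queue instead of asserting.
void queue_insert_safe(Queue &q, int element)
{
    if (q.elements.size() >= q.max_elements)
        q.max_elements = q.max_elements * 2 + 1;  // dynamically resize
    q.elements.push_back(element);
}

int main()
{
    Queue q{{}, 2};
    queue_insert_safe(q, 1);
    queue_insert_safe(q, 2);
    queue_insert_safe(q, 3);   // would have asserted with queue_insert()
    return 0;
}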
FROM MYSQL_BINLOG_SEND
As part of Bug#11747416 (A DISK FULL MAKES BINARY LOG CORRUPT),
the reading of the variable "binlog_can_be_corrupted" was removed.
In the existing code the value of this variable is only set,
never read, and this also causes compiler warnings.
So the variable is completely redundant and should be removed.
sql/sql_repl.cc:
Removing dead code
Some queries with the "SELECT ... FROM DUAL" nested subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.
There were a few different issues with similar assertion
failures on different queries:
1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).
Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used-table map, so they don't look
constant from Item_ref's "point of view". However, a
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, while at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.
Fix: the Item_ref::const_item() function has been overloaded to
take into account both the (*ref)->const_item() status and the tricky
Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there (a reduced
sketch follows after this list).
2. In some cases instead of the const_item() call we check a
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.
Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.
3. The Item_func_regex class didn't propagate with_subselect
either, since it overloads the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.
Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.
4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it is ok to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.
Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.
5. The auxiliary Item_is_not_null_test class has an optimization
in its update_used_tables() function similar to the one in the
Item_func_isnull class, and the same issue in its val_int()
function.
In addition to that the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
< <is_not_null_test>(a),
---
> <cache>(<is_not_null_test>(a)),
or
< having (<is_not_null_test>(a) and <is_not_null_test>(a))
---
> having 1
etc.
Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
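As a reduced sketch of fix 1 above (the Item classes here are simplified stand-ins for the real hierarchy), the overloaded const_item() combines the referenced item's constness with the used-tables map, so a "SELECT ... FROM DUAL" subquery is no longer treated as constant:

#include <cstdint>

// Reduced stand-ins for the real Item hierarchy.
struct Item
{
    virtual ~Item() {}
    // Default behaviour: constant when no tables (and no RAND_TABLE_BIT-style
    // pseudo-table bits) are used.
    virtual bool const_item() const { return used_tables() == 0; }
    virtual uint64_t used_tables() const { return 0; }
};

// A "SELECT ... FROM DUAL" subquery: empty table map, but never constant
// during context analysis.
struct Item_subselect : Item
{
    bool const_item() const override { return false; }
};

struct Item_ref : Item
{
    Item **ref;
    explicit Item_ref(Item **r) : ref(r) {}

    uint64_t used_tables() const override { return (*ref)->used_tables(); }

    // Fix 1 (sketch): combine the referenced item's const_item() status with
    // the table map instead of relying on used_tables() == 0 alone.
    bool const_item() const override
    {
        return (*ref)->const_item() && used_tables() == 0;
    }
};

int main()
{
    Item_subselect sub;
    Item *sub_ptr = &sub;
    Item_ref ref(&sub_ptr);
    return ref.const_item() ? 1 : 0;   // 0: the subquery is not constant
}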
buf_page_get_gen(): Do not attempt to decompress a compressed-only
page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
hash index pointing to a page that does not exist in uncompressed
format in the buffer pool.
innodb_buffer_pool_evict_update(): New function for debug builds, to handle
SET GLOBAL innodb_buffer_pool_evict='uncompressed'
by evicting all uncompressed page frames of compressed tablespaces
from the buffer pool.
rb#1873 approved by Jimmy Yang
RATHER THAN A TABLE
Problem: In RBR, if a table is converted into a view on the slave
(i.e., "drop table 'object1'" & "create view 'object1'"), then any
DML operation on the table at the master causes a crash on the slave.
Analysis: The slave prepares a list of tables to be opened for DML when it
receives Table_map_log_event(s), and the same list is sent to the
open_table function. The open_table logic assumes that if the list
contains a view object, it also contains the "select_lex" object of
that view. In this special case, the table object does not
contain 'select_lex' because it is a base table on the master. Since it
is a view on the slave, the open_table logic goes to the 'mysql_make_view()'
function, which assumes that 'select_lex' exists for the object.
Fix: While preparing the 'tables to be opened' list, we should make
sure that the required table type is 'base table'. If the object is not a
base table when it is opened, mysql_make_view will throw an
error similar to 'object is not a base table'.
sql/log_event.cc:
Require that all objects referenced by a Table_map_log_event are
base tables on the slave as well.
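A minimal sketch of the check, with simplified stand-ins for the server's table-list entry and frm-type handling: entries built from a Table_map_log_event require a base table, so a slave-side view is rejected with an error instead of crashing.

#include <cstdio>

enum frm_type { FRMTYPE_ERROR, FRMTYPE_TABLE, FRMTYPE_VIEW };

// Simplified stand-in for a table-list entry built from a Table_map_log_event.
struct Table_ref
{
    const char *name;
    frm_type required_type;   // the fix marks row-event tables as base tables
};

// Simplified open-table check: a replicated row event must target a base
// table; if the slave-side object is a view, raise an error instead of
// dereferencing view-only state (select_lex) that was never set up.
static bool open_for_row_event(const Table_ref &tbl, frm_type actual_type)
{
    if (tbl.required_type == FRMTYPE_TABLE && actual_type != FRMTYPE_TABLE)
    {
        std::fprintf(stderr, "ERROR: '%s' is not a base table\n", tbl.name);
        return false;
    }
    return true;
}

int main()
{
    Table_ref t1{"object1", FRMTYPE_TABLE};
    return open_for_row_event(t1, FRMTYPE_VIEW) ? 0 : 1;  // 1: error, no crash
}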
The test binlog.binlog_spurious_ddl_errors was failing on pb2 at the statement
"UNINSTALL PLUGIN example;" with this warning:
"Warning 1620 Plugin is busy and will be uninstalled on shutdown"
Fix
Spurious warnings occur in the test because we do not empty the query cache,
which is used by the example plugin when creating tables with the plugin.
Hence, the query cache is flushed before uninstalling the plugin.
Also, as part of running the test across platforms, the plugin installation
script is changed.
Get rid of O(n^2) scan in dyn array (mtr->memo) operations, accessing
the dyn array blocks directly.
dyn_array_get_last_block(), dyn_array_get_next_block(),
dyn_array_get_prev_block(): Define as a constness-preserving macro.
Add const qualifiers to many dyn_array functions.
mtr_memo_slot_release_func(): Renamed from mtr_memo_slot_release():
Make mtr_t* a debug-only parameter. Assume that slot->object != NULL.
mtr_memo_pop_all(): Access the dyn_array blocks directly, replacing
O(n^2) operation with O(n).
mtr_memo_release(): Access the dyn_array blocks directly, replacing
O(n^2) operation with O(n). This caused the performance problem.
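A standalone sketch of the change in access pattern, using a plain block list in place of the real dyn_array: instead of re-locating every slot by walking the block chain from the start (O(n^2) overall), the release loop walks the blocks directly once (O(n)).

#include <list>
#include <vector>

struct Slot { int object; };
using Block = std::vector<Slot>;         // a dyn_array block holds many slots
using DynArray = std::list<Block>;       // the mtr memo is a chain of blocks

// Pre-fix pattern (conceptually): locate slot i by walking the block chain
// from the beginning each time, which is O(n) per slot and O(n^2) overall.

// Post-fix pattern: walk the blocks directly, releasing every slot in one
// pass over the chain, which is O(n) overall.
static void memo_release_all(DynArray &memo)
{
    for (Block &block : memo)
        for (Slot &slot : block)
            slot.object = 0;             // "release" the latched object
}

int main()
{
    DynArray memo{Block(4, Slot{1}), Block(4, Slot{1})};
    memo_release_all(memo);
    return 0;
}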
rb#1540 approved by Jimmy Yang
Problem: When a view with a specific character set and collation
is created on another view with a different character set and collation, the
dump restoration results in an 'illegal mix of collations' error.
SOLUTION: To avoid this confusion of collations, the data type used in the
generated CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter
because the table created is dropped at runtime, and tinyint is specifically
used to avoid hitting row size limits.
Consider the following query:
SELECT f_1,..,f_m, AGGREGATE_FN(C)
FROM t1
WHERE ...
GROUP BY ...
Loose index scan ("Using index for group-by") can be used for
this query if there is an index 'i' covering all fields in the
select list, and the GROUP BY clause makes up a prefix f1,...,fn
of 'i'. Furthermore, according to rule NGA2 of
get_best_group_min_max(), the WHERE clause must contain a
conjunction of equality predicates for all fields fn+1,...,fm.
The problem in this bug was that a query with a WHERE clause that
broke NGA2 (NGA: Non-Group Attribute) was not detected, and loose
index scan was therefore used.
This led to wrong results. The query had an index
covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
or
"WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of
equality predicates.
The solution is to introduce another rule, NGA3, that complements
NGA2. NGA3 says that if a gap field (field between those
listed in GROUP BY and C in the index) has a predicate, then
there can only be one range in the query. This requirement is
more strict than it has to be in theory. BUG 15947433 will deal
with that.
sql/opt_range.cc:
check for the repetition of non group field.
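A reduced sketch of the NGA3 check (SEL_ARG here is a trivial stand-in for the real range-tree node): if the range tree for a gap keypart holds more than one range, loose index scan is rejected, because such ranges cannot be rewritten as a single conjunction of equalities.

// Trivial stand-in for the range-tree node: a linked list of disjoint ranges
// kept per keypart.
struct SEL_ARG
{
    SEL_ARG *next;            // next disjoint range on the same keypart
};

// NGA3 (sketch): a keypart between the GROUP BY prefix and the aggregated
// column may be used for loose index scan only if it has exactly one range.
static bool nga3_single_range(const SEL_ARG *keypart_range_tree)
{
    return keypart_range_tree != nullptr && keypart_range_tree->next == nullptr;
}

int main()
{
    SEL_ARG second{nullptr};
    SEL_ARG first{&second};        // two ranges, e.g. c2 = 'a' OR c2 = 'b'
    return nga3_single_range(&first) ? 1 : 0;   // 0: loose index scan rejected
}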
Analysis:
---------
When the server is out of memory, an error is raised
to indicate this. Handling the error requires more
memory to be allocated, which fails; hence the
error handling recurses and causes the
server to crash.
Fix:
---
a) Prevents pushing the 'out of memory' error condition
to the diagnostic area as it requires memory allocation.
GET DIAGNOSTICS, SHOW WARNINGS and SHOW ERRORS statements
will not show information about this error. However the
'out of memory' error is returned to the client.
b) It sets the ME_FATALERROR flag when 'out of memory' errors
are reported (for places where the flag is not already set).
This flag prevents activation of SP error handlers which also
require memory allocation and therefore are likely to fail.
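A standalone sketch of the two-part idea (ME_FATALERROR is the flag named above; everything else is a simplified stand-in): an out-of-memory error is reported to the client and flagged fatal, but never recorded in the diagnostics area, so the error path does not allocate.

#include <cstdio>
#include <string>
#include <vector>

static const int EE_OUTOFMEMORY = 5;       // illustrative error code
static const unsigned ME_FATALERROR = 1;   // flag named in the fix description

struct Diagnostics_area
{
    std::vector<std::string> conditions;   // pushing here allocates memory
};

// Sketch of the error-reporting path: an out-of-memory error is reported to
// the client but never pushed into the diagnostics area, and it carries the
// fatal flag so no SP error handler (which would also allocate) is activated.
static void report_error(Diagnostics_area &da, int code, unsigned flags,
                         const char *msg)
{
    std::fprintf(stderr, "ERROR %d: %s\n", code, msg);   // goes to the client
    if (code == EE_OUTOFMEMORY || (flags & ME_FATALERROR))
        return;                            // skip any allocating bookkeeping
    da.conditions.push_back(msg);          // normal errors are recorded
}

int main()
{
    Diagnostics_area da;
    report_error(da, EE_OUTOFMEMORY, ME_FATALERROR, "Out of memory");
    return static_cast<int>(da.conditions.size());   // 0: nothing recorded
}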
Problem:
In the case of a blob data field, UNION ALL doesn't give a correct result.
Analysis:
In a MyISAM table, when we don't want to check distinctness for a particular
key, we set the key_map to zero.
While writing a record to a MyISAM table, we check distinctness with the help
of keys, by checking whether a key is active in key_map before writing
the record.
In the case of a blob field, distinctness is checked through a unique
constraint, where we do not check whether that unique key is active in key_map.
Solution:
Before checking for distinctness, check whether any key is active in key_map.
storage/myisam/mi_write.c:
check whether key_map is active before checking distinct.
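A reduced sketch of the added guard (the structures are simplified stand-ins for the MyISAM share/info): the unique-constraint duplicate check is only performed when at least one key is active in key_map, matching how ordinary keys already honour the map.

#include <cstdint>

// Simplified stand-ins for the MyISAM share/info structures.
struct MI_SHARE
{
    uint64_t key_map;          // bitmap of active keys; 0 for a UNION ALL temp table
    unsigned uniques;          // number of unique constraints (blob fields)
};

static bool duplicate_in_unique(const MI_SHARE &) { return true; }

// Sketch of the write path: skip the unique-constraint duplicate check when
// no key is active, so UNION ALL (which deactivates all keys to keep
// duplicates) no longer rejects rows with identical blob values.
static int mi_write_row(const MI_SHARE &share)
{
    if (share.key_map != 0 && share.uniques)   // the added key_map guard
    {
        if (duplicate_in_unique(share))
            return 121;                        // duplicate-key style error
    }
    return 0;                                  // row written
}

int main()
{
    MI_SHARE share{0, 1};                      // keys deactivated for UNION ALL
    return mi_write_row(share);                // 0: duplicate blob rows accepted
}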
WITH A VARIABLE AND ORDER BY
Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX
This is a fix for a regression introduced by Bug#12667154:
Bug#12667154 attempted to fix a performance problem with subqueries
that did filesort. For doing filesort, the optimizer creates a quick
select object to use when building the sort index. This quick select
object was deleted after the first call to create_sort_index(). Thus,
for queries where the subquery was executed multiple times, the quick
object was only used for the first execution. For all later executions
of the subquery, filesort used a complete table scan for building the
sort index. The fix for Bug#12667154 tried to fix this by not deleting
the quick object after the first execution of create_sort_index() so
that it would be re-used for building the sort index by the following
executions of the subquery.
The regression introduced by Bug#12667154 is that, due to not deleting
the quick select object after building the sort index, the quick
object could in some cases be used also during the second phase of the
execution of the subquery instead of using the created sort
index. This caused wrong results to be returned.
The fix for this issue is to delete the reference to the select object
after it has been used in create_sort_index(). In this way the select
and quick objects will not be available when doing the second phase
of the execution of the select operation. To ensure that the select
object can be re-used for the following executions of the subquery
we make a copy of the select pointer. This is used for restoring the
select object after the select operation is completed.
mysql-test/suite/innodb/r/innodb_mysql.result:
Changed explain output: The explain now contains "Using where" since we
have restored the select pointer after doing the filesort operation.
sql/sql_select.cc:
Change create_sort_index() so that it always sets the pointer to
the select object to NULL. This is done in order to avoid that the
select->quick object can be used when executing the main part of
the select operation.
sql/sql_select.h:
New member in JOIN_TAB: saved_select. Used by create_sort_index to
make a backup copy of the select pointer.
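A reduced sketch of the pointer handling described above (JOIN_TAB, SQL_SELECT and the function bodies are simplified stand-ins): the select pointer is backed up in saved_select and cleared before the main execution phase, then restored so later executions of the subquery can rebuild the sort index.

struct SQL_SELECT { /* owns the quick range-select object */ };

// Reduced stand-in for JOIN_TAB with the new member from the fix.
struct JOIN_TAB
{
    SQL_SELECT *select;
    SQL_SELECT *saved_select;    // new member: backup of the select pointer
};

// Sketch: build the sort index with the quick select, then detach it so it
// cannot be used again during the main phase of this execution.
static void create_sort_index(JOIN_TAB *tab)
{
    /* ... filesort using tab->select / tab->select->quick ... */
    tab->saved_select = tab->select;
    tab->select = nullptr;       // main execution now reads the sort index
}

// Sketch: after the select operation completes, restore the pointer so the
// next execution of the subquery can reuse the quick object for filesort.
static void restore_select(JOIN_TAB *tab)
{
    if (tab->saved_select)
    {
        tab->select = tab->saved_select;
        tab->saved_select = nullptr;
    }
}

int main()
{
    SQL_SELECT sel;
    JOIN_TAB tab{&sel, nullptr};
    create_sort_index(&tab);     // tab.select == nullptr during the main phase
    restore_select(&tab);        // ready for the next subquery execution
    return tab.select == &sel ? 0 : 1;
}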
WITH AN ASSERTION
Recently we added a check to handle the kill query signal for long-running
queries.
While the query interruption is reported, we must ensure the cursor is restored
to a proper state for the HANDLER interface to work correctly.
A normal SELECT query will not face this problem, because on receiving the
interrupt, the SELECT query is aborted and a new SELECT query results in
re-initialization (including the cursor).
rb://1836. Approved by Marko.
Analysis:
--------
The REPLACE operation produces incorrect output when
a user variable is supplied as an argument and there
are multiple rows on which the operation is performed.
Consider the example below:
SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM
INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';
Invalid output:
+---------------------------------------+
| REPLACE(@var, '00000000', TABLE_NAME) |
+---------------------------------------+
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
......
......
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
+---------------------------------------+
The user variable supplied as the subject string to the REPLACE
operation is overwritten after the first iteration
to '(( columns_priv ++ columns_priv ))'.
The overwritten string is then used for the
subsequent REPLACE iterations. Since
the pattern string is no longer found, the invalid
output shown above is returned.
Fix:
---
If Alloced_length is zero, realloc() and create a
copy of the string, which is then used for the REPLACE
operation on every iteration.
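A standalone sketch of the effect of the fix, using std::string in place of the server's String class: the subject (the user variable's own buffer, Alloced_length == 0 in server terms) is copied before any replacement work, so the variable keeps its value between rows.

#include <string>

// Sketch: take a private copy of the subject for every evaluation instead of
// replacing in place inside the user variable's buffer.
static std::string eval_replace(const std::string &subject,
                                const std::string &from, const std::string &to)
{
    std::string work = subject;               // private copy per evaluation
    for (std::string::size_type pos = 0;
         (pos = work.find(from, pos)) != std::string::npos; pos += to.size())
        work.replace(pos, from.size(), to);
    return work;                               // @var itself is left untouched
}

int main()
{
    const std::string var = "(( 00000000 ++ 00000000 ))";   // @var
    std::string row1 = eval_replace(var, "00000000", "columns_priv");
    std::string row2 = eval_replace(var, "00000000", "db");
    return (row2 == "(( db ++ db ))") ? 0 : 1;   // 0: correct per-row results
}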
INCLUDES FIRST PARTITION WHEN PRUNING
PROBLEM
-------
TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, which
are stored in the first partition; therefore the first partition
was always included in the scan when a range was specified.
FIX
---
The fix is a small optimization that prunes scanning of the
NULL/first partition if the dates specified in the range are valid
and in the same year and month. The TO_SECONDS() function is not
supported in 5.1, so it has been removed from the fix and test
scripts for the mysql-5.1 version.
Problem description: When the client loses the connection to the MySQL server,
or if the server is shut down after mysql_stmt_prepare(), then the next
mysql_stmt_prepare() will return an error (as expected), but a subsequent call
to mysql_stmt_execute() will crash the client program.
The expected behavior is that it should throw an error.
Analysis: mysql_stmt_prepare() internally calls the function end_server(),
and net->vio and net->buff are freed and set to NULL. The next call to
mysql_stmt_execute() then internally calls net_clear(), where we use "net->vio"
without validating it.
Fix: Validate net->vio before calling net_clear().
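A reduced sketch of the guard (NET and Vio are stand-ins for the client library types, and the error message is illustrative): net_clear() is only reached when net->vio is still valid, so a statement executed over a dead connection gets an error instead of a crash.

#include <cstdio>

struct Vio {};
struct NET { Vio *vio; };                   // vio becomes NULL after end_server()

static void net_clear(NET *net) { (void)net; /* flush pending data on net->vio */ }

// Sketch of the execute path after the fix: validate net->vio before
// net_clear(), and report an error instead of dereferencing a dead connection.
static int stmt_execute(NET *net)
{
    if (net->vio == nullptr)
    {
        std::fprintf(stderr, "ERROR: lost connection to MySQL server\n");
        return 1;
    }
    net_clear(net);
    /* ... send the execute command ... */
    return 0;
}

int main()
{
    NET net{nullptr};                        // connection lost after a failed prepare
    return stmt_execute(&net) ? 0 : 1;       // 0: error returned, no crash
}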