AND 'KILL SESSION' LEAD TO CRASH
Analysis:
--------
This crash occurs when a connection executing the query
"show engine innodb status" is killed by another connection
issuing "kill <con>".
In the function "innodb_show_status", the function "stat_print"
is called to print the status, but its return value is not
checked. After the connection is killed, the write to the
connection fails, an error is returned, and the same is set
in the diagnostics area. Since FALSE is still returned from
"innodb_show_status", the assertion in "set_eof_status" (called
from my_eof) that no error has been set fails.
Fix:
----
Changed code to check return value of function "stat_print"
in "innodb_show_status".
ha_innobase::records_in_range() should return HA_POS_ERROR for a table
whose tablespace has been discarded, without requesting any pages.
The handler methods called later should then treat the error correctly.
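A minimal sketch of the intended behaviour, with simplified stand-ins
for the InnoDB types (only the discarded-tablespace check matters here):

  #include <cstdint>

  typedef uint64_t ha_rows;
  static const ha_rows HA_POS_ERROR = ~(ha_rows) 0;

  struct dict_table_t { bool ibd_file_missing; };  // simplified stand-in

  ha_rows records_in_range(const dict_table_t* table /* , key range... */)
  {
      if (table->ibd_file_missing) {
          // The tablespace has been discarded: there are no pages to
          // read, so report "unknown" instead of requesting pages.
          return HA_POS_ERROR;
      }
      // ... normal estimate obtained by diving into the index ...
      return 10;  // placeholder estimate
  }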
Approved by Sunny in rb#3433
Problem:
When the user specified foreign key name contains "_ibfk_", InnoDB wrongly
tries to rename it.
Solution:
When a table is renamed, its associated foreign keys are renamed along
with it, but only if the foreign key names were automatically generated.
If a foreign key name was given by the user, it must not be renamed,
even if it contains "_ibfk_".
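A minimal sketch of the naming rule (simplified; the point is that only
names matching the generated pattern "<old table name>_ibfk_<n>" are
rewritten, not every name containing "_ibfk_"):

  #include <string>

  // A foreign key name counts as auto-generated only if it starts with
  // the old table name followed by "_ibfk_".
  bool is_generated_fk_name(const std::string& fk_name,
                            const std::string& old_table_name)
  {
      const std::string prefix = old_table_name + "_ibfk_";
      return fk_name.compare(0, prefix.size(), prefix) == 0;
  }

  // A user-given name such as "x_ibfk_1" on a table named "t1" does not
  // match "t1_ibfk_" and therefore keeps its spelling on rename.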
rb#2935 approved by Jimmy, Krunal and Satya
SERIALIZABLE
Problem:
The documentation claims that WITH CONSISTENT SNAPSHOT will work for both
the REPEATABLE READ and SERIALIZABLE isolation levels, but it works only
for REPEATABLE READ. Also, the clause WITH CONSISTENT
SNAPSHOT is silently ignored when it is not applicable to the given
isolation level.
Solution:
Generate a warning when the clause WITH CONSISTENT SNAPSHOT is ignored.
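A minimal sketch of the warning path, with stand-in types (the real code
pushes the warning through the server's diagnostics interface):

  #include <cstdio>

  enum isolation_level { READ_UNCOMMITTED, READ_COMMITTED,
                         REPEATABLE_READ, SERIALIZABLE };

  void start_consistent_snapshot(isolation_level iso)
  {
      if (iso != REPEATABLE_READ) {
          // Do not silently ignore the clause any more: warn the user.
          fprintf(stderr, "Warning: WITH CONSISTENT SNAPSHOT was ignored;"
                          " it only applies to REPEATABLE READ\n");
          return;
      }
      // ... open the consistent read view here ...
  }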
rb#2797 approved by Kevin.
Note: Support team wanted to push this to 5.5+.
Analysis
--------
The pthread_mutex commit_threads_m was initialized but never
used.
Fix
---
Removing the commit_threads_m mutex from the code base.
[ Approved by Marko rb#2475]
DDL AND I_S QUERIES
Skip partially created indexes (ones whose names start with
TEMP_INDEX_PREFIX) during stats gathering.
Because InnoDB reports HA_INPLACE_ADD_INDEX_NO_WRITE to MySQL, the latter
allows parallel execution of ha_innobase::add_index() and ha_innobase::info().
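A minimal sketch of the skip condition; TEMP_INDEX_PREFIX mirrors the
0xFF byte InnoDB prefixes to the names of indexes still being created,
and the list walk is simplified:

  struct dict_index_t { const char* name; dict_index_t* next; };

  static const char TEMP_INDEX_PREFIX = '\377';  // 0xFF marker byte

  void gather_index_stats(dict_index_t* first_index)
  {
      for (dict_index_t* index = first_index; index != nullptr;
           index = index->next) {
          if (index->name[0] == TEMP_INDEX_PREFIX) {
              continue;  // partially created index: not safe to scan
          }
          // ... gather statistics for this fully built index ...
      }
  }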
Reviewed by: Inaam (rb:2613)
IF IT HAS A WRONG COUNT
If CHECK TABLE finds that a secondary index contains the wrong
number of entries, it used to report an error but not mark the
index as corrupt. The error means that the index should be rebuilt,
which can be done with ALTER TABLE DROP INDEX and ALTER TABLE ADD
INDEX. But just in case the DBA does not pay any attention to the
output of CHECK TABLE, the secondary index should be marked as
corrupted so that it is not used again.
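A minimal sketch of the intended behaviour, with a plain bool standing
in for InnoDB's index corruption flag:

  #include <cstdint>
  #include <cstdio>

  struct dict_index_t {
      const char* name;
      bool        corrupted;  // stand-in for the real corruption flag
  };

  bool check_index_count(dict_index_t* index,
                         uint64_t expected, uint64_t actual)
  {
      if (actual == expected) {
          return true;
      }
      fprintf(stderr,
              "InnoDB: index %s contains %llu entries, should be %llu\n",
              index->name,
              (unsigned long long) actual,
              (unsigned long long) expected);
      // New behaviour: also mark the index corrupted, so it is not used
      // again until it is rebuilt with DROP INDEX / ADD INDEX.
      index->corrupted = true;
      return false;
  }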
Approved by Inaam in RB:2607
AFTER A ROW IS READ
Approved by: Sunny Bains rb://2425
Don't release concurrency tickets when asked to release
btr_search_latch. This is a 5.5 only bug. It is already
fixed in 5.6 upwards.
TRANSACTION ROLLBACK
Problem:
=======
"prepare_commit_mutex" is acquired during "innobase_xa_prepare"
and it is freed only in "innobase_commit". After prepare,
if the commit operation fails the transaction is rolled back
but the mutex is not released.
Analysis:
========
During the transaction commit process, the transaction is prepared and
the "prepare_commit_mutex" is acquired to preserve the commit order.
After prepare, the write to the binlog is initiated.
File: sql/handler.cc
if (error || (is_real_trans && xid &&
-----> (error= !(cookie= tc_log->log_xid(thd, xid)))))
{
ha_rollback_trans(thd, all);
In the above code the "tc_log->log_xid" operation fails.
When the write to the binlog fails, the transaction is rolled back
without freeing the mutex. A subsequent "INSERT" operation then
tries to acquire the same mutex during its commit process,
and the server aborts.
Fix:
===
"prepare_commit_mutex" is freed during "innobase_rollback".
storage/innobase/handler/ha_innodb.cc:
Added code to free "prepare_commit_mutex"
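A minimal sketch of the fix (simplified; in the real tree the mutex is
InnoDB's and the "owns" state is tracked on the transaction object):

  #include <pthread.h>

  static pthread_mutex_t prepare_commit_mutex = PTHREAD_MUTEX_INITIALIZER;

  struct trx_t { bool owns_prepare_mutex; };

  int innobase_rollback(trx_t* trx)
  {
      // ... roll the transaction back ...

      // Fix: if innobase_xa_prepare() acquired prepare_commit_mutex and
      // the commit never ran (e.g. the binlog write failed), release it
      // here so the next transaction's commit does not hang on it.
      if (trx->owns_prepare_mutex) {
          pthread_mutex_unlock(&prepare_commit_mutex);
          trx->owns_prepare_mutex = false;
      }
      return 0;
  }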
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There are two situations here: the constraint name is either given
explicitly by the user or generated automatically by InnoDB. A generated
constraint name is formed by prefixing the table name, and table names
are stored internally in my_charset_filename. A constraint name given
explicitly by the user is stored in UTF-8 itself. So in some situations
the constraint name is in UTF-8 and in others it is in
my_charset_filename format. Hence this problem.
Solution:
Always store the foreign key constraint name in UTF-8 even when
automatically generated.
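A minimal sketch of the idea; convert_filename_to_utf8() is a
hypothetical helper standing in for the server's charset-conversion
routines:

  #include <string>

  // Hypothetical stand-in: the real code converts my_charset_filename
  // (with its @XXXX escapes) to UTF-8; here the name passes through.
  static std::string convert_filename_to_utf8(const std::string& name)
  {
      return name;
  }

  std::string generate_fk_name(const std::string& table_name_filename_cs,
                               unsigned seq_no)
  {
      // Convert first, so generated names and user-given names (already
      // UTF-8) end up in one consistent encoding.
      return convert_filename_to_utf8(table_name_filename_cs)
             + "_ibfk_" + std::to_string(seq_no);
  }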
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There was a memory leak in the function dict_create_add_foreign_to_dictionary().
The allocated pars_info_t object is not freed in the error code path.
Solution:
Allocate the pars_info_t object after the error checking.
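A minimal sketch of the pattern behind the fix, with stand-in
definitions replacing the real InnoDB allocator:

  struct pars_info_t { int dummy; };

  static pars_info_t* pars_info_create() { return new pars_info_t(); }
  static void pars_info_free(pars_info_t* p) { delete p; }

  int dict_create_add_foreign_to_dictionary(bool precondition_ok)
  {
      // Before the fix, pars_info_create() was called up here, so the
      // early-return error path below leaked the object.
      if (!precondition_ok) {
          return 1;  // nothing allocated yet, nothing to leak
      }

      pars_info_t* info = pars_info_create();  // allocate after checks
      // ... build and execute the statement using info ...
      pars_info_free(info);
      return 0;
  }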
rb#2368 in review
OPENING MISSING PARTITION
In the ha_innobase::open() call, for normal tables, there is no retry logic.
But for partitioned tables, there is a retry logic introduced as fix for:
http://bugs.mysql.com/bug.php?id=33349
https://support.mysql.com/view.php?id=21080
Bug#33349 does not provide sufficient information to analyze the original
problem, and the problem it reports is also minor (just an annoyance,
with no loss of functionality). Most importantly, the retry logic
has been introduced without any associated test case.
So we are removing the retry logic for partitioned tables. When the original
problem occurs, a different solution will be explored.
UPDATES
After checking that the table has changed too much in
row_update_statistics_if_needed() and calling dict_update_statistics(),
also check if the same condition holds after acquiring the table stats
latch. This is to avoid multiple threads concurrently entering and
executing the stats update code.
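A minimal sketch of the double-checked pattern, with a std::mutex
standing in for the table stats latch and an assumed change threshold:

  #include <mutex>

  struct table_stats_t {
      std::mutex    latch;          // stand-in for the stats latch
      unsigned long n_rows;
      unsigned long n_rows_changed;
  };

  static bool changed_too_much(const table_stats_t& t)
  {
      return t.n_rows_changed > t.n_rows / 16;  // assumed heuristic
  }

  void update_statistics_if_needed(table_stats_t& t)
  {
      if (!changed_too_much(t)) {
          return;                   // cheap check without the latch
      }
      std::lock_guard<std::mutex> guard(t.latch);
      if (!changed_too_much(t)) {
          return;  // another thread updated the stats while we waited
      }
      // ... recompute the statistics ...
      t.n_rows_changed = 0;
  }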
Approved by: Marko (rb:2186)
This is a deadlock that will also be fixed in the server by
Bug #11844915 - HANG IN THDVAR MUTEX ACQUISITION.
So this is a simple alternate method of fixing the same problem,
but from within InnoDB.
The simple change is to make rename table start a transaction
before locking dict_sys->mutex since thd_supports_xa() can call
THDVAR which can lock a mutex, LOCK_global_system_variables, that
is used in the server by many other activities. At least one of
those, sys_var::update(), can call back into InnoDB and try to
lock dict_sys->mutex while holding LOCK_global_system_variables.
The other bug fix for 11844915 eliminates the use of
LOCK_global_system_variables for calls to THDVAR.
Approved by marko in http://rb.no.oracle.com/rb/r/2000/
FROM SHOW CREATE
Problem: The length of the internally generated foreign key name
is not checked.
Solution: The length of the internally generated foreign key name is
checked. If it is greater than the allowed limit, an error message
is reported. Also, the constraint name is printed in the same manner
as the table name, using the system charset information.
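A minimal sketch of the guard (the 64-character identifier limit is
MySQL's; the error reporting is a simplified stand-in for the server's
"identifier name too long" error):

  #include <cstddef>
  #include <cstdio>
  #include <cstring>

  static const std::size_t MAX_IDENT_CHARS = 64;  // identifier limit

  bool fk_name_length_ok(const char* fk_name)
  {
      // Simplification: this counts bytes, while the real limit is in
      // characters of the system charset.
      if (strlen(fk_name) > MAX_IDENT_CHARS) {
          fprintf(stderr, "Identifier name '%s' is too long\n", fk_name);
          return false;
      }
      return true;
  }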
rb://1969 approved by Marko.
Problem:
During the index intersect access method, the SQL layer will access one
row that satisfies a set of conditions using an index i1, and then try
to access the same row with another set of conditions using the next
index i2. If the fetch from i2 fails (an error situation, not simply an
unmatched row), it will unlock the row accessed via i1. This works in
all situations except a deadlock error.
When a deadlock happens, InnoDB rolls back the transaction and informs
the SQL layer about this through the THD::transaction_rollback_request
member. But this member is not currently checked by the SQL layer.
Solution:
When an error happens, the SQL layer must check the
THD::transaction_rollback_request member, before calling handler::unlock_row().
We have also added a debug assert in ha_innobase::unlock_row() checking that
it must be called only when the transaction is in active state.
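A minimal sketch of the calling pattern in the SQL layer, with
simplified stand-ins for THD and handler:

  struct THD { bool transaction_rollback_request; };

  struct handler {
      void unlock_row() { /* release the lock of the last read row */ }
  };

  void on_secondary_fetch_error(THD* thd, handler* h)
  {
      // If InnoDB has already rolled back the whole transaction
      // (deadlock), the row lock no longer exists; do not unlock it.
      if (thd->transaction_rollback_request) {
          return;
      }
      h->unlock_row();
  }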
rb#1773 approved by Marko and Sunny.
btr_lift_page_up() writes a wrong page number (off by -1) for pages
above the father page. But in almost all cases the father page is the
root page, with no pages above it, so this is a very rare path.
In addition, a leaf page should not be lifted unless the father page is
the root, because branch pages should not become leaf pages.
rb://1336 approved by Marko Makela.
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with message
stating that "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in the Protocol::end_statement()
fails.
  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
Delete-mark change buffer records when resorting to a pessimistic
delete from the change buffer B-tree. Skip delete-marked records in
the change buffer merge and when estimating whether an operation can
be buffered. Without this fix, we could try to apply the same buffered
changes multiple times if the server was killed at the right moment.
In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
delete-marked (already processed) records.
ibuf_delete_rec(): Add a crash point before optimistic delete. If the
optimistic delete fails, flag the record processed before
mtr_commit().
ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
processed) records.
Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
btr_cur_set_deleted_flag_for_ibuf() and add a parameter.
rb:1307 approved by Jimmy Yang
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
This assertion catches unnecessary calls to this method,
but such calls are not harming correctness.
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)
Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column max value when the column value is
a negative number.
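A small self-contained illustration of why the assertion was too strong
(made-up values; this is not the real innobase_next_autoinc() code):

  #include <cassert>
  #include <cstdint>

  int main()
  {
      // A stored column value of -1, seen through the unsigned
      // arithmetic used for autoinc calculations, becomes
      // 0xFFFFFFFFFFFFFFFF and so compares greater than the column's
      // maximum (here the max of a signed 32-bit INT).
      int64_t  column_value = -1;
      uint64_t as_unsigned  = (uint64_t) column_value;
      uint64_t col_max      = INT32_MAX;

      // The old assertion was essentially "value <= col_max", which
      // fires here even though -1 is a legal column value.
      assert(as_unsigned > col_max);
      return 0;
  }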
rb://978 Approved by Jimmy Yang.
rb:1236 approved by Marko Makela
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server index information would get out of
sync with InnoDB - the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if drop fails (for ALTER TABLE statements that both adds
and drops indexes).
However, this won't work if we added a primary key, as it might
not be possible to drop this key inside InnoDB. Therefore,
we resort to the copy algorithm if a primary key is added
by an ALTER TABLE statement that also drops an index.
In 5.6 this bug is more properly fixed by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
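A minimal sketch of the fallback decision described above (hypothetical
names, not the server's actual functions):

  struct AlterRequest {
      bool adds_primary_key;
      bool drops_index;
  };

  enum AlterAlgorithm { FAST_INDEX_OPERATION, TABLE_COPY };

  AlterAlgorithm choose_algorithm(const AlterRequest& req)
  {
      // A freshly added PRIMARY KEY may be impossible to drop again
      // inside InnoDB if the drop phase fails, so such statements fall
      // back to the copy algorithm.
      if (req.adds_primary_key && req.drops_index) {
          return TABLE_COPY;
      }
      return FAST_INDEX_OPERATION;
  }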
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills this
query, then the "Got error -1 when reading table" error message must
not be logged in the server log file. Since this is a user-requested
interruption, no spurious error message should be logged in the
server log. This patch removes the error message from the log.
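A minimal sketch of the logging guard, with a plain flag standing in
for the server's kill state:

  #include <cstdio>

  struct THD { bool killed; };

  void report_table_read_error(const THD* thd, int error)
  {
      if (thd->killed) {
          // User-requested interruption: not a real table error, so
          // keep it out of the server log.
          return;
      }
      fprintf(stderr, "Got error %d when reading table\n", error);
  }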
approved by joh and tatjana
Fixed by backport of:
------------------------------------------------------------
revno: 3402.50.156
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-test
timestamp: Wed 2012-02-08 14:10:23 +0100
message:
Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
This assert could be triggered if an InnoDB table was being moved
to a different database using ALTER TABLE ... RENAME, while this
database concurrently was being dropped by DROP DATABASE.
The reason for the problem was that no metadata lock was taken
on the target database by ALTER TABLE ... RENAME.
DROP DATABASE was therefore not blocked and could remove
the database while ALTER TABLE ... RENAME was executing. This
could cause the assert in InnoDB to be triggered.
This patch fixes the problem by taking an IX metadata lock on
the target database before ALTER TABLE ... RENAME starts
moving a table to a different database.
Note that this problem did not occur with RENAME TABLE which
already takes the correct metadata locks.
Also note that this patch slightly changes the behavior of
ALTER TABLE ... RENAME. Before, the statement would abort and
return an error if a lock on the target table name could not
be taken immediately. With this patch, ALTER TABLE ... RENAME
will instead block and wait until the lock can be taken
(or until we get a lock timeout). This also means that it is
possible to get ER_LOCK_DEADLOCK errors in this situation
since we allow ALTER TABLE ... RENAME to wait and not just
abort immediately.