Commit graph

116 commits

Author SHA1 Message Date
Annamalai Gurusami
b76a59f5a6 Bug 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and the base handler::clone() does
not take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other index we
did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented along the lines of ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operation
required, setting ha_innobase->prebuilt->select_lock_type
correctly. 
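
As an editorial illustration only (hypothetical table and query, not part of
the patch), the kind of statement that exercises init_ror_merged_scan() over
two secondary indexes is roughly:

  CREATE TABLE t (a INT, b INT, c INT, KEY (a), KEY (b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1, 10), (1, 2, 20), (2, 1, 30);
  -- A locking read whose plan may intersect the ROR scans on KEY (a) and
  -- KEY (b); before the fix the cloned handler silently did a consistent read.
  SELECT * FROM t WHERE a = 1 AND b = 1 FOR UPDATE;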

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
Georgi Kodinov
262c156849 merge mysql-5.1->mysql-5.1-security 2012-03-21 14:53:09 +02:00
karen.langford@oracle.com
3adb401c8a Merge from mysql-5.1.62-release 2012-03-20 17:35:41 +01:00
Annamalai Gurusami
d4ed7cf411 Bug 59783: INNODB DATA GROWS UNEXPECTEDLY WHEN INSERTING, TRUNCATING, INSERTING THE SAME SET OF ROWS
The test case must insert all the records using a single transaction. Otherwise the test 
case takes more than 15 minutes and will time out in pb2 and mtr.
2012-03-16 12:06:29 +05:30
Annamalai Gurusami
da4418977d Bug 59783: InnoDB data grows unexpectedly when inserting,
truncating, inserting the same set of rows. When a table is 
re-created with the same set of rows, the data file size must
not grow.  
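
A minimal sketch of the scenario (hypothetical table, illustration only):

  CREATE TABLE t (c CHAR(255)) ENGINE=InnoDB;
  INSERT INTO t VALUES (REPEAT('a', 255)), (REPEAT('b', 255)), (REPEAT('c', 255));
  TRUNCATE TABLE t;
  INSERT INTO t VALUES (REPEAT('a', 255)), (REPEAT('b', 255)), (REPEAT('c', 255));
  -- The on-disk size of the table's data file must not grow after
  -- re-inserting the same set of rows.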

rb:968
Approved by Marko.
2012-03-09 11:07:16 +05:30
Georgi Kodinov
8232d9a6ee merge mysql-5.1->mysql-5.1-security 2012-03-08 17:16:53 +02:00
Annamalai Gurusami
27ecea534c Bug#13635833: MULTIPLE CRASHES IN FOREIGN KEY CODE WITH CONCURRENT DDL/DML
There are two threads. In one thread, a DML operation involving a
cascaded update is in progress. In another thread, an ALTER TABLE ...
ADD FOREIGN KEY constraint is being executed. Under these
circumstances it is possible for the DML thread to access a
dict_foreign_t object that has already been freed by the DDL thread.
The debug sync test case reproduces this sequence of operations.
Without the fix, the test case crashes the server (because of a
newly added assert). With the fix, the ALTER TABLE statement returns
an error message.
      
Backporting the fix from MySQL 5.5 to 5.1
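
A rough outline of the race with hypothetical tables (the real test case
drives the two connections with debug sync points):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (pid INT, KEY (pid),
    FOREIGN KEY (pid) REFERENCES parent (id) ON UPDATE CASCADE) ENGINE=InnoDB;
  -- connection 1: DML that follows the cascading foreign key
  UPDATE parent SET id = id + 1;
  -- connection 2, concurrently: DDL that rebuilds the foreign key metadata
  ALTER TABLE child ADD FOREIGN KEY (pid) REFERENCES parent (id);
  -- With the fix, the ALTER TABLE fails with an error instead of freeing a
  -- dict_foreign_t object that connection 1 may still be using.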

rb:961
rb:947
2012-03-01 11:05:51 +05:30
Vasil Dimov
a66f29c30c Fix Bug#13639142 64128: INNODB ERROR IN SERVER LOG OF INNODB_BUG34300
Prevent innodb_bug34300 from failing if InnoDB prints:

  120221 11:05:03  InnoDB: ERROR: the age of the last checkpoint is 9439048,
  InnoDB: which exceeds the log group capacity 9433498.

By default the log capacity is 2 log files of 5 MB each.
2012-02-21 17:57:07 +02:00
Georgi Kodinov
637c2d9e4e merge mysql-5.1->mysql-5.1-security 2012-02-17 11:52:41 +02:00
Marko Mäkelä
8b0f2c4d7d Remove a race condition in innodb_bug53756.test.
Before killing the server, tell mysql-test-run that it is to be expected.

Discussed with Bjorn Munch on IM.
2012-02-15 16:28:00 +02:00
Georgi Kodinov
145043fd69 merged mysql-5.1->mysql-5.1-security 2012-02-06 18:24:51 +02:00
Vasil Dimov
17afdb9051 Fix Bug#11754376 45976: INNODB LOST FILES FOR TEMPORARY TABLES ON
GRACEFUL SHUTDOWN

During startup mysql picks up .frm files from the tmpdir directory and
tries to drop those tables in the storage engine.

The problem is that when tmpdir ends in /, ha_innobase::delete_table()
is passed a string like "/var/tmp//#sql123"; it wrongly normalizes it
to "/#sql123" and calls row_drop_table_for_mysql(), which of course fails
to delete the table entry from the InnoDB dictionary cache.
ha_innobase::delete_table() returns an error but nevertheless mysql wipes
away the .frm file and the entry in the InnoDB dictionary cache remains
orphaned with no easy way to remove it.

The "no easy" way to remove it is to create a similar temporary table again,
copy its .frm file to tmpdir under "#sql123.frm" and restart mysqld with
tmpdir=/var/tmp (no trailing slash) - this way mysql will pick the .frm file
after restart and will try to issue drop table for "/var/tmp/#sql123"
(notice do double slash), ha_innobase::delete_table() will normalize it to
"tmp/#sql123" and row_drop_table_for_mysql() will successfully remove the
table entry from the dictionary cache.

The solution is to fix normalize_table_name_low() to normalize things like
"/var/tmp//table" correctly to "tmp/table".

This patch also adds a test function which invokes
normalize_table_name_low() with various inputs to make sure it works
correctly, and an mtr test that calls this test function.

Reviewed by:	Marko (http://bur03.no.oracle.com/rb/r/929/)
2012-02-06 12:44:59 +02:00
Marko Mäkelä
647abc1312 Suppress messages about long semaphore waits in innodb_bug34300.test. 2012-02-02 12:07:06 +02:00
Georgi Kodinov
aa03fc5333 weave merge mysql-5.1->mysql-5.1-security 2012-01-12 16:42:23 +02:00
Yasufumi Kinoshita
40203bd584 Bug#12400341 INNODB CAN LEAVE ORPHAN IBD FILES AROUND
If DB_TOO_MANY_CONCURRENT_TRXS is encountered while executing tab_create_graph from row_create_table_for_mysql(), the .ibd file for the table has already been created but was not deleted as part of the error handling.

rb:875 approved by Jimmy Yang
2012-01-10 14:18:58 +09:00
Vasil Dimov
43ea968d45 Fix Bug#13510739 63775: SERVER CRASH ON HANDLER READ NEXT AFTER DELETE RECORD.
CREATE TABLE bug13510739 (c INTEGER NOT NULL, PRIMARY KEY (c)) ENGINE=INNODB;
INSERT INTO bug13510739 VALUES (1), (2), (3), (4);
DELETE FROM bug13510739 WHERE c=2;
HANDLER bug13510739 OPEN;
HANDLER bug13510739 READ `primary` = (2);
HANDLER bug13510739 READ `primary` NEXT;  <-- crash

The bug is that in this particular test case row_search_for_mysql() picked up
a delete-marked record and quit, leaving the cursor in a non-positioned state,
and on the subsequent 'get next' call the code crashed because of the
non-positioned cursor.

In row0sel.cc (line numbers from mysql-trunk):

4653         if (rec_get_deleted_flag(rec, comp)) {
...
4679                 if (index == clust_index && unique_search) {
4680 
4681                         err = DB_RECORD_NOT_FOUND;
4682                         
4683                         goto normal_return;
4684                 }       

it quit from here, not storing the cursor position.

In contrast, if the record=2 is not found at all (e.g. sleep(1) after DELETE
to let the purge wipe it away completely) then 'get = 2' does find record=3
and quits from here:

4366                 if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
...
4394                         btr_pcur_store_position(pcur, &mtr);
4395 
4396                         err = DB_RECORD_NOT_FOUND;
4397 #if 0
4398                         ut_print_name(stderr, trx, FALSE, index->name);
4399                         fputs(" record not found 3\n", stderr);
4400 #endif
4401 
4402                         goto normal_return;

Another fix could be to extend the condition on line 4366 to hold only if
search_tuple matches rec AND rec is not delete-marked.

Notice that in the above test case if we wait about 1 second somewhere after
DELETE and before 'get = 2', then the testcase does not crash and returns 4
instead. Not sure if this is the correct behavior, but this bugfix removes
the crash and makes the code return what it also returns in the non-crashing
case (if rec=2 is not found during 'get = 2', e.g. we have sleep(1) there).

Approved by:	Marko (http://bur03.no.oracle.com/rb/r/863/)
2011-12-22 12:55:44 +02:00
Annamalai Gurusami
22b3830483 Bug : InnoDB increments handler_read_key when it should not
The counter handler_read_key (SSV::ha_read_key_count) is incremented 
incorrectly.

The mysql server maintains a per thread system_status_var (SSV)
object.  This object contains among other things the counter
SSV::ha_read_key_count. The purpose of this counter is to measure the
number of requests to read a row based on a key (or the number of
index lookups).

This counter was wrongly incremented in
ha_innobase::innobase_get_index(). The fix removes
this increment statement (for both innodb and innodb_plugin).

The various callers of innobase_get_index() were checked to
determine whether any of them must increment this counter (if they first call
innobase_get_index() and then perform an index lookup). It was found
that no caller of innobase_get_index() needs to worry about the
SSV::ha_read_key_count counter.
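
For reference, one way to watch the counter from SQL (hypothetical table;
Handler_read_key is the status name behind SSV::ha_read_key_count):

  CREATE TABLE t (id INT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 10), (2, 20);
  FLUSH STATUS;
  SELECT v FROM t WHERE id = 2;  -- a single index lookup
  SHOW SESSION STATUS LIKE 'Handler_read_key';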
2011-12-13 14:26:12 +05:30
Karen Langford
4de17022c2 Merge from mysql-5.1.60-release 2011-11-17 00:26:16 +01:00
Marko Mäkelä
dcab3c9393 Bug INNODB LOCKING REGRESSION FOR INSERT IGNORE: Add a test case.
The bug was accidentally fixed by fixing
Bug#11759688 52020: InnoDB can still deadlock on just INSERT...ON DUPLICATE KEY
a.k.a. the reintroduction of
Bug#7975 deadlock without any locking, simple select and update
2011-11-10 16:45:47 +02:00
Marko Mäkelä
d7946a908f Bug#11759688 52020: InnoDB can still deadlock on just INSERT...ON DUPLICATE KEY
a.k.a. Bug#7975 deadlock without any locking, simple select and update

Bug#7975 was reintroduced when the storage engine API was made
pluggable in MySQL 5.1. Instead of looking at thd->lex directly, we
rely on handler::extra(). But, we were looking at the wrong extra()
flag, and we were ignoring the TRX_DUP_REPLACE flag in places where we
should obey it.

innodb_replace.test: Add tests for hopefully all affected statement
types, so that the bug should never resurface. This kind of test
should have been added when fixing Bug#7975 in MySQL 5.0.3 in the
first place.
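
Illustrative examples of the affected statement types (hypothetical table;
the real coverage lives in innodb_replace.test):

  CREATE TABLE t (id INT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1);
  REPLACE INTO t VALUES (1, 2);
  INSERT INTO t VALUES (1, 3) ON DUPLICATE KEY UPDATE v = VALUES(v);
  INSERT IGNORE INTO t VALUES (1, 4);
  -- Each of these must make InnoDB honour the duplicate-handling flags
  -- (e.g. TRX_DUP_REPLACE) instead of taking the locks that caused Bug#7975.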

rb:806 approved by Sunny Bains
2011-11-10 12:49:31 +02:00
Marko Mäkelä
825f88634b Fix results after Bug#12661768 fix. 2011-10-26 09:34:32 +03:00
Marko Mäkelä
579234694f Bug#13002783 PARTIALLY UNINITIALIZED CASCADE UPDATE VECTOR
In the ON UPDATE CASCADE clause of FOREIGN KEY constraints, the
calculated update vector was not fully initialized. This bug was
introduced in the InnoDB Plugin when implementing support for
ROW_FORMAT=DYNAMIC.

Additionally, the data type information was not initialized, but
apparently it has never been needed in this case.  Nevertheless, it is
not good programming practice to pass uninitialized values around.

calc_row_difference(): Declare the update field uninitialized in
Valgrind. Copy the data type information as well, except when the
field is SQL NULL. In the built-in InnoDB, initialize
ufield->extern_storage = FALSE (an initialization bug that had gone
unnoticed this far). The InnoDB Plugin and later moved this flag to
dfield_t and have always initialized it properly.

row_ins_cascade_calc_update_vec(): Reduce the scope of some
pointers. Initialize orig_len. (This caused the bug in InnoDB Plugin
and later.)

row_ins_foreign_check_on_constraint(): Simplify a condition. Declare
the update vector uninitialized.

rb:771 approved by Jimmy Yang
2011-10-25 17:33:38 +03:00
Vasil Dimov
7312f83cb9 Fix Bug#12661768 UPDATE IGNORE CRASHES SERVER IF TABLE IS INNODB AND IT IS
PARENT FOR OTHER ONE

Do not try to look up the key_nr'th key in 'table' because there may not be such
a key there. key_nr is the number of the key in the _child_ table, not
in the parent table.

Instead just print the fields of the record that are covered by the first key
defined on the parent table.

This bug gets a better fix in MySQL 5.6, which is too risky for 5.1 and 5.5.
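
A sketch of the crashing statement, with hypothetical tables (illustration
only):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (pid INT, KEY (pid),
    FOREIGN KEY (pid) REFERENCES parent (id)) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1);
  -- Printing the foreign key error for this statement used to look up the
  -- child's key number in the parent table and could crash the server.
  UPDATE IGNORE parent SET id = 2;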

Approved by:	Jon Olav Hauglid (via IM)
2011-10-25 16:46:38 +03:00
Dmitry Lenev
d076be2a32 Fix for bug - "54553: INNODB ASSERTS IN
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".

An attempt to update an InnoDB temporary table under LOCK TABLES
led to an assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.

The assertion failure was caused by a discrepancy between the lock
that was requested on the rows of the temporary table at LOCK TABLES
time and the lock requested by the update operation. Since the
SQL layer requested a read lock at LOCK TABLES time, the InnoDB engine
assumed that upcoming statements executed under LOCK TABLES would
only read the table and therefore should acquire only an S-lock.
An update operation broke this assumption by requesting an X-lock.

Possible approaches to fixing this problem are:

1) Skip locking of temporary tables as locking doesn't make any
   sense for connection-local objects.
2) Prohibit changing of temporary table locked by LOCK TABLES ... 
   READ.

Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of the server.

So this patch takes another approach and changes the code in such a way
that LOCK TABLES for a temporary table will always request a write
lock. In the 5.1 version of this patch the switch from read lock to write
lock is done inside InnoDB's handler methods, as doing it at the
SQL layer causes compatibility troubles with FLUSH TABLES WITH
READ LOCK.
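
A minimal sketch of the scenario (illustration only):

  CREATE TEMPORARY TABLE t (a INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1);
  LOCK TABLES t READ;
  -- Temporary tables remain updatable under LOCK TABLES; before the patch
  -- InnoDB had only taken an S-lock here and asserted on the UPDATE.
  UPDATE t SET a = 2;
  UNLOCK TABLES;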
2011-05-26 17:14:47 +04:00
Marko Mäkelä
1a0dde9206 Bug - 59641: Prepared XA transaction in system after hard crash
causes future shutdown hang

InnoDB would hang on shutdown if any XA transactions exist in the
system in the PREPARED state. This has been masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug  Xa recovery and client disconnection.

[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.

xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.

innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.

trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts to roll back connected transactions that are in the PREPARED
state from another connection, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.

trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.

logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.

trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.

trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.
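
For illustration, a sketch of how a PREPARED transaction ends up in the
system (hypothetical table):

  CREATE TABLE t (a INT) ENGINE=InnoDB;
  XA START 'trx1';
  INSERT INTO t VALUES (1);
  XA END 'trx1';
  XA PREPARE 'trx1';
  -- Kill the server at this point (e.g. --shutdown_server 0 in
  -- mysql-test-run), restart it, then:
  XA RECOVER;
  XA ROLLBACK 'trx1';
  -- Before this fix, shutting down with the recovered PREPARED transaction
  -- still in the system would hang.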
2011-04-07 21:12:54 +03:00
Vasil Dimov
619f684f54 Add the testcase for Bug#59410 to 5.1/builtin
Bug#59410 read uncommitted: unlock row could not find a 3 mode lock
on the record

This bug is present only in 5.6, but I am adding the test case to earlier
versions to ensure it never appears there either.
2011-04-05 11:08:36 +03:00
Bjorn Munch
aa4bfebaee Bug 55442: MYSQLD DEBUG CRASHES WHILE RUNNING MYISAM_CRASH_BEFORE_FLUSH_KEYS.TEST
This will cause affected tests to be skipped if CrashReporter would pop up.
Found 5 tests that needed modification.
2011-03-15 16:06:59 +01:00
Vasil Dimov
f912dcd82e Merge mysql-5.1-innodb -> mysql-5.1 2011-02-18 14:57:11 +02:00
Marko Mäkelä
db55cf8526 Allow 30 seconds for slow shutdown in the Bug test. 2011-02-17 22:25:33 +02:00
Vasil Dimov
0a3e7beb1e Merge mysql-5.1-innodb -> mysql-5.1 2011-02-17 13:56:05 +02:00
Marko Mäkelä
e428f0ed9d Disable the Bug test on embedded, as it requires server restart. 2011-02-17 09:45:07 +02:00
Marko Mäkelä
afda842f02 Make the implicit unpack parameter explicit in the Bug test. 2011-02-16 15:34:16 +02:00
Marko Mäkelä
518a4440ea Add a test for suspected Bug#60049. 2011-02-15 12:12:27 +02:00
Dmitry Lenev
3473329d3b Fix for bug "Failing assertion: primary_key_no == -1 ||
primary_key_no == 0".

An attempt to create an InnoDB table with a non-nullable column of
geometry type, having a unique key with length 12 on it and
some other candidate key, led to a server crash due to an
assertion failure in both non-debug and debug builds.

The problem was that such a non-candidate key could have
been sorted as the first key in the table/.FRM, before any legit
candidate keys. This resulted in an assertion failure in the InnoDB
engine, which assumes that the primary key should either be the
first key in the table/.FRM or should not exist at all.

The reason behind this incorrect sorting was a wrong
value of the Create_field::key_length member for the geometry field
(which was set to its pack_length == 12); this confused the code
in mysql_prepare_create_table(), so it skipped marking
such a key as a key with partial segments.

This patch fixes the problem by ensuring that this member
gets the same value of Create_field::key_length as
for other blob fields (from which the geometry field class is
inherited), and as a result unique keys on geometry fields
are correctly marked as having partial segments.
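
A hypothetical statement of the shape that triggered the assertion
(illustration only):

  CREATE TABLE t (
    id INT NOT NULL,
    g GEOMETRY NOT NULL,
    UNIQUE KEY (g(12)),
    UNIQUE KEY (id)
  ) ENGINE=InnoDB;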
2011-02-02 16:17:48 +03:00
Jimmy Yang
669ce69483 Fix Bug#30423 "InnoDB's treatment of NULL in index stats causes bad
"rows examined" estimates". This change implements "innodb_stats_method"
with options of "nulls_equal", "nulls_unequal" and "nulls_ignored".
      
rb://553 approved by Marko
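
Usage sketch (hypothetical table):

  CREATE TABLE t (a INT, KEY (a)) ENGINE=InnoDB;
  SET GLOBAL innodb_stats_method = 'nulls_unequal';
  -- Recompute the index statistics under the chosen NULL-handling method.
  ANALYZE TABLE t;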
2011-01-14 09:02:28 -08:00
Vasil Dimov
f6acea697e Suppress InnoDB warning about long semaphore wait if running under Valgrind
Sometimes Valgrind can be extremely slow and can trigger the InnoDB
diagnostic message, making the test fail.
2011-01-12 17:53:05 +02:00
Vasil Dimov
90e38f3636 Merge mysql-5.1-bugteam -> mysql-5.1-innodb 2010-12-27 19:21:21 +02:00
Vasil Dimov
66c518dcef Speed up innodb_bug57255.test
Submitted by:	Stewart Smith (via internals@lists.mysql.com)
2010-12-14 11:38:19 +02:00
Sergey Glukhov
cd36a6a5d5 Fixed the following problems:
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--A recently discovered problem where a nested materialized derived table is used
  before being populated, leading to an incorrect result

We have several modes in which subquery evaluation should be disabled,
for different reasons. The evaluation may be useless, as in the case of
'CREATE VIEW' or 'PREPARE stmt'; it must be disabled when tables are
not locked yet, as happens in Bug#54475; or too-early evaluation of
subqueries can lead to a wrong result, as happened in Bug#19077.
The main problem is that if subquery items are treated as const
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parental items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:

1. PREPARE stmt;
We use the UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set the
subquery becomes non-const and evaluation does not happen.

2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
   process FRM files
We use the LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields() and ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that there are a lot of ::fix_fields() and ::fix_length_and_dec()
implementations that use Item::val_...() calls for const items.

3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too-early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
Checking this field in the appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 covers only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.

Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;

4. Derived tables with subquery where subquery
   is evaluated before table locking (Bug#54475, Bug#52157)

The suggested solution is the following:

-Introduce a new field, LEX::context_analysis_only, with the following
 possible flags:
 #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
 #define CONTEXT_ANALYSIS_ONLY_VIEW    2
 #define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
 context analysis operation
-Item_subselect::const_item() returns a
 result depending on LEX::context_analysis_only.
 If context_analysis_only is set then we return
 FALSE, which means that the subquery is non-const.
 As all subquery types are wrapped by Item_subselect,
 this allows us to make a subquery non-const when
 it's necessary.
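
For illustration, statements that must only be context-analysed without
evaluating their subqueries (hypothetical objects):

  CREATE TABLE t1 (i INT);
  -- CONTEXT_ANALYSIS_ONLY_VIEW: the subquery must not be evaluated here.
  CREATE VIEW v1 AS SELECT (SELECT MAX(i) FROM t1) AS m;
  -- CONTEXT_ANALYSIS_ONLY_PREPARE: likewise during PREPARE.
  PREPARE stmt FROM 'SELECT (SELECT MIN(i) FROM t1) AS m';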
2010-12-14 12:33:03 +03:00
Sergey Glukhov
0e77c3295a Bug#39828 : Autoinc wraps around when offset and increment > 1
Auto increment value wraps when performing a bulk insert with
auto_increment_increment and auto_increment_offset greater than
one.
The fix:
If an overflow happened then return the MAX_ULONGLONG value as an
indication of overflow, and check for this before storing the
value into the field in update_auto_increment().
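
A rough illustration of the overflow situation (hypothetical values):

  CREATE TABLE t (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, v INT)
    ENGINE=InnoDB;
  SET SESSION auto_increment_increment = 100, auto_increment_offset = 10;
  INSERT INTO t (id, v) VALUES (18446744073709551610, 0);
  -- The next generated value exceeds the maximum; it used to wrap around,
  -- and with the fix the overflow is detected before storing the value.
  INSERT INTO t (v) VALUES (1), (2), (3);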
2010-12-13 14:48:12 +03:00
Sergey Glukhov
7704e3c2c2 Bug#56862 Execution of a query that uses index merge returns a wrong result
When the sort buffer memory is low, QUICK_INDEX_MERGE_SELECT creates a
temporary file where it stores the row ids which meet the QUICK_SELECT ranges,
except for the clustered pk range; the clustered range is processed separately.
In init_read_record we check whether the temporary file is used and choose
the appropriate record access method. This does not take into account that
the temporary file contains only a partial result in the case of QUICK_INDEX_MERGE_SELECT
with a clustered pk range.
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT
with clustered pk range is used.
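
A sketch of the query shape involved (hypothetical table; a real repro needs
enough rows to spill row ids into the temporary file):

  CREATE TABLE t (pk INT PRIMARY KEY, a INT, KEY (a)) ENGINE=InnoDB;
  SET SESSION sort_buffer_size = 32768;  -- small buffer, low-memory path
  -- An index_merge plan that combines the range on KEY (a) with the
  -- clustered pk range; the temporary file then holds only a partial result.
  SELECT * FROM t WHERE a = 5 OR pk BETWEEN 1 AND 100;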
2010-11-23 13:18:47 +03:00
Dmitry Shulga
ce3a7f4b01 Fixed bug#56619 - Assertion failed during
ALTER TABLE RENAME, DISABLE KEYS.

The code of ALTER TABLE RENAME, DISABLE KEYS could
issue a commit while holding the LOCK_open mutex.
This is a regression introduced by the fix for
Bug 54453.
This failed an assert guarding us against a potential
deadlock with connections trying to execute
FLUSH TABLES WITH READ LOCK.

The fix is to move acquisition of LOCK_open outside
the section that issues ha_autocommit_or_rollback().
LOCK_open is taken to protect against concurrent
operations with .frms and the table definition
cache, and doesn't need to cover the call to commit.

A test case was added to innodb_mysql.test.

The patch is to be null-merged to 5.5, which
already has 54453 null-merged to it.
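
The statement shape in question (hypothetical table):

  CREATE TABLE t1 (a INT, KEY (a)) ENGINE=InnoDB;
  -- Before the fix this could issue a commit while LOCK_open was held.
  ALTER TABLE t1 RENAME TO t2, DISABLE KEYS;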
2010-11-10 14:32:42 +06:00
Georgi Kodinov
7e2fa49edf merge 2010-11-03 16:09:17 +02:00
Marko Mäkelä
f2d39c9eaf Bug wrong InnoDB results from a case-insensitive covering index
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.

REC_INFO_DELETED_FLAG: Move the definition from rem0rec.ic to rem0rec.h.

ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().

ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.

mysql_row_templ_t: Add clust_rec_field_no.

row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug )

buf_LRU_free_block(): Refactored from
buf_LRU_search_and_free_block(). This is needed for the
innodb_change_buffering_debug diagnostics.

[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
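
A rough sketch of the kind of case-insensitive covering read that was
affected (hypothetical table and data):

  CREATE TABLE t (id INT PRIMARY KEY,
                  c VARCHAR(10) COLLATE latin1_swedish_ci,
                  KEY (c)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 'abc');
  UPDATE t SET c = 'ABC' WHERE id = 1;
  -- A covering read of the secondary index; the value returned must come
  -- from the clustered index record ('ABC'), not a stale secondary entry.
  SELECT c FROM t WHERE c = 'abc';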
2010-10-19 08:58:53 +03:00
Vasil Dimov
33496519e1 Fix Bug#57252 disabling innobase_stats_on_metadata disables ANALYZE
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
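
A usage sketch (hypothetical table):

  CREATE TABLE t (a INT, KEY (a)) ENGINE=InnoDB;
  SET GLOBAL innodb_stats_on_metadata = OFF;
  -- ANALYZE must still refresh the index statistics with the setting off.
  ANALYZE TABLE t;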

Approved by:	Jimmy (rb://487)
2010-10-18 13:48:11 +03:00
Vasil Dimov
08daccd469 Merge mysql-5.1-bugteam -> mysql-5.1-innodb 2010-10-15 17:38:39 +03:00
Vasil Dimov
71550f7d35 Tune the test for Bug#56143 too many foreign keys causes output of show create table to become invalid
Use a CREATE statement with all the FKs instead of ALTERing the table many
times because it is faster (11 seconds vs 3 seconds).
2010-10-14 12:33:56 +03:00
Vasil Dimov
f19fa5277a Fix Bug#56143 too many foreign keys causes output of show create table to become invalid
Just remove the check whether the file is "too big".
A similar code exists in ha_innobase::update_table_comment() but that
method does not seem to be used.
2010-10-13 20:18:59 +03:00
Jimmy Yang
34c61d0448 Fix Bug Cascade Delete results in "Got error -1 from storage engine".
rb://477 approved by Marko
2010-10-06 03:41:26 -07:00
Georgi Kodinov
539291cde9 merged mysql-5.1 into mysql-5.1-bugteam 2010-10-05 11:11:56 +03:00