non-determinism related to reading the table's autoinc value for the first
time. This change has also reduced the size of dict_table_t by sizeof(ibool)
bytes because we no longer need the dict_table_t::autoinc_inited field.
This also fixes Bug#39830 Table autoinc value not updated on first insert.
rb://16
the table's global autoinc counter value. This is part of simplifying the
AUTOINC sub-system. We extract the type info from MySQL data structures at
runtime.
This fixes Bug#37788 InnoDB Plugin: AUTO_INCREMENT wrong for compressed tables
------------------------------------------------------------------------
r2702 | sunny | 2008-09-30 11:41:56 +0300 (Tue, 30 Sep 2008) | 13 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
branches/5.1: Since handler::get_auto_increment() doesn't allow us
to return the cause of failure, we have to inform MySQL of the cause
of the autoinc failure using the sql_print_warning() function.
Previously we simply printed the error code; this patch prints the
text string representing the following two error codes:
DB_LOCK_WAIT_TIMEOUT
DB_DEADLOCK.
Bug#35498 Cannot get table test/table1 auto-inccounter value in ::info
Approved by Marko.
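As a rough illustration of the reporting pattern (not the actual
ha_innodb.cc code; the helper name and message text are made up here):

static void
report_autoinc_failure(ulint error) /* in: DB_LOCK_WAIT_TIMEOUT or DB_DEADLOCK */
{
        const char*     msg;

        switch (error) {
        case DB_LOCK_WAIT_TIMEOUT:
                msg = "DB_LOCK_WAIT_TIMEOUT";
                break;
        case DB_DEADLOCK:
                msg = "DB_DEADLOCK";
                break;
        default:
                msg = "unexpected error";
                break;
        }

        sql_print_warning(
                "InnoDB: Cannot read the AUTOINC counter value: %s", msg);
}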
------------------------------------------------------------------------
r2709 | vasil | 2008-10-01 10:13:13 +0300 (Wed, 01 Oct 2008) | 10 lines
Changed paths:
M /branches/5.1/include/lock0lock.h
M /branches/5.1/lock/lock0lock.c
A /branches/5.1/mysql-test/innodb_bug38231.result
A /branches/5.1/mysql-test/innodb_bug38231.test
M /branches/5.1/row/row0mysql.c
branches/5.1:
Fix Bug#38231 Innodb crash in lock_reset_all_on_table() on TRUNCATE + LOCK / UNLOCK
In TRUNCATE TABLE and discard tablespace: do not remove table-level S
and X locks and do not assert on such locks not being wait locks.
Leave such locks alone.
Approved by: Heikki (rb://14)
------------------------------------------------------------------------
r2710 | vasil | 2008-10-01 14:13:58 +0300 (Wed, 01 Oct 2008) | 6 lines
Changed paths:
M /branches/5.1/include/sync0sync.ic
branches/5.1:
Silence a compilation warning in UNIV_DEBUG.
Approved by: Marko (via IM)
------------------------------------------------------------------------
r2719 | vasil | 2008-10-03 18:17:28 +0300 (Fri, 03 Oct 2008) | 49 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
A /branches/5.1/mysql-test/innodb_bug39438-master.opt
A /branches/5.1/mysql-test/innodb_bug39438.result
A /branches/5.1/mysql-test/innodb_bug39438.test
branches/5.1:
Fix Bug#39438 Testcase for Bug#39436 crashes on 5.1 in fil_space_get_latch
In ha_innobase::info(): do not try to get the free space for a tablespace
which has been discarded with ALTER TABLE ... DISCARD TABLESPACE or whose
.ibd file is missing for some other reason.
ibd_file_missing and tablespace_discarded are manipulated only in
row_discard_tablespace_for_mysql() and row_import_tablespace_for_mysql(),
and that manipulation is protected by
row_mysql_lock_data_dictionary()/row_mysql_unlock_data_dictionary(); thus we
do the same in ha_innobase::info() when checking the values of those members,
to avoid race conditions. I have tested the code path with UNIV_DEBUG and
UNIV_SYNC_DEBUG.
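A minimal sketch of the resulting check, assuming
fsp_get_available_space_in_free_extents() returns the free space in
kilobytes; the field and function names follow the surrounding 5.1 code,
but the exact shape of the real change differs:

row_mysql_lock_data_dictionary(prebuilt->trx);

if (ib_table->ibd_file_missing || ib_table->tablespace_discarded) {
        /* the .ibd file is gone or the tablespace was discarded:
        do not try to read the free space */
        stats.delete_length = 0;
} else {
        stats.delete_length = 1024
                * fsp_get_available_space_in_free_extents(ib_table->space);
}

row_mysql_unlock_data_dictionary(prebuilt->trx);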
It does not seem possible to avoid mysqld printing warnings in the
mysql-test case, and thus the test innodb_bug39438 must be added to the
list of exceptional test cases that are allowed to print warnings. For this,
the following patch must be applied to the MySQL source tree:
--- cut ---
=== modified file 'mysql-test/lib/mtr_report.pl'
--- mysql-test/lib/mtr_report.pl 2008-08-12 10:26:23 +0000
+++ mysql-test/lib/mtr_report.pl 2008-10-01 11:57:41 +0000
@@ -412,7 +412,10 @@
# When trying to set lower_case_table_names = 2
# on a case sensitive file system. Bug#37402.
- /lower_case_table_names was set to 2, even though your the file system '.*' is case sensitive. Now setting lower_case_table_names to 0 to avoid future problems./
+ /lower_case_table_names was set to 2, even though your the file system '.*' is case sensitive. Now setting lower_case_table_names to 0 to avoid future problems./ or
+
+ # this test is expected to print warnings
+ ($testname eq 'main.innodb_bug39438')
)
{
next; # Skip these lines
--- cut ---
The mysql-test is currently somewhat disabled (see inside
innodb_bug39438.test); after the above patch has been applied to the MySQL
source tree, the test can be enabled.
rb://20
Reviewed by: Inaam, Calvin
Approved by: Heikki
------------------------------------------------------------------------
r2720 | vasil | 2008-10-03 19:52:39 +0300 (Fri, 03 Oct 2008) | 8 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
branches/5.1:
Print a warning if an attempt is made to get the free space for a table
whose .ibd file is missing or the tablespace has been discarded. This is a
followup to r2719.
Suggested by: Inaam
------------------------------------------------------------------------
r2721 | sunny | 2008-10-04 02:08:23 +0300 (Sat, 04 Oct 2008) | 6 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
branches/5.1: We need to send the messages to the client because
handler::get_auto_increment() doesn't provide a way to return the
specific error explaining why it failed.
rb://18
------------------------------------------------------------------------
r2722 | sunny | 2008-10-04 02:48:04 +0300 (Sat, 04 Oct 2008) | 18 lines
Changed paths:
M /branches/5.1/dict/dict0mem.c
M /branches/5.1/handler/ha_innodb.cc
M /branches/5.1/include/dict0mem.h
M /branches/5.1/include/row0mysql.h
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
M /branches/5.1/row/row0mysql.c
branches/5.1: This bug has always existed but was masked by other errors;
the fix for Bug#38839 triggered it. When the offset and increment are > 1,
we need to calculate the next value taking both variables into
consideration. Previously we simply assumed they were 1; in particular, the
offset was never used. MySQL does its own calculation, and that is probably
why it seemed to work in the past: we would return what we thought was the
correct next value, and then MySQL would recalculate the actual value from
that and return it to the caller (e.g., handler::write_row()). Several new
tests have been added that try to catch some edge cases. The tests exposed a
wrap-around error in MySQL's next-value calculation, which was filed as
Bug#39828. The tests will need to be updated once MySQL fixes that bug.
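The series calculation itself can be sketched as follows (an illustrative
standalone function, not the actual InnoDB code; it ignores the wrap-around
case tracked as Bug#39828):

#include <stdint.h>

/* Compute the next value in the series offset, offset + increment,
offset + 2 * increment, ... that is strictly greater than "current". */
static uint64_t
next_autoinc_sketch(uint64_t current, uint64_t increment, uint64_t offset)
{
        if (current < offset) {
                /* the first value of the series */
                return(offset);
        }

        return(offset + ((current - offset) / increment + 1) * increment);
}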
One good side effect of this fix is that dict_table_t size has been
reduced by 8 bytes because we have moved the autoinc_increment field to
the row_prebuilt_t structure. See review-board for a detailed discussion.
rb://3
------------------------------------------------------------------------
more generic. The previous pattern could fail if other test cases were run
before this one. Since r2716, the MySQL server is not restarted for this test.
(Bug #36285, rb://9).
innodb-index.test, innodb-index.result: Set innodb_lock_wait_timeout as
a session variable instead of relying on the global value.
innodb-index-master.opt: Remove.
innodb-timeout.test: Test that setting the innodb_lock_wait_timeout
works as advertised.
thd_lock_wait_timeout(): New function, to retrieve the lock wait timeout
for a given MySQL client connection (thd), or the global value (thd==NULL).
srv_lock_wait_timeout, innobase_lock_wait_timeout: Remove.
Replace MYSQL_SYSVAR_LONG(lock_wait_timeout)
with MYSQL_THDVAR_ULONG(lock_wait_timeout).
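The accessor can be sketched as follows, assuming the
MYSQL_THDVAR_ULONG(lock_wait_timeout, ...) definition above:

/* Returns the lock wait timeout for the current connection, or the
global default when thd is NULL. */
static
ulong
thd_lock_wait_timeout(
        void*   thd)    /* in: MySQL connection (THD*), or NULL */
{
        /* According to <mysql/plugin.h>, passing thd == NULL returns
        the global value of the session variable. */
        return(THDVAR((THD*) thd, lock_wait_timeout));
}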
------------------------------------------------------------------------
rb://18
Change the patch to fix the failing mysql-test index_merge_innodb.
The previous variant is inappropriate because myisam results are different
(2 instead of 4) and then the index_merge_myisam test fails.
page_zip_hexdump_func(): New function, to dump a block of data.
ut_print_buf() would dump everything on a single line, which is hard
to read.
page_zip_hexdump(): Wrapper macro for page_zip_hexdump_func().
page_zip_validate(): dump page_zip, page_zip->data, page, temp_page if !valid.
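In the same spirit, a generic line-wrapping hex dump helper might look like
this (a sketch only; the real page_zip_hexdump_func() in page0zip.c differs):

#include <stdio.h>
#include <stddef.h>

/* Dump "size" bytes of "buf" to "f", 16 bytes per line. */
static void
hexdump(FILE* f, const char* name, const void* buf, size_t size)
{
        const unsigned char*    b = (const unsigned char*) buf;
        size_t                  i;

        fprintf(f, "%s (%lu bytes):\n", name, (unsigned long) size);

        for (i = 0; i < size; i++) {
                fprintf(f, "%02x%c", b[i],
                        ((i + 1) % 16 == 0 || i + 1 == size) ? '\n' : ' ');
        }
}

/* wrapper in the spirit of page_zip_hexdump(): prints the variable
name along with its contents */
#define HEXDUMP(buf, size) hexdump(stderr, #buf, (buf), (size))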
in r2631. Include the node pointer field in the size calculation.
rec_get_converted_size_comp_prefix(): New function, to compute the storage
size of the prefix of an ordinary record in COMPACT format.
rec_get_converted_size_comp(): Use rec_get_converted_size_comp_prefix().
Add a patch to fix the failing mysql-test index_merge_innodb. The test
started failing after an optimization, made in r2625, which results in
a different number of rows being returned by EXPLAIN.
The following files are from MySQL source tree without any changes.
They will be changed for building Windows plugin. The original files
will be used as the base for diff purpose.
* CMakeLists.txt
* sql/CMakeLists.txt
* win/configure.js
fields that are related to the records stored in the page.
page_zip_copy() is a fall-back method in certain B-tree operations
(tree compression, splitting or merging nodes). The contents of a
page may fit in the compressed page frame when it has been modified in
a certain sequence, but not when the page is recompressed. Sometimes,
copying all or part of the records to an empty page could fail because
of compression overflow. In such cases, we copy the compressed and
uncompressed pages bit for bit and delete any unwanted records from
the copy. (Deletion is guaranteed to succeed.) The method
page_zip_copy() is invoked very rarely.
In one case, page_zip_copy() was called in btr_lift_page_up() to move
the records to the root page of the B-tree. Because page_zip_copy()
copied all B-tree page header fields, it overwrote the file segment
header fields PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP. This is the
probable cause of the corruption that was reported as Mantis issue #63
and others.
Return earlier when this function is called on an index that
is being created. Luckily, mtr_start() does not allocate any
resources. Thus, there was no memory leak.
buf_block_dbg_add_level(block, level): Define as an empty macro when
UNIV_SYNC_DEBUG is not defined. Remove #ifdef UNIV_SYNC_DEBUG around
all invocations.
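Sketched, assuming the debug variant registers the block latch with
sync_thread_add_level():

#ifdef UNIV_SYNC_DEBUG
# define buf_block_dbg_add_level(block, level)          \
        sync_thread_add_level(&(block)->lock, level)
#else /* UNIV_SYNC_DEBUG */
# define buf_block_dbg_add_level(block, level) /* nothing */
#endif /* UNIV_SYNC_DEBUG */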
The memory leak was due to wrong parameters being passed into the
VirtualFree() call, so the call failed with Windows error 87. MEM_DECOMMIT
cannot be used together with MEM_RELEASE, and when the free type is
MEM_RELEASE the size parameter must be 0. The function then frees the entire
region that was reserved in the initial allocation call to VirtualAlloc().
This issue was introduced by r984.
Approved by: Heikki (on IM)
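In other words (a sketch, not the actual os0proc.c change):

#include <windows.h>
#include <stdio.h>

/* Release a region obtained from VirtualAlloc().  MEM_RELEASE must not
be combined with MEM_DECOMMIT, and the size argument must be 0;
otherwise VirtualFree() fails with Windows error 87. */
static void
release_region(void* ptr)
{
        if (!VirtualFree(ptr, 0, MEM_RELEASE)) {
                fprintf(stderr, "VirtualFree failed: error %lu\n",
                        (unsigned long) GetLastError());
        }
}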
ha_thd() whenever possible.
EQ_CURRENT_THD(thd): New predicate, for use in assertions.
innobase_drop_database(): Tolerate current_thd == NULL, so that the
Windows plugin will work. In the Windows plugin, it will be
impossible to skip foreign key checks in this function. However,
DROP DATABASE will drop each table (that MySQL knows about) individually
before calling this function. Thus, the foreign key checks can be disabled
also in the Windows plugin, unless some .frm files are missing.
the maximum record size will never exceed the B-tree page size limit.
For uncompressed tables, there should always be enough space for two
records in an empty B-tree page. For compressed tables, there should
be enough space for storing two node pointer records or one data
record in an empty page in uncompressed format.
dict_build_table_def_step(): Remove the inaccurate check for table row
size.
dict_index_too_big_for_tree(): New function: check if the index
records would be too big for a B-tree page.
dict_index_add_to_cache(): Add the parameter "strict". Invoke
dict_index_too_big_for_tree() if it is set.
trx_is_strict(), thd_is_strict(): New functions, for determining if
innodb_strict_mode is enabled for the current transaction.
dict_create_index_step(): Pass the new parameter strict of
dict_index_add_to_cache() as trx_is_strict(trx). All other callers
pass it as FALSE.
innodb.test: Enable innodb_strict_mode before attempting to create a
table with a too big record size.
innodb-zip.test: Remove the test of inserting random data. Add tests
for checking that the maximum record lengths are enforced at table
creation time.
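A self-contained sketch of the strict-mode decision described above; the
types, error codes and the predicate are stand-ins for the real
dict0dict.c/db0err.h definitions:

typedef int ibool;
enum { DB_SUCCESS = 0, DB_TOO_BIG_RECORD = 1 };    /* stand-in values */

/* stand-in for dict_index_too_big_for_tree(table, index) */
static ibool
index_too_big_for_tree(unsigned rec_max_size, unsigned page_limit)
{
        return(rec_max_size > page_limit);
}

/* stand-in for the relevant part of dict_index_add_to_cache() */
static int
add_index_to_cache(unsigned rec_max_size, unsigned page_limit, ibool strict)
{
        if (strict && index_too_big_for_tree(rec_max_size, page_limit)) {
                /* strict == trx_is_strict(trx) at the call site in
                dict_create_index_step(): with innodb_strict_mode the
                index is rejected at creation time instead of failing
                on a later oversized insert */
                return(DB_TOO_BIG_RECORD);
        }

        /* ... add the index to the dictionary cache ... */
        return(DB_SUCCESS);
}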
bug#39483 InnoDB hang on adaptive hash because of out of order ::open()
call by MySQL
Forward port of r2629
Under some conditions MySQL calls ::open() with search_latch held, leading
to a deadlock as we try to acquire dict_sys->mutex inside ::open(),
breaking the latching order. The fix is to release search_latch.
Reviewed by: Heikki
Add the parameter struct charset_info_st* cs, so that the call
thd_charset(current_thd) can be avoided. The macro current_thd has no
defined value in the Windows plugin.
ha_innodb.cc: Declare strict_mode as PLUGIN_VAR_OPCMDARG, because we
do want to be able to disable innodb_strict_mode. This is a non-functional
change, because PLUGIN_VAR_NOCMDARG seems to accept an argument as well.
innodb-zip.test: Do not store innodb_strict_mode. It is a session variable.
Add a test case for innodb_strict_mode=off.
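The declaration described above looks roughly like this (the help text is
illustrative; strict_mode is assumed to be a THDVAR boolean, as the
surrounding entries indicate):

static MYSQL_THDVAR_BOOL(strict_mode, PLUGIN_VAR_OPCMDARG,
        "Use strict mode when evaluating create options.",
        NULL, NULL, FALSE);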
there will always be enough space for two node pointer records in an
empty B-tree page. This was reported as Mantis issue #73.
page_zip_rec_needs_ext(): Add the parameter n_fields, for accurate
estimation of the compressed size of the data dictionary information.
Given that this function is only invoked for records on leaf pages,
require that there be enough space for one record in the compressed
page. We check elsewhere that there will be enough room for two node
pointer records on higher-level pages.
btr_cur_optimistic_insert(): Ensure that there will be enough room for
two node pointer records on an empty non-leaf page. The rule for
leaf-page records will be enforced by the callers of
page_zip_rec_needs_ext().
btr_cur_pessimistic_insert(): Remove the insufficient check that the
leaf page record should be compressible by itself. Instead, now we
require that two node pointer records fit on a non-leaf page, and one
record will fit in uncompressed form on the leaf page.
page_zip_write_header(), page_zip_write_rec(): Re-enable the debug
assertions that were violated by the insufficient check in
btr_cur_pessimistic_insert().
innodb_bug36172.test: Use a larger compressed page size.
btr_search_drop_page_hash_index(): Add const qualifiers to the local
variables page, rec, and index, to ensure that they are not modified
by this function.
page_get_infimum_offset(), page_get_supremum_offset(): New functions.
page_get_infimum_rec(), page_get_supremum_rec(): Replaced by
const-preserving macros that invoke the accessor functions.
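The const-preserving pattern can be sketched like this (the real definitions
live in page0page.h; page_t is the usual InnoDB byte typedef):

/* accessor: returns the byte offset of the infimum record within the page */
ulint
page_get_infimum_offset(const page_t* page);

/* macro wrapper: pure pointer arithmetic on "page", so a const page
yields a const record pointer and a non-const page a writable one */
#define page_get_infimum_rec(page) \
        ((page) + page_get_infimum_offset(page))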
help in tracking down issue #63 (memory corruption). UNIV_BTR_DEBUG
is currently enabled in univ.i.
btr_root_fseg_validate(): New function, for validating a file segment
header on a B-tree root page.
btr_root_block_get(), btr_free_but_not_root(),
btr_root_raise_and_insert(), btr_discard_only_page_on_level():
Check PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP on the root page with
btr_root_fseg_validate().
btr_root_raise_and_insert(): Move the assertion
dict_index_get_page(index) == page_get_page_no(root)
inside UNIV_BTR_DEBUG. It was previously enabled by UNIV_DEBUG.
btr_free_root(): Check PAGE_BTR_SEG_TOP on the root page with
btr_root_fseg_validate().
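A sketch of such a validation, using the file segment header offsets from
fsp0fsp.h (the real btr_root_fseg_validate() in btr0btr.c may differ in
detail):

static ibool
btr_root_fseg_validate_sketch(
        const fseg_header_t*    seg_header,     /* in: segment header on
                                                the root page */
        ulint                   space)          /* in: tablespace id */
{
        ulint   offset = mach_read_from_2(seg_header + FSEG_HDR_OFFSET);

        /* the header must refer to this tablespace and point to a sane
        offset within a page */
        ut_a(mach_read_from_4(seg_header + FSEG_HDR_SPACE) == space);
        ut_a(offset >= FIL_PAGE_DATA);
        ut_a(offset <= UNIV_PAGE_SIZE - FIL_PAGE_DATA_END);

        return(TRUE);
}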
Add a test case to check that mysqld does not crash when running ANALYZE TABLE
with different values for innodb_stats_sample_pages.
Suggested by: Marko
Approved by: Marko
Limit the number of pages that are sampled so that it is never greater
than the total number of pages in the index.
The parameter that specifies the number of pages to test is global for
all tables. By limiting it this way we allow the user to set it "high"
to suit "large" tables and to avoid unnecessary work for "small" tables
(e.g. doing 100 dives in a table that has 5 pages, obviously testing
some pages more than once).
Suggested by: Ken
Approved by: Marko
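The clamp itself is simple; as a sketch (total_index_pages is a hypothetical
name for the value obtained from the index statistics):

/* never do more index dives than there are pages in the index */
ulint   n_sample_pages = srv_stats_sample_pages;

if (n_sample_pages > total_index_pages) {
        n_sample_pages = total_index_pages;
}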