------------------------------------------------------------------------
r2702 | sunny | 2008-09-30 11:41:56 +0300 (Tue, 30 Sep 2008) | 13 lines
branches/5.1: Since handler::get_auto_increment() doesn't allow us
to return the cause of failure, we have to report the cause of an
autoinc failure to MySQL using the sql_print_warning() function.
Previously we simply printed the numeric error code; this patch prints
the text string representing the following two error codes:
DB_LOCK_WAIT_TIMEOUT
DB_DEADLOCK.
Bug#35498 Cannot get table test/table1 auto-inccounter value in ::info
Approved by Marko.
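Illustratively, the warning could be produced like this (the helper name
and the message text are assumptions, not the exact patch; only
sql_print_warning(), DB_LOCK_WAIT_TIMEOUT and DB_DEADLOCK are real):

	/* Hypothetical helper: map the InnoDB error code to text. */
	static const char*
	autoinc_err_to_str(ulint err)
	{
		switch (err) {
		case DB_LOCK_WAIT_TIMEOUT:
			return("lock wait timeout");
		case DB_DEADLOCK:
			return("deadlock");
		default:
			return("unknown error");
		}
	}

	/* ...in the failure path of handler::get_auto_increment(): */
	sql_print_warning("InnoDB: Cannot get autoinc value for table %s: %s",
			  table_name, autoinc_err_to_str(err));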
------------------------------------------------------------------------
rb://18
Change the patch to fix the failing mysql-test index_merge_innodb.
The previous variant was unsuitable because the MyISAM results differ
(2 instead of 4), which made the index_merge_myisam test fail.
page_zip_hexdump_func(): New function, to dump a block of data.
ut_print_buf() would dump everything on a single line, which is hard
to read.
page_zip_hexdump(): Wrapper macro for page_zip_hexdump_func().
page_zip_validate(): dump page_zip, page_zip->data, page, temp_page if !valid.
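A sketch of the dump routine (close in shape to, but not necessarily
identical with, the committed code):

	static void
	page_zip_hexdump_func(
		const char*	name,	/* in: name of the data structure */
		const void*	buf,	/* in: data */
		ulint		size)	/* in: length of the data, in bytes */
	{
		const byte*	s	= buf;
		ulint		addr;
		const ulint	width	= 32;	/* bytes printed per line */

		fprintf(stderr, "%s:\n", name);

		for (addr = 0; addr < size; addr += width) {
			ulint	i = ut_min(width, size - addr);

			fprintf(stderr, "%04lx ", (ulong) addr);

			while (i--) {
				fprintf(stderr, "%02x", *s++);
			}

			putc('\n', stderr);
		}
	}

	#define page_zip_hexdump(buf, size) \
		page_zip_hexdump_func(#buf, buf, size)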
in r2631. Include the node pointer field in the size calculation.
rec_get_converted_size_comp_prefix(): New function, to compute the storage
size of the prefix of an ordinary record in COMPACT format.
rec_get_converted_size_comp(): Use rec_get_converted_size_comp_prefix().
Add a patch to fix the failing mysql-test index_merge_innodb. The test
started failing after an optimization made in r2625, which results in
a different number of rows being returned by EXPLAIN.
The following files are from MySQL source tree without any changes.
They will be changed for building the Windows plugin. The original files
will be used as the base for diff purpose.
* CMakeLists.txt
* sql/CMakeLists.txt
* win/configure.js
fields that are related to the records stored in the page.
page_zip_copy() is a fall-back method in certain B-tree operations
(tree compression, splitting or merging nodes). The contents of a
page may fit in the compressed page frame when it has been modified in
a certain sequence, but not when the page is recompressed. Sometimes,
copying all or part of the records to an empty page could fail because
of compression overflow. In such cases, we copy the compressed and
uncompressed pages bit for bit and delete any unwanted records from
the copy. (Deletion is guaranteed to succeed.) The method
page_zip_copy() is invoked very rarely.
In one case, page_zip_copy() was called in btr_lift_page_up() to move
the records to the root page of the B-tree. Because page_zip_copy()
copied all B-tree page header fields, it overwrote the file segment
header fields PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP. This is the
probable cause of the corruption that was reported as Mantis issue #63
and others.
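One way to express the fix, sketched here (the page_zip_copy() call is
abbreviated, and restoring the fields on the compressed page frame is
omitted): save the root page's file segment headers before the copy and
put them back afterwards.

	byte	seg_leaf[FSEG_HEADER_SIZE];	/* PAGE_BTR_SEG_LEAF */
	byte	seg_top[FSEG_HEADER_SIZE];	/* PAGE_BTR_SEG_TOP */

	memcpy(seg_leaf, root + PAGE_HEADER + PAGE_BTR_SEG_LEAF,
	       FSEG_HEADER_SIZE);
	memcpy(seg_top, root + PAGE_HEADER + PAGE_BTR_SEG_TOP,
	       FSEG_HEADER_SIZE);

	page_zip_copy(/* ... copy the child page onto the root ... */);

	memcpy(root + PAGE_HEADER + PAGE_BTR_SEG_LEAF, seg_leaf,
	       FSEG_HEADER_SIZE);
	memcpy(root + PAGE_HEADER + PAGE_BTR_SEG_TOP, seg_top,
	       FSEG_HEADER_SIZE);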
Return earlier when this function is called on an index that
is being created. Luckily, mtr_start() does not allocate any
resources. Thus, there was no memory leak.
buf_block_dbg_add_level(block, level): Define as an empty macro when
UNIV_SYNC_DEBUG is not defined. Remove #ifdef UNIV_SYNC_DEBUG around
all invocations.
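The resulting pattern is the usual one (the expansion under
UNIV_SYNC_DEBUG is sketched; the actual latch member may differ):

	#ifdef UNIV_SYNC_DEBUG
	# define buf_block_dbg_add_level(block, level)	\
		sync_thread_add_level(&(block)->lock, level)
	#else /* UNIV_SYNC_DEBUG */
	# define buf_block_dbg_add_level(block, level)	/* nothing */
	#endif /* UNIV_SYNC_DEBUG */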
The memory leak was due to wrong parameters passed into the VirtualFree()
call, which therefore failed with Windows error 87. MEM_DECOMMIT cannot
be used together with MEM_RELEASE, and when the parameter is MEM_RELEASE,
the size parameter must be 0: the function then frees the entire region
reserved by the initial allocation call to VirtualAlloc.
This issue was introduced by r984.
Approved by: Heikki (on IM)
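For reference, the API contract (the exact wrong call in the old code is
not reproduced here):

	/* Wrong: MEM_DECOMMIT cannot be combined with MEM_RELEASE, and a
	nonzero size is invalid with MEM_RELEASE; VirtualFree() fails with
	error 87 (ERROR_INVALID_PARAMETER) and nothing is freed. */
	VirtualFree(ptr, size, MEM_DECOMMIT | MEM_RELEASE);

	/* Correct: with MEM_RELEASE the size must be 0, and the entire
	region reserved by the original VirtualAlloc() call is freed. */
	VirtualFree(ptr, 0, MEM_RELEASE);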
ha_thd() whenever possible.
EQ_CURRENT_THD(thd): New predicate, for use in assertions.
innobase_drop_database(): Tolerate current_thd == NULL, so that the
Windows plugin will work. In the Windows plugin, it will be
impossible to skip foreign key checks in this function. However,
DROP DATABASE will drop each table (that MySQL knows about) individually
before calling this function. Thus, the foreign key checks can be disabled
also in the Windows plugin, unless some .frm files are missing.
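A plausible shape for the predicate (sketch; the real macro is in
ha_innodb.cc):

	/* TRUE if thd is the thread-local current_thd; for use in
	assertions only, since current_thd has no defined value in the
	Windows plugin. */
	#define EQ_CURRENT_THD(thd)	((thd) == current_thd)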
the maximum record size will never exceed the B-tree page size limit.
For uncompressed tables, there should always be enough space for two
records in an empty B-tree page. For compressed tables, there should
be enough space for storing two node pointer records or one data
record in an empty page in uncompressed format.
dict_build_table_def_step(): Remove the inaccurate check for table row
size.
dict_index_too_big_for_tree(): New function: check if the index
records would be too big for a B-tree page.
dict_index_add_to_cache(): Add the parameter "strict". Invoke
dict_index_too_big_for_tree() if it is set.
trx_is_strict(), thd_is_strict(): New functions, for determining if
innodb_strict_mode is enabled for the current transaction.
dict_create_index_step(): Pass the new parameter strict of
dict_index_add_to_cache() as trx_is_strict(trx). All other callers
pass it as FALSE.
innodb.test: Enable innodb_strict_mode before attempting to create a
table with a too big record size.
innodb-zip.test: Remove the test of inserting random data. Add tests
for checking that the maximum record lengths are enforced at table
creation time.
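A sketch of the transaction-level predicate (assuming a thd_is_strict()
accessor implemented in ha_innodb.cc on top of the session variable):

	/* Determine if innodb_strict_mode is enabled for the connection
	behind this transaction. */
	ibool
	trx_is_strict(
		trx_t*	trx)	/* in: transaction, or NULL */
	{
		return(trx && trx->mysql_thd
		       && thd_is_strict(trx->mysql_thd));
	}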
bug#39483 InnoDB hang on adaptive hash because of out of order ::open()
call by MySQL
Forward port of r2629
Under some conditions MySQL calls ::open with search_latch held, leading
to a deadlock: acquiring dict_sys->mutex inside ::open breaks the
latching order. The fix is to release search_latch first.
Reviewed by: Heikki
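A sketch of the fix at the top of ha_innobase::open() (placement and the
way the trx is obtained are illustrative):

	THD*	thd = ha_thd();
	trx_t*	trx = thd ? thd_to_trx(thd) : NULL;

	if (trx) {
		/* Release the adaptive hash index search latch before
		acquiring dict_sys->mutex, to keep the latching order. */
		trx_search_latch_release_if_reserved(trx);
	}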
Add the parameter struct charset_info_st* cs, so that the call
thd_charset(current_thd) can be avoided. The macro current_thd has no
defined value in the Windows plugin.
ha_innodb.cc: Declare strict_mode as PLUGIN_VAR_OPCMDARG, because we
do want to be able to disable innodb_strict_mode. This is a non-functional
change, because PLUGIN_VAR_NOCMDARG seems to accept an argument as well.
innodb-zip.test: Do not store innodb_strict_mode. It is a session variable.
Add a test case for innodb_strict_mode=off.
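The declaration takes roughly this shape (description string
abbreviated):

	static MYSQL_THDVAR_BOOL(strict_mode, PLUGIN_VAR_OPCMDARG,
	  "Use strict mode when evaluating create options.",
	  NULL, NULL, FALSE);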
there will always be enough space for two node pointer records in an
empty B-tree page. This was reported as Mantis issue #73.
page_zip_rec_needs_ext(): Add the parameter n_fields, for accurate
estimation of the compressed size of the data dictionary information.
Given that this function is only invoked for records on leaf pages,
require that there be enough space for one record in the compressed
page. We check elsewhere that there will be enough room for two node
pointer records on higher-level pages.
btr_cur_optimistic_insert(): Ensure that there will be enough room for
two node pointer records on an empty non-leaf page. The rule for
leaf-page records will be enforced by the callers of
page_zip_rec_needs_ext().
btr_cur_pessimistic_insert(): Remove the insufficient check that the
leaf page record should be compressible by itself. Instead, now we
require that two node pointer records fit on a non-leaf page, and one
record will fit in uncompressed form on the leaf page.
page_zip_write_header(), page_zip_write_rec(): Re-enable the debug
assertions that were violated by the insufficient check in
btr_cur_pessimistic_insert().
innodb_bug36172.test: Use a larger compressed page size.
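After this change, page_zip_rec_needs_ext() takes roughly this shape
(prototype sketch):

	UNIV_INLINE
	ibool
	page_zip_rec_needs_ext(
		ulint	rec_size,	/* in: length of the record in bytes */
		ulint	comp,		/* in: nonzero=compact format */
		ulint	n_fields,	/* in: number of fields in the record;
					ignored if zip_size == 0 */
		ulint	zip_size);	/* in: compressed page size, or 0 */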
btr_search_drop_page_hash_index(): Add const qualifiers to the local
variables page, rec, and index, to ensure that they are not modified
by this function.
page_get_infimum_offset(), page_get_supremum_offset(): New functions.
page_get_infimum_rec(), page_get_supremum_rec(): Replaced by
const-preserving macros that invoke the accessor functions.
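The const-preserving pattern, sketched:

	/* The accessor takes a const pointer and returns an offset... */
	ulint
	page_get_infimum_offset(const page_t* page);

	/* ...and the macro adds the offset to whatever pointer type it
	is given, so a const page yields a const record. */
	#define page_get_infimum_rec(page) \
		((page) + page_get_infimum_offset(page))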
help in tracking down issue #63 (memory corruption). UNIV_BTR_DEBUG
is currently enabled in univ.i.
btr_root_fseg_validate(): New function, for validating a file segment
header on a B-tree root page.
btr_root_block_get(), btr_free_but_not_root(),
btr_root_raise_and_insert(), btr_discard_only_page_on_level():
Check PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP on the root page with
btr_root_fseg_validate().
btr_root_raise_and_insert(): Move the assertion
dict_index_get_page(index) == page_get_page_no(root)
inside UNIV_BTR_DEBUG. It was previously enabled by UNIV_DEBUG.
btr_free_root(): Check PAGE_BTR_SEG_TOP on the root page with
btr_root_fseg_validate().
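The validation itself is small; a sketch (close to, but not necessarily
identical with, the committed function):

	static
	ibool
	btr_root_fseg_validate(
		const fseg_header_t*	seg_header, /* in: segment header */
		ulint			space)	/* in: tablespace id */
	{
		ulint	offset = mach_read_from_2(seg_header + FSEG_HDR_OFFSET);

		/* The file segment header must point into this tablespace,
		at a valid byte offset within a page. */
		ut_a(mach_read_from_4(seg_header + FSEG_HDR_SPACE) == space);
		ut_a(offset >= FIL_PAGE_DATA);
		ut_a(offset <= UNIV_PAGE_SIZE - FIL_PAGE_DATA_END);

		return(TRUE);
	}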
Add a test case to check that mysqld does not crash when running ANALYZE TABLE
with different values for innodb_stats_sample_pages.
Suggested by: Marko
Approved by: Marko
Limit the number of pages that are sampled so that it is never greater
than the total number of pages in the index.
The parameter that specifies the number of pages to test is global for
all tables. Limiting it this way allows the user to set it "high" to
suit "large" tables while avoiding unnecessary work for "small" tables
(e.g. doing 100 dives in a table that has only 5 pages would obviously
test some pages more than once).
Suggested by: Ken
Approved by: Marko
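The clamping itself is a one-liner; a sketch (variable names are
illustrative):

	/* Do not sample more pages than the index contains. */
	n_sample_pages = ut_min(srv_stats_sample_pages,
				index->stat_index_size);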
Merge 2605:2617 from branches/5.1:
------------------------------------------------------------------------
r2609 | sunny | 2008-08-24 01:19:05 +0300 (Sun, 24 Aug 2008) | 12 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
branches/5.1: Fix for MySQL Bug#38839. Reset the statement-level last
value field in prebuilt. This field tracks the last value in an autoincrement
interval. We use this value to check whether we need to update a table's
AUTOINC counter: if the value written to the table is less than this value,
we avoid updating the table's AUTOINC value in order to reduce mutex
contention. If the field is not reset (e.g. after a DELETE statement),
updates to the table's AUTOINC counter can be missed, resulting in a
subsequent duplicate row error message under certain conditions (see the
test case for details).
Bug #38839 - auto increment does not work properly with InnoDB after update
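The reset itself is trivial; a sketch (the exact reset point in
ha_innodb.cc is not shown here):

	/* Forget the last value of the previous autoinc interval, so
	that the next write re-checks the table's AUTOINC counter. */
	prebuilt->autoinc_last_value = 0;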
------------------------------------------------------------------------
r2617 | vasil | 2008-09-09 15:46:17 +0300 (Tue, 09 Sep 2008) | 47 lines
Changed paths:
M /branches/5.1/mysql-test/innodb.result
branches/5.1:
Merge a change from MySQL (fix the failing innodb test):
------------------------------------------------------------
revno: 2646.12.1
committer: Mattias Jonsson <mattiasj@mysql.com>
branch nick: wl4176_2-51-bugteam
timestamp: Mon 2008-08-11 20:02:03 +0200
message:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... complains that
partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (over mysql_alter_table instead of
mysql_admin_table) which differs in two ways:
1) alter table opens the tables in a different way than the admin commands
do, resulting in an error being returned before the command was even tried
2) alter table does not start sending diagnostic rows to the client,
which the lower-level admin functions expect to keep doing, resulting in
an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
Partitioned tables will still not work with
REPAIR TABLE/PARTITION USE_FRM, since that would require moving each
partition to a table, running REPAIR TABLE t USE_FRM, checking that the
data still fulfills the partitioning function, and then moving the table
back to being a partition.
NOTE: I have removed the following functions from the handler
interface:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
since they are no longer needed.
THIS ALTERS THE STORAGE ENGINE API
I have verified that OPTIMIZE TABLE actually rebuilds the table
and calls ANALYZE.
Approved by: Heikki
foreign key constraint, find a truly equivalent index for it.
If none is available, refuse to drop the index. MySQL can drop
an index when creating a "stronger" index.
This was reported as Mantis issue #70 and MySQL Bug #38786.
innodb-index.test: Add a test case.
dict_foreign_find_equiv_index(): New function, to replace the
incorrectly written function dict_table_find_equivalent_index().
dict_table_replace_index_in_foreign_list(): Simplify the implementation.
in fast index creation. In r1399, we wrote undo log records about
creating indexes. These special undo log records were later deemed
unnecessary, but the special handling was not removed at that time.
row_merge_create_index(): Do not assign index->id.
dict_build_index_def_step(): Unconditionally assign index->id.
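Sketch of the resulting assignment (assuming the existing
dict_hdr_get_new_id() allocator):

	/* In dict_build_index_def_step(): always allocate the index id
	here; row_merge_create_index() no longer assigns it. */
	index->id = dict_hdr_get_new_id(DICT_HDR_INDEX_ID);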