SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000. All of them
are related to the same parent_table through a foreign key constraint.
When the parent_table is loaded into the dictionary cache, all the child
tables are also loaded. This takes a long time. Since this operation happens
while the dictionary latch is held, the scenario leads to a "long semaphore
wait" situation and the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is caused by the
dict_foreign_find() function. It does a linear search over two linked lists,
table->foreign_list and table->referenced_list, looking for a particular
foreign key object using foreign->id as the key. It is called twice for
each foreign key object.
Solution:
Introduce red-black trees, table->foreign_rbt and table->referenced_rbt,
which act as indexes on table->foreign_list and table->referenced_list
respectively, keyed by foreign->id. These rbt structures are used solely
by dict_foreign_find().
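A minimal sketch of the new lookup, assuming illustrative C++ types
(std::map stands in for InnoDB's red-black tree; the real code uses
C-style structs):

    #include <map>
    #include <string>

    struct dict_foreign_t { std::string id; /* ... */ };

    struct dict_table_t {
        // rb-tree "indexes" over foreign_list and referenced_list,
        // keyed by foreign->id
        std::map<std::string, dict_foreign_t*> foreign_rbt;
        std::map<std::string, dict_foreign_t*> referenced_rbt;
    };

    // O(log n) lookup replacing the old O(n) walk over the linked lists.
    dict_foreign_t* dict_foreign_find(dict_table_t* table,
                                      const std::string& id) {
        auto it = table->foreign_rbt.find(id);
        if (it != table->foreign_rbt.end())
            return it->second;
        it = table->referenced_rbt.find(id);
        return it != table->referenced_rbt.end() ? it->second : nullptr;
    }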
rb#5599 approved by Vasil
on select from I_S.INNODB_CHANGED_PAGES
Analysis: limit_lsn_range_from_condition() incorrectly parses
start_lsn and/or end_lsn conditions.
Fix from SergeyP. Added some test cases.
FAILING ASSERTION: FLEN == LEN
Problem:
A broken invariant was triggered when building a unique index on a
binary column where the input data contains duplicate keys. This affected
debug builds only.
Fix:
The fixed length of the binary datatype can be greater than the length of
the shorter prefix on which the index is being created.
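A self-contained sketch of the relaxed invariant (the helper and its
parameters are hypothetical; the real fix lives in InnoDB's index-build
path):

    #include <cassert>
    #include <cstddef>

    // For a prefix index on a fixed-length binary column, the key length
    // may legitimately be shorter than the column's fixed length, so the
    // strict flen == len check must be relaxed for the prefix case.
    void check_key_len(size_t flen, size_t len, bool is_prefix_index) {
        if (is_prefix_index)
            assert(len <= flen);   // relaxed: the prefix can be shorter
        else
            assert(len == flen);   // a full-column key must match exactly
    }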
mysql-test/suite/innodb/r/row_lock.result:
Test case for MDEV-5629
mysql-test/suite/innodb/t/row_lock.test:
Test case for MDEV-5629
sql/filesort.cc:
Don't call unlock_row() in case of errors
Problem:
When an update operation is done in the clustered index, the overall
scenario (after rb#4479) is as follows:
1. Delete-mark the old record that is to be updated.
2. The old record disowns the blobs.
3. Insert the new record into the clustered index.
4. For non-updated blobs, the new record must own them. Verified by an assert.
5. For non-updated blobs, the new record is marked as inherited.
Scenario involving DB_LOCK_WAIT:
If step 3 times out, then we will skip steps 1 and 2 and continue from
step 3. This skipping is achieved by the UPD_NODE_INSERT_BLOB state.
In this case, step 4 is not correct: because of step 1, the new record
need not own the blobs. Hence the assert failure.
Solution:
The assert in step 4 is removed. Instead, code is added to ensure that
the record owns the blob.
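A minimal sketch of the change, with hypothetical names (the real logic
manipulates blob ownership bits in the clustered index record):

    struct BlobRef { bool owned; };

    // Step 4 used to assert ownership. After a DB_LOCK_WAIT retry the
    // delete-marked old record has already disowned the blob, so take
    // ownership explicitly instead of asserting.
    void ensure_blob_owned(BlobRef& ref) {
        if (!ref.owned)
            ref.owned = true;   // make the new record the blob's owner
    }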
Note:
This is a regression caused by rb#4479.
rb#4571 approved by Marko
AUTO_INCREMENT_INCREMENT
Problem:
=======
When the auto_increment_increment system variable is decreased, the
immediate next value of the auto-increment column is not affected.
Solution:
========
Get the previously inserted value of the auto-increment column by
subtracting the previous auto_increment_increment from the next
auto-increment value. After that, calculate the current autoinc value
using the newly changed auto_increment_increment variable.
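A self-contained sketch of that arithmetic (function and parameter names
are illustrative, not the server's):

    #include <cstdint>

    // Recover the last inserted value using the old increment, then
    // recompute the next value on the grid defined by the new increment.
    uint64_t next_autoinc(uint64_t next_value,
                          uint64_t old_increment,
                          uint64_t new_increment,
                          uint64_t offset) {
        uint64_t prev_value = next_value - old_increment;
        // Smallest value of the form offset + N * new_increment that is
        // strictly greater than prev_value.
        uint64_t steps =
            (prev_value + new_increment - offset) / new_increment;
        return offset + steps * new_increment;
    }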
Approved by Sunny [rb#4394]
Problem:
The function row_upd_changes_ord_field_binary() is used to decide whether
to use row_upd_clust_rec_by_insert() or row_upd_clust_rec(). It does not
make use of charset information: based on a binary comparison, it decides
that two records r1 and r2 differ in their ordering fields.
In the function row_upd_clust_rec_by_insert(), an update is done by delete +
insert. These operations internally make use of cmp_dtuple_rec_with_match()
to compare records r1 and r2. This comparison takes place with the use of
charset information.
This means that it is possible for the deleted record to be reused by the
subsequent insert. In the given scenario, the characters 'a' and 'A' are
considered equal in my_charset_latin1. When this happens, the ownership
information of externally stored blobs is not correctly handled.
Solution:
When an update is done by delete followed by insert, disown the relevant
externally stored fields during the delete marking itself (within the same
mtr). If the insert succeeds, then nothing with respect to blob ownership
needs to be done. If the insert fails, then the disown done earlier will be
removed when the operation is rolled back.
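A small illustration of why the two code paths can disagree (a sketch;
POSIX strcasecmp() stands in for a case-insensitive latin1 collation):

    #include <cstring>
    #include <strings.h>

    // Byte-wise comparison: "a" and "A" differ, so
    // row_upd_changes_ord_field_binary() sees changed ordering fields.
    bool binary_differs(const char* r1, const char* r2) {
        return std::strcmp(r1, r2) != 0;
    }

    // Collation-aware comparison: "a" and "A" are equal, so the insert
    // can land on the delete-marked record.
    bool collation_differs(const char* r1, const char* r2) {
        return strcasecmp(r1, r2) != 0;
    }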
rb#4479 approved by Marko.
Fix :
-------
Created separate suites called innodb_zip and i_innodb_zip that contain
all compression tests.
The new suites are run with the following compression-related parameters:
* innodb_compression_level = {1/9}
* innodb_log_compressed_pages = {ON/OFF}
INDEX_READ_MAP HAD NO MATCH
If index_read_map() is called for an exact search and no matching record
exists, it will position the cursor on the next record while still leaving
the relative position set to BTR_PCUR_ON.
This makes a subsequent call to index_next() read yet another record,
instead of returning the record the cursor points to.
Fixed by setting pcur->rel_pos = BTR_PCUR_BEFORE if an exact
[prefix] search was done but failed.
Also avoid optimistic restoration if rel_pos != BTR_PCUR_ON,
since btr_cur may differ from old_rec.
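A minimal sketch of the corrected bookkeeping (the enum values are real
InnoDB names; the struct layout and helper are illustrative):

    enum btr_pcur_pos_t { BTR_PCUR_ON, BTR_PCUR_BEFORE, BTR_PCUR_AFTER };

    struct btr_pcur_t { btr_pcur_pos_t rel_pos; };

    // After a failed exact search the cursor rests on the *next* record,
    // so record that it sits before the sought key. A later restore then
    // returns this record instead of skipping past it, and optimistic
    // restoration is not attempted since rel_pos != BTR_PCUR_ON.
    void mark_failed_exact_search(btr_pcur_t* pcur) {
        pcur->rel_pos = BTR_PCUR_BEFORE;
    }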
rb#3324, approved by Marko and Jimmy
The problem was that XtraDB has innodb_purge_threads default to 1 (plain
InnoDB defaults to 0).
The test sets a special debug variable and relies on it to force
purge to happen. But when background purge threads are used, this does
not work; the debug code is not made to handle it, so occasionally
the test times out waiting for the purge to occur.
Fix by explicitly setting innodb_purge_threads=0 for this test.
The official Debian Wheezy MySQL packages have versions like 5.5.30+dfsg-xxx.
Such a version is larger than 5.5.30-yyy, so apt prefers it.
So instead use 5.5.30+maria-yyy, which is larger and can be pulled in
automatically by apt.
Also included are a couple of fixes for test failures in buildbot.
innodb_bug12400341.test is disabled for valgrind daily test.
It might be affected by the previous test's undo slots existing,
because of slower execution.
innodb_bug12400341.test is disabled for valgrind daily test.
It might be affected by the previous test's undo slots existing,
because of slower execution.
WITH A VARIABLE AND ORDER BY
Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX
This is a fix for a regression introduced by Bug#12667154:
Bug#12667154 attempted to fix a performance problem with subqueries
that did filesort. For doing filesort, the optimizer creates a quick
select object to use when building the sort index. This quick select
object was deleted after the first call to create_sort_index(). Thus,
for queries where the subquery was executed multiple times, the quick
object was only used for the first execution. For all later executions
of the subquery, filesort used a complete table scan for building the
sort index. The fix for Bug#12667154 tried to fix this by not deleting
the quick object after the first execution of create_sort_index() so
that it would be re-used for building the sort index by the following
executions of the subquery.
The regression introduced by Bug#12667154 is that, due to not deleting
the quick select object after building the sort index, the quick
object could in some cases also be used during the second phase of the
execution of the subquery, instead of the created sort index. This
caused wrong results to be returned.
The fix for this issue is to delete the reference to the select object
after it has been used in create_sort_index(). In this way the select
and quick objects will not be available when doing the second phase
of the execution of the select operation. To ensure that the select
object can be re-used for the following executions of the subquery
we make a copy of the select pointer. This is used for restoring the
select object after the select operation is completed.
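A minimal sketch of the save/restore pattern (JOIN_TAB, saved_select and
create_sort_index() are named in this commit; the bodies here are
illustrative):

    struct SQL_SELECT;

    struct JOIN_TAB {
        SQL_SELECT* select;        // may hold a quick object for sorting
        SQL_SELECT* saved_select;  // backup for later subquery executions
    };

    // End of create_sort_index(): keep a copy of the pointer, then clear
    // it so the main phase of the select cannot use select->quick.
    void after_sort_index_built(JOIN_TAB* tab) {
        tab->saved_select = tab->select;
        tab->select = nullptr;
    }

    // When the select operation completes, restore the pointer so the
    // next execution of the subquery can reuse the quick object.
    void after_select_done(JOIN_TAB* tab) {
        tab->select = tab->saved_select;
        tab->saved_select = nullptr;
    }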
mysql-test/suite/innodb/r/innodb_mysql.result:
Changed explain output: The explain now contains "Using where" since we
have restored the select pointer after doing the filesort operation.
sql/sql_select.cc:
Change create_sort_index() so that it always sets the pointer to
the select object to NULL. This is done in order to avoid that the
select->quick object can be used when executing the main part of
the select operation.
sql/sql_select.h:
New member in JOIN_TAB: saved_select. Used by create_sort_index to
make a backup copy of the select pointer.
- Added --verbose to BUILD scripts to get make to write out compile commands.
- Detect if AM_EXTRA_MAKEFLAGS=VERBOSE=1 was used with build scripts.
- Don't write warnings about replication variables when doing bootstrap.
- Fixed that mysql_cond_wait() and mysql_cond_timedwait() will report original source file in case of errors.
- Ignore some compiler warnings
BUILD/FINISH.sh:
Detect if AM_EXTRA_MAKEFLAGS=VERBOSE=1 or --verbose was used
BUILD/SETUP.sh:
Added --verbose to print out the full compile lines
Updated help message
client/mysqltest.cc:
Fixed that one can use 'replace' with cat_file
cmake/configure.pl:
If --verbose is used, get make to write out compile commands
debian/dist/Debian/rules:
Added $AM_EXTRA_MAKEFLAGS to get VERBOSE=1 on buildbot builds
debian/dist/Ubuntu/rules:
Added $AM_EXTRA_MAKEFLAGS to get VERBOSE=1 on buildbot builds
include/my_pthread.h:
Made set_timespec_time_nsec() more portable.
include/mysql/psi/mysql_thread.h:
Fixed that mysql_cond_wait() and mysql_cond_timedwait() will report original source file in case of errors.
mysql-test/suite/innodb/r/auto_increment_dup.result:
Fixed wrong DBUG_SYNC
mysql-test/suite/innodb/t/auto_increment_dup.test:
Fixed wrong DBUG_SYNC
mysql-test/suite/perfschema/include/upgrade_check.inc:
Make test more portable for changes in *.sql files
mysql-test/suite/perfschema/r/pfs_upgrade.result:
Updated test results
mysql-test/valgrind.supp:
Ignore running Aria checkpoint thread
scripts/mysqlaccess.sh:
Changed reference to the bugs database
Ensure that the client-server group is also read.
sql/handler.cc:
Added missing syncpoint
sql/mysqld.cc:
Don't write warnings about replication variables when doing bootstrap
sql/mysqld.h:
Don't write warnings about replication variables when doing bootstrap
sql/rpl_rli.cc:
Don't write warnings about replication variables when doing bootstrap
sql/sql_insert.cc:
Don't mask SERVER_SHUTDOWN in insert_delayed
This is done to be able to distinguish between shutdown and interrupt errors
support-files/compiler_warnings.supp:
Ignore some compiler warnings in xtradb, innobase, oqgraph, yassl, string3.h
btr_lift_page_up() writes a wrong page number (off by -1) for pages above
the father page.
But in almost all cases the father page is the root page, with no pages
above it, so this is a very rare path.
In addition, a leaf page should not be lifted unless the father page is
the root, because branch pages should not become leaf pages.
rb://1336 approved by Marko Makela.
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation and the
query is operating with a consistent read snapshot, it might take a long
time for the secondary index to return control to MySQL, as MVCC kicks in.
If the user issues "kill query <id>" while the query is actively accessing
the secondary index, it will not be honored, as there is no hook to check
for this condition. Added a hook for this check.
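A minimal sketch of where such a hook sits (trx_is_interrupted() is a
real InnoDB predicate; the scan loop and surrounding types are
illustrative):

    struct Trx;
    struct Cursor;
    bool trx_is_interrupted(const Trx* trx);  // true once KILL QUERY arrives
    bool cursor_next(Cursor* cur);
    void process_record(Cursor* cur);

    enum class Result { DONE, INTERRUPTED };

    Result scan_secondary_index(Trx* trx, Cursor* cur) {
        while (cursor_next(cur)) {
            // New hook: check for KILL QUERY on every record, so long
            // MVCC undo-page lookups no longer make the query unkillable.
            if (trx_is_interrupted(trx))
                return Result::INTERRUPTED;
            process_record(cur);
        }
        return Result::DONE;
    }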
-----
In parallel, the case of the secondary index taking too long to evaluate
for a consistent read snapshot is being examined for performance
improvement. WL#6540.
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails:
516 case Diagnostics_area::DA_EMPTY:
517 default:
518 DBUG_ASSERT(0);
519 error= send_ok(thd->server_status, 0, 0, 0, NULL);
520 break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
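A minimal sketch of the backported pattern, assuming the server's handler
API (ha_index_init() and print_error() are real handler methods; the
wrapper is illustrative):

    // Check the init result instead of ignoring it; print_error() ends up
    // calling my_error(), which fills the diagnostics area so it no longer
    // sits in Diagnostics_area::DA_EMPTY when Protocol::end_statement()
    // runs.
    int init_index_scan(handler* h, uint idx, bool sorted) {
        if (int error = h->ha_index_init(idx, sorted)) {
            h->print_error(error, MYF(0));
            return error;
        }
        return 0;
    }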
rb://1227 approved by guilhem and mattiasj.
LP bug #1035225 / MySQL bug #66301: INSERT ... ON DUPLICATE KEY UPDATE +
innodb_autoinc_lock_mode=1 is broken
The problem was that when certain INSERT ... ON DUPLICATE KEY UPDATE
were executed concurrently on a table containing an AUTO_INCREMENT
column as a primary key, InnoDB would correctly reserve non-overlapping
AUTO_INCREMENT intervals for each statement, but when the server
encountered the first duplicate key error on the secondary key in one of
the statements and performed an UPDATE, it also updated the internal
AUTO_INCREMENT value to the one from the existing row that caused a
duplicate key error, even though the AUTO_INCREMENT value was not
specified explicitly in the UPDATE clause. It would then proceed with
using AUTO_INCREMENT values from the range reserved previously by another
statement, causing duplicate key errors on the AUTO_INCREMENT column.
Fixed by changing write_record() to ensure that in case of a duplicate
key error the internal AUTO_INCREMENT counter is only updated when the
AUTO_INCREMENT value was explicitly updated by the UPDATE
clause. Otherwise it is restored to what it was before the duplicate key
error, as that value is unused and can be reused for subsequent
successfully inserted rows.
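A minimal sketch of the write_record() change (write_record() and
next_insert_id are real names; the struct and flag here are illustrative):

    #include <cstdint>

    struct Table { uint64_t next_insert_id; };

    // On a duplicate key error handled by ON DUPLICATE KEY UPDATE: keep
    // the bumped counter only if the UPDATE clause itself changed the
    // auto-increment column; otherwise restore it so the reserved value
    // can be reused for subsequent successful inserts.
    void fixup_autoinc_after_dup_key(Table* table,
                                     uint64_t saved_next_insert_id,
                                     bool autoinc_updated_by_update) {
        if (!autoinc_updated_by_update)
            table->next_insert_id = saved_next_insert_id;
    }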
sql/sql_insert.cc:
Don't update next_insert_id to the value of a row found during ON DUPLICATE KEY UPDATE.
sql/sql_parse.cc:
Added DBUG_SYNC
sql/table.h:
Added next_number_field_updated flag to detect changing of auto increment fields.
Moved fields a bit to get bool fields after each other (better alignment)
create table t1 (a smallint primary key auto_increment);
insert into t1 values(32767);
insert into t1 values(NULL);
ERROR 1062 (23000): Duplicate entry '32767' for key 'PRIMARY'
Now one always gets the error HA_ERR_AUTOINC_RANGE=167 "Out of range value
for column", independent of storage engine, SQL mode, or number of inserted
rows. This is a unique error that is easier to test for in replication.
Another bug fix is that we now get an error when trying to insert a too-big
auto-generated value, even in non-strict mode.
Previously, the maximum column value was inserted instead.
This patch also fixes some issues with inserting negative numbers into an
auto-increment column.
Fixed so that ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are compared the same
way between master and slave.
This ensures that replication works from an old master to a new slave for
auto-increment overflow errors.
Added SQLSTATE errors for handler errors
Smaller bug fixes:
* Added warnings for duplicate key errors when using INSERT IGNORE
* Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0"
* Allow one to see how cmake is called by using --just-print --just-configure
BUILD/FINISH.sh:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
cmake/configure.pl:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
include/CMakeLists.txt:
Added handler_state.h
include/handler_state.h:
SQLSTATE for handler error messages.
Required for HA_ERR_AUTOINC_ERANGE, but also solves some other cases.
mysql-test/extra/binlog_tests/binlog.test:
Fixed old wrong behaviour
Added more tests
mysql-test/extra/binlog_tests/binlog_insert_delayed.test:
Reset binary log to only print what's necessary in show_binlog_events
mysql-test/extra/rpl_tests/rpl_auto_increment.test:
Update to new error codes
mysql-test/extra/rpl_tests/rpl_insert_delayed.test:
Ignore warnings as this depends on how the test is run
mysql-test/include/strict_autoinc.inc:
One now gets an error on overflow
mysql-test/r/auto_increment.result:
Update results after fixing error message
mysql-test/r/auto_increment_ranges_innodb.result:
Test new behaviour
mysql-test/r/auto_increment_ranges_myisam.result:
Test new behaviour
mysql-test/r/commit_1innodb.result:
Added warnings for duplicate key error
mysql-test/r/create.result:
Added warnings for duplicate key error
mysql-test/r/insert.result:
Added warnings for duplicate key error
mysql-test/r/insert_select.result:
Added warnings for duplicate key error
mysql-test/r/insert_update.result:
Added warnings for duplicate key error
mysql-test/r/mix2_myisam.result:
Added warnings for duplicate key error
mysql-test/r/myisam_mrr.result:
Added warnings for duplicate key error
mysql-test/r/null_key.result:
Added warnings for duplicate key error
mysql-test/r/replace.result:
Update to new error codes
mysql-test/r/strict_autoinc_1myisam.result:
Update to new error codes
mysql-test/r/strict_autoinc_2innodb.result:
Update to new error codes
mysql-test/r/strict_autoinc_3heap.result:
Update to new error codes
mysql-test/r/trigger.result:
Added warnings for duplicate key error
mysql-test/r/xtradb_mrr.result:
Added warnings for duplicate key error
mysql-test/suite/binlog/r/binlog_innodb_row.result:
Updated result
mysql-test/suite/binlog/r/binlog_row_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_statement_insert_delayed.result:
Updated result
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_unsafe.result:
Updated result
mysql-test/suite/innodb/r/innodb-autoinc.result:
Update to new error codes
mysql-test/suite/innodb/r/innodb-lock.result:
Updated results
mysql-test/suite/innodb/r/innodb.result:
Updated results
mysql-test/suite/innodb/r/innodb_bug56947.result:
Updated results
mysql-test/suite/innodb/r/innodb_mysql.result:
Updated results
mysql-test/suite/innodb/t/innodb-autoinc.test:
Update to new error codes
mysql-test/suite/maria/maria3.result:
Updated result
mysql-test/suite/maria/mrr.result:
Updated result
mysql-test/suite/optimizer_unfixed_bugs/r/bug43617.result:
Updated result
mysql-test/suite/rpl/r/rpl_auto_increment.result:
Update to new error codes
mysql-test/suite/rpl/r/rpl_insert_delayed,stmt.rdiff:
Updated results
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
Updated results
mysql-test/t/auto_increment.test:
Update to new error codes
mysql-test/t/auto_increment_ranges.inc:
Test new behaviour
mysql-test/t/auto_increment_ranges_innodb.test:
Test new behaviour
mysql-test/t/auto_increment_ranges_myisam.test:
Test new behaviour
mysql-test/t/replace.test:
Update to new error codes
mysys/my_getopt.c:
Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0"
sql/handler.cc:
Ignore negative values for signed auto-increment columns
Always give an error if we get an overflow for an auto-increment-column (instead of inserting the max value)
Ensure that the row number is correct for the out-of-range-value error message.
******
Fixed wrong printing of column name for "Out of range value" errors
Fixed that INSERT_ID is correctly replicated also for out-of-range autoincrement values
Fixed that print_keydup_error() can also be used to generate warnings
******
Return HA_ERR_AUTOINC_ERANGE (167) instead of ER_WARN_DATA_OUT_OF_RANGE for auto-increment overflow
sql/handler.h:
Allow INSERT IGNORE to continue also after out-of-range inserts.
Fixed that print_keydup_error() can also be used to generate warnings
sql/log_event.cc:
Added DBUG_PRINT
Fixed so that ER_AUTOINC_READ_FAILED, ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE
are compared the same way between master and slave.
This ensures that replication works from an old master to a new slave for
auto-increment overflow errors.
sql/sql_insert.cc:
Add warnings for duplicate key errors when using INSERT IGNORE
sql/sql_state.c:
Added handler errors
sql/sql_table.cc:
Update call to print_keydup_error()
storage/innobase/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
storage/xtradb/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)
Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column max value when the column value is
a negative number.
rb://978 Approved by Jimmy Yang.
rb:1236 approved by Marko Makela
two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.