* remove new InnoDB-specific ER_ and HA_ERR_ codes
* renamed a few old ER_ and HA_ERR_ error messages to be less MyISAM-specific
* remove duplicate enum definitions (durability_properties, icp_result)
* move new mysql-test include files to their owner suite
* rename xtradb.rdiff files to *-disabled
* remove mistakenly committed helper perl module
* remove long obsolete handler::ha_statistic_increment() method
* restore the standard C xid_t structure to not have setters and getters
* remove xid_t::reset that was cleaning too much
* move MySQL-5.7 ER_ codes where they belong
* fix InnoDB to include service_wsrep.h, not internal wsrep headers
* update tests and results
Contains also:
MDEV-10549 mysqld: sql/handler.cc:2692: int handler::ha_index_first(uchar*): Assertion `table_share->tmp_table != NO_TMP_TABLE || m_lock_type != 2' failed. (branch bb-10.2-jan)
Unlike MySQL, InnoDB still uses THR_LOCK in MariaDB
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
enable tests that were fixed in MDEV-10549
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
fix main.innodb_mysql_sync - re-enable online alter for partitioned innodb tables
Contains also
MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
The failure happened because 5.7 changed the signature of the virtual
function handler::primary_key_is_clustered() (a "const" qualifier was
added). InnoDB was still using the old signature, which caused its
override not to be used.
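The effect of the missing const qualifier can be shown with a minimal,
self-contained C++ sketch (generic class names, not the actual server
classes):

  #include <cstdio>

  struct base_handler {
    virtual bool primary_key_is_clustered() const { return false; }
    virtual ~base_handler() {}
  };

  struct old_style_engine : base_handler {
    // Missing 'const': this declares a new function and does NOT override.
    bool primary_key_is_clustered() { return true; }
  };

  struct fixed_engine : base_handler {
    // Matching 'const' signature: this override is actually used.
    bool primary_key_is_clustered() const { return true; }
  };

  int main() {
    old_style_engine broken;
    fixed_engine     fixed;
    const base_handler* h = &broken;
    std::printf("%d\n", h->primary_key_is_clustered()); // 0: base version called
    h = &fixed;
    std::printf("%d\n", h->primary_key_is_clustered()); // 1: override called
    return 0;
  }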
MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
Fixed a mutexing problem in lock_trx_handle_wait(). Note that the
rpl_parallel and rpl_optimistic_parallel tests still fail.
MDEV-10156 : Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
Reason: incorrect merge
MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
Reason: incorrect merge
- fixes in innodb to skip wsrep processing (like kill victim) when running in native mysql mode
- similar fixes in mysql server side
- forcing tc_log_dummy in native MySQL mode when no binlog is used. The wsrep hton messes up the handler counter
and used to lead to tc_log_mmap being used instead. The bad news is that tc_log_mmap does not seem to work at all
Fixes a deadlock between an applier and its victim transaction.
The deadlock would manifest when a BF victim was waiting for some lock
and was signaled to roll back, and at the same time its wait
timeout expired. In such cases the victim would return from
lock_wait_suspend_thread() with error DB_LOCK_WAIT_TIMEOUT, as opposed to
DB_DEADLOCK. As a result only the last statement of the victim would roll
back, and eventually it would deadlock with the applier.
In wsrep BF we have already taken the lock_sys and trx
mutexes either in wsrep_abort_transaction() or
before wsrep_kill_victim(). In replication we
could own the lock_sys mutex taken in
lock_deadlock_check_and_resolve().
Analysis: We are already holding the lock_sys mutex when we call THD::awake.
This could lead to a mutex deadlock if trx->current_lock_mutex_owner is not
correctly set.
Fix: Make sure that trx->current_lock_mutex_owner is correctly set.
PROBLEM
Whenever we insert into a unique secondary index we take shared
locks on all possible duplicate records present in the table.
But during a replace on the unique secondary index,
we take exclusive locks on all the duplicate records.
When records are deleted, they are first delete-marked
and later purged by the purge thread. While purging a
record we call lock_update_delete(), which in turn calls
lock_rec_inherit_to_gap() to inherit the locks of the deleted
record. In repeatable read mode we inherit all the locks
from the record to the next record, but in read committed
mode we skip inheriting them as gap type locks. We make an
exception here if the lock on the record is in shared mode:
we assume that it was set during an insert into a unique secondary
index and needs to be inherited to stop constraint violation.
We did not handle the case when exclusive locks are set during a
replace; we skipped inheriting the locks of these records and hence
caused constraint violation.
FIX
While inheriting the locks, check whether the transaction is
allowed to do TRX_DUP_REPLACE/TRX_DUP_IGNORE; if so,
inherit the locks (see the sketch below).
[ Reviewed by Jimmy #rb9709 ]
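A hedged, self-contained sketch of the inheritance decision described above;
the type and constant names mimic InnoDB but are stand-ins, the real logic
lives in lock_rec_inherit_to_gap():

  enum lock_mode { LOCK_S, LOCK_X };
  enum { TRX_DUP_IGNORE = 1, TRX_DUP_REPLACE = 2 };

  struct trx_stub { unsigned duplicates; bool read_committed; };

  // Should a lock held on a record being purged be inherited as a gap lock?
  static bool should_inherit_to_gap(lock_mode mode, const trx_stub& trx)
  {
    if (!trx.read_committed)
      return true;                  // repeatable read: inherit all locks
    if (mode == LOCK_S)
      return true;                  // shared lock from a unique-index insert
    // The fix: also inherit exclusive locks taken by REPLACE /
    // INSERT ... ON DUPLICATE KEY, otherwise the duplicate check is lost.
    return (trx.duplicates & (TRX_DUP_IGNORE | TRX_DUP_REPLACE)) != 0;
  }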
Parallel replication (in 10.0 / "conservative" mode) relies on binlog group
commits to group transactions that can be safely run in parallel on the
slave. The --binlog-commit-wait-count and --binlog-commit-wait-usec options
exist to increase the number of commits per group. But in case of conflicts
between transactions, this can cause unnecessary delay and reduced throughput,
especially on a slave where commit order is fixed.
This patch adds a heuristic to reduce this problem. When transaction T1 goes
to commit, it will first wait for N transactions to queue up for a group
commit. However, if we detect that another transaction T2 is waiting for a row
lock held by T1, then we will skip the wait and let T1 commit immediately,
releasing locks and let T2 continue.
On a slave, this avoids the unfortunate situation where T1 is waiting for T2
to join the group commit, but T2 is waiting for T1 to release locks, causing
no work to be done for the duration of the --binlog-commit-wait-usec timeout.
(The heuristic seems reasonable on the master as well, so it is enabled for
all transactions, not just replication transactions).
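A minimal sketch of the heuristic, assuming a simplified view of the group
commit queue (all names below are illustrative, not the real server
structures):

  struct group_commit_wait {
    int  queued;                 // transactions already queued for the group
    int  wait_count;             // --binlog-commit-wait-count
    bool waiter_blocked_on_us;   // set when a lock wait on T1 is reported

    bool should_commit_now() const {
      // Normally wait until enough transactions have queued up, but commit
      // immediately if another transaction is blocked on a lock we hold.
      return queued >= wait_count || waiter_blocked_on_us;
    }
  };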
Implement a new mode for parallel replication. In this mode, all transactions
are optimistically applied in parallel. In case of conflicts, the
offending transaction is rolled back and retried later, non-parallel.
This is an early-release patch to facilitate testing; more changes to the user
interface / options are to be expected. The new mode is not enabled by default.
Merge Facebook commit cd063ab930
authored by Peng Tian from https://github.com/facebook/mysql-5.6
Introduced a new configuration variable, innodb_fatal_semaphore_wait_threshold,
which makes the fatal semaphore timeout configurable. Modified the original
commit so that no MariaDB server files are changed; instead a new
InnoDB/XtraDB configuration variable is introduced.
Its default/min/max values are 600/1/2^32-1 seconds (the timeout was hardcoded
as 600; the default value remains 600, so the default behavior of this diff
should be unchanged).
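A hedged sketch of how such a variable is typically declared inside
ha_innodb.cc; the macro and its argument order follow the standard
MYSQL_SYSVAR_ULONG pattern, while the exact help text and the registration in
the plugin's system variable list are omitted here:

  static ulong srv_fatal_semaphore_wait_threshold = 600;

  static MYSQL_SYSVAR_ULONG(fatal_semaphore_wait_threshold,
    srv_fatal_semaphore_wait_threshold,
    PLUGIN_VAR_RQCMDARG,
    "Maximum number of seconds that a semaphore can be held before InnoDB "
    "intentionally crashes the server.",
    NULL, NULL,
    600,          /* default */
    1,            /* min */
    UINT_MAX32,   /* max: 2^32 - 1 */
    0);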
Merged Facebook's commit 6e06bbfa315ffb97d713dd6e672d6054036ddc21
authored by Inaam Rana from https://github.com/facebook/mysql-5.6.
Fixes MySQL bug http://bugs.mysql.com/bug.php?id=72123
The lock_timeout thread works in a tight loop, waking up every second
and checking for lock_wait_timeout. In addition, when a MySQL
thread is forced to wait on a lock, it signals the lock_timeout thread
as well. This call is not required. In a heavily contended workload
each thread going to wait will signal the lock_timeout thread, making
it work all the time. As the lock_timeout thread scans the array of
waiting threads under lock_sys::wait_mutex, which is already very
hot in contended loads, these extra scans can cause a significant
performance regression.
Also, in various codepaths the lock_timeout thread was signalled where
the actual intention was to signal the InnoDB monitor thread.
(lock != ctx->wait_lock)
References: lp:1364840, lp:1280896 - reverted a part of the fix for
lp:1280896 (updating a unique key can cause parallel applying to hang)
in revision #4105. This "BF (brute force) lock skipping" caused a
regression which surfaced in the randgen test for bug lp:1364840.
Merged lp:maria/maria-10.0-galera up to revision 3880.
Added new functions to the handler API to forcefully abort_transaction,
produce fake_trx_id, and get_checkpoint and set_checkpoint for XA. These
were added for the future possibility of adding more storage engines that
could use Galera replication.
Merged lp:maria/maria-10.0-galera up to revision 3879.
Added new functions to the handler API to forcefully abort_transaction,
produce fake_trx_id, and get_checkpoint and set_checkpoint for XA. These
were added for the future possibility of adding more storage engines that
could use Galera replication.
Analysis: The problem is that we execute Galera code when we are actually
executing asynchronous replication.
Fix: Do not execute Galera code if the wsrep provider is not set.
After-review changes.
For this patch in 10.0, we do not introduce a new public storage engine API,
we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public
API that can be used for all storage engines (MDEV-6429).
Eliminate the background thread that did deadlock kills asynchronously.
Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from
inside the deadlock detection code (when thd_report_wait_for() needs to kill a
later thread to resolve a deadlock).
(We preserve the part of the original patch that introduces a dedicated mutex
and condition for the slave init thread, to remove the abuse of
LOCK_thread_count for start/stop synchronisation of the slave init thread).
replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
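A hedged sketch of that decision, assuming the server-layer hook is
thd_need_ordering_with() (returning 0 when the commit order of the two
transactions is already fixed); the stub definition and the surrounding names
are simplified stand-ins for the real lock_rec_has_to_wait() logic:

  // Stand-in for the real server call.
  extern "C" int thd_need_ordering_with(const void* thd, const void* other_thd)
  {
    (void) thd; (void) other_thd;
    return 1;   // in the server this inspects the two THDs
  }

  struct trx_stub { void* mysql_thd; };

  static bool gap_lock_has_to_wait(const trx_stub* trx, const trx_stub* other,
                                   bool gap_lock_involved)
  {
    if (gap_lock_involved
        && !thd_need_ordering_with(trx->mysql_thd, other->mysql_thd)) {
      // Commit order is already decided above InnoDB, so the gap-lock
      // conflict cannot change the binlog order: do not block.
      return false;
    }
    return true;   // ordinary case: the conflicting lock must be waited for
  }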
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now,
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction.
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
error condition to wrongly propagate to a later wait_for_prior_commit(),
causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, but also it probably is not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
replication causing replication to fail.
In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds eg. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.
Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (eg. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with error.
With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.
The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happen deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to mutexes held.
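A hedged sketch of the reporting path described above; thd_report_wait_for()
is the server call named earlier in this log, everything else is a simplified
stand-in for the InnoDB lock-wait enqueue code:

  struct trx_stub { void* mysql_thd; };

  // Stand-in: tell the server layer that 'thd' is about to wait for a lock
  // held by 'other_thd'.
  extern "C" void thd_report_wait_for(void* thd, void* other_thd)
  {
    (void) thd; (void) other_thd;
    // In the server this checks commit ordering and, if needed, queues a
    // deadlock kill of 'other_thd' for the slave background thread.
  }

  static void report_lock_wait(trx_stub* waiter, trx_stub* holder)
  {
    // ... the waiter has just been enqueued in the lock wait queue ...
    thd_report_wait_for(waiter->mysql_thd, holder->mysql_thd);
    // The kill itself happens asynchronously, so no THD locks are taken
    // while InnoDB lock mutexes are held here.
  }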
Deadlock is assumed to be (very) rarely occurring, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.
Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.
Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
Analysis: In the wsrep case there are two possible transactions: trx and the
conflicting lock-owning transaction. The code was handling the trx mutex
correctly, but not the c_lock->trx case.
Fixed by taking the c_lock->trx mutex when needed and releasing the trx
mutex if it is taken at lower levels.
Update InnoDB to 5.6.14
Apply MySQL-5.6 hack for MySQL Bug#16434374
Move Aria-only HA_RTREE_INDEX from my_base.h to maria_def.h (breaks an assert in InnoDB)
Fix InnoDB memory leak
FREED LOCK
ANALYSIS
-------
In the 5.5 code, lock_rec_block_validate() is called after releasing
the kernel mutex. There is a chance that the lock might be invalid by then,
so we get the Valgrind error about an invalid read of lock->index.
FIX
---
The fix is to copy lock->index while we are holding the kernel mutex
and then pass it to lock_rec_block_validate(). This implementation
is present in the 5.1 code.
[ Approved by Sunny rb.no.oracle.com/rb/r/2152/ ]
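A hedged sketch of the fix; the stub types and no-op functions stand in for
the real kernel mutex and lock_rec_block_validate():

  struct index_stub {};
  struct lock_stub  { index_stub* index; };

  static void kernel_mutex_enter() {}
  static void kernel_mutex_exit()  {}
  static void lock_rec_block_validate_stub(const index_stub*) {}

  static void validate_block_safely(const lock_stub* lock)
  {
    kernel_mutex_enter();
    index_stub* index = lock->index;  // read lock->index while 'lock' is valid
    kernel_mutex_exit();
    // 'lock' may have been freed by now, but we no longer touch it;
    // only the copied index pointer is passed on.
    lock_rec_block_validate_stub(index);
  }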
Problem:
During the index intersect access method, the SQL layer accesses one row
that satisfies a set of conditions using an index i1. It then tries to
access the same row, with another set of conditions, using the next index i2.
If the fetch from i2 fails (we are talking about an error situation here and
not simply an unmatched row situation), it unlocks the row accessed via
i1. This works in all situations except a deadlock error.
When a deadlock happens, InnoDB rolls back the transaction. InnoDB informs
the SQL layer about this through the THD::transaction_rollback_request member,
but this member is not currently checked by the SQL layer.
Solution:
When an error happens, the SQL layer must check the
THD::transaction_rollback_request member before calling handler::unlock_row().
We have also added a debug assert in ha_innobase::unlock_row() checking that
it must be called only when the transaction is in an active state.
rb#1773 approved by Marko and Sunny.
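A hedged sketch of the added check; only the member name
THD::transaction_rollback_request comes from the text above, the rest is an
illustrative stand-in for the SQL-layer error path:

  struct THD_stub { bool transaction_rollback_request; };

  static void unlock_row_stub(THD_stub*) { /* handler::unlock_row() stand-in */ }

  static void handle_intersect_fetch_error(THD_stub* thd)
  {
    // If the engine has already requested a rollback of the whole
    // transaction (e.g. after a deadlock), unlocking individual rows is
    // neither needed nor allowed.
    if (thd->transaction_rollback_request)
      return;
    unlock_row_stub(thd);
  }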
IN LOCK_VALIDATE()
rb://917
approved by: Marko Makela
In lock_validate() the limit is used to release the kernel_mutex during
the validation, to obey the latching order.
If we do the limit++ then we are rechecking the same lock most times on
each iteration because limit is being incremented by one and
<space, page_no> will nearly always be > limit. If we set the limit
correctly to (space, page+1) then we are actually making progress
during the iteration.
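A minimal sketch of the limit progression described above (the pair type is a
stand-in for the real <space, page_no> bookkeeping in lock_validate()):

  #include <cstdint>
  #include <utility>

  typedef std::pair<uint32_t, uint32_t> page_id;   // <space, page_no>

  static page_id next_validation_limit(const page_id& validated)
  {
    // Wrong: ++limit rechecks nearly the same locks on every pass, because
    // most <space, page_no> values stay above a limit incremented by one.
    // Right: restart just past the page that has already been validated.
    return page_id(validated.first, validated.second + 1);
  }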
print page dump
buf_page_print(): Remove the ut_ad(0) from the beginning. Add two flags
(enum buf_page_print_flags) that can be bitwise-ORed together:
BUF_PAGE_PRINT_NO_CRASH:
Do not crash debug builds at the end of buf_page_print().
BUF_PAGE_PRINT_NO_FULL:
Do not print the full page dump. This can be useful when adding
diagnostic printout to flushing or to the doublewrite buffer.
trx_sys_doublewrite_init_or_restore_page(): Replace exit(1) with ut_error,
so that we can get a core dump if this extraordinary condition happens.
rb:924 approved by Sunny Bains
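A hedged sketch of how the two flags combine; the enum values follow the
names above, the function body is illustrative only:

  enum buf_page_print_flags {
    BUF_PAGE_PRINT_NO_CRASH = 1,   // do not assert at the end in debug builds
    BUF_PAGE_PRINT_NO_FULL  = 2    // skip the full hex dump of the page
  };

  static void buf_page_print_sketch(const unsigned char* page, unsigned flags)
  {
    (void) page;
    if (!(flags & BUF_PAGE_PRINT_NO_FULL)) {
      /* print the full page dump here */
    }
    /* print header fields, checksums, index id, ... */
    if (!(flags & BUF_PAGE_PRINT_NO_CRASH)) {
      /* ut_ad(0): crash debug builds to make corruption prominent */
    }
  }

  // Typical diagnostic call, OR-ing the flags together:
  // buf_page_print_sketch(frame, BUF_PAGE_PRINT_NO_CRASH | BUF_PAGE_PRINT_NO_FULL);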
This fix does not remove the underlying cause of the assertion
failure. It just works around the problem, allowing a corrupted
secondary index to be fixed by DROP INDEX and CREATE INDEX (or in the
worst case, by re-creating the table).
ibuf_delete(): If the record to be purged is the last one in the page
or it is not delete-marked, refuse to purge it. Instead, write an
error message to the error log and let a debug assertion fail.
ibuf_set_del_mark(): If the record to be delete-marked is not found,
display some more information in the error log and let a debug
assertion fail.
row_undo_mod_del_unmark_sec_and_undo_update(),
row_upd_sec_index_entry(): Let a debug assertion fail when the record
to be delete-marked is not found.
buf_page_print(): Add ut_ad(0) so that corruption will be more
prominent in stress testing with debug binaries. Add ut_ad(0) here and
there where corruption is noticed.
btr_corruption_report(): Display some data on page_is_comp() mismatch.
btr_assert_not_corrupted(): A wrapper around btr_corruption_report().
Assert that page_is_comp() agrees with the table flags.
rb:911 approved by Inaam Rana
Fix a deadlock in the initial patch. lock_validate() must not hold the
lock system mutex while s-latching a block, because some functions,
such as lock_rec_convert_impl_to_expl(), may be already holding an x-latch
on the block that lock_validate() is interested in while attempting to
acquire the lock system mutex.
This deadlock was not caught by UNIV_SYNC_DEBUG because of
buf_block_dbg_add_level(block, SYNC_NO_ORDER_CHECK).
lock_clust_rec_some_has_impl(), row_get_rec_trx_id(),
lock_rec_queue_validate(), lock_table_other_has_incompatible(),
lock_table_has_to_wait_in_queue(), lock_table_queue_validate():
Add const qualifiers.
row_get_trx_id_offset(): Add const qualifiers. Keep the parameter rec
only in UNIV_DEBUG builds. Inline the function.
lock_rec_validate_page(): Take the buffer block as a parameter, to
avoid a buf_page_get_gen() call in most cases.
lock_rec_validate_page_low(): A version of lock_rec_validate_page()
that assumes that the lock system mutexes are already being held.
lock_rec_get_next_on_page_const(): A const variant of
lock_rec_get_next_on_page().
lock_validate(): Do not release the lock system mutex while
buffer-fixing the block for the lock_rec_validate_page() call.
Releasing the mutex apparently caused the assertion failure.
rb:665 approved by Sunny Bains
It is filling the error log when testing the debug version of the server.
The printout only seems to be useful when debugging a crash, not when
testing an instrumented version of the server.
Fix compiler warning:
lock/lock0lock.c: In function 'lock_print_info_all_transactions':
lock/lock0lock.c:4299:10: error: variable 'page' set but not used [-Werror=unused-but-set-variable]
------------------------------------------------------------
revno: 3495
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Wed 2010-06-02 13:37:14 +0300
message:
Bug#53674: InnoDB: Error: unlock row could not find a 4 mode lock on the record
In semi-consistent read, only unlock freshly locked non-matching records.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
enum db_err: Add DB_SUCCESS_LOCKED_REC for indicating a successful
operation where a record lock was created.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
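A hedged sketch of the new return values; the enum names follow the text
above, while the fast-path body is a simplified stand-in for the real
lock_rec_lock_fast():

  enum lock_rec_req_status {
    LOCK_REC_SUCCESS,           // a suitable lock already existed
    LOCK_REC_SUCCESS_CREATED,   // a new record lock was created
    LOCK_REC_FAIL               // fast path not applicable, use the slow path
  };

  static lock_rec_req_status
  lock_rec_lock_fast_sketch(bool impl, bool queue_empty)
  {
    if (!queue_empty)
      return LOCK_REC_FAIL;            // other locks exist: take the slow path
    if (impl)
      return LOCK_REC_SUCCESS;         // implicit lock, nothing to create
    /* create the explicit record lock here */
    return LOCK_REC_SUCCESS_CREATED;   // callers map this to DB_SUCCESS_LOCKED_REC
  }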
In semi-consistent read, only unlock freshly locked non-matching records.
Define DB_SUCCESS_LOCKED_REC for indicating a successful operation
where a record lock was created.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
------------------------------------------------------------
revno: 3421
revision-id: marko.makela@oracle.com-20100426131029-1ffja69h6n88q6bo
parent: marko.makela@oracle.com-20100426112609-f7lgl8crw4x4sfkk
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Mon 2010-04-26 16:10:29 +0300
message:
lock_rec_queue_validate(): Disable a bogus check that
a transaction that holds a lock on a clustered index record
also holds a lock on the secondary index record.
modified:
storage/innobase/lock/lock0lock.c 2@cee13dc7-1704-0410-992b-c9b4543f1246:trunk%2Flock%2Flock0lock.c
storage/innodb_plugin/lock/lock0lock.c 2@16c675df-0fcb-4bc9-8058-dcc011a37293:trunk%2Flock%2Flock0lock.c
------------------------------------------------------------
Detailed revision comments:
r6545 | jyang | 2010-02-03 03:57:32 +0200 (Wed, 03 Feb 2010) | 8 lines
branches/5.1: Fix bug #49001, "SHOW INNODB STATUS deadlock info
incorrect when deadlock detection aborts". Print the correct
lock owner when recursive function lock_deadlock_recursive()
exceeds its maximum depth LOCK_MAX_DEPTH_IN_DEADLOCK_CHECK.
rb://217, approved by Marko.