buf0buf.cc line 2642.
Analysis: innodb_compression_algorithm is a global variable and
can change while we are building a page compressed page. This could
lead to page corruption.
Fix: Cache innodb_compression_algorithm in a local variable before
doing any compression or page formatting, to avoid a concurrent
change. Improved page verification on debug builds.
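A minimal sketch of the fix, with hypothetical helper names (the real
change is in buf0buf.cc): the global is read exactly once into a local,
so a concurrent SET GLOBAL cannot change the algorithm between
compressing the payload and writing the header.

    #include <cstddef>
    typedef unsigned char byte;
    extern unsigned long innodb_compression_algorithm;  /* the global setting */
    byte* page_compress(const byte*, size_t, unsigned long);  /* hypothetical */
    void  write_compression_header(byte*, unsigned long);     /* hypothetical */

    byte* build_compressed_page(const byte* src, size_t len)
    {
        /* Read the global exactly once; a concurrent SET GLOBAL can no
           longer change the algorithm between compressing the payload
           and stamping the header. */
        const unsigned long comp_algo = innodb_compression_algorithm;

        byte* out = page_compress(src, len, comp_algo);
        write_compression_header(out, comp_algo);  /* header matches payload */
        return out;
    }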
InnoDB: Failing assertion: mutex_own(&buf_pool->LRU_list_mutex)
and
InnoDB: Assertion failure in thread 2868898624 in file buf0lru.c line 1077
InnoDB: Failing assertion: mutex_own(&buf_pool->LRU_list_mutex)
Analysis: The function buf_LRU_free_block() might release the
LRU_list_mutex in some cases to avoid mutex ordering problems; we
need to take it back before accessing the list.
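A sketch of the caller-side pattern, with an assumed signature (the
real interface differs): the helper reports whether it had to drop the
LRU list mutex, and the caller takes it back before touching the list.

    #include <pthread.h>

    struct buf_pool_t { pthread_mutex_t LRU_list_mutex; };
    struct buf_page_t { int dummy; };

    /* Assumed signature: sets *mutex_released if it dropped the mutex. */
    void buf_LRU_free_block_sketch(buf_pool_t*, buf_page_t*, bool* mutex_released);

    void free_from_lru(buf_pool_t* pool, buf_page_t* bpage)
    {
        pthread_mutex_lock(&pool->LRU_list_mutex);

        bool released = false;
        buf_LRU_free_block_sketch(pool, bpage, &released);

        if (released) {
            /* The helper dropped the mutex to respect latching order;
               take it back before accessing the LRU list. */
            pthread_mutex_lock(&pool->LRU_list_mutex);
        }
        /* ... safe to access the list here ... */
        pthread_mutex_unlock(&pool->LRU_list_mutex);
    }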
Analysis: InnoDB also writes files that do not contain a FIL header.
This could lead to incorrect analysis in the os_fil_read_func function
when it tries to determine whether a page is page compressed based on
the FIL_PAGE_TYPE field of the FIL header. With bad luck, in an
uncompressed page that does not contain a FIL header, the byte at the
FIL_PAGE_TYPE position could indicate that the page is page compressed.
Fix: The upper layer must indicate whether a file space is page
compressed or not. If this is not yet known, we need to read the
FIL header and find it out. For files that we know are not page
compressed, we can always just pass FALSE.
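A sketch of the resulting read-path check, with a hypothetical
tri-state hint (the FIL_PAGE_TYPE offset matches the InnoDB page
layout; treat the page-compressed type value as an assumption):

    #include <cstdint>

    enum class PageComp { No, Yes, Unknown };  /* hypothetical hint type */

    static const unsigned FIL_PAGE_TYPE = 24;  /* offset in the FIL header */
    static const uint16_t FIL_PAGE_PAGE_COMPRESSED = 34354;

    bool page_is_compressed(const uint8_t* page, PageComp hint)
    {
        if (hint != PageComp::Unknown) {
            /* The upper layer knows the tablespace: trust it and never
               sniff bytes that may not be a FIL header at all. */
            return hint == PageComp::Yes;
        }
        /* Only when nothing is known do we consult FIL_PAGE_TYPE. */
        uint16_t type = uint16_t(page[FIL_PAGE_TYPE]) << 8
                      | page[FIL_PAGE_TYPE + 1];
        return type == FIL_PAGE_PAGE_COMPRESSED;
    }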
config.h.cmake:
define NOMINMAX, otherwise Windows system headers define min() and max() macros (see the demonstration after this list)
sql/slave.cc:
mi->report() has one more argument in MariaDB
storage/xtradb/buf/buf0flu.cc:
XtraDB fixes for Windows, again
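For the NOMINMAX item above, the effect is easy to demonstrate:

    /* Without NOMINMAX, <windows.h> defines min() and max() as macros,
       which breaks std::min/std::max and any member named min or max. */
    #define NOMINMAX
    #include <windows.h>
    #include <algorithm>

    int smaller(int a, int b)
    {
        return std::min(a, b);  /* compiles only because NOMINMAX is set */
    }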
tool chain
This is an addition to the original patch. On Windows,
InterlockedExchange implies a full memory barrier, whereas
only acquire/release barriers are required.
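Illustrated with standard C++11 atomics rather than the Win32
interlocked API: exchange() with the default seq_cst ordering is the
analogue of InterlockedExchange's full barrier, while a lock word only
needs acquire on lock and release on unlock.

    #include <atomic>

    std::atomic<int> lock_word(0);

    void lock()
    {
        /* an acquire barrier is enough when taking the lock */
        while (lock_word.exchange(1, std::memory_order_acquire)) {
            /* spin */
        }
    }

    void unlock()
    {
        /* a release barrier is enough when giving the lock up */
        lock_word.store(0, std::memory_order_release);
    }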
buf0flu.cc line 549.
Analysis: If buf_page_get_state(bpage) == BUF_BLOCK_REMOVE_HASH then
buf_page_in_file(bpage) might not be true.
Fix: ut_a(buf_page_in_file(bpage) || buf_page_get_state(bpage) == BUF_BLOCK_REMOVE_HASH);
4229: MDEV-5670: Assertion failure in file buf0lru.c line 2355
Add more status information in case the problem is repeatable.
4230: MDEV-5673: Crash while parallel dropping multiple tables under heavy load
Improve long semaphore wait output to include all semaphore waits
and try to find out if there is a sequence of waiters.
4233: Fix compiler errors on product build.
4237: Fix too aggressive long semaphore wait output and add a guard against
introducing compression failures on the insert buffer.
4238: Fix a test failure caused by a simulated compression failure on
the IBUF_DUMMY table.
This problem affects only debug builds on PPC64.
There are at least two race conditions around
rw_lock_debug_mutex_enter and rw_lock_debug_mutex_exit:
- rw_lock_debug_waiters was loaded/stored without setting
appropriate locks/memory barriers.
- there is a gap between calls to os_event_reset() and
os_event_wait(), and in such a case we're supposed to pass the
return value of the former to the latter.
Fixed by replacing the hand-rolled spinlocks with system mutexes.
These days system mutexes offer much better performance. OTOH
performance is not that critical for debug builds.
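A sketch of the shape of the fix, using pthreads directly (the real
code presumably goes through InnoDB's portable mutex wrappers): a
system mutex has no reset/wait window and no hand-maintained waiters
flag.

    #include <pthread.h>

    static pthread_mutex_t rw_lock_debug_mutex = PTHREAD_MUTEX_INITIALIZER;

    void rw_lock_debug_mutex_enter_sketch(void)
    {
        pthread_mutex_lock(&rw_lock_debug_mutex);
    }

    void rw_lock_debug_mutex_exit_sketch(void)
    {
        pthread_mutex_unlock(&rw_lock_debug_mutex);
    }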
If mysql.innodb_table_stats or mysql.innodb_index_stats is not found
or has an unexpected structure, output that error only once, and no
other errors for every table trying to use them. If they do exist,
then print fetch or recalculation errors only once per table or index.
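A sketch of the warn-once pattern with hypothetical flags: one global
flag for complaints about the stats tables themselves, one per-table
flag for fetch/recalculation errors.

    #include <cstdio>

    static bool stats_tables_complained = false;  /* once per server */

    struct table_stats_t {
        bool fetch_error_printed;                 /* once per table */
    };

    void warn_missing_stats_tables(const char* msg)
    {
        if (!stats_tables_complained) {
            fprintf(stderr, "InnoDB: %s\n", msg);
            stats_tables_complained = true;       /* silence later tables */
        }
    }

    void warn_fetch_failed(table_stats_t* stats, const char* table)
    {
        if (!stats->fetch_error_printed) {
            fprintf(stderr, "InnoDB: fetching stats for %s failed\n", table);
            stats->fetch_error_printed = true;    /* silence repeats */
        }
    }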
line 8473
In case an InnoDB index is not found, print the MySQL and InnoDB index
name we were trying to find and all the MySQL and InnoDB index names
there are for this table.
ha_innodb.cc line 8473
If an index is not found from InnoDB, make sure we print what we
were trying to find and all the MySQL and InnoDB index names there
are for this table.
chain
The InnoDB mutex_exit() function calls __sync_test_and_set() to release
the lock. According to the manual this function is supposed to create
an "acquire" memory barrier, whereas in fact we need a "release" memory
barrier at mutex_exit().
The problem isn't repeatable with gcc because it creates an
"acquire-release" memory barrier for __sync_test_and_set().
ATC creates just an "acquire" barrier.
Fixed by creating a proper barrier at mutex_exit() by using
__sync_lock_release() instead of __sync_test_and_set().
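A sketch of the corrected pairing of the GCC-style builtins:
__sync_lock_test_and_set() is documented as an acquire barrier, which
is right for taking the lock, while __sync_lock_release() stores 0
with the release semantics the unlock path needs.

    static volatile int lock_word = 0;

    void mutex_enter_sketch(void)
    {
        /* acquire barrier: later accesses cannot move above this */
        while (__sync_lock_test_and_set(&lock_word, 1)) {
            /* spin until the lock is free */
        }
    }

    void mutex_exit_sketch(void)
    {
        /* release barrier: earlier accesses cannot move below this */
        __sync_lock_release(&lock_word);  /* writes 0 */
    }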
Merge the patches into MariaDB 10.0 main.
With this patch, parallel replication will now automatically retry a
transaction that fails due to deadlock or other temporary error, same as
single-threaded replication.
We catch deadlocks with InnoDB transactions due to enforced commit order. If
T1 must commit before T2 in parallel replication and T1 ends up waiting for T2
inside InnoDB, we kill T2 and retry it later to resolve the deadlock
automatically.
After-review changes.
For this patch in 10.0, we do not introduce a new public storage engine API,
we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public
API that can be used for all storage engines (MDEV-6429).
Eliminate the background thread that did deadlock kills asynchronously.
Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from
inside the deadlock detection code (when thd_report_wait_for() needs to kill a
later thread to resolve a deadlock).
(We preserve the part of the original patch that introduces dedicated mutex
and condition for the slave init thread, to remove the abuse of
LOCK_thread_count for start/stop synchronisation of the slave init thread).
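A sketch of the reporting call from InnoDB's lock-wait path;
thd_report_wait_for() is the server entry point named above, but the
surrounding names and shapes are assumptions.

    struct THD;                        /* server-side connection object */
    struct trx_t { THD* mysql_thd; };  /* assumed shape */

    /* Server call named in this patch; it may kill the other THD if
       commit ordering requires it to commit after the waiter. */
    extern "C" void thd_report_wait_for(THD* waiter, THD* other);

    void lock_report_wait_sketch(trx_t* waiter, trx_t* holder)
    {
        if (waiter->mysql_thd && holder->mysql_thd) {
            thd_report_wait_for(waiter->mysql_thd, holder->mysql_thd);
        }
    }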
in file buf0mtflu.cc line 570.
Analysis: A real timing bug: we should take the mutex before we
try to send the shutdown messages. That would make sure that
threads doing an unfinished flush (they have acquired this mutex)
have time to do their work before we add the shutdown messages to
the work queue. Currently, we just add the shutdown messages to
the work queue, and the code assumes that at flush time there is a
constant number of items to be processed, thus leading to an
assertion.
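A sketch of the corrected ordering with hypothetical queue types:
holding the work-queue mutex while adding the shutdown messages
guarantees that any thread still finishing a flush (it holds this
mutex) completes before the shutdown items become visible.

    #include <pthread.h>
    #include <deque>

    struct work_queue_t {
        pthread_mutex_t mutex;    /* also held by threads doing a flush */
        std::deque<int> items;
    };

    enum { MSG_EXIT = -1 };       /* hypothetical shutdown message */

    void mtflush_shutdown_sketch(work_queue_t* wq, int n_threads)
    {
        /* Blocks until unfinished flushes (which hold the mutex) are done. */
        pthread_mutex_lock(&wq->mutex);
        for (int i = 0; i < n_threads; i++) {
            wq->items.push_back(MSG_EXIT);
        }
        pthread_mutex_unlock(&wq->mutex);
    }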
Analysis: For some reason the table stats for a table pointed to
from an index are not initialized. Added additional warning output
for this situation and table stats initialization. This is better
than asserting.
than with InnoDB plugin
Fix: os0file.h in XtraDB had OS_AIO_N_PENDING_IOS_PER_THREAD 256,
while in InnoDB it is OS_AIO_N_PENDING_IOS_PER_THREAD 32. Changed
XtraDB to also use 32.
* Introduce a set of PLUGIN_xxx cmake options with values
NO, STATIC, DYNAMIC, AUTO, YES (abort if plugin is not compiled)
* Deprecate redundant and ambiguous WITH_xxx, WITH_PLUGIN_xxx,
WITH_xxx_STORAGE_ENGINE, WITHOUT_xxx, WITHOUT_PLUGIN_xxx,
WITHOUT_xxx_STORAGE_ENGINE
* Actually check whether a plugin is disabled (DISABLED keyword was
always present, but it was ignored until now).
* Support conditionally disabled plugins - keyword ONLY_IF
* Use ONLY_IF for conditionally skipping plugins, instead of
doing MYSQL_ADD_PLUGIN conditionally as before. Because if
MYSQL_ADD_PLUGIN isn't done at all, PLUGIN_xxx=YES cannot work.
mark path-related variables (AIO_LIBRARY, ODBC_LIBRARY, ODBC_INCLUDE_DIR,
Thrift_LIBS, Thrift_INCLUDE_DIRS, CRYPTO_LIBRARY, OPENSSL_LIBRARIES,
OPENSSL_ROOT_DIR, OPENSSL_INCLUDE_DIR) as advanced - paths are
automatically discovered by cmake.
mark a few choice variables (ENABLED_LOCAL_INFILE, WITHOUT_SERVER,
DISABLE_SHARED) as not advanced - they are user choices, not automatically
configured values.
remove unused BACKUP_TEST variable.
Auto-generate the allowed list of values for enum/set/flagset options
in --help output. But don't do that when the help text already has them.
Also, remove lists of values from help strings of various options, where
they were simply listed without any additional information.
Analysis: Based on the crash, the buffer pool instance identifier is
not correct on the block to be freed. Add LRU list mutex holding
to the functions calling free, and add additional safety checks.
replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
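A sketch of where the check sits in the lock conflict test;
thd_need_ordering_with() is, to my recollection, the server call added
for this, but treat the name and the placement as assumptions.

    struct THD;
    struct trx_t  { THD* mysql_thd; };
    struct lock_t { trx_t* trx; };

    /* Assumed server call: nonzero if the two transactions' commit order
       is not already fixed, i.e. normal gap-lock semantics are needed. */
    extern "C" int thd_need_ordering_with(const THD* thd, const THD* other);

    bool gap_lock_has_to_wait_sketch(const trx_t* trx, const lock_t* lock2)
    {
        if (!thd_need_ordering_with(trx->mysql_thd, lock2->trx->mysql_thd)) {
            /* Commit order already fixed (parallel replication):
               the gap conflict cannot cause an anomaly, skip the wait. */
            return false;
        }
        return true;  /* normal conflict rules apply */
    }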
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now,
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction.
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
error condition to wrongly propagate to a later wait_for_prior_commit(),
causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, but also it probably is not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
replication causing replication to fail.
In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds eg. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.
Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (eg. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with error.
With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.
The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happen deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to mutexes held.
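A sketch of the asynchronous kill request, with assumed names for the
background thread's mutex, condition, and list:

    #include <list>
    #include <pthread.h>

    struct THD;

    /* Assumed synchronisation objects of the slave background thread. */
    static pthread_mutex_t LOCK_slave_background = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  COND_slave_background = PTHREAD_COND_INITIALIZER;
    static std::list<THD*> slave_kill_list;

    void slave_background_kill_request_sketch(THD* to_kill)
    {
        /* Called from deep inside the lock code; just queue and signal,
           the background thread does the actual THD::awake() later. */
        pthread_mutex_lock(&LOCK_slave_background);
        slave_kill_list.push_back(to_kill);
        pthread_cond_signal(&COND_slave_background);
        pthread_mutex_unlock(&LOCK_slave_background);
    }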
Deadlock is assumed to be (very) rarely occurring, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.
Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.
Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
This patch allows up to 64K pages for tables with DYNAMIC, COMPACT
and REDUNDANT row types. Tables with the COMPRESSED row type still
allow only <= 16K page size. Note that a single row must still be
<= 16K and the max key length is not affected.
on select from I_S.INNODB_CHANGED_PAGES
Analysis: limit_lsn_range_from_condition() incorrectly parses
start_lsn and/or end_lsn conditions.
Fix from SergeyP. Added some test cases.
Analysis: We can't disable the error message, because you may get the
database started with an incorrect log file size.
Fix: Thus, only improve the error message to give more information
to users.
option leaves the database in an inconsistent state,
Analysis: The problem was that the atomic writes variable had an
incorrect type in some places, leading to the fact that e.g. the OFF
option was not recognized. Furthermore, some error checking code was
missing from both the InnoDB and XtraDB engines. Finally, when a table
is created we have already created the .ibd file, and if we can't
set atomic writes it stays there.
Fix: Fix the atomic writes variable type to ulint, as it should be.
Fix: Add proper error code checking for OS errors in both InnoDB
and XtraDB.
Fix: Remove the .ibd file when atomic writes can't be enabled for
a new table.
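A sketch of the create path after the fix, with assumed helper names:
the option lives in a ulint, the OS result is checked, and the freshly
created .ibd file is removed if atomic writes cannot be enabled on it.

    #include <cstdio>

    typedef unsigned long ulint;  /* InnoDB's integer type */

    bool set_atomic_writes_sketch(const char* path);  /* assumed helper */

    int create_tablespace_sketch(const char* path, ulint atomic_writes)
    {
        /* ... the .ibd file has already been created at this point ... */
        if (atomic_writes && !set_atomic_writes_sketch(path)) {
            fprintf(stderr,
                    "InnoDB: unable to enable atomic writes on %s\n", path);
            remove(path);  /* don't leave a stale .ibd behind */
            return -1;     /* propagate the error to the caller */
        }
        return 0;
    }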