Analysis: The problem is that the SQL layer calls the handler API after the
storage engine has already returned an error state. InnoDB performs an
internal rollback when it notices a transaction error (e.g. lock wait
timeout, deadlock), and after that the transaction is not in a correct
state to continue.
Fix: Do not continue fetch operations if the transaction is not started.
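A minimal sketch of that guard in the handler fetch path, assuming a
trx_is_started()-style check (the names and error code here are
illustrative, not the exact patch):

    if (!trx_is_started(prebuilt->trx)) {
        /* InnoDB already rolled the transaction back internally
        (lock wait timeout, deadlock, ...); stop fetching rows
        instead of calling further into the storage engine. */
        DBUG_RETURN(HA_ERR_GENERIC);
    }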
This is an addendum to the fix for MDEV-7026. The ARM memory model is
similar to that of PowerPC and thus needs the same semantics with
respect to memory barriers. That is, os_atomic_test_and_set_*_release()
must be a store with a release barrier followed by a full
barrier. Unlike on x86, __sync_lock_test_and_set(), which here is
implemented as an exclusive load with acquire barriers plus an exclusive
store, is insufficient in the contexts where the
os_atomic_test_and_set_*_release() macros are used.
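A hedged sketch of the required ordering with GCC atomics (this shows the
semantics only, not the exact macro body used in the tree):

    static inline unsigned long
    os_atomic_test_and_set_release_sketch(volatile unsigned long* ptr,
                                          unsigned long new_val)
    {
        /* exclusive store with release ordering ... */
        unsigned long old = __atomic_exchange_n(ptr, new_val,
                                                __ATOMIC_RELEASE);
        /* ... followed by a full barrier, which ARM and PowerPC
        need in the contexts described above */
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
        return(old);
    }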
Analysis: After a red-black tree lookup we used the node without checking
whether the lookup succeeded. This led to a situation where a NULL pointer
was dereferenced.
Fix: Add a check that the node found in the red-black tree is valid.
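A sketch of the added check (rbt_lookup() and DB_RECORD_NOT_FOUND are used
here as illustrative stand-ins for the actual lookup and error handling):

    node = rbt_lookup(tree, key);

    if (node == NULL) {
        /* the key was not found; bail out instead of
        dereferencing a NULL node as before the fix */
        return(DB_RECORD_NOT_FOUND);
    }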
Analysis: On the master, when executing (single/multi) row INSERTs/REPLACEs,
InnoDB falls back to old-style AUTOINC locks (table locks)
only if another transaction has already acquired the AUTOINC lock.
On the slave, however, we are executing log events and sql_command
is not set accordingly, so InnoDB does not use new-style AUTOINC
locks when it could.
Fix: Use new-style AUTOINC locks also when
thd_sql_command(user_thd) == SQLCOM_END, i.e. when this is an RBR event.
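In outline, the condition for the lightweight AUTOINC path becomes the
following (the surrounding lock-mode code is paraphrased; only the
SQLCOM_END case is the point of the fix):

    int cmd = thd_sql_command(user_thd);

    if (cmd == SQLCOM_INSERT || cmd == SQLCOM_REPLACE
        || cmd == SQLCOM_END /* row event applied by the slave SQL thread */) {
        /* simple insert: new-style (lightweight) AUTOINC locking may
        be used instead of the old table-level AUTOINC lock */
    }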
causes server crash
Analysis: If wrong data types were used in a foreign key constraint, there
was a possibility that foreign->id was NULL when the incorrect
foreign constraint was removed from the dictionary cache.
Fix: Add a guard foreign->id != NULL before trying to look up
or remove the foreign constraint from the dictionary cache.
Tested using a user database where the problem was repeatable.
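The guard, sketched (dict_foreign_remove_from_cache() stands in for the
actual lookup/removal calls):

    if (foreign->id != NULL) {
        /* only a constraint with a valid id may be looked up in,
        or removed from, the dictionary cache */
        dict_foreign_remove_from_cache(foreign);
    }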
On PPC64 a highly loaded server may crash due to an assertion failure in
the InnoDB rwlocks code.
This happened because the load order between "recursive" and "writer_thread"
wasn't properly enforced.
Use traditional statistics estimation by default (innodb-stats-traditional=true).
Otherwise there could be a performance regression for customers if there are
a lot of open table operations.
innodb_stats_sample_pages
Analysis: If you set the number of analyzed pages
to a very low number compared to the actual number
of pages in the table/index, those pages are picked
at random (default 8 pages). This leads to the fact
that the same query may return different results
after each ANALYZE TABLE. If the index tree is
small, smaller than 10 * n_sample_pages +
total_external_size, then the estimate is OK. For
bigger index trees it is common that we do not see
any borders between key values in the few pages we
pick, but there may still be n_sample_pages
different key values, or even more, and the code
just approximates the estimate to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable
innodb_stats_sample_traditional that retains
the current design. Default false.
(2) If the traditional sample is not used we use
n_sample_pages = max(min(srv_stats_sample_pages,
                         index->stat_index_size),
                     log2(index->stat_index_size)*
                         srv_stats_sample_pages);
(a sketch of this computation follows the macro below).
(3) Introduced a new dynamic configuration variable
stat_modified_counter (default = 0) which, if set,
bounds the number of row updates after which statistics are re-estimated.
If the user has provided an upper bound for how many rows need to be updated
before we calculate new statistics, we use the minimum of the provided value
and 1/16 of the table, every 16th round. If no upper bound is provided
(srv_stats_modified_counter = 0, the default), then we calculate new
statistics if 1/16 of the table has been modified
since the last time a statistics batch was run.
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
@param t table
@return true if the table has changed too much and stats need to be
recalculated
*/
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
    ((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
      ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
      16 + (t)->stat_n_rows / 16))
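For item (2) above, a sketch of the non-traditional sampling computation
(ut_min()/ut_max(), srv_stats_sample_traditional and index->stat_index_size
are assumed names here; log2() is from <math.h>):

    if (!srv_stats_sample_traditional) {
        n_sample_pages = ut_max(
            ut_min(srv_stats_sample_pages, index->stat_index_size),
            log2(index->stat_index_size) * srv_stats_sample_pages);
    }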
The bug was that full memory barrier was missing in the code that ensures that
a waiter on an InnoDB mutex will not go to sleep unless it is guaranteed to be
woken up again by another thread currently holding the mutex. This made
possible a race where a thread could get stuck waiting for a mutex that is in
fact no longer locked. If that thread was also holding other critical locks,
this could stall the entire server. There is an error monitor thread that
can break the stall; it runs about once per second. But if the error monitor
thread itself got stuck or was not running, then the entire server could hang
indefinitely.
This was introduced on i386/amd64 platforms in 5.5.40 and 10.0.13 by an
incorrect patch that tried to fix the similar problem for PowerPC.
This commit reverts the incorrect PowerPC patch, and instead implements a fix
for PowerPC that does not change i386/amd64 behaviour, making PowerPC work
similarly to i386/amd64.
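The protected protocol is, in outline, the usual set-waiters / full barrier /
re-check sequence; a simplified sketch (not the actual InnoDB code, names
modeled on sync0sync):

    mutex_set_waiters(mutex, 1);    /* announce that we intend to sleep */

    os_mb;      /* full barrier: the waiters flag must be globally visible
                before we re-read the lock word, otherwise the unlocking
                thread may miss it and skip the wake-up */

    if (!mutex_test_and_set(mutex)) {
        return; /* got the mutex after all; no need to sleep */
    }

    sync_array_wait_event(sync_arr, index);     /* now safe to sleep */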
Analysis: The InnoDB error monitor is responsible for calling
sync_arr_wake_threads_if_sema_free() every second to wake up possibly
hanging threads that were missed by mutex_signal_object. This is not
possible if the error monitor itself is on a mutex/semaphore wait. We
should therefore avoid all unnecessary mutex/semaphore waits in the error
monitor. Currently the error monitor calls buf_flush_stat_update(),
which calls log_get_lsn(), and there we try to acquire the
log_sys mutex. A better solution for the error monitor is that in
buf_flush_stat_update() we try to get the lsn with
mutex_enter_nowait() and, if we did not get the mutex, do not update
the stats.
Fix: Use a log_get_lsn_nowait() function in buf_flush_stat_update().
If the returned lsn is 0, we do not update the flush stats.
log_get_lsn_nowait() uses mutex_enter_nowait(): if we get the mutex
we return the correct lsn, otherwise we return 0.
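A sketch of such a log_get_lsn_nowait() (simplified; it assumes, as in
InnoDB, that mutex_enter_nowait() returns 0 when the mutex was acquired):

    lsn_t
    log_get_lsn_nowait(void)
    {
        lsn_t   lsn = 0;

        if (mutex_enter_nowait(&log_sys->mutex) == 0) { /* 0 == acquired */
            lsn = log_sys->lsn;
            mutex_exit(&log_sys->mutex);
        }

        return(lsn);    /* 0 means "log mutex was busy, skip the update" */
    }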
When UNIV_INTERN is missing in built-in XtraDB, this causes the
innodb_plugin to call the XtraDB version of the function instead
of its own (seen in an --embedded-server test failure in Buildbot).
This in turn causes bad things to happen if there are differences
between XtraDB and innodb_plugin.
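For context, UNIV_INTERN marks a function as internal to the engine;
roughly (the exact definition depends on compiler support for symbol
visibility):

    /* roughly what univ.i does on compilers with visibility support */
    #define UNIV_INTERN __attribute__((visibility("hidden")))

Without it, the symbol built into the server is exported and can shadow
the identically named symbol inside the dynamically loaded innodb_plugin.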
MDEV-6483 - Deadlock around rw_lock_debug_mutex on PPC64
This problem affects only debug builds on PPC64.
There are at least two race conditions around
rw_lock_debug_mutex_enter and rw_lock_debug_mutex_exit:
- rw_lock_debug_waiters was loaded/stored without setting
  appropriate locks/memory barriers.
- there is a gap between the calls to os_event_reset() and
  os_event_wait(), and in such a case we are supposed to pass the
  return value of the former to the latter (see the sketch after
  this entry).
Fixed by replacing the hand-rolled spinlocks with system mutexes.
These days system mutexes offer much better performance. OTOH
performance is not that critical for debug builds.
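The os_event_reset()/os_event_wait() pattern referred to in the second
point, sketched (still_need_to_wait() is a hypothetical condition; the
signal count returned by the reset must be carried into the wait so a
wake-up in the gap is not lost):

    ib_int64_t  sig_count = os_event_reset(event);

    if (still_need_to_wait()) {     /* re-check the condition after reset */
        /* waits only if nobody has signalled the event since sig_count
        was taken, so a signal arriving in the gap is not missed */
        os_event_wait_low(event, sig_count);
    }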
MDEV-6450 - MariaDB crash on Power8 when built with advance tool
chain
InnoDB mutex_exit() calls __sync_lock_test_and_set() to release
the lock. According to the manual this function is supposed to create
an "acquire" memory barrier, whereas in fact we need a "release" memory
barrier at mutex_exit().
The problem isn't repeatable with gcc because it creates
an "acquire-release" memory barrier for __sync_lock_test_and_set().
ATC creates just an "acquire" barrier.
Fixed by creating a proper barrier at mutex_exit() by using
__sync_lock_release() instead of __sync_lock_test_and_set().
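The change at mutex_exit(), in outline (the lock_word member name is taken
from InnoDB's mutex struct; simplified):

    /* before: __sync_lock_test_and_set(&mutex->lock_word, 0),
    which ATC compiles with only an acquire barrier */

    __sync_lock_release(&mutex->lock_word);    /* sets the word to 0 with
                                               release semantics */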
Part of this work is based on Stewart Smith's memory barrier and lower
priority patches for Power8.
- Added memory synchronization for InnoDB & XtraDB for Power8.
- Added HAVE_WINDOWS_MM_FENCE to CMakeLists.txt.
- Added os_isync to fix a synchronization problem on Power.
- Added log_get_lsn_nowait, which is now used by srv_error_monitor_thread
  to avoid blocking if the log mutex is locked.
All changes were done both for InnoDB and XtraDB.
InnoDB: Failing assertion: mutex_own(&buf_pool->LRU_list_mutex)
and
InnoDB: Assertion failure in thread 2868898624 in file buf0lru.c line 1077
InnoDB: Failing assertion: mutex_own(&buf_pool->LRU_list_mutex)
Analysis: The function buf_LRU_free_block() might release LRU_list_mutex in
some cases to avoid mutex ordering problems; we need to take it back
before accessing the list.
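A sketch of the re-acquisition pattern (the exact buf_LRU_free_block()
signature and the flag name are assumptions here):

    ibool   mutex_released = FALSE;

    freed = buf_LRU_free_block(bpage, zip, &mutex_released);

    if (mutex_released) {
        /* buf_LRU_free_block() dropped LRU_list_mutex to keep the
        mutex ordering; re-acquire it before touching the LRU list */
        mutex_enter(&buf_pool->LRU_list_mutex);
    }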