MariaDB will likely never support MySQL-style encryption for
InnoDB, because we cannot link with the Oracle encryption plugin.
This is preparation for merging MDEV-11623.
fil_space_crypt_cleanup(): Call mutex_free() to pair with
fil_space_crypt_init().
fil_space_destroy_crypt_data(): Call mutex_free() to pair with
fil_space_create_crypt_data() and fil_space_read_crypt_data().
fil_crypt_threads_cleanup(): Call mutex_free() to pair with
fil_crypt_threads_init().
fil_space_free_low(): Invoke fil_space_destroy_crypt_data().
fil_close(): Invoke fil_space_crypt_cleanup(), just like
fil_init() invoked fil_space_crypt_init().
Datafile::shutdown(): Set m_crypt_info=NULL without dereferencing
the pointer. The object will be freed along with the fil_space_t
in fil_space_free_low().
Remove some unnecessary conditions (ut_free(NULL) is OK).
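For illustration, the kind of simplification this means (a minimal
sketch; the actual call sites vary):

/* Before: redundant NULL check around the free call */
if (crypt_data != NULL) {
	ut_free(crypt_data);
}

/* After: ut_free(), like free(), accepts NULL as a no-op */
ut_free(crypt_data);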
srv_shutdown_all_bg_threads(): Shut down the encryption threads
by calling fil_crypt_threads_end().
srv_shutdown_bg_undo_sources(): Do not prematurely call
fil_crypt_threads_end(). Many pages can still be written by
change buffer merge, rollback of incomplete transactions, and
purge, especially in slow shutdown (innodb_fast_shutdown=0).
innobase_shutdown_for_mysql(): Call fil_crypt_threads_cleanup()
also when innodb_read_only=1, because the threads will have been
created in that case as well.
sync_check_close(): Re-enable the invocation of sync_latch_meta_destroy()
to free the mutex list.
Threads may fall asleep forever while acquiring an InnoDB rw-lock on
Power8. This regression was introduced along with the InnoDB atomic
operations fixes.
The problem was that proper memory order wasn't enforced between the
"waiters" store and the "lock_word" load:
my_atomic_store32((int32*) &lock->waiters, 1);
...
local_lock_word = lock->lock_word;
The locking protocol is such that the store to "waiters" must complete
before the load of "lock_word". my_atomic_store32() was expected to
enforce this order because it issues the strongest memory barrier,
MY_MEMORY_ORDER_SEQ_CST.
According to C11:
- any operation with this memory order is both an acquire operation
  and a release operation;
- for an atomic store, the order must be one of memory_order_relaxed,
  memory_order_release or memory_order_seq_cst; otherwise the behavior
  is undefined.
That is, the standard doesn't say explicitly that this expectation is
wrong, but there are indications that the acquire operation (which is
what actually guarantees memory order in this case) may not be issued
along with MY_MEMORY_ORDER_SEQ_CST on a plain store.
A good fix for this would be to encode waiters into lock_word, but
that is a bit too intrusive. Instead we change the atomic store to an
atomic fetch-and-store, which performs a memory load and is thus
guaranteed to issue an acquire.
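A sketch of the change, assuming MariaDB's my_atomic_fas32()
fetch-and-store primitive:

/* Before: a plain store; the acquire part may not be issued */
my_atomic_store32((int32*) &lock->waiters, 1);

/* After: fetch-and-store is a read-modify-write, so it performs a
   load and is guaranteed to act as an acquire; the later lock_word
   load cannot be reordered before it */
(void) my_atomic_fas32((int32*) &lock->waiters, 1);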
Simplify away the recursive flag: it is not necessary for rw-locks to
operate properly. Now writer_thread != 0 means recursive.
As the correct value of writer_thread is needed only in the writer
thread itself, it is safe to load and update it non-atomically.
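A sketch of the resulting recursion check (simplified; the exact code
may differ):

/* writer_thread != 0 now means the lock is held recursively. Only the
   holding thread compares against its own id, so a plain load is safe. */
if (lock->writer_thread == os_thread_get_curr_id()) {
	/* recursive acquisition by the thread already holding the lock */
}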
This patch also fixes a potential reordering of the "writer_thread"
and "recursive" loads (aka MDEV-7148), which was reopened along with
the InnoDB thread fence simplification. Previous versions are
unaffected, because they have os_rmb in rw_lock_lock_word_decr(). It
wasn't observed at the moment of writing, though.
On PPC64, a highly loaded server may crash due to an assertion failure
in the InnoDB rwlocks code.
This happened because the load order between "recursive" and
"writer_thread" wasn't properly enforced.
Clean up the periodic mutex/rwlock waiter wake-up. This was a hack
needed to work around broken mutex/rwlock implementations. We must
have sane implementations now and don't need it anymore: the releasing
thread is guaranteed to wake up waiters.
Removed a redundant #ifdef that had equivalent code in both branches.
Removed os0atomic.h and os0atomic.ic: not used anymore.
Cleaned up unused CMake checks.
There is no point in issuing a RELEASE memory barrier in
os_thread_create_func(): thread creation is a full memory barrier.
There is no point in issuing os_wmb in rw_lock_set_waiter_flag() and
rw_lock_reset_waiter_flag(): this is dead code, and it is unlikely to
be operational anyway. If atomic builtins are unavailable, memory
barriers are most certainly unavailable too.
The RELEASE memory barrier is definitely abused in
buf_pool_withdraw_blocks(): most probably it was supposed to commit a
volatile variable update, which is not what memory barriers actually
do. To operate properly it would need a corresponding ACQUIRE barrier
without an associated atomic operation anyway.
The ACQUIRE memory barrier is definitely abused in log_write_up_to():
most probably it was supposed to synchronize a dirty read of
log_sys->write_lsn. To operate properly it would need a corresponding
RELEASE barrier without an associated atomic operation anyway.
Removed a bunch of ACQUIRE memory barriers from the InnoDB rwlocks.
They are meaningless without corresponding RELEASE memory barriers.
A valid usage example of memory barriers without an associated atomic
operation:
http://en.cppreference.com/w/cpp/atomic/atomic_thread_fence
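For reference, a minimal C++11 sketch of such paired fences
(illustrative, not code from the patch):

#include <atomic>
#include <cassert>

std::atomic<bool> ready(false);
int payload; /* plain, non-atomic data */

void producer()
{
	payload = 42;
	std::atomic_thread_fence(std::memory_order_release); /* pairs with the acquire fence */
	ready.store(true, std::memory_order_relaxed);
}

void consumer()
{
	if (ready.load(std::memory_order_relaxed)) {
		std::atomic_thread_fence(std::memory_order_acquire);
		assert(payload == 42); /* guaranteed: the fences synchronize */
	}
}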
Contains also:
MDEV-10549 mysqld: sql/handler.cc:2692: int handler::ha_index_first(uchar*): Assertion `table_share->tmp_table != NO_TMP_TABLE || m_lock_type != 2' failed. (branch bb-10.2-jan)
Unlike MySQL, InnoDB still uses THR_LOCK in MariaDB
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
Enable tests that were fixed in MDEV-10549.
MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
Fix main.innodb_mysql_sync: re-enable online ALTER for partitioned InnoDB tables.
Contains also:
MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
The failure happened because 5.7 changed the signature of the virtual
function bool handler::primary_key_is_clustered() const ("const" was
added). InnoDB was still using the old signature, so its function no
longer overrode the base class one and was never called.
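A minimal sketch of this failure mode (a hypothetical simplification,
not the actual class hierarchy):

struct handler {
	virtual bool primary_key_is_clustered() const { return false; }
};

struct ha_innobase : public handler {
	/* Old signature without "const": declares a new, unrelated
	   function instead of overriding the base class virtual. */
	/* bool primary_key_is_clustered() { return true; } */

	/* Fixed signature; marking it C++11 "override" would turn such
	   a silent mismatch into a compile-time error. */
	bool primary_key_is_clustered() const { return true; }
};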
MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
Fixed a mutexing problem in lock_trx_handle_wait(). Note that the
rpl_parallel and rpl_optimistic_parallel tests still fail.
MDEV-10156: Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
Reason: incorrect merge
MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
Reason: incorrect merge
The problem was that a std::vector was allocated using calloc instead
of new; this caused the vector constructor not to be called and the
vector metadata to be left uninitialized.
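A minimal sketch of the bug and the fix (illustrative; not the actual
call site):

#include <cstdlib>
#include <vector>

/* Broken: the memory is zeroed, but the constructor never runs, so
   the vector's internal pointers and size are never initialized. */
std::vector<int>* bad = static_cast<std::vector<int>*>(
	calloc(1, sizeof(std::vector<int>)));
/* bad->push_back(1); would be undefined behavior */
free(bad);

/* Fixed: new runs the constructor and initializes the metadata. */
std::vector<int>* good = new std::vector<int>();
good->push_back(1);
delete good;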
There is a bug in Visual Studio 2010. Visual Studio has a feature
called "Checked Iterators": in a debug build, every iterator operation
is checked at runtime for errors, e.g. out of range. Disable "Checked
Iterators" on Windows in debug builds.
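A sketch of how this is typically done (the macro is MSVC's documented
knob; whether the actual change uses it is an assumption):

/* MSVC 2010+: _ITERATOR_DEBUG_LEVEL controls checked iterators. It
   must be defined consistently across all translation units, before
   any standard library header is included. */
#if defined(_MSC_VER) && defined(_DEBUG)
#define _ITERATOR_DEBUG_LEVEL 0	/* disable checked iterators */
#endif
#include <vector>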
The problem was that a static array was used for storing thread mutex
sync levels. Fixed by using std::vector instead.
Does not contain a test case, to avoid too big memory/disk space usage
on buildbot VMs.
Re-applied a revision that was lost in the merge:
commit ed313e8a92
Author: Sergey Vojtovich <svoj@mariadb.org>
Date: Mon Dec 1 14:58:29 2014 +0400
MDEV-7148 - Recurring: InnoDB: Failing assertion: !lock->recursive
On PPC64, a highly loaded server may crash due to an assertion
failure in the InnoDB rwlocks code.
This happened because the load order between "recursive" and
"writer_thread" wasn't properly enforced.
MDEV-7399: Add support for INFORMATION_SCHEMA.INNODB_MUTEXES
MDEV-7618: Improve semaphore instrumentation
Introduced two new information schema tables to monitor mutex waits
and semaphore waits. Added a new configuration variable
innodb_instrument_semaphores to record the thread_id, file name and
line number of the current holder of each mutex/rw_lock.
The bug was that a full memory barrier was missing in the code that
ensures that a waiter on an InnoDB mutex will not go to sleep unless
it is guaranteed to be woken up again by another thread currently
holding the mutex. This made possible a race where a thread could get
stuck waiting for a mutex that is in fact no longer locked. If that
thread was also holding other critical locks, this could stall the
entire server. There is an error monitor thread that can break the
stall; it runs about once per second. But if the error monitor thread
itself got stuck or was not running, then the entire server could hang
indefinitely.
This was introduced on the i386/amd64 platforms in 5.5.40 and 10.0.13
by an incorrect patch that tried to fix a similar problem for PowerPC.
This commit reverts the incorrect PowerPC patch and instead implements
a fix for PowerPC that does not change i386/amd64 behaviour, making
PowerPC work similarly to i386/amd64.
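The protocol in question, sketched with C++11 primitives (the helper
names are hypothetical; InnoDB uses its own os_event/atomic wrappers):

#include <atomic>

std::atomic<int> waiters(0);

bool mutex_is_still_locked();	/* hypothetical re-check of the lock word */
void sleep_until_woken();	/* hypothetical wait on the mutex event */
void release_mutex();		/* hypothetical release of the lock word */
void wake_up_waiters();		/* hypothetical event signal */

void waiter_side()
{
	waiters.store(1, std::memory_order_relaxed); /* announce intent to sleep */
	std::atomic_thread_fence(std::memory_order_seq_cst); /* the full barrier */
	if (mutex_is_still_locked()) {	/* re-check AFTER the barrier */
		sleep_until_woken();	/* safe: the releaser must see waiters == 1 */
	}
}

void releaser_side()
{
	release_mutex();
	std::atomic_thread_fence(std::memory_order_seq_cst); /* the full barrier */
	if (waiters.load(std::memory_order_relaxed)) {
		wake_up_waiters();
	}
}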
Merged Facebook's commit 6e06bbfa315ffb97d713dd6e672d6054036ddc21,
authored by Inaam Rana, from https://github.com/facebook/mysql-5.6.
Fixes MySQL bug http://bugs.mysql.com/bug.php?id=72123
The lock_timeout thread works in a tight loop, waking up every second
and checking for lock_wait_timeout. In addition, when a mysql thread
is forced to wait on a lock, it signals the lock_timeout thread as
well. This call is not required. In a heavily contended workload, each
thread going to wait will signal the lock_timeout thread, making it
work all the time. As the lock_timeout thread scans the array of
waiting threads under lock_sys::wait_mutex, which is already very hot
in contended loads, these extra scans can cause a significant
performance regression.
Also, in various codepaths the lock_timeout thread was signalled where
the actual intention was to signal the innodb monitor thread.
MDEV-6483 - Deadlock around rw_lock_debug_mutex on PPC64
This problem affects only debug builds on PPC64.
There are at least two race conditions around
rw_lock_debug_mutex_enter and rw_lock_debug_mutex_exit:
- rw_lock_debug_waiters was loaded/stored without appropriate
  locks/memory barriers.
- there is a gap between the calls to os_event_reset() and
  os_event_wait(), and in such a case we are supposed to pass the
  return value of the former to the latter (see the sketch below).
Fixed by replacing the self-cooked spinlocks with system mutexes.
These days system mutexes offer much better performance; OTOH,
performance is not that critical for debug builds.
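For reference, the reset/wait protocol that closes that gap, sketched
with the InnoDB os_event API (signatures approximated from the source;
treat as illustrative):

/* Snapshot the event's signal count while resetting the event. */
int64_t sig_count = os_event_reset(event);

/* Re-check the wait condition here. If another thread sets the event
   in this gap, the event's signal count advances past sig_count ... */

/* ... and the wait below then returns immediately instead of sleeping
   forever on a wakeup that has already happened. */
os_event_wait_low(event, sig_count);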