Provide some statistics about asynchronous IO reads and writes:
- number of pending operations
- number of completion callbacks that are currently being executed
- number of completion callbacks that are currently queued
(due to the restriction on the number of IO threads)
- total number of IOs finished
- total time spent waiting for a free IO slot
- total number of completions that were queued
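A minimal sketch of what such a counter block could look like (the struct
and field names here are illustrative, not the actual tpool interface):

  // Illustrative sketch only; names are hypothetical.
  #include <atomic>
  #include <cstdint>

  struct aio_statistics
  {
    std::atomic<uint64_t> pending_ops{0};        // submitted, not yet finished
    std::atomic<uint64_t> running_callbacks{0};  // completion callbacks executing now
    std::atomic<uint64_t> queued_callbacks{0};   // callbacks waiting for an IO thread
    std::atomic<uint64_t> total_ios{0};          // total IOs finished
    std::atomic<uint64_t> total_slot_wait_ns{0}; // total wait for a free IO slot
    std::atomic<uint64_t> total_queued{0};       // total completions ever queued
  };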
Also revert tpool InnoDB perfschema instrumentation (MDEV-31048)
That instrumentation of the cache mutex did not yield any insight
(the mutex is only held for a couple of instructions), and it made it
impossible to use tpool outside of the server (e.g. in mariadb-import/dump).
When the constant OS_AIO_N_PENDING_IOS_PER_THREAD is changed from 256 to 1
and the server is run with the minimum parameters
innodb_read_io_threads=1 and innodb_write_io_threads=2, two hangs
were observed.
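For context, the slot pools are sized per IO thread, roughly as in this
hedged sketch (the helper functions are illustrative, not the actual
sizing code):

  // Sketch: with the constant at 1, innodb_read_io_threads=1 and
  // innodb_write_io_threads=2 leave only 1 read slot and 2 write slots,
  // which makes slot exhaustion (and thus the hangs) easy to reproduce.
  static constexpr unsigned OS_AIO_N_PENDING_IOS_PER_THREAD= 256;

  unsigned n_read_slots(unsigned read_io_threads)
  { return read_io_threads * OS_AIO_N_PENDING_IOS_PER_THREAD; }

  unsigned n_write_slots(unsigned write_io_threads)
  { return write_io_threads * OS_AIO_N_PENDING_IOS_PER_THREAD; }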
tpool::cache<T>::put(T*): Ensure that get() in io_slots::acquire()
will be woken up when the cache was previously empty.
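A simplified sketch of the shape of this fix, assuming a
mutex/condition-variable based cache (not the actual tpool::cache
implementation):

  #include <condition_variable>
  #include <mutex>
  #include <vector>

  template <typename T> class cache
  {
    std::mutex m_mtx;
    std::condition_variable m_cv;
    std::vector<T*> m_cache;
  public:
    void put(T *ele)
    {
      std::lock_guard<std::mutex> lk(m_mtx);
      const bool was_empty= m_cache.empty();
      m_cache.push_back(ele);
      if (was_empty)
        m_cv.notify_one();  // the fix: wake a get() blocked on an empty cache
    }
    T *get()
    {
      std::unique_lock<std::mutex> lk(m_mtx);
      m_cv.wait(lk, [this]{ return !m_cache.empty(); });
      T *ele= m_cache.back();
      m_cache.pop_back();
      return ele;
    }
  };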
buf_pool_t::io_buf_t::reserve(): Schedule a possibly partial doublewrite
batch so that os_aio_wait_until_no_pending_writes() has a chance of
returning. Add a Boolean parameter and pass wait_for_reads=false inside
buf_page_decrypt_after_read(), because those calls will be executed
inside a read completion callback, and therefore
os_aio_wait_until_no_pending_reads() would block indefinitely.
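In outline, the described control flow looks roughly like this (a hedged
sketch against the InnoDB sources; try_reserve() is a hypothetical
stand-in for the slot scan):

  // Sketch of the described retry loop, not the actual InnoDB code.
  buf_tmp_buffer_t *buf_pool_t::io_buf_t::reserve(bool wait_for_reads)
  {
    for (;;)
    {
      if (buf_tmp_buffer_t *slot= try_reserve())   // hypothetical helper
        return slot;
      // Schedule a possibly partial doublewrite batch so that the
      // following wait has a chance of returning:
      buf_dblwr.flush_buffered_writes();
      os_aio_wait_until_no_pending_writes();
      // wait_for_reads=false when called from buf_page_decrypt_after_read(),
      // which runs inside a read completion callback; waiting for reads
      // there would block indefinitely.
      if (wait_for_reads)
        os_aio_wait_until_no_pending_reads();
    }
  }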
tpool::cache::m_mtx: Add PERFORMANCE_SCHEMA instrumentation
(wait/synch/mutex/innodb/tpool_cache_mutex). This covers the
InnoDB read_slots and write_slots for asynchronous data page I/O.
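For reference, a hedged sketch of the usual PSI registration pattern for
such an instrument, assuming the server's mysql/psi headers (not the
exact InnoDB registration code):

  static PSI_mutex_key key_tpool_cache_mutex;

  static PSI_mutex_info tpool_cache_mutex_info[]=
  {
    {&key_tpool_cache_mutex, "tpool_cache_mutex", 0}
  };

  static void register_tpool_instruments()
  {
    // Registering under the "innodb" category yields the instrument
    // wait/synch/mutex/innodb/tpool_cache_mutex; the cache constructor
    // would then pass key_tpool_cache_mutex to mysql_mutex_init()
    // when creating m_mtx.
    mysql_mutex_register("innodb", tpool_cache_mutex_info, 1);
  }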
Removed the use of std::vector's push_back() and pop_back() to make it
more obvious that memory in the vectors won't be reallocated.
Also, "borrowed" elements can be debugged a little better now:
they are placed at the start of the m_cache vector.
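The idea in a small illustrative sketch (not the actual code): the vector
is filled once, and a partition index separates borrowed elements at the
front from free ones behind it:

  #include <cassert>
  #include <cstddef>
  #include <utility>
  #include <vector>

  template <typename T> class cache
  {
    std::vector<T*> m_cache;  // filled once; never reallocates afterwards
    size_t m_pos= 0;          // [0, m_pos) borrowed, [m_pos, size()) free
  public:
    explicit cache(std::vector<T*> items) : m_cache(std::move(items)) {}
    T *get()                  // borrow: the element stays visible up front
    {
      assert(m_pos < m_cache.size());
      return m_cache[m_pos++];
    }
    void put(T *ele)          // return the element to the free region
    {
      assert(m_pos > 0);
      m_cache[--m_pos]= ele;
    }
  };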
1. Fix places where data race warnings were relevant.
tls_worker_data::m_state should be modified under mutex protection,
since both the maintenance timer and the current worker set this flag.
2. Suppress warnings that are legitimate, yet harmless.
Examples are the dirty reads in waitable_task::get_ref_count() and
write_slots->pending_io_count().
Avoiding the race entirely without side effects here is tricky,
and the effects of the race are harmless: the worst that can happen
is an extra wait notification, under rare circumstances.
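For illustration, a standard way to keep such an intentionally imprecise
read while satisfying race detectors is a relaxed atomic load (this shows
the general technique, not necessarily the exact suppression applied here):

  #include <atomic>

  std::atomic<int> m_ref_count{0};

  // Intentionally imprecise: the value may be stale, which is harmless
  // here; at worst it causes an extra wait notification. The relaxed
  // load makes the access well-defined, so tools stop flagging it.
  int get_ref_count()
  {
    return m_ref_count.load(std::memory_order_relaxed);
  }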
- wait notification, tpool_wait_begin/tpool_wait_end - to notify the
threadpool that the current thread is going to wait.
Use it to wait for IOs to complete, and also when purge waits for workers.
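Typical usage brackets the blocking call; in this sketch,
wait_for_pending_io() is a hypothetical placeholder for the actual wait:

  tpool_wait_begin();     // tell the pool this thread is about to block
  wait_for_pending_io();  // hypothetical blocking wait
  tpool_wait_end();       // the thread counts as runnable again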
The library is capable of
- asynchronous execution of tasks (and optionally waiting for them)
- asynchronous file IO
This is implemented using libaio on Linux and completion ports on
Windows. Elsewhere, async IO is "simulated", which means worker threads
perform synchronous IO (see the sketch after this list).
- timers, scheduling work asynchronously at some point in the future.
Periodic timers are also implemented.
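To make the "simulated" variant concrete, here is a hedged sketch assuming
POSIX pread(): a worker thread drains a request queue, performs the IO
synchronously, and then runs the completion callback (illustrative only,
not the tpool implementation):

  #include <condition_variable>
  #include <functional>
  #include <mutex>
  #include <queue>
  #include <thread>
  #include <unistd.h>

  struct io_request
  {
    int fd;
    void *buf;
    size_t len;
    off_t offset;
    std::function<void(ssize_t)> on_complete;  // completion callback
  };

  class simulated_aio
  {
    std::mutex m_mtx;
    std::condition_variable m_cv;
    std::queue<io_request> m_queue;
    bool m_stop= false;
    std::thread m_worker{[this] { run(); }};   // started last: members ready

    void run()
    {
      for (;;)
      {
        std::unique_lock<std::mutex> lk(m_mtx);
        m_cv.wait(lk, [this] { return m_stop || !m_queue.empty(); });
        if (m_queue.empty())
          return;                              // stopped and drained
        io_request r= std::move(m_queue.front());
        m_queue.pop();
        lk.unlock();
        // The "asynchronous" read is a synchronous one on a worker thread.
        r.on_complete(pread(r.fd, r.buf, r.len, r.offset));
      }
    }
  public:
    void submit_read(io_request r)
    {
      std::lock_guard<std::mutex> lk(m_mtx);
      m_queue.push(std::move(r));
      m_cv.notify_one();
    }
    ~simulated_aio()
    {
      { std::lock_guard<std::mutex> lk(m_mtx); m_stop= true; }
      m_cv.notify_one();
      m_worker.join();
    }
  };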