Fixed inlining flags. Remove /Ob1 added by CMake for RelWithDebInfo.
(the actual compiler default is /Ob2 if optimizations are enabled)
Allow a custom /Ob flag to be defined with the new variable MSVC_INLINE, if desired.
- the regression was revealed while running a tpcc workload.
- as part of the MDEV-25919 changes, the logic for statistics computation was
  revamped.
- if a table has changed beyond a certain threshold, the table is added to
  the statistics recomputation queue (dict_stats_recalc_pool_add).
- after the table is added to the queue, the background statistics thread is
  notified.
- during the revamp, the condition for notifying the background statistics
  thread was wrongly updated to check whether the queue/vector is empty,
  when it should check whether the queue/vector has entries to process.
- vec.begin() == vec.end() holds only when the vector is empty.
- also, accessing these iterators while the vector is being modified in
  parallel is not safe.
- the fix is to notify the background statistics thread whenever the logic
  adds an entry to the queue/vector (see the sketch below).
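A minimal sketch of the corrected notification pattern, with std::vector,
std::mutex and std::condition_variable as stand-ins for the real InnoDB
objects (all identifiers below are illustrative, not the actual dict0stats
code):

  #include <algorithm>
  #include <condition_variable>
  #include <cstdint>
  #include <mutex>
  #include <vector>

  typedef uint64_t table_id_t;             // illustrative stand-in
  static std::vector<table_id_t> recalc_pool;
  static std::mutex recalc_pool_mutex;
  static std::condition_variable recalc_pool_cond;

  static void dict_stats_recalc_pool_add_sketch(table_id_t id)
  {
    bool added= false;
    {
      std::lock_guard<std::mutex> lock(recalc_pool_mutex);
      if (std::find(recalc_pool.begin(), recalc_pool.end(), id)
          == recalc_pool.end())
      {
        recalc_pool.push_back(id);
        added= true;
      }
    }
    // Wrong (before the fix): notify only if the vector looked "empty",
    // checked via iterators of a concurrently modified vector.
    // Right (after the fix): notify whenever an entry was actually added.
    if (added)
      recalc_pool_cond.notify_one();
  }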
row_sel_sec_rec_is_for_clust_rec() treats an empty BLOB prefix field in the
secondary index as a field equal to any externally stored BLOB field in the
clustered index. Row_sel_get_clust_rec_for_mysql::operator() does not zero
out the clustered record pointer in row_search_mvcc(), so row_search_mvcc()
thinks that the delete-marked secondary index record has an old-versioned
clustered index record that is visible to "CHECK TABLE"'s read view, and
row_scan_index_for_mysql() counts it as a row.
The fix is to execute row_sel_sec_rec_is_for_blob() in
row_sel_sec_rec_is_for_clust_rec() if the clustered field contains a BLOB
reference.
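A hedged sketch of the shape of the fix, using simplified stand-in types;
sec_prefix_matches_blob() below is a placeholder for
row_sel_sec_rec_is_for_blob(), which in InnoDB fetches and compares the BLOB
prefix:

  #include <cstddef>
  #include <cstring>

  // Simplified, illustrative types; not the actual InnoDB structures.
  struct clust_field_t
  {
    const unsigned char *data;    // in-record bytes (prefix for external fields)
    size_t len;
    bool is_externally_stored;    // the field is a BLOB reference
  };

  // Stand-in for row_sel_sec_rec_is_for_blob().
  static bool sec_prefix_matches_blob(const unsigned char *sec, size_t sec_len,
                                      const clust_field_t &clust)
  {
    return sec_len <= clust.len && !memcmp(sec, clust.data, sec_len);
  }

  static bool sec_field_is_for_clust_field(const unsigned char *sec,
                                           size_t sec_len,
                                           const clust_field_t &clust)
  {
    if (clust.is_externally_stored)
      // Before the fix, an empty secondary prefix was treated as matching any
      // external BLOB; now the BLOB comparison is executed unconditionally.
      return sec_prefix_matches_blob(sec, sec_len, clust);
    return sec_len == clust.len && !memcmp(sec, clust.data, sec_len);
  }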
Remove unnecessary #ifdefs and dead code in Spider.
ha_spider::start_bulk_insert() without the uint flags argument is not used,
because handler::ha_start_bulk_insert() calls start_bulk_insert() with flags
for each storage engine. Therefore, the dead code and #ifdefs related to
SPIDER_HANDLER_START_BULK_INSERT_HAS_FLAGS have been removed.
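For illustration only, a sketch of the surviving declaration shape, assuming
the MariaDB handler API in which handler::ha_start_bulk_insert() always
forwards the flags (types and names below are simplified stand-ins):

  typedef unsigned long long ha_rows;   // stand-in for MariaDB's ha_rows

  class ha_spider_sketch
  {
  public:
    // The only variant that MariaDB ever calls:
    // handler::ha_start_bulk_insert(rows, flags) forwards both arguments.
    void start_bulk_insert(ha_rows rows, unsigned int flags);

    // Removed dead code: a flag-less overload that was only compiled in the
    // #else branch of #ifdef SPIDER_HANDLER_START_BULK_INSERT_HAS_FLAGS
    // and never invoked.
    // void start_bulk_insert(ha_rows rows);
  };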
The assertion failure was caused by this query:
select /*id=1*/ from t1
where
col= ( select /*id=2*/ from ... where corr_cond1
union
select /*id=4*/ from ... where corr_cond2)
Here,
- select with id=2 was correlated due to corr_cond1.
- select with id=4 was initially correlated due to corr_cond2, but then
the optimizer optimized away the correlation, making the select with id=4
uncorrelated.
However, since the select with id=2 remained correlated, the execution had to
re-compute the whole UNION. When it tried to execute the select with id=4, it
hit an assertion (the join buffer had already been freed).
This is because the select with id=4 had freed its execution structures after
it was executed once. The select is uncorrelated, so it did not expect to be
executed a second time.
Fixed this by adding this logic in
st_select_lex::optimize_unflattened_subqueries():
If a member of a UNION is correlated, mark all its members as
correlated, so that they are prepared to be executed multiple times.
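A hedged sketch of that logic, with simplified stand-in structures mirroring
the st_select_lex chain of a UNION (the actual fix operates on the sql_lex
objects inside optimize_unflattened_subqueries()):

  // Minimal self-contained sketch; hypothetical simplified structures.
  struct select_lex_sketch
  {
    select_lex_sketch *next;
    bool is_correlated;
  };

  static void propagate_union_correlation(select_lex_sketch *first_member)
  {
    bool any_correlated= false;
    for (auto *sl= first_member; sl; sl= sl->next)
      any_correlated|= sl->is_correlated;
    if (!any_correlated)
      return;
    // If any member of the UNION is correlated, mark all members as
    // correlated so that each keeps its execution structures and can be
    // re-executed for every outer row.
    for (auto *sl= first_member; sl; sl= sl->next)
      sl->is_correlated= true;
  }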
Fixed tpool::pread() and tpool::pwrite() to return SSIZE_T on Windows,
so that huge numbers are not converted to negative values.
Also, make sure to never attempt reading/writing more bytes than a
DWORD can accommodate (4G).
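A simplified sketch of the Windows-side shape, using ReadFile() with an
OVERLAPPED offset; the clamping and the helper name are illustrative, not the
actual tpool code:

  #include <windows.h>
  #include <cstddef>

  // Return a signed 64-bit count so results above 2GiB do not wrap to
  // negative, and never ask ReadFile() for more than a DWORD can hold.
  static SSIZE_T pread_sketch(HANDLE h, void *buf, size_t count,
                              unsigned long long offset)
  {
    if (count > 0xFFFFFFFF)
      count= 0xFFFFFFFF;                  // DWORD limit; caller must loop
    OVERLAPPED ov{};
    ov.Offset= (DWORD) offset;
    ov.OffsetHigh= (DWORD) (offset >> 32);
    DWORD n_read= 0;
    if (!ReadFile(h, buf, (DWORD) count, &n_read, &ov))
      return -1;
    return (SSIZE_T) n_read;
  }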
A prominent bottleneck in mtr_t::commit() is log_sys.mutex between
log_sys.append_prepare() and log_close().
User-visible change: The minimum innodb_log_file_size will be
increased from 1MiB to 4MiB so that some conditions can be
trivially satisfied.
log_sys.latch (log_latch): Replaces log_sys.mutex and
log_sys.flush_order_mutex. Copying mtr_t::m_log to
log_sys.buf is protected by a shared log_sys.latch.
Writes from log_sys.buf to the file system will be protected
by an exclusive log_sys.latch.
log_sys.lsn_lock: Protects the allocation of log buffer
in log_sys.append_prepare().
sspin_lock: A simple spin lock, for log_sys.lsn_lock.
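A simplified sketch of the intended locking pattern, with std::shared_mutex
and std::mutex standing in for log_sys.latch and log_sys.lsn_lock (the real
code uses InnoDB's own synchronization primitives):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <mutex>
  #include <shared_mutex>

  static std::shared_mutex log_latch;     // stand-in for log_sys.latch
  static std::mutex lsn_lock;             // stand-in for log_sys.lsn_lock
  static unsigned char log_buf[1 << 20];  // stand-in for log_sys.buf
  static uint64_t lsn= 0, buf_free= 0;

  // mtr_t::commit() path: many threads copy their mtr_t::m_log concurrently
  // under a shared latch; only the buffer allocation is serialized.
  void append_log_sketch(const unsigned char *rec, size_t len)
  {
    std::shared_lock<std::shared_mutex> s(log_latch);
    uint64_t start;
    {
      std::lock_guard<std::mutex> g(lsn_lock); // log_sys.append_prepare()
      start= buf_free;
      buf_free+= len;
      lsn+= len;
    }
    memcpy(log_buf + start, rec, len);  // copy outside lsn_lock; no wrap handling
  }

  // Writing log_buf to the file system takes the latch exclusively,
  // waiting for all pending copies and blocking new ones.
  void write_log_sketch()
  {
    std::unique_lock<std::shared_mutex> x(log_latch);
    // ... write log_buf[0 .. buf_free) to the redo log file ...
  }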
Thanks to Vladislav Vaintroub for suggesting this idea, and for
reviewing these changes.
mariadb-backup: Replace some use of log_sys.mutex with recv_sys.mutex.
buf_pool_t::insert_into_flush_list(): Implement sorting of flush_list
because ordering is otherwise no longer guaranteed. Ordering by LSN
is needed for the proper operation of redo log checkpoints.
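A minimal sketch of LSN-ordered insertion, with std::list standing in for the
intrusive buf_pool.flush_list (the list is assumed to be ordered by
oldest_modification, newest at the head, with flush_list_mutex held by the
caller):

  #include <cstdint>
  #include <list>

  struct page_sketch { uint64_t oldest_modification; };

  // With the relaxed latching, a page may arrive with an LSN slightly below
  // the current head, so insertion may have to skip a few entries to keep
  // the list sorted for checkpointing from the tail.
  void insert_into_flush_list_sketch(std::list<page_sketch*> &flush_list,
                                     page_sketch *page)
  {
    auto it= flush_list.begin();
    while (it != flush_list.end()
           && (*it)->oldest_modification > page->oldest_modification)
      ++it;
    flush_list.insert(it, page);
  }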
log_sys.append_prepare(): Advance log_sys.lsn and log_sys.buf_free by
the length, and return the old values. Also increment write_to_buf,
which was previously done in log_close().
mtr_t::finish_write(): Obtain the buffer pointer from
log_sys.append_prepare().
log_sys.buf_free: Make the field Atomic_relaxed,
to simplify log_flush_margin(). Use only loads and stores
to avoid costly read-modify-write atomic operations.
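A small sketch of the loads-and-stores pattern, with std::atomic and
memory_order_relaxed standing in for Atomic_relaxed:

  #include <atomic>
  #include <cstdint>

  static std::atomic<uint64_t> buf_free_sketch{0};

  // log_flush_margin()-style check: a plain relaxed load is enough,
  // no read-modify-write operation is needed.
  bool log_buffer_nearly_full(uint64_t margin, uint64_t buf_size)
  {
    return buf_free_sketch.load(std::memory_order_relaxed) + margin > buf_size;
  }

  // The writer updates the field with a plain store while holding the
  // appropriate latch, avoiding costly fetch_add()/compare_exchange().
  void set_buf_free(uint64_t value)
  {
    buf_free_sketch.store(value, std::memory_order_relaxed);
  }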
buf_pool.flush_list_requests: Replaces
export_vars.innodb_buffer_pool_write_requests
and srv_stats.buf_pool_write_requests.
Protected by buf_pool.flush_list_mutex.
buf_pool_t::insert_into_flush_list(): Do not invoke page_cleaner_wakeup().
Let the caller do that after a batch of calls.
recv_recover_page(): Invoke a minimal part of
buf_pool.insert_into_flush_list().
ReleaseBlocks::modified: The number of pages added to buf_pool.flush_list.
ReleaseBlocks::operator(): Merge buf_flush_note_modification() here.
log_t::set_capacity(): Renamed from log_set_capacity().
In commit 685d958e38 (MDEV-14425),
the log parsing in mariadb-backup --backup was rewritten.
Passing the parameter STORE_IF_EXISTS instead of STORE_NO to
recv_sys.parse_mtr() or recv_sys.parse_pmem() caused unnecessary additional
memory allocation for redo log records.
Removed all dependencies on the positions of command line arguments in an
array (this kind of code should never have been written).
Instead, use option names, which are stable.
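For illustration, the difference between the two approaches, with a
hypothetical option table (not the actual code):

  #include <cstddef>
  #include <cstring>

  struct my_option_sketch { const char *name; int *value; };

  // Fragile: depends on the position of the option in the array and breaks
  // as soon as an option is added, removed, or reordered.
  int *find_by_position(my_option_sketch *options, size_t pos)
  {
    return options[pos].value;
  }

  // Stable: look the option up by its name.
  int *find_by_name(my_option_sketch *options, size_t n, const char *name)
  {
    for (size_t i= 0; i < n; i++)
      if (!strcmp(options[i].name, name))
        return options[i].value;
    return nullptr;
  }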
Reviewer: Sergei Golubchik
Remove dead code in Spider related to Spider's Oracle OCI support.
The code has been disabled for a long time, and it is unlikely that it
will ever be enabled.
mtr_t::is_block_dirtied(), mtr_t::memo_push(): Never set m_made_dirty
for pages of the temporary tablespace. Ever since
commit 5eb539555b
we never add those pages to buf_pool.flush_list.
mtr_t::commit(): Implement part of mtr_t::prepare_write() here,
and avoid acquiring log_sys.mutex if no log is written.
During IMPORT TABLESPACE fixup, we do not write log, but we must
add pages to buf_pool.flush_list and for that, be prepared
to acquire log_sys.flush_order_mutex.
mtr_t::do_write(): Replaces mtr_t::prepare_write().
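A hedged sketch of the branching that this implies, with simplified stand-in
state (field names are illustrative, not the actual mtr_t members):

  #include <cstddef>

  struct mtr_sketch
  {
    size_t log_len;     // bytes of redo log collected by this mini-transaction
    bool   made_dirty;  // a previously clean persistent page was modified
  };

  void commit_sketch(mtr_sketch &m)
  {
    if (m.log_len == 0)
    {
      // No redo log was written (e.g. only temporary tablespace pages were
      // modified, or IMPORT TABLESPACE fixup): skip log_sys.mutex entirely.
      if (m.made_dirty)
      {
        // IMPORT fixup must still add pages to buf_pool.flush_list,
        // so acquire log_sys.flush_order_mutex for that.
      }
      return;
    }
    // Otherwise acquire the log mutex, append the log, then release pages
    // (work previously split between mtr_t::prepare_write() and commit()).
  }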