The ORDER BY clause is removed from a UNION when the UNION is inside an
IN/ALL/ANY/EXISTS subquery. This rewrite is done for subqueries, but not
for the fake_select of the UNION.
The issue here arises when records are read from the temporary file
(the filesort result in this case) via a cache (rr_from_cache).
The cache is initialized with init_rr_cache.
For a correlated subquery the cache is allocated on each execution
of the subquery, but it is deallocated only once, when the whole
query execution is done.
So, generally, for subqueries we do two types of cleanup:
1) Full cleanup: free all resources of the query (such as temporary tables).
This is generally done when the query execution is complete or when the
subquery does not need to be re-executed (the uncorrelated subquery case).
2) Partial cleanup: the minor cleanup that is required if
the subquery needs recalculation. This is done for all the structures that
need to be allocated for each execution (for example, the SORT_INFO for
filesort is allocated for each execution of a correlated subquery).
The fix is to free the cache used by rr_from_cache in the partial cleanup
phase, as sketched below.
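As a rough illustration of the two phases (all the names below, such as
SubqueryExec and ReadCache, are made up for the sketch and are not the
actual server structures):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Illustrative stand-in for per-execution state such as the filesort
    // SORT_INFO or the read cache used by rr_from_cache.
    struct ReadCache {
        std::vector<char> buffer;
        void allocate(std::size_t n) { buffer.assign(n, 0); }
        void free_cache() { buffer.clear(); buffer.shrink_to_fit(); }
    };

    struct SubqueryExec {
        ReadCache cache;                  // must be allocated per execution
        bool temp_table_created = false;  // lives for the whole query

        void execute() {
            cache.allocate(4096);         // per-execution allocation
            temp_table_created = true;
            // ... read sorted rows through the cache ...
        }

        // Partial cleanup: release only per-execution structures so a
        // correlated subquery can be re-executed without leaking memory.
        void partial_cleanup() { cache.free_cache(); }

        // Full cleanup: release everything, including long-lived resources
        // such as temporary tables, once the whole query is done.
        void full_cleanup() {
            partial_cleanup();
            temp_table_created = false;
        }
    };

    int main() {
        SubqueryExec sub;
        for (int i = 0; i < 3; i++) {     // re-executed for each outer row
            sub.execute();
            sub.partial_cleanup();        // the fix: free the cache here
        }
        sub.full_cleanup();               // once, at end of query execution
        std::puts("done");
    }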
lock_rec_has_to_wait_in_queue(): Remove an obviously redundant assertion
that was added in commit a8ec45863b
and also enclose a Galera-specific condition in #ifdef WITH_WSREP.
Problem:- rpl_parallel2 was failing non-deterministically.
Analysis:-
When FLUSH TABLES WITH READ LOCK is executed, it allows all worker
threads to complete their ongoing transactions and then pauses them.
At this point FTWRL proceeds to acquire the global read lock. FTWRL first
blocks threads from starting new commits, then upgrades the lock to block
the commit of existing transactions.
Step 1:
FLUSH TABLES WITH READ LOCK - blocks new commits.
Step 2:
* STOP SLAVE sets 'force_abort=1', which unblocks the workers, and
they continue to execute events.
* T1: Waits in the 'record_gtid' call to update the 'gtid_slave_pos' table
with its current GTID, but is blocked because of Step 1.
* T2: Holds the COMMIT lock and waits for T1 to commit.
Step 3:
FLUSH TABLES WITH READ LOCK - waits to get BLOCK_COMMIT.
This results in a deadlock. When the STOP SLAVE command allows the paused
workers to proceed, the workers should skip the execution of all further
events, similar to the 'conservative' parallel mode.
Solution:-
Assign 1 to skip_event_group when a worker is aborted in do_ftwrl_wait,
as sketched below. rpl_parallel_entry->pause_sub_id is only reset when
force_abort is off, in rpl_pause_after_ftwrl.
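A minimal sketch of the intended worker behaviour, with illustrative names
only (the real logic lives in do_ftwrl_wait() and the parallel replication
worker code):

    #include <atomic>
    #include <cstdio>

    std::atomic<bool> force_abort{false};   // set by STOP SLAVE
    bool skip_event_group = false;          // per-worker flag (the fix)

    // Called while a worker waits for FTWRL to release the commit block.
    void do_ftwrl_wait_sketch() {
        if (force_abort.load()) {
            // Aborted while paused: do not try to finish further groups,
            // otherwise we deadlock against the pending BLOCK_COMMIT upgrade.
            skip_event_group = true;
        }
    }

    void apply_event_group(int id) {
        if (skip_event_group) {             // behave like 'conservative' mode
            std::printf("group %d skipped (slave stopping)\n", id);
            return;
        }
        std::printf("group %d applied\n", id);
    }

    int main() {
        force_abort = true;                 // STOP SLAVE arrives during FTWRL
        do_ftwrl_wait_sketch();
        for (int g = 1; g <= 3; g++)
            apply_event_group(g);
    }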
and inaccurately
Analysis: The list of all privileges is 118 characters wide. However, the
format of the error message was: "%-.32s command denied to user...".
get_length() sets the maximum width to 32 characters. As a result, only the
first 32 characters of the privilege list are stored.
Fix: Change the format to "%-.100T..." so that get_length() sets the width
to 100. Hence, the first 100 characters of the privilege list are stored,
and the type specifier 'T' appends '...' so that the truncation is visible.
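The truncation comes from the precision field of the conversion
specification. A small standalone demonstration (note that 'T' is a
MariaDB-specific my_vsnprintf type; standard printf has no such specifier,
so plain 's' is used here):

    #include <cstdio>
    #include <cstring>

    int main() {
        // Imagine this is the full 118-character privilege list.
        char privileges[119];
        std::memset(privileges, 'P', 118);
        privileges[118] = '\0';

        char msg[256];
        // Old format: precision .32 keeps only the first 32 characters.
        std::snprintf(msg, sizeof msg, "%-.32s command denied", privileges);
        std::printf("old: %zu characters of the list kept\n",
                    std::strlen(msg) - std::strlen(" command denied"));

        // New format: precision .100 keeps the first 100 characters; in the
        // server, the 'T' type additionally appends "..." on truncation.
        std::snprintf(msg, sizeof msg, "%-.100s command denied", privileges);
        std::printf("new: %zu characters of the list kept\n",
                    std::strlen(msg) - std::strlen(" command denied"));
    }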
Diagnostics_area::sql_errno upon query from I_S with LIMIT ROWS EXAMINED
open_normal_and_derived_table() fails because the query was already killed:
the number of rows examined by the query exceeds the limit. However, this
is not a real error.
Fix: Check whether there is actually an error before calling
thd->sql_errno(), and later send a warning in handle_select() if there is
no real error.
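Reduced to a self-contained sketch (the Diagnostics struct below is a
stand-in, not the server's Diagnostics_area API), the fix is the usual
'only read the error code when an error was actually raised' pattern:

    #include <cstdio>
    #include <optional>

    struct Diagnostics {
        std::optional<int> error_code;                // empty: no real error
        bool is_error() const { return error_code.has_value(); }
        int sql_errno() const { return *error_code; } // only valid on error
    };

    void end_statement(const Diagnostics &da, bool rows_limit_hit) {
        if (da.is_error())                 // a genuine error: report it
            std::printf("ERROR %d\n", da.sql_errno());
        else if (rows_limit_hit)           // killed by LIMIT ROWS EXAMINED
            std::printf("Warning: result may be incomplete\n");
        else
            std::printf("OK\n");
    }

    int main() {
        end_statement(Diagnostics{}, true);       // warning, not an error
        end_statement(Diagnostics{1146}, false);  // real error path
    }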
lock_rec_has_to_wait
wsrep_kill_victim
lock_rec_create_low
lock_rec_add_to_queue
DeadlockChecker::select_victim()
A THD can't change from a normal transaction to a BF (brute force)
transaction here, thus there is no need to synchronize access in the
wsrep_thd_is_BF function.
lock_rec_has_to_wait_in_queue
Add a condition that the lock is not NULL, and add assertions for when
we are in a strong state.
/home/buildbot/buildbot/build/storage/xtradb/mtr/mtr0mtr.cc:97:37: error: invalid access to non-static data member ‘fil_space_t::latch’ of NULL object [-Werror=invalid-offsetof]
The purpose of the InnoDB doublewrite buffer is to make InnoDB
tolerant against cases where the server was killed in the middle
of a page write. (In Linux, killing a process may interrupt a
write system call, typically on a 4096-byte boundary.)
There may exist multiple copies of a page number in the doublewrite
buffer. Recovery should choose the latest valid copy of the page.
By design, the FIL_PAGE_LSN must not precede the latest checkpoint LSN
nor be later than the end of the recovered log.
For page_compressed and encrypted pages, we were missing proper
consistency checks. In the 10.4 data set generated for MDEV-23231,
the data file contained a valid page_compressed page, and an
identical copy of that page was also present in the doublewrite
buffer. But, recovery would incorrectly consider the page invalid
and restore an uncompressed copy of the same page that had been
written before the log checkpoint. (In fact, no redo log was to
be applied to that page.)
buf_dblwr_process(): Validate the FIL_PAGE_LSN in the doublewrite
buffer pages, and always skip page 0, because those pages should
have been recovered by Datafile::restore_from_doublewrite() if
necessary.
Datafile::restore_from_doublewrite(): Choose the latest applicable
page from the doublewrite buffer.
recv_dblwr_t::find_page(): Also validate encrypted or
page_compressed pages.
recv_dblwr_t::validate_page(): New function to validate a page,
either a copy in a data file or in the doublewrite buffer.
Also validate encrypted or page_compressed pages.
This is joint work with Thirunarayanan Balathandayuthapani.
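A simplified sketch of the selection logic described above (the types,
field names and the checksum_ok flag are placeholders for the real page
validation, which also covers page_compressed and encrypted formats):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct PageCopy {
        uint32_t page_no;
        uint64_t fil_page_lsn;   // FIL_PAGE_LSN field of this copy
        bool     checksum_ok;    // result of format-specific validation
    };

    // Pick the newest valid copy of page_no whose LSN lies inside
    // [checkpoint_lsn, recovered_end_lsn].
    const PageCopy *find_page(const std::vector<PageCopy> &dblwr,
                              uint32_t page_no,
                              uint64_t checkpoint_lsn,
                              uint64_t recovered_end_lsn) {
        const PageCopy *best = nullptr;
        for (const PageCopy &p : dblwr) {
            if (p.page_no != page_no || !p.checksum_ok)
                continue;
            if (p.fil_page_lsn < checkpoint_lsn ||
                p.fil_page_lsn > recovered_end_lsn)
                continue;                  // LSN outside the valid window
            if (!best || p.fil_page_lsn > best->fil_page_lsn)
                best = &p;                 // keep the latest valid copy
        }
        return best;
    }

    int main() {
        std::vector<PageCopy> dblwr = {
            {7, 1000, true}, {7, 1500, true}, {7, 2000, false}, {8, 1200, true}};
        if (const PageCopy *p = find_page(dblwr, 7, 900, 1800))
            std::printf("restore page 7 from copy with LSN %llu\n",
                        (unsigned long long) p->fil_page_lsn);
    }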
row_vers_impl_x_locked_low(): clust_offsets may point to memory
that is allocated by mem_heap_alloc() and may have been freed.
For initializing clust_offsets, try to use the stack-allocated
buffer instead of a pointer that may point to freed memory.
This fixes a regression that was introduced in
commit f0aa073f2b (MDEV-20950).
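The pattern, reduced to a self-contained sketch (the buffer size and
helper names are illustrative, not the InnoDB rec_offs API):

    #include <cstddef>
    #include <cstdio>

    constexpr std::size_t OFFSETS_BUF_SIZE = 100;

    // Illustrative analogue of computing column offsets for a record,
    // writing into a buffer that the caller owns.
    std::size_t *compute_offsets(std::size_t *buf, std::size_t n_fields) {
        for (std::size_t i = 0; i <= n_fields; i++)
            buf[i] = i * 8;               // dummy offsets
        return buf;
    }

    int main() {
        std::size_t offsets_[OFFSETS_BUF_SIZE];   // stack-allocated, valid here
        std::size_t *offsets = compute_offsets(offsets_, 5);

        // Had 'offsets' instead pointed into a memory heap that was freed
        // between executions, dereferencing it here would be use-after-free.
        std::printf("offset of field 3: %zu\n", offsets[3]);
    }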
Call mark_columns_per_binlog_row_image() before find_row() to set up
table->vcol_set early, so that the virtual column values will be updated
after the record is read (ha_rnd_pos/ha_index_next/etc.) by the
table->update_virtual_fields() call.
Problem:
========
In row_merge_drop_indexes(), InnoDB drops only the index from the
dictionary and frees the index pages, but it keeps the index
object if the table is being used by other DML threads. It sets
the online status of the index to ONLINE_INDEX_ABORTED_DROPPED.
Removing the index from the dictionary does not remove the
corresponding AHI (adaptive hash index) entries of the index. When a
block is being reused, InnoDB tries to remove the AHI entries for the
block, and this fails if the index online status is
ONLINE_INDEX_ABORTED_DROPPED.
Fix:
====
MDEV-22456 allows the AHI entries of the index to be dropped lazily,
so checking the online status in btr_search_drop_page_hash_index()
is meaningless and should be removed.
The only change between Percona XtraDB Server 5.6.48-88.0
and 5.6.49-89.0 (apart from the version number change) was
percona/percona-server@25ec240920
which we had already addressed in
commit 7c03edf2fe and
commit c0fca2863b.
Do not collect EITS statistics for this statement:
ALTER TABLE t ANALYZE PARTITION p
EITS stats are currently global, not per-partition.
Collecting global stats when we are asked to process just one partition
causes issues for DBAs.
Fix prefix key comparison in partitioning. Comparisons must
take into account no more than prefix_len characters; previously,
prefix_len*mbmaxlen bytes were compared.
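For a multi-byte character set the difference matters: prefix_len
characters may occupy anywhere between prefix_len and prefix_len*mbmaxlen
bytes. A rough sketch using plain UTF-8 handling instead of the server's
charset API:

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Number of bytes occupied by at most max_chars UTF-8 characters.
    std::size_t utf8_prefix_bytes(const char *s, std::size_t max_chars) {
        std::size_t bytes = 0, chars = 0;
        while (s[bytes] && chars < max_chars) {
            unsigned char c = (unsigned char) s[bytes];
            bytes += (c < 0x80) ? 1 : (c < 0xE0) ? 2 : (c < 0xF0) ? 3 : 4;
            chars++;
        }
        return bytes;
    }

    // Compare only the first prefix_len *characters*, not
    // prefix_len*mbmaxlen bytes.
    int prefix_cmp(const char *a, const char *b, std::size_t prefix_len) {
        std::size_t la = utf8_prefix_bytes(a, prefix_len);
        std::size_t lb = utf8_prefix_bytes(b, prefix_len);
        std::size_t n = la < lb ? la : lb;
        int c = std::memcmp(a, b, n);
        if (c)
            return c;
        return la == lb ? 0 : (la < lb ? -1 : 1);
    }

    int main() {
        // Same first two characters, different third one:
        // equal under prefix_len = 2.
        std::printf("%d\n",
                    prefix_cmp("\xC3\xA4\xC3\xB6x", "\xC3\xA4\xC3\xB6y", 2));
    }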
Running perf top during ./mtr testing constantly shows the times()
function. It is so slow that it makes no sense to call it in a loop
so many times.
This patch speeds up -suite=innodb for me from 218s to 208s:
9s spent in the times() function!
mysql/mysql-server@e00ad49edc,
which introduced WL#6326 in MySQL 5.7.2, added a buffer page
acquisition to the CHECK TABLE code (solely for the purpose of
obeying the changed latching order), but failed to check that
a parent page actually exists. It would not necessarily exist in a
corrupted index where a parent page is missing pointer records
to child pages.
Largely based on MySQL commit
75271e51d6
MySQL Ref:
BUG#24566529: BACKPORT BUG#23575445 TO 5.6
(cut)
Also, the PTR_SANE macro, which tries to check whether a pointer
is invalid (used when printing pointer values in stack traces),
gave false negatives on OSX/FreeBSD. On these platforms we
now simply check whether the pointer is non-NULL. This also removes
an sbrk() deprecation warning when building on OS X. (Previously it
was only silenced when building with Xcode.)
Removed execinfo path of MySQL patch that was already included.
sbrk doesn't exist on FreeBSD aarch64.
Removed the HAVE_BSS_START based detection and replaced it with __linux__,
since __bss_start does not exist on OSX, Solaris or Windows, but does
exist on multiple Linux architectures.
Tested on FreeBSD and Linux x86_64. Being in the FreeBSD ports tree for
two years implies good testing on all FreeBSD architectures there too.
The MySQL 8.0.21 code is functionally identical to the original commit.
RocksDB fails to build on arm64: undefined reference to
`crc32c_arm64(unsigned int, unsigned char const*, unsigned int)'
MariaDB uses storage/rocksdb/build_rocksdb.cmake to compile RocksDB.
That cmake file missed adding the crc32c_arm64 compilation target, so if
the machine's native architecture supported crc32, the compiler would
enable the use of the function defined in crc32c_arm64, causing the
listed error.
Added the crc32c_arm64 compilation target.
closes #1642