See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario, INSERT chose to check whether delete-unmarking is
available for a just-deleted record. To build an update vector, it
needed to calculate the vcols as well. Since this INSERT was not
IGNORE-flagged, recalculation failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for a delete-unmarked insert.
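A minimal self-contained sketch of that save/set/restore pattern; THD
and compute_vcol() are simplified stand-ins for the server types, not
the actual patch:

    #include <iostream>

    // Stand-in for the server's THD; only the flag we care about here.
    struct THD { bool abort_on_warning = false; };

    // Stand-in for the vcol computation: with abort_on_warning set, a
    // would-be warning (e.g. division by zero) becomes a hard failure.
    static bool compute_vcol(THD &thd, int a, int b, int &out)
    {
      if (b == 0)
        return !thd.abort_on_warning; // warning normally; error when strict
      out = a / b;
      return true;
    }

    int main()
    {
      THD thd;
      int v = 0;
      const bool saved = thd.abort_on_warning;
      thd.abort_on_warning = true;            // escalate warnings to errors
      const bool ok = compute_vcol(thd, 1, 0, v);
      thd.abort_on_warning = saved;           // restore the caller's setting
      std::cout << (ok ? "computed" : "recalculation failed") << '\n';
    }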
As of now, InnoDB does not store a trx_id for each record in a
secondary index. The idea behind this is the following: store only a
per-page max_trx_id, and delete-mark the records when they are
deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When the page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not
delete-marked, then this page is safe to read as is. Otherwise, a
clustered index access could be needed. See the page_get_max_trx_id
call in row_search_mvcc, and the corresponding
switch (row_search_idx_cond_check(...)) below.
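The check reduces to something like the following self-contained
sketch; ReadView here is a simplified stand-in for the InnoDB class:

    #include <cstdint>
    #include <iostream>

    // Stand-in for the read view: m_up_limit_id is the lowest id of the
    // transactions that were active when the view was opened.
    struct ReadView { uint64_t m_up_limit_id; };

    // The shortcut described above: if every modification on the page was
    // committed before the view was opened, a non-delete-marked secondary
    // index record can be returned without a clustered index lookup.
    static bool sec_page_safe_to_read(uint64_t page_max_trx_id,
                                      const ReadView &view,
                                      bool rec_delete_marked)
    {
      return page_max_trx_id < view.m_up_limit_id && !rec_delete_marked;
    }

    int main()
    {
      ReadView view{100};
      std::cout << sec_page_safe_to_read(99, view, false)   // 1: read as is
                << sec_page_safe_to_read(100, view, false)  // 0: clust lookup
                << sec_page_safe_to_read(99, view, true)    // 0: delete-marked
                << '\n';
    }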
Virtual columns are required to be recomputed in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
That was basically a description of why virtual column computation can
normally happen during a SELECT and, more generally, during any vcol
index access.
Sometimes the stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment
of SELECT execution, forcing virtual column recomputation. If the
result was something that normally emits a warning, like division by
zero, then the warning could be emitted in a racy manner.
The solution is to suppress the warnings when a column is computed
for the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value().
Currently, it is true only for the call from
row_sel_sec_rec_is_for_clust_rec.
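In shape, the change looks like this sketch; push_warning and
computed_value are simplified stand-ins, not the real server
functions:

    #include <iostream>

    // Stand-in for the server's warning push.
    static void push_warning(const char *msg)
    { std::cerr << "Warning: " << msg << '\n'; }

    // Shape of the change: the computation takes an extra flag and skips
    // the warning when the value is computed only for record comparison.
    static long computed_value(long a, long b, bool ignore_warnings)
    {
      if (b == 0)
      {
        if (!ignore_warnings)
          push_warning("Division by 0"); // would be emitted racily here
        return 0;
      }
      return a / b;
    }

    int main()
    {
      // A row_sel_sec_rec_is_for_clust_rec-style caller suppresses it.
      std::cout << computed_value(1, 0, /*ignore_warnings=*/true) << '\n';
    }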
MDEV-19243 introduced a regression on Windows.
In the (supposedly rare) case where the environment variable TZ was
set, @@system_time_zone no longer derived from TZ. Instead, it
incorrectly referred to the system default time zone, even though UTC
time conversion takes TZ into account.
The fix is to restore the TZ-aware handling (the time zone name
derives from tzname) if TZ is set.
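A rough sketch of the intended behavior, using the POSIX spellings
(on Windows the CRT equivalents are _tzset()/_tzname); this is an
illustration, not the actual patch:

    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    int main()
    {
      // If TZ is set, let the C runtime parse it and take the name from
      // tzname; otherwise keep the system default time zone.
      if (std::getenv("TZ"))
      {
        tzset();                                // refreshes tzname from TZ
        std::cout << "system_time_zone: " << tzname[0] << '\n';
      }
      else
        std::cout << "system_time_zone: system default\n";
    }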
Adding debug output for key and keyseg flags at ha_myisam::open()
time.
So now there are three points of debug output:
1. In the very end of mysql_prepare_create_table()
2. In ha_myisam::create(), after the table2myisam() call
3. In ha_myisam::open(), after the mi_open() call
mi_create(), which is called between 2 and 3, modifies flags for
some data types, so the output in 2 and 3 is different.
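The dump at each point is along these lines; the real code uses the
server's DBUG facility, and HA_KEYSEG/MI_KEYDEF are reduced here to
just their flag fields:

    #include <cstdio>

    // Simplified stand-ins for the MyISAM key structures.
    struct HA_KEYSEG { unsigned flag; };
    struct MI_KEYDEF { unsigned flag; const HA_KEYSEG *seg; };

    // The kind of dump emitted at each of the three points: the key flag
    // first, then the flag of the first keyseg.
    static void dump_key_flags(const char *where, const MI_KEYDEF *keys,
                               unsigned count)
    {
      for (unsigned i = 0; i < count; i++)
        std::fprintf(stderr, "%s: key %u flag=%#x keyseg flag=%#x\n",
                     where, i, keys[i].flag, keys[i].seg[0].flag);
    }

    int main()
    {
      const HA_KEYSEG seg[] = {{0x10}};
      const MI_KEYDEF keys[] = {{0x01, seg}};
      dump_key_flags("ha_myisam::open", keys, 1);
    }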
Fix the error message to contain the correct errno. This commit was
tested interactively, because mtr will notice if you provide a wrong
wsrep_provider in the config, and you may not change wsrep_provider
dynamically.
If repl.max_ws_size is set too low, a following CREATE TABLE could
fail during commit. In this case wsrep_commit_empty should allow
rolling it back if the provider state is s_aborted.
Furthermore, the original ER_ERROR_DURING_COMMIT does not really tell
the user anything clear. Therefore, this commit adds a new error,
ER_TOO_BIG_WRITESET. This will change the output of some test cases.
The problem was that during ALTER TABLE execution the auto-increment
session variables were set to 1 even when wsrep_auto_increment_control
is OFF. We should set them only when wsrep_auto_increment_control is
ON.
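In shape, the guard looks like this sketch; System_variables is a
stand-in holding just the two affected variables:

    // Stand-ins for the session variables touched during ALTER TABLE.
    struct System_variables
    {
      unsigned long auto_increment_increment = 1;
      unsigned long auto_increment_offset = 1;
    };

    // The fix in shape: only touch the variables when the control is ON.
    static void set_auto_increment_for_alter(System_variables &vars,
                                             bool wsrep_auto_increment_control)
    {
      if (!wsrep_auto_increment_control)
        return;              // previously the assignments were unconditional
      vars.auto_increment_increment = 1;
      vars.auto_increment_offset = 1;
    }

    int main()
    {
      System_variables vars{4, 2};   // cluster-adjusted values
      set_auto_increment_for_alter(vars, false);
      return vars.auto_increment_increment == 4 ? 0 : 1; // 0: left untouched
    }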
In the test, the user has set WSREP_ON=OFF; this causes streaming
replication recovery to fail, which in turn caused a call to
unireg_abort(). However, this call is not necessary, and we can let
the transaction fail. Naturally, if a real user does this, he needs
to bootstrap his cluster.
This test is not slow, but it reliably produces an EXPLAIN difference
(number of rows) on the Valgrind builder. A possible explanation could
be that the purge threads are not being scheduled, because Valgrind
multiplexes all threads onto a single thread of execution.
When, for example, a table is partitioned by HASH(a) and we rename
column `a' to `b', the partitioning filter stays unchanged: HASH(a).
That is the wrong behavior.
The patch updates the partitioning filter in accordance with the new
column names. That includes the partition/subpartition expression and
the partition/subpartition field list.
For "const char *" replace() and after() accepted const as "T *" and
passed forward "void *". This cannot be cast implicitly, so we better
use "const void *" instead of "void *" in the input interface. This
way we avoid problems with using List for any const type.
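A reduced model of the point; this List is a toy, not the real one
from sql/sql_list.h:

    #include <iostream>

    // With "void *new_value" the call below would need an explicit
    // const_cast; "const void *" accepts a pointer to const implicitly.
    template <typename T>
    struct List
    {
      void replace(const void *new_value)      // was: void *new_value
      { head = static_cast<const T *>(new_value); }
      const T *head = nullptr;
    };

    int main()
    {
      const char *s = "const element";
      List<char> list;
      list.replace(s);                         // compiles without a cast
      std::cout << list.head << '\n';
    }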
btr_search_guess_on_hash() would only acquire an index page latch if
it is invoked with ahi_latch=NULL. If it's invoked from
row_sel_try_search_shortcut_for_mysql() with ahi_latch!=NULL, the page
will not be latched, and row_search_mvcc() will get a pointer to a
record that can be changed by some other transaction before the record
is stored in the result buffer by the row_sel_store_mysql_rec() call.
The ahi_latch argument of btr_cur_search_to_nth_level_func() and
btr_pcur_open_with_no_init_func() is used only for
row_sel_try_search_shortcut_for_mysql().
btr_cur_search_to_nth_level_func(..., ahi_latch != 0, ...) is invoked
only from btr_pcur_open_with_no_init_func(..., ahi_latch != 0, ...),
which, in turn, is invoked only from
row_sel_try_search_shortcut_for_mysql().
I suppose that the separate case with ahi_latch != 0 was intentionally
implemented to protect the row_sel_store_mysql_rec() call in
row_search_mvcc() just after the row_sel_try_search_shortcut_for_mysql()
call. After the ahi_latch was moved from row_search_mvcc() to
row_sel_try_search_shortcut_for_mysql(), there is no need for it at
all if btr_search_guess_on_hash() latches the page unconditionally.
And if btr_search_guess_on_hash() latched the page, any access to the
record in row_sel_try_search_shortcut_for_mysql() after the
btr_pcur_open_with_no_init() call will be protected by the page latch.
The fix is to remove the ahi_latch argument from
btr_pcur_open_with_no_init_func(), btr_cur_search_to_nth_level_func()
and btr_search_guess_on_hash().
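A reduced model of the post-fix behavior (C++17); buf_page and the
function body are stand-ins for the real InnoDB code:

    #include <shared_mutex>

    // buf_page is a stand-in; after the fix the guess function takes the
    // page latch unconditionally before handing out a record pointer.
    struct buf_page { std::shared_mutex latch; int rec = 42; };

    static const int *btr_search_guess_on_hash(buf_page &page)
    {
      page.latch.lock_shared();    // held until the cursor is closed
      return &page.rec;            // safe to read while the latch is held
    }

    int main()
    {
      buf_page page;
      const int *rec = btr_search_guess_on_hash(page);
      int v = *rec;                // e.g. row_sel_store_mysql_rec() copy-out
      page.latch.unlock_shared();  // closing the cursor releases the latch
      return v == 42 ? 0 : 1;
    }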
There will be no test, as to test this we would need to freeze a
SELECT at the point between the row_sel_try_search_shortcut_for_mysql()
and row_sel_store_mysql_rec() calls in row_search_mvcc(), and to change
the record in some other transaction so that row_sel_store_mysql_rec()
stores the changed record in the result buffer. But we can't do this
with the fix, as the page will be latched by the
btr_search_guess_on_hash() call.
Let us disable Valgrind on tests that would fail because a
server shutdown or a STOP SLAVE command would take longer,
causing the test harness to forcibly and silently kill the server
due to an exceeded timeout.
row_purge_get_partial(): Replaces trx_undo_rec_get_partial_row().
Also copy the purge_node_t::ref to the purge_node_t::row.
In this way, the clustered index key fields will always be
available, even if thanks to
commit d384ead0f0 (MDEV-14799)
they would no longer be repeated in the remaining part of the
undo log record.
This commit adds automation that will reduce the possibility of user
errors when customizing wsrep_notify.sh (in particular, errors caused
by user-specified parameters). Now all leading and trailing spaces are
removed from the user-specified parameters, and automatic port and
host address substitution has been added to the scripts, as well as
automatic password substitution on the client command line (only if
the password is specified in wsrep_notify.sh and not as an empty
string). Also added support for automatic substitution of all
SSL-related parameters, and improved parsing of IPv6 addresses (to
allow the "[...]" notation for IPv6 addresses). Also added a test to
check that the wsrep notify script works with SSL.
trx->mysql_thd can be zeroed out between the thd_get_thread_id() and
thd_query_safe() calls in fill_trx_row(): trx_disconnect_prepared()
zeroes out trx->mysql_thd, and this can cause a null pointer
dereference in fill_trx_row().
fill_trx_row() is invoked from fetch_data_into_cache() under
trx_sys.mutex. The bug fix is to reset trx_t::mysql_thd in
trx_disconnect_prepared() under the trx_sys.mutex lock too.
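The shape of the fix, with trx_sys.mutex modeled as a plain std::mutex
and the structures reduced to the one field involved:

    #include <mutex>

    struct THD;                       // opaque stand-in
    static std::mutex trx_sys_mutex;  // models trx_sys.mutex

    struct trx_t { THD *mysql_thd = nullptr; };

    // The fix in shape: the writer resets the pointer under the same
    // mutex that the reader in fill_trx_row() already holds.
    static void trx_disconnect_prepared(trx_t &trx)
    {
      std::lock_guard<std::mutex> guard(trx_sys_mutex);
      trx.mysql_thd = nullptr;
    }

    // fetch_data_into_cache() holds trx_sys.mutex around this, so the
    // null check and the subsequent uses now see a stable pointer.
    static bool fill_trx_row(const trx_t &trx)
    {
      std::lock_guard<std::mutex> guard(trx_sys_mutex);
      return trx.mysql_thd != nullptr;
    }

    int main()
    {
      trx_t trx;
      trx_disconnect_prepared(trx);
      return fill_trx_row(trx) ? 1 : 0;  // 0: safely observed as null
    }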
An MTR test case can't be created for the fix, as we would need to
wait for the trx_t::mysql_thd reset in fill_trx_row() after
trx_t::mysql_thd was checked for null while trx_sys.mutex is held. But
trx_t::mysql_thd must be reset in trx_disconnect_prepared() under
trx_sys.mutex, so there would be a deadlock.