If a transaction T1 needs to wait for a transaction T2, T2's commit will
skip the normal binlog_commit_wait_usec delay, in order not to needlessly
stall throughput.
This works by checking if T2 is already ready to commit. If so, it is woken
up. If not, we set a flag in T2 so that when it gets ready to commit, it
will do so immediately.
But there was a potential race due to insufficient locking, if T2 gets ready
to commit just at the point where T1 does the check. If the race hits, the
wakeup (and early commit) of T2 might be lost.
The race is only theoretical (found by code inspection; there is no known
test case), but it seems best to fix it anyway, by properly locking
LOCK_prepare_ordered around the check.
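A minimal sketch of the locking pattern, using std::mutex in place of the
server's LOCK_prepare_ordered; the struct, flags and the wakeup stub are
illustrative stand-ins, not the real server API:

    #include <mutex>

    struct Transaction
    {
      bool ready_to_commit= false;   // set once the trx reaches its commit point
      bool skip_commit_wait= false;  // tells the trx to commit at once when ready
    };

    static std::mutex LOCK_prepare_ordered;  // stand-in for the real mutex

    static void wake_up_for_commit(Transaction *)
    {
      // In the server this would signal T2's commit condition variable.
    }

    // Called on behalf of T1 when it must wait for T2. Both the check of
    // ready_to_commit and the setting of skip_commit_wait happen under the
    // same lock, so the "T2 becomes ready" transition (which must also take
    // the lock) cannot slip in between and lose the wakeup.
    void request_early_commit(Transaction *t2)
    {
      std::lock_guard<std::mutex> guard(LOCK_prepare_ordered);
      if (t2->ready_to_commit)
        wake_up_for_commit(t2);      // T2 is already waiting: wake it now
      else
        t2->skip_commit_wait= true;  // T2 will skip the wait once it gets there
    }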
The assertion is there to catch cases where we roll back while
mark_start_commit() is active. This can allow following event groups
to be replicated too early, causing conflicts.
But in this case, we have an _explicit_ ROLLBACK event in the binlog,
which should not assert.
We fix this by delaying the mark_start_commit() in the explicit
ROLLBACK case. It seems safest to delay this in ROLLBACK case anyway,
and there should be no reason to try to optimise this corner case.
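A rough sketch of the ordering change, with illustrative stand-ins for the
replication group info and the event types (not the real classes):

    enum event_kind { EV_COMMIT, EV_EXPLICIT_ROLLBACK };

    struct rpl_group_sketch
    {
      bool start_commit_marked= false;
      void mark_start_commit() { start_commit_marked= true; }
    };

    static void execute_end_event(event_kind) { /* commit or roll back */ }

    void apply_group_end(rpl_group_sketch *rgi, event_kind ev)
    {
      if (ev != EV_EXPLICIT_ROLLBACK)
        rgi->mark_start_commit();   // normal case: let later groups start early
      execute_end_event(ev);
      if (ev == EV_EXPLICIT_ROLLBACK)
        rgi->mark_start_commit();   // delayed: never roll back while marked
    }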
Problem: Not every permanent Item_direct_view_ref was in the permanent list of
the view's used items.
Solution: Detect when a permanent view/derived table reference is created and
put it in the permanent list at once.
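Roughly the idea, with hypothetical names (the real code works on
Item_direct_view_ref and the statement's permanent arena):

    #include <vector>

    struct Item {};
    struct View { std::vector<Item*> permanent_used_items; };

    // When the reference is created in permanent (PS/SP) memory, register
    // it in the view's permanent list immediately instead of relying on a
    // later pass to find it.
    Item *create_view_ref(View *view, bool permanent_arena)
    {
      Item *ref= new Item();
      if (permanent_arena)
        view->permanent_used_items.push_back(ref);
      return ref;
    }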
The test failed because it hit net_write_timeout. This can happen under
various circumstances that are not what the test case is meant to test,
so the timeout is now set to a bigger value.
- Make the semi-join optimizer not choose LooseScan
when 1) the index is not covering and 2) a full index
scan will be required.
- Make sure that the code in make_join_select() that may change
a full index scan into a range scan is not invoked when the table
uses a full scan.
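In sketch form (illustrative names only; the real check sits inside the
semi-join costing code):

    struct LooseScanCandidate
    {
      bool index_covering;    // does the index cover all needed columns?
      bool full_scan_needed;  // no usable range: a full index scan is required
    };

    bool loosescan_allowed(const LooseScanCandidate &c)
    {
      // A non-covering index combined with a full index scan means a
      // table-row lookup for every index entry, so reject LooseScan here.
      return !(!c.index_covering && c.full_scan_needed);
    }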
The problem occurred with key caches of more than 45G combined with a
key_cache_block_size of 1024 or less: some of the arguments to
my_multi_malloc() got to be more than 4G, which it cannot handle.
Fix:
- Introduced my_multi_malloc_large() that can handle big regions.
- Changed MyISAM and Aria key caches to use my_multi_malloc_large().
I didn't change the default my_multi_malloc(), as this would be too big a
patch and we don't allocate 4G blocks anywhere else.
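A minimal sketch of the technique, assuming the usual (pointer, size) pair
convention terminated by a null pointer; the real my_multi_malloc_large()
also takes allocation flags, and the name and signature here are
illustrative:

    #include <cstdarg>
    #include <cstddef>
    #include <cstdlib>

    // Allocate several regions as one block, with size_t (64-bit) sizes so
    // that no region size is truncated to 32 bits.
    // Usage: multi_malloc_large(&p1, n1, &p2, n2, ..., (void**)nullptr);
    void *multi_malloc_large(void **first_ptr, size_t first_size, ...)
    {
      va_list args;
      size_t total= first_size;

      // First pass: sum up all region sizes in 64 bits.
      va_start(args, first_size);
      for (void **p= va_arg(args, void**); p; p= va_arg(args, void**))
        total+= va_arg(args, size_t);
      va_end(args);

      char *block= static_cast<char*>(malloc(total));
      if (!block)
        return nullptr;

      // Second pass: hand out consecutive slices of the single block.
      char *pos= block;
      *first_ptr= pos;
      pos+= first_size;
      va_start(args, first_size);
      for (void **p= va_arg(args, void**); p; p= va_arg(args, void**))
      {
        *p= pos;
        pos+= va_arg(args, size_t);
      }
      va_end(args);
      return block;
    }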
This bug is essentially another variant of MDEV-7458.
If a transaction conflict caused a deadlock kill of T2 in record_gtid()
during commit, the code would do a rollback _before_ running
rgi->unmark_start_commit(). This creates a race where following transactions
could start too early (before T2 has completed its transaction retry). This
in turn could lead to replication failure if there was a conflict that
caused e.g. a duplicate key error or similar.
The fix is to remove these rollbacks (in Query_log_event::do_apply_event()
and Xid_log_event::do_apply_event()). They seem out of place; code in
log_event.cc generally does not roll back on error, this is handled higher
up.
In addition, because of the extreme difficulty of reproducing bugs like
MDEV-7458 and MDEV-8302, this patch adds some extra precautions to try to
detect (in debug builds) or prevent (in release builds) similar bugs.
ha_rollback_trans() will now call unmark_start_commit() if needed (and
assert in debug build when a caller does rollback without unmark first).
We also add an extra check for thd->killed() so that we avoid doing
mark_start_commit() if we already have a pending deadlock kill.
And we add a missing unmark_start_commit() call in the error case, found by
the above assertion.
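A sketch of the safety net, with stand-in types (the real code works on THD
and rpl_group_info):

    #include <cassert>

    struct rgi_sketch
    {
      bool start_commit_marked= false;
      void unmark_start_commit() { start_commit_marked= false; }
    };

    static int engine_rollback() { return 0; }

    int rollback_trans_sketch(rgi_sketch *rgi)
    {
      // Debug builds: catch callers that roll back while still marked.
      assert(!rgi->start_commit_marked);
      // Release builds: repair the state so following event groups
      // cannot start too early.
      if (rgi->start_commit_marked)
        rgi->unmark_start_commit();
      return engine_rollback();
    }

    void maybe_mark_start_commit(rgi_sketch *rgi, bool deadlock_kill_pending)
    {
      if (!deadlock_kill_pending)       // the extra thd->killed() style check
        rgi->start_commit_marked= true;
    }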
The problem was with Materialized_cursor and temporary table it uses.
Temporary table's fields had Field::orig_table pointing to the tables
that were used in the query that produced data for the cursor.
When "FETCH INTO sp_var" statement is executed, those original tables
were already closed. However, copying from Materialized_cursor's table
into SP variable may cause field_conv() to be invoked which calls
field->type() which may access field->orig_table (for certain field types).
Fixed by setting Materialized_cursor->table->field[i]->orig_table to point
to Materialized_cursor->table (this is how it is done for regular base
tables).
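In sketch form, with minimal stand-ins for the server's TABLE and Field
structures:

    struct TABLE;
    struct Field { TABLE *orig_table; };
    struct TABLE { Field **field; unsigned fields; };

    // Point every field of the cursor's own (temporary) table back to that
    // table, so nothing references the already-closed source tables.
    void repoint_fields(TABLE *table)
    {
      for (unsigned i= 0; i < table->fields; i++)
        table->field[i]->orig_table= table;
    }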
THD::save_prep_leaf_list was set to true by multi-table update
statements with mergeable selects and never reset.
Make every statement reset it at start.
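The reset itself is a one-liner in the per-statement initialisation path; a
sketch with a minimal THD stand-in (the placement is illustrative):

    struct THD { bool save_prep_leaf_list; };

    void init_for_statement(THD *thd)
    {
      thd->save_prep_leaf_list= false;  // multi-table UPDATE sets it back on
    }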
Correct fix for this bug.
The problem was that Item_func_group_concat() was calling
setup_order(), passing args as the second argument,
ref_pointer_array. However, ref_pointer_array must have free
space at the end, because setup_order() can append elements to it.
In this particular case, args[] elements were overwritten when
setup_order() pushed new elements into ref_pointer_array.
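A sketch of the invariant, with illustrative types: the array handed to
setup_order() needs spare slots at its end, so passing args[] itself (which
is exactly arg_count long) is unsafe.

    #include <vector>

    struct Item;

    // Build a ref_pointer_array with room for the appended ORDER BY
    // elements instead of reusing args[] directly.
    std::vector<Item*> make_ref_array(Item **args, unsigned arg_count,
                                      unsigned order_count)
    {
      std::vector<Item*> ref_array(args, args + arg_count);
      ref_array.resize(arg_count + order_count, nullptr);  // free space at end
      return ref_array;
    }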
Alternative fix that doesn't cause view.test crash in --ps:
Remember when Item_ref was fixed right in the constructor
and did not have a full Item_ref::fix_fields() call. Later
in PS/SP, after Item_ref::cleanup, we use this knowledge
to avoid doing full fix_fields() for items that were never
supposed to be fix_field'ed.
Simplify the test case.
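A sketch of the bookkeeping, with a hypothetical flag name (the real member
in Item_ref is not necessarily called this):

    struct Item_ref_sketch
    {
      bool fixed= false;
      bool fixed_in_constructor= false;  // hypothetical flag for the sketch

      Item_ref_sketch(bool resolve_now)
      {
        if (resolve_now)
        {
          fixed= true;
          fixed_in_constructor= true;    // never went through fix_fields()
        }
      }

      void cleanup() { fixed= false; }   // PS/SP re-execution resets 'fixed'

      void fix_fields_if_needed()
      {
        if (fixed)
          return;
        if (fixed_in_constructor)
          fixed= true;                   // restore cheaply, no full resolution
        else
          full_fix_fields();             // the normal path
      }
      void full_fix_fields() { fixed= true; }
    };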
GROUP_CONCAT() with ORDER BY column position may crash the server on
re-execution of a prepared statement.
The problem was that the arguments array of GROUP_CONCAT() was adjusted to
point to temporary elements (resolved ORDER BY fields) during the first
execution.
This patch expands rev. 08763096cb to restore the original arguments array
as well.
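Roughly, with illustrative names (the real code allocates the copy in the
statement's permanent arena):

    #include <cstring>

    struct Item;

    struct group_concat_sketch
    {
      Item **args;        // rewritten to temporary elements during execution
      Item **orig_args;   // snapshot of the original pointers
      unsigned arg_count;

      void save_args()    // at prepare time
      {
        orig_args= new Item*[arg_count];
        memcpy(orig_args, args, arg_count * sizeof(Item*));
      }

      void restore_args() // at cleanup, before the next PS execution
      {
        memcpy(args, orig_args, arg_count * sizeof(Item*));
      }
    };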
There are several different ways to incorrectly define a
foreign key constraint. In many of these cases, earlier MariaDB
versions produced error messages that were not very clear or
helpful. This patch improves the warning messages produced by
foreign key parsing.