The code was using the wrong variable when comparing the binlog name
for the UNTIL position. This could cause the comparison to fail after
binlog rotation, in turn causing the UNTIL clause to not trigger slave
stop.
If a transaction T1 needs to wait for a transaction T2, T2's commit will
skip the normal binlog_commit_wait_usec delay, in order not to needlessly
stall throughput.
This works by checking if T2 is already ready to commit. If so, it is woken
up. If not, we set a flag in T2 so that when it gets ready to commit, it
will do so immediately.
But there was a potential race due to insufficient locking, if T2 gets ready
to commit just at the point where T1 does the check. If the race hits, the
wakeup (and early commit) of T2 might be lost.
The race is only theoretical (found by code inspection, no known test case), but
it seems best to fix it anyway, by properly locking LOCK_prepare_ordered around
the check.
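As a minimal C++ sketch of the locking pattern the fix relies on (the names
and structure here are illustrative only, not the actual server code):

  #include <mutex>

  // T1's check of whether T2 is already ready to commit, and T2's own
  // transition to the ready state, must both happen under the same mutex
  // (LOCK_prepare_ordered in the server); otherwise the wakeup and early
  // commit of T2 can be lost.
  struct early_commit_sketch
  {
    std::mutex LOCK_prepare_ordered;     // stands in for the global mutex
    bool t2_ready_to_commit= false;      // set by T2 when it reaches commit
    bool t2_commit_immediately= false;   // set by T1 to skip T2's delay

    // Called on behalf of T1 when it finds it has to wait for T2.
    void request_early_commit_of_t2()
    {
      std::lock_guard<std::mutex> guard(LOCK_prepare_ordered);
      if (t2_ready_to_commit)
        wake_up_t2();                    // T2 already waiting: wake it now
      else
        t2_commit_immediately= true;     // T2 will see this when it arrives
    }

    // Called by T2 when it becomes ready to commit.  Returns true if the
    // binlog_commit_wait_usec delay should be skipped.
    bool t2_becomes_ready()
    {
      std::lock_guard<std::mutex> guard(LOCK_prepare_ordered);
      t2_ready_to_commit= true;
      return t2_commit_immediately;
    }

    void wake_up_t2() { /* signal T2's condition variable in real code */ }
  };

  int main()
  {
    early_commit_sketch s;
    s.request_early_commit_of_t2();      // T1 checks first ...
    return s.t2_becomes_ready() ? 0 : 1; // ... so T2 skips the delay
  }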
The assertion is there to catch cases where we rollback while
mark_start_commit() is active. This can allow following event groups
to be replicated too early, causing conflicts.
But in this case, we have an _explicit_ ROLLBACK event in the binlog,
which should not assert.
We fix this by delaying the mark_start_commit() in the explicit
ROLLBACK case. It seems safest to delay this in the ROLLBACK case anyway,
and there should be no reason to try to optimise this corner case.
Problem: Not all permanent Item_direct_view_ref objects were in the permanent list of used items of the view.
Solution: Detect creation of permanent view/derived table references and put them in the permanent list at once.
- Make the semi-join optimizer not choose LooseScan
  when 1) the index is not covering and 2) a full index
  scan will be required.
- Make sure that the code in make_join_select() that may change
  a full index scan into a range scan is not invoked when the table
  uses a full scan.
This bug is essentially another variant of MDEV-7458.
If a transaction conflict caused a deadlock kill of T2 in record_gtid()
during commit, the code would do a rollback _before_ running
rgi->unmark_start_commit(). This creates a race where following transactions
could start too early (before T2 has completed its transaction retry). This
in turn could lead to replication failure, if there was a conflict that
caused e.g. a duplicate key error or similar.
The fix is to remove these rollbacks (in Query_log_event::do_apply_event()
and Xid_log_event::do_apply_event()). They seem out-of-place; code in
log_event.cc generally does not roll back on error, as this is handled higher
up.
In addition, because of the extreme difficulty of reproducing bugs like
MDEV-7458 and MDEV-8302, this patch adds some extra precautions to try to
detect (in debug builds) or prevent (in release builds) similar bugs.
ha_rollback_trans() will now call unmark_start_commit() if needed (and
assert in debug build when a caller does rollback without unmark first).
We also add an extra check for thd->killed() so that we avoid doing
mark_start_commit() if we already have a pending deadlock kill.
And we add a missing unmark_start_commit() call in the error case, found by
the above assertion.
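As a rough standalone illustration of the ordering rule above (hypothetical
class, not the server code):

  #include <cassert>

  // Rolling back while the "start commit" mark is still active would let
  // following event groups start too early, so the rollback path asserts in
  // debug builds and repairs the ordering in release builds.
  struct rgi_sketch
  {
    bool start_commit_marked= false;

    void mark_start_commit()   { start_commit_marked= true; }
    void unmark_start_commit() { start_commit_marked= false; }

    void rollback()
    {
      assert(!start_commit_marked);      // debug builds: catch bad callers
      if (start_commit_marked)           // release builds: fix it up anyway
        unmark_start_commit();
      /* ... the actual storage engine rollback would happen here ... */
    }
  };

  int main()
  {
    rgi_sketch rgi;
    rgi.mark_start_commit();
    rgi.unmark_start_commit();           // correct order: unmark first ...
    rgi.rollback();                      // ... then roll back
    return 0;
  }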
The problem was with Materialized_cursor and the temporary table it uses.
The temporary table's fields had Field::orig_table pointing to the tables
that were used in the query that produced data for the cursor.
When "FETCH INTO sp_var" statement is executed, those original tables
were already closed. However, copying from Materialized_cursor's table
into SP variable may cause field_conv() to be invoked which calls
field->type() which may access field->orig_table (for certain field types).
Fixed by setting Materialized_cursor->table->field[i]->orig_table to point
to Materialized_cursor->table. (This is how it is done for regular base
tables.)
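A small standalone model of the dangling back-pointer and the repointing fix
(the types here are illustrative, not the real Field/TABLE classes):

  #include <cstdio>

  // Each field of the cursor's materialized temporary table keeps a
  // back-pointer to a table; leaving it pointing at the original (by now
  // closed) tables means later field-conversion code can touch freed data.
  struct TableSketch;

  struct FieldSketch
  {
    TableSketch *orig_table;             // back-pointer used by field code
  };

  struct TableSketch
  {
    FieldSketch field[2];
  };

  int main()
  {
    TableSketch materialized;            // the cursor's own temporary table

    // The fix: repoint every field's back-pointer at the temporary table
    // itself instead of the tables of the query that filled the cursor.
    for (FieldSketch &f : materialized.field)
      f.orig_table= &materialized;

    printf("back-pointers repointed: %s\n",
           materialized.field[0].orig_table == &materialized ? "yes" : "no");
    return 0;
  }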
THD::save_prep_leaf_list was set to true by multi-table update
statements with mergeable selects and never reset.
Make every statement reset it at start.
Correct fix for this bug.
The problem was that Item_func_group_concat() was calling
setup_order(), passing args as the second argument,
ref_pointer_array. But ref_pointer_array should have free
space at the end, as setup_order() can append elements to it.
In this particular case args[] elements were overwritten when
setup_order() pushed new elements into ref_pointer_array.
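The contract can be shown with a standalone sketch (hypothetical names and
plain int slots; the real code deals in Item pointers):

  #include <cstdio>

  // A resolver that may append entries to the array it is given must be
  // passed a buffer with free slots at the end, not the exactly-sized
  // argument array itself, or the appended entries overwrite what follows.
  static void resolve_order(int *ref_array, int used, int capacity,
                            int to_append)
  {
    for (int i= 0; i < to_append && used < capacity; i++)
      ref_array[used++]= 100 + i;        // "resolved ORDER BY" entries
  }

  int main()
  {
    // Correct usage: a buffer with spare room after the three used slots.
    int ref_pointer_array[8]= {1, 2, 3};
    resolve_order(ref_pointer_array, 3, 8, 2);

    for (int v : ref_pointer_array)
      printf("%d ", v);
    printf("\n");                        // 1 2 3 100 101 0 0 0
    return 0;
  }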
Alternative fix that doesn't cause a view.test crash in --ps:
Remember when an Item_ref was fixed right in the constructor
and did not get a full Item_ref::fix_fields() call. Later
in PS/SP, after Item_ref::cleanup, we use this knowledge
to avoid doing a full fix_fields() for items that were never
supposed to be fix_field'ed.
Simplify the test case.
GROUP_CONCAT() with ORDER BY column position may crash the server on PS re-execution.
The problem was that the arguments array of GROUP_CONCAT() was adjusted to point to
temporary elements (resolved ORDER BY fields) during the first execution.
This patch expands rev. 08763096cb to restore the original arguments array as well.
[Attempt #] Make the code that handles the "Prepare" phase for multi-table
UPDATE statements handle non-merged semijoins. It can encounter them when
a prepared statement is executed for the second time.
For plugins specified with plugin-load-add that are already registered in mysql.plugin:
- issue just one error message, without this extra warning
- don't abuse ER_UDF_EXISTS, instead add a proper error message for plugins
- report started initialization for each plugin source
The fix was to add a test in Query_log_event::Query_log_event() for whether we are using
CREATE ... SELECT and in this case use the trans cache, like we do on the master.
This avoids using the statement cache (which doesn't have a checksum).
Other things:
- Removed dummy call my_checksum(0L, NULL, 0)
- More DBUG_PRINT
- Cleaned up Log_event::need_checksum() to make it more readable (similar to MySQL 5.6)
- Renamed variable that was hiding another one in create_table_imp()
The old code used pthread_setspecific() to store temporary data used by the thread.
This is not safe when used with thread pool, as the thread may change for the transaction.
The fix is to save the data in THD, which is guaranteed to be properly freed.
I also fixed the code so that we don't do a malloc() for every transaction.
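A standalone sketch of why thread-bound storage breaks under a thread pool,
compared to keeping the data in the per-connection object (illustrative code,
not the actual patch):

  #include <cstdio>
  #include <string>
  #include <thread>

  // Per-transaction data kept in thread-local storage is lost when a thread
  // pool resumes the transaction on another thread; data kept in the
  // per-connection object (THD in the server) follows the transaction and
  // is freed together with the connection.
  thread_local std::string tls_scratch;  // the old, thread-bound design

  struct Session                         // stands in for THD
  {
    std::string scratch;                 // the fixed, connection-bound design
  };

  int main()
  {
    Session session;

    std::thread step1([&]
    {
      tls_scratch= "prepared state";     // visible only on this thread
      session.scratch= "prepared state"; // visible wherever the session runs
    });
    step1.join();

    std::thread step2([&]
    {
      // A pool may run the next step of the same transaction elsewhere:
      printf("thread-local sees: '%s'\n", tls_scratch.c_str());     // empty
      printf("session sees:      '%s'\n", session.scratch.c_str()); // kept
    });
    step2.join();
    return 0;
  }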
field.cc
- Fixed warning about overlapping memory copy (backport from 10.0)
Item_subselect.cc
- Fixed core dump in main.view
- Problem was that thd->lex->current_select->master_unit()->item was not set, which caused a crash in mark_as_dependent()
sql/mysqld.cc
- Got an error on shutdown as we were freeing mutexes before all THD objects were freed
  (~THD uses some mutexes). Fixed by freeing THD objects inside the mutex during shutdown.
sql/log.cc
- log_space_lock and LOCK_log were locked in an inconsistent order. Fixed by not holding log_space_lock around purge_logs.
sql/slave.cc
- Remove unnecessary log_space_lock
- Move cond_broadcast inside lock to ensure we don't miss the signal
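As a generic illustration of the broadcast-inside-the-lock rule (illustrative
names, not the server code):

  #include <condition_variable>
  #include <mutex>

  // Both the state change and the broadcast are done while holding the mutex
  // the waiter uses, so a waiter can never check the condition, see it
  // false, and go to sleep in the gap between the two.
  struct log_space_waiter_sketch
  {
    std::mutex mtx;
    std::condition_variable cond;
    bool space_freed= false;

    void wait_for_space()
    {
      std::unique_lock<std::mutex> lock(mtx);
      cond.wait(lock, [this] { return space_freed; });
      space_freed= false;
    }

    void report_space_freed()
    {
      std::lock_guard<std::mutex> lock(mtx);  // hold the lock ...
      space_freed= true;
      cond.notify_all();                      // ... while broadcasting
    }
  };

  int main()
  {
    log_space_waiter_sketch w;
    w.report_space_freed();                   // signal first ...
    w.wait_for_space();                       // ... so this returns at once
    return 0;
  }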
Fix for bug#11759114 - '51401: GRANT TREATS NONEXISTENT
FUNCTIONS/PRIVILEGES DIFFERENTLY'.
The problem was that an attempt to grant the EXECUTE or ALTER
ROUTINE privilege on a stored procedure which didn't exist
succeeded instead of returning an appropriate error, like
it does in a similar situation for stored functions or
tables.
The code which handles granting of privileges on an individual
routine calls the sp_exist_routines() function to check if the
routine exists and assumes that the 3rd parameter of the latter
specifies whether it should check for the existence of a stored
procedure or a function. In practice, this parameter had a
completely different meaning and, as a result, this check was
not done properly for stored procedures.
This fix addresses the problem by bringing the sp_exist_routines()
signature and code in line with the expectations of its caller.
Conflicts:
mysql-test/r/grant.result
mysql-test/t/grant.test
sql/sp.cc
When the macro is expanded in an expression like ~QPLAN_QC_NO (e.g. in the
Query_cache::send_result_to_client() function in sql/sql_cache.cc), then without
the parentheses the expression will evaluate to a wrong value.
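A standalone illustration of the missing-parentheses problem (QPLAN_BIT_* and
its value below are made-up stand-ins for the real flag):

  #include <cstdio>

  // Without parentheses, the bitwise NOT binds to only part of the
  // expansion: ~QPLAN_BIT_BAD becomes (~1u) << 5, not ~(1u << 5).
  #define QPLAN_BIT_BAD   1u << 5        // expands unparenthesized
  #define QPLAN_BIT_GOOD  (1u << 5)

  int main()
  {
    unsigned flags= 0xFF;
    printf("bad:  %#x\n", flags & ~QPLAN_BIT_BAD);   // 0xc0 (wrong)
    printf("good: %#x\n", flags & ~QPLAN_BIT_GOOD);  // 0xdf (intended)
    return 0;
  }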
The fix is that if the slave has a different integer size than
the master, then it will assume the master has the same signed/unsigned modifier
as the slave.
This means that one can safely change on the slave an int to a bigint
or an unsigned int to an unsigned bigint. Changing an unsigned int to a
signed bigint will cause replication failures when the high bit of the
unsigned int is set.
We can't give an error if the signedness is different on the master and slave,
as the binary log doesn't contain the signedness of the column on the master.
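For illustration, a standalone sketch of how the same 4-byte value ends up
different depending on the signedness the slave assumes (values are made up):

  #include <cstdint>
  #include <cstdio>

  // The binlog does not carry the master column's signedness, so a 4-byte
  // value is widened according to the slave column's definition.
  int main()
  {
    uint32_t master_value= 0x80000000u;  // unsigned int on the master, high bit set

    // Slave column declared as signed BIGINT: the value is sign-extended.
    int64_t as_signed_bigint= (int32_t) master_value;
    // Slave column declared as unsigned BIGINT: the value is preserved.
    uint64_t as_unsigned_bigint= master_value;

    printf("signed bigint:   %lld\n", (long long) as_signed_bigint);
    printf("unsigned bigint: %llu\n", (unsigned long long) as_unsigned_bigint);
    return 0;
  }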
- Removed calls to current_thd
- More DBUG_PRINT
- Code style changes
- Made some local functions static
Ensure that calls to print_keyuse are protected by a mutex to get all lines in the same debug packet
SELECT ... WHERE XX IN (SELECT YY)
this was transformed to something like:
SELECT ... WHERE EXISTS(SELECT ... HAVING XX=YY)
The bug was that for normal execution XX was fixed in the original outer SELECT context, while in PS it was fixed in the subquery context, and this confused the optimizer.
Fixed by ensuring that XX is always fixed in the outer context.
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581, MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for re-execution and this caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
The --gtid-ignore-duplicates option was not working correctly with row-based
replication. When a row event was completed, but before committing, there
was a small window where another multi-source SQL thread could wrongly try
to re-execute the same transaction, without properly ignoring the duplicate
GTID. This would lead to duplicate key error or out-of-order GTID error or
similar.
Thanks to Matt Neth for reporting this and giving an easy way to reproduce
the issue.