The test failed because it hit net_write_timeout. This can happen under
various circumstances unrelated to what the testcase actually tests,
so the timeout is now set to a larger value.
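For illustration, such a test would typically raise the limit around the
long-running part and restore it afterwards (the value here is arbitrary):

  SET @save_net_write_timeout= @@global.net_write_timeout;
  SET GLOBAL net_write_timeout= 1200;
  # ... statements that stream a large result ...
  SET GLOBAL net_write_timeout= @save_net_write_timeout;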
- Make the semi-join optimizer not choose LooseScan
when 1) the index is not covering and 2) a full index
scan would be required (see the example below).
- Make sure that the code in make_join_select() that may change
a full index scan into a range scan is not invoked when the table
uses a full scan.
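As a hypothetical illustration (table and column names are made up), the
subquery column below is indexed, but the index is not covering and no range
on it can be constructed, so LooseScan would have needed a full index scan:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT, c INT, KEY(b));
  SELECT * FROM t1
  WHERE t1.a IN (SELECT t2.b FROM t2 WHERE t2.c > 0);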
This bug is essentially another variant of MDEV-7458.
If a transaction conflict caused a deadlock kill of T2 in record_gtid()
during commit, the code would do a rollback _before_ running
rgi->unmark_start_commit(). This creates a race where following transactions
could start too early (before T2 has completed its transaction retry). This
in turn could lead to replication failure, if there was a conflict that
caused e.g. a duplicate key error or similar.
The fix is to remove these rollbacks (in Query_log_event::do_apply_event()
and Xid_log_event::do_apply_event()). They seem out-of-place; code in
log_event.cc generally does not roll back on error, as this is handled higher
up.
In addition, because of the extreme difficulty of reproducing bugs like
MDEV-7458 and MDEV-8302, this patch adds some extra precautions to try to
detect (in debug builds) or prevent (in release builds) similar bugs.
ha_rollback_trans() will now call unmark_start_commit() if needed (and
assert in debug build when a caller does rollback without unmark first).
We also add an extra check for thd->killed() so that we avoid doing
mark_start_commit() if we already have a pending deadlock kill.
And we add a missing unmark_start_commit() call in the error case, found by
the above assertion.
The problem was with Materialized_cursor and the temporary table it uses.
The temporary table's fields had Field::orig_table pointing to the tables
that were used in the query that produced the data for the cursor.
When "FETCH INTO sp_var" statement is executed, those original tables
were already closed. However, copying from Materialized_cursor's table
into SP variable may cause field_conv() to be invoked which calls
field->type() which may access field->orig_table (for certain field types).
Fixed by setting Materialized_cursor->table->field[i]->orig_table to point
to Materialized_cursor->table. (this is how it is done for regular base
tables)
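A hypothetical procedure of the shape that triggered the problem (all names
are made up); by the time FETCH runs, the base table that produced the data
is already closed and only the cursor's temporary table remains:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v VARCHAR(10);
    DECLARE c CURSOR FOR SELECT s FROM t1;
    OPEN c;            -- materializes the result into a temporary table
    FETCH c INTO v;    -- copies a field of that temporary table into the SP variable
    CLOSE c;
  END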
THD::save_prep_leaf_list was set to true by multi-table update
statements with mergeable selects and never reset.
Make every statement reset it at start.
Alternative fix that doesn't cause view.test crash in --ps:
Remember when Item_ref was fixed right in the constructor
and did not have a full Item_ref::fix_fields() call. Later
in PS/SP, after Item_ref::cleanup, we use this knowledge
to avoid doing full fix_fields() for items that were never
supposed to be fix_field'ed.
Simplify the test case.
execution of PS
GROUP_CONCAT() with ORDER BY column position may crash the server on PS re-execution.
The problem was that the arguments array of GROUP_CONCAT() was adjusted to point to
temporary elements (resolved ORDER BY fields) during the first execution.
This patch extends rev. 08763096cb to restore the original arguments array as well.
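Roughly, a statement of this shape (hypothetical tables) could crash on the
second EXECUTE before the fix:

  PREPARE stmt FROM 'SELECT GROUP_CONCAT(a ORDER BY 1) FROM t1 GROUP BY b';
  EXECUTE stmt;
  EXECUTE stmt;   -- re-execution saw the adjusted arguments array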
There are several different ways to incorrectly define a
foreign key constraint. In many cases the error messages
produced by earlier MariaDB versions for these cases
were not very clear or helpful. This patch improves
the warning messages produced by foreign key parsing.
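One hypothetical example of an incorrect definition that should now get a
clearer message is a type mismatch between the child column and the
referenced parent column:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    pid BIGINT,                               -- type does not match parent.id
    FOREIGN KEY (pid) REFERENCES parent(id)
  ) ENGINE=InnoDB;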
[Attempt #] Make the code that handles "Prepare" phase for multi-table
UPDATE statements handle non-merged semijoins. It can encounter them when
a prepared statement is executed for the second time.
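A sketch of the affected pattern (hypothetical names): a multi-table UPDATE
with an IN subquery, prepared once and executed twice; on the second
execution the subquery may already be a non-merged semi-join:

  PREPARE stmt FROM
    'UPDATE t1, t2 SET t1.a= t2.a
     WHERE t1.id= t2.id AND t2.b IN (SELECT b FROM t3)';
  EXECUTE stmt;
  EXECUTE stmt;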
with plugin-load-add that are already registered in mysql.plugin
- issue just one error message, without this extra warning
- don't abuse ER_UDF_EXISTS, instead add a proper error message for plugins
- report started initialization for each plugin source
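For illustration (the plugin name is just an example), the duplicate
registration arises when a plugin installed via INSTALL SONAME, and therefore
recorded in mysql.plugin, is also listed in the server configuration:

  INSTALL SONAME 'ha_example';   -- records the plugins in mysql.plugin
  # my.cnf additionally contains:
  #   [mysqld]
  #   plugin-load-add=ha_example.so
  # On the next restart the same plugins are seen from both sources.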
The fix was to add a check in Query_log_event::Query_log_event() for whether we are
using CREATE ... SELECT and in this case use the trans cache, like we do on the
master. This avoids using the statement cache (which doesn't have a checksum).
Other things:
- Removed dummy call my_checksum(0L, NULL, 0)
- More DBUG_PRINT
- Cleaned up Log_event::need_checksum() to make it more readable (similar to MySQL 5.6)
- Renamed variable that was hiding another one in create_table_imp()
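For illustration (hypothetical tables), the affected kind of statement is a
CREATE ... SELECT binlogged while checksums are enabled:

  SET GLOBAL binlog_checksum= CRC32;
  CREATE TABLE t2 ENGINE=InnoDB SELECT * FROM t1;   -- must go through the trans cache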
Fix for bug#11759114 - '51401: GRANT TREATS NONEXISTENT
FUNCTIONS/PRIVILEGES DIFFERENTLY'.
The problem was that an attempt to grant the EXECUTE or ALTER
ROUTINE privilege on a stored procedure which didn't exist
succeeded instead of returning an appropriate error, as
happens in the similar situation for stored functions or
tables.
The code which handles granting of privileges on an individual
routine calls the sp_exist_routines() function to check if the routine
exists and assumes that the 3rd parameter of the latter
specifies whether it should check for the existence of a stored
procedure or a function. In practice, this parameter had a
completely different meaning and, as a result, this check was
not done properly for stored procedures.
This fix addresses the problem by bringing the sp_exist_routines()
signature and code in line with the expectations of its caller.
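Before the fix, a statement like the following (hypothetical names) succeeded
even though the procedure does not exist; it now fails with an error, as it
already did for stored functions and tables:

  GRANT EXECUTE ON PROCEDURE db1.no_such_proc TO 'u1'@'localhost';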
Conflicts:
mysql-test/r/grant.result
mysql-test/t/grant.test
sql/sp.cc
Analysis: In the check_trx_exists() function InnoDB allocates
a new trx if no trx is found in the thd, but this newly
allocated trx is not registered with the thd. This is unsafe,
because nothing prevents the InnoDB plugin from being uninstalled
while there is an active transaction. This can cause crashes, hangs
and other odd behavior. It may also corrupt the stack, as
function pointers are not available after dlclose().
Fix: The fix is to use thd_set_ha_data() when
manipulating per-connection handler data. It does the appropriate
plugin locking.
Problem was that the test just takes too long under slow I/O and triggers
the testcase timeout. Reduced the number of operations and inserts to make
the test shorter.
The fix is that if the slave has a different integer size than
the master, then it will assume the master has the same signed/unsigned modifier
as the slave.
This means that one can safely change, on the slave, an int to a bigint
or an unsigned int to an unsigned bigint. Changing an unsigned int to a
signed bigint will cause replication failures when the high bit of the
unsigned int is set.
We can't give an error if the signedness is different on the master and slave,
as the binary log doesn't contain the signedness of the column on the master.
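A sketch of the safe and unsafe combinations (hypothetical table), with the
wider column on the slave:

  # master
  CREATE TABLE t1 (a INT UNSIGNED) ENGINE=InnoDB;
  # slave: safe, same signedness
  CREATE TABLE t1 (a BIGINT UNSIGNED) ENGINE=InnoDB;
  # slave: unsafe once the high bit of a is set on the master
  # CREATE TABLE t1 (a BIGINT) ENGINE=InnoDB;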
Analysis: The problem is that InnoDB does not have support for generating
CURRENT_TIMESTAMP or a constant default.
Fix: Add an additional check for whether the column has changed from NULL -> NOT NULL
and the column default has changed. If this is the first column definition
whose SQL type is TIMESTAMP and it is defined as NOT NULL and
it has either a constant default or a function default, we must use
the "Copy" method for ALTER TABLE.
SELECT ... WHERE XX IN (SELECT YY)
was transformed to something like:
SELECT ... WHERE EXISTS(SELECT ... HAVING XX=YY)
The bug was that for normal execution XX was fixed in the original outer SELECT
context, while in PS it was fixed in the subquery context, and this confused
the optimizer.
Fixed by ensuring that XX is always fixed in the outer context.
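Concretely, a query of this shape (hypothetical tables):

  SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2 GROUP BY t2.b);
  -- is rewritten internally to roughly
  -- SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 GROUP BY t2.b HAVING t1.a = t2.b);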
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581 and MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for
re-execution, and this caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
The --gtid-ignore-duplicates option was not working correctly with row-based
replication. When a row event was completed, but before committing, there
was a small window where another multi-source SQL thread could wrongly try
to re-execute the same transaction, without properly ignoring the duplicate
GTID. This would lead to duplicate key error or out-of-order GTID error or
similar.
Thanks to Matt Neth for reporting this and giving an easy way to reproduce
the issue.
Problem:
If we add a referential integrity constraint with a duplicate
name, an error occurs. The foreign key object would not have
been added to the dictionary cache. In the error path, there
is an attempt to remove this foreign key object. Since this
object is not there, the search returns a NULL result.
De-referencing the null object results in this crash.
Solution:
If the search for the foreign key object fails, then don't
attempt to access it.
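A hypothetical way to reach that error path is adding a second foreign key
whose name is already in use:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (pid INT,
    CONSTRAINT fk_pid FOREIGN KEY (pid) REFERENCES parent(id)) ENGINE=InnoDB;
  ALTER TABLE child
    ADD CONSTRAINT fk_pid FOREIGN KEY (pid) REFERENCES parent(id);  -- duplicate name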
rb#9309 approved by Marko.
in ha_delete_table()
* only convert ENOENT and HA_ERR_NO_SUCH_TABLE to warnings
* only return real error codes (that is, not ENOENT and
not HA_ERR_NO_SUCH_TABLE)
* intercept HA_ERR_ROW_IS_REFERENCED to generate backward
compatible ER_ROW_IS_REFERENCED
in mysql_rm_table_no_locks()
* no special code to handle HA_ERR_ROW_IS_REFERENCED
* no special code to handle ENOENT and HA_ERR_NO_SUCH_TABLE
* return the multi-table error ER_BAD_TABLE_ERROR <table list> only
when there were many errors, not when there were many
tables to drop but only one table generated an error (see the example below)
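For example (hypothetical tables):

  DROP TABLE t1, t2, t3;
  -- if only t2 fails (e.g. because it is referenced by a foreign key),
  -- the error should now name just t2 rather than the whole table list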
When RENAME TABLE is executed, it apparently does not check whether the engine
is available (unlike ALTER TABLE .. RENAME, which does). It means that if the
engine in question was not loaded for some reason, the table might become
unusable, since the engine won't know about the change.
With this patch RENAME TABLE fails if the storage engine is not available.
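For illustration (hypothetical table, assuming its storage engine plugin is
not loaded):

  RENAME TABLE t1 TO t2;         -- now fails instead of leaving the table unusable
  ALTER TABLE t1 RENAME TO t2;   -- was already refused before this patch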