Partially cherry-pick from mysql/5.6.
No test case (the mysql/5.6 test case is useless; the correct
test case uses too much memory)
commit e061985813db54948f99892d89f7e076242473a5
Author: <Dao-Gang.Qu@sun.com>
Date: Tue Jun 1 15:02:22 2010 +0800
Bug #49931 Incorrect type in read_log_event error
Bug #49932 mysqlbinlog max_allowed_packet hard coded to 1GB
If somehow the COMMIT or XID event in an event group was missing, the code in
parallel replication that handles this was not sufficient, leading to a server
deadlock.
In parallel replication, don't rollback inside ha_commit_trans() in case of
error.
The rollback will be done later, but the parallel replication code needs to
run unmark_start_commit() before the rollback to properly control the
sequencing of transactions.
I did not manage to come up with a reliable automatic test case for this, but
I tested it manually.
cherry-pick the upstream fix
commit d4ba10184cd7bde9c31c610e664ecd0c93605c46
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date: Wed Jul 2 11:34:11 2014 +0530
Bug#17453826:ASSERTION ERROR WHEN SETTING FUTURE BINLOG
FILE/POS WITH SEMISYNC
Problem:
========
When DMLs are in progress on the master, stopping a slave and
setting the binlog name/pos ahead will cause an assert on the
master.
...
Item_func::print() prints itself as name + "(" + arguments + ")".
Normally that works, but Item_func_interval internally implements its
arguments as one single Item_row. Item_row prints itself as
"(" + values + ")". As a result, "INTERVAL(1,2)" was being printed
as "INTERVAL((1,2))". Fixed with a custom Item_func_interval::print().
Redefine FT_KEYPART in a way that it does not conflict with Hash Join.
Hash join stores field->field_index in KEYUSE::keypart, so we must
use a value of FT_KEYPART that's greater than MAX_FIELDS.
The order of initialisation during server startup was incorrect. The slave
threads were started before the parallel replication worker thread pool was
initialised, allowing a race where uninitialised data could be accessed.
Do not use merge_for_insert for commands that use SELECT, because the optimizer cannot work with such tables.
Fixes that make multi-delete work with normally merged views.
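A hypothetical statement of the now-working kind (table and view names invented):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    CREATE VIEW v1 AS SELECT a FROM t1;
    -- multi-table DELETE through a normally merged view
    DELETE v1, t2 FROM v1 JOIN t2 ON v1.a = t2.a;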
When the distance passed to ST_BUFFER is too far negative, the coordinates can run outside the operational
area. We should just return an empty geometry in this case.
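An illustrative call (the geometry and distance are arbitrary); with a distance this far negative the result should now be an empty geometry rather than coordinates outside the operational area:

    SELECT ST_AsText(
        ST_Buffer(ST_GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'), -1e10));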
Call mysql_derived_reinit() if we are reusing a view.
This is needed because, after a previous error condition, the view may not have been reset.
sql/sql_derived.cc:
More DBUG_PRINT
Always reset merged_for_insert (no reason to not do that)
sql/sql_derived.h:
Added prototype
sql/sql_insert.cc:
More DBUG_PRINT
Added DBUG_ASSERT
sql/sql_view.cc:
Call mysql_derived_reinit() if we are reusing a view.
This is needed because, after a previous error condition, the view may not have been reset.
sql/table.cc:
More DBUG_PRINT
Using a boolean flag for 'there is a RESET MASTER in progress' doesn't
work very well for multiple concurrent RESET MASTER statements.
Changed to a counter.
Fix MDL to report an error when a wait was killed, but preserve
the old documented behavior of GET_LOCK() where killing it is not an error.
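A sketch of the preserved GET_LOCK() behavior (lock name and thread id are placeholders):

    -- connection 1, blocks because another connection already holds 'mylock'
    SELECT GET_LOCK('mylock', 3600);
    -- connection 2
    KILL QUERY <connection 1 thread id>;
    -- connection 1: GET_LOCK() returns NULL (lock not obtained) instead of raising an error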
Also remove race conditions in the main.create_or_replace test.
Stage "Filling schema table" is now properly shown in 'show processlist'
mysys/mf_keycache.c:
Simple cleanup with more comments
sql/lock.cc:
Return to original stage after mysql_lock_tables
Made 'Table lock' a true stage
sql/sql_show.cc:
Restore original stage after get_schema_tables_result
- The code that tested whether
    WHERE expr=value AND expr=const
  can be rewritten to
    WHERE const=value AND expr=const
  was incomplete in the case of STRING_RESULT.
- Moved the test into a new function to reduce duplicate code.
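A hypothetical query of the affected shape (names invented); presumably the check must also take character sets/collations into account when the compared items have STRING_RESULT:

    -- expr=value AND expr=const, here t1.a = t2.b AND t1.a = 'abc'
    SELECT * FROM t1, t2
    WHERE t1.a = t2.b AND t1.a = 'abc';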
FULLTEXT indexes do not permit index-first lookups. Calling
ha_index_first() with a garbage parameter overwrites random data, which
corrupts the table->field array. Subsequently, when the field array is
accessed, a segfault occurs.
By not collecting index statistics for FULLTEXT indexes, the problem is
resolved.
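A minimal scenario of the affected kind, assuming the index read is triggered by collecting engine-independent index statistics (names invented):

    CREATE TABLE t1 (a INT, b TEXT, FULLTEXT KEY ftb (b)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1, 'one'), (2, 'two');
    -- statistics collection previously tried ha_index_first() on the
    -- FULLTEXT index; such indexes are now skipped
    ANALYZE TABLE t1 PERSISTENT FOR ALL;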
Fixing wrong asymmetric code in Arg_comparator::set_cmp_func().
It existed for a long time, but showed up in 10.0.14 after the fix
for "MDEV-6666 Malformed result for CONCAT(utf8_column, binary_string)".
column TIMESTAMP NOT NULL
Analysis: The problem was that the value->is_null() function was called
even when the user had explicitly set the value for the timestamp
field. Calling this function had the side effect that the
expression was evaluated twice.
Fix (by Sergei Golubchik): check value->null_value instead.
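A sketch of how the double evaluation could be observed, assuming an explicitly set timestamp value whose expression has a visible side effect (names invented):

    CREATE TABLE t1 (ts TIMESTAMP NOT NULL);
    SET @n = 0;
    INSERT INTO t1 VALUES (FROM_UNIXTIME(1000000000 + (@n := @n + 1)));
    -- with the extra value->is_null() call the expression ran twice,
    -- so @n could end up as 2 instead of 1
    SELECT @n;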
The bug occurs when a transaction does a retry after all transactions have
done mark_start_commit() in a batch of group commit from the master. In this
case, the retrying transaction can unmark_start_commit() after the following
batch has already started running and de-allocated the GCO. Then after retry,
the transaction will re-do mark_start_commit() on a de-allocated GCO, and also
wakeup of later GCOs can be lost.
This was seen "in the wild" by a user, even though it is not known exactly
what circumstances can lead to retry of one transaction after all transactions
in a group have reached the commit phase.
The lifetime around GCO was somewhat clunky anyway. With this patch, a GCO
lives until rpl_parallel_entry::last_committed_sub_id has reached the last
transaction in the GCO. This guarantees that the GCO will still be alive when
a transaction does mark_start_commit(). Also, we now loop over the list of
active GCOs for wakeup, to ensure we do not lose a wakeup even in the
problematic case.
The test case tried to trigger a DEBUG_SYNC point at the end of a SELECT
SLEEP(5) statement. It did this by using EXECUTE 2, intending to trigger first
at the end of SET DEBUG_SYNC, and second at the end of the SELECT SLEEP(5).
However, in --ps-protocol mode, this does not work, because the SELECT is
executed in two steps (Prepare followed by Execute). Thus, the DEBUG_SYNC got
triggered too early, during the Prepare stage rather than Execute, and the
test case could race, with information_schema.processlist seeing the thread in the
wrong state.
This patch fixes this by changing the way the DEBUG_SYNC point is triggered. Now we
add a DBUG injection inside the code for SLEEP(5). This ensures that the
DEBUG_SYNC point is not activated until the SLEEP(5) is running, so that
the following wait for completion will be effective.
Fix:
===
Backport Bug#11756194 to mysql-5.5. slave breaks if
'drop database' fails on master and mismatched tables on
slave.
'DROP TABLE <deleted tables>' was binlogged when
'DROP DATABASE' failed and at least one table was deleted
from the database. The log event would make the slave SQL thread
stop if some of the tables did not exist on the slave.
After this patch, the statement is always binlogged with the 'IF EXISTS'
option.
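An illustration, assuming database db1 contains table t1 plus a stray non-table file that makes DROP DATABASE fail part-way (names invented):

    DROP DATABASE db1;                 -- fails on the master after deleting t1
    -- binlogged before the patch (roughly):  DROP TABLE t1
    -- binlogged after the patch:             DROP TABLE IF EXISTS t1
    -- so a slave that never had t1 no longer stops its SQL thread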
generate_derived_keys_for_table() did not work correctly in the case where
- it had a potential index on the derived table
- however, TABLE::check_tmp_key() would disallow creation of this index
  after looking at its future key parts (because the key parts exceeded the
  max. index length)
- the code would leave a KEYUSE structure that refers to a non-existent index.
Depending on further optimizer calculations, this could cause a crash.
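A hypothetical query shape that could hit this (names invented; assume a, b and c are long VARCHAR columns so the combined key length exceeds the maximum index length):

    SELECT *
    FROM t1
    JOIN (SELECT a, b, c FROM t2 GROUP BY a, b, c) AS dt
      ON t1.a = dt.a AND t1.b = dt.b AND t1.c = dt.c;
    -- the planned key on dt(a,b,c) is rejected by TABLE::check_tmp_key(),
    -- but a stale KEYUSE entry used to keep referring to it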
When restoring the auto-inc value in INSERT ... ON DUPLICATE KEY UPDATE, take into account that
1. it may be changed in the UPDATE clause (old code did that)
2. it may be changed in the INSERT clause and then cause a dup key (old code missed that)
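A sketch of case 2 (names invented): the INSERT clause supplies an explicit value for the auto-increment column, and that value is what causes the duplicate-key conflict:

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, cnt INT);
    INSERT INTO t1 (id, cnt) VALUES (10, 1);
    -- the explicit id=10 changes the auto-inc state and then hits a dup key:
    INSERT INTO t1 (id, cnt) VALUES (10, 1) ON DUPLICATE KEY UPDATE cnt = cnt + 1;
    -- the saved auto-inc value must be restored correctly so that later
    -- inserts still get sensible generated ids:
    INSERT INTO t1 (cnt) VALUES (1);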