followup for 97c2a7354b - don't use thd->is_error(), as the error
could have been set before TABLE_LIST::cleanup_items.
Use the error handler to count errors instead.
This fixes rpl.rpl_row_binlog_max_cache_size - it was failing when
ER_STMT_CACHE_FULL happened during a multi-table update. Because
multi_update::abort_result_set() calls do_updates() to update
as much as possible, one cannot rely on thd->is_error() after that.
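A minimal sketch of the counting-handler idea, assuming MariaDB's
Internal_error_handler hook (the exact handle_condition() signature differs
between server versions, so treat this as illustrative):

  class Count_errors_handler : public Internal_error_handler
  {
  public:
    uint errors;                              // errors seen while handler is pushed
    Count_errors_handler() : errors(0) {}
    bool handle_condition(THD *, uint /*sql_errno*/, const char *,
                          Sql_condition::enum_warning_level *level,
                          const char *, Sql_condition **)
    {
      if (*level == Sql_condition::WARN_LEVEL_ERROR)
        errors++;
      return false;                           // let normal error handling continue
    }
  };

The caller would push the handler around the code that may clear
thd->is_error() (e.g. TABLE_LIST::cleanup_items / do_updates()), pop it
afterwards, and test handler.errors instead of thd->is_error().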
Fixed by making sure that the sort buffer holds at least MERGEBUFF2 keys.
Also fixed MDEV-13457 by making sure that an empty tree is never dumped to disk.
It used current_thd->alloc() and allocated on the THD's execution arena,
not on table->expr_arena.
Remove THD::arena_for_cached_items, which is temporarily set in
update_virtual_fields() and replaces the THD arena in get_datetime_value().
Instead, set the THD arena to table->expr_arena for the whole duration
of update_virtual_fields().
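The intended allocation pattern, as a sketch (member names follow the
description above; the real function has more parameters and error handling):

  int update_virtual_fields(THD *thd, TABLE *table /* , ... */)
  {
    Query_arena backup;
    if (table->expr_arena)
      thd->set_n_backup_active_arena(table->expr_arena, &backup);
    /* evaluate the virtual column expressions; any thd->alloc() done here,
       including cached items created by get_datetime_value(), now lands on
       table->expr_arena instead of the execution arena */
    if (table->expr_arena)
      thd->restore_active_arena(table->expr_arena, &backup);
    return 0;
  }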
- Reverted from tracking the donor servicing thread. With xtrabackup SST,
  the xtrabackup thread calls FTWRL and the node is desynced upfront.
- Skip desync in FTWRL if the node is operating as a donor.
- Enveloped FTWRL processing with wsrep desync/resync calls (see the sketch
  after this list). This way a node processing FTWRL will not cause flow
  control to kick in.
- The donor servicing thread is an unfortunate exception: we must let it
  pause the provider as part of the FTWRL phase, but not desync/resync,
  as that is done as part of donor control at a higher level.
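A rough sketch of the control flow described above; wsrep_node_is_donor(),
wsrep_desync() and wsrep_resync() are illustrative stand-ins for the real
provider calls:

  bool process_ftwrl(THD *thd)
  {
    bool desynced= false;
    if (!wsrep_node_is_donor())      // donor is already desynced for the SST
    {
      wsrep_desync(thd);             // keep flow control from kicking in
      desynced= true;
    }
    bool res= do_ftwrl(thd);         // pause provider, take the global read lock
    if (desynced)
      wsrep_resync(thd);             // resync once FTWRL processing is over
    return res;
  }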
- Fixed typos
- Added --core-on-failure to mysql-test-run
- More DBUG_PRINT in viosocket.c
- Don't forget CLIENT_REMEMBER_OPTIONS for compressed slave protocol
- Removed unused stage variables
(like DROP TABLE) has been scheduled before conflicting DML's (like INSERT)
are committed.
What makes these bugs hard to detect is that in most cases any wrong
scheduling is caught by MDL locks. It's only when there are timing issues
that the bugs (usually deadlocks) are noticed.
This patch adds a DBUG_ASSERT() that detects, in parallel replication,
if a DDL is scheduled before any depending DML's are committed.
It does this by checking if there are any conflicting replication locks
when the DDL is about to wait for its MDL lock.
I also did some minor code cleanups in sql_base.cc to make this code
similar to other related code.
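The assertion, roughly (thd->rgi_slave and is_parallel_exec are real members;
rpl_has_waiting_conflicting_lock() is a hypothetical name for the check that
inspects the replication locks):

  #ifndef DBUG_OFF
    if (thd->rgi_slave && thd->rgi_slave->is_parallel_exec)
    {
      /* A DDL must never have to wait for an MDL lock that is held by an
         earlier, not yet committed DML of the same replication stream. */
      DBUG_ASSERT(!rpl_has_waiting_conflicting_lock(thd, mdl_request));
    }
  #endif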
Transaction replay causes the THD to re-apply the replication
events from execution, using the same path appliers do. While
applying the log events, the THD's timestamp is set to the
timestamp of the event.
Setting the timestamp explicitly causes the NOW() function to always
return the timestamp that was set. To avoid this behavior we
reset the timestamp after replaying is done.
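The reset, sketched (the function name is illustrative; user_time and
set_time() are the usual THD members):

  static void reset_time_after_replay(THD *thd)
  {
    thd->user_time.val= 0;     // forget the timestamp taken from the replayed event
    thd->set_time();           // NOW() follows the wall clock again
  }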
Some statements are always replicated in STATEMENT binlog format.
So upon their execution, the current binlog format is temporarily
switched to STATEMENT even though the session's format is different.
This state, stored in THD's current_stmt_binlog_format, was getting
incorrectly masked by wsrep_forced_binlog_format, causing assertions
and unintended generation of row events.
Backported galera.galera_forced_binlog_format and added a test
specific to this case.
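The gist of the fix, as a sketch (an illustrative helper; the real logic sits
where the effective binlog format is computed):

  static ulong effective_binlog_format(THD *thd)
  {
    /* A statement that the server itself switched to STATEMENT format
       must stay STATEMENT, even under wsrep_forced_binlog_format=ROW. */
    if (!thd->is_current_stmt_binlog_format_row())
      return BINLOG_FORMAT_STMT;
    if (wsrep_forced_binlog_format != BINLOG_FORMAT_UNSPEC)
      return wsrep_forced_binlog_format;
    return thd->variables.binlog_format;
  }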
The main.merge test case was failing when tested using row based
binlog format.
While analyzing the issue, the following problems were found:
a) The server calls binlog-related code even when a statement will
not be binlogged;
b) The child table list was not present in the table structure by the time
the CREATE TABLE statement was generated;
c) The tables in the child table list are not opened yet when
generating the table create info using row based replication;
d) CREATE TABLE LIKE TEMP_TABLE does not preserve the original table storage
engine when using row based replication.
This patch addresses all of the above issues.
@ sql/sql_class.h
Added a function to determine if the binary log is disabled for
the current session (see the sketch after these notes). This is
related to issue (a) above.
@ sql/sql_table.cc
Added code to skip the binary logging related code if the statement
will not be binlogged. This is related to issue (a) above.
Added code to add the children to the query list of the table that
will have its CREATE TABLE generated. This is related to issue (b)
above.
Added code to force the storage engine to be generated into the
CREATE TABLE statement. This is related to issue (d) above.
@ storage/myisammrg/ha_myisammrg.cc
Added a check to skip getting info about a child table if the
child table is not opened. This is related to issue (c) above.
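A sketch of the sql_class.h helper for issue (a), assuming "binary log
disabled for the session" means the binlog is not open or sql_log_bin is off
for this THD (the actual name and placement may differ):

  inline bool binlog_disabled_for_session(const THD *thd)
  {
    return !mysql_bin_log.is_open() ||
           !(thd->variables.option_bits & OPTION_BIN_LOG);
  }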
- Fixes query cache so that it is aware of wsrep_sync_wait.
Query cache would return (possibly stale) results to the
client, regardless of the value of wsrep_sync_wait.
- Includes the test case that reproduced the issue.
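The shape of the fix, as a sketch: before the query cache hands back a stored
result it honors wsrep_sync_wait the same way a regular read would.
wsrep_sync_wait() is the existing helper; the surrounding lookup code is
abbreviated and the exact placement is illustrative.

  int Query_cache::send_result_to_client(THD *thd, char *sql, size_t query_length)
  {
  #ifdef WITH_WSREP
    if (WSREP(thd) && wsrep_sync_wait(thd))
      return 0;                 // sync wait failed: don't serve from the cache
  #endif /* WITH_WSREP */
    /* ... normal cache lookup and result sending ... */
    return 0;
  }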
- Added a check for whether the connection is killed, to shortcut reading of connection data.
This will allow us later in 10.2 to do a cleaner shutdown of slaves (fewer errors in the log).
- Add new status variables: Slaves_connected, Slaves_running and Slave_connections.
- Use MYSQL_SLAVE_NOT_RUN instead of 0 with slave_running.
- Don't print obvious extra warnings to the error log when slave is shut down normally.
As a galera node (slave) received query log events from an async
replication master, it partially wrote the updates made to the replication
state table (mysql.gtid_slave_pos) to the galera transaction writeset post
TOI. As a result, the transaction handle thus created within galera
was never freed/purged, as the corresponding trx did not commit.
Thus, it kept piling up for every query log event and was only reclaimed
upon server shutdown when the transaction map object got destructed.
Fixed by making sure that updates to the replication slave state table
are not written to the galera transaction writeset and thus not replicated
to other nodes.
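Illustration of the approach: switch wsrep off for the THD around the
mysql.gtid_slave_pos update so it is not appended to the writeset (the real
fix lives in the slave position update path; thd->variables.wsrep_on is the
existing session flag):

  void update_slave_gtid_state(THD *thd /* , ... */)
  {
    my_bool saved_wsrep_on= thd->variables.wsrep_on;
    thd->variables.wsrep_on= FALSE;   // keep this update out of the writeset
    /* ... write the row to mysql.gtid_slave_pos ... */
    thd->variables.wsrep_on= saved_wsrep_on;
  }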
Problem:
At the end of the first execution, select_lex->prep_where points to
a runtime-created object (a temporary table field). As a result the
server exits trying to access an invalid pointer during the second
execution.
Analysis:
While optimizing the join conditions for the query, after the
permanent transformation, the optimizer makes a copy of the new
where conditions in select_lex->prep_where. "prep_where" is what
is used as the "where condition" for the query at the start of execution.
For the query in question, the "where" condition actually points
to a field in the temporary table. As a result, for the second
execution the pointer is no longer valid, resulting in a server exit.
Fix:
At the end of the first execution, select_lex->where will have the
original item of the where condition.
Make prep_where the new place where the original item of select->where
has to be rolled back.
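As a sketch, using the item-tree rollback machinery (the helper name is
hypothetical; thd->change_item_tree() is the existing mechanism that records
the old value and restores it at the end of statement execution):

  void SELECT_LEX::set_prep_where(THD *thd, Item *new_where)
  {
    /* Record the change at &prep_where so the original WHERE item is
       rolled back here, and re-execution starts from a permanent item
       instead of a runtime-created temporary table field. */
    thd->change_item_tree(&prep_where, new_where);
  }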
Fixed in 5.7 with the wl#7082 - Move permanent transformations from
JOIN::optimize to JOIN::prepare
Patch for 5.5 includes the following backports from 5.6:
Bugfix for Bug12603141 - This makes the first execute statement in the testcase
pass in 5.5
However, it was noted later in Bug16163596 that the above bugfix needed to
be modified. Although Bug16163596 is reproducible only with the changes done
for Bug12582849, we have decided to include the fix.
Considering that Bug12582849 is related to Bug12603141, that fix is
also included here. However, this results in Bug16317817, Bug16317685 and
Bug16739050. So the fixes for the above three bugs are also part of this patch.
In a galera cluster, when MyISAM replication is enabled
(wsrep_replicate_myisam=ON), DML statements are replicated
in open_tables(). However, in the case of prepared statements,
for an INSERT, open_tables() gets invoked twice: once for
COM_STMT_PREPARE (to validate and prepare the INSERT) and later
for COM_STMT_EXECUTE. As a result, the command gets replicated
twice. The same happens for REPLACE, UPDATE and DELETE commands.
Fixed by adding a check to not replicate during the 'prepare'
phase (see the sketch below). Also changed the order of conditions to
make them more efficient. Lastly, in order to support wsrep_dirty_reads,
made changes to allow COM_STMT_XXX commands to continue past the initial
check even when wsrep is not ready.
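The check, sketched (hypothetical placement in open_tables(); WSREP(),
wsrep_replicate_myisam and stmt_arena are existing names, is_myisam_dml() is
an illustrative helper):

  if (WSREP(thd) &&
      wsrep_replicate_myisam &&
      !thd->stmt_arena->is_stmt_prepare() &&   // skip the COM_STMT_PREPARE pass
      is_myisam_dml(thd->lex->sql_command))    // INSERT/REPLACE/UPDATE/DELETE
  {
    /* ... replicate the statement to the cluster (TOI) ... */
  }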