when compressed tables are used.
Analysis: The number of flushed pages is incorrectly calculated in
buf_do_LRU_batch. This causes a problem in the utility function that
flushes dirty blocks from the end of the flush list of
all buffer pool instances in a loop until enough pages are flushed
or the time limit is reached. Because the number of flushed pages is
miscounted, the loop mostly keeps flushing until the time limit is
reached, since the page-count limit is never seen as reached.
Fix: Correct the calculation of flushed pages (a very short fix). The fix
was provided by Alexey Stroganov (Percona).
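A minimal sketch of the flushing loop in question (hypothetical names, not
the actual XtraDB code), assuming a per-instance call that wraps
buf_do_LRU_batch. If that call under-reports how many pages it flushed,
n_flushed never reaches the target and the loop runs until the deadline:

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    // Hypothetical stand-in for one buf_do_LRU_batch call on one instance.
    static unsigned long flush_one_instance(int /*instance*/)
    {
        return 0;  // the bug: pages get flushed, but the reported count is too low
    }

    unsigned long flush_dirty_pages(int n_instances, unsigned long n_to_flush,
                                    Clock::duration time_limit)
    {
        const Clock::time_point deadline = Clock::now() + time_limit;
        unsigned long n_flushed = 0;

        // Loop over all buffer pool instances until enough pages are flushed
        // or the time limit is reached; a too-low count starves the first exit.
        while (n_flushed < n_to_flush && Clock::now() < deadline) {
            for (int i = 0; i < n_instances; i++)
                n_flushed += flush_one_instance(i);
        }
        return n_flushed;
    }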
In parallel replication, there was an error case where we could call
my_error() in-between events. This triggered the assertion, because the
previous event had reported OK status while the following event had not yet
reset the diagnostics area. It happened when a worker thread detected that
the SQL driver thread was aborting and, at the same time, got an error from a
prior commit in wait_for_prior_commit().
Since this is already an error case, the code should use
unregister_wait_for_prior_commit() instead of wait_for_prior_commit(). But the
unregister is already done a bit later (from finish_event_group()), so just
removing the redundant call to wait_for_prior_commit() fixes the issue.
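A hedged sketch of that control flow (simplified, hypothetical structure;
only wait_for_prior_commit(), unregister_wait_for_prior_commit() and
finish_event_group() are names from the actual code):

    struct Worker {
        bool driver_thread_aborting = false;

        void wait_for_prior_commit() { /* may raise an error via my_error() */ }
        void unregister_wait_for_prior_commit() { /* silently drop the wait */ }

        void finish_event_group() {
            // Always runs on this path, so an explicit wait above is redundant.
            unregister_wait_for_prior_commit();
        }

        void run_event_group() {
            if (driver_thread_aborting) {
                // The buggy version called wait_for_prior_commit() here, which
                // could report an error in-between events and trip the
                // diagnostics-area assertion. The fix removes that call.
            }
            finish_event_group();
        }
    };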
- MDEV-5689 ExtractValue(xml, 'substring(/x,/y)') crashes
- MDEV-5709 ExtractValue() with XPath variable references returns wrong result.
Description:
1. The main problem was that nodeset_func->fix_fields() was
called in Item_func_xml_extractvalue::val_str() and
Item_func_xml_update::val_str(), which in some cases led to
execution of the XPath engine *before* having a parsed XML value.
Moved the call to Item_xml_str_func::fix_fields().
2. Cleanup: added a new method Item_xml_str_func::fix_fields() and moved
most of the code from Item_xml_str_func::fix_length_and_dec()
to Item_xml_str_func::fix_fields(), to follow the usual Item layout.
3. Cleanup: a parsed XML value is useless without the raw XML value
it was built from.
Previously the parsed and the raw values were stored in separate String
instances, and it was hard to follow how they were kept synchronized.
Added a helper class XML which contains both parsed and raw values.
Makes things easier to read and modify.
4. MDEV-5709: const_item() could incorrectly return a "true"
result when the XPath expression contains user/SP variable references.
Now nodeset_func->const_item() is also taken into account to
catch such cases (see the sketch after this list).
5. Minor code enhancements.
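An illustrative sketch of the MDEV-5709 fix (the class hierarchy here is
simplified and hypothetical; in the server these are Item subclasses):

    struct Item {
        virtual bool const_item() const = 0;
        virtual ~Item() {}
    };

    struct Xml_str_func_sketch : Item {
        Item *xml_arg;        // the raw XML argument
        Item *nodeset_func;   // the compiled XPath expression; it may reference
                              // user/SP variables, which are not constant

        bool const_item() const override {
            // Before the fix only the plain arguments were checked, so an XPath
            // expression with variable references was wrongly treated as const.
            return xml_arg->const_item() && nodeset_func->const_item();
        }
    };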
A huge number in the "day" part of an interval erroneously made the code
return a negative date. Added a check that returns an error on a too
large "day" value.
After constant table row substitution the WHERE condition may be converted
to one that is always true. The function calculate_cond_selectivity_for_table()
should take this possibility into account.
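A minimal sketch of the guard this implies (hypothetical names; the real
function works on MariaDB's Item/TABLE structures):

    struct Cond {
        bool always_true;     // result of folding after constant row substitution
        double selectivity;   // estimated fraction of rows satisfying the cond
    };

    double cond_selectivity_sketch(const Cond *cond)
    {
        // If the condition folded away to TRUE, it filters nothing and the
        // selectivity is exactly 1.0; the buggy code did not consider this.
        if (cond == nullptr || cond->always_true)
            return 1.0;
        return cond->selectivity;
    }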
Due to how gap locks work, two transactions could group-commit together on the
master, but get lock conflicts and then deadlock on the slave due to a
different thread scheduling order there.
For now, remove these deadlocks by running the parallel slave in READ
COMMITTED mode, and let InnoDB/XtraDB allow statement-based binlogging for the
parallel slave in READ COMMITTED.
We are also investigating a different long-term solution, based on relaxing
the gap locks only between the transactions running in parallel for one
slave, but not against possibly external transactions.
When a transaction fails in parallel replication, it should signal the error
to any following transactions doing wait_for_prior_commit() on it. But the
code for this was incorrect and did not correctly remember a prior error
when sending the signal. This caused corruption when the slave stopped due to
an error.
Fix by remembering the error code when we first get an error, and passing the
saved error code to wakeup_subsequent_commits().
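A hedged sketch of that fix (wakeup_subsequent_commits() is the name from
the commit text; the surrounding structure is a simplified stand-in):

    struct wait_for_commit_sketch {
        int first_error;                 // remembered error code, 0 = no error

        wait_for_commit_sketch() : first_error(0) {}

        void record_error(int err) {
            if (first_error == 0)        // keep the code of the *first* error
                first_error = err;
        }

        void wakeup_subsequent_commits(int err) {
            // Wake any transactions waiting on us, passing err so they see
            // the failure instead of a stale "no error" status.
            (void)err;
        }

        void finish() {
            // The buggy code passed a possibly already-reset error here; the
            // fix passes the error code saved when the error first happened.
            wakeup_subsequent_commits(first_error);
        }
    };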
Thanks to nanyi607rao who reported this bug on
maria-developers@lists.launchpad.net and analysed the root cause.
Analysis: XtraDB merge regression. At the end of mutex_spin_wait(), before
the goto mutex_loop, this was missing:

    if (prio_mutex) {
        /* balance the earlier waiter-count increment */
        os_atomic_decrement_ulint(&prio_mutex->high_priority_waiters, 1);
    }

Hence we got an unbalanced waiter count.
Thanks to Laurynas Biveinis for finding this.
Let TABLE_SHARE::tdc.free_tables, TABLE_SHARE::tdc.all_tables,
TABLE_SHARE::tdc.flushed and the corresponding invariants be protected by
the per-share TABLE_SHARE::tdc.LOCK_table_share instead of the global
LOCK_open.
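Illustratively, the change replaces one global mutex with a per-share one
(sketch with simplified, partly hypothetical names; only LOCK_table_share
mirrors the real field):

    #include <mutex>
    #include <list>

    struct Table_share_sketch {
        std::mutex LOCK_table_share;   // per-share lock (was: global LOCK_open)
        std::list<void*> free_tables;  // protected by LOCK_table_share
        std::list<void*> all_tables;   // protected by LOCK_table_share
        bool flushed = false;          // protected by LOCK_table_share

        void mark_flushed() {
            // Touching one share no longer serializes against all other shares.
            std::lock_guard<std::mutex> guard(LOCK_table_share);
            flushed = true;
        }
    };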
Now, if CREATE OR REPLACE fails but we have already deleted a table, we
generate a DROP TABLE in the binary log. This fixes the issue.
In addition, for a failing CREATE OR REPLACE TABLE ... SELECT we no longer
log all the inserted rows, only the DROP TABLE.
I added code to not log DROP TEMPORARY TABLE for tables whose CREATE TABLE
was not logged. This code will be activated in 10.1 by removing the code
protected by DONT_LOG_DROP_OF_TEMPORARY_TABLES.
mysql-test/suite/rpl/r/create_or_replace_mix.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_row.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_statement.result:
More test cases
mysql-test/suite/rpl/t/create_or_replace.inc:
More test cases
sql/log.cc:
Added binlog_reset_cache() to clear the binary log.
sql/log.h:
Added prototype
sql/sql_insert.cc:
If CREATE OR REPLACE TABLE ... SELECT fails:
- Don't log anything if nothing changed
- If table was deleted, log a DROP TABLE.
Remember whether the creation of temporary tables was logged.
sql/sql_table.cc:
Added log_drop_table()
Remember whether the creation of temporary tables was logged.
If CREATE OR REPLACE TABLE ... SELECT fails and a table was deleted, log a DROP TABLE.
sql/sql_table.h:
Added prototype
sql/sql_truncate.cc:
Remember whether the creation of temporary tables was logged.
sql/table.h:
Added table_creation_was_logged
mysql-test/r/create_or_replace2.result:
Added test case
mysql-test/t/create_or_replace.test:
Fixed comment
mysql-test/t/create_or_replace2.test:
Added test case
sql/sql_base.cc:
Safety fix:
Don't let threads with query_id=0 free temporary tables, as this may free
temporary tables not in use. This is mostly relevant for the slave I/O
threads, as most other threads have thd->query_id != 0.
sql/sql_table.cc:
Added comment.
Ignore kill when opening a temporary table for CREATE ... LIKE.
This fixed the original issue.
Analysis: This was a merge error in file fil0fil.cc; because of it, the
fil_system mutex was taken twice.
Fix: Remove the unnecessary mutex_enter and fix the issue with slow
posix_fallocate usage.
- If an UPDATE 1) modifies the key it is using, and 2) has ORDER BY ... LIMIT
which matches the key it is using, then we should use "Using buffer", not
"Using filesort".
Automatic merge, except for server_audit.cc, which had to be modified slightly.
Changes to xtradb and innobase were ignored, as these made no sense for 10.0.
mysql-test/r/create_or_replace.result:
Added test of releasing of metadata locks
mysql-test/t/create_or_replace.test:
Added test of releasing of metadata locks
sql/handler.h:
Added marker if table was deleted as part of CREATE OR REPLACE
sql/sql_base.cc:
Added Locked_tables_list::unlock_locked_table()
sql/sql_class.h:
New prototypes
sql/sql_insert.cc:
Unlock metadata locks for the deleted table in case of error. Also unlock tables if this was the only locked table.
sql/sql_table.cc:
Unlock metadata locks for the deleted table in case of error. Also unlock tables if this was the only locked table.
Remove memory warnings if the mysql client aborts early.
Changed copyright for clients
client/mysql.cc:
Free memory if get_options fails, so that we don't get warnings from safemalloc
include/welcome_copyright_notice.h:
Added SkySQL to client copyrights
mysql-test/valgrind.supp:
Added suppressions for memory leaks from dlopen() for OpenSUSE 12.3
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.result:
Suppress warning
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.test:
Suppress warning
The problem was that a big record was allocated on the stack, which caused the
stack to run out.
Fixed by using my_safe_alloca() instead of my_alloca() when allocating records.
Now only records <= 16384 bytes are allocated on the stack.
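A sketch of the pattern behind my_safe_alloca() (simplified and hypothetical
in its names; MARIA_MAX_RECORD_ON_STACK is the limit added below): small
allocations stay on the stack, big ones fall back to the heap and must be
released with the matching free macro:

    #include <alloca.h>   /* non-standard header, available on common targets */
    #include <stdlib.h>

    #define MAX_RECORD_ON_STACK 16384  /* cf. MARIA_MAX_RECORD_ON_STACK */

    /* Stack for small sizes, heap for big ones (sketch of the idiom; must be
       a macro so alloca() runs in the caller's stack frame). */
    #define safe_alloca_sketch(size) \
        ((size) <= MAX_RECORD_ON_STACK ? alloca(size) : malloc(size))

    /* Free only what was heap-allocated; stack memory unwinds by itself. */
    #define safe_afree_sketch(ptr, size) \
        do { if ((size) > MAX_RECORD_ON_STACK) free(ptr); } while (0)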
mysql-test/r/stack-crash.result:
Added test case
mysql-test/t/stack-crash.test:
Added test case
storage/maria/ma_blockrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/ma_dynrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/maria_def.h:
Added MARIA_MAX_RECORD_ON_STACK
storage/maria/maria_pack.c:
Use my_safe_alloca() instead of my_alloca()
Before, the arrival of the same GTID twice in multi-source replication
would cause it to be applied twice, or, in GTID strict mode, an error.
Keep that behaviour by default, but add an option --gtid-ignore-duplicates
which allows duplicates to be handled correctly by ignoring all but the first.
This relies on the user ensuring a correct configuration, so that
sequence numbers are strictly increasing within each replication
domain; duplicates can then be detected simply by comparing the
sequence numbers against what has already been applied.
Only one master connection (though possibly multiple parallel worker
threads within that connection) is allowed to apply events within
one replication domain at a time; any other connection that
receives a GTID in the same domain either discards it (if it is
already applied) or waits for the other connection to have no more
events to apply.
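A hedged sketch of the duplicate check this describes (illustrative types,
not MariaDB's actual structures), assuming strictly increasing sequence
numbers per replication domain:

    #include <stdint.h>

    struct Gtid_sketch { uint32_t domain_id; uint32_t server_id; uint64_t seq_no; };

    struct Domain_state_sketch {
        uint64_t last_applied_seq_no = 0;

        // Returns true if the event should be skipped under
        // --gtid-ignore-duplicates: with strictly increasing seq_no per domain,
        // anything <= the last applied value must already have been applied.
        bool is_duplicate(const Gtid_sketch &g) {
            if (g.seq_no <= last_applied_seq_no)
                return true;
            last_applied_seq_no = g.seq_no;
            return false;
        }
    };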
Intermediate patch, as a proof-of-concept for testing. The main limitation
is that it is currently only implemented for parallel replication,
@@slave_parallel_threads > 0.