Merge Facebook commit ca40b4417fd224a68de6636b58c92f133703fc68
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
Change the default value for innodb_log_compressed_pages to false
Logging these pages is a waste. We don't want this to be enabled.
One caution here: if the zlib version used by InnoDB is changed while the
running instance still uses the previous version, and that instance
crashes, it is possible that crash recovery will fail.
When crash recovery uses a zlib version even slightly different from the
version used by the crashed instance, it is possible that a redone
compression fails where the original did not, because the new zlib
version compresses the same data to a slightly larger size.
Because of the nature of compression, this is possible even when
upgrading to a version of zlib which actually performs better overall
compression than the previous version.
If this happens, mysql will fail to recover, since a page split cannot
be safely triggered during crash recovery.
So, either the exact zlib version must be controlled between builds,
or these rare recovery failures must be accepted. The cost of
logging these pages is quite high, so we consider this limitation to
be worthwhile.
This failure scenario cannot happen if there was a clean shutdown. It is
only relevant when restarting a crashed instance, or when starting an
instance built via a hot backup tool (XtraBackup).
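A minimal sketch of the failure mode, using plain zlib; the function and
its constants are hypothetical stand-ins, not the actual InnoDB recovery
code:

    #include <zlib.h>
    #include <vector>

    // Hypothetical recovery-time step: redo the compression of a page
    // and check that the result still fits the on-disk compressed size.
    bool redo_page_compression(const unsigned char* page, size_t page_len,
                               size_t zip_size /* e.g. 8192 */)
    {
        std::vector<unsigned char> out(compressBound(page_len));
        uLongf out_len = out.size();
        // A different zlib version may emit a slightly larger stream for
        // the same input, even if it compresses better on average.
        if (compress2(out.data(), &out_len, page, page_len,
                      Z_DEFAULT_COMPRESSION) != Z_OK)
            return false;
        // During normal operation an overflow would trigger a page split;
        // during crash recovery a split cannot be done safely, so
        // recovery fails instead.
        return out_len <= zip_size;
    }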
Add new status variables to show the progress of inplace alter table and row log buffer usage:
- Innodb_onlineddl_rowlog_rows
Shows how many rows are stored in row log buffer.
- Innodb_onlineddl_rowlog_pct_used
Shows row log buffer usage in percent, scaled by 100 (a 4-digit value; 10000 means 100.00%). See the decoding sketch after this list.
- Innodb_onlineddl_pct_progress
Shows the progress of the inplace alter table. It may not be
very accurate, because inplace alter depends heavily on disk
and buffer pool state, but it is still useful and better than
nothing.
- Add some logging for inplace alter table:
XtraDB/InnoDB will print a message before and
after performing each major task.
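A minimal decoding sketch for the scaled-percentage encoding mentioned
above (hypothetical client-side code, not part of the server):

    #include <cstdio>

    int main()
    {
        // Assume this raw value was read from SHOW GLOBAL STATUS LIKE
        // 'Innodb_onlineddl_rowlog_pct_used'; 10000 encodes 100.00%.
        long raw = 4275;                        // hypothetical reading
        std::printf("row log buffer used: %.2f%%\n", raw / 100.0);
        return 0;
    }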
file trx0trx.ic line 60
The problem was that the trx might not have been started when we enter
release savepoint; this can happen when a trx with a savepoint has
already been aborted, or when we try to release a non-existing savepoint.
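A minimal sketch of the idea behind the fix, with simplified illustrative
types (the real code in trx0trx.ic differs in detail):

    // Simplified stand-ins for the InnoDB transaction state.
    enum trx_state_t { TRX_STATE_NOT_STARTED, TRX_STATE_ACTIVE };
    struct trx_t { trx_state_t state; };

    // Releasing a savepoint must tolerate a trx that was never started,
    // e.g. because it was already aborted or the savepoint does not
    // exist; returning early avoids hitting an assertion.
    bool can_release_savepoint(const trx_t* trx)
    {
        if (trx == nullptr || trx->state == TRX_STATE_NOT_STARTED)
            return false;   // nothing to release
        return true;
    }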
Merged lp:maria/maria-10.0-galera up to revision 3880.
Added new functions to the handler API: abort_transaction (to forcefully
abort a transaction), fake_trx_id, and get_checkpoint and set_checkpoint
for XA. These were added for the future possibility of adding more
storage engines that could use galera replication.
Merged lp:maria/maria-10.0-galera up to revision 3879.
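A hedged sketch of the shape of these hooks; the signatures below are
assumptions for illustration, not copied from MariaDB's handler.h:

    struct THD;   // server connection/session object
    struct XID;   // XA transaction identifier

    struct handler_api_additions {
        // Forcefully abort a victim transaction, optionally signalling
        // the victim thread (e.g. a Galera brute-force abort).
        int  (*abort_transaction)(THD* bf_thd, THD* victim_thd,
                                  bool signal);
        // Produce a fake transaction id for a THD without a real one.
        void (*fake_trx_id)(THD* thd);
        // XA checkpoint accessors used by the replication provider.
        int  (*get_checkpoint)(XID* xid);
        int  (*set_checkpoint)(const XID* xid);
    };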
~40% of bug fixes(*) applied
~40% of bug fixes reverted (incorrect, or we're not buggy)
~20% of bug fixes applied, despite us not being buggy
(*) only changes in the server code, i.e. not cmakefiles
ha_innodb.cc line 8473
If an index is not found in InnoDB, print the MySQL and InnoDB index
name we were trying to find, and all MySQL and InnoDB index names that
exist for this table.
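An illustrative sketch of the extra diagnostics (hypothetical helper; the
real code in ha_innodb.cc walks the MySQL TABLE and InnoDB dictionary
structures rather than std::vector):

    #include <cstdio>
    #include <string>
    #include <vector>

    void report_index_not_found(const std::string& wanted,
                                const std::vector<std::string>& mysql_keys,
                                const std::vector<std::string>& innodb_keys)
    {
        std::fprintf(stderr, "InnoDB: could not find index %s\n",
                     wanted.c_str());
        for (const auto& k : mysql_keys)
            std::fprintf(stderr, "InnoDB: MySQL has index %s\n", k.c_str());
        for (const auto& k : innodb_keys)
            std::fprintf(stderr, "InnoDB: InnoDB has index %s\n", k.c_str());
    }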
Merge the patches into MariaDB 10.0 main.
With this patch, parallel replication will now automatically retry a
transaction that fails due to deadlock or other temporary error, same as
single-threaded replication.
We catch deadlocks with InnoDB transactions due to enforced commit order. If
T1 must commit before T2 in parallel replication and T1 ends up waiting for T2
inside InnoDB, we kill T2 and retry it later to resolve the deadlock
automatically.
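A minimal, self-contained sketch of the retry loop; every name here is
illustrative, and the real worker code is far more involved:

    enum { OK = 0, ERR_LOCK_DEADLOCK = 1213 };

    static int fail_once = 1;

    // Stub: deadlocks on the first attempt, succeeds on the retry.
    int apply_transaction() { return fail_once-- > 0 ? ERR_LOCK_DEADLOCK : OK; }

    // Stub: roll back and reposition in the relay log before retrying.
    void rollback_and_rewind() {}

    bool is_temporary_error(int err) { return err == ERR_LOCK_DEADLOCK; }

    int apply_with_retry(int max_retries)
    {
        for (int attempt = 0; ; ++attempt) {
            int err = apply_transaction();
            if (err == OK)
                return OK;
            if (!is_temporary_error(err) || attempt >= max_retries)
                return err;          // permanent failure: stop replication
            rollback_and_rewind();   // temporary failure: retry
        }
    }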
After-review changes.
For this patch in 10.0, we do not introduce a new public storage engine API,
we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public
API that can be used for all storage engines (MDEV-6429).
Eliminate the background thread that did deadlock kills asynchronously.
Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from
inside the deadlock detection code (when thd_report_wait_for() needs to kill a
later thread to resolve a deadlock).
(We preserve the part of the original patch that introduces dedicated mutex
and condition for the slave init thread, to remove the abuse of
LOCK_thread_count for start/stop synchronisation of the slave init thread).
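A hedged sketch of what happens inside the wait report; the signature and
helpers are illustrative, not the actual server/InnoDB interface:

    struct THD { int dummy; };

    // Stand-ins for the real server-layer helpers.
    bool thd_commits_before(const THD*, const THD*) { return true; }
    void slave_kill_for_retry(THD*) { /* mark victim for kill + retry */ }

    // Called from inside InnoDB's deadlock/wait code when one transaction
    // starts waiting for another: if the waiter is fixed to commit first,
    // kill the blocker so it can be retried after the waiter completes.
    void thd_report_wait_for(THD* waiter, THD* blocker)
    {
        if (thd_commits_before(waiter, blocker))
            slave_kill_for_retry(blocker);
    }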
Analysis: For some reason the table stats for a table pointed to from an
index were not initialized. Added additional warning output for this
situation, plus table stats initialization. This is better than asserting.
Auto-generate the allowed list of values for enum/set/flagset options
in --help output. But don't do that when the help text already has them.
Also, remove lists of values from help strings of various options, where
they were simply listed without any additional information.
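A small sketch of the rule (illustrative only, not the actual sys_vars
code or its output format):

    #include <string>
    #include <vector>

    std::string help_with_values(std::string help,
                                 const std::vector<std::string>& values)
    {
        // If the help text already mentions every allowed value, leave
        // it alone; otherwise append the auto-generated list.
        bool all_present = true;
        for (const auto& v : values)
            if (help.find(v) == std::string::npos) { all_present = false; break; }
        if (all_present)
            return help;
        help += " One of: ";
        for (size_t i = 0; i < values.size(); ++i)
            help += (i ? ", " : "") + values[i];
        return help;
    }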
Fix deadlocks in parallel replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
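A hedged sketch of the check; the names approximate the idea, and the
patch's actual hook into the server layer may differ:

    struct THD { int dummy; };
    struct trx_t { THD* mysql_thd; };

    // Stand-in for the server-layer query added by the patch.
    bool commit_order_fixed(const THD*, const THD*) { return true; }

    // Before blocking on another transaction's gap lock, ask the server
    // layer whether the two already have a fixed commit order (as in
    // parallel replication); if so, skip the wait, since the gap lock
    // cannot change the outcome between them.
    bool must_wait_on_gap_lock(const trx_t* requester, const trx_t* holder)
    {
        return !commit_order_fixed(requester->mysql_thd, holder->mysql_thd);
    }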
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
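A hedged sketch of the victim choice (illustrative stand-ins, not the
actual deadlock-detector code):

    struct THD { int dummy; };

    // Stand-in: true if a is fixed to commit before b, as in parallel
    // replication.
    bool commits_before(const THD*, const THD*) { return true; }

    // Prefer to roll back the transaction that commits later: it has to
    // wait for the earlier one anyway, so its rollback costs less.
    const THD* pick_deadlock_victim(const THD* t1, const THD* t2)
    {
        return commits_before(t1, t2) ? t2 : t1;
    }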
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now,
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction (see the sketch after
this list).
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
error condition to wrongly propagate to a later wait_for_prior_commit(),
causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, and it is probably not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
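A minimal sketch of the rgi->gtid_sub_id fix mentioned above; the field
names mirror the description, but the surrounding struct is simplified:

    #include <cstdint>

    struct rpl_group_info_sketch {
        uint64_t gtid_sub_id;   // stays valid for the whole transaction
        bool     gtid_pending;  // replaces "sub_id == 0" as the flag
    };

    void record_gtid(rpl_group_info_sketch* rgi)
    {
        // Before the fix, gtid_sub_id was zeroed here to mark that
        // record_gtid() had run, losing the value needed later for
        // deadlock handling at commit time.
        rgi->gtid_pending = false;
    }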
This patch allows up to 64K pages for tables with DYNAMIC, COMPACT
and REDUNDANT row formats. Tables with the COMPRESSED row format still
allow only page sizes <= 16K. Note that a single row must still be
<= 16K, and the maximum key length is not affected.
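A hedged sketch of the resulting limits; the constants and the minimum
page size are illustrative assumptions, not authoritative rules:

    #include <cstdint>

    enum row_format_t { REDUNDANT, COMPACT, DYNAMIC, COMPRESSED };

    bool page_size_allowed(row_format_t fmt, uint32_t page_size)
    {
        // COMPRESSED tables stay limited to 16K pages; the other row
        // formats may now go up to 64K. Page sizes are powers of two.
        const uint32_t limit = (fmt == COMPRESSED) ? 16384u : 65536u;
        return page_size >= 4096u && page_size <= limit
            && (page_size & (page_size - 1)) == 0;
    }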