Significantly reduce the amount of InnoDB, XtraDB and Mariabackup
code changes by defining pfs_os_file_t as something that is
transparently compatible with os_file_t.
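Roughly, the idea is a thin wrapper that still converts implicitly to and
from the plain handle, so existing call sites keep compiling unchanged. A
minimal sketch (the stand-in typedefs and member names are illustrative,
not the actual InnoDB definitions):

    typedef int os_file_t;          /* stand-in for the real OS file handle type */
    struct PSI_file;                /* opaque performance-schema instrumentation handle */

    struct pfs_os_file_t
    {
      os_file_t m_file;             /* the plain OS file handle */
      PSI_file *m_psi;              /* instrumentation handle, may be NULL */

      /* implicit conversion lets code that expects os_file_t keep working */
      operator os_file_t() const { return m_file; }
      /* assignment from a plain handle, for code that opens files directly */
      pfs_os_file_t &operator=(os_file_t f) { m_file= f; return *this; }
    };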
The problem was that mysql->net.thd was reset to NULL in mysql_real_connect(),
so thd_increment_bytes_received() didn't do anything.
Fix:
* set mysql->net.thd to current_thd instead.
* remove the test for a non-null THD from the very frequently called
function thd_increment_bytes_received() (sketched below).
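The shape of the change, heavily simplified (the signature and the struct
below are stand-ins for illustration, not the real THD or the exact server
function):

    struct THD { unsigned long long bytes_received; };

    void thd_increment_bytes_received(void *thd, unsigned long length)
    {
      /* The "if (thd)" guard is gone: callers now always pass a valid THD,
         because mysql->net.thd is set to current_thd instead of NULL. */
      ((THD*) thd)->bytes_received+= length;
    }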
Re-applied a change that was lost in merge revision 7f38a07:
MDEV-10043 - main.events_restart fails sporadically in buildbot (crashes upon
shutdown)
There was a race condition between the shutdown thread and event worker threads.
The shutdown thread waits for thread_count to reach 0 in close_connections(). It
may happen that an event worker thread has been started but has not yet
incremented thread_count. In this case the shutdown thread may fail to wait for
this worker thread and continue deinitialization. The worker thread in turn may
continue execution and crash on deinitialized data.
Fixed by incrementing thread_count before the thread is actually created, as is
done for connection threads.
Also, the event scheduler no longer increments/decrements the running threads
counter, for symmetry with other "service" threads.
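The pattern, as a generic illustration (not the server code): account for
the thread before it is spawned, so a shutdown that waits for the counter to
reach zero cannot miss it.

    #include <atomic>
    #include <thread>

    std::atomic<int> thread_count{0};

    void start_event_worker()
    {
      thread_count.fetch_add(1);          /* count the thread up front */
      try
      {
        std::thread([]
        {
          /* ... execute the event ... */
          thread_count.fetch_sub(1);      /* worker undoes the increment on exit */
        }).detach();
      }
      catch (...)
      {
        thread_count.fetch_sub(1);        /* creation failed: roll the count back */
        throw;
      }
    }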
When a deadlock kill is detected inside the storage engine, the kill
is not done immediately, to avoid calling back into the storage engine
kill_query method with various lock subsystem mutexes held. Instead the
kill is queued and done later by a slave background thread.
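The deferred-kill pattern, sketched with illustrative names (not the actual
server API):

    #include <mutex>
    #include <vector>

    struct Victim { unsigned long thread_id; };

    static std::mutex          kill_queue_mutex;
    static std::vector<Victim> kill_queue;

    /* called from inside the engine, possibly with lock-subsystem mutexes held */
    void enqueue_deadlock_kill(unsigned long thread_id)
    {
      std::lock_guard<std::mutex> guard(kill_queue_mutex);
      kill_queue.push_back(Victim{thread_id});
      /* the real code would also wake the slave background thread here */
    }

    /* executed later by the background thread, with no engine mutexes held */
    void slave_background_kill_step()
    {
      std::vector<Victim> todo;
      {
        std::lock_guard<std::mutex> guard(kill_queue_mutex);
        todo.swap(kill_queue);
      }
      for (const Victim &v : todo)
      {
        /* here the server would perform the actual kill of the victim thread */
        (void) v;
      }
    }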
This patch is in preparation for fixing TokuDB optimistic parallel
replication, as well as for removing locking hacks in InnoDB/XtraDB in
10.2.
Signed-off-by: Kristian Nielsen <knielsen at knielsen-hq.org>
- Fixed typos
- Added --core-on-failure to mysql-test-run
- More DBUG_PRINT in viosocket.c
- Don't forget CLIENT_REMEMBER_OPTIONS for compressed slave protocol
- Removed unused stage variables
Since the wsrep_sync_wait and wsrep_causal_reads variables are related,
they are always kept in sync whenever one of them changes.
The same is attempted on server start, where wsrep_sync_wait gets updated
based on the value of wsrep_causal_reads. But since wsrep_causal_reads
is OFF by default, wsrep_sync_wait's value gets modified and loses
its WSREP_SYNC_WAIT_BEFORE_READ bit.
Fixed by syncing the wsrep_sync_wait and wsrep_causal_reads values
individually on server start in mysqld_get_one_option(), based
on the command line arguments used.
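Roughly, the invariant being maintained (sketch only; the bit value shown
for WSREP_SYNC_WAIT_BEFORE_READ is an assumption, and the setter names are
illustrative, not the actual option-handling code):

    #include <cstdint>

    static const uint64_t WSREP_SYNC_WAIT_BEFORE_READ= 1;   /* assumed bit value */

    uint64_t wsrep_sync_wait=    0;
    bool     wsrep_causal_reads= false;

    void set_causal_reads(bool on)        /* --wsrep-causal-reads given */
    {
      wsrep_causal_reads= on;
      if (on)
        wsrep_sync_wait|=  WSREP_SYNC_WAIT_BEFORE_READ;
      else
        wsrep_sync_wait&= ~WSREP_SYNC_WAIT_BEFORE_READ;
    }

    void set_sync_wait(uint64_t mask)     /* --wsrep-sync-wait given */
    {
      wsrep_sync_wait= mask;
      wsrep_causal_reads= (mask & WSREP_SYNC_WAIT_BEFORE_READ) != 0;
    }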
* only copy args[0] to args[2] after fix_fields (when all item
substitutions have already happened)
* change QT_ITEM_FUNC_NULLIF_TO_CASE (which allows printing NULLIF
as CASE) to QT_ITEM_ORIGINAL_FUNC_NULLIF (which prohibits it).
This way NULLIF-to-CASE is allowed by default and only disabled
explicitly for SHOW VIEW|FUNCTION|PROCEDURE and mysql_make_view.
By default it is allowed (in particular in error messages and
debug output, which can happen anytime before or after the optimizer).
GENERATED BY THE EXP() FUNCTION
When generating the error message for numeric overflow, pass a flag to
Item::print() that prevents it from expanding constant expressions and
parameters to the values they evaluate to.
For consistency, also pass the flag to Item::print() when
Item_func_spatial_collection::fix_length_and_dec() generates an error
message. It doesn't make any difference at the moment, since constant
expressions haven't been evaluated yet when this function is called.
The problem was that we used the same condition variable with two different
mutexes. Fixed by using COND_rpl_thread_stop instead of COND_parallel_entry
for stopping threads.
Patch by Kristian Nielsen
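For reference, the rule the fix restores, as a generic illustration (not the
server code): a condition variable must be waited on and signalled under one
and the same mutex, otherwise a wakeup can be lost between the predicate
check and the wait.

    #include <pthread.h>

    pthread_mutex_t LOCK_stop= PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  COND_stop= PTHREAD_COND_INITIALIZER;
    bool stop_flag= false;

    void wait_for_stop()
    {
      pthread_mutex_lock(&LOCK_stop);
      while (!stop_flag)                          /* re-check predicate on wakeup */
        pthread_cond_wait(&COND_stop, &LOCK_stop);
      pthread_mutex_unlock(&LOCK_stop);
    }

    void signal_stop()
    {
      pthread_mutex_lock(&LOCK_stop);             /* same mutex as the waiter */
      stop_flag= true;
      pthread_cond_signal(&COND_stop);
      pthread_mutex_unlock(&LOCK_stop);
    }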
The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause
the parallel replication worker threads. Before starting FTWRL, we let
all worker threads complete in-progress transactions, and then
wait. Then we proceed to take the global read lock. Once the lock is
obtained, we unpause the worker threads. Now commits are blocked from
starting by the global read lock, so the deadlock will no longer occur.
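The resulting order of operations, sketched with illustrative function names
(not the exact server API):

    /* let in-progress event groups finish, then park the worker threads */
    void pause_parallel_workers()  { /* ... */ }
    /* wake the parked workers again */
    void resume_parallel_workers() { /* ... */ }
    /* block new commits globally */
    void lock_global_read_lock()   { /* ... */ }

    void flush_tables_with_read_lock()
    {
      pause_parallel_workers();     /* no worker is left mid-commit */
      lock_global_read_lock();      /* safe: nothing is waiting on commit order */
      resume_parallel_workers();    /* new commits now block on the read lock */
    }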
Patch backported from MariaDB 10.1
- Ensure that we wait with cleanup() until the slave thread has stopped.
- Added signal_thd_deleted() to signal close_connections() that all THDs have been freed.
Other things:
- Removed unneeded calls to THD_CHECK_SENTRY() when calling 'delete thd'.
The bitmap implementation defines two template Bitmap classes: one is
optimized for 64-bit wide bitmaps (the default), while the other is used for
all other widths.
In order to optimize the computations, the Bitmap<64> class defines its
own member functions for bitmap operations; the other one, however,
relies on mysys' bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the non 64-bit Bitmap class, intersect() wrongly reset the
received bitmap while initialising a new local bitmap structure
(bitmap_init() clears the bitmap buffer); thus the received bitmap was
getting cleared.
Fixed by initializing the local bitmap structure using a temporary
buffer and later copying the received bitmap into the initialised bitmap
structure.
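A generic illustration of the fix (not the actual Bitmap<N> code): the buggy
version initialised the helper bitmap directly over the caller's buffer, and
initialisation zeroes that buffer; the corrected version initialises over a
local temporary and copies the caller's bits into it.

    #include <cstdint>
    #include <cstring>

    struct BitmapBuf { uint64_t words[4]; };    /* stand-in for the bitmap storage */

    void intersect(BitmapBuf *self, const BitmapBuf *other)
    {
      BitmapBuf tmp;
      std::memset(&tmp, 0, sizeof(tmp));        /* "init" clears the buffer...      */
      std::memcpy(&tmp, other, sizeof(tmp));    /* ...so copy the argument in after */
      for (int i= 0; i < 4; i++)                /* intersect against the temporary; */
        self->words[i]&= tmp.words[i];          /* 'other' itself stays untouched   */
    }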
Issue 2:
The non 64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.
* Introduced include/max_indexes.inc, which gets modified by cmake
to reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to the MAX_INDEXES
value, which is then consumed by the include files introduced above.
* Some tests (portions), dependent on MAX_INDEXES value, have been moved
to separate test files.
- Added --start option to mysqld which doesn't print notes to the log on startup
This makes it easier to find errors in configuration options
- Don't write [Note] entries to the log after we have aborted the server
This makes it easier to find what went wrong
- Don't by default write out "Changed limits" for max_open_files, as this didn't really change from anything the user gave us
- Don't write warnings about not using --explicit_defaults_for_timestamp (we have no plans to deprecate the old behaviour)
Description: The command FLUSH DES_KEY_FILE is expected to
reload the DES keys from the file that was specified with
the "--des-key-file" option at server startup. But it is not
behaving as expected.
Analysis: The DES file reload code is inside the wrong conditional
directive, rendering the command ineffective. The macro "OPENSSL" was
used instead of the "HAVE_OPENSSL" macro.
Fix: The "OPENSSL" macro is changed to "HAVE_OPENSSL".
- If run with valgrind, mysqltest will now wait longer when synchronizing the slave with the master
- Ensure that we wait with cleanup() until the slave thread has stopped.
- Added signal_thd_deleted() to signal close_connections() that all THDs have been freed.
- Check in handle_fatal_signal() that we don't use variables that have been freed.
- Increased some timeouts when run with --valgrind
Other things:
- Fixed a wrong test in one_thread_per_connection_end() when Galera is used.
- Removed unneeded calls to THD_CHECK_SENTRY() when calling 'delete thd'.
When the slave processes the master restart format_description event,
parallel replication needs to complete any prior events before processing
the restart event (which closes temporary tables and such stuff).
This happens in wait_for_workers_idle(), however it was not waiting long
enough. The wait was using wait_for_prior_commit(), but at that point tables
can still be open. This led to an assertion in this case.
So change wait_for_workers_idle() to wait until all worker threads have
reached finish_event_group(), at which point all tables should have been
closed.
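As a generic illustration of the stronger wait (names simplified, not the
actual server code): each worker reports when it reaches the end of its
event group, and the coordinator waits until all of them have.

    #include <pthread.h>

    pthread_mutex_t LOCK_idle= PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  COND_idle= PTHREAD_COND_INITIALIZER;
    int workers_total= 0, workers_idle= 0;

    void on_finish_event_group()          /* called by each worker thread */
    {
      pthread_mutex_lock(&LOCK_idle);
      if (++workers_idle == workers_total)
        pthread_cond_broadcast(&COND_idle);
      pthread_mutex_unlock(&LOCK_idle);
    }

    void wait_for_workers_idle()          /* called before handling the restart event */
    {
      pthread_mutex_lock(&LOCK_idle);
      while (workers_idle < workers_total)
        pthread_cond_wait(&COND_idle, &LOCK_idle);
      pthread_mutex_unlock(&LOCK_idle);
    }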
including the big commit
commit 305130361bf72726de220f3d2b2787395e10be61
Author: Marc Alff <marc.alff@oracle.com>
Date: Tue Feb 10 11:31:32 2015 +0100
WL#8354 BACKPORT DIGEST IMPROVEMENTS TO MYSQL 5.6
(with the following commits) and related changes in sql/
Merge from Percona Server of the enforced use of a specific storage engine,
authored by Stewart Smith.
Modified to be a session variable, modifiable only by SUPER. Uses a
similar implementation to default_storage_engine.
Show total execution time (r_total_time_ms) for various parts of the
query:
1. time spent in SELECTs
2. time spent reading rows from storage engines
#2 currently gets the data from P_S.
Adjust the configuration options, as discussed on the
maria-developers@ mailing list.
The option to hint a transaction to not be replicated in parallel is
now called @@skip_parallel_replication, consistent with
@@skip_replication.
And the --slave-parallel-mode option is now simplified to just one of
the following values:
none
minimal
conservative
optimistic
aggressive
This reflects successively harder efforts to find opportunities to run
things in parallel on the slave. It allows extending the server with
more automatic heuristics in the future without having to introduce a
new configuration option for each one.