The bug was that ReplSemiSyncMaster::commitTrx() was waiting on a condition for
the state to change, but did not take into account that semi-sync could have been
disabled during the wait.
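A minimal sketch of the corrected wait loop (the struct and function names here are
illustrative, not the actual ReplSemiSyncMaster members): besides waiting for the
acknowledgement, the committer re-checks on every wakeup whether semi-sync is still
enabled, so that disabling it during the wait releases the waiter instead of leaving
it blocked until the timeout.

  #include <cerrno>
  #include <ctime>
  #include <pthread.h>

  struct Semisync_wait
  {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            enabled;   // is the semi-sync master currently enabled?
    bool            acked;     // has a slave acknowledged this transaction?
  };

  // Returns true if the transaction was acknowledged, false if we stopped
  // waiting because semi-sync was disabled or the timeout expired.
  static bool wait_for_ack(Semisync_wait *w, const struct timespec *abstime)
  {
    pthread_mutex_lock(&w->lock);
    while (w->enabled && !w->acked)           // re-check 'enabled' on each wakeup
    {
      if (pthread_cond_timedwait(&w->cond, &w->lock, abstime) == ETIMEDOUT)
        break;
    }
    bool acked= w->acked;
    pthread_mutex_unlock(&w->lock);
    return acked;
  }

Whatever code disables semi-sync must broadcast the same condition, otherwise a
sleeping committer only notices the change on its next timeout-driven wakeup.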
The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause the
parallel replication worker threads. Before starting FTWRL, we let all worker
threads complete their in-progress transactions and then make them pause. Then we
proceed to take the global read lock. Once the lock is obtained, we unpause the
worker threads. Now commits are blocked from starting by the global read lock
itself, so the deadlock no longer occurs.
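A self-contained sketch of the pause mechanism (all names are illustrative; the real
patch works through the parallel replication thread pool): workers about to start a
transaction wait while the gate is paused, pause_workers() drains the in-progress
transactions before FTWRL takes the global read lock, and resume_workers() is called
once the lock is held.

  #include <pthread.h>

  struct Worker_gate
  {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            paused;   // set while FTWRL is being taken
    int             active;   // workers currently applying a transaction
  };

  static Worker_gate gate=
  { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false, 0 };

  void worker_begin_trx()            // called by a worker before each commit
  {
    pthread_mutex_lock(&gate.lock);
    while (gate.paused)              // blocked only while the gate is paused
      pthread_cond_wait(&gate.cond, &gate.lock);
    gate.active++;
    pthread_mutex_unlock(&gate.lock);
  }

  void worker_end_trx()              // called by a worker after each commit
  {
    pthread_mutex_lock(&gate.lock);
    gate.active--;
    pthread_cond_broadcast(&gate.cond);
    pthread_mutex_unlock(&gate.lock);
  }

  void pause_workers()               // step 1: drain in-progress transactions
  {
    pthread_mutex_lock(&gate.lock);
    gate.paused= true;
    while (gate.active > 0)
      pthread_cond_wait(&gate.cond, &gate.lock);
    pthread_mutex_unlock(&gate.lock);
  }

  void resume_workers()              // step 3: after the global read lock is held
  {
    pthread_mutex_lock(&gate.lock);
    gate.paused= false;
    pthread_cond_broadcast(&gate.cond);
    pthread_mutex_unlock(&gate.lock);
  }

With this ordering, once resume_workers() runs any new commit is stopped by the
global read lock itself rather than by the gate, which is what removes the deadlock.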
Before, Seconds_behind_master was updated as soon as an event was queued for a
worker thread to execute later. This could lead users to interpret a low value as
the slave being almost up to date with the master, while in reality there might
still be many events queued up, waiting to be applied by the slave.
See https://lists.launchpad.net/maria-developers/msg08958.html for
more detailed discussions.
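A simplified sketch of the new semantics (the real computation also subtracts the
measured clock difference between master and slave, which is omitted here):
Seconds_behind_master is derived from last_master_timestamp, and after this change
that timestamp is only advanced when an event has actually been executed, not when
it is merely queued for a worker thread.

  #include <ctime>

  struct Lag_sketch
  {
    time_t last_master_timestamp;   // timestamp of the last *applied* event
  };

  // Called only after the SQL/worker thread has executed an event; previously
  // the equivalent update already happened when the event was queued.
  void on_event_applied(Lag_sketch *rli, time_t event_master_time)
  {
    rli->last_master_timestamp= event_master_time;
  }

  long seconds_behind_master(const Lag_sketch *rli)
  {
    if (rli->last_master_timestamp == 0)
      return 0;                                   // nothing applied yet
    return (long) (time(nullptr) - rli->last_master_timestamp);
  }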
While querying INFORMATION_SCHEMA, the check for a table's engine used only the
table name, not the schema name; so if there were different rows with the same
table name, the wrong one could be retrieved.
The result of the check affected the decision whether the contents of the table
should be dumped and whether the DELAYED option could be used.
Fixed by adding a clause for table_schema to the query.
Patch backported from MariaDB 10.1
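A sketch of the corrected lookup (not the actual mysqldump code, and without the
quoting/escaping the real tool performs): the engine check now filters on
TABLE_SCHEMA as well as TABLE_NAME, so a same-named table in another database can
no longer be picked up.

  #include <cstddef>
  #include <cstdio>

  static void build_engine_check_query(char *buf, size_t buflen,
                                       const char *db, const char *table)
  {
    snprintf(buf, buflen,
             "SELECT ENGINE FROM INFORMATION_SCHEMA.TABLES"
             " WHERE TABLE_NAME = '%s' AND TABLE_SCHEMA = '%s'",
             table, db);
  }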
- Ensure that we wait with cleanup() until the slave thread has stopped.
- Added signal_thd_deleted() to signal close_connections() that all THDs have been
freed (sketched below).
Other things:
- Removed unneeded calls to THD_CHECK_SENTRY() when calling 'delete thd'.
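A sketch of the shutdown handshake (the counter, lock and condition names are
illustrative, and the real signal_thd_deleted() body may differ): every freed THD
decrements a global count, and once it reaches zero the waiter inside
close_connections() is woken up, so shutdown cleanup cannot proceed while a thread
still owns a THD.

  #include <pthread.h>

  static pthread_mutex_t LOCK_thd_count= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  COND_thd_count= PTHREAD_COND_INITIALIZER;
  static unsigned int    thd_count= 0;        // number of live THD objects

  void signal_thd_deleted()                    // call right after 'delete thd'
  {
    pthread_mutex_lock(&LOCK_thd_count);
    if (thd_count > 0 && --thd_count == 0)
      pthread_cond_broadcast(&COND_thd_count);
    pthread_mutex_unlock(&LOCK_thd_count);
  }

  void wait_until_all_thd_deleted()            // the waiting side in close_connections()
  {
    pthread_mutex_lock(&LOCK_thd_count);
    while (thd_count > 0)
      pthread_cond_wait(&COND_thd_count, &LOCK_thd_count);
    pthread_mutex_unlock(&LOCK_thd_count);
  }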
"Not printing the value" with binlog-row-image=minimal
Merged Rows_log_event::print_verbose_one_row() and log_event_print_value()
with MySQL 5.7
Added a flush after writing Table_map_log_event() to fix the wrong order of lines
in the output. This causes a lot of changes in some test results.
Fixed test failures when running optimized code
- Some assert() calls contained code that had to be executed even in release builds
(illustrated below)
Fixed copying of some uninitialized data (fixed a valgrind warning)
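An illustration (not the actual code) of the assert() problem: in an optimized
build assert() expands to nothing, so any side effect placed inside it is silently
skipped, and the test only fails once the missing side effect matters.

  #include <cassert>

  // Stand-in for a call whose side effect is needed even in release builds.
  static int consume_row(int *rows_left)
  {
    return (*rows_left)-- > 0 ? 0 : -1;
  }

  static void wrong(int *rows_left)
  {
    assert(consume_row(rows_left) == 0);   // the whole call vanishes with -DNDEBUG
  }

  static void right(int *rows_left)
  {
    int res= consume_row(rows_left);       // always executed
    (void) res;                            // avoid an "unused" warning in release
    assert(res == 0);                      // only the check is compiled out
  }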
The old code decided which columns to write to the binary log based on the tables'
read_set and write_set, and marking extra columns there for binary logging could
cause changes in query execution plans.
Fixed by introducing table->rpl_write_set, which holds which columns should be
stored in the binary log (see the sketch after this entry).
Other things:
- Removed some unneeded references to read_set and write_set so that the code that
actually changes read_set and write_set is easier to read (in opt_range.cc)
- Added error handling for a failed unpack_current_row()
- Added a missing call to mark_columns_needed_for_insert() for DELAYED INSERT
- Removed the unused functions in_read_set() and in_write_set()
- In rpl_record.cc, removed the unused variable 'error'
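A simplified sketch of the rpl_write_set idea (std::bitset stands in for the
server's column bitmaps, and the marking function is illustrative): the columns
destined for the binary log are collected in their own bitmap, so read_set and
write_set keep reflecting only what the statement itself needs.

  #include <bitset>

  struct Table_sketch
  {
    std::bitset<64> read_set;        // columns the statement reads
    std::bitset<64> write_set;       // columns the statement writes
    std::bitset<64> rpl_write_set;   // columns to store in the binary log
  };

  // With binlog_row_image=MINIMAL only the changed columns (plus the key columns
  // needed to find the row) are marked here; write_set stays untouched.
  void mark_column_for_binlog(Table_sketch *table, unsigned field_index)
  {
    table->rpl_write_set.set(field_index);
  }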
Patch from Daniel Black:
- Change the charset of mysql.column_stats.{min_value, max_value} from
utf8_bin varchar to varbinary
- Adjust the code that saves/reads the data accordingly.
- Also provide an upgrade statement in mysql_system_tables_fix.sql
The bitmap implementation defines two template Bitmap classes: one is optimized
for 64-bit-wide bitmaps (the default), while the other is used for all other
widths.
To optimize the computations, the Bitmap<64> class defines its own member functions
for bitmap operations; the other one, however, relies on mysys' bitmap
implementation (mysys/my_bitmap.c).
Issue 1:
In the non-64-bit Bitmap class, intersect() wrongly reset the received bitmap
while initialising a new local bitmap structure (bitmap_init() clears the bitmap
buffer), so the received bitmap was getting cleared.
Fixed by initialising the local bitmap structure with a temporary buffer and then
copying the received bitmap into the initialised structure.
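A simplified illustration of Issue 1 and its fix (this is not the real Bitmap<N> /
my_bitmap code): the broken version initialised the local structure directly over
the received bitmap's buffer, which zeroed it; the fixed version initialises the
local structure over a temporary buffer and copies the operand in before
intersecting.

  #include <cstdint>
  #include <cstring>

  struct Bitmap_sketch
  {
    uint32_t *bits;
    unsigned  words;
  };

  // Mimics bitmap_init(): attaching a structure to a buffer clears that buffer.
  static void init_bitmap(Bitmap_sketch *map, uint32_t *buf, unsigned words)
  {
    map->bits= buf;
    map->words= words;
    memset(buf, 0, words * sizeof(uint32_t));
  }

  // Fixed intersect(): never initialise a bitmap structure over the caller's
  // buffer; use a temporary buffer and copy the operand into it first.
  static void intersect(Bitmap_sketch *self, const uint32_t *other_bits,
                        unsigned other_words)
  {
    uint32_t *tmp_buf= new uint32_t[other_words];
    Bitmap_sketch tmp;
    init_bitmap(&tmp, tmp_buf, other_words);                // clears tmp only
    memcpy(tmp.bits, other_bits, other_words * sizeof(uint32_t));
    for (unsigned i= 0; i < self->words; i++)
      self->bits[i]&= (i < tmp.words) ? tmp.bits[i] : 0;    // AND, zero-pad
    delete[] tmp_buf;
  }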
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a compilation
failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied on the
command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have been put in place to
trigger a build failure if the MAX_INDEXES value is greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.
* Introduced include/max_indexes.inc, which gets modified by cmake to reflect the
MAX_INDEXES value used to build the server. This file simply sets an mtr variable
'$max_indexes' to the MAX_INDEXES value, which is then consumed by the include file
introduced above.
* Portions of some tests that depend on the MAX_INDEXES value have been moved to
separate test files.
pcre/CMakeLists.txt defines CMAKE_DEBUG_POSTFIX, which causes a different library
name in Windows debug builds (pcred.lib rather than pcre.lib).
However, the MERGE_LIBRARIES macro that is used to create the static embedded
library (out of other static libraries) cannot handle per-configuration library
names. Thus the build fails with "pcre.lib not found".
The fix is to remove the unnecessary CMAKE_DEBUG_POSTFIX.
Maintainer: Michal Hrusecky <Michal.Hrusecky@opensuse.org>
(modified by O. Bertrand --> adding and using the XSTR macro)
modified: storage/connect/tabxml.cpp
MDEV-8938 Server Crash on Update with joins
Do the unique table check after setup_fields() of UPDATE, because the unique table
check can materialize a table and we do not need field resolving after
materialization.
Fix build failures caused by new C runtime library
- isnan, snprintf, struct timespec are now defined; attempts to redefine them lead
to errors
- P_tmpdir, tzname are no longer defined
- lfind() and lsearch() in lf_hash.c had to be renamed; their declarations conflict
with C runtime functions of the same name declared in a header included by stdlib.h
Also fix a couple of annoying warnings:
- remove #define NOMINMAX from config.h to avoid "redefined" compiler warnings
(NOMINMAX is already in the compile flags)
- disable the incremental linker in Debug as well (the feature is not used much and
the compiler crashes often)
Also simplify package building with Wix; require Wix 3.9 or later
(VS2015 is not compatible with old Wix 3.5/3.6).
When compiled with "-Wl,-Bsymbolic-functions" flags
(e.g. when building a .deb package on Ubuntu) with TokuDB and jemalloc,
mysqld crashed in toku_get_processor_frequency_cpuinfo() when
free()-ing a buffer returned by getline().
getline() uses libc malloc() internally, while free() is aliased
to jemalloc's free() in this configuration.
Fixed by not using getline() and reading into a static buffer instead.
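A sketch of the workaround (simplified; the actual TokuDB fix reads /proc/cpuinfo
into a static buffer): with fgets() into a preallocated buffer no pointer is handed
back for the caller to free(), so no allocation ever crosses the libc-malloc /
jemalloc-free boundary.

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  static unsigned long long cpu_frequency_khz()
  {
    static char line[512];                    // static buffer, nothing to free()
    unsigned long long freq= 0;
    FILE *f= fopen("/proc/cpuinfo", "r");
    if (!f)
      return 0;
    while (fgets(line, sizeof(line), f))      // unlike getline(), hands us no malloc'd buffer
    {
      if (strncmp(line, "cpu MHz", 7) == 0)
      {
        const char *colon= strchr(line, ':');
        if (colon)
          freq= (unsigned long long) (atof(colon + 1) * 1000.0);
        break;
      }
    }
    fclose(f);
    return freq;
  }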
A comment in debian/mariadb-server-10.1.postinst says: "can safely run on
upgrades with existing databases". While this is true, there are a few reasons not
to do that:
- it increases installation time (it has to run the rather heavy mysqld multiple times)
- it also increases mysqld downtime
- it may fail if the database has some plugin-specific configs (see MDEV-8437)
- there should be no need to run this script on upgrade: upgrades should be handled
by mysql_upgrade
- the RPM postin doesn't call it if the database directory exists
Also, postinst is not supposed to create database directories: let
mysql_install_db do that instead.
There was code that was supposed to "catch upgrades from previous versions where
the root password wasn't set". But it is wrong in many regards:
- it is supposed to be executed against a running server, but at this point the
server should be down, which makes this code a no-op
- if the above were fixed, the root password would be requested twice (the initial
root password request + this one)
- it asks for a password only once, while the "initial root password request" asks
twice (password + password verification)
- it may give a false positive if unix socket based authentication is in effect
Removed this code since it hasn't worked for quite a while (at least since
mysql-5.1) and nobody cared about it.
There is no strong need to change the password column: the only side effect is
that 4.0 -> 10.1 upgrades may get root/debian-sys-maint passwords stored in the old
format. This should be perfectly acceptable, since all passwords at this point
are stored in the old format.
Removed the redundant attempt to create the mysql.plugin table:
- the original code was supposed to INSTALL some plugins:
INSERT INTO plugin VALUES ('innodb', 'ha_innodb.so'),
('federated', 'ha_federated.so'), ('blackhole', 'ha_blackhole.so'),
('archive', 'ha_archive.so');
- the original code was supposed to fail if mysql.plugin exists:
The query sequence is supposed to be aborted if the CREATE TABLE fails due
to an already existing table, in which case the admin might already have
chosen to remove one or more plugins.
- mysql.plugin must have been created by the preceding mysql_install_db anyway
During the process of guessing the IP address, if bind-address is INADDR_ANY,
mysqld should proceed with the address specified via wsrep_node_address or use
one from the network interfaces.
Patch contributed by darkain (pull#115).