In domain ID based filtering, a flag is used to filter out the events
that belong to a particular replication domain. This flag is set when
the IO thread receives a GTID_EVENT for a domain on the filter list,
and it is reset at the last event in the GTID group.
The reset, however, was wrongly done before the decision whether to
write the event to, or filter it from, the relay log was made. As a
result, the last event in the group always passed through the filter.
Fixed by deferring the reset logic. Also added a test case.
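A hedged sketch of the corrected ordering (all names and types here
are hypothetical, not the actual replication code):

  #include <stdbool.h>

  typedef struct { int type; unsigned domain_id; } Event;
  enum { GTID_EVENT = 1 };

  extern bool domain_on_filter_list(unsigned domain_id);
  extern bool is_last_event_in_group(const Event *ev);

  static bool filtering; /* set while events of a filtered domain arrive */

  /* Returns true if the event should be written to the relay log. */
  bool domain_filter_keep(const Event *ev)
  {
    if (ev->type == GTID_EVENT && domain_on_filter_list(ev->domain_id))
      filtering = true;

    bool keep = !filtering;           /* decide first ...             */

    if (is_last_event_in_group(ev))   /* ... only then reset the flag */
      filtering = false;

    return keep;
  }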
When func1 calls func2 from DBUG_RETURN, dbug shows the trace as
| > func1
| < func1
| > func2
| < func2
because DBUG_LEAVE happens before func2() is called. Change that to
invoke DBUG_LEAVE when a local variable goes out of scope. This uses
the gcc-specific __attribute__((cleanup)).
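A minimal standalone sketch of the idea, with printf standing in for
the real dbug trace machinery:

  #include <stdio.h>

  static void leave_func1(int *unused)
  { (void) unused; printf("| < func1\n"); }

  static int func2(void)
  {
    printf("| > func2\n");
    printf("| < func2\n");
    return 42;
  }

  static int func1(void)
  {
    int guard __attribute__((cleanup(leave_func1))) = 0;
    printf("| > func1\n");
    /* The return expression is evaluated while 'guard' is still in
       scope, so func2's trace nests inside func1's; the cleanup
       handler fires afterwards and prints "< func1" last. */
    return func2();
  }

  int main(void) { return func1() == 42 ? 0 : 1; }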
Don't mark the SEQUENCE engine as XA-capable. The engine never
registers itself for any transaction, so it doesn't matter
whether it is XA-capable or not. The only effect of being
"XA-capable" is breaking the "number of XA-capable engines"
check of TC_LOG_MMAP.
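For context, a hedged sketch of what makes an engine count as
XA-capable (the handlerton fields are real, the function names are
hypothetical): the server counts an engine as two-phase-commit capable
when its handlerton provides a prepare() hook, so leaving the hook
unset keeps the engine out of TC_LOG_MMAP's count.

  #include "handler.h" /* handlerton, from the server source tree */

  static int sequence_init(void *p)
  {
    handlerton *hton = (handlerton *) p;
    hton->create = sequence_create_handler; /* hypothetical name */
    /* hton->prepare is deliberately left NULL: engines without a
       prepare() hook are not counted among the XA-capable engines. */
    return 0;
  }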
The buildbot config invokes BUILD/compile-solaris-amd64 with the
--extra-args=--without-plugin-innodb argument, but
BUILD/compile-solaris-amd64 takes no arguments and does not invoke
configure.pl, so the extra argument is silently lost.
The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause
the parallel replication worker threads. Before starting FTWRL, we let
all worker threads complete their in-progress transactions and then
wait. Then we proceed to take the global read lock. Once the lock is
obtained, we unpause the worker threads. Now commits are blocked from
starting by the global read lock, so the deadlock will no longer occur.
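A hedged sketch of the pause protocol, as plain pthreads pseudocode
(all names hypothetical; the real implementation lives in the parallel
replication worker code):

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t pause_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  pause_cond = PTHREAD_COND_INITIALIZER;
  static bool pause_requested = false;
  static int  workers_in_transaction = 0;

  void ftwrl_pause_workers(void)       /* called before taking the GRL */
  {
    pthread_mutex_lock(&pause_lock);
    pause_requested = true;
    while (workers_in_transaction > 0) /* let in-progress commits finish */
      pthread_cond_wait(&pause_cond, &pause_lock);
    pthread_mutex_unlock(&pause_lock);
  }

  void worker_begin_transaction(void)
  {
    pthread_mutex_lock(&pause_lock);
    while (pause_requested)            /* paused: no new transactions */
      pthread_cond_wait(&pause_cond, &pause_lock);
    workers_in_transaction++;
    pthread_mutex_unlock(&pause_lock);
  }

  void worker_end_transaction(void)
  {
    pthread_mutex_lock(&pause_lock);
    if (--workers_in_transaction == 0)
      pthread_cond_broadcast(&pause_cond);
    pthread_mutex_unlock(&pause_lock);
  }

  void ftwrl_resume_workers(void)      /* called once the GRL is held */
  {
    pthread_mutex_lock(&pause_lock);
    pause_requested = false;
    pthread_cond_broadcast(&pause_cond);
    pthread_mutex_unlock(&pause_lock);
  }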
Before, Seconds_behind_master was updated already when an event was
queued for a worker thread to execute later. This might lead users to
interpret a low value as the slave being almost up to date with the
master, while in reality there might still be many events queued up,
waiting to be applied by the slave.
See https://lists.launchpad.net/maria-developers/msg08958.html for a
more detailed discussion.
While querying INFORMATION_SCHEMA, the check for a table's engine used
only the table name, not the schema name; so, if there were different
rows with the same table name, the wrong one could be retrieved.
The result of the check affected the decision whether the contents of
the table should be dumped and whether the DELAYED option could be used.
Fixed by adding a table_schema clause to the query.
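A hedged illustration of the corrected lookup (variable names are
hypothetical; the point is the added TABLE_SCHEMA condition):

  #include <stdio.h>

  void build_engine_check_query(char *query, size_t size,
                                const char *db, const char *table)
  {
    snprintf(query, size,
             "SELECT ENGINE FROM INFORMATION_SCHEMA.TABLES"
             " WHERE TABLE_NAME = '%s'"
             " AND TABLE_SCHEMA = '%s'", /* the added clause */
             table, db);
  }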
"Not printing the value" with binlog-row-image=minimal.
Merged Rows_log_event::print_verbose_one_row() and
log_event_print_value() with MySQL 5.7.
Added a flush after writing Table_map_log_event() to fix the wrong
order of lines in the output. This causes a lot of changes in some
test results.
Fixed a failure in tests when running optimized code:
- some assert() calls contained code that has to be executed.
Fixed copying of some uninitialized data (fixed a valgrind warning),
which also caused changes in query execution plans.
Fixed by introducing table->rpl_write_set, which holds the columns
that should be stored in the binary log.
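A hedged sketch of the idea (types and helpers here are hypothetical):
the row image written to the binary log is driven by rpl_write_set,
decoupled from the write_set the statement itself uses.

  typedef struct {
    unsigned n_columns;
    const unsigned char *rpl_write_set; /* bitmap: columns to log */
  } TABLE;

  extern int  bit_is_set(const unsigned char *bitmap, unsigned bit);
  extern void pack_column(const TABLE *table, unsigned i);

  void pack_row_for_binlog(const TABLE *table)
  {
    for (unsigned i = 0; i < table->n_columns; i++)
      if (bit_is_set(table->rpl_write_set, i)) /* e.g. minimal image:  */
        pack_column(table, i);                 /* only the needed cols */
  }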
Other things:
- Removed some unneeded references to read_set and write_set, to make
  the code that actually changes read_set and write_set easier to read
  (in opt_range.cc).
- Added error handling for a failed unpack_current_row().
- Added a missing call to mark_columns_needed_for_insert() for
  DELAYED INSERT.
- Removed the unused functions in_read_set() and in_write_set().
- In rpl_record.cc, removed the unused variable 'error'.
The bitmap implementation defines two template Bitmap classes: one
optimized for 64-bit wide bitmaps (the default) and another used for
all other widths.
To optimize the computations, the Bitmap<64> class defines its own
member functions for bitmap operations; the other one, however, relies
on the mysys bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the non-64-bit Bitmap class, intersect() wrongly reset the received
bitmap while initialising a new local bitmap structure (bitmap_init()
clears the bitmap buffer); thus, the received bitmap was getting
cleared.
Fixed by initialising the local bitmap structure over a temporary
buffer and then copying the received bitmap into the initialised
structure.
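A hedged sketch of the fix, simplified to plain C over the mysys API
(the real change is in the template class in sql_bitmap.h):

  #include <string.h>
  #include "my_global.h" /* ulonglong, my_bool                       */
  #include "my_bitmap.h" /* MY_BITMAP, bitmap_init, bitmap_intersect */

  void intersect_with(MY_BITMAP *map, ulonglong map2buff)
  {
    MY_BITMAP map2;
    my_bitmap_map buf[2]; /* temporary 64-bit buffer */

    /* bitmap_init() clears the buffer it is given; pointing it at a
       local buffer leaves the caller's bits intact. */
    bitmap_init(&map2, buf, sizeof(map2buff) * 8, FALSE);
    /* Now copy the received bits into the initialised structure. */
    memcpy(buf, &map2buff, sizeof(map2buff));
    bitmap_intersect(map, &map2);
  }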
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when it is
supplied on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks
have been put in place to trigger a build failure if the MAX_INDEXES
value is greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
  skipping tests whose output differs for different MAX_INDEXES values.
* Introduced include/max_indexes.inc, which gets modified by cmake to
  reflect the MAX_INDEXES value used to build the server. This file
  simply sets an mtr variable '$max_indexes' to the MAX_INDEXES value,
  which is then consumed by the include files introduced above.
* Some tests (or portions of them) that depend on the MAX_INDEXES value
  have been moved to separate test files.
A comment in debian/mariadb-server-10.1.postinst says: "can safely run
on upgrades with existing databases". While this is true, there are a
few reasons not to do that:
- it increases installation time (it has to run the rather heavy mysqld
  multiple times)
- it also increases mysqld downtime
- it may fail if the database has some plugin-specific configs (see
  MDEV-8437)
- there should be no need to run this script on upgrade: upgrades
  should be handled by mysql_upgrade instead
- the RPM post-install scriptlet doesn't call it if the database
  directory exists
Also, postinst is not supposed to create database directories: let
mysql_install_db do that instead.
There was code that was supposed to "catch upgrades from previous
versions where the root password wasn't set". But it is wrong in many
regards:
- it is supposed to be executed against a running server, but at this
  point the server should be down, which makes this code a no-op
- if the above were fixed, the root password would be requested twice
  (the initial root password request plus this one)
- it asks for the password only once, while the "initial root password
  request" asks twice (password + password verification)
- it may give a false positive if unix socket based authentication is
  in effect
Removed this code, since it hadn't worked for quite a while (at least
since mysql-5.1) and nobody cared about it.
There is no strong need to change the password column: the only side
effect is that 4.0 -> 10.1 upgrades may get root/debian-sys-maint
passwords stored in the old format. This should be perfectly
acceptable, since all passwords at this point are stored in the old
format.
Removed the redundant attempt to create the mysql.plugin table:
- the original code was supposed to INSTALL some plugins:
    INSERT INTO plugin VALUES ('innodb', 'ha_innodb.so'),
    ('federated', 'ha_federated.so'), ('blackhole', 'ha_blackhole.so'),
    ('archive', 'ha_archive.so');
- the original code was supposed to fail if mysql.plugin existed: the
  query sequence is supposed to be aborted if the CREATE TABLE fails
  due to an already existing table, in which case the admin might
  already have chosen to remove one or more plugins
- mysql.plugin must have been created by the preceding mysql_install_db
  anyway
While guessing the IP address, if bind-address is INADDR_ANY, mysqld
should proceed with the address specified via wsrep_node_address or
use one from the network interfaces.
Patch contributed by darkain (pull#115).
Analysis: lengths which are not UNIV_SQL_NULL but are bigger than the
following number indicate that a field contains a reference to an
externally stored part of the field in the tablespace. The length
field then contains the sum of the following flag and the locally
stored length. This was incorrectly set to

  #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_MAX)

when it should be

  #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)

Additionally, we need to disable support for page sizes greater than
16K for row-compressed tables, because a compressed page directory
entry reserves 14 bits for the start offset and 2 bits for flags. This
limits the uncompressed page size to 16K (a 14-bit offset can address
at most 2^14 = 16384 bytes). To support larger pages, the page
directory entry would need to be larger.