When rolling back and retrying a transaction in parallel replication, don't
release the domain ownership (for --gtid-ignore-duplicates) as part of the
rollback. Otherwise another master connection could grab the ownership and
double-apply the transaction in parallel with the retry.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
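For context, a minimal sketch of the idea with hypothetical names (the real logic
lives in the parallel replication retry path): the rollback done before a retry must
leave the per-domain ownership in place.

  /* Hypothetical types and names; they do not match the real server code. */
  struct domain_state
  {
    const void *owner;   /* master connection currently applying this domain */
  };

  static void trans_rollback(void)
  {
    /* stand-in for rolling back the storage engines */
  }

  /* Roll back a failed attempt so that the transaction can be retried. */
  static void rollback_for_retry(struct domain_state *dom)
  {
    trans_rollback();
    /*
      Deliberately do NOT clear dom->owner here.  If ownership were released
      as part of the rollback, another master connection could grab it and
      apply the same GTID in parallel with our retry, i.e. a double apply.
      Ownership is released only after the retried transaction has committed.
    */
    (void) dom;
  }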
enable ssl + ssl_verify_server_cert in the internal client too
* fix replication tests to disable master_ssl_verify_server_cert
because accounts are passwordless - except rpl.rpl_ssl1
* fix federated/federatedx/connect to disable SSL_VERIFY_SERVER_CERT
because they cannot configure an ssl connection
* fix spider to disable ssl_verify_server_cert, if configuration
says so, as spider _can_ configure an ssl connection
* fix a memory leak in embedded test-connect
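For reference, a sketch of the client-side C API usage this builds on (assuming
MYSQL_OPT_SSL_VERIFY_SERVER_CERT is available, as in the MariaDB client library);
the server's internal client and the engines go through their own wrappers:

  #include <mysql.h>
  #include <stddef.h>

  /* Connect with or without verification of the server certificate.
     Illustrative only; error handling is minimal. */
  static MYSQL *connect_maybe_verified(const char *host, const char *user,
                                       const char *pass, my_bool verify)
  {
    MYSQL *mysql= mysql_init(NULL);
    if (!mysql)
      return NULL;

    /* Turn server certificate verification on or off for this connection.
       Passwordless test accounts and engines that cannot configure SSL
       would pass verify = 0 here. */
    mysql_options(mysql, MYSQL_OPT_SSL_VERIFY_SERVER_CERT, &verify);

    if (!mysql_real_connect(mysql, host, user, pass, NULL, 0, NULL, 0))
    {
      mysql_close(mysql);
      return NULL;
    }
    return mysql;
  }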
Make all system tables in the mysql database use
engine=Aria.
Privilege tables use transactional=1.
Statistics tables use transactional=0, to allow them
to be quickly updated with low overhead.
Help tables also use transactional=0, as these are only
updated at init time.
Other changes:
- The Aria storage engine is now a required engine
- Update comment for Aria tables to reflect their new usage
- Fixed that _ma_reset_trn_for_table() removes an unlocked table
from the transaction's table list. This was needed to allow one
to lock and unlock system tables separately from other
tables, for example when reading a procedure from mysql.proc
(see the sketch after this list).
- Don't give a warning when using transactional=1 for engines
that use transactions. This is both logical and also avoids
warnings/errors when doing an alter of a privilege
table to InnoDB.
- Don't abort on warnings from ALTER TABLE for changes that
would be accepted by CREATE TABLE.
- Newly created Aria transactional tables are marked as not movable
(as they include create_rename_lsn).
- bootstrap.test was changed to kill the original server, as one
can no longer have two servers started at the same time on the
same data directory and data files.
- Disable maria.small_blocksize, as one can no longer change the
Aria block size after the system tables are created.
- Speed up creation of help tables by using lock tables.
- wsrep_sst_resync now also copies Aria redo logs.
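The _ma_reset_trn_for_table() change mentioned above amounts to unlinking one
table from the transaction's list of used tables. A rough sketch with hypothetical
structures (the real code works on Aria's own handler and transaction structures):

  #include <stddef.h>

  struct tbl
  {
    struct tbl *next_in_trn;   /* singly linked list of tables used by a trn */
  };

  struct trn
  {
    struct tbl *used_tables;   /* head of that list */
  };

  /* Unlink one table from the transaction's table list when it is unlocked,
     so that system tables can be locked and unlocked independently of the
     other tables in the statement. */
  static void reset_trn_for_table(struct trn *trn, struct tbl *tab)
  {
    struct tbl **prev= &trn->used_tables;
    for (struct tbl *t= *prev; t; prev= &t->next_in_trn, t= t->next_in_trn)
    {
      if (t == tab)
      {
        *prev= t->next_in_trn;   /* drop it from the list */
        t->next_in_trn= NULL;
        break;
      }
    }
  }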
Intermediate commit.
Update multi_source.gtid_ignore_duplicates test to avoid a sporadic
failure following MDEV-12179 functionality.
The test case manually messes with the mysql.gtid_slave_pos table.
Make sure to clean that up at the end of the test, and suppress the
messages from the server about these changes.
The --gtid-ignore-duplicates option was not working correctly with row-based
replication. When a row event was completed, but before committing, there
was a small window where another multi-source SQL thread could wrongly try
to re-execute the same transaction, without properly ignoring the duplicate
GTID. This would lead to a duplicate key error, an out-of-order GTID error,
or similar.
Thanks to Matt Neth for reporting this and giving an easy way to reproduce
the issue.
The problem was that my_hash_sort didn't properly ignore end-space characters,
so strings that should compare as identical were seen as different strings.
(Ordinary space was handled correctly, but not NBSP.)
This caused duplicate key errors when a heap table was converted to Aria
as part of overflow in GROUP BY.
Fixed by removing all characters that compare as end space when creating
the hash.
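A minimal sketch of the approach, with assumed names: sort_order[] stands for the
collation's weight table and space_weight for the weight of ' '; trailing characters
that sort equal to space (including latin1 NBSP) are skipped before hashing.

  #include <stddef.h>
  #include <stdint.h>

  static void hash_with_end_space_removed(const unsigned char *sort_order,
                                          unsigned char space_weight,
                                          const unsigned char *key, size_t len,
                                          uint32_t *nr1, uint32_t *nr2)
  {
    const unsigned char *end= key + len;

    /* Step back over every trailing character that compares as end space
       (ordinary space, NBSP in latin1, ...), not just the byte 0x20. */
    while (end > key && sort_order[end[-1]] == space_weight)
      end--;

    /* Hash the remaining weights; equal-comparing strings now hash equally. */
    for (; key < end; key++)
    {
      *nr1^= (((*nr1 & 63) + *nr2) * sort_order[*key]) + (*nr1 << 8);
      *nr2+= 3;
    }
  }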
Other things:
- Fixed that --sorted_result also works for errors in mysqltest.
- Speed up hashing by not comparing strings that have different hash values.
- Speed up many my_hash_sort functions by using register variables to calculate the hash instead of dereferencing pointers.
This was previously done for some functions, but not for all.
- Made a macro of the hash function, to simplify the code and to be able to experiment with new hash functions (see the sketch after this list).
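A sketch of what the macro and the register usage look like (illustrative only;
the actual definition lives in strings/strings_def.h):

  #include <stddef.h>

  /* Fold one character weight into the two hash state words. */
  #define MY_HASH_ADD_SKETCH(A, B, value) \
    do { (A)^= ((((A) & 63) + (B)) * (value)) + ((A) << 8); (B)+= 3; } while (0)

  /* Keep the hash state in local variables for the whole loop and write it
     back through the pointers only once, instead of dereferencing nr1/nr2
     for every character. */
  static void my_hash_sort_bytes(const unsigned char *key, size_t len,
                                 unsigned long *nr1, unsigned long *nr2)
  {
    register unsigned long m1= *nr1, m2= *nr2;
    for (const unsigned char *end= key + len; key < end; key++)
      MY_HASH_ADD_SKETCH(m1, m2, (unsigned long) *key);
    *nr1= m1;
    *nr2= m2;
  }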
client/mysqltest.cc:
Fixed that --sorted_result also works for error messages.
mysql-test/r/ctype_partitions.result:
New test to ensure that partitioning by hash works
mysql-test/suite/multi_source/gtid.result:
Updated result
mysql-test/suite/multi_source/gtid.test:
Test that --sorted_result works for error messages
mysql-test/suite/multi_source/gtid_ignore_duplicates.result:
Updated result
mysql-test/suite/multi_source/gtid_ignore_duplicates.test:
Updated result
mysql-test/suite/multi_source/load_data.result:
Updated result
mysql-test/suite/multi_source/load_data.test:
Updated result
mysql-test/t/ctype_partitions.test:
New test to ensure that partitioning by hash works
storage/heap/hp_write.c:
Speed up hashing by not comparing strings that have different hash values.
storage/maria/ma_check.c:
Extra debug
strings/ctype-bin.c:
Use macro for hash function
strings/ctype-latin1.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-mb.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-simple.c:
Use macro for hash function
Use same variable names as in other my_hash_sort functions.
Update my_hash_sort_simple() to properly remove end space (patch by Bar)
strings/ctype-uca.c:
Ignore duplicated spaces inside strings and end space in my_hash_sort_uca(). This fixes MDEV-6255.
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-ucs2.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-utf8.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/strings_def.h:
Made a macro of the hash function, to simplify code and to be able to experiment with new hash functions.
Before, the arrival of the same GTID twice in multi-source replication
would cause a double apply or, in GTID strict mode, an error.
Keep the behaviour, but add an option --gtid-ignore-duplicates which
allows duplicates to be handled correctly by ignoring all but the first.
This relies on the user ensuring correct configuration so that
sequence numbers are strictly increasing within each replication
domain; then duplicates can be detected simply by comparing the
sequence numbers against what is already applied.
Only one master connection (but possibly multiple parallel worker
threads within that connection) is allowed to apply events within
one replication domain at a time; any other connection that
receives a GTID in the same domain either discards it (if it is
already applied) or waits until the other connection has no more
events to apply.
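A sketch of the decision logic, with hypothetical names and simplified locking
(the real code lives in the GTID slave state handling): each replication domain
tracks the highest applied sequence number and which master connection currently
owns it.

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical per-domain state; the names do not match the server code. */
  struct domain_gtid_state
  {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    uint64_t last_applied_seq_no;  /* highest seq_no already applied */
    const void *owner;             /* connection applying this domain, or NULL */
  };

  /* Called when a master connection receives a GTID in this domain.
     Returns true if this connection should apply the event group,
     false if it is a duplicate that must be ignored. */
  static bool gtid_should_apply(struct domain_gtid_state *dom,
                                const void *connection, uint64_t seq_no)
  {
    bool apply;
    pthread_mutex_lock(&dom->lock);
    /* Wait while some other connection owns the domain. */
    while (dom->owner != NULL && dom->owner != connection)
      pthread_cond_wait(&dom->cond, &dom->lock);

    if (seq_no <= dom->last_applied_seq_no)
      apply= false;                /* duplicate: already applied, ignore */
    else
    {
      dom->owner= connection;      /* take ownership until the commit */
      apply= true;
    }
    pthread_mutex_unlock(&dom->lock);
    return apply;
  }

  /* Called once the transaction has committed (or ownership is given up). */
  static void gtid_domain_release(struct domain_gtid_state *dom, uint64_t seq_no)
  {
    pthread_mutex_lock(&dom->lock);
    if (seq_no > dom->last_applied_seq_no)
      dom->last_applied_seq_no= seq_no;
    dom->owner= NULL;
    pthread_cond_broadcast(&dom->cond);
    pthread_mutex_unlock(&dom->lock);
  }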
Intermediate patch, as a proof-of-concept for testing. The main limitation
is that it is currently only implemented for parallel replication,
@@slave_parallel_threads > 0.