The extra GTID statement of a replicated transaction on the slave failed
to specify the engine-specific gtid_slave_pos table name correctly. In case
lower-case-table-names > 0, the InnoDB table name was generated as in the
lower-case-table-names=0 case, that is, in mixed case.
In the rpl.rpl_mdev12179 test this triggered a failure to DROP the table
on Windows, because the InnoDB table handle was not closed:
InnoDB: Waited XYZ seconds for ref-count on table: `mysql`.`gtid_slave_pos_innodb`
The handle stayed open because the table was registered twice in the table
cache, once for its lower-case and once for its mixed-case name; the
DROP TABLE handler closed only one of the two cache items and left the
second one active.
(On Linux the failure occurs earlier, at the attempt to open the expected
lower-cased table:
Last_Error: Error during XID COMMIT: failed to update GTID state in mysql.gtid_slave_pos: 1146: Table 'mysql.gtid_slave_pos_InnoDB' doesn't exist
and, as the message shows, the table name is not in the right case.)
Fixed by consulting lower-case-table-names when the engine-specific
gtid_slave_pos table is created.
Note that a table created under one lower-case-table-names value will not
be recognized when the option is later changed to a different value.
In 10.4 a follow-up patch will lowercase auto-created gtid_slave_pos tables
right at their creation; the current 10.3 patch issues a warning instead.
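As a quick sanity check (a sketch; the option value and engine are
illustrative), the per-engine table should appear under its lower-cased
name when the option is set:

    -- assumes a slave running with lower_case_table_names=1
    SHOW TABLES FROM mysql LIKE 'gtid_slave_pos%';
    -- expected: gtid_slave_pos, gtid_slave_pos_innodb (all lower case),
    -- not gtid_slave_pos_InnoDB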
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because the process list db wasn't always
  correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed not used version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
Semisync is made a static part of the server instead of a plugin, and
specifically the ack receiving functionality; its functions are invoked
at the same points as RUN_HOOKS.
The RUN_HOOKS calls and the observer interface remain for now, to be
removed by a later patch.
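With semisync static in the server, it can be enabled directly through
system variables; a minimal sketch, assuming the rpl_semi_sync_* variable
names exposed by the built-in implementation:

    -- on the master
    SET GLOBAL rpl_semi_sync_master_enabled = ON;
    -- on the slave; restart the IO thread so the change takes effect
    SET GLOBAL rpl_semi_sync_slave_enabled = ON;
    STOP SLAVE IO_THREAD;
    START SLAVE IO_THREAD;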
Todo:
React to the killed status in repl_semisync_master.wait_after_sync().
Currently Repl_semi_sync_master::commit_trx does not check the killed status.
A few bugfixes that are present in MySQL were found, and it is unclear
whether/how they are covered here. They include:
Bug#15985893: GTID SKIPPED EVENTS ON MASTER CAUSE SEMI SYNC TIME-OUTS
Bug#17932935 CALLING IS_SEMI_SYNC_SLAVE() IN EACH FUNCTION CALL
HAS BAD PERFORMANCE
Bug#20574628: SEMI-SYNC REPLICATION PERFORMANCE DEGRADES WITH A HIGH NUMBER OF THREADS
Part of MDEV-13073 AliSQL Optimize performance of semisync
The idea is to use a dedicated lock for detecting whether there is new data
in the master's binary log, instead of the overused LOCK_log.
Changes:
- Use dedicated COND variables for the relay and binary log signaling.
  This was needed as the old 'update_cond' variable was used with
  different mutexes, which could cause deadlocks.
- Relay log uses now COND_relay_log_updated and LOCK_log
- Binary log uses now COND_bin_log_updated and LOCK_binlog_end_pos
- Renamed signal_cnt to relay_signal_cnt (as we now have two signals)
- Added some missing error handling in MYSQL_BIN_LOG::new_file_impl()
- Reformatted some old-style comments
- Renamed m_key_LOCK_binlog_end_pos to key_LOCK_binlog_end_pos
- Changed 'signal_update()' to update_binlog_end_pos() which works for
both relay and binary log
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed many includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicate includes of my_sys.h
- Replaced include my_config.h with my_global.h
Intermediate commit.
Fix compilation failure with different my_atomic implementation.
The my_atomic_loadptr* functions take void ** as their first argument, so
variables updated with them need to be void * (it is not legal C to cast
some_type ** to void **).
Intermediate commit.
Move the discovery of mysql.gtid_slave_pos* tables into the SQL thread.
This avoids doing things like opening tables and scanning the mysql
schema for tables inside of the START SLAVE statement, which might
interact badly with existing transaction or table locks.
(Even though START SLAVE is documented to implicitly commit any active
transactions, this appears not to be the case in current code).
Table discovery fits naturally in the SQL thread init code, next to
the loading of mysql.gtid_slave_pos state.
Intermediate commit.
Implement auto-creation of mysql.gtid_slave_pos* tables with needed engines,
if listed in --gtid-pos-auto-engines.
Uses an asynchronous approach to minimise locking overhead.
The list of available tables is extended with a flag. Extra entries are
added for --gtid-pos-auto-engines tables that do not exist yet, marked as
not existing but ready for auto-creation.
If record_gtid() needs a table marked for auto-creation, it sends a request
to the slave background thread to create the table, and continues to use an
existing table for the current and immediately following transactions.
As soon as the slave background thread has made the new table available, it
will be used for all subsequent relevant transactions in record_gtid().
This asynchronous approach also avoids a lot of complex issues around trying
to do DDL in the middle of an on-going transaction.
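A sketch of the intended usage, assuming the --gtid-pos-auto-engines
option added here (the engine list is illustrative):

    -- server started with: --gtid-pos-auto-engines=innodb
    SHOW VARIABLES LIKE 'gtid_pos_auto_engines';
    -- after the first replicated transaction that updates InnoDB, the
    -- slave background thread auto-creates the per-engine table:
    SHOW TABLES FROM mysql LIKE 'gtid_slave_pos%';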
Intermediate commit.
This commit implements that record_gtid() selects a gtid_slave_posXXX table
with a storage engine already in use by current transaction, if any.
The default table mysql.gtid_slave_pos is used if no match can be found on
storage engine, or for GTID position updates with no specific storage
engine.
Table discovery of mysql.gtid_slave_pos* happens on initial GTID state load
as well as on every START SLAVE. Some effort is made to make this possible
without additional locking. New tables are added using lock-free atomics.
Removing tables requires stopping all slaves first. A warning is given in
the error log when a table is removed but a non-stopped slave still has a
reference to it.
If multiple mysql.gtid_slave_posXXX tables with the same storage engine
exist, one of them is chosen arbitrarily to be used, with a warning in the
error log. GTID data from all tables is still read, but only one among
redundant tables with the same storage engine will be updated.
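For an engine not covered by auto-creation, the DBA can add a per-engine
table manually; a sketch, with the engine name purely illustrative:

    CREATE TABLE mysql.gtid_slave_pos_rocksdb LIKE mysql.gtid_slave_pos;
    ALTER TABLE mysql.gtid_slave_pos_rocksdb ENGINE=ROCKSDB;
    -- picked up on the next START SLAVE, no server restart needed
    STOP SLAVE;
    START SLAVE;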
Intermediate commit.
For each GTID recorded in mysql.gtid_slave_pos, keep track of which
engine the update was made in.
This will be later used to know which rows can be deleted in the table
of a given engine.
Intermediate commit.
Keep track of which mysql.gtid_slave_posXXX tables are available for each
engine, by searching for all tables in the mysql schema with names that
start with "gtid_slave_pos".
The list is computed at server start when the GTID position is loaded, and
it is re-computed on every START SLAVE command. This way, the DBA can
manually add a table for a new engine, and it will be automatically picked
up on next START SLAVE, so a full server restart is not needed.
The list is not yet actually used in the code.
Intermediate commit.
On server start, look for and read all tables mysql.gtid_slave_pos* to
restore the GTID position.
A simple test case moves the data to a new
mysql.gtid_slave_pos_innodb table and verifies that the new table is
read at server start.
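The data move in the test can be sketched as follows (the statements
approximate the test, they are not quoted from it):

    CREATE TABLE mysql.gtid_slave_pos_innodb LIKE mysql.gtid_slave_pos;
    ALTER TABLE mysql.gtid_slave_pos_innodb ENGINE=InnoDB;
    INSERT INTO mysql.gtid_slave_pos_innodb SELECT * FROM mysql.gtid_slave_pos;
    TRUNCATE TABLE mysql.gtid_slave_pos;
    -- restart the server; the GTID position must now be restored from
    -- mysql.gtid_slave_pos_innodb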
Protection added to reopen_file() and new_file_impl().
Without this we could get an assert in fn_format() because name == 0:
the file was closed and the name reset while, at the same time,
new_file_impl() was called.
Description:
============
If you have a relay log index file that has ended up referencing
some relay log files that do not exist, then RESET SLAVE
ALL is not enough to get back to a clean state.
Analysis:
=========
In the bug scenario the slave server is stopped and
some of the relay logs have been deleted, but the relay log index
file has not been updated.
During slave server restart, replication initialization fails
because some of the required relay logs are missing. The user
executes RESET SLAVE / RESET SLAVE ALL to start with a
clean slave. Per the documentation, the RESET SLAVE command
clears the master info and relay log info repositories,
deletes all the relay log files, and starts a new relay log
file. But in a scenario where the slave server's
Relay_log_info object is not initialized, the slave will not
purge the existing relay logs. Hence the index file
remains in a bad state, and the user cannot start
the slave until these files are cleared.
Fix:
===
RESET SLAVE/RESET SLAVE ALL commands should do the cleanup
even in a scenario where Relay_log_info object
initialization failed.
Backported a flag named 'error_on_rli_init_info', which is
required to identify the slave's Relay_log_info object
initialization failure. This flag exists in MySQL-5.6
onwards as part of the BUG#14021292 fix.
During RESET SLAVE / RESET SLAVE ALL execution this flag
indicates that Relay_log_info initialization failed.
In that case, open the relay log index and relay log files
and do the required cleanup.
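With the fix, recovery from the bug scenario reduces to the following
(a sketch; the connection options are illustrative):

    -- replication initialization failed because relay logs listed in
    -- the index file are missing
    RESET SLAVE ALL;   -- now purges relay logs even when rli init failed
    CHANGE MASTER TO MASTER_HOST='master1', MASTER_USER='repl';
    START SLAVE;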
This has no functional changes, but it helps avoid merge problems from 10.0
to 10.1. In 10.0, code that checks for parallel replication uses
opt_slave_parallel_threads > 0, but this check needs to be
mi->using_parallel() in 10.1. By using the same check in 10.0 (with
unchanged semantics), merge problems to 10.1 are avoided.
Add some event types for the compressed events:
QUERY_COMPRESSED_EVENT,
WRITE_ROWS_COMPRESSED_EVENT_V1,
UPDATE_ROWS_COMPRESSED_EVENT_V1,
DELETE_ROWS_COMPRESSED_EVENT_V1,
WRITE_ROWS_COMPRESSED_EVENT,
UPDATE_ROWS_COMPRESSED_EVENT,
DELETE_ROWS_COMPRESSED_EVENT.
These events inherit from the corresponding uncompressed events; one of
their constructors and the write function are overridden to handle the
decompression and compression. Everything else is exactly the same.
On the slave, the IO thread uncompresses and converts the events when it
receives them from the master, so the SQL and worker threads can stay
unchanged.
zlib is currently used as the compression algorithm; other algorithms may
be supported in the future.
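On the master the feature is controlled through system variables; a
minimal sketch, assuming the log_bin_compress and
log_bin_compress_min_len variables this feature introduces:

    SET GLOBAL log_bin_compress = ON;
    -- only events at least this many bytes long are compressed
    SET GLOBAL log_bin_compress_min_len = 256;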
Merge feature into 10.2 from feature branch.
Delayed replication adds an option
CHANGE MASTER TO master_delay=<seconds>
Replication will then delay applying events by that many
seconds. This creates a replication slave that reflects the state of
the master at some time in the past.
Feature is ported from MySQL source tree.
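For example (the delay value is illustrative):

    STOP SLAVE;
    CHANGE MASTER TO MASTER_DELAY = 3600;   -- apply events one hour late
    START SLAVE;
    -- SHOW SLAVE STATUS then reports the configured SQL_Delay and, while
    -- an event is being held back, the SQL_Remaining_Delay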
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The original MySQL patch left some refactoring todo's, possibly
because of known conflicts with other parallel development (like
info-repository feature perhaps).
This patch fixes those todos/refactorings.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Initial merge of delayed replication from MySQL git.
The code from the initial push into MySQL is merged, and the
associated test case passes. A number of tasks are still pending:
1. Check full test suite run for any regressions or .result file updates.
2. Extend the feature to also work for parallel replication.
3. There are some todo-comments about future refactoring left from
MySQL, these should be located and merged on top.
4. There are some later related MySQL commits, these should be checked
and merged. These include:
e134b9362ba0b750d6ac1b444780019622d14aa5
b38f0f7857c073edfcc0a64675b7f7ede04be00f
fd2b210383358fe7697f201e19ac9779879ba72a
afc397376ec50e96b2918ee64e48baf4dda0d37d
5. The testcase from MySQL relies heavily on sleep and timing for
testing, and seems likely to sporadically fail on heavily loaded test
servers in buildbot or distro build farms.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The issue was that when running with valgrind, the wait in
master_pos_wait() was not long enough.
This patch also fixes two other failures that could affect rpl_mdev6020:
- check_if_conflicting_replication_locks() didn't properly check domains
- 'did_mark_start_commit' was set after the signals to other threads were
  sent, which could cause the variable to be read too early.
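Since master_pos_wait() takes an explicit timeout, tests can pass one
generous enough for valgrind runs; a sketch with hypothetical file and
position values:

    -- wait up to 300 seconds for the slave to reach the given master
    -- binlog position; returns -1 on timeout, NULL on error
    SELECT MASTER_POS_WAIT('master-bin.000001', 4, 300);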
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blobs can now have default values
- Added new system variable "check_constraint_checks", to turn off
  CHECK constraint checking if needed.
- Removed some engine independent tests in suite vcol to only test myisam
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has been assigned a value,
  instead of setting the HAS_EXPLICIT_VALUE bit in the field flags
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated their cached value in fix_fields
  to do this in val() or get_date() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
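A sketch of several of the new capabilities in one table (names and
values are illustrative):

    CREATE TABLE t1 (
      a INT DEFAULT 1+1 NOT NULL,          -- expression default, NOT NULL lookahead
      b BLOB DEFAULT 'initial',            -- blobs can now have default values
      c INT,
      CONSTRAINT c_positive CHECK (c > 0)  -- named CHECK constraint
    );
    ALTER TABLE t1 DROP CONSTRAINT c_positive;
    SET check_constraint_checks = 0;       -- turn off CHECK checking if needed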
mysqld maintains a list of TABLE objects in the THD for all temporary
tables created within a session; each table is represented by a
TABLE object.
A query referencing a particular temporary table more than once,
however, failed with an ER_CANT_REOPEN_TABLE error,
because the TABLE_SHARE was allocated together with the TABLE,
so temporary tables always had exactly one TABLE per TABLE_SHARE.
This patch lifts that restriction by separating the TABLE and
TABLE_SHARE objects, storing the TABLE_SHAREs for temporary
tables in a list in the THD, and the TABLEs in a list within their
respective TABLE_SHAREs.
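With the restriction lifted, a query may refer to the same temporary
table more than once; a minimal sketch:

    CREATE TEMPORARY TABLE tmp (id INT);
    INSERT INTO tmp VALUES (1),(2);
    -- previously failed with ER_CANT_REOPEN_TABLE; now works
    SELECT * FROM tmp AS a JOIN tmp AS b ON a.id = b.id;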