Commit graph

2408 commits

Author SHA1 Message Date
Sergey Vojtovich
b6c175aad4 MDEV-6039 - WebScaleSQL patches
Preserve CLIENT_REMEMBER_OPTIONS flag for compressed connections

Code cleanup: removed reference to CLIENT_REMEMBER_OPTIONS from server
code. This flag is ignored in MariaDB.
2014-06-18 12:12:43 +04:00
unknown
081926f3d8 MDEV-6188: master_retry_count (ignored if disconnect happens on SET master_heartbeat_period)
That particular part of the slave connecting to the master was missing code
to handle retry in case of network errors. The same problem is present in
MySQL 5.5, but fixed in MySQL 5.6.

Fixed with this patch, by adding the code (mostly identical to MySQL 5.6), and
also adding a test case.

I checked the other queries sent to the master during slave connect, and they
now all seem to handle reconnect in case of network failures.
2014-06-17 14:10:13 +02:00
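
Schematically, the queries sent to the master during slave connect now go
through a retry wrapper along these lines (a sketch with invented names, not
the actual slave.cc code):

  #include <cstdio>

  // Stand-in for a query sent to the master; returns false on a network error.
  static bool run_query_on_master(const char *query) { (void) query; return true; }

  static const unsigned master_retry_count= 10;   // as configured by the user

  // Retry a query against the master up to master_retry_count times, the way
  // the other slave-connect queries already do; the real code also checks
  // io_slave_killed() and sleeps between attempts.
  static bool query_with_retry(const char *query)
  {
    for (unsigned attempt= 1; attempt <= master_retry_count; attempt++)
    {
      if (run_query_on_master(query))
        return true;
      std::fprintf(stderr, "network error, retry %u\n", attempt);
    }
    return false;
  }

  int main()
  {
    return query_with_retry("SET @master_heartbeat_period= 1000") ? 0 : 1;
  }
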
unknown
bd4153a8c2 MDEV-5262, MDEV-5914, MDEV-5941, MDEV-6020: Deadlocks during parallel
replication causing replication to fail.

Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.

The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.

With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.

Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.

Also some misc. fixes found during development and testing:

 - Remove thd_rpl_is_parallel(), as it is not used or needed.

 - Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
   worker thread is killed to resolve a deadlock with fixed commit
   ordering. There are some cases, e.g. in sql/sql_parse.cc, where a KILL_QUERY
   can be ignored if the query otherwise completed successfully, and this
   could cause the deadlock kill to be lost, so that the deadlock was not
   correctly resolved.

 - Fix random test failure due to missing wait_for_binlog_checkpoint.inc.

 - Make sure that deadlock or other temporary errors during parallel
   replication are not printed to the error log; there were some places
   around the replication code with extra error logging. These conditions can
   occur occasionally and are handled automatically without breaking
   replication, so they should not pollute the error log.

 - Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
   the end of a transaction, to be able to detect and resolve deadlocks due to
   commit ordering. But this value was also used as a flag to mark whether
   record_gtid() had been called, by being set to zero, losing the value. Now,
   introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
   valid for the entire duration of the transaction.

 - Fix one place where the code to handle ignored errors called reset_killed()
   unconditionally, even if no error was caught that should be ignored. This
   could cause loss of a deadlock kill signal, breaking deadlock detection and
   resolution.

 - Fix a couple of missing mysql_reset_thd_for_next_command(). This could
   cause a prior error condition to remain for the next event executed,
   causing assertions about errors already being set and possibly giving
   incorrect error handling for following event executions.

 - Fix code that cleared thd->rgi_slave in the parallel replication worker
   threads after each event execution; this caused the deadlock detection and
   handling code to not be able to correctly process the associated
   transactions as belonging to replication worker threads.

 - Remove useless error code in slave_background_kill_request().

 - Fix bug where wfc->wakeup_error was not cleared at
   wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
   error condition to wrongly propagate to a later wait_for_prior_commit(),
   causing spurious ER_PRIOR_COMMIT_FAILED errors.

 - Do not put the binlog background thread into the processlist. It causes
   too many result differences in mtr, and it is probably not useful for
   users anyway to have the process list polluted with a system thread that
   does not perform any user-visible tasks.
2014-06-10 10:13:15 +02:00
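
A minimal sketch of the gap-lock decision described above, with stub types and
a hypothetical server-layer callback standing in for the real InnoDB/THD
interfaces:

  #include <cstdio>

  struct trx_t { const void *mysql_thd; };   // stand-in for an InnoDB transaction

  // Hypothetical server-layer callback: returns true when the two THDs are
  // already fixed wrt. commit order (as parallel replication workers are).
  static bool thd_commit_order_fixed(const void *thd, const void *other_thd)
  {
    (void) thd; (void) other_thd;
    return true;                              // assume "yes" for illustration
  }

  // Decide whether a gap lock held by 'holder' must block 'requester'.
  static bool gap_lock_must_block(const trx_t *requester, const trx_t *holder)
  {
    if (thd_commit_order_fixed(requester->mysql_thd, holder->mysql_thd))
      return false;   // commit order already fixed: ignore the gap conflict
    return true;      // normal case: the gap lock blocks as usual
  }

  int main()
  {
    trx_t t1{nullptr}, t2{nullptr};
    std::printf("blocks: %d\n", gap_lock_must_block(&t1, &t2));
  }
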
unknown
629b822913 MDEV-5262, MDEV-5914, MDEV-5941, MDEV-6020: Deadlocks during parallel
replication causing replication to fail.

In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds e.g. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.

Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (e.g. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with error.

With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.

The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happens deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to mutexes held.

Deadlock is assumed to be (very) rarely occurring, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.

Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.

Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
2014-06-03 10:31:11 +02:00
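
A rough sketch of the lock-wait reporting and asynchronous kill described
above, using invented names; the point is that the kill is queued for a slave
background thread rather than performed inside the lock code:

  #include <cstdint>
  #include <cstdio>
  #include <queue>

  struct Trx { std::uint64_t commit_sub_id; bool deadlock_kill_pending; };

  // Queue consumed by a (hypothetical) slave background thread, which later
  // performs the actual THD::awake()-style kill outside the lock mutexes.
  static std::queue<Trx*> background_kill_queue;

  // Conceptually called from the lock code when 'waiter' is about to wait
  // for a lock held by 'holder'.
  static void report_lock_wait(Trx *waiter, Trx *holder)
  {
    // If the waiter must commit before the holder, waiting would deadlock:
    // the holder is forced to commit after the waiter, so kill the holder,
    // let it roll back and release its locks, and retry it later.
    if (waiter->commit_sub_id < holder->commit_sub_id)
    {
      holder->deadlock_kill_pending= true;
      background_kill_queue.push(holder);   // asynchronous, not inline
    }
  }

  int main()
  {
    Trx t1{1, false}, t2{2, false};
    report_lock_wait(&t1, &t2);             // T1 waits on T2; T2 gets killed
    std::printf("kills queued: %zu\n", background_kill_queue.size());
  }
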
Nirbhay Choubey
645d402544 Fix for a build failure. 2014-05-21 15:16:15 -04:00
Nirbhay Choubey
99df0fbad5 bzr merge -r3968..3984 codership/5.5 (non-Innodb changes only). 2014-05-21 14:32:57 -04:00
Nirbhay Choubey
086af8367e bzr merge -r4209 maria/10.0. 2014-05-21 11:09:55 -04:00
unknown
787c470cef MDEV-5262: Missing retry after temp error in parallel replication
Handle retry of event groups that span multiple relay log files.

 - If retry reaches the end of one relay log file, move on to the next.

 - Handle refcounting of relay log files, and avoid purging relay log
   files until all event groups have completed that might have needed
   them for transaction retry.
2014-05-15 15:52:08 +02:00
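
An illustrative sketch of the relay-log refcounting idea, with invented names:

  #include <cstdio>
  #include <map>
  #include <string>

  // How many in-flight event groups might still need each relay log file
  // for a transaction retry.
  static std::map<std::string, int> relay_log_refs;

  static void event_group_start(const std::string &relay_log)
  { ++relay_log_refs[relay_log]; }

  static void event_group_done(const std::string &relay_log)
  { --relay_log_refs[relay_log]; }

  // A relay log file may only be purged once no pending event group could
  // need it for a retry.
  static bool can_purge(const std::string &relay_log)
  {
    auto it= relay_log_refs.find(relay_log);
    return it == relay_log_refs.end() || it->second == 0;
  }

  int main()
  {
    event_group_start("relay.000002");
    std::printf("purge ok: %d\n", can_purge("relay.000002"));  // 0: in use
    event_group_done("relay.000002");
    std::printf("purge ok: %d\n", can_purge("relay.000002"));  // 1: safe
  }
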
Sergei Golubchik
edf1fbd25b MDEV-6153 Trivial Lintian errors in MariaDB sources: spelling errors and wrong executable bits 2014-05-13 11:53:30 +02:00
Venkatesh Duggirala
33f15dc7ac Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Problem: A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way.
Thread 1: Dump thread (the slave is reconnecting, so on
              the master a new dump thread is trying to kill
              zombie dump threads). It acquired the zombie
              thread's LOCK_thd_data and is about to acquire
              mysys_var->current_mutex (which is LOCK_log).
Thread 2: Application thread executing SHOW BINLOGS;
               it acquired LOCK_log and is about to acquire
               LOCK_index.
Thread 3: Application thread executing PURGE BINARY LOGS;
               it acquired LOCK_index and is about to
               acquire LOCK_thread_count.
Thread 4: Application thread executing SHOW PROCESSLIST;
               it acquired LOCK_thread_count and is
               about to acquire the zombie dump thread's
               LOCK_thd_data.
Deadlock cycle:
     Thread 1 -> Thread 2 -> Thread 3 -> Thread 4 -> Thread 1

The same deadlock was also observed when thread 4 was executing
a 'SELECT * FROM information_schema.processlist' statement, had
acquired LOCK_thread_count and was about to acquire the zombie
dump thread's LOCK_thd_data.

Analysis:
Four locks are involved in the deadlock: LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count and LOCK_index are global mutexes,
whereas LOCK_thd_data is local to a thread.
We can divide these four locks into two groups.
Group 1 consists of LOCK_log and LOCK_index, and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of the other two mutexes,
LOCK_thread_count and LOCK_thd_data, and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order in the MySQL
system for locks across these two groups. In the problematic
example above, each thread's lock acquisition looks fine in
isolation, but when all 4 threads are combined they end up
in a deadlock.

Fix:
Since the way the threads take locks seems fine, this patch
changes the duration of the locks in Thread 4 to break the
deadlock. Before the patch, Thread 4 (the 'show processlist'
command) acquired LOCK_thread_count in mysqld_list_processes()
for the complete duration of the function, and it also
acquired/released each thread's LOCK_thd_data.

LOCK_thread_count is used to protect addition and deletion of
threads in the global threads list. While show processlist is
looping through all the existing threads, a thread exiting is a
problem, but a new thread being added to the system is not.
Hence a new mutex "LOCK_thd_remove" is introduced, which protects
deletion of a thread from the global threads list. All exiting
threads must acquire LOCK_thd_remove followed by
LOCK_thread_count. (They must still take LOCK_thread_count
because other places in the code assume that thread exit is
protected by LOCK_thread_count; this fix only changes the
'show processlist' query logic.)
(E.g. the unlink_thd logic is now protected by LOCK_thd_remove.)

The logic of mysqld_list_processes() (and fill_schema_processlist)
is now protected by 'LOCK_thd_remove' instead of
'LOCK_thread_count'.

Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
2014-05-08 18:13:01 +05:30
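
A simplified sketch of the new locking scheme (std::mutex stands in for the
server's mysql_mutex_t; the names follow the description above, not the exact
server code):

  #include <cstdio>
  #include <list>
  #include <mutex>

  struct THD_stub { int id; std::mutex LOCK_thd_data; };

  static std::mutex LOCK_thd_remove;    // protects removal from the thread list
  static std::mutex LOCK_thread_count;  // protects addition/removal (as before)
  static std::list<THD_stub*> threads;

  // SHOW PROCESSLIST: hold LOCK_thd_remove for the whole scan so no thread
  // can be removed underneath us; new threads may still be added.
  static void list_processes()
  {
    std::lock_guard<std::mutex> remove_guard(LOCK_thd_remove);
    for (THD_stub *thd : threads)
    {
      std::lock_guard<std::mutex> data_guard(thd->LOCK_thd_data);
      std::printf("thread %d\n", thd->id);
    }
  }

  // Exiting thread: take LOCK_thd_remove first, then LOCK_thread_count,
  // matching the new order LOCK_thd_remove -> ... -> LOCK_thread_count.
  static void unlink_thd(THD_stub *thd)
  {
    std::lock_guard<std::mutex> remove_guard(LOCK_thd_remove);
    std::lock_guard<std::mutex> count_guard(LOCK_thread_count);
    threads.remove(thd);
  }

  int main()
  {
    THD_stub t{1};
    threads.push_back(&t);
    list_processes();
    unlink_thd(&t);
  }
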
unknown
b0b60f2498 MDEV-5262: Missing retry after temp error in parallel replication
Start implementing that an event group can be re-tried in parallel replication
if it fails with a temporary error (like deadlock).

Patch is very incomplete, just some very basic retry works.

Stuff still missing (not complete list):

 - Handle moving to the next relay log file, if event group to be retried
   spans multiple relay log files.

 - Handle refcounting of relay log files, to ensure that we do not purge a
   relay log file and then later attempt to re-execute events out of it.

 - Handle description_event_for_exec - we need to save this somehow for the
   possible retry - and use the correct one in case it differs between relay
   logs.

 - Do another retry attempt in case the first retry also fails.

 - Limit the max number of retries.

 - Lots of testing will be needed for the various edge cases.
2014-05-08 14:20:18 +02:00
unknown
64923bb653 MDEV-6156: Parallel replication incorrectly caches charset between worker threads
The previous patch for this bug was unfortunately completely wrong.

The purpose of cached_charset is to remember which character set we
have installed currently in the THD, so that in the common case where
charset does not change between queries, we do not need to update it
in the THD. Thus, it is important that the cached_charset field is
tightly coupled to the THD for which it handles caching.

Thus the right place to put cached_charset seems to be in the THD.
This patch introduces a field THD::system_thread_info where such info
in general can be placed without further inflating the THD with unused
data for other threads (THD is already far too big as it is). It then
moves the cached_charset into this slot for the SQL driver thread and
for the parallel replication worker threads.

The THD::rpl_filter field is also moved inside system_thread_info, to
keep the size of THD unchanged. Moving further fields in to reduce the
size of THD is a separate task, filed as MDEV-6164.
2014-04-25 12:58:31 +02:00
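
A condensed sketch of the layout described above; the names are approximate,
not the exact server definitions:

  #include <cstdio>
  #include <cstring>

  struct Rpl_filter;                      // opaque here

  // Per-system-thread data hung off the THD; unused by ordinary client
  // connections, so the THD itself does not grow for them.
  struct rpl_sql_thread_info
  {
    Rpl_filter *rpl_filter;
    char cached_charset[6];               // last charset bytes installed in the THD
    rpl_sql_thread_info(Rpl_filter *f) : rpl_filter(f)
    { std::memset(cached_charset, 0, sizeof(cached_charset)); }
  };

  struct THD_stub
  {
    union { rpl_sql_thread_info *rpl_sql_info; } system_thread_info;
  };

  // Only touch the THD's charset when the cached value actually changed.
  static bool charset_changed(THD_stub *thd, const char *charset_bytes)
  {
    char *cached= thd->system_thread_info.rpl_sql_info->cached_charset;
    if (std::memcmp(cached, charset_bytes, 6) == 0)
      return false;                       // common case: nothing to do
    std::memcpy(cached, charset_bytes, 6);
    return true;
  }

  int main()
  {
    rpl_sql_thread_info info(nullptr);
    THD_stub thd;
    thd.system_thread_info.rpl_sql_info= &info;
    const char cs[6]= {8, 0, 33, 0, 63, 0};
    std::printf("%d %d\n", charset_changed(&thd, cs), charset_changed(&thd, cs));
  }
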
unknown
010971a761 MDEV-6156: Parallel replication incorrectly caches charset between worker threads
Replication caches the character sets used in a query, to be able to quickly
reuse them for the next query in the common case of them not having changed.

In parallel replication, this caching needs to be per-worker-thread. The
code was not modified to handle this correctly, so the caching in one worker
could cause another worker to run a query using the wrong character set,
causing replication corruption.
2014-04-23 16:06:06 +02:00
Nirbhay Choubey
4213263101 Merging mariadb-10.0.10.
* bzr merge -rtag:mariadb-10.0.10 maria/10.0.
2014-04-08 10:36:34 -04:00
unknown
584c2d0ae9 MDEV-6040: MariaDB hangs if terminated quickly after start
We need to use mysql_cond_broadcast() rather than _signal for
COND_thread_count, as there can be multiple waiters.

Thanks to Pavel Ivanov for reporting both the problem and the
solution.
2014-04-10 09:38:57 +02:00
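
The difference in a nutshell (an illustrative stub, not the server code): with
several threads waiting on COND_thread_count, only a broadcast is guaranteed
to wake all of them.

  #include <pthread.h>

  static pthread_mutex_t LOCK_thread_count= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  COND_thread_count= PTHREAD_COND_INITIALIZER;
  static int thread_count= 1;

  // Called when a thread exits; there can be multiple waiters, so wake all
  // of them, not just one.
  static void signal_thd_deleted(void)
  {
    pthread_mutex_lock(&LOCK_thread_count);
    thread_count--;
    pthread_cond_broadcast(&COND_thread_count);   // not pthread_cond_signal()
    pthread_mutex_unlock(&LOCK_thread_count);
  }

  int main(void)
  {
    signal_thd_deleted();
    return 0;
  }
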
Nirbhay Choubey
02ba2bfdb4 Merging revision from codership-mysql/5.5 (r3928..3968) and
codership-mysql/5.6 (r4021..4065).
- Also contains fixes for some build failures.
2014-03-27 16:26:00 -04:00
Nirbhay Choubey
90e4f7f9d3 * bzr merge -rtag:mariadb-10.0.9 maria/10.0
* Fix for post-merge build failures.
2014-03-26 14:27:24 -04:00
Nirbhay Choubey
c5f7486654 bzr merge -r4062..4065 codership/5.6 2014-03-26 14:13:12 -04:00
Nirbhay Choubey
899f9801d4 bzr merge -r3946..3968 codership/5.5 2014-03-25 17:01:05 -04:00
Sergei Golubchik
10740939eb 5.5 merge 2014-03-26 22:25:38 +01:00
Michael Widenius
dd13db6f4a MDEV-5829: STOP SLAVE resets global status variables
The reason for the bug was an optimization for higher connect speed that moved the point where global status is updated,
but forgot to update the status when a slave thread dies.
Fixed by adding thd->add_status_to_global() before deleting the slave thread's thd.


mysys/my_delete.c:
  Added missing newline
sql/mysqld.cc:
  Use add_status_to_global()
sql/slave.cc:
  Added missing add_status_to_global()
sql/sql_class.cc:
  Use add_status_to_global()
sql/sql_class.h:
  Simplify adding local status to global by adding add_status_to_global()
2014-03-14 16:29:23 +02:00
unknown
8b9b7ec395 MDEV-5804: If same GTID is received on multiple master connections in multi-source replication, the event is double-executed causing corruption or replication failure
Some fixes, mainly to make it work in non-parallel replication mode also
(--slave-parallel-threads=0).

Patch should be fairly complete now.
2014-03-12 00:14:49 +01:00
unknown
2c2478b822 MDEV-5804: If same GTID is received on multiple master connections in multi-source replication, the event is double-executed causing corruption or replication failure
Before, the arrival of the same GTID twice in multi-source replication
would cause a double-apply, or in GTID strict mode an error.

Keep that behaviour by default, but add an option --gtid-ignore-duplicates
which makes it possible to handle duplicates correctly, ignoring all but the first.
This relies on the user ensuring correct configuration so that
sequence numbers are strictly increasing within each replication
domain; then duplicates can be detected simply by comparing the
sequence numbers against what is already applied.

Only one master connection (but possibly multiple parallel worker
threads within that connection) is allowed to apply events within
one replication domain at a time; any other connection that
receives a GTID in the same domain either discards it (if it is
already applied) or waits for the other connection to not have
any events to apply.

Intermediate patch, as proof-of-concept for testing. The main limitation
is that currently it is only implemented for parallel replication,
@@slave_parallel_threads > 0.
2014-03-09 10:27:38 +01:00
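
A toy sketch of the duplicate check described above (the real code also tracks
which master connection currently owns each domain); the names are made up:

  #include <cstdint>
  #include <cstdio>
  #include <map>

  struct GTID { std::uint32_t domain_id; std::uint32_t server_id; std::uint64_t seq_no; };

  // Highest sequence number already applied, per replication domain.
  static std::map<std::uint32_t, std::uint64_t> applied_seq_no;

  // With --gtid-ignore-duplicates, a GTID whose seq_no is not strictly larger
  // than what is already applied in its domain is silently skipped.
  static bool should_apply(const GTID &gtid)
  {
    std::uint64_t &applied= applied_seq_no[gtid.domain_id];
    if (gtid.seq_no <= applied)
      return false;                 // duplicate (or older): ignore
    applied= gtid.seq_no;
    return true;
  }

  int main()
  {
    GTID g{0, 1, 100};
    std::printf("%d %d\n", should_apply(g), should_apply(g));   // 1 then 0
  }
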
Sergei Golubchik
4c788b06d4 10.0-base merge 2014-03-05 23:20:10 +01:00
unknown
cfc2eb9dd2 MDEV-5703: [PATCH] Slave disconnects and fails to reconnect on Error_code: 1159
Patch from Tomas Matejicek

Add missing error code to is_network_error(), to allow the slave to
automatically attempt reconnection also in this case.
2014-03-04 14:37:09 +01:00
unknown
5ec49e6452 Merge MDEV-5754, MDEV-5769, and MDEV-5764 into 10.0 2014-03-04 14:32:42 +01:00
unknown
bd2a0a2389 Merge MDEV-5754, MDEV-5769, and MDEV-5764 into 10.0-base 2014-03-04 14:21:00 +01:00
unknown
ec374f1e53 MDEV-5769: Slave crashes on attempt to do parallel replication from an older master
An older master has no GTID events, so such events are not available for
deciding on scheduling of event groups and so on.

With this patch, we run such events from old masters single-threaded, in the
sql driver thread.

This seems better than trying to make the parallel code handle the data from
older masters; while possible, this would require a lot of testing (as well as
possibly some extra overhead in the scheduling of events), which hardly seems
worthwhile.
2014-03-04 08:48:32 +01:00
unknown
641feed481 MDEV-5764: START SLAVE UNTIL does not work with parallel replication
With parallel replication, there can be any number of events queued on
in-memory lists in the worker threads.

For normal STOP SLAVE, we want to skip executing any remaining events on those
lists and stop as quickly as possible.

However, for START SLAVE UNTIL, when the UNTIL position is reached in the SQL
driver thread, we must _not_ stop until all already queued events for the
workers have been executed - otherwise we would stop too early, before the
actual UNTIL position had been completely reached.

The code did not handle UNTIL correctly, stopping too early due to not
executing the queued events to completion. Fix this, and also implement that
an explicit STOP SLAVE in the middle (when the SQL driver thread has reached
the UNTIL position but the workers have not) _will_ cause an immediate stop.
2014-03-03 12:13:55 +01:00
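
Schematically, the stop logic now distinguishes the two cases like this
(invented names, not the actual worker code):

  #include <cstdio>
  #include <deque>

  struct Worker { std::deque<int> queued_events; };

  enum StopReason { STOP_UNTIL_REACHED, STOP_EXPLICIT };

  // For a plain STOP SLAVE we stop as fast as possible; when the UNTIL
  // position was reached we must first drain everything already queued.
  static bool may_skip_queued_events(StopReason reason)
  {
    return reason == STOP_EXPLICIT;
  }

  static void stop_workers(Worker *workers, int n, StopReason reason)
  {
    for (int i= 0; i < n; i++)
    {
      if (may_skip_queued_events(reason))
        workers[i].queued_events.clear();       // fast stop
      else
        while (!workers[i].queued_events.empty())
        {
          // execute_event(workers[i].queued_events.front());  // sketch only
          workers[i].queued_events.pop_front();
        }
    }
  }

  int main()
  {
    Worker w[2];
    w[0].queued_events= {1, 2, 3};
    stop_workers(w, 2, STOP_UNTIL_REACHED);
    std::printf("remaining: %zu\n", w[0].queued_events.size());
  }
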
Sergei Golubchik
41c760b121 merge 2014-02-28 10:00:31 +01:00
Sergei Golubchik
05aba79e98 MDEV-4447 MariaDB sources should have unix-style line endings everywhere 2014-02-27 12:00:16 +01:00
unknown
6d3150c48d Merge MDEV-5657 (parallel replication) to 10.0-base 2014-02-26 17:03:32 +01:00
unknown
20959fa09c Merge MDEV-5657 (parallel replication) to 10.0 2014-02-26 16:38:42 +01:00
Sergei Golubchik
0dc23679c8 10.0-base merge 2014-02-26 15:28:07 +01:00
unknown
e90f68c0ba MDEV-5657: Parallel replication.
Clean up and improve the parallel implementation code, mainly related to
scheduling of work to threads and handling of stop and errors.

Fix a lot of bugs in various corner cases that could lead to crashes or
corruption.

Fix that a single replication domain could easily grab all worker threads and
stall all other domains; now a configuration variable
--slave-domain-parallel-threads allows limiting the number of
workers.

Allow next event group to start as soon as previous group begins the commit
phase (as opposed to when it ends it); this allows multiple event groups on
the slave to participate in group commit, even when no other opportunities for
parallelism are available.

Various fixes:

 - Fix some races in the rpl.rpl_parallel test case.

 - Fix an old incorrect assertion in Log_event iocache read.

 - Fix repeated malloc/free of wait_for_commit and rpl_group_info objects.

 - Simplify wait_for_commit wakeup logic.

 - Fix one case in queue_for_group_commit() where killing one thread would
   fail to correctly signal the error to the next, causing loss of the
   transaction after slave restart.

 - Fix leaking of pthreads (and their allocated stack) due to missing
   PTHREAD_CREATE_DETACHED attribute.

 - Fix how one batch of group-committed transactions wait for the previous
   batch before starting to execute themselves. The old code had a very
   complex scheduling where the first transaction was handled differently,
   with subtle bugs in corner cases. Now each event group is always scheduled
   for a new worker (in a round-robin fashion amongst available workers).
   Keep a count of how many transactions have started to commit, and wait for
   that counter to reach the appropriate value.

 - Fix slave stop to wait for all workers to actually complete processing;
   before, the wait was for update of last_committed_sub_id, which happens a
   bit earlier, and could leave worker threads potentially accessing bits of
   the replication state that is no longer valid after slave stop.

 - Fix a couple of places where the test suite would kill a thread waiting
   inside enter_cond() in connection with debug_sync; debug_sync + kill can
   crash in rare cases due to a race with mysys_var_current_mutex in this
   case.

 - Fix some corner cases where we had enter_cond() but no exit_cond().

 - Fix that we could get failure in wait_for_prior_commit() but forget to flag
   the error with my_error().

 - Fix slave stop (both for normal stop and stop due to error). Now, at stop
   we pick a specific safe point (in terms of event groups executed) and make
   sure that all event groups before that point are executed to completion,
   and that no event group after that point starts executing; this ensures a
   safe place to restart replication, even for non-transactional stuff/DDL. In
   error stop, make sure that all prior event groups are allowed to execute to
   completion, and that any later event groups that have started are rolled
   back, if possible. The old code could leave e.g. T1 and T3 committed but T2
   not, or it could even leave half a transaction not rolled back in some
   random worker, which would cause big problems when that worker was later
   reused after slave restart.

 - Fix the accounting of amount of events queued for one worker. Before, the
   amount was reduced immediately as soon as the events were dequeued (which
   happens all at once); this allowed twice the amount of events to be queued
   in memory for each single worker, which is not what users would expect.

 - Fix that an error set during execution of one event was sometimes not
   cleared before executing the next, causing problems with the error
   reporting.

 - Fix incorrect handling of thd->killed in worker threads.
2014-02-26 15:02:09 +01:00
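
A small sketch of the "count of started commits" idea mentioned above, with
approximate names:

  #include <condition_variable>
  #include <cstdint>
  #include <mutex>

  // Shared per-domain state: a counter of event groups that have started to
  // commit, plus a condition variable to wait on it.
  struct rpl_parallel_entry_stub
  {
    std::mutex mtx;
    std::condition_variable cond;
    std::uint64_t count_committing_event_groups= 0;
  };

  static void register_started_commit(rpl_parallel_entry_stub *e)
  {
    std::lock_guard<std::mutex> lk(e->mtx);
    e->count_committing_event_groups++;
    e->cond.notify_all();
  }

  // A worker waits until all event groups scheduled before its own batch
  // (wait_count of them) have at least started to commit.
  static void wait_for_prior_batch(rpl_parallel_entry_stub *e, std::uint64_t wait_count)
  {
    std::unique_lock<std::mutex> lk(e->mtx);
    e->cond.wait(lk, [&]{ return e->count_committing_event_groups >= wait_count; });
  }

  int main()
  {
    rpl_parallel_entry_stub e;
    register_started_commit(&e);
    wait_for_prior_batch(&e, 1);   // returns immediately: one commit started
  }
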
Sergei Golubchik
0b9a0a3517 5.5 merge 2014-02-25 16:04:35 +01:00
Sergey Vojtovich
d12c7adf71 MDEV-5314 - Compiling fails on OSX using clang
This is port of fix for MySQL BUG#17647863.

revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
  Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM

  Rename test() macro to MY_TEST() to avoid conflict with libc++.
2014-02-19 14:05:15 +04:00
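
For reference, the renamed macro is the usual boolean-normalizing helper;
approximately (sketch):

  #include <cstdio>

  // Renamed from test() to MY_TEST() to avoid clashing with libc++;
  // the definition (roughly) normalizes any value to 0 or 1.
  #define MY_TEST(a) ((a) ? 1 : 0)

  int main()
  {
    const char *p= nullptr;
    std::printf("%d %d\n", MY_TEST(42), MY_TEST(p));   // 1 0
  }
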
Sergei Golubchik
84651126c0 MySQL-5.5.36 merge
(without a few incorrect bugfixes and with 1250 files where only a copyright year was changed)
2014-02-17 11:00:51 +01:00
unknown
33a8afd8ef MDEV-5703: [PATCH] Slave disconnects and fails to reconnect on Error_code: 1159
Patch from Tomas Matejicek

Add missing error code to is_network_error(), to allow the slave to
automatically attempt reconnection also in this case.
2014-03-04 20:50:19 +01:00
unknown
dd93ec5633 Merge MariaDB 10.0-base to 10.0. 2014-02-10 15:12:17 +01:00
Michael Widenius
10001c8e4f Automatic merge 2014-02-05 19:23:11 +02:00
Michael Widenius
5426facdcb Replication changes for CREATE OR REPLACE TABLE
- CREATE TABLE is by default executed on the slave as CREATE OR REPLACE TABLE
- DROP TABLE is by default executed on the slave as DROP TABLE IF EXISTS

This means that a slave will by default continue even if we try to create
a table that already existed on the slave (the table will be deleted and
re-created) or if we try to drop a table that didn't exist on the slave.
This should be safe: instead of having the slave stop because of an
inconsistency between master and slave, it will fix the inconsistency.
Those who would prefer a stopped slave in the above cases can set slave_ddl_exec_mode to STRICT.

- Ensure that a CREATE OR REPLACE TABLE which dropped a table is replicated
- DROP TABLE that generated an error on the master is handled as an identical DROP TABLE on the slave (IF EXISTS is not added in this case)
- Added slave_ddl_exec_mode variable to decide how DDL's are replicated

New logic for handling BEGIN GTID ... COMMIT from the binary log:

- When we find a BEGIN GTID, we start a transaction and set OPTION_GTID_BEGIN
- When we find COMMIT, we reset OPTION_GTID_BEGIN and execute the normal COMMIT code.
- While OPTION_GTID_BEGIN is set:
  - We don't generate implicit commits before or after statements
  - All tables are regarded as transactional tables in the binary log (to ensure things are executed exactly as on the master)
- We reset OPTION_GTID_BEGIN also on rollback

This helps ensure that we don't get any sporadic commits (and thus new GTIDs) on the slave, and helps keep the GTIDs between master and slave in sync.


mysql-test/extra/rpl_tests/rpl_log.test:
  Added testing of mode slave_ddl_exec_mode=STRICT
mysql-test/r/mysqld--help.result:
  New help messages
mysql-test/suite/rpl/r/create_or_replace_mix.result:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/r/create_or_replace_row.result:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/r/create_or_replace_statement.result:
  Testing replication of create or replace
mysql-test/suite/rpl/r/rpl_gtid_startpos.result:
  Test must be run with slave_ddl_exec_mode=STRICT, as part of the test depends on DROP TABLE failing on the slave.
mysql-test/suite/rpl/r/rpl_row_log.result:
  Updated result
mysql-test/suite/rpl/r/rpl_row_log_innodb.result:
  Updated result
mysql-test/suite/rpl/r/rpl_row_show_relaylog_events.result:
  Updated result
mysql-test/suite/rpl/r/rpl_stm_log.result:
  Updated result
mysql-test/suite/rpl/r/rpl_stm_mix_show_relaylog_events.result:
  Updated result
mysql-test/suite/rpl/r/rpl_temp_table_mix_row.result:
  Updated result
mysql-test/suite/rpl/t/create_or_replace.inc:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_mix.cnf:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_mix.test:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_row.cnf:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_row.test:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_statement.cnf:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/create_or_replace_statement.test:
  Testing of CREATE OR REPLACE TABLE with replication
mysql-test/suite/rpl/t/rpl_gtid_startpos.test:
  Test must be run with slave_ddl_exec_mode=STRICT, as part of the test depends on DROP TABLE failing on the slave.
mysql-test/suite/rpl/t/rpl_stm_log.test:
  Removed some lines
mysql-test/suite/sys_vars/r/slave_ddl_exec_mode_basic.result:
  Testing of slave_ddl_exec_mode
mysql-test/suite/sys_vars/t/slave_ddl_exec_mode_basic.test:
  Testing of slave_ddl_exec_mode
sql/handler.cc:
  Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
  This is to ensure that statements are not committed too early if non-transactional tables are used.
sql/log.cc:
  Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
  Also treat 'direct' log events as transactional (to get them logged as they were on the master)
sql/log_event.cc:
  Ensure that the new error from DROP TABLE when trying to drop a view is treated the same as the old one.
  Store error code that slave expects in THD.
  Set OPTION_GTID_BEGIN if we find a BEGIN.
  Reset OPTION_GTID_BEGIN if we find a COMMIT.
sql/mysqld.cc:
  Added slave_ddl_exec_mode_options
sql/mysqld.h:
  Added slave_ddl_exec_mode_options
sql/rpl_gtid.cc:
  Reset OPTION_GTID_BEGIN if we record a gtid (safety)
sql/sql_class.cc:
  Regard all tables as transactional in commit if OPTION_GTID_BEGIN is set.
sql/sql_class.h:
  Added to THD: log_current_statement and slave_expected_error
sql/sql_insert.cc:
  Ensure that CREATE OR REPLACE is logged if table was deleted.
  Don't do implicit commit for CREATE if we are under OPTION_GTID_BEGIN
sql/sql_parse.cc:
  Change CREATE TABLE -> CREATE OR REPLACE TABLE for slaves
  Change DROP TABLE -> DROP TABLE IF EXISTS for slaves
  CREATE TABLE doesn't force implicit commit in case of OPTION_GTID_BEGIN
  Don't do commits before or after any statement if OPTION_GTID_BEGIN was set.
sql/sql_priv.h:
  Added OPTION_GTID_BEGIN
sql/sql_show.cc:
  Enhanced store_create_info() to also be able to handle CREATE OR REPLACE
sql/sql_show.h:
  Updated prototype
sql/sql_table.cc:
  Ensure that CREATE OR REPLACE is logged if table was deleted.
sql/sys_vars.cc:
  Added slave_ddl_exec_mode
sql/transaction.cc:
  Added warning if we got a GTID under OPTION_GTID_BEGIN
2014-02-05 19:01:59 +02:00
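
A schematic of the DDL rewriting behaviour described above; the real server
adjusts flags on the parsed statement rather than rewriting SQL text, and the
enum/variable names here only mirror the description:

  #include <cstdio>
  #include <string>

  enum slave_ddl_exec_mode_t { SLAVE_EXEC_MODE_STRICT, SLAVE_EXEC_MODE_IDEMPOTENT };
  static slave_ddl_exec_mode_t slave_ddl_exec_mode= SLAVE_EXEC_MODE_IDEMPOTENT;

  // How a replicated DDL statement is (conceptually) treated on the slave.
  static std::string effective_ddl(const std::string &stmt)
  {
    if (slave_ddl_exec_mode == SLAVE_EXEC_MODE_STRICT)
      return stmt;                                 // execute exactly as logged
    if (stmt.rfind("CREATE TABLE", 0) == 0)
      return "CREATE OR REPLACE TABLE" + stmt.substr(12);
    if (stmt.rfind("DROP TABLE", 0) == 0)
      return "DROP TABLE IF EXISTS" + stmt.substr(10);
    return stmt;
  }

  int main()
  {
    std::printf("%s\n", effective_ddl("CREATE TABLE t1 (a INT)").c_str());
    std::printf("%s\n", effective_ddl("DROP TABLE t1").c_str());
  }
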
Sergei Golubchik
72c20282db 10.0-base merge 2014-02-03 15:22:39 +01:00
Sergei Golubchik
59d9d08e2b 5.5 merge 2014-02-01 00:54:03 +01:00
Michael Widenius
7ffc9da093 Implementation of MDEV-5491: CREATE OR REPLACE TABLE
Using CREATE OR REPLACE TABLE is identical to

DROP TABLE IF EXISTS table_name;
CREATE TABLE ...;

Except that:

* CREATE OR REPLACE is atomic (no one can create the same table between the drop and the create).
* Temporary tables will not shadow the table name for the DROP, as the CREATE TABLE already tells us whether we are using a temporary table or not.
* If the table was locked with LOCK TABLES, the new table will be locked with the same lock after it's created.

Implementation details:
- We no longer open the to-be-created table during CREATE TABLE, which the original code did.
  - There is no need to open a table we are planning to create. It's enough to check if the table exists or not.
- Removed some of the duplicated code for CREATE IF NOT EXISTS.
- Give an error when using CREATE OR REPLACE with IF NOT EXISTS (conflicting options).
- As a side effect of the code changes, we no longer have to internally re-prepare prepared statements with CREATE TABLE if the table exists.
- Made one code path for all testing if log table are in use.
- Better error message if one tries to create/drop/alter a log table in use
- Added back the disabled rpl_row_create_table test as it now seems to work and includes a lot of interesting tests.
- Added HA_LEX_CREATE_REPLACE to mark if we are using CREATE OR REPLACE
- Aligned CREATE OR REPLACE parsing code in sql_yacc.yy for TABLE and VIEW
- Changed interface for drop_temporary_table() to make it more reusable
- Changed Locked_tables_list::init_locked_tables() to work on the table object instead of the table list object. Before this it used a mix of both, which was not good.
- Locked_tables_list::unlock_locked_tables(THD *thd) now requires a valid thd argument. Old usage of calling this with 0 is changed to instead call Locked_tables_list::reset()
- Added functions Locked_tables_list::restore_lock() and Locked_tables_list::add_back_last_deleted_lock() to be able to easily add back a locked table to the lock list.
- Added restart_trans_for_tables() to be able to restart a transaction.
- DROP_ACL is required if one uses CREATE OR REPLACE TABLE.
- Added drop of normal and temporary tables in create_table_impl() if CREATE OR REPLACE was used.
- Added reacquiring of table locks in mysql_create_table() and mysql_create_like_table()




mysql-test/include/commit.inc:
  With new code we get fewer status increments
mysql-test/r/commit_1innodb.result:
  With new code we get fewer status increments
mysql-test/r/create.result:
  Added testing of create or replace with timeout
mysql-test/r/create_or_replace.result:
  Basic testing of CREATE OR REPLACE TABLE
mysql-test/r/partition_exchange.result:
  New error message
mysql-test/r/ps_ddl.result:
  Fewer reprepares with new code
mysql-test/suite/archive/discover.result:
  Don't rediscover archive tables if the .frm file exists
  (Sergei will look at this if there is a better way...)
mysql-test/suite/archive/discover.test:
  Don't rediscover archive tables if the .frm file exists
  (Sergei will look at this if there is a better way...)
mysql-test/suite/funcs_1/r/innodb_views.result:
  New error message
mysql-test/suite/funcs_1/r/memory_views.result:
  New error message
mysql-test/suite/rpl/disabled.def:
  rpl_row_create_table should now be safe to use
mysql-test/suite/rpl/r/rpl_row_create_table.result:
  Updated results after adding back disabled test
mysql-test/suite/rpl/t/rpl_create_if_not_exists.test:
  Added comment
mysql-test/suite/rpl/t/rpl_row_create_table.test:
  Added CREATE OR REPLACE TABLE test
mysql-test/t/create.test:
  Added CREATE OR REPLACE TABLE test
mysql-test/t/create_or_replace-master.opt:
  Create logs
mysql-test/t/create_or_replace.test:
  Basic testing of CREATE OR REPLACE TABLE
mysql-test/t/partition_exchange.test:
  Error number changed as we are now using the same code for all log table change issues
mysql-test/t/ps_ddl.test:
  Fewer reprepares with new code
sql/handler.h:
  Moved things around a bit in a structure to get better alignment.
  Added HA_LEX_CREATE_REPLACE to mark if we are using CREATE OR REPLACE
  Added 3 elements to end of HA_CREATE_INFO to be able to store state to add backs locks in case of LOCK TABLES.
sql/log.cc:
  Reimplemented check_if_log_table():
  - Simpler and faster usage
  - Can give error messages
  
  This gives us one code path for almost all error messages if log tables are in use
sql/log.h:
  New interface for check_if_log_table()
sql/slave.cc:
  More logging
sql/sql_alter.cc:
  New interface for check_if_log_table()
sql/sql_base.cc:
  More documentation
  Changed interface for drop_temporary_table() to make it more reusable
  Changed Locked_tables_list::init_locked_tables() to work on the table object instead of the table list object. Before this it used a mix of both, which was not good.
  Locked_tables_list::unlock_locked_tables(THD *thd) now requires a valid thd argument.  Old usage of calling this with 0 is changed to instead call Locked_tables_list::reset()
  Added functions Locked_tables_list::restore_lock() and Locked_tables_list::add_back_last_deleted_lock() to be able to easily add back a locked table to the lock list.
  Check the command number instead of open_strategy to detect whether CREATE TABLE was used.
  Added restart_trans_for_tables() to be able to restart a transaction.  This was needed in "create or replace ... select" between the drop table and the select.
sql/sql_base.h:
  Added and updated function prototypes
sql/sql_class.h:
  Added new prototypes to Locked_tables_list class
  Added extra argument to select_create to avoid double call to eof() or send_error()
  - I needed this in some edge case where the table was not created against expectations.
sql/sql_db.cc:
  New interface for check_if_log_table()
sql/sql_insert.cc:
  Remember the position of the lock information so that we can reacquire the table lock for LOCK TABLES + CREATE OR REPLACE TABLE SELECT. Later add back the lock by calling restore_lock().
  Removed one unneeded indentation level in create_table_from_items()
  Ensure we don't call send_eof() or abort_result_set() twice.
sql/sql_lex.h:
  Removed a variable that I temporarily added in an earlier changeset
sql/sql_parse.cc:
  Removed old test code (marked with QQ)
  Ensure that we have open_strategy set as TABLE_LIST::OPEN_STUB in CREATE TABLE
  Removed some IF NOT EXISTS code as this is now handled in create_table_impl().
  Set OPTION_KEEP_LOGS later. This code had to be moved as the test for IF EXISTS has changed place.
  DROP_ACL is required if one uses CREATE OR REPLACE TABLE.
sql/sql_partition_admin.cc:
  New interface for check_if_log_table()
sql/sql_rename.cc:
  New interface for check_if_log_table()
sql/sql_table.cc:
  New interface for check_if_log_table()
  Moved some code in mysql_rm_table() under a common test.
  - Safe as temporary tables don't have statistics.
  - !is_temporary_table(table) test was moved out from drop_temporary_table() and merged with upper level code.
  - Added drop of normal and temporary tables in create_table_impl() if CREATE OR REPLACE was used.
  - Added reacquiring of table locks in mysql_create_table() and mysql_create_like_table()
  - In mysql_create_like_table(), restore table->open_strategy() if it was changed.
  - Re-test if table was a view after opening it.
sql/sql_table.h:
  New prototype for mysql_create_table_no_lock()
sql/sql_yacc.yy:
  Added syntax for CREATE OR REPLACE TABLE
  Reuse new code for CREATE OR REPLACE VIEW
sql/table.h:
  Added name for enum type
sql/table_cache.cc:
  More DBUG
2014-01-29 15:37:17 +02:00
Jan Lindström
d43afb8828 Merge MariaDB-10.0.7 revision 3961. 2014-01-25 11:02:49 +02:00
unknown
8cc6e90d74 MDEV-5509: Seconds_behind_master incorrect in parallel replication
The problem was a race between the SQL driver thread and the worker threads.
The SQL driver thread would set rli->last_master_timestamp to zero to
mark that it has caught up with the master, while the worker threads would
set it to the timestamp of the executed event. This can happen out-of-order
in parallel replication, causing the "caught up" status to be overwritten
and Seconds_Behind_Master to wrongly grow when the slave is idle.

To fix, introduce a separate flag rli->sql_thread_caught_up to mark that the
SQL driver thread is caught up. This avoids issues with worker threads
overwriting the SQL driver thread status. In parallel replication, we then
make SHOW SLAVE STATUS check in addition that all worker threads are idle
before showing Seconds_Behind_Master as 0 due to slave idle.
2014-01-08 11:00:44 +01:00
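
A minimal sketch of the Seconds_Behind_Master logic with the new flag (names
approximate):

  #include <cstdio>
  #include <ctime>

  struct rli_stub
  {
    std::time_t last_master_timestamp= 0;   // timestamp of last executed master event
    bool        sql_thread_caught_up= true; // SQL driver thread has read everything
    int         idle_workers= 0, total_workers= 0;
  };

  // Show 0 only when the driver thread is caught up and (for parallel
  // replication) all worker threads are idle; otherwise report the lag.
  static long seconds_behind_master(const rli_stub &rli, std::time_t now)
  {
    if (rli.sql_thread_caught_up && rli.idle_workers == rli.total_workers)
      return 0;
    return (long)(now - rli.last_master_timestamp);
  }

  int main()
  {
    rli_stub rli;
    std::printf("%ld\n", seconds_behind_master(rli, std::time(nullptr)));
  }
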
Murthy Narkedimilli
c92223e198 Updated/added copyright headers 2014-01-06 10:52:35 +05:30
Murthy Narkedimilli
496abd0814 Updated/added copyright headers 2014-01-06 10:52:35 +05:30
Michael Widenius
4e9a2d5469 Don't writing entries to slave log about binlog_checksum not existing on master if log_warnings is <=1.
This solves the issue of getting a lot of unnecessary errors logged on the slave when connecting to MySQL or an old MariaDB version.


sql/slave.cc:
  Don't write that binlog_checksum doesn't exist on the master if log_warnings <= 1
2014-01-05 15:21:58 +02:00