mariadb/mysql-test/t/flush_block_commit_notembedded.test
Konstantin Osipov 0b39c189ba Backport of revno ## 2617.31.1, 2617.31.3, 2617.31.4, 2617.31.5,
2617.31.12, 2617.31.15, 2617.31.15, 2617.31.16, 2617.43.1
- initial changeset that introduced the fix for 
Bug#989 and follow up fixes for all test suite failures
introduced in the initial changeset. 
------------------------------------------------------------
revno: 2617.31.1
committer: Davi Arnaut <Davi.Arnaut@Sun.COM>
branch nick: 4284-6.0
timestamp: Fri 2009-03-06 19:17:00 -0300
message:
Bug#989: If DROP TABLE while there's an active transaction, wrong binlog order
WL#4284: Transactional DDL locking

Currently the MySQL server does not keep metadata locks on
schema objects for the duration of a transaction, thus failing
to guarantee the integrity of the schema objects being used
during the transaction and to protect them from concurrent
DDL operations. This also poses a problem for replication as
a DDL operation might be replicated even though there are
active transactions using the object being modified.
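
As a rough illustration of the Bug#989 scenario (session layout and
table name are only illustrative, not taken from the original report),
a concurrent DROP TABLE could end up in the binary log ahead of a
transaction that had already modified the table:

    -- session 1
    BEGIN;
    INSERT INTO t1 VALUES (1);
    -- session 2 (concurrently)
    DROP TABLE t1;   -- binlogged immediately
    -- session 1
    COMMIT;          -- binlogged after the DROP: wrong order on the slave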

The solution is to defer the release of metadata locks until
an active transaction is either committed or rolled back. This
prevents other statements from modifying the table for the
entire duration of the transaction. This provides commitment
ordering for guaranteeing serializability across multiple
transactions.
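
A minimal sketch of the intended behaviour after this change (same
illustrative sessions and table name as above): the shared metadata
lock taken by the first statement now outlives the statement, so the
DDL has to wait.

    -- session 1
    BEGIN;
    INSERT INTO t1 VALUES (1);   -- acquires a shared metadata lock on t1
    -- session 2 (concurrently)
    DROP TABLE t1;               -- needs an exclusive lock: blocks
    -- session 1
    COMMIT;                      -- releases the metadata lock; only now does
                                 -- the DROP run and get binlogged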

- Incompatible change:

If MySQL's metadata locking system encounters a lock conflict,
the usual scheme is to use the try-and-back-off technique to
avoid deadlocks -- this scheme consists of releasing all locks
and trying to acquire them all in one go.

But in a transactional context this algorithm can't be utilized
as it's not possible to release locks acquired during the course
of the transaction without breaking the transaction commitments.
To avoid deadlocks in this case, an ER_LOCK_DEADLOCK error will be
returned if a lock conflict is encountered during a transaction.

Let's consider an example:

A transaction has two statements that modify table t1, then table
t2, and then commits. The first statement of the transaction will
acquire a shared metadata lock on table t1, and it will be kept
until COMMIT to ensure serializability.

At the moment when the second statement attempts to acquire a
shared metadata lock on t2, a concurrent ALTER or DROP statement
might have locked t2 exclusively. The prescription of the current
locking protocol is that the acquirer of the shared lock backs off
-- gives up all of its current locks and retries. This implies that
the entire multi-statement transaction has to be rolled back.
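
Expressed as SQL (the statement text is hypothetical, the flow follows
the example above):

    BEGIN;
    UPDATE t1 SET a = a + 1;   -- shared metadata lock on t1, kept until COMMIT
    -- another session now runs, e.g.: ALTER TABLE t2 ADD COLUMN b INT;
    UPDATE t2 SET a = a + 1;   -- conflicts with the exclusive lock on t2 and
                               -- fails with ER_LOCK_DEADLOCK instead of
                               -- transparently backing off and retrying
    ROLLBACK;                  -- the application has to retry the transaction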

- Incompatible change:

FLUSH commands such as FLUSH PRIVILEGES and FLUSH TABLES WITH READ
LOCK won't cause locked tables to be implicitly unlocked anymore.
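
A concrete (hypothetical) sequence affected by this change:

    LOCK TABLE t1 WRITE;
    FLUSH PRIVILEGES;   -- used to implicitly release the write lock on t1;
                        -- after this change the table is no longer
                        -- implicitly unlocked
    UNLOCK TABLES;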
2009-12-05 02:02:48 +03:00

# Let's see if FLUSH TABLES WITH READ LOCK blocks COMMIT of existing
# transactions.
# We verify that we did not introduce a deadlock.
# This is intended to mimic how mysqldump and innobackup work.
--source include/have_log_bin.inc
# And it requires InnoDB
--source include/have_innodb.inc
--echo # Save the initial number of concurrent sessions
--source include/count_sessions.inc
--echo # Establish connection con1 (user=root)
connect (con1,localhost,root,,);
--echo # Establish connection con2 (user=root)
connect (con2,localhost,root,,);
# FLUSH TABLES WITH READ LOCK should block writes to binlog too
--echo # Switch to connection con1
connection con1;
CREATE TABLE t1 (a INT) ENGINE=innodb;
RESET MASTER;
SET AUTOCOMMIT=0;
SELECT 1;
--echo # Switch to connection con2
connection con2;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
--echo # Switch to connection con1
connection con1;
send INSERT INTO t1 VALUES (1);
--echo # Switch to connection con2
connection con2;
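# Give the INSERT above time to block on the global read lock, then verify
# that the binary log position has not advanced.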
sleep 1;
SHOW MASTER STATUS;
UNLOCK TABLES;
--echo # Switch to connection con1
connection con1;
reap;
DROP TABLE t1;
SET AUTOCOMMIT=1;
# The global read lock (GRL) blocks new transactions
create table t1 (a int) engine=innodb;
connection con1;
flush tables with read lock;
connection con2;
begin;
--send insert into t1 values (1);
connection con1;
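# Wait until the INSERT is blocked on the global read lock before releasing it.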
let $wait_condition=
  select count(*) = 1 from information_schema.processlist
  where state = "Waiting for release of readlock" and
        info = "insert into t1 values (1)";
--source include/wait_condition.inc
unlock tables;
connection con2;
--reap
commit;
drop table t1;
--echo # Switch to connection default and close connections con1 and con2
connection default;
disconnect con1;
disconnect con2;
--echo # Wait till all disconnects are completed
--source include/wait_until_count_sessions.inc