# mariadb/mysql-test/t/sp_sync.test

# This test should work in embedded server after mysqltest is fixed
-- source include/not_embedded.inc
--echo Tests of synchronization of stored procedure execution.
--source include/have_debug_sync.inc
# Save the initial number of concurrent sessions.
--source include/count_sessions.inc
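# (count_sessions.inc records the current session count;
# wait_until_count_sessions.inc at the end of this file waits until
# that count is reached again.)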
# Clean up resources used in this test case.
--disable_warnings
SET DEBUG_SYNC= 'RESET';
--enable_warnings
--echo #
--echo # Bug #30977 Concurrent statement using stored function and
--echo # DROP FUNCTION breaks SBR
--echo #
--echo # A stored routine could change after dispatch_command()
--echo # but before an MDL lock is taken. This must be noticed and the
--echo # sp cache flushed so the correct version can be loaded.
--echo #
connect (con2, localhost, root);
--echo # Connection default
connection default;
CREATE FUNCTION f1() RETURNS INT RETURN 1;
--echo # Get f1 cached
SELECT f1();
--echo # Then start executing it again...
SET DEBUG_SYNC= 'before_execute_sql_command SIGNAL before WAIT_FOR changed';
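# The sync point above makes the next statement in this connection emit
# the signal "before" and block until another connection signals
# "changed". ('now SIGNAL ...' / 'now WAIT_FOR ...' act immediately in
# the issuing session.)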
--echo # Sending:
--send SELECT f1()
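# --send dispatches the statement without waiting for its result; the
# result is collected later with --reap, after con2 has redefined f1.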
--echo # Connection 2
connection con2;
SET DEBUG_SYNC= 'now WAIT_FOR before';
--echo # ... but before f1 is locked, change it.
DROP FUNCTION f1;
CREATE FUNCTION f1() RETURNS INT RETURN 2;
SET DEBUG_SYNC= 'now SIGNAL changed';
--echo # Connection default
--echo # We should now get '2' and not '1'.
connection default;
--echo # Reaping: SELECT f1()
--reap
disconnect con2;
DROP FUNCTION f1;
SET DEBUG_SYNC= 'RESET';
--echo #
--echo # Field translation items must be cleared in case of back-offs
--echo # for queries that use Information Schema tables. Otherwise
--echo # memory allocated in fix_fields() for views may end up referring
--echo # to freed memory.
--echo #
--disable_warnings
DROP FUNCTION IF EXISTS f1;
--enable_warnings
connect (con2, localhost, root);
connect (con3, localhost, root);
--echo # Connection default
connection default;
CREATE FUNCTION f1() RETURNS INT RETURN 0;
--echo # Connection con2
connection con2;
SET DEBUG_SYNC= 'after_wait_locked_pname SIGNAL locked WAIT_FOR issued';
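# The ALTER below stops at "after_wait_locked_pname" while still holding
# the exclusive metadata lock on f1: it signals "locked" and keeps the
# lock until "issued" arrives.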
--echo # con2 will now have an x-lock on f1
--echo # Sending:
--send ALTER FUNCTION f1 COMMENT 'comment'
--echo # Connection default
connection default;
SET DEBUG_SYNC= 'now WAIT_FOR locked';
--disable_result_log
--echo # This query will block due to the x-lock on f1 and back off
--send SHOW OPEN TABLES WHERE f1()=0
--echo # Connection con3
connection con3;
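# wait_condition.inc polls the SELECT assigned to $wait_condition until
# it returns 1 (or a timeout expires), here: until the SHOW statement
# shows up in the processlist waiting for the metadata lock on f1.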
let $wait_condition= SELECT COUNT(*)= 1 FROM information_schema.processlist
WHERE state= 'Waiting for stored function metadata lock'
AND info='SHOW OPEN TABLES WHERE f1()=0';
--source include/wait_condition.inc
--echo # Check that the IS query is blocked before releasing the x-lock
SET DEBUG_SYNC= 'now SIGNAL issued';
--echo # Connection default
connection default;
--echo # Reaping: ALTER FUNCTION f1 COMMENT 'comment'
--reap
--enable_result_log
DROP FUNCTION f1;
SET DEBUG_SYNC= 'RESET';
disconnect con2;
disconnect con3;
--echo #
--echo # Bug #48246 assert in close_thread_table
--echo #
CREATE TABLE t0 (b INTEGER);
CREATE TABLE t1 (a INTEGER);
CREATE FUNCTION f1(b INTEGER) RETURNS INTEGER RETURN 1;
CREATE PROCEDURE p1() SELECT COUNT(f1(a)) FROM t1, t0;
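# Executing p1 builds a prelocking list covering t0, t1 and f1; the
# variable saying that such a list exists is what Bug #48246 failed to
# reset on back-off.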
INSERT INTO t0 VALUES(1);
INSERT INTO t1 VALUES(1), (2);
--echo # Connection 2
connect (con2, localhost, root);
CALL p1();
SET DEBUG_SYNC= 'after_open_table_mdl_shared SIGNAL locked_t1 WAIT_FOR go_for_t0';
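# With this sync point active, the next CALL pauses after acquiring a
# shared MDL (on t1, as the signal names indicate), emits "locked_t1"
# and waits for "go_for_t0" before trying to open t0.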
--echo # This call used to cause an assertion. An MDL deadlock with the
--echo # upcoming LOCK TABLES statement will cause back-off and retry.
--echo # A variable indicating whether a prelocking list exists used to
--echo # not be reset properly, eventually causing an assert.
--echo # Sending:
--send CALL p1()
--echo # Connection default
connection default;
SET DEBUG_SYNC= 'now WAIT_FOR locked_t1';
--echo # Issue a LOCK TABLES statement which will enter an MDL deadlock
--echo # with the CALL statement and as a result cause it to perform
--echo # a back-off and retry.
SET DEBUG_SYNC= 'mdl_acquire_lock_wait SIGNAL go_for_t0';
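# "mdl_acquire_lock_wait" fires when LOCK TABLES starts waiting for the
# write lock on t1 (held as shared by the suspended CALL). The signal
# lets p1 resume and try to open t0, which LOCK TABLES already claimed;
# the resulting MDL deadlock is resolved by the CALL backing off and
# retrying, exercising the code path that used to assert.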
LOCK TABLES t0 WRITE, t1 WRITE;
UNLOCK TABLES;
--echo # Connection 2
connection con2;
--echo # Reaping: CALL p1()
--reap
--echo # Connection default
connection default;
disconnect con2;
DROP PROCEDURE p1;
DROP FUNCTION f1;
DROP TABLES t0, t1;
SET DEBUG_SYNC= 'RESET';
# Check that all connections opened by test cases in this file are really
# gone so execution of other tests won't be affected by their presence.
--source include/wait_until_count_sessions.inc