mariadb/mysql-test/main/status.test
Monty b9f5793176 MDEV-9101 Limit size of created disk temporary files and tables
Two new variables added:
- max_tmp_space_usage: Limits the temporary space allowance per user
- max_total_tmp_space_usage: Limits the temporary space allowance for
  all users.

New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.PROCESSLIST: TMP_SPACE_USED
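
As a rough usage sketch (whether each variable is settable at GLOBAL or
SESSION scope is an assumption here; the names are from this commit, the
values are only examples):

  SET GLOBAL max_total_tmp_space_usage= 10*1024*1024*1024;
  SET GLOBAL max_tmp_space_usage= 1024*1024*1024;
  SHOW STATUS LIKE 'tmp_space_used';
  SHOW GLOBAL STATUS LIKE 'max_tmp_space_used';
  SELECT ID, USER, TMP_SPACE_USED FROM information_schema.PROCESSLIST;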

The temporary space is counted for:
- All SQL-level temporary files. This includes files for filesort,
  transaction temporary space, ANALYZE, the binlog_stmt_cache, etc.
  It does not include engine-internal temporary files used for repair,
  ALTER TABLE, index pre-sorting, etc.
- All internal on-disk temporary tables created as part of resolving a
  SELECT, multi-table UPDATE, etc.

Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
  will not cause an error even if the temporary space limit is exceeded.
  This is to avoid giving errors on commit. This means that a user
  can temporarily go over the limit by up to binlog_stmt_cache_size.

Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage
  together with binary logging and with non-transactional tables.
  If the binary log entry for the query is bigger than
  binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage
  when flushing the entry to disk, the query will abort and the
  binary log will not contain the last changes to the table.
  This will also stop the slave!
  This is also true for all Aria tables as Aria cannot do rollback
  (except in case of crashes)!
  One way to avoid this is to use @@binlog_format=statement for
  queries that update a lot of rows.
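
  A sketch of that workaround (big_t is an illustrative table name, not
  something from this commit):

    SET SESSION binlog_format= STATEMENT;
    UPDATE big_t SET b= b+1;           -- one statement-based event, whose
                                       -- size does not grow with the row count
    SET SESSION binlog_format= MIXED;  -- or whatever format was in use before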

Implementation:
- All writes to temporary files or internal temporary tables that
  increase the file size are routed through temp_file_size_cb_func(),
  which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE;
  monitoring of internal temporary tables is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
  MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
  that we track the file usage and give an error if the limit is
  breached. This distinction is used to not give an error on commit when
  the binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
  This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors use EE_LOCAL_TMP_SPACE_FULL and
  handler errors use HA_ERR_LOCAL_TMP_SPACE_FULL.
  This is needed until we move general errors to their own error space
  so that they cannot conflict with system error numbers.
- The return value of my_chsize() and mysql_file_chsize() has changed
  so that -1 is returned in the case where my_chsize() could not decrease
  the file size (very unlikely; it will not happen on modern systems).
  All calls to _chsize() are updated to check for > 0 as the error
  condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server shutdown we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
  max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
  the tmp space quota errors (see the sketch after this list).
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
  increments to avoid some calls to update_tmp_file_size().
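
  A sketch of that "disable the quotas" precaution (assuming both
  variables are dynamic):

    SET GLOBAL max_tmp_space_usage= 0;
    SET GLOBAL max_total_tmp_space_usage= 0;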

Other things:
- Ensure that all handler errors are registered.  Before, some engine
  errors could be printed as "Unknown error".
- Fixed a bug in filesort() that caused an assert if there was an error
  when writing to the temporary file.
- Fixed compute_window_func() to take write errors into account.
- In case of parallel replication, rpl_group_info::cleanup_context()
  could call trans_rollback() with thd->error set, which would cause
  an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed a bug in subselect3.inc which caused the following test to use
  heap tables with a low value for max_heap_table_size.
- Fixed a bug in sql_expression_cache where it did not overflow
  the heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
2024-05-27 12:39:04 +02:00


# This test requires that --log-output includes 'table', and the general
# log is on
# embedded server causes different statistics
-- source include/not_embedded.inc
# Save the initial number of concurrent sessions
--source include/count_sessions.inc
# Disable concurrent inserts to avoid sporadic test failures as it might
# affect the value of variables used throughout the test case.
set @old_concurrent_insert= @@global.concurrent_insert;
set @@global.concurrent_insert= 0;
# Disable logging to table, since this will also cause table locking and unlocking, which will
# show up in SHOW STATUS and may cause sporadic failures
SET @old_log_output = @@global.log_output;
SET GLOBAL LOG_OUTPUT = 'FILE';
# PS causes different statistics
--disable_ps_protocol
connect (con1,localhost,root,,);
connect (con2,localhost,root,,);
connection default;
flush global status;
show status like 'Table_lock%';
select * from information_schema.session_status where variable_name like 'Table_lock%';
set sql_log_bin=0;
set @old_general_log = @@global.general_log;
set global general_log = 'OFF';
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1(n int) engine=myisam;
insert into t1 values(1);
select get_lock('mysqltest_lock', 100);
connection con2;
--echo # Sending:
--send update t1 set n = get_lock('mysqltest_lock', 100)
connection con1;
--echo # Wait for the first UPDATE to get blocked.
let $wait_condition= select count(*) from INFORMATION_SCHEMA.PROCESSLIST
where STATE = "User lock" and
INFO = "update t1 set n = get_lock('mysqltest_lock', 100)";
--source include/wait_condition.inc
let $ID= `select connection_id()`;
--echo # Sending:
--send update t1 set n = 3
connection default;
--echo # wait for the second UPDATE to get blocked
let $wait_condition= select 1 from INFORMATION_SCHEMA.PROCESSLIST
where ID = $ID and STATE = "Waiting for table level lock";
--source include/wait_condition.inc
select release_lock('mysqltest_lock');
connection con2;
--echo # Reaping first UPDATE
--reap
select release_lock('mysqltest_lock');
connection con1;
--echo # Reaping second UPDATE
reap;
show status like 'Table_locks_waited';
connection default;
drop table t1;
set global general_log = @old_general_log;
disconnect con2;
disconnect con1;
# End of 4.1 tests
#
# last_query_cost
#
select 1;
show status like 'last_query_cost';
create table t1 (a int);
insert into t1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
insert into t1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
insert into t1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
insert into t1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
insert into t1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
select * from t1 where a=6;
show status like 'last_query_cost';
# Ensure the value doesn't change on a second status call
show status like 'last_query_cost';
select 1;
show status like 'last_query_cost';
drop table t1;
#
# Test for Bug#15933 max_used_connections is wrong after FLUSH STATUS
# if connections are cached
#
#
# The first suggested fix from the bug report was chosen
# (see http://bugs.mysql.com/bug.php?id=15933):
#
# a) On flushing the status, set max_used_connections to
# threads_connected, not to 0.
#
# b) Check if it is necessary to increment max_used_connections when
# taking a thread from the cache as well as when creating new threads
#
# Wait for at most $disconnect_timeout seconds for disconnects to finish.
let $disconnect_timeout = 10;
# Wait for any previous disconnects to finish.
FLUSH GLOBAL STATUS;
--disable_query_log
--disable_result_log
eval SET @wait_left = $disconnect_timeout;
let $max_used_connections = `SHOW STATUS LIKE 'max_used_connections'`;
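# The backtick result is "Max_used_connections<TAB>value"; SUBSTRING(..., 21)
# skips the 20-character variable name, leaving the tab and the value, and
# the +0 converts what remains to a number.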
eval SET @max_used_connections = SUBSTRING('$max_used_connections', 21)+0;
let $wait_more = `SELECT @max_used_connections != 1 && @wait_left > 0`;
while ($wait_more)
{
sleep 1;
FLUSH GLOBAL STATUS;
SET @wait_left = @wait_left - 1;
let $max_used_connections = `SHOW STATUS LIKE 'max_used_connections'`;
eval SET @max_used_connections = SUBSTRING('$max_used_connections', 21)+0;
let $wait_more = `SELECT @max_used_connections != 1 && @wait_left > 0`;
}
--enable_query_log
--enable_result_log
# Prerequisite.
SHOW STATUS LIKE 'max_used_connections';
SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections';
# Save original setting.
SET @save_thread_cache_size=@@thread_cache_size;
SET GLOBAL thread_cache_size=3;
connect (con1,localhost,root,,);
connect (con2,localhost,root,,);
connection con1;
disconnect con2;
# Check that max_used_connections still reflects maximum value.
SHOW STATUS LIKE 'max_used_connections';
SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections';
# Check that after flush max_used_connections equals to current number
# of connections. First wait for previous disconnect to finish.
FLUSH GLOBAL STATUS;
--disable_query_log
--disable_result_log
eval SET @wait_left = $disconnect_timeout;
let $max_used_connections = `SHOW STATUS LIKE 'max_used_connections'`;
eval SET @max_used_connections = SUBSTRING('$max_used_connections', 21)+0;
let $wait_more = `SELECT @max_used_connections != 2 && @wait_left > 0`;
while ($wait_more)
{
sleep 1;
FLUSH GLOBAL STATUS;
SET @wait_left = @wait_left - 1;
let $max_used_connections = `SHOW STATUS LIKE 'max_used_connections'`;
eval SET @max_used_connections = SUBSTRING('$max_used_connections', 21)+0;
let $wait_more = `SELECT @max_used_connections != 2 && @wait_left > 0`;
}
--enable_query_log
--enable_result_log
# Check that we don't count disconnected thread any longer.
SHOW STATUS LIKE 'max_used_connections';
SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections';
# Check that max_used_connections is updated when cached thread is
# reused...
connect (con2,localhost,root,,);
SHOW STATUS LIKE 'max_used_connections';
SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections';
# ...and when new thread is created.
connect (con3,localhost,root,,);
SHOW STATUS LIKE 'max_used_connections';
SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections';
# Restore original setting.
connection default;
SET GLOBAL thread_cache_size=@save_thread_cache_size;
disconnect con3;
disconnect con2;
disconnect con1;
#
# Bug#30377 EXPLAIN loses last_query_cost when used with UNION
#
CREATE TABLE t1 ( a INT );
INSERT INTO t1 VALUES (1), (2);
SELECT a FROM t1 LIMIT 1;
SHOW SESSION STATUS LIKE 'Last_query_cost';
EXPLAIN SELECT a FROM t1;
SHOW SESSION STATUS LIKE 'Last_query_cost';
SELECT a FROM t1 UNION SELECT a FROM t1 ORDER BY a;
SHOW SESSION STATUS LIKE 'Last_query_cost';
EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t1 ORDER BY a;
SHOW SESSION STATUS LIKE 'Last_query_cost';
SELECT a IN (SELECT a FROM t1) FROM t1 LIMIT 1;
SHOW SESSION STATUS LIKE 'Last_query_cost';
SELECT (SELECT a FROM t1 LIMIT 1) x FROM t1 LIMIT 1;
SHOW SESSION STATUS LIKE 'Last_query_cost';
SELECT * FROM t1 a, t1 b LIMIT 1;
SHOW SESSION STATUS LIKE 'Last_query_cost';
DROP TABLE t1;
# End of 5.0 tests
#
# Ensure that SHOW STATUS only changes global status variables
#
connect (con1,localhost,root,,);
let $rnd_next = `show global status like 'handler_read_rnd_next'`;
let $tmp_table = `show global status like 'Created_tmp_tables'`;
show status like 'com_show_status';
show status like 'hand%write%';
show status like '%\_tmp%';
show status like 'hand%write%';
show status like '%\_tmp%';
show status like 'com_show_status';
let $rnd_next2 = `show global status like 'handler_read_rnd_next'`;
let $tmp_table2 = `show global status like 'Created_tmp_tables'`;
--disable_query_log
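# Each variable above holds "Variable_name<TAB>value"; substring_index(..., 0x9, -1)
# keeps everything after the last tab (0x9), i.e. the numeric value, so the
# differences show how much the SHOW STATUS statements themselves changed the
# global counters.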
eval select substring_index('$rnd_next2',0x9,-1)-substring_index('$rnd_next',0x9,-1) as rnd_diff, substring_index('$tmp_table2',0x9,-1)-substring_index('$tmp_table',0x9,-1) as tmp_table_diff;
--enable_query_log
disconnect con1;
connection default;
#
# Bug#30252 Com_create_function is not incremented.
#
flush status;
show status like 'Com%function';
DELIMITER //;
create function f1 (x INTEGER) returns integer
begin
declare ret integer;
set ret = x * 10;
return ret;
end //
DELIMITER ;//
drop function f1;
show status like 'Com%function';
#
# Bug#37908 Skipped access right check caused server crash.
#
connect (root, localhost, root,,test);
connection root;
--disable_warnings
create database db37908;
--enable_warnings
create table db37908.t1(f1 int);
insert into db37908.t1 values(1);
create user mysqltest_1@localhost;
grant usage,execute on test.* to mysqltest_1@localhost;
delimiter |;
create procedure proc37908() begin select 1; end |
create function func37908() returns int sql security invoker
return (select * from db37908.t1 limit 1)|
delimiter ;|
connect (user1,localhost,mysqltest_1,,test);
connection user1;
--error ER_TABLEACCESS_DENIED_ERROR
select * from db37908.t1;
--error ER_TABLEACCESS_DENIED_ERROR
show status where variable_name ='uptime' and 2 in (select * from db37908.t1);
--error ER_TABLEACCESS_DENIED_ERROR
show procedure status where name ='proc37908' and 1 in (select f1 from db37908.t1);
--error ER_TABLEACCESS_DENIED_ERROR
show function status where name ='func37908' and 1 in (select func37908());
connection default;
disconnect user1;
disconnect root;
drop database db37908;
drop procedure proc37908;
drop function func37908;
REVOKE ALL PRIVILEGES, GRANT OPTION FROM mysqltest_1@localhost;
DROP USER mysqltest_1@localhost;
# Wait until all non-default sessions are disconnected
let $wait_condition = SELECT COUNT(*) = 1 FROM information_schema.processlist;
--source include/wait_condition.inc
#
# Bug#41131 "Questions" fails to increment - ignores statements instead stored procs
#
connect (con1,localhost,root,,);
connection con1;
--disable_warnings
DROP PROCEDURE IF EXISTS p1;
DROP FUNCTION IF EXISTS f1;
--enable_warnings
DELIMITER $$;
CREATE FUNCTION f1() RETURNS INTEGER
BEGIN
DECLARE foo INTEGER;
DECLARE bar INTEGER;
SET foo=1;
SET bar=2;
RETURN foo;
END $$
CREATE PROCEDURE p1()
BEGIN
SELECT 1;
END $$
DELIMITER ;$$
let $org_queries= `SHOW STATUS LIKE 'Queries'`;
SELECT f1();
CALL p1();
let $new_queries= `SHOW STATUS LIKE 'Queries'`;
--disable_query_log
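# "Queries" is 7 characters, so SUBSTRING(..., 9) skips "Queries<TAB>" and
# leaves only the counter value; the difference is the number of statements
# counted between the two snapshots.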
let $diff= `SELECT SUBSTRING('$new_queries',9)-SUBSTRING('$org_queries',9)`;
--enable_query_log
eval SELECT $diff;
disconnect con1;
connection default;
DROP PROCEDURE p1;
DROP FUNCTION f1;
# End of 5.1 tests
#
# Test of internal temporary table status variables
#
flush status;
create table t1 (a int not null auto_increment primary key, g int, b blob);
insert into t1 (g,b) values (1,'a'), (2, 'b'), (3, 'b'), (1, 'c');
select * from t1;
select b, count(*) from t1 group by b;
select g, count(*) from t1 group by g;
show status like 'Row%';
show status like 'Handler%';
show status like '%_tmp%';
drop table t1;
#
# Test of handler status counts
#
CREATE TABLE t1 (i int(11) DEFAULT NULL, KEY i (i) ) ENGINE=MyISAM;
insert into t1 values (1),(2),(3),(4),(5);
flush status;
select * from t1 where i=5 union select * from t1 where i=5;
show status like "handler%";
drop table t1;
# End of 5.3 tests
--echo #
--echo # MDEV-11153 - Introduce status variables for table cache monitoring and
--echo # tuning
--echo #
SET @old_table_open_cache= @@table_open_cache;
SET @@global.table_open_cache=10;
FLUSH TABLES;
FLUSH STATUS;
SHOW STATUS LIKE 'Table_open_cache%';
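# Create and touch more tables than fit in the table open cache so that
# cache hits, misses and overflows are all exercised.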
disable_query_log;
let $i= `SELECT @@table_open_cache + 5`;
while ($i)
{
eval CREATE TABLE t$i(a INT);
eval DELETE FROM t$i;
eval DELETE FROM t$i;
eval DELETE FROM t$i;
dec $i;
}
enable_query_log;
SHOW STATUS LIKE 'Table_open_cache%';
FLUSH TABLES;
FLUSH STATUS;
disable_query_log;
let $i= `SELECT @@table_open_cache + 5`;
while ($i)
{
eval DROP TABLE t$i;
dec $i;
}
enable_query_log;
SET @@global.table_open_cache= @old_table_open_cache;
--echo #
--echo # MDEV-14505 - Threads_running becomes scalability bottleneck
--echo #
--echo # Session status for Threads_running is currently always 1.
SHOW STATUS LIKE 'Threads_running';
SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME='THREADS_RUNNING';
FLUSH STATUS;
SHOW STATUS LIKE 'Threads_running';
SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME='THREADS_RUNNING';
--echo #
--echo # MDEV-15915 Add Feature_json status variable.
--echo #
SHOW STATUS LIKE 'Feature_json';
select json_valid('123');
SHOW STATUS LIKE 'Feature_json';
select * from json_table ('{}', '$' COLUMNS(x FOR ORDINALITY)) a;
SHOW STATUS LIKE 'Feature_json';
# Restore global concurrent_insert value. Keep in the end of the test file.
--connection default
set @@global.concurrent_insert= @old_concurrent_insert;
SET GLOBAL log_output = @old_log_output;
# Wait till we have reached the initial number of concurrent sessions
--source include/wait_until_count_sessions.inc
--echo #
--echo # MDEV-30543 New status variable: Max_used_connections_time
--echo #
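# Max_used_connections_time should only move when a new maximum of concurrent
# connections is reached: disconnecting con2 must not change it, while adding
# con3/con4 sets a new maximum and therefore a later timestamp.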
FLUSH GLOBAL STATUS;
connect (con1,localhost,root,,);
--sleep 1
connect (con2,localhost,root,,);
--sleep 1
let $time_1=`SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections_time'`;
--sleep 1
connection con1;
disconnect con2;
--sleep 1
let $time_2=`SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections_time'`;
--replace_regex /[0-9]*-[0-9]*-[0-9]* [0-9]*:[0-9]*:[0-9]*/DTVALUE/
eval SELECT '$time_1' = '$time_2' AS expect_1;
--sleep 1
connect (con3,localhost,root,,);
--sleep 1
connect (con4,localhost,root,,);
--sleep 1
let $time_3=`SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'max_used_connections_time'`;
--replace_regex /[0-9]*-[0-9]*-[0-9]* [0-9]*:[0-9]*:[0-9]*/DTVALUE/
eval SELECT '$time_1' < '$time_3' as expect_1;
disconnect con4;
disconnect con3;
disconnect con1;
connection default;
--echo # Test Uptime_since_flush_status
--echo # All results below should be 1
flush global status;
flush session status;
--sleep 1
let $global_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
let $local_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
--sleep 2
--evalp select $global_flush_status >= $local_flush_status as "1"
flush session status;
let $new_global_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
let $new_local_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
--evalp select $new_global_flush_status >= $global_flush_status as "1"
--evalp select $new_local_flush_status < $global_flush_status as "1"
--sleep 2
flush global status;
let $local_flush_status=$new_local_flush_status;
let $new_global_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
let $new_local_flush_status= `SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE VARIABLE_NAME LIKE 'Uptime_since_flush_status'`;
--evalp select $new_local_flush_status >= $local_flush_status as "1"
--evalp select $new_global_flush_status < $new_local_flush_status as "1"
#
# Test OLD_FLUSH_STATUS
#
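# A plain FLUSH STATUS now only resets the session counters; with
# old_mode=OLD_FLUSH_STATUS it is expected to also reset the global counters
# (the old behaviour), which the SELECTs before and after the flush verify.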
create or replace table t1 (a int, key(a)) engine=MyISAM;
insert into t1 values (1),(2);
--connect con1,localhost,root
set old_mode=OLD_FLUSH_STATUS;
insert into t1 values (3),(4);
drop table t1;
select t1.variable_name, t1.variable_value as global_value, t2.variable_value as session_value
from information_schema.global_status t1 join information_schema.session_status t2
on t1.variable_name = t2.variable_name
where t1.variable_name in ('Key_writes','Com_insert');
flush status;
select t1.variable_name, t1.variable_value as global_value, t2.variable_value as session_value
from information_schema.global_status t1 join information_schema.session_status t2
on t1.variable_name = t2.variable_name
where t1.variable_name in ('Key_writes','Com_insert');
disconnect con1;
connection default;
--echo # end of 11.5 tests