mariadb/mysql-test/main/join_outer_innodb.result
SET @save_stats_persistent=@@GLOBAL.innodb_stats_persistent;
SET GLOBAL innodb_stats_persistent=0;
CREATE TABLE t1 (id int(11) NOT NULL PRIMARY KEY, name varchar(20),
INDEX (name)) ENGINE=InnoDB;
CREATE TABLE t2 (id int(11) NOT NULL PRIMARY KEY, fkey int(11),
FOREIGN KEY (fkey) REFERENCES t1(id)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1,'A1'),(2,'A2'),(3,'B');
INSERT INTO t2 VALUES (1,1),(2,2),(3,2),(4,3),(5,3);
EXPLAIN
SELECT COUNT(*) FROM t2 LEFT JOIN t1 ON t2.fkey = t1.id
WHERE t1.name LIKE 'A%';
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t1 range PRIMARY,name name 23 NULL 2 Using where; Using index
1 SIMPLE t2 ref fkey fkey 5 test.t1.id 1 Using index
EXPLAIN
SELECT COUNT(*) FROM t2 LEFT JOIN t1 ON t2.fkey = t1.id
WHERE t1.name LIKE 'A%' OR FALSE;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t1 range PRIMARY,name name 23 NULL 2 Using where; Using index
1 SIMPLE t2 ref fkey fkey 5 test.t1.id 1 Using index
DROP TABLE t1,t2;
#
# BUG#58456: Assertion 0 in QUICK_INDEX_MERGE_SELECT::need_sorted_output
# in opt_range.h
#
CREATE TABLE t1 (
col_int INT,
col_int_key INT,
pk INT NOT NULL,
PRIMARY KEY (pk),
KEY col_int_key (col_int_key)
) ENGINE=InnoDB;
INSERT INTO t1 VALUES (NULL,1,1), (6,2,2), (5,3,3), (NULL,4,4);
INSERT INTO t1 VALUES (1,NULL,6), (8,5,7), (NULL,8,8), (8,NULL,5);
CREATE TABLE t2 (
pk INT PRIMARY KEY
) ENGINE=InnoDB;
EXPLAIN SELECT t1.pk
FROM t2 LEFT JOIN t1 ON t2.pk = t1.col_int
WHERE t1.col_int_key BETWEEN 5 AND 6
AND t1.pk IS NULL OR t1.pk IN (5)
ORDER BY pk;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
SELECT t1.pk
FROM t2 LEFT JOIN t1 ON t2.pk = t1.col_int
WHERE t1.col_int_key BETWEEN 5 AND 6
AND t1.pk IS NULL OR t1.pk IN (5)
ORDER BY pk;
pk
DROP TABLE t1,t2;
# End BUG#58456
CREATE TABLE t1(a int, b int, KEY (a), PRIMARY KEY (b)) ENGINE=InnoDB;
CREATE TABLE t2 (b int, PRIMARY KEY (b));
INSERT INTO t2 VALUES (4),(9);
SELECT STRAIGHT_JOIN t1.a FROM t1 RIGHT JOIN t2 ON t1.b = t2.b
WHERE (t1.b NOT BETWEEN 1 AND 7 OR t1.a IS NULL AND t1.b = t2.b) AND t2.b = 4
GROUP BY 1;
a
DROP TABLE t1,t2;
#
# Bug #59487: WRONG RESULT WITH STRAIGHT_JOIN AND RIGHT JOIN
#
CREATE TABLE t1 (
pk int(11) NOT NULL,
col_varchar_10_latin1_key varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t1 VALUES (1,'1');
CREATE TABLE t2 (
pk int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t2 VALUES (1);
CREATE TABLE t3 (
pk int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t3 VALUES (1);
CREATE TABLE t4 (
pk int(11) NOT NULL,
col_int int(11) DEFAULT NULL,
col_int_key int(11) DEFAULT NULL,
col_varchar_10_latin1_key varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t4 VALUES (1,1,1,'1');
CREATE TABLE t5 (
col_int int(11) DEFAULT NULL,
col_varchar_10_utf8_key varchar(10) CHARACTER SET utf8 DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t5 VALUES (1,'1');
CREATE TABLE t6 (
col_int_key int(11) DEFAULT NULL,
col_varchar_10_latin1_key varchar(10) DEFAULT NULL,
pk int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t6 VALUES (1,'1',1);
SELECT STRAIGHT_JOIN t6a.pk, t2.pk
FROM t6 AS t6a
LEFT JOIN
(
t2
RIGHT JOIN
(
(t1 LEFT JOIN (t4 JOIN t3 ON t4.col_int) ON t4.col_int_key = t1.pk)
LEFT JOIN
(t5 JOIN t6 AS t6b
ON t5.col_varchar_10_utf8_key = t6b.col_varchar_10_latin1_key)
ON t1.pk = t5.col_int
)
ON t4.col_varchar_10_latin1_key = t1.col_varchar_10_latin1_key
AND t5.col_varchar_10_utf8_key = 0
)
ON t6a.pk IS TRUE
WHERE t6b.col_int_key IS TRUE;
pk pk
1 NULL
EXPLAIN SELECT STRAIGHT_JOIN t6a.pk, t2.pk
FROM t6 AS t6a
LEFT JOIN
(
t2
RIGHT JOIN
(
(t1 LEFT JOIN (t4 JOIN t3 ON t4.col_int) ON t4.col_int_key = t1.pk)
LEFT JOIN
(t5 JOIN t6 AS t6b
ON t5.col_varchar_10_utf8_key = t6b.col_varchar_10_latin1_key)
ON t1.pk = t5.col_int
)
ON t4.col_varchar_10_latin1_key = t1.col_varchar_10_latin1_key
AND t5.col_varchar_10_utf8_key = 0
)
ON t6a.pk IS TRUE
WHERE t6b.col_int_key IS TRUE;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t6a ALL NULL NULL NULL NULL 1 Using where
1 SIMPLE t1 ALL NULL NULL NULL NULL 1 Using join buffer (flat, BNL join)
1 SIMPLE t4 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t3 ALL NULL NULL NULL NULL 1 Using join buffer (incremental, BNL join)
1 SIMPLE t5 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t6b ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t2 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
SELECT t6a.pk, t2.pk
FROM t6 AS t6a
LEFT JOIN
(
t2
RIGHT JOIN
(
(t1 LEFT JOIN (t4 JOIN t3 ON t4.col_int) ON t4.col_int_key = t1.pk)
LEFT JOIN
(t5 JOIN t6 AS t6b
ON t5.col_varchar_10_utf8_key = t6b.col_varchar_10_latin1_key)
ON t1.pk = t5.col_int
)
ON t4.col_varchar_10_latin1_key = t1.col_varchar_10_latin1_key
AND t5.col_varchar_10_utf8_key = 0
)
ON t6a.pk IS TRUE
WHERE t6b.col_int_key IS TRUE;
pk pk
1 NULL
EXPLAIN SELECT t6a.pk, t2.pk
FROM t6 AS t6a
LEFT JOIN
(
t2
RIGHT JOIN
(
(t1 LEFT JOIN (t4 JOIN t3 ON t4.col_int) ON t4.col_int_key = t1.pk)
LEFT JOIN
(t5 JOIN t6 AS t6b
ON t5.col_varchar_10_utf8_key = t6b.col_varchar_10_latin1_key)
ON t1.pk = t5.col_int
)
ON t4.col_varchar_10_latin1_key = t1.col_varchar_10_latin1_key
AND t5.col_varchar_10_utf8_key = 0
)
ON t6a.pk IS TRUE
WHERE t6b.col_int_key IS TRUE;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t6a ALL NULL NULL NULL NULL 1 Using where
1 SIMPLE t1 ALL NULL NULL NULL NULL 1 Using join buffer (flat, BNL join)
1 SIMPLE t4 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t3 ALL NULL NULL NULL NULL 1 Using join buffer (incremental, BNL join)
1 SIMPLE t5 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t6b ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
1 SIMPLE t2 ALL NULL NULL NULL NULL 1 Using where; Using join buffer (incremental, BNL join)
drop table t1,t2,t3,t4,t5,t6;
#
# Bug mdev-4318: view over a complex query with outer joins
#
CREATE TABLE t1 (
a1 int NOT NULL, a2 int NOT NULL, a3 int DEFAULT NULL, a4 tinyint NOT NULL,
a5 int NOT NULL, a6 tinyint NOT NULL, a7 tinyint(4) DEFAULT NULL,
a8 smallint(6) DEFAULT NULL, a9 smallint(6) DEFAULT NULL, a10 tinyint NOT NULL,
PRIMARY KEY (a1), KEY a2 (a2), KEY a3 (a3), KEY a4 (a4), KEY a6 (a6),
KEY a5 (a5), KEY a7 (a7), KEY a8 (a8), KEY a9 (a9)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT IGNORE INTO t1 VALUES
(3360,5684,2219,1,316832,1,0,NULL,NULL,NULL),
(3362,2754,597,2,316844,1,0,NULL,NULL,NULL),
(3363,369,NULL,1,317295,1,0,NULL,NULL,NULL);
Warnings:
Warning 1048 Column 'a10' cannot be null
Warning 1048 Column 'a10' cannot be null
Warning 1048 Column 'a10' cannot be null
CREATE TABLE t2 (
b1 int NOT NULL, b2 int NOT NULL, PRIMARY KEY (b1,b2), KEY b2 (b2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE t3 (
c1 int NOT NULL, PRIMARY KEY (c1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t3 VALUES
(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),
(1000),(1001),(1002),(1003),(9999);
CREATE TABLE t4 (
d1 int NOT NULL, PRIMARY KEY (d1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t4 VALUES (5674),(5676),(5680),(5684),(5685);
CREATE TABLE t5 (
e1 int NOT NULL, e2 varchar(64) NOT NULL, PRIMARY KEY (e1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t5 VALUES
(5684,'51a5de7a9f56314e082094d78f58be082c3cf0c1'),
(5685,'754dc8292cb9f5eb9ade126fe7e961c62412a349'),
(5686,'75eeb33f1c819bac21f6d023b4c5b24185eeda5c');
CREATE TABLE t6 (
f1 int NOT NULL, PRIMARY KEY (f1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t6 VALUES (5542),(5620),(5686);
CREATE TABLE t7 (
g1 tinyint NOT NULL DEFAULT '0', g2 varchar(20) NOT NULL, PRIMARY KEY (g1)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO t7 VALUES
(1,'60feec2b20ed19f55ad0'),(3,'9ddb18bff7fcbd1e3133'),
(5,'a05599df9222bb160d11'),(7,'e31bae372f7d01df0589'),
(9,'8f8372dd7fc8eb46c8a3'),(11,'f8d0e28529e990a09309');
CREATE TABLE t8 (
h1 tinyint NOT NULL, h2 varchar(128) DEFAULT NULL, PRIMARY KEY (h1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t8 VALUES (1,'b'),(2,'c'),(3,'d'),(4,'e');
CREATE TABLE t9 (
i1 tinyint NOT NULL, i2 varchar(7) NOT NULL, i3 varchar(128) NOT NULL,
PRIMARY KEY (i1,i2), KEY i2 (i2), KEY i3 (i3)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t9 VALUES
(2,'a','07630d223c7e5f7b1feb19b3caafb0833fd028eb'),
(3,'b','1ca53dcc50b68af86f4b1b4676dbed917b543c30'),
(1,'b','2c01ac36c1ce9a7de66be89f85d8aa5f0052e2e8'),
(4,'a','496c486b3a9edc439477fef7d34cbefdebba86df'),
(3,'a','98bf72d8d467201058a5f69bd7709bfc74a8637e'),
(2,'b','9a45425f6160fb59d7f8a02c721498d4ce945302'),
(4,'b','9c9a7300f3e708f8e430f9f3376d966f5951f583'),
(1,'a','c0af3f076b905f31cbb51af304b9c7ad539e0861');
CREATE TABLE t10 (
j1 tinyint NOT NULL, j2 varchar(20) NOT NULL, PRIMARY KEY (j1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t10 VALUES (1,'b'),(2,'c'),(3,'d');
CREATE TABLE t11 (
k1 int NOT NULL, k2 datetime DEFAULT NULL, k3 int DEFAULT NULL,
k4 int DEFAULT NULL, PRIMARY KEY (k1), KEY k3 (k3), KEY k4 (k4)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t11 VALUES
(317422,'2013-03-18 11:43:03',1,NULL),(317423,'2013-03-18 11:43:11',1,NULL),
(317424,'2013-03-18 11:52:01',1,1),(317425,'2013-03-18 11:52:01',1,1),
(317426,'2013-03-18 11:56:38',1,1),(317427,'2013-03-18 12:18:25',1,NULL),
(317428,'2013-03-18 12:46:28',1,NULL),(317429,'2013-03-18 12:46:28',1,NULL),
(317430,'2013-03-18 12:46:28',1,NULL),(317431,'2013-03-18 12:46:28',1,NULL),
(317432,'2013-03-18 12:46:28',1,NULL),(317433,'2013-03-18 12:46:28',1,NULL),
(317434,'2013-03-18 12:46:28',1,NULL),(317435,'2013-03-18 12:46:28',1,NULL),
(317436,'2013-03-18 12:46:28',1,NULL),(317437,'2013-03-18 12:46:28',1,NULL),
(317438,'2013-03-18 12:46:28',1,NULL),(317439,'2013-03-18 12:46:28',1,NULL),
(317440,'2013-03-18 12:55:20',1,NULL),(317441,'2013-03-18 12:58:29',1,NULL),
(317442,'2013-03-18 13:06:02',1,NULL),(317443,'2013-03-18 15:23:18',21,NULL);
CREATE TABLE t12 (
l1 int NOT NULL, l2 varchar(64) NOT NULL, PRIMARY KEY (l1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t12 VALUES
(552,'59a498252ef59f96fbdc13a414abe244d8e8bc30'),
(554,'c6025c7cb2d9dfb1be7ce4a61f35b45bb9e61ba3'),
(555,'b245bcc672082bb6d10794b2b4ac972dd14b1cf5');
CREATE TABLE t13 (
m1 int NOT NULL, m2 int NOT NULL, m3 int NOT NULL,
PRIMARY KEY (m1,m2,m3), KEY m3 (m3), KEY m2 (m2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t13 VALUES (3324,43,4),(3332,263,1),(3348,27,3);
CREATE TABLE t14 (
n1 smallint NOT NULL, n2 varchar(64) NOT NULL, PRIMARY KEY (n1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t14 VALUES
(21,'685bf7ca576af964c7cff564d5e4473b81499b8b'),
(23,'b8e42dab1ab952406b3accfb47089c61478138a8'),
(25,'3fea441e411db8c70bf039b50c8f18f59515be53'),
(27,'998aecc30fd0e0b8a1cac6590e5eccc2d7822223');
CREATE TABLE t15 (
o1 smallint NOT NULL, PRIMARY KEY (o1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t15 VALUES (1),(3);
CREATE TABLE t16 (
p1 smallint NOT NULL, p2 varchar(7) NOT NULL, p3 varchar(64) NOT NULL,
PRIMARY KEY (p1,p2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO t16 VALUES
(1,'a','66bdbb389456f3ae97206da115a7b397c31400e8'),
(1,'b','66bdbb389456f3ae97206da115a7b397c31400e8'),
(3,'a','386c10e454278c6e27feb16258089166422f79b4'),
(3,'b','386c10e454278c6e27feb16258089166422f79b4');
create view v1 as select t3.c1,t5.e2,t1.a1,t14.n2,t16.p3,t10.j2,t7.g2,t11.k2,l3.l2,
t9.i3,t12.l2 AS l_l2,l2.l2 AS l2_l2,l4.l1 AS l4_l1,t6.f1
from
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
t1
left join t2 on t1.a1 = t2.b1
)
left join t3 on t2.b2 = t3.c1
)
left join t4 on t1.a2 = t4.d1
)
left join t5 on t4.d1 = t5.e1
)
left join t6 on t1.a3 = t6.f1
)
left join t5 e2 on t6.f1 = e2.e1
)
join t7 on t1.a7 = t7.g1
)
join t8 on t1.a4 = t8.h1
)
join t9 on t8.h1 = t9.i1
)
join t10 on t1.a6 = t10.j1
)
join t11 on t1.a5 = t11.k1
)
left join t12 on t11.k3 = t12.l1
)
left join t12 l2 on t11.k4 = l2.l1
)
left join t13 on t1.a1 = t13.m1 and t13.m3 = 4
)
left join t12 l4 on l4.l1 = t13.m2
)
left join t13 m2 on t1.a1 = m2.m1 and m2.m3 = 3
)
left join t12 l3 on l3.l1 = m2.m2
)
left join t14 on t1.a8 = t14.n1
)
left join t15 on t1.a9 = t15.o1
)
left join t16 on t15.o1 = t16.p1
where t1.a10 = 1;
explain select t3.c1,t5.e2,t1.a1,t14.n2,t16.p3,t10.j2,t7.g2,t11.k2,l3.l2,
t9.i3,t12.l2 AS l_l2,l2.l2 AS l2_l2,l4.l1 AS l4_l1,t6.f1
from
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
(
t1
left join t2 on t1.a1 = t2.b1
)
left join t3 on t2.b2 = t3.c1
)
left join t4 on t1.a2 = t4.d1
)
left join t5 on t4.d1 = t5.e1
)
left join t6 on t1.a3 = t6.f1
)
left join t5 e2 on t6.f1 = e2.e1
)
join t7 on t1.a7 = t7.g1
)
join t8 on t1.a4 = t8.h1
)
join t9 on t8.h1 = t9.i1
)
join t10 on t1.a6 = t10.j1
)
join t11 on t1.a5 = t11.k1
)
left join t12 on t11.k3 = t12.l1
)
left join t12 l2 on t11.k4 = l2.l1
)
left join t13 on t1.a1 = t13.m1 and t13.m3 = 4
)
left join t12 l4 on l4.l1 = t13.m2
)
left join t13 m2 on t1.a1 = m2.m1 and m2.m3 = 3
)
left join t12 l3 on l3.l1 = m2.m2
)
left join t14 on t1.a8 = t14.n1
)
left join t15 on t1.a9 = t15.o1
)
left join t16 on t15.o1 = t16.p1
where t1.a10 = 1;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t1 ALL a4,a6,a5,a7 NULL NULL NULL 3 Using where
1 SIMPLE t2 ref PRIMARY PRIMARY 4 test.t1.a1 1 Using index
1 SIMPLE t3 eq_ref PRIMARY PRIMARY 4 test.t2.b2 1 Using where; Using index
1 SIMPLE t10 eq_ref PRIMARY PRIMARY 1 test.t1.a6 1
1 SIMPLE t8 eq_ref PRIMARY PRIMARY 1 test.t1.a4 1 Using index
1 SIMPLE t4 eq_ref PRIMARY PRIMARY 4 test.t1.a2 1 Using index
1 SIMPLE t5 eq_ref PRIMARY PRIMARY 4 test.t4.d1 1 Using where
1 SIMPLE t6 eq_ref PRIMARY PRIMARY 4 test.t1.a3 1 Using where; Using index
1 SIMPLE t7 eq_ref PRIMARY PRIMARY 1 test.t1.a7 1
1 SIMPLE t11 eq_ref PRIMARY PRIMARY 4 test.t1.a5 1
1 SIMPLE t12 eq_ref PRIMARY PRIMARY 4 test.t11.k3 1 Using where
1 SIMPLE l2 eq_ref PRIMARY PRIMARY 4 test.t11.k4 1 Using where
1 SIMPLE t9 ref PRIMARY PRIMARY 1 test.t1.a4 1
1 SIMPLE t13 ref PRIMARY,m3 m3 8 const,test.t1.a1 1 Using index
1 SIMPLE l4 eq_ref PRIMARY PRIMARY 4 test.t13.m2 1 Using where; Using index
1 SIMPLE m2 ref PRIMARY,m3 m3 8 const,test.t1.a1 1 Using index
1 SIMPLE l3 eq_ref PRIMARY PRIMARY 4 test.m2.m2 1 Using where
1 SIMPLE t14 eq_ref PRIMARY PRIMARY 2 test.t1.a8 1 Using where
1 SIMPLE t15 eq_ref PRIMARY PRIMARY 2 test.t1.a9 1 Using where; Using index
1 SIMPLE t16 ref PRIMARY PRIMARY 2 test.t15.o1 1 Using where
explain select * from v1;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t1 ALL a4,a6,a5,a7 NULL NULL NULL 3 Using where
1 SIMPLE t2 ref PRIMARY PRIMARY 4 test.t1.a1 1 Using index
1 SIMPLE t3 eq_ref PRIMARY PRIMARY 4 test.t2.b2 1 Using where; Using index
2021-11-01 11:34:24 +01:00
1 SIMPLE t10 eq_ref PRIMARY PRIMARY 1 test.t1.a6 1
1 SIMPLE t8 eq_ref PRIMARY PRIMARY 1 test.t1.a4 1 Using index
1 SIMPLE t4 eq_ref PRIMARY PRIMARY 4 test.t1.a2 1 Using index
1 SIMPLE t5 eq_ref PRIMARY PRIMARY 4 test.t4.d1 1 Using where
1 SIMPLE t6 eq_ref PRIMARY PRIMARY 4 test.t1.a3 1 Using where; Using index
1 SIMPLE t7 eq_ref PRIMARY PRIMARY 1 test.t1.a7 1
1 SIMPLE t11 eq_ref PRIMARY PRIMARY 4 test.t1.a5 1
1 SIMPLE t12 eq_ref PRIMARY PRIMARY 4 test.t11.k3 1 Using where
1 SIMPLE l2 eq_ref PRIMARY PRIMARY 4 test.t11.k4 1 Using where
1 SIMPLE t9 ref PRIMARY PRIMARY 1 test.t1.a4 1
1 SIMPLE t13 ref PRIMARY,m3 m3 8 const,test.t1.a1 1 Using index
1 SIMPLE l4 eq_ref PRIMARY PRIMARY 4 test.t13.m2 1 Using where; Using index
1 SIMPLE m2 ref PRIMARY,m3 m3 8 const,test.t1.a1 1 Using index
1 SIMPLE l3 eq_ref PRIMARY PRIMARY 4 test.m2.m2 1 Using where
1 SIMPLE t14 eq_ref PRIMARY PRIMARY 2 test.t1.a8 1 Using where
1 SIMPLE t15 eq_ref PRIMARY PRIMARY 2 test.t1.a9 1 Using where; Using index
1 SIMPLE t16 ref PRIMARY PRIMARY 2 test.t15.o1 1 Using where
drop view v1;
drop table t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14,t15,t16;
#
# MDEV-4270: crash in fix_semijoin_strategies_for_picked_join_order
#
drop table if exists t1,t2,t3;
Warnings:
Note 1051 Unknown table 'test.t1,test.t2,test.t3'
create table t2(a int,unique key (a)) engine=innodb;
create table t3(b int) engine=innodb;
create table t1(c int,b int)engine=innodb;
set @mdev4270_opl= @@optimizer_prune_level;
set @mdev4270_osd= @@optimizer_search_depth;
set optimizer_prune_level=0;
set optimizer_search_depth=2;
select 1 from t1 join t2 a
natural left join t2 b
natural right outer join t3;
1
drop table t1,t2,t3;
set optimizer_prune_level=@mdev4270_opl;
set optimizer_search_depth=@mdev4270_osd;
#
# Bug #20939184: INNODB: UNLOCK ROW COULD NOT FIND A 2 MODE LOCK ON THE
# RECORD
#
CREATE TABLE t1 (c1 INT, c2 INT, c3 INT, PRIMARY KEY (c1,c2) ) engine=innodb;
CREATE TABLE t2 (c1 INT, c2 INT, c3 INT, PRIMARY KEY (c1), KEY (c2)) engine=innodb;
INSERT INTO t1 VALUES (1,2,3),(2,3,4),(3,4,5);
INSERT INTO t2 SELECT * FROM t1;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT * FROM t1 LEFT JOIN t2 ON t1.c2=t2.c2 AND t2.c1=1 FOR UPDATE;
c1 c2 c3 c1 c2 c3
1 2 3 1 2 3
2 3 4 NULL NULL NULL
3 4 5 NULL NULL NULL
UPDATE t1 LEFT JOIN t2 ON t1.c1 = t2.c2 AND t2.c1 = 3 SET t1.c3 = RAND()*10;
COMMIT;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
DROP TABLE t1,t2;
SET GLOBAL innodb_stats_persistent=@save_stats_persistent;