mirror of https://github.com/MariaDB/server.git
d9a3a6ed19
WITH LARGE BUFFER POOL
(Note: this is a backport of revno:3472 from mysql-trunk)
rb://845
approved by: Marko

When dropping a table (with an .ibd file, i.e. with innodb_file_per_table set) we scan the entire LRU list to invalidate pages from that table. This can be painful in the case of large buffer pools, as we hold buf_pool->mutex for the scan. Note that the gravity of the problem does not depend on the size of the table: even with an empty table but a large, filled-up buffer pool we will end up scanning a very long LRU list.

The fix is to scan the flush_list instead and remove the blocks belonging to the table from the flush_list, marking them as non-dirty. The blocks are left in the LRU list for eventual eviction due to aging. The flush_list is typically much smaller than the LRU list, but for cases where it is very long we release buf_pool->mutex after scanning 1K pages.

buf_page_[set|unset]_sticky(): Use the new IO-state BUF_IO_PIN to ensure that a block stays in the flush_list and LRU list when we release buf_pool->mutex. Previously we had been abusing BUF_IO_READ to achieve this.
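To make the batched flush_list scan described above concrete, here is a minimal, self-contained sketch in C. It is an illustration only: page_t, buf_pool_t, drop_tablespace_pages, SCAN_BATCH and the IO_PIN flag are simplified stand-ins invented for this example, not the actual InnoDB structures or functions (buf_page_t, buf_pool->flush_list, BUF_IO_PIN, buf_page_set_sticky()), and the real code also handles flushing, compressed pages and concurrent list changes that this model ignores.

/* Sketch of the approach in the commit message above; names are
 * simplified stand-ins, not the actual InnoDB code. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define SCAN_BATCH 1024           /* release the pool mutex after this many pages */

enum io_fix { IO_NONE, IO_PIN };  /* stand-in for BUF_IO_NONE / BUF_IO_PIN */

typedef struct page {
    unsigned long   space_id;     /* tablespace the page belongs to */
    enum io_fix     io_fix;       /* a pinned page must stay in the lists */
    struct page    *prev;         /* flush_list links */
    struct page    *next;
} page_t;

typedef struct {
    pthread_mutex_t mutex;        /* stand-in for buf_pool->mutex */
    page_t         *flush_list;   /* head of the dirty-page list */
} buf_pool_t;

/* Unlink a page from the flush list, i.e. mark it clean. The page is left
 * in the (unmodelled) LRU list to age out; no write is issued for it. */
static void flush_list_remove(buf_pool_t *pool, page_t *p)
{
    if (p->prev) p->prev->next = p->next; else pool->flush_list = p->next;
    if (p->next) p->next->prev = p->prev;
    p->prev = p->next = NULL;
}

/* Remove all dirty pages of a dropped tablespace from the flush list,
 * scanning only the flush list rather than the whole LRU list. */
static void drop_tablespace_pages(buf_pool_t *pool, unsigned long space_id)
{
    pthread_mutex_lock(&pool->mutex);

    size_t  scanned = 0;
    page_t *p = pool->flush_list;

    while (p != NULL) {
        page_t *next = p->next;

        if (p->space_id == space_id) {
            flush_list_remove(pool, p);
        }

        if (++scanned >= SCAN_BATCH && next != NULL) {
            /* Pin the next page so it stays in the lists while the mutex
             * is released; this is the role BUF_IO_PIN plays in the fix. */
            next->io_fix = IO_PIN;
            pthread_mutex_unlock(&pool->mutex);
            /* Other threads may flush or evict unpinned pages here. */
            pthread_mutex_lock(&pool->mutex);
            next->io_fix = IO_NONE;
            scanned = 0;
        }

        p = next;
    }

    pthread_mutex_unlock(&pool->mutex);
}

int main(void)
{
    /* Toy flush list: two dirty pages of space 10, one of space 20. */
    page_t a = { 10, IO_NONE, NULL, NULL };
    page_t b = { 20, IO_NONE, NULL, NULL };
    page_t c = { 10, IO_NONE, NULL, NULL };
    a.next = &b; b.prev = &a; b.next = &c; c.prev = &b;

    buf_pool_t pool = { PTHREAD_MUTEX_INITIALIZER, &a };

    drop_tablespace_pages(&pool, 10);   /* "drop" tablespace 10 */

    for (page_t *p = pool.flush_list; p != NULL; p = p->next) {
        printf("dirty page of space %lu\n", p->space_id);   /* prints only 20 */
    }
    return 0;
}

Because the dropped table's pages are only marked clean rather than evicted, they linger in the buffer pool until they age out. That is consistent with the result file below: immediately after DROP TABLE t1 the compressed-page pool (innodb_cmpmem) still reports 8192-byte pages in use, and they disappear only later, once the buffer pool has moved on (here, after table t2 is created and queried).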
14 lines
468 B
Text
set global innodb_file_per_table=on;
set global innodb_file_format=`1`;
create table t1(a text) engine=innodb key_block_size=8;
SELECT page_size FROM information_schema.innodb_cmpmem WHERE pages_used > 0;
page_size
8192
drop table t1;
SELECT page_size FROM information_schema.innodb_cmpmem WHERE pages_used > 0;
page_size
8192
create table t2(a text) engine=innodb;
SELECT page_size FROM information_schema.innodb_cmpmem WHERE pages_used > 0;
page_size
drop table t2;