Commit 292f6568fa
tmp_table_size can now be set to 0 (to disable in-memory internal temp tables).
Improved speed for internal Maria temp tables:
- Don't use packed keys, except with long text fields.
- Don't copy all accessed pages during key search.
Added some new benchmark tests to sql-bench (for GROUP BY).

BUILD/compile-pentium64-gcov:
  Update script to use the same pentium_config flags as the other tests.
BUILD/compile-pentium64-gprof:
  Update script to use the same pentium_config flags as the other tests.
include/my_sys.h:
  Added a count of my_sync calls.
mysql-test/r/variables.result:
  tmp_table_size can now be set to 0.
sql-bench/test-select.sh:
  Added new tests for GROUP BY on a non-key field and GROUP BY with a different ORDER BY.
sql/mysqld.cc:
  Added a count of my_sync calls.
  tmp_table_size can now be set to 0 (to disable in-memory internal temp tables).
sql/sql_select.cc:
  If tmp_table_size is 0, don't use in-memory temp tables (good for benchmarking MyISAM/Maria temp tables).
  Don't pack keys for Maria tables; the 8K page size makes packed keys too slow for temp tables.
storage/maria/ma_key_recover.h:
  Moved definition to maria_def.h.
storage/maria/ma_page.c:
  Moved the code used to simplify comparison of identical Maria tables into its own function (page_cleanup()).
  Fixed so that one can read a page with a read lock.
storage/maria/ma_rkey.c:
  For non-exact key reads, cache the page where the key was found (to speed up future read-next/read-prev calls).
storage/maria/ma_search.c:
  Moved the code that caches the last key page into a separate function.
  Instead of copying pages, only get a link to the page. This notably speeds up key searches on bigger tables.
storage/maria/ma_write.c:
  Added a comment.
storage/maria/maria_def.h:
  Moved page_cleanup() to a separate function.
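As a quick illustration of the tmp_table_size change above, the in-memory engine for internal temporary tables can be bypassed before running a GROUP BY on a non-key column, forcing the on-disk (Maria/MyISAM) temp-table path that the commit is benchmarking. This is only a sketch: the table t1, column a, and the credentials are placeholders, not values taken from the commit.

  # Per the commit above, tmp_table_size=0 disables in-memory internal temp tables,
  # so the GROUP BY below is materialized in an on-disk temporary table.
  # Table t1 and column a are hypothetical; adjust user/password/database as needed.
  mysql --user=test --password=test test -e "SET SESSION tmp_table_size=0; SELECT a, COUNT(*) FROM t1 GROUP BY a"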
Directory listing:

Comments
Data
limits
as3ap.sh
bench-count-distinct.sh
bench-init.pl.sh
compare-results.sh
copy-db.sh
crash-me.sh
example
example.bat
graph-compare-results.sh
innotest1.sh
innotest1a.sh
innotest1b.sh
innotest2.sh
innotest2a.sh
innotest2b.sh
Makefile.am
myisam.cnf
pwd.bat
README
run-all-tests.sh
server-cfg.sh
test-alter-table.sh
test-ATIS.sh
test-big-tables.sh
test-connect.sh
test-create.sh
test-insert.sh
test-select.sh
test-table-elimination.sh
test-transactions.sh
test-wisconsin.sh
uname.bat
The MySQL Benchmarks

These tests require a MySQL version of at least 3.20.28 or 3.21.10. Currently the following servers are supported: MySQL 3.20 and 3.21, PostgreSQL 6.#, mSQL 2.# and Solid Server 2.2.

The benchmark directory contains the query files and raw data files used to populate the MySQL benchmark tables.

In order to run the benchmarks, you should normally execute a command such as the following:

  run-all-tests --server=mysql --cmp=mysql,pg,solid --user=test --password=test --log

This means that you want to run the benchmarks with MySQL. The limits should be taken from all of MySQL, PostgreSQL, and Solid. The login name and password for connecting to the server are both "test". The results should be saved as a RUN file in the output directory. When run-all-tests has finished, you will have the individual results and the total RUN file in the output directory.

If you want to look at some old results, use the compare-results script. For example:

  compare-results --dir=Results --cmp=mysql,pg,solid
  compare-results --dir=Results --cmp=mysql,pg,solid --relative
  compare-results --dir=Results --cmp=msql,mysql,pg,solid
  compare-results --dir=Results --cmp=msql,mysql,pg,solid --relative
  compare-results --dir=Results --server=mysql --same-server --cmp=mysql,pg,solid

Some of the files in the benchmark directory are:

File               Description
Data/ATIS          Contains data for the 29 related tables used in the ATIS tests.
Data/Wisconsin     Contains data for the Wisconsin benchmark.
Results            Contains old benchmark results.
Makefile.am        Automake Makefile.
README             This file.
test-ATIS.sh       Creates 29 tables and runs a lot of selects on them.
test-connect.sh    Tests how fast a connection to the server is.
test-create.sh     Tests how fast a table is created.
test-insert.sh     Tests creation and filling of a table.
test-wisconsin.sh  A port of the PostgreSQL version of this benchmark.
run-all-tests      Use this to run all tests. When all tests have been run, use the --log and --use-old options to get a RUN file.
compare-results    Generates a comparison table from different RUN files.
server-cfg         Contains the limits and functions for all supported SQL servers. If you want to add a new server, this should be the only file that needs to be changed.

Most of the tests should use portable SQL to make it possible to compare different databases. Sometimes SQL extensions can make things a lot faster; in that case a test may use the extensions if the --fast option is given.

Useful options for all test scripts (and run-all-tests):

  --host=#       Hostname of the MySQL server (default: localhost)
  --db=#         Database to use (default: test)
  --fast         Allow use of any non-standard SQL extension to get things done faster.
  --lock-tables  Use table locking to get more speed.
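For example, combining the options above, a full run against a local MySQL server followed by a comparison could look like the sketch below; the host, database, credentials, and the name of the results directory are placeholders, not values prescribed by this README.

  # Run all tests against MySQL, allowing SQL extensions and table locking,
  # and log the per-test results plus the combined RUN file.
  run-all-tests --server=mysql --host=localhost --db=test --user=test --password=test --fast --lock-tables --log

  # Compare the logged results with other servers (directory name is illustrative).
  compare-results --dir=output --cmp=mysql,pg,solid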
From a text at http://www.mgt.ncu.edu.tw/CSIM/Paper/sixth/11.html:

The Wisconsin Benchmark

The Wisconsin Benchmark, described in [Bitton, DeWitt, and Turbyfill 1983], [Boral and DeWitt 1984], [Bitton and Turbyfill 1985], [Bitton and Turbyfill 1988], and [DeWitt 1993], is the first effort to systematically measure and compare the performance of relational database systems with database machines. The benchmark is a single-user, single-factor experiment using a synthetic database and a controlled workload. It measures the query optimization performance of database systems with 32 query types that exercise the components of the tested systems. The query suites include selection, join, projection, aggregate, and simple update queries. The test database consists of four generic relations. The tenk relation is the key table and the most used one. Two data types, small integer numbers and character strings, are used. Data values are uniformly distributed. The primary metric is the query elapsed time.

The main criticisms of the benchmark include its single-user workload, the simplistic database structure, and the unrealistic query tests. A number of efforts have been made to extend the benchmark to incorporate multi-user tests; however, none have received the same acceptance as the original Wisconsin benchmark, except for an extension called the AS3AP benchmark.