This was done after discussions with Igor, Sanja and Bar.
The main reason for removing the deprecation was to ensure that MariaDB
is always backward compatible whenever possible.
Other things:
- Added statistics counters, mainly for the feedback plugin, for:
  - INTO OUTFILE
  - INTO variable
  - If INTO is using the old syntax (end of query)
In essence this means that we expect the user query to have at least
one matching row in the end.
This change will not affect the estimated rows for the plan, but will
ensure that the cost for adding a table is not neglected because of
record count being too low.
The reason for this is that if we have a table combination that
together has a very high selectivity, then the join record_count could
become very low (close to 0).
This would cause costs for all future tables to be so small that they
are irrelevant for the rest of the plan.
This has been shown to be the case in some performance benchmarks and
in a few mtr tests.
There is also still a problem in selectivity calculations as joining two
tables in different order causes a different estimation of total rows.
This can be seen in selectivity_innodb.test, test 'Q20' where joining
nation,supplier is expecting 1.111 rows_out while joining supplier,nation
is expecting 0.04 rows_out.
The reason for 0.04 is that the optimizer estimates 'supplier' to have
10 matching rows, and joining with nation (eq_ref) has 1 row. However
selectivity of n_name = 'UNITED STATES' makes the optimizer think
that there will be only 0.04 matching rows.
This patch prevents this "too low row count" from affecting cost
calculations.
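A minimal sketch of the idea (illustrative C++ only, not the actual patch):

  #include <algorithm>

  /* Illustrative only: keep the optimizer's row estimate for the plan, but
     never scale the cost of adding the next table by less than one row. */
  static double cost_of_adding_next_table(double prev_record_count,
                                          double access_cost_per_row_combination)
  {
    double records_for_cost= std::max(prev_record_count, 1.0);
    return records_for_cost * access_cost_per_row_combination;
  }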
"select * from information_schema.tables limit 1" was giving the following
warning in the log:
[ERROR] Invalid (old?) table or database name '#rocksdb'
- Simplified test by setting read_time=DBL_MAX at start of loop if
FORCE INDEX is used
- No need to test for 'group by' as the cost compare should handle it.
- Only one test change where index scan was replaced with table scan
(correct)
If one has an old Aria log file that ends with an Aria checkpoint and
the server is restarted after the next recovery, just after a new Aria
log file (of 8K) has been created, the Aria recovery code would abort.
If one would try to delete all Aria log files after this (but not the
aria_control_file), the server would crash during recovery.
The problem was that translog_get_last_page_addr() would regard a log file
of exactly 8K as illegal and the rest of the code could not handle this
case.
Another issue was that if there was a crash directly after the log file
head was written to the next page, the code in translog_get_next_chunk()
would crash.
This patch fixes most of the issues, but not all. For Sanja to look at!
Things fixed:
- Added code to ignore 8K log files (see the sketch after this list).
- Removed an ASSERT in translog_get_next_chunk() that checked if the page
  only contained the log page header.
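A minimal sketch of the 8K check (illustrative only, not the actual Aria code):

  #include <cstddef>

  /* Illustrative only: a log file that consists of just the 8K header page
     contains no log records and can be ignored by recovery. */
  static bool log_file_has_only_header(size_t file_size, size_t log_page_size)
  {
    return file_size <= log_page_size;       /* log_page_size is 8K for Aria */
  }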
I spent 4 hours of work and 12 hours of testing trying to find
the reason for Aria crashing in recovery when starting a new test,
in which case the 'data directory' should be a copy of "install.db",
but the aria_log.00000001 content was not correct.
The following changes are mostly done to make it a bit easier to find out
more in case of future similar crashes:
- Mark last_checkpoint_lsn volatile (safety).
- Write checkpoint message to aria_recovery.trace
- When compiling with DBUG and with HAVE_DBUG_TRANSLOG_SRC,
  use checksums for Aria log pages. We cannot have this on by default
  for DBUG servers yet as there are bugs when changing the CRC between
  restarts.
- Added a message to mtr --verbose when copying the data directory.
- Removed extra linefeed in Aria recovery message (cleanup)
This includes:
- cleanup and optimization of filtering and pushdown engine code.
- Adjusted costs for rowid filters (based on extensive testing
and profiling).
This required two small changes to the handler_rowid_filter_is_active()
API:
- One should not call it with a zero pointer!
- One does not need to call handler_rowid_filter_is_active() for every
  row anymore. It is enough to check if the filter is active by calling
  it during index_init() or when handler::rowid_filter_changed()
  is called (see the sketch after this list).
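A sketch of the intended calling pattern (illustrative engine-side code; the
member names below are assumptions, not the real handler API):

  /* Illustrative only: ask once per scan, not once per row. */
  void my_handler::start_index_scan()          /* e.g. called from index_init() */
  {
    /* never call handler_rowid_filter_is_active() with a zero pointer */
    m_filter_active= pushed_rowid_filter != nullptr &&
                     handler_rowid_filter_is_active(this);
  }
  /* re-evaluate m_filter_active only when handler::rowid_filter_changed()
     is signalled */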
These changes avoid unnecessary function calls and checks when
pushdown conditions and rowid filters are not used.
Updated costs for rowid_filter_lookup() to be closer to reality.
The old cost was based only on rowid_compare_cost. This is now
changed to take into account the overhead in checking the rowid.
Changed the Range_rowid_filter class to use DYNAMIC_ARRAY directly
instead of Dynamic_array<>. This was done to be able to use the new
append_dynamic() functions which give a notable speed improvement
compared to the old code. Removing the abstraction also makes
the code easier to understand.
The cost of filtering is now slightly lower than before, which
is reflected in some test cases that are now using rowid filters.
This is intended to be the start of a (not complete) coding standards
document we can refer contributors to. This can be modified to add more
nuances and become stricter over time. It can also have additional
content for other file types (CMake, YACC, etc).
It does not cover plugins, which should each individually have their own
coding standards.
This includes all test changes from
"Changing all cost calculation to be given in milliseconds"
and forwards.
Some of the things that caused changes in the result files:
- As part of fixing tests, I added 'echo' to some comments to be able to
  more easily find out where things went wrong.
- MATERIALIZED now has a higher cost compared to X than before. Because
  of this some MATERIALIZED types have changed to DEPENDENT SUBQUERY.
- Some test cases that required MATERIALIZED to repeat a bug were
  changed by adding more rows to force MATERIALIZED to happen.
- 'Filtered' in SHOW EXPLAIN has in many cases changed from 100.00 to
  something smaller. This is because filtered now also takes into
  account the smallest possible ref access and filters, even if they
  were not used. Another reason for 'Filtered' being smaller is that
we now also take into account implicit filtering done for subqueries
using FIRSTMATCH.
(main.subselect_no_exists_to_in)
This is calculated in best_access_path() and stored in records_out.
- Table order has changed because of more accurate costs.
- 'index' and 'ALL' for small tables have changed to use 'range' or
'ref' because of optimizer_scan_setup_cost.
- 'index' can change to 'range' as the range optimizer assumes we don't
  have to read the blocks from disk that the range optimizer has already read.
  This can be confusing in the case where there is no obvious WHERE clause
  but instead there is a hidden 'key_column > NULL' added by the optimizer.
(main.subselect_no_exists_to_in)
- Scan on primary clustered key does not report 'Using Index' anymore
(It's a table scan, not an index scan).
- For derived tables, the number of rows is now 100 instead of 2,
which can be seen in EXPLAIN.
- More tests have "Using index for group by" as the cost of this
optimization is now more correct (lower).
- A primary key could be preferred over a normal key, even if it would
  access more rows, as it's faster to do 1 lookup and 3 'index_next' on a
  clustered primary key than one lookup through a secondary key.
(main.stat_tables_innodb)
Notes:
- There were 4.7% more calls to best_extension_by_limited_search() in
  the main.greedy_optimizer test. However, examining the test results
  it looked like the plans were slightly better (eq_ref were more often
  chained together), so I assume this is ok.
- I have verified a few test cases where there were notable/unexpected
  changes in the plan, and in all cases the new optimizer plans were
  faster. (main.greedy_optimizer and some others)
The old code had a bug where the normal sorting code was
eliminated as part of the "Using index for group-by" optimization.
The effect was that the result contained more rows than expected.
InnoDB FTS scan was used by a subquery. A subquery execution may start
a table read and continue until it finds the first matching record
combination. This can happen before the table read returns EOF.
The next time the subquery is executed, it will start another table read.
InnoDB FTS table read fails to re-initialize its data structures in this
scenario and will try to continue the scan started at the first execution.
Fixed by making ha_innobase::ft_init() stop the FTS scan if there is one.
Author: Sergei Petrunia <sergey@mariadb.com>
Reviewer: Monty
- table_after_join_selectivity() should use records_init (new bug)
- get_examined_rows() changed to double to get similar results
as in MariaDB 10.11
- Fixed bug where table_after_join_selectivity() did not correct
selectivity in the case where a RANGE is used instead of a REF.
This can happen if the range can use more key_parts than the REF.
WHERE key_part1=10 and key_part2 < 10
Other things:
- Use JT_RANGE instead of JT_ALL for RANGE access in all parts of the code.
Before we used JT_ALL for RANGE.
- Force RANGE to be used in best_access_path() if the range uses more key
  parts than the ref. In the original code, this was done much later, in
  make_join_select(). However, we need to know in
  table_after_join_selectivity() if we have used RANGE or not
  (see the sketch after this list).
- Added more information about filtering to optimizer_trace.
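A sketch of the RANGE-over-REF decision mentioned above (illustrative only,
not the actual best_access_path() code):

  /* Illustrative only: force RANGE when it can use more key parts than REF,
     e.g. WHERE key_part1=10 AND key_part2 < 10 */
  static bool should_force_range(unsigned range_key_parts, unsigned ref_key_parts)
  {
    return range_key_parts > ref_key_parts;
  }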
The reason is that 2 is usually way too low, and as information_schema
tables may have implicit locks when accessing rows, it is better that
the optimizer doesn't think that these tables are 'very small and fast'.
This change will affect a very small set of test cases.
Before, the cost of an aggregate distinct (COUNT(DISTINCT ...)) was set
to 0 if the values were part of an index and the cost of grouping
was higher than the best cost so far. This was shown in explain with
"Using index for group-by (scanning)".
This patch fixes it by calculating the cost of aggregate distinct
and using scanning only if the cost was better than group-by-optimization.
Things taken into account:
- When using aggregate distinct on an index, the filtering is done before
  the row is checked against the WHERE, and we thus have a lower WHERE cost.
- When comparing the cost of aggregate distinct, we add to the compared
  plan the cost of doing the filtering later at the SQL level (as sketched below).
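A sketch of that comparison (illustrative only; the names below are not the
server's):

  /* Illustrative only: choose "Using index for group-by (scanning)" only when
     it is really cheaper, after charging the alternative plan for the DISTINCT
     filtering it would still have to do at the SQL level. */
  static bool use_aggregate_distinct_scan(double aggregate_distinct_cost,
                                          double best_cost_so_far,
                                          double sql_level_filter_cost)
  {
    return aggregate_distinct_cost < best_cost_so_far + sql_level_filter_cost;
  }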
Allows FirstMatch to handle the case where the fanout of firstmatch tables
is already less than 1.
Also fixes the LooseScan strategy to set position->{records_init, records_out}.
(They were set to 0, which also caused assertion failures.)
Author: Sergei Petrunia <sergey@mariadb.com>
Reviewer: Monty
This happens when the subquery marks some index fields as constant
but the fields are still present in GROUP BY.
Fixed by checking if the 'constant field' is still part of GROUP BY before
skipping it.
Other things:
- Added Item_field::contains() to make it easier to check if a field
  is equal to an Item_field or is part of an Item_equal.
This solves the current problem in the optimizer:
- SELECT FROM big_table
- SELECT from small_table where small_table.eq_ref_key=big_table.id
The old code assumed that each eq_ref access would cause an IO.
As the cost of IO is high, this dominated the cost for the later table,
which caused the optimizer to prefer table scans + join cache over
index reads.
This patch fixes this issue by limiting the number of expected IO calls,
for rows and index separately, to the size of the table or index, or to
the number of accesses that we expect in a range for the index.
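A sketch of the capping (illustrative only, not the actual patch):

  #include <algorithm>

  /* Illustrative only: when multiplying per-access IO with the number of
     earlier row combinations, never expect more disk reads than there are
     blocks in the table, the index, or the range we are going to read. */
  static double expected_io_blocks(double blocks_per_access,
                                   double prev_row_combinations,
                                   double total_blocks)
  {
    return std::min(blocks_per_access * prev_row_combinations, total_blocks);
  }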
The major changes are:
- Adding a new structure ALL_READ_COST that is mainly used in
  best_access_path() to hold the different parts of the cost we are
  calculating. This allows us to limit the number of IOs when multiplying
  the cost with the previous row combinations.
- All storage engine cost functions are changed to return IO_AND_CPU_COST.
The virtual cost functions should now return in IO_AND_CPU_COST.io
the number of disk blocks that will be accessed instead of the cost
of the access.
- We are not limiting the io_blocks for table or index scans as we
assume that engines may not store these in the 'hot' part of the
cache. Table and index scans also use far fewer IO blocks than
key accesses, so the original issue is not as critical with scans.
Other things:
- OPT_RANGE now holds a 'Cost_estimate cost' instead of a lot of different
  costs. All the old costs, like index_only_read, can be extracted
  from 'cost'.
- Added to the start of some functions 'handler *file= table->file'
to shorten the code that is using the handler.
- handler->cost() is used to change an ALL_READ_COST or IO_AND_CPU_COST
  to 'cost in milliseconds'.
- New functions: handler::index_blocks() and handler::row_blocks()
which are used to limit the IO.
- Added index_cost and row_cost to Cost_estimate and removed all not
needed members.
- Removed cost coefficients from Cost_estimate as these don't make sense
when costs (except IO_BLOCKS) are in milliseconds.
- Removed handler::avg_io_cost() and replaced it with DISK_READ_COST.
- Renamed best_range_rowid_filter_for_partial_join() to
best_range_rowid_filter() as using the old name made rows too long.
- Changed all SJ_MATERIALIZATION_INFO 'Cost_estimate' variables to
'double' as Cost_estimate power was not used for these and thus
just caused storage and performance overhead.
- Changed cost_for_index_read() to use 'worst_seeks' to only limit
IO, not number of table accesses. With this patch worst_seeks is
probably not needed anymore, but I kept it around just in case.
- Applying the cost for a filter became much shorter and easier thanks
  to the API changes.
- Adjusted cost for fulltext keys in collaboration with Sergei Golubchik.
- Most test changes caused by this patch are that table scans are changed
  to use indexes.
- Added ha_seq::keyread_time() and ha_seq::key_scan_time() to make
  checking the number of potential IO blocks easier during debugging.
If the final range restrictions (SEL_ARG tree) over GROUP BY
columns are single-point, we can compute the number of GROUP BY groups.
Example: in the query:
SELECT ... FROM tbl
WHERE keypart1 IN (1,2,3) and keypart2 IN ('foo','bar')
the number of GROUP BY groups is at most 3 * 2 = 6.
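As a sketch (illustrative code, not the actual range optimizer function), the
estimate is the product of the number of single-point intervals per GROUP BY
key part:

  /* Illustrative only: 3 values for keypart1 * 2 values for keypart2 = 6 groups */
  static double groups_from_single_point_ranges(const unsigned *points_per_keypart,
                                                unsigned used_key_parts)
  {
    double groups= 1.0;
    for (unsigned i= 0; i < used_key_parts; i++)
      groups*= points_per_keypart[i];
    return groups;
  }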
Other things:
- Fixed cost calculation to more correctly count the number of blocks
that may be read. The old code could use the total blocks in the file
even if a range was available.
The issue was that when limit is used,
SQL_SELECT::test_quick_select would set the cost of a table scan to be
unreasonably high to force a range to be used.
The problem with this approach was that a range was used even when the
cost of the range (when it would only read 'limit' rows) would be higher
than the cost of a table scan.
This patch fixes it by not accepting ranges when the range can never
have a lower cost than a table scan, even if every row would match the
WHERE clause.
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
  calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
  (9 ms) to a common SSD cost (400MB/sec, about 0.01 ms for a 4K block).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for io and cpu costs (see the sketch after this list).
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and an optional number of blocks
  that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to compile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
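As a sketch of how an IO_AND_CPU_COST pair and the optimizer_disk_read_*
variables fit together (illustrative arithmetic only; this is one plausible
reading of the model described above, not the server's implementation):

  /* Illustrative only. Assumptions:
     - io holds the expected number of blocks read from disk
     - disk_read_cost is the cost of reading one block (stored in ms)
     - disk_read_ratio is how likely the block is NOT already cached */
  struct io_and_cpu_cost_sketch { double io, cpu; };

  static double cost_in_ms(io_and_cpu_cost_sketch c,
                           double disk_read_cost, double disk_read_ratio)
  {
    return c.io * disk_read_cost * disk_read_ratio + c.cpu;
  }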
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed. These fixes are in the following commits. To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
InnoDB changes:
- Updated InnoDB to not divide big range cost with 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark clustered primary key with HA_KEYREAD_ONLY. This prevents
  the optimizer from trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time() and ha_innobase::read_time() and
  ha_innobase::rnd_pos_time() as the default engine cost functions now
  work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE which allows one to adjust
  the value the user has given. This is used to change costs from
  microseconds (user input) to milliseconds (what the server is
  using internally).
- Added include/my_tracker.h, a useful include file for quickly testing
  the cost of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate usage space of keys in b-trees. (Before we used numeric
constants).
- Removed code that assumed that b-trees have similar costs to binary
  trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path().
Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
setting thd->...last_query_cost in JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not yet loaded handlerton, either with command
  line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
default_optimizer_costs The default costs + changes from the
command line without an engine specifier.
heap_optimizer_costs Heap table costs, used for temporary tables
tmp_table_optimizer_costs The cost for the default on disk internal
temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
accesses the handler has a pointer to this. The cost is copied
to the table on first access. If one wants to change the cost one
must first update the global engine cost and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated the following way
  (see sql/keycaches.cc for details; a sketch of the flow follows this list):
- Use 'default_optimizer_costs' as a base
- Call hton->update_optimizer_costs() to override with the engines
default costs.
- Override the costs that the user has specified for the engine.
- On handler open, copy the engine costs from the handlerton to the TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
cost for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
  structure. (Idea/Suggestion by Igor)
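A sketch of the resulting override order for engine costs (illustrative toy
code, not the actual server functions):

  /* Illustrative only: the precedence of the cost sources described above. */
  struct optimizer_costs_sketch { double disk_read_cost, key_lookup_cost; };

  static optimizer_costs_sketch costs_for_new_handlerton(
    optimizer_costs_sketch defaults,                    /* default_optimizer_costs */
    void (*engine_defaults)(optimizer_costs_sketch *),  /* hton->update_optimizer_costs() */
    void (*user_overrides)(optimizer_costs_sketch *))   /* SET @@global.engine....=xx */
  {
    optimizer_costs_sketch costs= defaults;
    engine_defaults(&costs);           /* engine's own defaults */
    user_overrides(&costs);            /* user-specified costs win */
    return costs;  /* copied to TABLE_SHARE on first handler open; then
                      handler::update_optimizer_costs() may adjust per table */
  }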
Added code to support that FORCE INDEX can be used to force an index scan
instead of a full table scan. Currently this code is disabled, but I added
a test to verify that things work if the code is ever enabled.
Other things:
- FORCE INDEX will now work with "Range checked for each record" and
join cache (see main/type_time_6065)
- Removed code ifdef with BAD_OPTIMIZATION (New cost calculations should
fix this).
- Removed TABLE_LIST->force_index and comment that it should be removed
- Added TABLE->force_index_join and use it in the corresponding places.
  This means that FORCE INDEX FOR ORDER BY will not affect keys used
  in joins anymore.
  Removed the TODO saying that the above should be added.
I still kept TABLE->force_index as it's used in
test_if_cheaper_ordering() and opt_range.cc
- Removed setting table->force_index when calling test_quick_select() as
it's not needed (force_index is an argument to test_quick_select())
The original code was mostly rule based and preferred clustered or
covering indexes independent of cost.
There were a few test changes:
- Some tests changed from using filesort to an index or table scan. This
  happened when most of the rows had to be sorted and the ORDER BY could
  use a covering or a clustered index (innodb_mysql, create_spatial_index).
- Some tests changed from range to filesort. This was mainly because the
  range was scanning most of the rows or using an index scan + row lookup,
  and filesort with a table scan is cheaper. (order_by)
- Change in join_cache was because sorting 2 rows is faster than retrieving
10 rows.
- In selectivity_innodb.test one test changed to use a cheaper index.
The sort length is extracted similarly to how the sortlength() function does
it. The function makes use of the filesort_use_addons() function to compute
the length of addon fields. Finally, by calling compute_sort_costs() we
get the fastest_sort possible.
Other changes:
* Sort_param::using_addon_fields() assumes addon fields are already
allocated. This makes Sort_param unusable for
compute_sort_costs() *if* we don't want to allocate addon fields.
As a preliminary fix, pass "with_addon_fields" as bool value to
compute_sort_costs() and make the internal functions use that value
instead of Sort_param::using_addon_fields() method.
The ideal fix would be to define a "leaner" struct with only the
necessary members, but this can be done as a separate commit.
Reviewer: Monty
No logic changes.
Extract some of init_for_filesort logic into a separate function:
* Sort_param::setup_lengths_and_limit can be used to fill in the various
xxx_length members of Sort_param, without having to allocate any of the
other buffers.
Reviewer: Monty
This is a rework of how filesort calculates costs to allow functions
like test_if_skip_sort_order() to calculate the cost of filesort to
decide between filesort and using a key to resolve ORDER BY.
Changes:
- Split cost calculation of qsort + optional merge sort and priority queue
to dedicated functions.
- Fixed some wrong calculations of cost in old code (use of log() instead
  of log2()); see the sketch after this list.
- Added costs related to fetching the rows if addon fields are not used.
- Updated get_merge_cost() to take into account that we are going to
  read data from temporary files in big chunks (DISK_CHUNCK_SIZE, 64K)
  and not in IO_SIZE (4K) units.
- More code documentation including various variables in Sort_param.
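As a sketch of the corrected arithmetic (illustrative formulas only, not the
exact cost functions added by this patch):

  #include <algorithm>
  #include <cmath>

  /* Illustrative only: comparison counts behind the dedicated cost functions. */
  static double qsort_compares(double rows)
  {
    return rows * std::log2(std::max(rows, 2.0));    /* old code used log() here */
  }

  static double priority_queue_compares(double rows, double limit)
  {
    return rows * std::log2(std::max(limit, 2.0));   /* heap of 'limit' elements */
  }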
One effect of the cost update is that the cost of the priority queue
with addon fields has decreased slightly and it is used in more cases.
When the rowid is large (like with InnoDB, where the rowid is the primary
key), using addon fields is in many cases preferable.
Reviewer: Monty