The reason for this is that we call file->index_flags(index, 0, 1)
multiple times in best_access_path() when optimizing a table.
For example, in InnoDB the call is not trivial (4 if's and 2 assignments).
Now the function is inlined and is just a memory reference.
Other things:
- handler::is_clustering_key() and pk_is_clustering_key() are now inline.
- Added TABLE::can_use_rowid_filter() to simplify some code.
- Test if we should use a rowid_filter only if can_use_rowid_filter() is
true.
- Added TABLE::is_clustering_key() to avoid a memory reference.
- Simplified some code using the fact that HA_KEYREAD_ONLY being set
implies that HA_CLUSTERED_INDEX is not set.
- Added DBUG_ASSERT to TABLE::best_range_rowid_filter() to ensure we
do not call it with a clustering key.
- Reorganized elements in struct st_key to get better memory alignment.
- Updated ha_innobase::index_flags() to not set HA_DO_RANGE_FILTER_PUSHDOWN
for the clustered index.
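Below is a minimal sketch (simplified stand-in types, not the actual server
structures) of the caching idea: the engine's index flags are stored once in
the key definition, so checks such as is_clustering_key() and
can_use_rowid_filter() become plain memory reads instead of virtual calls
into the engine.

  #include <cstdint>
  #include <vector>

  static const std::uint32_t HA_CLUSTERED_INDEX=          1U << 0;  // assumed bit values
  static const std::uint32_t HA_DO_RANGE_FILTER_PUSHDOWN= 1U << 1;

  struct KeyInfo                        // stand-in for struct st_key
  {
    std::uint32_t index_flags;          // filled once from handler::index_flags()
  };

  struct TableSketch                    // stand-in for TABLE
  {
    std::vector<KeyInfo> key_info;

    // Inlined: just a memory reference, no virtual engine call.
    bool is_clustering_key(unsigned keynr) const
    { return key_info[keynr].index_flags & HA_CLUSTERED_INDEX; }

    bool can_use_rowid_filter(unsigned keynr) const
    {
      // A clustering key cannot benefit from a rowid filter.
      return (key_info[keynr].index_flags & HA_DO_RANGE_FILTER_PUSHDOWN) &&
             !is_clustering_key(keynr);
    }
  };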
This solves the current problem in the optimizer for plans like:
- SELECT FROM big_table
  - SELECT FROM small_table WHERE small_table.eq_ref_key=big_table.id
The old code assumed that each eq_ref access would cause an IO.
As the cost of IO is high, this dominated the cost for the latter table,
which caused the optimizer to prefer table scans + join cache over
index reads.
This patch fixes this issue by limiting the number of expected IO calls,
for rows and index separately, to the size of the table or index, or to
the number of accesses that we expect in a range for the index.
The major changes are:
- Added a new structure, ALL_READ_COST, that is mainly used in
best_access_path() to hold the separate parts of the cost we are
calculating. This allows us to limit the number of IO operations when
multiplying the cost with the previous row combinations.
- All storage engine cost functions are changed to return IO_AND_CPU_COST.
The virtual cost functions should now return, in IO_AND_CPU_COST.io,
the number of disk blocks that will be accessed instead of the cost of
the access (see the sketch after this list).
- We are not limiting the io_blocks for table or index scans as we
assume that engines may not store these in the 'hot' part of the
cache. Table and index scans also use far fewer IO blocks than
key accesses, so the original issue is not as critical for scans.
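Below is a minimal sketch, with assumed member and function names, of how
keeping the IO and CPU parts separate allows capping the expected IO when a
per-lookup cost is multiplied with the number of preceding row combinations:

  #include <algorithm>
  #include <cstdio>

  struct IO_AND_CPU_COST
  {
    double io;                       // number of disk blocks to be accessed
    double cpu;                      // CPU cost in milliseconds
  };

  struct ALL_READ_COST
  {
    IO_AND_CPU_COST index_cost;      // cost of the index part of the access
    IO_AND_CPU_COST row_cost;        // cost of the row part of the access
    double max_index_blocks;         // total blocks in the index
    double max_row_blocks;           // total blocks in the table
  };

  /* Multiply a per-lookup cost with the number of row combinations, but
     never assume more IO than there are blocks in the index or table. */
  static ALL_READ_COST multiply_with_record_count(const ALL_READ_COST &c,
                                                  double record_count)
  {
    ALL_READ_COST res= c;
    res.index_cost.io=  std::min(c.index_cost.io * record_count,
                                 c.max_index_blocks);
    res.index_cost.cpu= c.index_cost.cpu * record_count;
    res.row_cost.io=    std::min(c.row_cost.io * record_count,
                                 c.max_row_blocks);
    res.row_cost.cpu=   c.row_cost.cpu * record_count;
    return res;
  }

  int main()
  {
    ALL_READ_COST eq_ref= { {1.0, 0.001}, {1.0, 0.002}, 100.0, 1000.0 };
    ALL_READ_COST total=  multiply_with_record_count(eq_ref, 1000000.0);
    std::printf("index io=%g row io=%g\n",          // IO is capped, CPU is not
                total.index_cost.io, total.row_cost.io);
    return 0;
  }

With the old model the IO part would simply have been multiplied by the row
combinations, which is what made eq_ref look more expensive than a table
scan + join cache for big joins.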
Other things:
- OPT_RANGE now holds a 'Cost_estimate cost' instead of a lot of different
costs. All the old costs, like index_only_read, can be extracted
from 'cost'.
- Added to the start of some functions 'handler *file= table->file'
to shorten the code that is using the handler.
- handler->cost() is used to convert an ALL_READ_COST or IO_AND_CPU_COST
to a cost in milliseconds (see the sketch after this list).
- New functions: handler::index_blocks() and handler::row_blocks()
which are used to limit the IO.
- Added index_cost and row_cost to Cost_estimate and removed all members
that are no longer needed.
- Removed cost coefficients from Cost_estimate as these don't make sense
when costs (except IO_BLOCKS) are in milliseconds.
- Removed handler::avg_io_cost() and replaced it with DISK_READ_COST.
- Renamed best_range_rowid_filter_for_partial_join() to
best_range_rowid_filter() as using the old name made code lines too long.
- Changed all SJ_MATERIALIZATION_INFO 'Cost_estimate' variables to
'double' as the full power of Cost_estimate was not used for these and
thus just caused storage and performance overhead.
- Changed cost_for_index_read() to use 'worst_seeks' to only limit
IO, not the number of table accesses. With this patch worst_seeks is
probably not needed anymore, but I kept it around just in case.
- Applying the cost for a filter is now much shorter and easier thanks
to the API changes.
- Adjusted cost for fulltext keys in collaboration with Sergei Golubchik.
- Most test changes caused by this patch are that table scans have
changed to index usage.
- Added ha_seq::keyread_time() and ha_seq::key_scan_time() to make
checking the number of potential IO blocks easier during debugging.
This makes it easier to compare different costs and also allows the
optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt.
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
This makes it easy to apply different cost modifiers in ha_..time()
functions for io and cpu costs.
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
of keys with a given number of ranges and an optional number of blocks
that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine-specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to recompile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with the 'filtered' explain column as we are now
using 'records_out' (the minimum number of rows seen for the table) to
calculate filtering. This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed. These fixes are in the following commits. To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
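As referenced for handler->cost() above, here is a minimal sketch (assumed
names, illustrative constant values) of converting an IO_AND_CPU_COST into a
single cost in milliseconds; the io part is a block count that is weighted
by the cost of one disk read and by the fraction of blocks expected to miss
the cache:

  struct IO_AND_CPU_COST { double io; double cpu; };   // io = blocks, cpu = ms

  static const double DISK_READ_COST=  0.010;  // ms per block read (illustrative)
  static const double DISK_READ_RATIO= 0.02;   // fraction of blocks not cached

  static double cost_in_ms(const IO_AND_CPU_COST &c)
  {
    return c.io * DISK_READ_COST * DISK_READ_RATIO + c.cpu;
  }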
InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
prevents the optimizer from trying to use index-only scans on the
clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
ha_innobase::rnd_pos_time() as the default engine cost functions now
work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
the value the user has given. This is used to change costs from
microseconds (user input) to milliseconds (what the server uses
internally).
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are entered and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate the space used by keys in B-trees. (Before we used numeric
constants.) See the sketch after this list.
- Removed code that assumed that B-trees have costs similar to binary
trees. Replaced it with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path().
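As mentioned for INDEX_BLOCK_FILL_FACTOR_MUL/DIV above, a small sketch of
the assumed usage; the space a key takes in a B-tree block is scaled by the
fill factor (blocks assumed to be about 3/4 full on average):

  static const unsigned INDEX_BLOCK_FILL_FACTOR_MUL= 4;
  static const unsigned INDEX_BLOCK_FILL_FACTOR_DIV= 3;

  /* Estimated space one key uses inside a B-tree block. */
  static inline double key_space_in_btree_block(double avg_key_length)
  {
    return avg_key_length * INDEX_BLOCK_FILL_FACTOR_MUL /
           INDEX_BLOCK_FILL_FACTOR_DIV;
  }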
Bug fixes:
- Some queries did not update last_query_cost (it was 0). Fixed by moving
the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  - default_optimizer_costs: the default costs + changes from the command
    line without an engine specifier.
  - heap_optimizer_costs: heap table costs, used for temporary tables.
  - tmp_table_optimizer_costs: the costs for the default on-disk internal
    temporary table (MyISAM or Aria).
- The engine costs for a table are stored in the table_share. To speed up
accesses, the handler has a pointer to these. The costs are copied
to the table on first access. If one wants to change a cost, one
must first update the global engine cost and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated the following way
(see sql/keycaches.cc for details, and the sketch after this list):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
- On handler open, copy the engine costs from the handlerton to the
TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
the costs for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplified code in best_access_path() by storing all cost results in a
structure. (Idea/suggestion by Igor)
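Below is a simplified sketch (hypothetical helper and type names, not the
actual server code) of the cost-resolution flow described above, from the
global defaults down to the TABLE_SHARE:

  struct OPTIMIZER_COSTS
  {
    double key_lookup_cost;
    double row_lookup_cost;
    // ... one member per optimizer_* cost variable
  };

  struct HandlertonSketch
  {
    OPTIMIZER_COSTS costs;
    void (*update_optimizer_costs)(OPTIMIZER_COSTS *costs);  // engine hook
  };

  struct TableShareSketch { OPTIMIZER_COSTS costs; };

  /* When the handlerton is created. */
  static void init_handlerton_costs(HandlertonSketch *hton,
                                    const OPTIMIZER_COSTS &defaults,
                                    const OPTIMIZER_COSTS *user_set_for_engine)
  {
    hton->costs= defaults;                          // 1. default_optimizer_costs
    if (hton->update_optimizer_costs)
      hton->update_optimizer_costs(&hton->costs);   // 2. engine default costs
    if (user_set_for_engine)
      hton->costs= *user_set_for_engine;            // 3. user overrides (simplified;
                                                    //    only changed members in reality)
  }

  /* On handler open; changing global costs later needs FLUSH TABLES. */
  static void copy_costs_to_share(TableShareSketch *share,
                                  const HandlertonSketch *hton)
  {
    share->costs= hton->costs;
  }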
Added code to support that FORCE INDEX can be used to force an index scan
instead of a full table scan. Currently this code is disabled, but I added
a test to verify that things work if the code is ever enabled.
Other things:
- FORCE INDEX will now work with "Range checked for each record" and
join cache (see main/type_time_6065)
- Removed code ifdef'ed with BAD_OPTIMIZATION (the new cost calculations
should fix this).
- Removed TABLE_LIST->force_index and the comment that it should be removed.
- Added TABLE->force_index_join and use it in the corresponding places.
This means that FORCE INDEX FOR ORDER BY will not affect keys used
in joins anymore.
Removed the TODO that the above should be added.
I still kept TABLE->force_index as it's used in
test_if_cheaper_ordering() and opt_range.cc.
- Removed setting table->force_index when calling test_quick_select() as
it's not needed (force_index is an argument to test_quick_select())
Variables added:
- optimizer_index_block_copy_cost
- optimizer_key_copy_cost
- optimizer_key_next_find_cost
- optimizer_key_compare_cost
- optimizer_row_copy_cost
- optimizer_where_compare_cost
Some renaming of defines was done to make the internal defines similar to
the visible ones:
- TIME_FOR_COMPARE -> WHERE_COST. WHERE_COST was also "inverted" to be
a number between 0 and 1 that is multiplied with the number of accepted
records (similar to other optimizer variables).
- TIME_FOR_COMPARE_IDX -> KEY_COMPARE_COST. This is also inverted,
similar to TIME_FOR_COMPARE.
- TIME_FOR_COMPARE_ROWID -> ROWID_COMPARE_COST. This is also inverted,
similar to TIME_FOR_COMPARE.
All default costs are identical to what they were before this patch.
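A small illustration of the inversion (assuming the old default of 5
comparisons per fetched row): the old defines were divisors, the new ones
are factors between 0 and 1, so the computed cost is unchanged while the new
form can simply be multiplied with the number of accepted records:

  static const double TIME_FOR_COMPARE= 5.0;              // old style divisor
  static const double WHERE_COST= 1.0 / TIME_FOR_COMPARE; // new style factor

  static double old_compare_cost(double rows) { return rows / TIME_FOR_COMPARE; }
  static double new_compare_cost(double rows) { return rows * WHERE_COST; }
  // old_compare_cost(r) == new_compare_cost(r); only the form of the define changed.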
Other things:
- Compare factor in get_merge_buffers_cost() was inverted.
- Changed namespace to static in filesort_utils.cc
Before this patch, when calculating the cost of fetching and using a
row/key from the engine, we took into account the cost of finding a
row or key from the engine, but did not consistently take into account
index-only accesses, clustered keys or covered keys for all access
paths.
The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
considered in best_access_path(). TIME_FOR_COMPARE was used in
calculations in other places, like greedy_search(), but was in some
cases (like scans) applied to a different number of rows than was
accessed.
The cost calculation of row and index scans didn't take into account
the number of rows that were accessed, only the number of accepted
rows.
When using a filter, the cost of index-only reads and the cost of
accessing and disregarding 'filtered rows' were not taken into
account, which made filters cost less than they actually did.
To remedy the above, the following key & row fetch related costs
have been added:
- The cost of fetching and using a row is now split into different costs:
- Key + row fetch cost (as before), but multiplied with the variable
'optimizer_cache_cost' (default 0.5). This allows the user to
tell the optimizer the likelihood of finding the key and row in the
engine cache.
- ROW_COPY_COST, the cost of copying a row from the engine to the
SQL layer or creating a row from the join_cache to the record
buffer. Mostly affects table scan costs.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.
- KEY_COPY_COST, the cost of finding the next key and copying it from
the engine to the SQL layer. This is used when we calculate the cost of
index-only reads. It makes index scans more expensive than before if
they cover a lot of rows. (main.index_merge_myisam)
- KEY_LOOKUP_COST, the cost of finding the first key in a range.
This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- KEY_NEXT_FIND_COST, the cost of finding the next key (and rowid)
when doing an index scan and comparing the rowid to the filter.
Before, this cost was assumed to be 0.
All of the above constants/variables are now tuned to be somewhat in
proportion to their execution complexity relative to each other. There
is a need to tune these further in the future, but that can wait until
the above are made user variables, as that will make tuning much easier.
To make the usage of the above easy, there are new (not virtual)
cost calculation functions in handler:
- ha_read_time(), like read_time(), but takes optimizer_cache_cost into
account.
- ha_read_and_copy_time(), like ha_read_time() but takes
ROW_COPY_COST into account.
- ha_read_and_compare_time(), like ha_read_and_copy_time() but takes
TIME_FOR_COMPARE into account.
- ha_rnd_pos_time(). Read row with row id, taking ROW_COPY_COST
into account. This is used with filesort where we don't need
to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time() but takes
optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but adds
KEY_COPY_COST.
- ha_key_scan_time(), like key_scan_time() but takes
optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds
KEY_COPY_COST & TIME_FOR_COMPARE.
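Below is a minimal sketch (illustrative constant values only, not the real
defaults) of how these constants and the non-virtual ha_*_time() helpers
layer on top of each other; each level adds one of the cost components
described above:

  static const double ROW_COPY_COST=        0.10;  // illustrative values only
  static const double where_compare_cost=   0.20;  // per-row WHERE check
                                                    // (rows / TIME_FOR_COMPARE in the server)
  static const double optimizer_cache_cost= 0.50;  // default mentioned above

  /* Engine cost of fetching 'rows' rows (virtual read_time() in the handler). */
  static double read_time(double rows) { return rows * 1.0; }

  static double ha_read_time(double rows)
  { return read_time(rows) * optimizer_cache_cost; }

  static double ha_read_and_copy_time(double rows)
  { return ha_read_time(rows) + rows * ROW_COPY_COST; }

  static double ha_read_and_compare_time(double rows)
  { return ha_read_and_copy_time(rows) + rows * where_compare_cost; }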
I also added some setup costs for doing different types of scans and
creating temporary tables (on disk and in memory). This encourages
the optimizer to not use these for simple 'a few rows' lookups if
there are adequate key lookup strategies.
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating an in-memory
temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on-disk temporary
table.
When calculating the cost of fetching ranges, we had a cost of
IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
now replaced with 'io_cost * KEY_LOOKUP_COST (1.0) *
optimizer_cache_cost', which matches the cost we use for 'ref' and
other key lookups. The effect is that the cost is now a bit higher
when we have many ranges for a key.
Almost all calculations with TIME_FOR_COMPARE are now done in
best_access_path(). 'JOIN::read_time' now includes the full
cost of finding the rows in the table.
In the result files, many of the changes are now again close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do
everything in one commit).
The above changes showed a lot of inconsistencies in optimizer cost
calculation. The main objective of the other changes was to do the
calculations as similarly (and accurately) as possible and to make
different plans more comparable.
Detailed list of changes:
- Calculate index_only_cost consistently and correctly for all scan
and ref accesses. The row fetch_cost and index_only_cost now
take into account clustered keys, covered keys and index-only
accesses.
- cost_for_index_read now returns both full cost and index_only_cost
- Fixed the cost calculation of get_sweep_read_cost() to match other
similar costs. This is based on the assumption that data is more
often stored on an SSD than on a hard disk.
- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account
TIME_FOR_COMPARE. Now all scan costs take this into
account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
adjusting this one can reduce or increase the cost of index or direct
record lookups. The effect of the default is that key lookups are now
a bit cheaper than before. See the usage of 'optimizer_cache_cost' in
handler.h.
- JOIN_TAB::scan_time() did not take into account index-only scans,
which produced a wrong cost when an index scan was used. Changed
JOIN_TAB::scan_time() to take clustered and covered keys into
consideration. The values are now cached and we only have to call
this function once. Other calls are changed to use the cached
values. Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and
more closely to 'range' calculations. The cost is now lower than
before for small data sets and higher for large data sets as we take
into account how many keys are read (main.opt_trace_selectivity,
main.limit_rows_examined).
- Ensured that index_scan_cost() ==
range(scan_of_all_rows_in_table_using_one_range) +
MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
is a choice between doing a full index scan and a range-index scan
over almost the whole table, then the index scan will be preferred
(no range-read setup cost). (innodb.innodb, main.show_explain,
main.range)
- Fixed that EQ_REF and REF take clustered and covered keys into
account. This changes some plans to use covered or clustered indexes
as these are much cheaper. (main.subselect_mat_cost,
main.state_tables_innodb, main.limit_rows_examined)
- Rowid filter setup cost and filter compare cost now take into
account fetching and checking the rowid (KEY_NEXT_FIND_COST).
(main.partition_pruning, heap.heap_btree, main.log_state)
- Added KEY_NEXT_FIND_COST to
Range_rowid_filter_cost_info::lookup_cost to account for the time
to find and check the next key value against the container.
- Introduced ha_keyread_time(rows) that takes into account finding
the next row and copying the key value to 'record'
(KEY_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over
all rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to
OPT_RANGE.
- keyread_time() cost is slightly reduced to prefer shorter keys.
(main.index_merge_myisam)
- All of the above caused some index_merge combinations to be
rejected because of cost (main.index_intersect). In some cases
'ref' was replaced with index_merge because of the low
cost calculation of get_sweep_read_cost().
- Some index usage moved from PRIMARY to a covering index.
(main.subselect_innodb)
- Changed the cost calculation of filters to take KEY_LOOKUP_COST and
TIME_FOR_COMPARE into account. See sql_select.cc::apply_filter().
Filter parameters and costs are now written to optimizer_trace.
- Don't use matchings_records_in_range() to try to estimate the number
of filtered rows for ranges. The reason is that we want to ensure
that 'range' is calculated similarly to 'ref'. There is also more work
needed to calculate the selectivity when using ranges and when using
ranges and filtering. This causes the filtering column in EXPLAIN
EXTENDED to be 100.00 for some cases where range cannot use filtering.
(main.rowid_filter)
- Introduced ha_scan_time() that takes into account the CPU cost of
finding the next row and copying the row from the engine to
'record'. This causes the cost of table scans to increase slightly and
some tests to change their plan from ALL to RANGE or ALL to ref.
(innodb.innodb_mysql, main.select_pkeycache)
In a few cases where the scan time of very small tables has a lower
cost than a ref or range, things changed from ref/range to ALL.
(main.myisam, main.func_group, main.limit_rows_examined,
main.subselect2)
- Introduced ha_scan_and_compare_time() which is like ha_scan_time()
but also adds the cost of the where clause (TIME_FOR_COMPARE).
- Added a small cost for creating a temporary table for
materialization. This causes some very small tables to use a scan
instead of materialization.
- Added the cost of checking the WHERE clause (TIME_FOR_COMPARE) for the
accepted rows to the ROR costs in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
to ensure that the 'Last_query_cost' status variable contains the
same value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and
handler::read_time(). This should have no effect as it's 1.0 by
default, except for heap that overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is
just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of
preceding tables (as we have to restore all fields for all previous
table combinations when checking the WHERE clause).
The new cost added is
(row_combinations * ROW_COPY_COST * number_of_cached_tables);
see the sketch after this list.
This increases the cost of join buffering in proportion to the
number of tables in the join buffer. One effect is that full scans
are now done earlier as the cost is then smaller.
(main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read as it
caused wrong plans to be created; it preferred JT_EQ_REF even if it
would be much more expensive than a full table scan. A related
issue was that worst_seeks only applied to full lookups, not to
clustered or index-only lookups, which was not consistent. This
caused some plans to use an index scan instead of eq_ref. (main.union)
- Changed federated block size from 4096 to 1500, which is the
typical size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no
caching of rows in the federated engine.
- Added ha_innobase::rnd_pos_time() cost function.
- A lot of extra things added to optimizer trace:
- More costs, especially for materialization and index_merge.
- Make labels more uniform
- Fixed a lot of minor bugs
- Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index,
the cost did not take into account the number of row retrievals
that have to be done or the cost of comparing the rows with the
WHERE clause. The cost calculated would be just a fraction of
the real cost. Now we calculate the cost as we do for ranges
and 'ref'.
- 'Using index for group-by' is used a bit more than before as we
now take into account the WHERE clause cost when comparing
with 'ref' and prefer the method with fewer row combinations.
(main.group_min_max).
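As referenced in the 'scan_with_join_cache' item above, a one-line sketch of
the new join-buffer copy cost (illustrative constant value):

  static const double ROW_COPY_COST= 0.10;   // illustrative value

  /* Cost of restoring the columns of every earlier cached table for each
     row combination that is checked against the WHERE clause. */
  static double join_cache_copy_cost(double row_combinations,
                                     unsigned number_of_cached_tables)
  {
    return row_combinations * ROW_COPY_COST * number_of_cached_tables;
  }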
Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
like in optimize_straight_join() and greedy_search()
- Fixed bug in save_explain_data where we could test for the wrong
index when displaying 'Using index'. This caused some old plans to
show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed bug in get_best_ror_intersect() where 'min_cost' was not
updated, and the cost we compared with was not the one that was
used.
- Fixed very wrong cost calculation for priority queues in
check_if_pq_applicable(). (main.order_by now correctly uses priority
queue)
- When calculating the cost of EQ_REF or REF, we added the cost of
comparing the WHERE clause with the found rows, not with all row
combinations. This made ref and eq_ref be regarded as way too cheap
compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
cost of a JT_REF key. This may be true for an InnoDB primary key, but
not for other unique keys or other engines. Now we use a handler
function to calculate the cost, which allows us to handle
clustered keys, covered keys and non-covered keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
enabled but still changed the 'keyread' variable (which is wrong).
Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong
record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide a correct best_cost to the upper level, which
means that the cost for hash joins was more expensive than calculated
in best_access_path (a difference of 10x * TIME_FOR_COMPARE).
This is fixed in the new code thanks to the fact that we now include
the TIME_FOR_COMPARE cost in 'read_time'.
Other things:
- Added some 'if (thd->trace_started())' to speed up code
- Removed the unused function Cost_estimate::is_zero().
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
(No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h
Notes:
- In main.rowid_filter the join order of order and lineitem is swapped.
This is because the cost of doing a range fetch of lineitem (98 rows) is
almost as big as the whole join of order, lineitem. The filtering will
also ensure that we only have to do very small key fetches of the rows
in lineitem.
- main.index_merge_myisam had a few changes where we are now using
fewer keys for index_merge. This is because index scans are now more
expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock().
This ensures that it is up to date per statement.
Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take cost of
filesort into consideration when table scan is chosen.
(main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update
on a table with 1 row will now use table scan instead of key lookup.
TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and
when there is a ref access but scan is chosen.
For this we have to store the lowest known value for
'accepted_records' in the OPT_RANGE structure.
- Change that records_read does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate
costs. This will fix tests like main.order_by_innodb,
main.single_delete_update
- Extend get_range_limit_read_cost() to take into consideration
cost_for_index_read() if there were no quick keys. This will reduce
the computed cost for ORDER BY with LIMIT in some cases.
(main.innodb_ext_key)
- Fix that we take selectivity into account when counting the number
of rows we have to read when considering using an index scan to
resolve ORDER BY.
- Add new calculation for rnd_pos_time() where we take into account the
benefit of reading multiple rows from the same page.
The idea is to put Item_direct_ref_to_item as a transparent and
permanent wrapper before a string which requires conversion.
That way Item_direct_ref_to_item is the only place where
the pointer to the string item is stored; this pointer can be changed
and restored during PS execution as needed. And if any permanent
(subquery) optimization needs a pointer to the item,
it will use a pointer to the Item_direct_ref_to_item, which is
a permanent item and won't go away.
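Below is a generic sketch of the idea (simplified stand-in classes, not the
server's Item hierarchy): the wrapper is permanent and transparent, owns the
only pointer to the string item, and that pointer can be swapped for a
converted item and restored between executions:

  struct ItemSketch
  {
    virtual ~ItemSketch() {}
    virtual const char *val_str() const= 0;
  };

  struct ItemDirectRefToItemSketch : ItemSketch
  {
    ItemSketch *ref;                          // the only stored pointer to the item

    explicit ItemDirectRefToItemSketch(ItemSketch *item) : ref(item) {}

    const char *val_str() const override      // fully transparent forwarding
    { return ref->val_str(); }

    void change_item(ItemSketch *item)        // e.g. swap in a charset-converted
    { ref= item; }                            // item for one execution
  };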
This is a DELETE-only case. Normally this statement doesn't make inserts,
but DELETE ... FOR PORTION changes that. UPDATE and INSERT initialize
autoinc by calling handler::info(HA_STATUS_AUTO). Also, MyISAM and InnoDB
can lazily initialize it in their update_create_info overrides.
The solution is to initialize autoinc during delete preparation
if a period (DELETE FOR PORTION) is specified.
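A minimal sketch (hypothetical names, not the actual patch) of the fix:
during DELETE preparation, ask the engine for its auto-increment state when
a portion of time is specified, mirroring what UPDATE and INSERT already do:

  static const unsigned HA_STATUS_AUTO= 1U << 0;   // assumed flag value

  struct HandlerSketch
  {
    void info(unsigned flag) { (void) flag; /* fetch stats, incl. next autoinc */ }
  };

  static void prepare_delete(HandlerSketch *file, bool has_period)
  {
    if (has_period)                 // DELETE ... FOR PORTION OF ... may insert rows
      file->info(HA_STATUS_AUTO);   // initialize autoinc before the first insert
  }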
The initial work was done by Kento Takeuchi in his PR #2048;
however, this commit also contains a few technical modifications by
Nikita Malyavin.
Specifically:
Revert "MDEV-29664 Assertion `!n_mysql_tables_in_use' failed in innobase_close_connection"
This reverts commit ba875e9396.
Revert "MDEV-29620 Assertion `next_insert_id == 0' failed in handler::ha_external_lock"
This reverts commit aa08a7442a.
Revert "MDEV-29628 Memory leak after CREATE OR REPLACE with foreign key"
This reverts commit c579d66ba6.
Revert "MDEV-29609 create_not_windows test fails with different result"
This reverts commit cb583b2f1b.
Revert "MDEV-29544 SIGSEGV in HA_CREATE_INFO::finalize_locked_tables"
This reverts commit dcd66c3814.
Revert "MDEV-28933 CREATE OR REPLACE fails to recreate same constraint name"
This reverts commit cf6c517632.
Revert "MDEV-28933 Moved RENAME_CONSTRAINT_IDS to include/sql_funcs.h"
This reverts commit f1e1c1335b.
Revert "MDEV-28956 Locking is broken if CREATE OR REPLACE fails under LOCK TABLES"
This reverts commit a228ec80e3.
Revert "MDEV-25292 gcol.gcol_bugfixes --ps fix"
This reverts commit 24fff8267d.
Revert "MDEV-25292 Disable atomic replace for slave-generated or-replace"
This reverts commit 2af15914cb.
Revert "MDEV-25292 backup_log improved"
This reverts commit 34398a20b5.
Revert "MDEV-25292 Atomic CREATE OR REPLACE TABLE"
This reverts commit 93c8252f02.
Revert "MDEV-25292 Table_name class for (db, table_name, alias)"
This reverts commit d145dda9c7.
Revert "MDEV-25292 ha_table_exists() cleanup and improvement"
This reverts commit 409b8a86de.
Revert "MDEV-25292 Cleanups"
This reverts commit 595dad83ad.
Revert "MDEV-25292 Refactoring: moved select_field_count into Alter_info."
This reverts commit f02af1d229.
Don't set vers_write=false if one versioning column was used explicitly;
instead do vers_update_fields() for the columns that do not have an
explicit value. So, if row_start has a value and row_end does not,
row_end will get the max value by default.
* clarify the help text for --system-versioning-insert-history
* move the vers_write=false check from Item_field::fix_fields()
next to other vers field checks in find_field_in_table()
* move row_start validation from handler::write_row() next to
vers_update_fields()
* make the secure_timestamp check happen in one place only,
extracting it into a function is_set_timestamp_vorbidden()
* overwriting vers fields is an error, just like setting @@timestamp
* don't run vers_insert_history() for every row
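Below is a minimal sketch (simplified stand-in, assumed field handling) of
the behaviour described above: generate values only for the versioning
columns that were not set explicitly, instead of turning versioning writes
off for the whole row:

  #include <cstdint>
  #include <limits>

  struct VersRowSketch
  {
    std::uint64_t row_start, row_end;
    bool row_start_set, row_end_set;      // true if the statement supplied a value

    void vers_update_fields(std::uint64_t now)
    {
      if (!row_start_set)
        row_start= now;                   // default: current transaction time
      if (!row_end_set)                   // e.g. row_start given, row_end not:
        row_end= std::numeric_limits<std::uint64_t>::max();  // row_end gets "max"
    }
  };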
To prevent an ASAN heap-use-after-poison in the MDEV-16549 part of
./mtr --repeat=6 main.derived,
the initialization of Name_resolution_context was cleaned up.
Read the version of the view share when we read the definition, to prevent
simultaneous access to a view table SHARE (and so its MEM_ROOT)
from different threads.