Having rows >= 1.0 helps ensure that when we calculate the total rows of
a join, the number of resulting rows will not be smaller after the join.
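As a rough illustration (not the actual server code; names are
simplified), the clamping amounts to:

  #include <algorithm>
  #include <cstdio>

  // A table never contributes fewer than 1.0 rows to the join product.
  static double clamp_rows(double rows_est)
  {
    return std::max(rows_est, 1.0);
  }

  int main()
  {
    double join_rows= 1000.0;       // rows found so far
    double next_table_est= 0.3;     // fractional estimate from statistics
    join_rows*= clamp_rows(next_table_est);
    std::printf("%g\n", join_rows); // prints 1000, not 300
    return 0;
  }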
Changes in test cases:
- Join order change for some tables with few records
- 'Filtered' is much higher for tables with few rows, as one row is a
high percentage of a table with few rows.
Other changes:
- In test_quick_select(), assume that if table->used_stat_records is 0
then the table has 0 rows.
- Fixed prepare_simple_select() to populate table->used_stat_records.
- Ensure that set_statistics_for_tables() doesn't cause used_stat_records
to be 0 when using stat_tables.
- To get blackhole to work with replication, set stats.records to 2 so
that test_quick_select() doesn't assume the table is empty.
The MDEV-25004 test innodb_fts.versioning is omitted because ever since
commit 685d958e38 InnoDB would not allow
writes to a database where the redo log file ib_logfile0 is missing.
(Patch from Monty, slightly amended)
Fix rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and the index-only scan cost
(keyread_tmp) to find out the per-record advantage one gets from
skipping reads of the full table record.
The computations produce wrong result when:
- the #records are "clipped down" with s->worst_seeks or
thd->variables.max_seeks_for_key. keyread_tmp is not clipped
this way so the numbers are not comparable.
- access_factor is negative. This means an index-only read is
cheaper than a non-index-only read.
This patch makes the optimizer not consider Rowid Filtering in
such cases.
The decision is logged in the Optimizer Trace under the
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
When considering whether to use a Rowid Filter with range access,
multiply keyread_tmp by record_count. That way, it is comparable with
the range access's estimate, which is also multiplied by record_count.
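A sketch of the scaling (illustrative names only):

  #include <cstdio>

  // The range access cost is already multiplied by record_count, so the
  // index-only estimate must be scaled the same way before comparing.
  static double comparable_keyread_cost(double keyread_tmp,
                                        double record_count)
  {
    return keyread_tmp * record_count;
  }

  int main()
  {
    std::printf("%g\n", comparable_keyread_cost(10.0, 4.0)); // 40
    return 0;
  }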
MDEV-28073 Slow query performance in MariaDB when using many tables
The idea is to prefer and chain EQ_REF tables (tables that use a
unique key to find a row) when searching for the best table combination.
This significantly reduces the number of row combinations that have to
be examined.
This optimization is enabled by setting optimizer_prune_level=2
(which is now the default).
Implementation:
- optimizer_prune_level has a new level, 2, which enables EQ_REF
optimization in addition to the pruning done by level 1.
Level 2 is now the default.
- Added JOIN::eq_ref_tables that contains bits of tables that could
potentially use EQ_REF access in the query. This is calculated
in sort_and_filter_keyuse().
Under optimizer_prune_level=2:
- When the greedy_optimizer notices that the preceding table was an
EQ_REF table, it tries to add an EQ_REF table next. If an EQ_REF
table exists, only this one will be considered at this level.
We also collect all EQ_REF tables chained through the next levels;
these are ignored at the starting level, as we have already examined
them.
If no EQ_REF table exists, we continue as normal.
This optimization speeds up the greedy_optimizer combination test by
~25%.
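A rough sketch of the heuristic, assuming a simplified bitmap
representation (this is not the actual server code):

  #include <cstdint>
  #include <cstdio>

  typedef uint64_t table_map;  // one bit per table, as in the server

  // If the previous table was joined via EQ_REF, restrict the candidates
  // for the next position to the remaining EQ_REF tables; fall back to
  // the normal exhaustive extension when none is left.
  static table_map next_candidates(table_map remaining,
                                   table_map eq_ref_tables,
                                   bool prev_was_eq_ref)
  {
    if (prev_was_eq_ref && (remaining & eq_ref_tables))
      return remaining & eq_ref_tables;
    return remaining;
  }

  int main()
  {
    table_map remaining= 0xE, eq_ref= 0x6;
    std::printf("%llx\n", (unsigned long long)
                next_candidates(remaining, eq_ref, true)); // 6
    return 0;
  }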
Other things:
- I ported the MySQL 5.7 changes to greedy_optimizer.test to MariaDB
to be able to ensure we can handle all cases that MySQL can.
- I have run all tests with --mysqld=--optimizer_prune_level=1 to verify
that there were no test changes.
MDEV-28073 Slow query performance in MariaDB when using many tables
The faster we can find a good query plan, the more options we have for
finding and pruning (ignoring) bad plans.
This patch adds sorting of plans to best_extension_by_limited_search().
The plans from best_access_path() are sorted according to the number
of found rows. This allows us to find 'good tables' faster and we are
thus able to eliminate 'bad plans' faster.
One side effect of this patch is that if two tables have equal cost,
the table that was used earlier in the query is preferred.
This allows users to improve plans by reordering eq_ref tables into the
order they would like them to be used.
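A minimal sketch of the sorting step, with POSITION reduced to a
stand-in struct; a stable sort is what preserves the original query
order for plans with equal estimates:

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  struct POSITION { double records_read; int table_no; };

  static void sort_by_found_rows(std::vector<POSITION> &pos)
  {
    std::stable_sort(pos.begin(), pos.end(),
                     [](const POSITION &a, const POSITION &b)
                     { return a.records_read < b.records_read; });
  }

  int main()
  {
    std::vector<POSITION> pos= { {100, 1}, {10, 2}, {10, 3} };
    sort_by_found_rows(pos);
    for (const POSITION &p : pos)
      std::printf("t%d ", p.table_no); // t2 t3 t1
    return 0;
  }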
Result changes caused by the patch:
- Traces are different as now we print the cost for using tables before
we start considering them in the plan.
- Table order is changed for some plans. In most cases this is because
the plans are equal and tables are then sorted according to
their usage in the original query.
- A few plans were changed as the optimizer was able to find a better
plan (one that was pruned by the original code).
Other things:
- Added a new statistic variable: "optimizer_join_prefixes_check_calls",
which counts the number of calls to best_extension_by_limited_search().
This can be used to check the pruning efficiency in greedy_search().
- Added variable "JOIN_TAB::embedded_dependent" to be able to handle
XX IN (SELECT..) in the greedy_optimizer. The idea is that we
should prune a table if any of the tables in embedded_dependent is
not yet read.
- When using many tables in a query, there will be some additional
memory usage, as we need to pre-allocate
table_count*table_count POSITION objects (POSITION is 312
bytes for now) to hold the pre-calculated best_access_path()
information (see the sketch after this list). This memory usage is
offset by the expected performance improvement when using many tables
in a query.
- Removed the code from an earlier patch to keep the table order in
join->best_ref in the original order. This is not needed anymore as we
are now sorting the tables for each best_extension_by_limited_search()
call.
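A rough sketch of the assumed allocation layout (names simplified):

  #include <cstddef>
  #include <vector>

  struct POSITION { double records_read; double read_time; }; // stand-in

  // One row of candidate POSITIONs per search depth, so the results of
  // best_access_path() can be cached and sorted at every level.
  static std::vector<POSITION> alloc_position_matrix(std::size_t table_count)
  {
    return std::vector<POSITION>(table_count * table_count);
  }

  int main()
  {
    std::vector<POSITION> m= alloc_position_matrix(10); // 100 objects
    return (int) (m.size() - 100);                      // 0 on success
  }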
Part of:
MDEV-28073 Slow query performance in MariaDB when using many tables
s->key_dependent has a list of tables that are compared with key fields
in the current table. However, it does not take into account whether a
key field could be resolved by another table.
This is because MariaDB expands 'join_tab->keyuse' to include all generated
comparisons.
For example:
SELECT * from t1,t2,t3 where t1.key=t2.key and t2.key=t3.key
In this case keyuse for t1 includes t2.key and t3.key and key_dependent
contains 't2.map | t3.map'
If best_extension_by_limited_search() considers the order t2,t1, then
t1's key is fully defined, but we cannot prune any plans as
s->key_dependent indicates that t3 is still needed.
Fixed by calculating in best_access_path() the current key_dependent
map of tables that are needed to satisfy all keys. This allows us to
prune more bad plans earlier, as soon as all keys can be used.
We also set key_dependent to 0 if we found an EQ_REF key, as this is an
optimal key for the table and there is no reason to check more keys.
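A minimal sketch of the recomputation, using the t1,t2,t3 example above
(the bitmap names are stand-ins, not the server's):

  #include <cstdint>
  #include <cstdio>

  typedef uint64_t table_map;

  struct KeyuseEntry { table_map used_tables; bool is_eq_ref; };

  // Recompute which still-unread tables are needed before the candidate's
  // keys become usable. 0 means all keys can be used (or an EQ_REF key is
  // fully satisfied), which is the signal that pruning is safe.
  static table_map current_key_dependent(const KeyuseEntry *keyuse, int n,
                                         table_map tables_read)
  {
    table_map needed= 0;
    for (int i= 0; i < n; i++)
    {
      table_map missing= keyuse[i].used_tables & ~tables_read;
      if (keyuse[i].is_eq_ref && missing == 0)
        return 0;  // optimal key found, no reason to check more keys
      needed|= missing;
    }
    return needed;
  }

  int main()
  {
    // t1's key can be satisfied by t2 (bit 0x2) or by t3 (bit 0x4)
    KeyuseEntry keyuse[]= { { 0x2, true }, { 0x4, true } };
    // with t2 already read, t1's key is fully defined: prints 0
    std::printf("%llx\n", (unsigned long long)
                current_key_dependent(keyuse, 2, 0x2));
    return 0;
  }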
In Histogram_json_hb::point_selectivity(), return a selectivity of 0.0
when the histogram says so.
The logic of "Do not return a 0.0 estimate as it causes a
multiply-by-zero meltdown in cost and cardinality calculations" is moved
into records_in_column_ranges(), where it is done *once* per column pair
(as opposed to once per range, which can cause the error to add up
to a large number when there are many ranges).
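A sketch of where the clamp now lives; MIN_SELECTIVITY is an assumed
placeholder, not the server's actual constant:

  #include <cstdio>

  // Per-range estimates may legitimately be 0.0; the guard against a
  // multiply-by-zero meltdown is applied once per column, after summing.
  static double column_selectivity(const double *range_sel, int n_ranges)
  {
    double total= 0.0;
    for (int i= 0; i < n_ranges; i++)
      total+= range_sel[i];
    const double MIN_SELECTIVITY= 1e-12;  // assumed lower bound
    return total > MIN_SELECTIVITY ? total : MIN_SELECTIVITY;
  }

  int main()
  {
    double ranges[]= { 0.0, 0.0, 0.001 };  // empty ranges add no error
    std::printf("%g\n", column_selectivity(ranges, 3)); // 0.001
    return 0;
  }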
This essentially reverts commit 4e89ec6692
and only disables InnoDB persistent statistics for tests where it is
desirable. By design, InnoDB persistent statistics will not be updated
except by ANALYZE TABLE or by STATS_AUTO_RECALC.
The internal transactions that update persistent InnoDB statistics
in background tasks (with innodb_stats_auto_recalc=ON) may cause
nondeterministic query plans or interfere with some tests that deal
with other InnoDB internals, such as the purge of transaction history.
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0. In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering() where we didn't use
keyread if the index was changed.
- Fixed a bug where we didn't use index-only read when using
order-by-index.
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and is not suitable
for HEAP. The effect was that HEAP preferred table scans over ranges for
btree indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries (see the sketch after this
list).
- Fixed that matching_candidates_in_table() uses the same number of
records as the rest of the optimizer.
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
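A rough sketch of the cost pattern these changes follow; the constant
and the names are illustrative, not the server's:

  #include <cstdio>

  // I/O-bound costs scale with the handler's avg_io_cost() (1.0 for
  // regular disk tables, lower for HEAP).
  static double scan_cost(double block_io_count, double avg_io_cost)
  {
    return block_io_count * avg_io_cost;
  }

  // Range access pays a small fixed setup cost so that ref wins for
  // simple '=' ranges of otherwise equal cost.
  static double range_cost(double ref_equivalent_cost)
  {
    const double MULTI_RANGE_READ_SETUP_COST= 0.2;  // assumed value
    return ref_equivalent_cost + MULTI_RANGE_READ_SETUP_COST;
  }

  int main()
  {
    std::printf("ref=%g range=%g\n", 5.0, range_cost(5.0));
    return 0;
  }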
Remove usage of the deprecated variable storage_engine. It was
deprecated in 5.5 but never issued a deprecation warning. Make it issue
a warning in 10.5.1. It is replaced with default_storage_engine.
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
Change the defaults:
-histogram_size=0
+histogram_size=254
-histogram_type=SINGLE_PREC_HB
+histogram_type=DOUBLE_PREC_HB
Adjust the testcases:
- Some have ignorable changes in EXPLAIN outputs and
more counter increments due to EITS table reads.
- Testcases that meaningfully depend on the old defaults
are changed to use the old values.
The ALTER statement changed the THD structure by setting the value to
FIELD_CHECK_WARN and then not resetting it back. This led ANALYZE to
throw a warning which it previously did not.
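A minimal sketch of the save/restore fix pattern, with THD and the
check-field values reduced to stand-ins:

  #include <cstdio>

  enum check_fields { FIELD_CHECK_IGNORE, FIELD_CHECK_WARN };
  struct THD_like { check_fields count_cuted_fields; };

  static void alter_copy_rows(THD_like *thd)
  {
    check_fields saved= thd->count_cuted_fields; // remember caller's value
    thd->count_cuted_fields= FIELD_CHECK_WARN;
    /* ... copy rows, possibly producing warnings ... */
    thd->count_cuted_fields= saved;              // the missing reset
  }

  int main()
  {
    THD_like thd= { FIELD_CHECK_IGNORE };
    alter_copy_rows(&thd);
    std::printf("%d\n", thd.count_cuted_fields);  // 0: value restored
    return 0;
  }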
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode - should give an error.
(2) non-strict sql mode - should give warnings only.
(3) ALTER IGNORE TABLE command - should give warnings only.