Commit graph

60 commits

Author SHA1 Message Date
Marko Mäkelä
82230aa423 Merge 10.9 into 10.10 2023-06-07 14:48:37 +03:00
Marko Mäkelä
31be25349f Merge 10.6 into 10.9 2023-05-25 09:24:32 +03:00
Marko Mäkelä
270eeeb523 Merge 10.5 into 10.6 2023-05-23 12:25:39 +03:00
Monty
c7e04af8bc Update main.selectivity test and results 2023-05-23 09:16:36 +03:00
Oleksandr Byelkin
16e5bc4cbc Merge branch '10.9' into 10.10 2023-05-04 11:50:34 +02:00
Oleksandr Byelkin
85997115c2 Merge branch '10.6' into 10.8 2023-05-04 11:34:11 +02:00
Oleksandr Byelkin
5dc0f3dafa Merge branch '10.5' into 10.6 2023-05-04 11:26:45 +02:00
Oleksandr Byelkin
749c512911 Merge branch '10.4' into 10.5 2023-05-04 11:23:37 +02:00
Oleksandr Byelkin
62ec258f10 Fix the selectivity test to behave correctly with the embedded and view protocols. 2023-05-04 11:20:35 +02:00
Oleksandr Byelkin
652d54bf00 Merge branch '10.5' into 10.6 2023-05-04 07:36:37 +02:00
Oleksandr Byelkin
13a294a2c9 Merge branch '10.9' into 10.10 2023-05-03 14:09:13 +02:00
Oleksandr Byelkin
f0f1f2de0e Merge branch '10.6' into 10.8 2023-05-03 11:33:57 +02:00
Oleksandr Byelkin
043d69bbcc Merge branch '10.5' into 10.6 2023-05-03 09:51:25 +02:00
Oleksandr Byelkin
edf8ce5b97 Merge branch 'bb-10.4-release' into bb-10.5-release 2023-05-02 13:54:54 +02:00
Sergei Petrunia
85cc831880 MDEV-31067: selectivity_from_histogram >1.0 for a DOUBLE_PREC_HB histogram
Variant #2.

When Histogram::point_selectivity() sees that the point value of interest
falls into one bucket, it tries to guess whether the bucket has many
different (unpopular) values or a few popular values. (The number of
rows is fixed, as it's a Height-balanced histogram).
The basis for this guess is the "width" of the value range the bucket
covers. Buckets covering wider value ranges are assumed to contain
values with proportionally lower frequencies.

This is just [brave] guesswork. For a very narrow bucket, it may
produce an estimate that is larger than the total number of rows in
the bucket, or even in the whole table.

Remove the guesswork and replace it with basic logic: return
either the per-table average selectivity of col=const or the selectivity
of one bucket, whichever is lower (see the sketch below).
2023-04-28 22:39:25 +03:00
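
A minimal C++ sketch of the replacement rule, with an illustrative name
and parameters rather than the real Histogram::point_selectivity()
signature:

    #include <algorithm>
    #include <cstddef>

    // Hypothetical helper, not the server's API: cap the point estimate by
    // one bucket's share of the rows instead of guessing from bucket width.
    double point_selectivity_estimate(double avg_sel,   // avg sel. of col=const
                                      std::size_t n_buckets)
    {
      // Each height-balanced bucket holds roughly 1/n_buckets of the rows.
      double bucket_sel= 1.0 / static_cast<double>(n_buckets);
      // Return whichever bound is lower; never guess above one bucket's share.
      return std::min(avg_sel, bucket_sel);
    }
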
Marko Mäkelä
a009280e60 Merge 10.9 into 10.10 2023-04-14 12:24:14 +03:00
Marko Mäkelä
1d1e0ab2cc Merge 10.6 into 10.8 2023-04-12 15:50:08 +03:00
Marko Mäkelä
5bada1246d Merge 10.5 into 10.6 2023-04-11 16:15:19 +03:00
Oleksandr Byelkin
ac5a534a4c Merge remote-tracking branch '10.4' into 10.5 2023-03-31 21:32:41 +02:00
Sergei Petrunia
2e6872791a MDEV-30218: Incorrect optimization for rowid_filtering, correction
Final corrections:
- Remove incorrect tracing, "rowid_filter_skipped"
- Put the worst_seeks sanity check back
2023-02-15 16:28:08 +01:00
Igor Babaev
d1a46c68cd MDEV-30218 Incorrect optimization for rowid_filtering
A correction to the previous patch for this MDEV.
2023-02-15 16:28:08 +01:00
Marko Mäkelä
cae5a0328b Merge 10.9 into 10.10 2023-01-10 15:06:25 +02:00
Marko Mäkelä
92c8d6f168 Merge 10.7 into 10.8
The MDEV-25004 test innodb_fts.versioning is omitted because, ever since
commit 685d958e38, InnoDB would not allow writes to a database where the
redo log file ib_logfile0 is missing.
2023-01-10 14:42:50 +02:00
Marko Mäkelä
e441c32a0b Merge 10.5 into 10.6 2023-01-03 18:13:11 +02:00
Marko Mäkelä
8b9b4ab3f5 Merge 10.4 into 10.5 2023-01-03 17:08:42 +02:00
Sergei Petrunia
87eccd78a7 MDEV-30218: Incorrect optimization for rowid_filtering
(Patch from Monty, slightly amended)

Fix rowid filtering optimization in best_access_path():

== Ref access + rowid filtering ==
The cost computations compare #records and the index-only scan cost
(keyread_tmp) to find out the per-record advantage one will get by
skipping reads of the full table record.

The computations produce a wrong result when:

- the #records are "clipped down" with s->worst_seeks or
  thd->variables.max_seeks_for_key. keyread_tmp is not clipped
  this way so the numbers are not comparable.

- access_factor is negative. This means an index_only read is
  cheaper than a non-index-only read.

This patch makes the optimizer not consider Rowid Filtering in
such cases (see the sketch below).
The decision is logged in the Optimizer Trace under the
"rowid_filter_skipped" name.

== Range access + rowid filtering ==
When considering whether to use a Rowid Filter with range access, multiply
keyread_tmp by record_count. That way it is comparable with the
range access estimate, which is also multiplied by record_count.
2022-12-13 13:45:54 +02:00
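
An illustrative C++ sketch of the two guards for the ref-access case; the
function and parameter names are made up, not the actual best_access_path()
code:

    // Skip Rowid Filtering when the estimates are not comparable or when
    // access_factor offers no advantage to exploit (illustrative only).
    bool rowid_filter_applicable(double records,           // possibly clipped
                                 double unclipped_records, // before clipping
                                 double access_factor)
    {
      // Guard 1: #records was clipped by worst_seeks / max_seeks_for_key
      // while keyread_tmp was not, so the two numbers are not comparable.
      if (records < unclipped_records)
        return false;                 // traced as "rowid_filter_skipped"
      // Guard 2: a negative access_factor is the other case in which the
      // patch stops considering Rowid Filtering.
      if (access_factor < 0)
        return false;
      return true;
    }
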
Oleksandr Byelkin
f8997c68fe Merge branch '10.9' into 10.10 2022-11-03 11:47:10 +01:00
Oleksandr Byelkin
33825755c7 Merge branch '10.7' into 10.8 2022-11-02 16:07:38 +01:00
Oleksandr Byelkin
e5aa58190f Merge branch '10.5' into 10.6 2022-11-02 14:33:20 +01:00
Oleksandr Byelkin
4519b42e61 Merge branch '10.4' into 10.5 2022-10-26 15:26:06 +02:00
Monty
b3c74bdc1f Improve pruning in greedy_search by sorting tables during search
MDEV-28073 Slow query performance in MariaDB when using many tables

The faster we can find a good query plan, the more options we have for
finding and pruning (ignoring) bad plans.

This patch adds sorting of plans to best_extension_by_limited_search().
The plans from best_access_path() are sorted according to the number
of found rows, as sketched below.  This allows us to find 'good tables'
faster and thus eliminate 'bad plans' faster.

One side effect of this patch is that if two tables have equal cost,
the table that was used earlier in the query is preferred.
This allows users to improve plans by reordering eq_ref tables into the
order in which they would like them to be used.

Result changes caused by the patch:
- Traces are different, as we now print the cost of using tables before
  we start considering them in the plan.
- The table order has changed for some plans. In most cases this is because
  the plans are equal and the tables are then sorted according to
  their usage in the original query.
- A few plans were changed as the optimizer was able to find a better
  plan (one that was pruned by the original code).

Other things:

- Added a new statistic variable: "optimizer_join_prefixes_check_calls",
  which counts the number of calls to best_extension_by_limited_search().
  This can be used to check the prune efficiency in greedy_search().
- Added variable "JOIN_TAB::embedded_dependent" to be able to handle
  XX IN (SELECT..) in the greedy_optimizer.  The idea is that we
  should prune a table if any of the tables in embedded_dependent is
  not yet read.
- When using many tables in a query, there will be some additional
  memory usage, as we need to pre-allocate a table of
  table_count*table_count*sizeof(POSITION) bytes (POSITION is currently
  312 bytes) to hold the pre-calculated best_access_path()
  information.  This memory usage is offset by the expected
  performance improvement when using many tables in a query.
- Removed the code from an earlier patch that kept the tables in
  join->best_ref in their original order.  This is not needed anymore as we
  now sort the tables for each best_extension_by_limited_search()
  call.
2022-07-26 22:27:28 +07:00
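
A hypothetical C++ sketch of the sorting step (the struct and function are
illustrative, not the server's types):

    #include <algorithm>
    #include <vector>

    struct Candidate
    {
      int    table_no;        // position of the table in the original query
      double records_found;   // estimate from best_access_path()
    };

    // Visit cheap extensions first so that a good bound is found early and
    // more expensive plan prefixes can be pruned.
    void sort_candidates(std::vector<Candidate> &cands)
    {
      std::sort(cands.begin(), cands.end(),
                [](const Candidate &a, const Candidate &b)
                {
                  if (a.records_found != b.records_found)
                    return a.records_found < b.records_found;
                  // Equal estimates: prefer the table used earlier.
                  return a.table_no < b.table_no;
                });
    }
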
Marko Mäkelä
0af9346079 Merge 10.7 into 10.8 2022-06-09 14:37:53 +03:00
Michael Widenius
64f24b776d greedy_search() and best_extension_by_limited_search() scrambled table order
best_extension_by_limited_search() assumes that tables should be sorted
according to size to be able to quickly disregard bad plans. However, the
current usage of swap_variables() will change the table order to an
unsorted one for the next recursive call. This breaks the assumption and
causes performance issues when using many tables (we have to examine
many more plans).

This patch fixes this by ensuring that the original table order is kept
for the not yet used tables when best_extension_by_limited_search() is
called.

This was done by always calling swap_variables() for each table and
restoring the original table order at exit, as sketched below.

Some tests changed:
- In a majority of the tests the change was that two "identical tables"
  were swapped and the optimizer now uses the first/smaller table.
- In a few tests the table order was changed. The new plan looks identical
  or slightly better than the original.
2022-06-07 20:43:10 +03:00
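
An illustrative C++ sketch of the property the patch preserves; std::rotate
stands in for the server's swap_variables() bookkeeping:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Try each remaining table at the current position while keeping the
    // not-yet-used tables in their original (sorted) order, and restore
    // that order before the next candidate / on exit.
    void explore(std::vector<int> &tables, std::size_t depth)
    {
      if (depth == tables.size())
        return;                                   // a complete join prefix
      for (std::size_t i= depth; i < tables.size(); i++)
      {
        std::rotate(tables.begin() + depth, tables.begin() + i,
                    tables.begin() + i + 1);      // bring tables[i] forward
        explore(tables, depth + 1);
        std::rotate(tables.begin() + depth, tables.begin() + depth + 1,
                    tables.begin() + i + 1);      // restore the original order
      }
    }
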
Sergei Petrunia
dae20dde4e MDEV-26901: Estimation for filtered rows less precise ... #4
In Histogram_json_hb::point_selectivity(), do return a selectivity of 0.0
when the histogram says so.

The logic of "Do not return 0.0 estimate as it causes a multiply-by-zero
meltdown in cost and cardinality calculations" is moved into
records_in_column_ranges() where it is one *once* per column pair (as
opposed to doing once per range, which can cause the error to add-up
to large number when there are many ranges)
2022-01-19 18:10:12 +03:00
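
A hypothetical C++ sketch of the relocated guard (names are illustrative,
not the actual records_in_column_ranges() code):

    #include <algorithm>
    #include <cstddef>

    // Individual ranges may contribute a 0.0 selectivity straight from the
    // histogram; only the per-column sum is clamped, once, to avoid a
    // multiply-by-zero in later cost and cardinality calculations.
    double column_selectivity(const double *range_sel, std::size_t n_ranges,
                              double min_sel)   // e.g. selectivity of one row
    {
      double total= 0.0;
      for (std::size_t i= 0; i < n_ranges; i++)
        total+= range_sel[i];              // a 0.0 histogram estimate is kept
      return std::max(total, min_sel);     // clamp once per column, not per range
    }
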
Sergei Petrunia
be55ad0d34 MDEV-27062: Make histogram_type=JSON_HB the new default 2022-01-19 18:10:11 +03:00
Sergei Golubchik
f33e57a9e6 Merge branch '10.4' into 10.5 2021-02-23 13:06:22 +01:00
Sergei Golubchik
e841957416 Merge branch '10.3' into 10.4 2021-02-23 09:25:57 +01:00
Sergei Golubchik
0ab1e3914c Merge branch '10.2' into 10.3 2021-02-22 22:42:27 +01:00
Marko Mäkelä
4a0b56f604 Merge 10.4 into 10.5 2020-05-31 10:28:59 +03:00
Marko Mäkelä
6da14d7b4a Merge 10.3 into 10.4 2020-05-30 11:04:27 +03:00
Marko Mäkelä
dad7a8ee7d Merge 10.2 into 10.3 2020-05-27 17:10:39 +03:00
Monty
eb483c5181 Updated optimizer costs in multi_range_read_info_const() and sql_select.cc
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
  not 1.0.  In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
  TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug when using test_if_cheaper_ordering() where we didn't use
  keyread if the index was changed
- Fixed a bug where we didn't use index-only read when using order-by-index
- Added keyread_time() to HEAP.
  The default keyread_time() was optimized for blocks and not suitable for
  HEAP. The effect was that HEAP preferred table scans over ranges for btree
  indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges
  Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
  we favor ref over range for simple queries (see the sketch below).
- Fixed that matching_candidates_in_table() uses the same number of records
  as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
  HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
  This was to ensure that handler::keyread_time() doesn't give a
  higher cost for heap tables than for normal tables. One effect of
  this is that heap and derived tables stored in heap will prefer
  key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
  multi_range_read_info_const(). All index cost calculation is now
  done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
  so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
2020-03-27 03:58:32 +02:00
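
An illustrative C++ sketch of the ref-vs-range tie-break item above; the
constant's value and the helper names are made up, not the server's
definitions:

    // A small fixed setup cost on range scans breaks ties so that 'ref'
    // wins for simple '=' lookups of otherwise equal cost.
    static const double MULTI_RANGE_READ_SETUP_COST_SKETCH= 0.01;

    double range_scan_cost(double io_and_compare_cost)
    {
      return io_and_compare_cost + MULTI_RANGE_READ_SETUP_COST_SKETCH;
    }

    double ref_access_cost(double io_and_compare_cost)
    {
      return io_and_compare_cost;    // no setup cost; ties go to 'ref'
    }
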
Marko Mäkelä
33cb10d4e9 Merge 10.3 into 10.4 2019-11-12 16:55:44 +02:00
Andrei Elkin
d103c5a489 merge 10.2->10.3 with conflict resolutions 2019-11-11 16:28:21 +02:00
Marko Mäkelä
928abd6967 Merge 10.3 into 10.4 2019-11-06 13:44:56 +02:00
Marko Mäkelä
908ca4668d Merge 10.2 into 10.3 2019-11-06 13:14:31 +02:00
Marko Mäkelä
5a92ccbaea Merge 10.3 into 10.4
Disable MDEV-20576 assertions until MDEV-20595 has been fixed.
2019-09-23 17:35:29 +03:00
Marko Mäkelä
c016ea660e Merge 10.2 into 10.3 2019-09-23 10:25:34 +03:00
Sergei Golubchik
38e21c7000 various test failures post-merge 2019-09-06 20:04:47 +02:00
Sergei Golubchik
244f0e6dd8 Merge branch '10.3' into 10.4 2019-09-06 11:53:10 +02:00