This bug affected queries with IN predicates that contain parameter markers
in the value list. Such queries are executed via prepared statements.
The problem appeared only if the number of elements in the value list
was greater than the value set for the system variable
in_predicate_conversion_threshold.
The patch unconditionally prohibits conversion of an IN predicate to the
equivalent IN predicand if the value list of the IN predicate contains
parameter markers.
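A minimal sketch of the kind of statement affected (the table t1, column a,
values and threshold setting are illustrative, not taken from the original
report): with three parameter markers and the threshold set to 2 the value
list is larger than the threshold, but the conversion is now skipped because
the list contains parameter markers.

    -- t1/a and the values are hypothetical; 3 markers exceed the threshold of 2
    SET SESSION in_predicate_conversion_threshold = 2;
    PREPARE stmt FROM 'SELECT * FROM t1 WHERE a IN (?, ?, ?)';
    SET @v1 = 1, @v2 = 2, @v3 = 3;
    EXECUTE stmt USING @v1, @v2, @v3;
    DEALLOCATE PREPARE stmt;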
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't recalculate the average I/O cost in get_sweep_read_cost() if
avg_io_cost() is not 1.0. In this case we trust the avg_io_cost() from
the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering() where we didn't use keyread
if the index was changed
- Fixed a bug where we didn't use index only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries (see the sketch after this list).
- Fixed that matching_candidates_in_table() uses the same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
(Backported to 10.3, addressed review input)
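A rough, hypothetical illustration of the ref-vs-range change above (the
table, index and constant are made up for this sketch): for a simple equality
condition both 'ref' and 'range' access over the same index are possible, and
the small MULTI_RANGE_READ_SETUP_COST added to the range cost tips the choice
towards 'ref'.

    CREATE TABLE t1 (a INT, b INT, KEY (a));
    -- equality lookup: ref and range over KEY (a) would otherwise cost about
    -- the same; the extra range setup cost makes the optimizer pick ref
    EXPLAIN SELECT b FROM t1 WHERE a = 10;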
Sj_materialization_picker::check_qep(): fix error in cost/fanout
calculations:
- for each join prefix, add #prefix_rows / TIME_FOR_COMPARE to the cost,
like best_extension_by_limited_search does
- Remove the fanout produced by the subquery tables.
- Also take into account join condition selectivity
optimize_wo_join_buffering() (used by LooseScan and FirstMatch):
- also add #prefix_rows / TIME_FOR_COMPARE to the cost of each prefix.
- Also take into account join condition selectivity
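Both of the fixes above apply to semi-join optimization of IN subqueries; a
minimal, assumed example of the query shape they affect (t1, t2 and the
columns are illustrative):

    -- IN subquery eligible for semi-join strategies such as Materialization,
    -- LooseScan and FirstMatch; its cost/fanout is what check_qep() and
    -- optimize_wo_join_buffering() estimate
    EXPLAIN SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2 WHERE t2.c > 0);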
Crash in Field_iterator_table::create_item
When an IN predicate is converted to an IN subquery we have to ensure that
every item in the select list of the subquery has a name and that this name
is unique across the select list.
This was not guaranteed by the code before the patch for MDEV-17222.
If the name of an item in the select list was not set, as happened for
binary constants, the server crashed. If the first row in the IN list
contained the same constant in two different positions, the server returned
an error message.
This was fixed by providing all constants in the first row of the IN list
with generated names.
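A hedged sketch of the kind of statement involved (the table t1, its columns
and the hex values are made up; the threshold is lowered only to force the
conversion described above): the first row of the IN list consists of binary
constants, which previously received no names when the predicate was
rewritten as an IN subquery.

    SET SESSION in_predicate_conversion_threshold = 2;
    -- binary (hex) constants in the first row of the IN list; the conversion
    -- to an IN subquery now gives each of them a generated, unique name
    SELECT * FROM t1 WHERE (a, b) IN ((0x61, 0x62), (0x63, 0x64), (0x65, 0x66));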
Table value constructor shows wrong number of rows
If the specification of a derived table contained a table value constructor
then the optimizer incorrectly estimated the number of rows in the derived
table. This happened because the optimizer did not take into account the
number of rows in the constructor. The wrong estimate could lead to choosing
inefficient execution plans.
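For illustration (a minimal, assumed example; it presumes a server version
that supports table value constructors in derived tables): the derived table
below contains exactly three rows, and with the fix the optimizer's row
estimate for it reflects that count instead of ignoring the constructor.

    -- the derived table tvc has 3 rows; the estimate used for plan choice
    -- (visible in EXPLAIN output) should now be based on that number
    EXPLAIN SELECT * FROM (VALUES (1), (2), (3)) AS tvc;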