- Fix view-protocol: long expressions in SELECT
list should have "expr AS column_name".
- Also, moved the test from subselect*.test to
suite/json/t/json_table.test.
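As a hypothetical illustration (table and column names invented here),
a long expression in the SELECT list gets an explicit alias so that
view-protocol can use it as a column name:

  SELECT CONCAT(a, '-', b, '-', c, '-', a, '-', b, '-', c) AS concat_abc
  FROM t1;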
st_select_lex::update_correlated_cache() fails to take JSON_TABLE
functions in subqueries into account.
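A sketch of the kind of query affected (table t1 and JSON column js are
hypothetical); the subquery is correlated because JSON_TABLE() reads the
outer column t1.js:

  SELECT * FROM t1
  WHERE t1.a IN (SELECT jt.x
                 FROM JSON_TABLE(t1.js, '$[*]'
                        COLUMNS (x INT PATH '$')) AS jt);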
Reviewed by Sergei Petrunia (sergey@mariadb.com)
Using mysql.slow_log as a test table would generate more than
one row if there was more than one row in the table.
Replace this table with an empty table with a PK.
Reviewer: Rex Johnston
The bug is fixed by the patch ported from MySQL. See the comprehensive
description below.
commit 455c4e8810c76430719b1a08a63ca0f69f44678a
Author: Guilhem Bichot <guilhem.bichot@oracle.com>
Date: Fri Mar 13 17:51:27 2015 +0100
Bug#17668844: CRASH/ASSERT AT ITEM_TYPE_HOLDER::VAL_STR IN ITEM.C
We have a predicate of the form:
literal_row <=> (a UNION)
The subquery is constant, so Item_cache objects are used for its
SELECT list.
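A hypothetical query of this shape (not the original test case; t1 and
its columns are invented), where the UNION produces no rows:

  SELECT (1, 2) <=> (SELECT a, b FROM t1 WHERE 0
                     UNION
                     SELECT a, b FROM t1 WHERE 0);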
In order, this happens:
- Item_subselect::fix_fields() calls select_lex_unit::prepare,
where we create Item_type_holder's
(appended to unit->types list), create the tmp table (using type info
found in unit->types), and call fill_item_list() to put the
Item_field's of this table into unit->item_list.
- Item_subselect::fix_length_and_dec() calls set_row() which
makes Item_cache's of the subquery wrap the Item_type_holder's
- When/if a first result row is found for the subquery,
Item_cache's are re-pointed to unit->item_list
(i.e. Item_field objects which reference the UNION's tmp table
columns) (see call to Item_singlerow_subselect::store()).
- In our subquery, no result row is found, so the Item_cache's
still wrap Item_type_holder's; evaluating '<=>' reads the
value of those, but Item_type_holder objects are not expected to be
evaluated.
Fix: instead of putting unit->types into Item_cache, and later
replacing with unit->item_list, put unit->item_list in Item_cache from
the start.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
There were two errors left over from the previous fix.
- subselect.test assumes that mysql.slow_log is empty. This was not
enforced.
- subselect.test removed a file that may not exist (done for safety).
This was fixed by ensuring we don't get a warning if the file does
not exist.
MDEV-30668 Set function aggregated in outer select used in view definition
This patch fixes two bugs concerning views whose specifications contain
subqueries with set functions aggregated in outer selects.
Due to the first bug, such views with implicit grouping were
considered mergeable. This led to wrong result sets for selects from
these views.
Due to the second bug, the aggregation select was determined incorrectly
and this led to bogus error messages.
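A sketch of the problematic pattern (objects are hypothetical): the view
definition contains a subquery in which the set function is aggregated
in the outer select, so the view has implicit grouping:

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT (SELECT MAX(t1.a)) AS m FROM t1;
  SELECT * FROM v1;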
The patch added several test cases for these two bugs and for four other
duplicate bugs.
The patch also enables view-protocol for many other test cases.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
When Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN
(SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE
!(i IN (SELECT ...)) because Item_in_optimizer returns DEFAULT_PRECEDENCE.
It should return the precedence of the inner operation instead.
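For example, for a query like the following (hypothetical names), the
parenthesized form is needed for the printed text to stay equivalent to
the original predicate:

  SELECT * FROM t1 WHERE i NOT IN (SELECT j FROM t2);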
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0. In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering() where we didn't use
keyread if the index was changed
- Fixed a bug where we didn't use index-only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries.
- Fixed that matching_candidates_in_table() uses same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range' (see the
example after this list).
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
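A rough illustration of the ref-over-range preference for simple '='
lookups (hypothetical table; the chosen plan still depends on statistics,
and seq_1_to_1000 assumes the SEQUENCE engine is available):

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  INSERT INTO t1 SELECT seq, seq FROM seq_1_to_1000;
  EXPLAIN SELECT * FROM t1 WHERE a = 5;
  -- expected: type=ref on key 'a' rather than type=range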
mark big_tables deprecated, the server can put temp tables on disk
as needed, avoiding "table full" errors.
in case someone really needs to force a tmp table to be created
on disk from the start, and for testing, allow tmp_memory_table_size
to be set to 0.
fix tests to use that instead (and add a test that it actually
works).
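e.g. (session scope assumed) to force internal tmp tables onto disk
from the start:

  SET tmp_memory_table_size = 0;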
make sure in-memory TREE size limit is never 0 (it's [ab]using
tmp_memory_table_size at the moment)
remove a few sys_vars.*_basic tests
Shift-Reduce conflicts prevented parsing some queries with subqueries that
used set operations when the subqueries occurred in expressions or in IN
predicands.
The grammar rules for query expression were transformed in order to avoid
these conflicts. New grammar rules employ an idea taken from MySQL 8.0.
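One query shape that previously hit these conflicts (tables are
hypothetical; exact coverage depends on the version):

  SELECT * FROM t1
  WHERE t1.a IN ((SELECT a FROM t2 ORDER BY a LIMIT 10)
                 UNION
                 (SELECT a FROM t3));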
(Backported to 10.3, addressed review input)
Sj_materialization_picker::check_qep(): fix error in cost/fanout
calculations:
- for each join prefix, add #prefix_rows / TIME_FOR_COMPARE to the cost,
like best_extension_by_limited_search does
- Remove the fanout produced by the subquery tables.
- Also take into account join condition selectivity
optimize_wo_join_buffering() (used by LooseScan and FirstMatch)
- also add #prefix_rows / TIME_FOR_COMPARE to the cost of each prefix.
- Also take into account join condition selectivity
Parentheses around table names and derived tables should be allowed
in FROM clauses and in some other contexts, as they were in earlier versions.
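For instance, queries like these (hypothetical tables) should parse again:

  SELECT * FROM (t1);
  SELECT * FROM (t1 JOIN t2 ON t1.a = t2.a);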
Returned test queries that used such parentheses in 10.3 to their
original form. Adjusted test results accordingly.
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent on fetching table rows.
In this implementation the choice of which possible filter to apply
(if any) is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
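A hedged illustration (table, data distribution and the resulting plan
are assumptions): with usable ranges on two different indexes, EXPLAIN
may report the filter in the Extra column:

  CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    order_date DATE,
    KEY (customer_id),
    KEY (order_date)
  ) ENGINE=InnoDB;

  EXPLAIN SELECT * FROM orders
  WHERE customer_id BETWEEN 1 AND 100
    AND order_date BETWEEN '2023-01-01' AND '2023-01-31';
  -- if the optimizer finds it cheaper, Extra may include "Using rowid filter"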