When a prepared statement parameter '?' is used in a CTE that is referenced
multiple times, the following happens:
- The CTE definition is re-parsed multiple times.
- There are multiple Item_param objects referring to the same "?" in
the original query.
- Prepared_statement::param has a pointer to the first of them, the
others are "clones".
- When a prepared statement parameter gets its value, it must be passed
  on to the clones with a param->sync_clones() call.
This call is made in insert_params(), etc. It was not made in
insert_params_with_log(). This caused the cloned Item_param objects to
have no value, which confused the query optimizer.
Added the missing call.
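Below is a minimal C++ sketch of the mechanism (the types are simplified
stand-ins, not the real server classes): after a parameter receives its
value, sync_clones() copies it into every clone created by re-parsing.

  #include <vector>

  struct Item_param
  {
    long long value= 0;
    bool has_value= false;
    std::vector<Item_param*> clones;   // other Item_params for the same '?'

    void set_int(long long v) { value= v; has_value= true; }

    // Propagate this parameter's value to all of its clones.
    void sync_clones()
    {
      for (Item_param *c : clones)
      {
        c->value= value;
        c->has_value= has_value;
      }
    }
  };

  // The shape of the fix: insert_params_with_log() now also syncs.
  static void insert_params_with_log(std::vector<Item_param*> &params)
  {
    for (Item_param *param : params)
    {
      param->set_int(42);     // value comes from the client in reality
      param->sync_clones();   // the call that was missing
    }
  }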
In the case of a SELECT without tables which returns either 0 or 1 rows,
JOIN::exec_inner() did not check whether the flag representing
SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned 0, so
SELECT FOUND_ROWS() returned 0. Now the flag is checked: if it is set,
send_records is 1, else 0. 1 is the number of rows that could have been
sent to the client if the SELECT query had SQL_CALC_FOUND_ROWS; it is 0
when no rows were sent because the query did not have SQL_CALC_FOUND_ROWS.
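A hedged sketch of the new logic (simplified stand-in types; the flag
value is illustrative, though OPTION_FOUND_ROWS mirrors the name of the
server's option bit):

  typedef unsigned long long ulonglong;
  static const ulonglong OPTION_FOUND_ROWS= 1ULL << 1;  // illustrative bit

  struct Join
  {
    ulonglong select_options;
    ulonglong send_records;
  };

  static void count_tableless_select_rows(Join *join)
  {
    // Before the fix: join->send_records= 0 unconditionally.
    join->send_records= (join->select_options & OPTION_FOUND_ROWS) ? 1 : 0;
  }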
UPDATE gets access to history records because versioning conditions
are not set for a VIEW. This leads to an endless loop of inserting history
records when the clustered index is rebuilt and ha_rnd_next() returns a
newly inserted history record.
Restore the original behavior of failing on a write-locked table in a
historical query.
Commit 35b679b9 assumed that SELECT_LEX::lock_type influences behavior,
but at this point the table is already locked. The original bug report
was tempesta-tech/mariadb#102.
A temporary table is needed for window function computation, but if only
a named window specification is used and there is no window function, then
there is no need to create a temporary table, as there is no stage that
computes window functions.
1. Code simplification:
Item_default_value handled all these values:
a. DEFAULT(field)
b. DEFAULT
c. IGNORE
and had various conditions to distinguish (a) from (b) and from (c).
Introducing a new abstract class Item_contextually_typed_value_specification,
to handle (b) and (c), so the hierarchy now looks as follows:
Item
Item_result_field
Item_ident
Item_field
Item_default_value - DEFAULT(field)
Item_contextually_typed_value_specification
Item_default_specification - DEFAULT
Item_ignore_specification - IGNORE
2. Introducing a new virtual method is_evaluable_expression() to
determine if an Item is:
- a normal expression, so its val_xxx()/get_date() methods can be called
- or just an expression substitute, whose value methods cannot be called.
3. Disallowing Items that are not evaluable expressions in table value
constructors.
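A compact sketch of the hierarchy and the new virtual method (class names
follow the commit message; the bodies are illustrative stubs):

  struct Item
  {
    virtual ~Item() = default;
    // True for normal expressions whose val_xxx()/get_date() may be called.
    virtual bool is_evaluable_expression() const { return true; }
  };

  struct Item_contextually_typed_value_specification : Item
  {
    // DEFAULT and IGNORE are placeholders with no value of their own.
    bool is_evaluable_expression() const override { return false; }
  };

  struct Item_default_specification
    : Item_contextually_typed_value_specification {};

  struct Item_ignore_specification
    : Item_contextually_typed_value_specification {};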
For a unique key, if all the keyparts are NOT NULL or the predicates
involving the keyparts are NULL-rejecting, then we can use EQ_REF access
instead of ref access with the unique key.
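An illustrative sketch of that rule (hypothetical helper and field names):

  #include <vector>

  struct Keypart
  {
    bool not_null;                      // declared NOT NULL
    bool predicate_is_null_rejecting;   // e.g. t.a = x, t.a > x
  };

  // EQ_REF is safe when no keypart can match a NULL: at most one row
  // can satisfy the unique key lookup.
  static bool can_use_eq_ref(const std::vector<Keypart> &keyparts)
  {
    for (const Keypart &kp : keyparts)
      if (!kp.not_null && !kp.predicate_is_null_rejecting)
        return false;                   // stay with plain ref access
    return true;
  }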
It was:
implicit conversion from 'ha_rows' (aka 'unsigned long long') to 'double'
changes value from 18446744073709551615 to 18446744073709551616
Follow what JOIN::get_examined_rows() does for similar code.
[Variant 2 of the fix: collect the attached conditions]
Problem:
make_join_select() has a section of code which starts with
"We plan to scan all rows. Check again if we should use an index."
The code in that section will [unnecessarily] re-run the range
optimizer using this condition:
condition_attached_to_current_table AND current_table's_ON_expr
Note that the original invocation of range optimizer in
make_join_statistics was done using the whole select's WHERE condition.
Taking the whole select's WHERE condition and using multiple-equalities
allowed the range optimizer to infer more range restrictions.
The fix:
- Do range optimization using a condition that is an AND of this table's
condition and all of the previous tables' conditions.
- Also, fix the range optimizer to prefer SEL_ARGs with type=KEY_RANGE
over SEL_ARGs with type=MAYBE_KEY, regardless of the key part.
Computing
key_and(
SEL_ARG(type=MAYBE_KEY, key_part=1),
SEL_ARG(type=KEY_RANGE, key_part=2)
)
will now produce the SEL_ARG with type=KEY_RANGE.
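A minimal sketch of the new preference (a toy SEL_ARG stand-in; the real
function merges whole trees):

  struct SEL_ARG
  {
    enum Type { MAYBE_KEY, KEY_RANGE } type;
    int key_part;
  };

  static SEL_ARG *key_and(SEL_ARG *a, SEL_ARG *b)
  {
    // Prefer the precise range tree over the "maybe usable" placeholder,
    // regardless of which key part each argument covers.
    if (a->type == SEL_ARG::MAYBE_KEY && b->type == SEL_ARG::KEY_RANGE)
      return b;
    if (b->type == SEL_ARG::MAYBE_KEY && a->type == SEL_ARG::KEY_RANGE)
      return a;
    return a;   // tree merging elided in this sketch
  }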
In this scenario:
- There is a possible range access for table T
- And there is a ref access on the same index which uses fewer key parts
- The join optimizer picks the ref access (because it is cheaper)
- make_join_select applies this heuristic to switch to range:
/* Range uses longer key; Use this instead of ref on key */
Join buffering will be used without JOIN_TAB::make_scan_filter() having
been called. This means that conditions that should be checked when
reading table T will instead be checked after T is joined with the
contents of the join buffer.
Fixed this by adding a make_scan_filter() check.
(updated patch after backport to 10.3)
(Fix testcase on Windows)
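A hedged sketch of the added check (simplified stand-in types):

  struct JOIN_TAB
  {
    bool uses_join_buffer;
    // Builds the filter of conditions to check while scanning the table;
    // returns true on error.
    bool make_scan_filter() { return false; }
  };

  static bool switch_ref_to_range(JOIN_TAB *tab)
  {
    /* ... replace the ref access with the longer-key range access ... */
    // The fix: with join buffering, attach the scan filter so T's
    // conditions are checked while reading T, not after buffering.
    if (tab->uses_join_buffer && tab->make_scan_filter())
      return true;    // error
    return false;
  }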
The issue here is that for degenerate joins we should execute the window
function, but it is not executed in all cases.
To get the window function values, the window function always needs to be
executed. This currently does not happen in a few cases
where the join would return 0 or 1 rows, such as:
1) IMPOSSIBLE WHERE
2) MIN/MAX optimization
3) EMPTY CONST TABLE
The fix is to make sure that window functions get executed
and the temporary table is set up for the execution of window functions.
The query requires 2 temporary tables for execution, and the window
function is always attached to the last temporary table, but in this case
the result field of the window function points to the first temporary
table rather than the last one.
Fixed this by not replacing window function items with the temporary
table items of the first temporary table.
Don't do skip_setup_conds() unless all errors are checked.
Fixes following errors:
ER_PERIOD_NOT_FOUND
ER_VERS_QUERY_IN_PARTITION
ER_VERS_ENGINE_UNSUPPORTED
ER_VERS_NOT_VERSIONED
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().
Additional DML cases in view.test
The issue here is the wrong estimate of the cardinality of a partial
join: the cardinality is too high because the function
table_cond_selectivity() returns the absurd number 100 while selectivity
cannot be greater than 1.
When accessing table t by outer reference t1.a via an index, we do not
perform any range analysis for t. Yet we see that
TABLE::quick_key_parts[key] and TABLE::quick_rows[key] contain non-zero
values though these should have remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init() does not clean
the arrays TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should do so because the TABLE structure created for any
instance of a table can be reused for many queries.
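A hedged sketch of the essence of the fix (the array size is a stand-in
for the server's MAX_KEY constant):

  #include <cstring>

  static const unsigned MAX_KEY= 64;   // stand-in value

  struct TABLE
  {
    unsigned quick_key_parts[MAX_KEY];
    unsigned long long quick_rows[MAX_KEY];

    void init()
    {
      // Without this, a later query can mistake stale values for the
      // results of a range analysis that never ran.
      memset(quick_key_parts, 0, sizeof(quick_key_parts));
      memset(quick_rows, 0, sizeof(quick_rows));
    }
  };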
The function prev_record_reads(), which finds the number of different row
combinations for a subset of a partial join, did not take into account the
selectivity of the tables involved in that subset.
selectivity values fails
After adding the assertion that checks the validity of selectivity values
returned by the function table_cond_selectivity(), a test case from
order_by.test failed. The failure occurred because the range optimizer
could return, as an estimate of the cardinality of the ranges built for an
index, a number exceeding the total number of records in the table.
The second bug is more subtle. It may happen when there are several
indexes with the same prefix defined on the first joined table t accessed
by a constant ref access. In this case the range optimizer estimates the
number of accessed records of t for each usable index, and these
estimates can differ. Only the first of these estimates is taken
into account when the selectivity of the ref access is calculated.
However, the optimizer can later choose a different index that provides
a different estimate. The function table_cond_selectivity() could use
this estimate to discount the selectivity of the ref access. This could
lead to a selectivity value returned by this function that was greater
than 1.
best_access_path() is called from two optimization phases:
1. Plan choice phase, in choose_plan(). Here, the join prefix being
considered is in join->positions[]
2. Plan refinement stage, in fix_semijoin_strategies_for_picked_join_order
Here, the join prefix is in join->best_positions[]
It used to access join->positions[] from stage #2. This didn't cause any
valgrind or asan failures (as join->positions[] had been written to
before), but the effect was similar to reading random data:
the join prefix we've picked (in join->best_positions) could have
nothing in common with the join prefix that was last considered
(in join->positions).
(Backported to 10.3, addressed review input)
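A minimal sketch of the fix's shape (hypothetical simplified signatures;
the real functions take many more arguments): the prefix array is passed
explicitly, so phase 2 reads join->best_positions.

  struct POSITION { double records_read; };
  struct JOIN { POSITION positions[64]; POSITION best_positions[64]; };

  // Costing now receives the join prefix explicitly.
  static double prefix_fanout(const POSITION *prefix, unsigned n)
  {
    double fanout= 1.0;
    for (unsigned i= 0; i < n; i++)
      fanout*= prefix[i].records_read;
    return fanout;
  }

  static double plan_choice_phase(JOIN *join, unsigned n)
  { return prefix_fanout(join->positions, n); }       // phase 1

  static double plan_refinement_phase(JOIN *join, unsigned n)
  { return prefix_fanout(join->best_positions, n); }  // phase 2 (the fix)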
Sj_materialization_picker::check_qep(): fix error in cost/fanout
calculations:
- for each join prefix, add #prefix_rows / TIME_FOR_COMPARE to the cost,
like best_extension_by_limited_search does
- Remove the fanout produced by the subquery tables.
- Also take into account join condition selectivity
optimize_wo_join_buffering() (used by LooseScan and FirstMatch):
- also add #prefix_rows / TIME_FOR_COMPARE to the cost of each prefix.
- Also take into account join condition selectivity (a sketch of the
  corrected prefix step follows below).
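A hedged arithmetic sketch of the corrected per-prefix step
(TIME_FOR_COMPARE mirrors the server constant's name; its value here is
illustrative):

  static const double TIME_FOR_COMPARE= 5.0;   // illustrative value

  struct PrefixState { double rows; double cost; };

  // One extension step, shaped like best_extension_by_limited_search:
  // charge the comparisons for the whole prefix and discount the fanout
  // by the join condition selectivity.
  static PrefixState extend_prefix(PrefixState p, double table_records,
                                   double access_cost,
                                   double cond_selectivity)
  {
    PrefixState next;
    next.cost= p.cost + access_cost + p.rows / TIME_FOR_COMPARE;
    next.rows= p.rows * table_records * cond_selectivity;
    return next;
  }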
The easy fix is currently not possible, because SELECT clones
ha_partition and then closes the clone, which leads to an unclosed
transaction in partitions we forcibly prune out. We could solve this by
closing these partitions (and releasing them from the transaction) in
change_partitions_to_open() at the versioning conditions stage, but this
is problematic because a table lock is acquired for each partition at the
open stage and therefore must be released when we close the partition
handler in change_partitions_to_open(). More details in MDEV-20376.
This should change after MDEV-20250, where the mechanism of opening
partitions will be improved.
This reverts commit cdbac54df0.
For MDEV-15955, the fix in create_tmp_field_from_item() would cause a
compilation error. After a discussion with Alexander Barkov, the fix
was omitted and only the test case was kept.
In 10.3 and later, MDEV-15955 is fixed properly by overriding
create_tmp_field() in Item_func_user_var.
When discounting selectivity of ref access, don't discount the
selectivity we've already discounted for range access.
The 10.1 version of the fix. Condition filtering test results will need
to be adjusted in 10.4.
Exclude SELECT and INSERT SELECT from vers_set_hist_part(). We cannot
likewise exclude REPLACE SELECT because it may REPLACE into itself
(and REPLACE generates history).
INSERT also does not generate history, but there is a history
modification setting that might be interfered with.
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings,
mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga, so I gave up on it.
and WHERE filter afterwards
This patch complements the patch fixing the bug MDEV-6892. The latter
properly handled queries that used mergeable views returning constant
columns as inner tables of outer joins and whose WHERE clause contained
predicates referring to these columns, if those predicates happened not
to be equality predicates. Otherwise the server could still return wrong
result sets for such queries. Besides, the fix for MDEV-6892 prevented
some possible conversions of outer joins to inner joins for such queries.
This patch corrects the function check_simple_equality() to properly
handle conjunctive equalities of the WHERE clause that refer to the
constant columns of mergeable views used as inner tables of an outer join.
The patch also changes the code of Item_direct_view_ref::not_null_tables().
This change makes it possible to take into account predicates containing
references to constant columns of mergeable views when converting outer
joins into inner joins.
The handling of top-level conjuncts in WHERE whose used_tables()
contained RAND_TABLE_BIT in the function make_join_select() was incorrect.
As a result, if such a conjunct referred to fields none of which belonged
to the last joined table, it was pushed twice. (This could be seen
for a test case from subselect.test whose output changed after this
patch was applied. In 10.1, when running EXPLAIN FORMAT=JSON for
the query from this test case, we clearly see that one of the conjuncts
is pushed twice.) This fact by itself was not good. Besides, if such a
conjunct was pushed to a table that was the result of materialization
of a semi-join, the query could return a wrong result set. In particular,
we could observe this for queries with semi-join subqueries whose left
parts used stored functions without the 'deterministic' specifier.
This patch complements the patch that fixes bug MDEV-18479.
This patch takes care of possible overflow when calculating the
estimated number of rows in a materialized derived table / view.
or server crashes in JOIN::fix_all_splittings_in_plan after EXPLAIN
This patch resolves the problem of overflow when performing
calculations to estimate the cost of an evaluated query execution plan.
In a non-debug build the overflow could cause different kinds of
problems, including server crashes.
This patch corrects the patch for the bug 10006. The latter incorrectly
calculated the attribute TABLE_LIST::dep_tables for inner tables
of outer joins that are to be converted into inner joins.
As a result, after that patch some valid join orders were not evaluated
and the optimizer could choose an execution plan that was far from
optimal.
In the best_access_path function, when no key suitable for ref access is
found and join_cache_level is set to a value that makes hash join
possible, we build a hash key. Later in the function we compare the cost
of ref access with a table scan (or index scan, or quick select). There
is no need to do this when we have already got the hash key.
The issue in this case is that we take into account the estimates from
quick keys instead of rec_per_key. The estimates for quick keys are
better than rec_per_key only if we have ref(const), so we need to check
that all keyparts in the ref key are of the type ref(const).
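An illustrative sketch of that check (hypothetical stand-in types):

  #include <vector>

  struct KeypartUse { bool bound_to_const; };

  // Trust the range optimizer's quick estimate over rec_per_key only
  // when every keypart of the ref key is compared with a constant.
  static bool all_keyparts_are_const(const std::vector<KeypartUse> &parts)
  {
    for (const KeypartUse &p : parts)
      if (!p.bound_to_const)
        return false;
    return true;
  }

  static double ref_records_estimate(const std::vector<KeypartUse> &parts,
                                     double quick_rows, double rec_per_key)
  {
    return all_keyparts_are_const(parts) ? quick_rows : rec_per_key;
  }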
We need to call split_sum_func if an aggregate function is part of the
ORDER BY or PARTITION BY clause, so that we have the required fields
inside the temporary table: all the fields inside the PARTITION BY and
ORDER BY clauses of the window function need to be present in the temp
table used for window function computation.
The issue here is that for a window function in the ORDER BY clause we
were not creating an extra field in the temporary table for the window
function (which is contained in an expression).
So a call to split_sum_func was added to handle this case.
Also, we need to update all items that contain a window function
in the temp table during window function computation, as filesort needs
these values updated to evaluate the ORDER BY clause of the select.
For degenerate joins JOIN::table_list may be NULL, so instead
of using JOIN::top_join_tab_count, use the function
JOIN::exec_join_tab_cnt() to get the number of tables joined at the top
level.
with GROUP BY + ORDER BY
The method JOIN::create_postjoin_aggr_table() should not call
JOIN::add_sorting_to_table() unless the first non-constant join
table is passed as the first parameter to the method.
In the function make_cond_for_table_from_pred, a call of fix_fields()
missed checking the return code. As a result an extracted constant
condition could be malformed, and this caused an assertion failure.
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c
and ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of nontrivial structures. Fixed as:
* the warning disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() which
does the bzero(), which is safe for these classes, but any other
bzero() will still cause a warning
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* remove constructor from READ_RECORD() to make it trivial
* replace some memcpy() with c++ copy assignments
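A hedged sketch of the reset() pattern described above (a hypothetical
trivially-copyable stand-in; the real classes are larger):

  #include <cstring>

  struct TABLE_LIST_sketch
  {
    TABLE_LIST_sketch *next;
    char alias[64];

    // Zero-filling is confined to a method of the class itself, which
    // is known to be safe for this layout; a stray bzero()/memset()
    // on such objects elsewhere still triggers the warning.
    void reset() { memset(this, 0, sizeof(*this)); }
  };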
If a splittable materialized derived table / view T is used in an inner
nest of an outer join with an impossible ON condition, then T is marked
as a constant table. Yet the execution plan to build T is still searched
for, in spite of the fact that it is not needed. So it should be set.
Remove TABLE_SHARE::error_table_name() and TABLE_SHARE::orig_table_name
(the latter was allocated in the wrong memroot in this bug).
Instead, simply set TABLE_SHARE::table_name correctly.
Also fixes:
MDEV-17741 Assertion `thd->Item_change_list::is_empty()' failed in mysql_parse after unsuccessful PS
The problem was introduced by:
commit f033fbd9f2
Changed the test case for MDEV-15571
It was later fixed, but in 10.3 only:
commit ce2cf855bf
MDEV-16043 Assertion thd->Item_change_list::is_empty() failed in mysql_parse
upon SELECT from a view reading from a versioned table
This patch is a backport of ce2cf855bf to 10.2
While calculating DISTINCT with the function remove_dup_with_compare,
ha_rnd_end() was not called once the scan over the temporary table was
completed. Added ha_rnd_end calls when we are done with the scan of the
table.
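A hedged sketch with a stand-in handler interface (the method names
mirror the server's handler API; the bodies are stubs):

  struct handler
  {
    int ha_rnd_init(bool scan) { (void) scan; return 0; }
    int ha_rnd_next(unsigned char *) { return -1; }   // -1: end of scan
    int ha_rnd_end() { return 0; }
  };

  static int remove_dup_with_compare_sketch(handler *file,
                                            unsigned char *record)
  {
    if (int err= file->ha_rnd_init(true))
      return err;
    while (file->ha_rnd_next(record) == 0)
    { /* compare rows and delete duplicates */ }
    return file->ha_rnd_end();   // the call this fix adds
  }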
When the WITH clause of a query contains a recursive CTE that is not
used, processing EXPLAIN for this query does not require optimization
of the unit specifying this CTE. In this case, if 'derived' is the
TABLE_LIST object created for this CTE, then derived->derived_result is
NULL and any assignment to derived->derived_result->table causes a crash.
After fixing this problem in the code of st_select_lex_unit::prepare(),
EXPLAIN for such a query worked without crashes. Yet an execution
plan for the recursive CTE still appeared there. The cause of this
problem was an incorrect condition used in
JOIN::save_explain_data_intern() that determined whether the CTE was to
be optimized or not. A similar condition was used in select_describe(),
and this patch corrects it as well.
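A minimal sketch of the null-guard implied by the first part of the fix
(simplified stand-in types):

  struct TABLE_sketch;
  struct select_result { TABLE_sketch *table; };
  struct TABLE_LIST { select_result *derived_result; };

  static void attach_result_table(TABLE_LIST *derived, TABLE_sketch *t)
  {
    // For an unused recursive CTE under EXPLAIN, derived_result is NULL,
    // so an unconditional assignment used to crash here.
    if (derived->derived_result)
      derived->derived_result->table= t;
  }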
If during the optimize phase of a query we come to know that the result
set contains at most one row, then for such a query we don't need
to compute GROUP BY, ORDER BY and DISTINCT.
Changing the way a cursor is opened to fetch its structure only,
e.g. for a cursor FOR loop record variable.
The old method of setting thd->lex->limit_rows_examined to an Item_uint(0)
was not reliable and could push this message into the diagnostics area:
The query examined at least 1 rows, which exceeds LIMIT ROWS EXAMINED (0)
The new method should be more reliable, as it completely prevents the call
of do_select() in JOIN::exec_inner() during the cursor structure discovery,
so the execution of the cursor SELECT query returns immediately after the
preparation step (when the result row structure becomes known),
without even entering the code that fetches the result rows.
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
Users expect window functions to produce a certain ordering of rows in
the final result set. Although the standard does not require this, we
already have the filesort result from computing the window
function. If there is no ORDER BY attached to the query, keep that
ordering until the SELECT is completely evaluated and use it to print
the result.
Update test cases as many did not take care to guarantee a stable
result.
The ONLY_FULL_GROUP_BY mode states that for SELECT ... GROUP BY queries,
SELECTing columns which are not referred to in the GROUP BY clause is
disallowed unless they are passed to an aggregate function like COUNT()
or MAX(). This holds only for the GROUP BY clause of the query.
The code also applied this check to the PARTITION BY clause of window
functions, which is incorrect.
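A hedged sketch of the corrected check (the helper name is hypothetical):

  // ONLY_FULL_GROUP_BY applies to the query's GROUP BY clause only;
  // a window function's PARTITION BY clause is exempt.
  static bool column_must_be_in_group_by(bool only_full_group_by,
                                         bool inside_window_partition_by)
  {
    return only_full_group_by && !inside_window_partition_by;
  }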
When a query has implicit grouping, we are sure that we would end up with
only one row, so there is no point in doing the DISTINCT computation.
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join this
table with the result of a partial join, then both operands of the
join are actually put into main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However, this kind of data flow is not supported
yet.
Fixed by not allowing use of the hash join algorithm to join a
materialized derived table if it is joined by an equality predicate of
the form f=e where f is a field of the derived table.
Change for the test case in 10.3: splitting must be turned off to preserve
the explain.
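An illustrative sketch of the new restriction (hypothetical stand-in
types):

  struct JoinPred
  {
    bool is_equality;                       // predicate of the form f=e
    bool field_from_materialized_derived;   // f belongs to the derived table
  };

  static bool hash_join_allowed(const JoinPred &pred)
  {
    // f=e with f from a materialized derived table would put both join
    // operands into memory; fall back to another join method instead.
    return !(pred.is_equality && pred.field_from_materialized_derived);
  }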
The optimizer erroneously allowed join cache to be used when joining a
splittable materialized table together with the splitting optimization.
As a consequence, in some rare cases the server returned wrong result
sets for queries with materialized derived tables.
This patch allows using either join cache without the splitting
technique for materialization of a splittable derived table, or splitting
without join cache when joining such a table. The costs of these
alternatives are compared and the best variant is chosen.
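A minimal sketch of the added choice (the names are hypothetical):

  struct PlanChoice { bool use_splitting; double cost; };

  // Price both legal alternatives and pick the cheaper one, instead of
  // combining splitting with join cache (which produced wrong results).
  static PlanChoice choose_derived_plan(double cost_splitting_no_cache,
                                        double cost_cache_no_splitting)
  {
    PlanChoice c;
    c.use_splitting= cost_splitting_no_cache < cost_cache_no_splitting;
    c.cost= c.use_splitting ? cost_splitting_no_cache
                            : cost_cache_no_splitting;
    return c;
  }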
The bug was this scenario:
1. Join optimizer picks a range plan on index IDX1
(This index doesn't match the ORDER BY clause, so sorting will be needed)
2. Index Condition Pushdown pushes a part of WHERE down. The pushed
condition is removed from SQL_SELECT::cond
3. test_if_skip_sort_order() figures that it's better to use IDX2
(as it will match ORDER BY ... LIMIT and so will execute faster)
3.1 It sees that there was a possible range access on IDX2. It tries to
    construct it by calling SQL_SELECT::test_quick_select(), but alas,
    SQL_SELECT::cond no longer has all parts of the WHERE.
So it uses a full index scan, which is slow.
(Execution works fine because code further in test_if_skip_sort_order()
"un-pushes" the index condition and restores the original WHERE clause;
it was just the test_quick_select call that suffered.)
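A hedged sketch of the repair (simplified stand-in types): re-attach the
pushed index condition before re-running the range optimizer, mirroring
the un-pushing already done for execution.

  struct Item {};                                        // opaque condition
  static Item *and_items(Item *a, Item *) { return a; }  // stand-in AND

  struct SQL_SELECT_sketch { Item *cond; };

  static void retry_range_access(SQL_SELECT_sketch *select,
                                 Item *pushed_idx_cond)
  {
    Item *saved= select->cond;
    // Restore the full WHERE so test_quick_select() sees all conjuncts.
    select->cond= and_items(select->cond, pushed_idx_cond);
    /* ... SQL_SELECT::test_quick_select() would run here ... */
    select->cond= saved;
  }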
using INSERT INTO
This patch allows condition pushdown into a materialized derived / view when
this table is used in INSERT SELECT, multi-table UPDATE and multi-table DELETE.
After commit b76b69cd5f,
loose index scan for queries with DISTINCT stopped working.
That is why that commit has to be reverted.
Additionally, this patch fixes the problem of MDEV-10880.