This bug could cause a crash for any query that used a derived table/view/CTE
whose specification was a SELECT with a GROUP BY clause and a FROM list
containing 32 or more table references.
The problem appeared only in cases when the splitting optimization
could be applied to such a derived table/view/CTE.
join_cache_level=6+
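A minimal sketch of the kind of query affected (all table and column
names here are hypothetical; the derived table has a GROUP BY and 32
table references in its FROM list, and is joined on its grouping column
so that the splitting optimization can apply):

  SET SESSION join_cache_level = 6;
  CREATE TABLE t1 (a INT, b INT, KEY(a));
  INSERT INTO t1 VALUES (1,1);
  CREATE TABLE t2 (a INT);
  INSERT INTO t2 VALUES (1);
  SELECT *
  FROM t2,
       (SELECT x1.a, COUNT(*) AS c
          FROM t1 x1, t1 x2, t1 x3, t1 x4, t1 x5, t1 x6, t1 x7, t1 x8,
               t1 x9, t1 x10, t1 x11, t1 x12, t1 x13, t1 x14, t1 x15,
               t1 x16, t1 x17, t1 x18, t1 x19, t1 x20, t1 x21, t1 x22,
               t1 x23, t1 x24, t1 x25, t1 x26, t1 x27, t1 x28, t1 x29,
               t1 x30, t1 x31, t1 x32
         GROUP BY x1.a) dt
  WHERE t2.a = dt.a;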
The patch fixes two similar bugs introduced in commit 8eeb689e9f,
which added multi_range_read support to partitions. That commit opened
the possibility of joining a partitioned table using BKA+MRR. However, in
some cases this could lead to wrong results or even crashes.
This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the 'not exists'
  optimization was applied, or
- the joined table was the inner table of a semi-join and the first match
  optimization was applied.
The bugs were in the code of the call-back functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another function,
yet a wrong parameter was passed in that invocation.
The fix was suggested by Sergey Petrunia and is apparently in line
with the original design.
The corresponding comprehensive test cases demonstrating the problems
caused by the bugs were constructed by me.
(Backport to 10.3)
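For illustration, a sketch of one affected query shape (all names are
hypothetical; this exercises the outer-join case where the 'not exists'
optimization can be applied to a partitioned table joined via BKA+MRR):

  SET SESSION join_cache_level = 6;           -- enable BKA
  SET SESSION optimizer_switch = 'mrr=on';
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT, KEY(a))
    PARTITION BY HASH(a) PARTITIONS 4;
  -- t2 is the inner table of an outer join; the IS NULL predicate
  -- enables the 'not exists' optimization
  SELECT * FROM t1 LEFT JOIN t2 ON t2.a = t1.a
  WHERE t2.b IS NULL;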
The partitioning storage engine now supports MRR but doesn't support
Index Condition Pushdown (aka ICP). This causes counter-intuitive query
plans for queries that use BKA and conditions that depend on index fields:
- If the condition refers to other tables, BKA's variant of ICP is used
to handle it.
- If the condition depends on this table only, the optimizer will try to
use regular ICP for it, which will fail because the storage engine
doesn't support ICP.
Make the optimizer smarter in the second case: if regular ICP could not
be used, fall back to BKA's variant of ICP.
Do not materialize a semi-join nest if it contains a materialized derived
table/view that can potentially be subject to the split optimization.
Splitting the materialization of such a nest would help, but currently
there is no code to support this technique.
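A sketch of a query shape this change applies to (hypothetical names; the
derived table dt is potentially splittable and sits inside a semi-join
nest):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT, KEY(a));
  SELECT * FROM t1
  WHERE t1.a IN (SELECT dt.a
                   FROM (SELECT a, COUNT(*) AS c
                           FROM t2 GROUP BY a) dt
                  WHERE dt.c > 1);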
[Variant 2 of the fix: collect the attached conditions]
Problem:
make_join_select() has a section of code that starts with the comment
"We plan to scan all rows. Check again if we should use an index."
The code in that section will [unnecessarily] re-run the range
optimizer using this condition:
condition_attached_to_current_table AND current_table's_ON_expr
Note that the original invocation of range optimizer in
make_join_statistics was done using the whole select's WHERE condition.
Taking the whole select's WHERE condition and using multiple-equalities
allowed the range optimizer to infer more range restrictions.
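For example (hypothetical tables), given

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (a INT);
  SELECT * FROM t1, t2 WHERE t1.a = t2.a AND t2.a < 10;

the multiple-equality t1.a = t2.a together with t2.a < 10 lets the range
optimizer infer the range t1.a < 10 for t1, while the condition attached
to t1 alone contains no such restriction.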
The fix:
- Do range optimization using a condition that is an AND of this table's
condition and all of the previous tables' conditions.
- Also, fix the range optimizer to prefer SEL_ARGs with type=KEY_RANGE
  over SEL_ARGs with type=MAYBE_KEY, regardless of the key part.
Computing

  key_and(SEL_ARG(type=MAYBE_KEY, key_part=1),
          SEL_ARG(type=KEY_RANGE, key_part=2))

will now produce the SEL_ARG with type=KEY_RANGE.
In this scenario:
- There is a possible range access for table T
- And there is a ref access on the same index which uses fewer key parts
- The join optimizer picks the ref access (because it is cheaper)
- make_join_select applies this heuristic to switch to range:
/* Range uses longer key; Use this instead of ref on key */
The join buffer will be used without JOIN_TAB::make_scan_filter() having
been called. This means that conditions that should be checked when
reading table T will instead be checked after T is joined with the
contents of the join buffer.
Fixed this by adding a make_scan_filter() check.
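A sketch of the scenario (hypothetical tables; ref access on the first
key part of (kp1, kp2) is cheaper, but the range over both key parts is
longer, so make_join_select switches to it):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (kp1 INT, kp2 INT, col INT, KEY(kp1, kp2));
  SELECT * FROM t1, t2
  WHERE t2.kp1 = 5 AND t2.kp2 < 10   -- range over (kp1, kp2)
    AND t2.col > 0;                  -- should be checked while reading t2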
(updated patch after backport to 10.3)
(Fix testcase on Windows)
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will inquire partitions
about how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (it used
uint for the buffer size calculation and would have had an underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size.
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call
handler->clone with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() functions of the affected storage engines ignore
the name argument and obtain the name elsewhere.
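A sketch of a statement that exercises this combination (hypothetical
tables; a BKA join into a partitioned InnoDB table makes ha_partition
drive the partitions' DS-MRR implementations):

  SET SESSION optimizer_switch = 'mrr=on';
  SET SESSION join_cache_level = 6;          -- enable BKA
  CREATE TABLE t0 (a INT);
  CREATE TABLE t1 (a INT, b INT, KEY(a)) ENGINE=InnoDB
    PARTITION BY HASH(a) PARTITIONS 4;
  SELECT * FROM t0, t1 WHERE t1.a = t0.a;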
Fix an incorrect change introduced in the fix for MDEV-20109.
That patch tried to compute a more precise estimate for the record_count
value in the SJ-Materialization-Scan strategy (in
Sj_materialization_picker::check_qep). However, the new formula is worse,
as it produces extremely optimistic results in common cases where
SJ-Materialization-Scan should be used.
The old formula produces pessimistic results in cases when
SJ-Materialization-Scan is unlikely to be a good choice anyway. So the
old behavior is better.
For a partitioned table with an AUTO_INCREMENT column we have to check
whether the max value is properly loaded, so we need to open all
partitions for an INSERT ... PARTITION statement if necessary. We also
need to check whether some partitions are pruned away and, in that case,
not count their max auto-increment values.
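A sketch of the statement type in question (hypothetical table; only p0
is listed, but the next auto-increment value must still be derived
correctly):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY)
    PARTITION BY RANGE (a)
      (PARTITION p0 VALUES LESS THAN (100),
       PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t1 PARTITION (p0) VALUES (NULL);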
The code in convert_charset_partition_constant() did not
take into account that the call to item->safe_charset_converter()
can return NULL when the conversion is not safe.
Note: 10.2 was not affected. The test for NULL is present in 10.2,
but it disappeared in 10.3 by mistake. This patch restores the test.
MDEV-20589: Server still crashes in Field::set_warning_truncated_wrong_value
- Use dbug_tmp_use_all_columns() to mark that all fields can be used
- Remove field->is_stat_field (not needed)
- Remove extra arguments to Field::clone() that should not be there
- Safety fix for Field::set_warning_truncated_wrong_value() to not crash
  if the table pointer is zero in production builds (we have had crashes
  here several times, so better safe than sorry).
- Treat wrong character string warnings identically to other field
  conversion warnings. This removes some warnings we previously got from
  internal conversion errors. There is no good reason why a user would
  get an error in the case of key_field='wrong-utf8-string' but not for
  field='wrong-utf8-string'. The old code could also easily give
  thousands of nonsense warnings for one single statement.
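For example (hypothetical table; 0xFF is not a valid utf8 byte sequence),
both of these predicates now produce the same kind of conversion warning:

  CREATE TABLE t1 (key_field VARCHAR(10) CHARACTER SET utf8,
                   field     VARCHAR(10) CHARACTER SET utf8,
                   KEY (key_field));
  SELECT * FROM t1 WHERE key_field = 0xFF;
  SELECT * FROM t1 WHERE field     = 0xFF;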
A CTE can be defined as a table value constructor. In this case the CTE
is always materialized in a temporary table.
If the definition of the CTE contains a list of the names of the CTE
columns then the query expression that uses this CTE can refer to the CTE
columns by these names. Otherwise the names of the columns are taken from
the names of the columns in the result set of the query that specifies the
CTE.
Thus, if the column names of a CTE are provided in its definition, the
columns of the result set should be renamed. In the general case the
renaming of the columns is done in the select lists of the query
specifying the CTE. If a CTE is specified by a table value constructor,
there are no such select lists and the renaming is actually done on the
columns of the result of the materialization.
Now, if a view is specified by a query expression that uses a CTE
specified by a table value constructor, saving the column names of the
CTE in the stored view definition becomes critical: without these names
the query expression cannot refer to the columns of the CTE.
This patch saves the given column names of CTEs in stored view definitions
that use them.
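A minimal sketch of such a view (hypothetical names):

  CREATE VIEW v1 AS
    WITH cte(a, b) AS (VALUES (1, 10), (2, 20))
    SELECT a, b FROM cte;
  SELECT a + b FROM v1;

Without the column names a and b saved in the stored definition of v1,
the SELECT inside the view could not refer to the columns of cte.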