When index_merge_sort_union is turned off, only ROR scans were considered for range
scans, which is wrong.
To fix the problem, ensure that both ROR scans and non-ROR scans are considered
for range access.
The problem happened in these lines:
uval0= (ulonglong) (val0_negative ? -val0 : val0);
uval1= (ulonglong) (val1_negative ? -val1 : val1);
return check_integer_overflow(val0_negative ? -(longlong) res : res,
!val0_negative);
when unary minus was performed on -9223372036854775808, the minimum value
of a 64-bit signed integer. This operation has undefined behavior in C/C++.
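As a minimal illustration (not the actual patch; safe_unsigned_abs is a
hypothetical helper), a common remedy is to cast to the unsigned type
first and negate there, where wraparound is well defined:

// Sketch only: compute |val| without undefined behavior.
// Negating LLONG_MIN as a signed value overflows, but the cast to
// unsigned and the unsigned negation are well defined (modulo 2^64).
static inline unsigned long long safe_unsigned_abs(long long val)
{
  unsigned long long uval= (unsigned long long) val;
  return val < 0 ? 0ULL - uval : uval;
}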
This bug could manifest itself in very rare cases when the optimizer
chose an execution plan by which a joined table was accessed by a table
scan and the optimizer was checking whether ranges checked for each record
could improve this plan. In such cases the optimizer evaluates range
conditions over a table that depend on other tables. For such conditions
the constructed SEL_ARG trees are marked as MAYBE_KEY. If a SEL_ARG object
constructed for a sargable condition marked as RANGE_KEY had the same
first key part as a MAYBE_KEY SEL_ARG object and the key_and() function
was called for this pair of SEL_ARG objects then an invalid SEL_ARG
object could be constructed that ultimately could lead to a crash before
the execution phase.
'index_merge_sort_union=off'
When index_merge_sort_union is set to 'off' and index_merge_union is set
to 'on', any evaluated index merge scan must consist only of ROR scans.
The cheapest of such index merges must be chosen, even though it might
not be the cheapest index merge overall.
* `--defaults-file` option is shown in `--help --verbose` only if
applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
previously it was treated as a directory with `my.cnf` appended
Remove the offending test case. This sort of error is hard to test in
all possible corner cases, which makes the test less valuable. The
overflow error will instead be covered by warnings generated by the
compiler, which is much more reliable in the general case.
Set the read_set bitmap for a view from the JOIN::all_fields list instead of
JOIN::fields_list, as split_sum_func would have added items to the all_fields list.
Item_ref::val_(datetime|time)_packed() erroneously called
(*ref)->val_(datetime|time)_packed().
- Fixing to call (*ref)->val_(datetime|time)_packed_result().
- Backporting Item::val_(datetime|time)_packed_result() from 10.3.
- Fixing Item_field::get_date_result() to handle null_value in the same
  way as Item_field::get_date() does.
Make sure that the sort buffers can store at least one sort key.
This is needed to ensure that all merge buffers are read; otherwise,
with no sort keys, some merge buffers would be skipped because the code
concludes there is no data to be read.
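A minimal sketch of the invariant (compute_sort_buffer_size is a
hypothetical helper, not the actual code):

#include <algorithm>
#include <cstddef>

// Clamp the requested sort buffer size so the buffer can always hold
// at least one complete sort key; a buffer that cannot hold a single
// key would look empty and be skipped during the merge phase.
size_t compute_sort_buffer_size(size_t requested_size, size_t sort_key_length)
{
  return std::max(requested_size, sort_key_length);
}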
The issue here is a wrong estimate of the cardinality of a partial join:
the cardinality is too high because the function table_cond_selectivity()
returns an absurd value of 100, while a selectivity can never be greater than 1.
When accessing table t by outer reference t1.a via index we do not perform any
range analysis for t. Yet we see that TABLE::quick_key_parts[key] and
TABLE::quick_rows[key] contain non-zero values though these should have
remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init does not clean the arrays
TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should do so because the TABLE structure created for any
instance of a table can be reused for many queries.
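A simplified sketch of the cleanup implied here (TABLE_sketch is
illustrative, not the real TABLE definition):

#include <cstring>

// The per-key range-analysis results must be reset whenever the TABLE
// object is reused for a new query; otherwise stale estimates leak
// into later optimizations.
struct TABLE_sketch
{
  static const unsigned MAX_KEY= 64;        // placeholder key count
  unsigned quick_key_parts[MAX_KEY];
  unsigned long long quick_rows[MAX_KEY];

  void init()
  {
    memset(quick_key_parts, 0, sizeof(quick_key_parts));
    memset(quick_rows, 0, sizeof(quick_rows));
  }
};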
The function prev_record_reads, which finds the number of different row
combinations for a subset of a partial join, did not take into account the
selectivity of the tables involved in that subset.
Using a specially crafted string one could overflow the `shift`
variable and cause a crash by dereferencing d10[-2147483648]
(on a sufficiently old gcc).
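A hypothetical sketch of the guard (names invented; the real fix lives
in the string-to-number parsing code):

#include <climits>

// When accumulating a decimal exponent from untrusted input, saturate
// before the int can overflow and later be used as an array index.
static int accumulate_exponent(const char *str, const char *end)
{
  int shift= 0;
  for (; str < end && *str >= '0' && *str <= '9'; str++)
  {
    if (shift > (INT_MAX - 9) / 10)
      return INT_MAX;  // saturate instead of wrapping to a negative value
    shift= shift * 10 + (*str - '0');
  }
  return shift;
}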
This is a correct fix and a test case for
Bug #29723340: MYSQL SERVER CRASH AFTER SQL QUERY WITH DATA ?AST
Apply the correct pattern for debug instrumentation:
SET @save_dbug=@@debug_dbug;
SET debug_dbug='+d,...';
...
SET debug_dbug=@save_dbug;
Numerous tests use statements of the form
SET debug_dbug='-d,...';
which will inadvertently enable all DBUG tracing output,
causing an unnecessary waste of resources.
The test main.index_merge_innodb takes a very long time,
especially on later versions (10.2 and 10.3).
Some of this can be attributed to the use of INSERT...SELECT,
which is time-consuming because it creates explicit record locks
in InnoDB for the locking read in the SELECT part.
In 10.3 and later, some slowness can be attributed to MDEV-12288,
which makes the InnoDB purge thread spend time to reset transaction
identifiers in the inserted records. If we prevent purge from
running before all tables are dropped, the test seems to be
10% faster on an unoptimized debug build on 10.5. (A proper fix
would be to implement MDEV-515 and stop writing row-level undo log
records for inserts into an empty table or partition.)
At the same time, it should not hurt to make main.index_merge_myisam
use the sequence engine. Not only could it be a little faster,
but the test would also be slightly more readable.
Eliminate one InnoDB table with 128*16384 rows, and use
the sequence engine instead. Also, run everything in a single
transaction, to prevent purge from running concurrently
unnecessarily. (Starting with MariaDB Server 10.3, purge would
reset the DB_TRX_ID after INSERT.)
Also fixes:
MDEV-20560 Assertion `precision > 0' failed in decimal_bin_size upon SELECT with MOD short unsigned decimal
Changing the way Item_func_mod calculates its max_length.
It now uses the decimal_precision(), decimal_scale() and unsigned_flag
of its arguments, like all other Item_num_op descendants do.
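A hedged sketch of the idea (the attribute names follow the message;
the exact composition inside Item_func_mod may differ):

#include <algorithm>

// Illustrative only: derive the MOD result's decimal attributes from
// the arguments' precision/scale instead of their raw max_length.
struct NumAttrs
{
  unsigned precision;      // total significant digits
  unsigned scale;          // digits after the decimal point
  bool     unsigned_flag;
};

static NumAttrs mod_attributes(const NumAttrs &a, const NumAttrs &b)
{
  NumAttrs r;
  r.scale= std::max(a.scale, b.scale);              // keep the finer scale
  r.precision= std::max(a.precision, b.precision);  // safe upper bound
  r.unsigned_flag= a.unsigned_flag;  // MOD's sign follows the dividend
  return r;
}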
In the function test_if_cheaper_ordering we decide whether using an index is
better than using filesort for ordering. If we chose to do range access, then
in test_quick_select we should make sure that the cost of a table scan is set
to DBL_MAX so that it is not picked.
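A minimal sketch of the intent (effective_table_scan_cost is a
hypothetical helper):

#include <cfloat>

// When the caller has already committed to range access for ordering,
// price the table scan out of the comparison so it cannot be chosen.
double effective_table_scan_cost(bool force_range_access, double scan_cost)
{
  return force_range_access ? DBL_MAX : scan_cost;
}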
selectivity values fails
After having set the assertion that checks the validity of selectivity values
returned by the function table_cond_selectivity(), a test case from
order_by.test failed. The failure occurred because the range optimizer could
return, as an estimate of the cardinality of the ranges built for an index,
a number exceeding the total number of records in the table.
The second bug is more subtle. It may happen when there are several
indexes with the same prefix defined on the first joined table t accessed by
a constant ref access. In this case the range optimizer estimates the
number of accessed records of t for each usable index, and these
estimates can be different. Only the first of these estimates is taken
into account when the selectivity of the ref access is calculated.
However, the optimizer later can choose a different index that provides
a different estimate. The function table_cond_selectivity() could use
this estimate to discount the selectivity of the ref access. This could
lead to a selectivity value returned by this function that was greater
than 1.
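To illustrate with made-up numbers: if the selectivity of the ref access
was computed from a first index estimating 10 matching rows, but
table_cond_selectivity() later discounts using a different index's
estimate of 50 rows, the discounting effectively multiplies by
50/10 = 5, pushing the reported selectivity above 1.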
This patch corrects the fix of the patch for MDEV-19421 that resolved
the problem of parsing some embedded join expressions such as
t1 join t2 left join t3 on t2.a=t3.a on t1.a=t2.a.
Yet that patch contained a bug that prevented proper context analysis
of queries where such expressions were used together with
comma-separated table references in FROM clauses.
When discounting the selectivity of a ref access, don't discount the
selectivity we have already discounted for range access.
This is the 10.1 version of the fix. Condition filtering test results
will need to be adjusted in 10.4.