If a select query contained an ORDER BY clause that followed a LIMIT clause,
an ORDER BY clause, or ORDER BY with LIMIT, the EXPLAIN output for the
query showed an execution plan different from the one that was actually executed.
Approved by Roman Nozdrin <roman.nozdrin@mariadb.com>
For a correlated subquery filesort is executed multiple times.
During each execution, sortlength() computed the total sort key length in
Sort_keys::sort_length without resetting it first.
Eventually Sort_keys::sort_length got larger than @@sort_buffer_size, which
caused filesort() to be aborted with an error.
Fixed by making sortlength() compute the lengths only during the first
invocation. Subsequent invocations return the pre-computed values.
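A minimal sketch of the caching pattern (simplified stand-in types, not the
server's actual Sort_keys API): compute the total once and return the cached
value on later calls, so repeated filesort runs for a correlated subquery never
re-accumulate the length.

  #include <cstddef>
  #include <vector>

  struct Sort_field { size_t length; };

  struct Sort_keys
  {
    std::vector<Sort_field> fields;
    size_t sort_length= 0;
    bool parameters_computed= false;   // set after the first sortlength() call
  };

  static size_t sortlength(Sort_keys *keys)
  {
    if (keys->parameters_computed)     // later invocations: return the cached value
      return keys->sort_length;
    for (const Sort_field &f : keys->fields)
      keys->sort_length+= f.length;    // accumulated exactly once
    keys->parameters_computed= true;
    return keys->sort_length;
  }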
and
MDEV-23414 Assertion `res->charset() == item->collation.collation' failed in Type_handler_string_result::make_packed_sort_key_part
pack_sort_string() *must* take the collation from the Item, not from the
String value, because when casting a string to _binary the original
String is not copied (for performance reasons); it is reused, but its
collation no longer matches the Item's collation.
Note that the String's collation cannot simply be changed to _binary,
because for an Item_string literal the original String must stay
unchanged for the duration of the query.
This partially reverts 61c15ebe32.
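A minimal sketch with simplified stand-in types (these are not the server's
String/Item classes): the collation used while packing is read from the Item,
since after a cast to _binary the reused String may still carry the original
character set.

  #include <string>

  struct Charset { const char *name; };

  struct String
  {
    std::string value;
    const Charset *charset;          // may no longer match the Item after a _binary cast
  };

  struct Item
  {
    const Charset *collation;        // authoritative collation for sorting
    String str_value;
  };

  static void pack_sort_string(std::string *to, const Item *item)
  {
    const Charset *cs= item->collation;   // from the Item, not item->str_value.charset
    (void) cs;                            // the sort weights would be produced with cs here
    to->append(item->str_value.value);
  }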
An overflow was happening with LONGTEXT columns when the length was converted
to the length in the strxfrm form (mem-comparable keys).
Introduced a function to truncate the length to max_sort_length before
calculating the length of the strxfrm form.
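A minimal sketch (the helper name is hypothetical): clamp the character length
to max_sort_length before deriving the strxfrm length, so a LONGTEXT length
cannot overflow the calculation.

  #include <algorithm>
  #include <cstdint>

  static uint32_t length_for_sort(uint32_t char_length, uint32_t max_sort_length)
  {
    // Truncate first; the strxfrm (mem-comparable) length is then computed
    // from the truncated value rather than from the full column length.
    return std::min(char_length, max_sort_length);
  }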
In the merge_buffers phase of sorting, the sort buffer size is divided between the number of chunks.
Each chunk has a start and end position (m_buffer_start and m_buffer_end).
Then we read as many records as fit in this buffer for a chunk of the file.
The issue here was that we were resetting the end of the buffer (m_buffer_end) to the number of bytes
that had been read. This caused a problem because, with dynamically sized sort keys, it is possible
that later we would not be able to accommodate even one key inside a chunk of the file.
So the fix was to not reset the end of the buffer for a chunk of the file.
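A minimal sketch with invented names (this is not the server's merge-chunk
struct): keep m_buffer_end at the allocated boundary and only report how many
bytes were read, instead of shrinking the buffer to the last read size.

  #include <cstddef>

  struct Merge_chunk
  {
    unsigned char *m_buffer_start;   // start of this chunk's slice of the sort buffer
    unsigned char *m_buffer_end;     // allocated end of the slice; must stay fixed
  };

  static size_t read_to_buffer(Merge_chunk *chunk, size_t bytes_read)
  {
    // Old, broken behaviour:
    //   chunk->m_buffer_end= chunk->m_buffer_start + bytes_read;
    // With variable-length keys that could leave no room for even one key later.
    return bytes_read;               // report what was read; keep the boundary intact
  }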
The issue here is that the charset for Sort_param::tmp_buffer is cleared when bzero is done for Sort_param.
Make sure to set the charset explicitly in the constructor for tmp_buffer.
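A minimal sketch (simplified stand-ins, not the server's Sort_param/String
classes) of why the constructor has to restore the charset after the
bzero-style initialization:

  #include <cstring>

  struct Charset { const char *name; };
  static Charset binary_cs= { "binary" };        // placeholder charset object

  struct Tmp_string
  {
    Charset *cs;
    void set_charset(Charset *c) { cs= c; }
  };

  struct Sort_param
  {
    Tmp_string tmp_buffer;
    Sort_param()
    {
      memset(this, 0, sizeof(*this));            // wipes tmp_buffer.cs as a side effect
      tmp_buffer.set_charset(&binary_cs);        // so set the charset explicitly afterwards
    }
  };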
For character sets and collations where the character-to-weight mapping is
greater than 1, we need to make sure that while creating a sort key a
temporary buffer is created to store the value of the item returned by the
val_str function, and that value is then copied back to the sort buffer.
In this case, when using a priority queue, Sort_param::tmp_buffer was not allocated.
Minor refactoring:
Changed Sort_param::tmp_buffer from char* to String
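A minimal sketch (the helper is hypothetical, and std::string stands in for the
server's String class) of making sure the temporary buffer is allocated before
a value is rendered and copied into the sort buffer, so the priority-queue path
has it available as well:

  #include <cstddef>
  #include <string>

  struct Sort_param
  {
    std::string tmp_buffer;                 // after the refactoring, a String instead of char*
    void ensure_tmp_buffer(size_t needed)
    {
      if (tmp_buffer.capacity() < needed)   // allocate lazily, to the required size
        tmp_buffer.reserve(needed);
    }
  };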
This task deals with packing the sort key inside the sort buffer, which would
lead to efficient usage of the memory allocated for the sort buffer.
The changes brought by this feature are:
1) Sort buffers would have sort keys of variable length
2) The format for sort keys inside the sort buffer would look like
|<sort_length><null_byte><key_part1><null_byte><key_part2>.......|
sort_length consists of the extra bytes that are required to store the
variable length of a sort key (see the sketch after this list).
3) When the sort key is packed, we store the ORIGINAL VALUES inside
the sort buffer and not the STRXFRM form (mem-comparable sort keys).
4) A special comparison function packed_keys_comparison() is introduced
to compare two sort keys.
This patch also contains contributions from Sergei Petrunia.
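A minimal sketch of the layout above (hypothetical writer, not the server's
code): a length prefix, then one marker byte per key part followed by the
original, unpacked value.

  #include <cstdint>
  #include <cstring>
  #include <string>
  #include <vector>

  static std::string pack_sort_key(const std::vector<const char*> &parts)
  {
    std::string key(4, '\0');                    // room for <sort_length>
    for (const char *part : parts)
    {
      if (!part)
      {
        key.push_back('\0');                     // <null_byte>: value is NULL, nothing follows
        continue;
      }
      key.push_back('\1');                       // <null_byte>: value present
      key.append(part);                          // ORIGINAL value, not its strxfrm form
    }
    uint32_t total= (uint32_t) key.size();       // <sort_length> covers the whole packed key
    memcpy(&key[0], &total, sizeof(total));
    return key;
  }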
The issue in this case is that we take into account the estimates from quick keys instead of rec_per_key.
The estimates for quick keys are better than rec_per_key only if we have ref(const), so we need to check
that all keyparts in the ref key are of the type ref(const).
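A minimal sketch with invented names (the keypart descriptor here is a
simplification): only prefer the quick-range estimate over rec_per_key when
every keypart of the ref access is bound to a constant.

  #include <vector>

  struct Ref_keypart { bool bound_to_const; };

  static bool ref_is_const(const std::vector<Ref_keypart> &parts)
  {
    for (const Ref_keypart &p : parts)
      if (!p.bound_to_const)
        return false;            // depends on another table: keep rec_per_key
    return true;                 // ref(const): the quick estimate is trustworthy
  }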
Change the defaults:
-histogram_size=0
+histogram_size=254
-histogram_type=SINGLE_PREC_HB
+histogram_type=DOUBLE_PREC_HB
Adjust the testcases:
- Some have ignorable changes in EXPLAIN outputs and
more counter increments due to EITS table reads.
- Testcases that meaningfully depend on the old defaults
are changed to use the old values.
This patch contains a full implementation of the optimization
that allows the use of in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent for fetching table rows.
In this implementation the choice of which filter to apply (if any)
is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
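A minimal sketch of the cost-based choice (the names and the cost formula are
illustrative assumptions, not the optimizer's actual code): a filter is picked
only when building and probing it is expected to cost less than the row
fetches it saves.

  struct Filter_cost
  {
    double selectivity;     // fraction of rowids expected to pass the filter
    double build_cost;      // cost of filling the filter from the range index
    double lookup_cost;     // cost of one probe into the filter
  };

  static bool filter_is_cheaper(const Filter_cost &f,
                                double rows_to_check, double row_fetch_cost)
  {
    double with_filter= f.build_cost +
      rows_to_check * (f.lookup_cost + f.selectivity * row_fetch_cost);
    double without_filter= rows_to_check * row_fetch_cost;
    return with_filter < without_filter;
  }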