Heap tables allocate blocks to store rows, sized according to
my_default_record_cache (mapped to the server global variable
read_buffer_size).
This causes performance issues when the record length is big
(> 1000 bytes) and my_default_record_cache is small.
Changed to instead split the default heap allocation to 1/16 of the
allowed space and not use my_default_record_cache anymore when creating
the heap. The allocation is also aligned to be just under a power of 2.
For a test that I have been running, which used a record length of 633,
the speed of the query doubled thanks to this change.
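As a rough illustration (the numbers are an assumption, taking the allowed
space to be the default max_heap_table_size of 16M), a MEMORY table now gets
blocks of about 1/16 of the allowed space, aligned to just under a power of 2,
instead of blocks of read_buffer_size:
SET SESSION max_heap_table_size = 16*1024*1024;  -- the allowed space
CREATE TABLE t_mem (a INT, b VARCHAR(633)) ENGINE=MEMORY;
-- block size is now derived from 1/16 of the allowed space (just under a
-- power of 2), not from read_buffer_size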
Other things:
- Fixed calculation of max_records passed to hp_create() to take
into account padding between records.
- Updated calculation of memory needed by heap tables. Before, we
did not take into account the internal structures needed to access rows.
- Changed block size for memory_table from 1 to 16384 to get less
fragmentation. This also avoids a problem where we need 1K
to manage index and row storage, which was not accounted for before.
- Moved heap memory usage to a separate test for 32 bit.
- Allocate all data blocks in heap in powers of 2. Change reported
memory usage for heap to reflect this.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Improve performance of queries like
SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.
The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing. We rely on THD::where to accomplish this. To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields. Consequently, this patch also
changes all usages of THD::where to conform likewise.
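A sketch of how the effect can be observed (the table, column and index here
are made up for illustration):
CREATE TABLE t1 (field INT, KEY(field));
INSERT INTO t1 VALUES (1),(2),(4);
EXPLAIN EXTENDED SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
SHOW WARNINGS;
-- with the rewrite, the warning is expected to print the condition as
-- field = 4, and the plan can use ref access on the index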
select_union_direct::send_data() only sends a record when
the LIMIT ... OFFSET clause of the individual select won't skip it.
Thus, select_union_direct::send_data() should not do any actions
related to sending a record if the offset of a select hasn't been
reached yet.
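For illustration (the schema is hypothetical), a UNION ALL whose first select
has its own LIMIT ... OFFSET:
(SELECT a FROM t1 ORDER BY a LIMIT 2 OFFSET 3)
UNION ALL
(SELECT a FROM t2);
-- rows of the first select that are skipped by its OFFSET must not trigger
-- any of the record-sending actions in select_union_direct::send_data()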
Nowadays a subquery in a UNION's ORDER BY is placed correctly in the fake
select; the only remaining problem was an incorrect Name_resolution_context,
which this patch fixes in parsing, so we do not need to scan/reset the
ORDER BY of a union.
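For illustration (tables and columns are hypothetical), the affected shape is
a subquery in the ORDER BY of a UNION, which belongs to the fake select:
SELECT a FROM t1
UNION
SELECT a FROM t2
ORDER BY (SELECT MAX(b) FROM t3);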
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set to TRUE by
default, so that utf8 means utf8mb3. If it is not set, utf8 means utf8mb4.
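A sketch of the effect (the tables are just examples):
CREATE TABLE t1 (a CHAR(10)) CHARACTER SET utf8;
-- with the default old_mode (UTF8_IS_UTF8MB3 set): the column uses utf8mb3
SET @@old_mode = '';
CREATE TABLE t2 (a CHAR(10)) CHARACTER SET utf8;
-- with UTF8_IS_UTF8MB3 not set: the column uses utf8mb4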
Analysis: decimals is set to NOT_FIXED_DEC for Field_str even if the value is
NULL. Unsigned has decimals=0, so Type_std_attributes::decimals is set to 39
(the maximum of 0 and 39). This results in an incorrect number of decimals
when we have a union of an unsigned and a NULL type.
Fix: Check if the field is created from a NULL value. If yes, set decimals to
0; otherwise set it to NOT_FIXED_DEC.
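A hedged example of the affected shape of query (the values are arbitrary):
CREATE TABLE t1 AS SELECT CAST(1 AS UNSIGNED) AS a UNION SELECT NULL;
SHOW CREATE TABLE t1;
-- before the fix, the NULL part contributed NOT_FIXED_DEC (39) decimals,
-- so the result column could get an incorrect number of decimals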
Step 1: Removal of ORDER BY [LIMIT] from the subquery should be done
earlier and for a broader class of subqueries.
The rewrite was done in Item_in_subselect::select_in_like_transformer(),
but this had problems:
- It didn't cover EXISTS subqueries
- It covered IN-subqueries, but was done after the semi-join transformation
was considered inapplicable, because ORDER BY was present.
Remaining issue:
- EXISTS->IN transformation happens before
check_and_do_in_subquery_rewrites() is called, so it is still prevented
by the present ORDER BY.
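For illustration (hypothetical tables), queries of these shapes are affected:
the subquery's ORDER BY has no effect on the result and only blocked the
semi-join transformation:
SELECT * FROM t1
WHERE t1.a IN (SELECT t2.a FROM t2 ORDER BY t2.b);
SELECT * FROM t1
WHERE EXISTS (SELECT t2.a FROM t2 WHERE t2.a = t1.a ORDER BY t2.b);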
Parentheses around table names and derived tables should be allowed
in FROM clauses and some other contexts, as they were in earlier versions.
Returned the test queries that used such parentheses in 10.3 to their
original form. Adjusted test results accordingly.
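A sketch of the kind of queries affected (the tables are hypothetical and the
exact forms are an assumption):
SELECT * FROM (t1);
SELECT * FROM (t1 JOIN t2 ON t1.a = t2.a);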
This bug is caused by the pushdown from HAVING into WHERE.
It appears because the condition that is pushed wasn't fixed.
It was also discovered that condition pushdown from HAVING into
WHERE was done incorrectly. There is no need to build clones for some
conditions that can be pushed: they can simply be moved from HAVING
into WHERE without cloning.
The build_pushable_cond_for_having_pushdown() and
remove_pushed_top_conjuncts_for_having() methods are changed.
It was also found that no transformation was made for the fields of
the pushed condition.
The field_transformer_for_having_pushdown transformer is added.
New tests are added. Some comments are changed.
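A hedged example of a condition that can simply be moved (the schema is made
up): the top-level conjunct t1.a > 1 depends only on the grouping column, so
it can be moved from HAVING into WHERE as-is, without building a clone:
SELECT t1.a, MAX(t1.b) FROM t1 GROUP BY t1.a HAVING t1.a > 1 AND MAX(t1.b) > 10;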
EXPLAIN EXTENDED erroneously showed UNION instead of UNION ALL in
the warning if UNION ALL followed INTERSECT or EXCEPT operations.
The bug was in the function st_select_lex_unit::print() that printed
the text of the query used in the warning.
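A sketch of a query that shows the wrong printout (the tables are
hypothetical; INTERSECT binds tighter, so the UNION ALL follows it):
EXPLAIN EXTENDED
SELECT a FROM t1 INTERSECT SELECT a FROM t2 UNION ALL SELECT a FROM t3;
SHOW WARNINGS;
-- the query text in the warning should print UNION ALL, not UNION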
Optimized the code that removed multiple equalities pushed from HAVING
into WHERE. Now this removal is postponed until all multiple equalities
are eliminated in substitute_for_best_equal_field().
A condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on the fields that are used in the GROUP BY list
or on fields that are equal to grouping fields.
Aggregate functions can't be pushed down.
An example of how the pushdown is performed:
SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);
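A hedged variation of the example, showing a condition on a field that is
equal to a grouping field (here via the WHERE equality) and can therefore
also be pushed:
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a=t1.c)
GROUP BY t1.a
HAVING (t1.c>2) AND (MAX(t1.b)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a=t1.c) AND (t1.c>2)
GROUP BY t1.a
HAVING (MAX(t1.b)>12);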
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of
the select that depends only on the fields that are used in the GROUP BY
list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
of the select
3. Remove cond from the HAVING clause if it is possible
The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().
New test file having_cond_pushdown.test is created.