This will avoid issues like MDEV-32486.
IDs used in:
- spider_alloc_calc_mem_init()
- spider_string::init_calc_mem()
- spider_malloc()
- spider_bulk_alloc_mem()
- spider_bulk_malloc()
Spider GBH's query rewrite of table joins is overly complex and
error-prone. We replace it with something closer to what dbug_print()
(more specifically, print_join()) does, but tailored to Spider.
More specifically, we replace the body of
spider_db_mbase_util::append_from_and_tables() with a call to
spider_db_mbase_util::append_join(), and remove downstream append_X
functions.
We make it handle const tables by rewriting them as (select 1). This
fixes the main issue in MDEV-26247.
We also ban semijoin from the Spider GBH, which fixes MDEV-31645 and
MDEV-30392: semi-join is an "internal" join, "semi join" does not
parse, and it differs from "join" in that it deduplicates the
right-hand side.
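A minimal, self-contained sketch of the idea (hypothetical types and
names, not the actual server code, which walks TABLE_LIST inside
spider_db_mbase_util::append_join()): render the join list left to
right, emitting "(select 1) alias" for const tables.

  #include <cstdio>
  #include <string>
  #include <vector>

  // Hypothetical stand-in for one entry of a flattened join list.
  struct JoinTable {
    std::string name;   // remote table name
    std::string alias;  // alias used in the pushed-down query
    std::string cond;   // join condition, empty for the first table
    bool is_const;      // the optimizer marked this table as const
  };

  // Render "t1 a join t2 b on (...)"; const tables are rewritten as
  // "(select 1) alias" so the remote query stays valid.
  static std::string render_join(const std::vector<JoinTable> &ts) {
    std::string out;
    for (size_t i = 0; i < ts.size(); i++) {
      if (i > 0)
        out += " join ";
      out += (ts[i].is_const ? std::string("(select 1)") : ts[i].name);
      out += " " + ts[i].alias;
      if (!ts[i].cond.empty())
        out += " on (" + ts[i].cond + ")";
    }
    return out;
  }

  int main() {
    std::vector<JoinTable> ts = {
      {"t1", "a", "", false},
      {"t2", "b", "a.x = 1", true},  // const table becomes (select 1) b
    };
    // Prints: t1 a join (select 1) b on (a.x = 1)
    std::printf("%s\n", render_join(ts).c_str());
    return 0;
  }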
Not all queries passed to a group by handler are valid (MDEV-32273);
for example, a join-on expression may refer to outer fields not in the
current context. We detect this during handler creation when walking
the join. See also gbh_outer_fields_in_join.test.
It also skips eliminated tables, which fixes MDEV-26193.
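A hedged model of that check (invented node type, not the server's Item
tree): walk each join condition and reject the query if any field
resolves to a table outside the current join.

  #include <set>
  #include <string>
  #include <vector>

  // Hypothetical expression node: a field names its owning table,
  // a function/condition node has children.
  struct Expr {
    std::string table;       // non-empty only for field nodes
    std::vector<Expr> args;  // children of function/condition nodes
  };

  // Returns false when some field refers to a table that is not part
  // of the current join, the outer-field case of MDEV-32273.
  static bool refers_only_to(const Expr &e,
                             const std::set<std::string> &join_tables) {
    if (!e.table.empty() && !join_tables.count(e.table))
      return false;  // outer field: do not create the group by handler
    for (const Expr &arg : e.args)
      if (!refers_only_to(arg, join_tables))
        return false;
    return true;
  }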
The Spider GBH query rewrite should find the table for a field in a
simple way. Add a method spider_fields::find_table that searches its
table holders to find the table for a given field. This way we can get
rid of the first pass during GBH creation, where field_chains and
field_holders are created.
We also check that the field belongs to a spider table while walking
through the query, so we can remove
all_query_fields_are_query_table_members(). However, this requires
creating the table_holder earlier, so that tables are added before the
check. We do that, and in doing so also decouple table_holder and
spider_fields.
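A sketch of the shape of such a lookup (hypothetical holder type; the
real method lives on spider_fields):

  #include <cstddef>

  // Hypothetical table holder: pairs a table with whatever bookkeeping
  // the group by handler needs for it.
  struct TableHolder { const void *table; };

  // Linear search over the holders for the one owning the field's
  // table; nullptr means the field belongs to no spider table.
  static const TableHolder *find_table(const TableHolder *holders,
                                       size_t n, const void *field_table) {
    for (size_t i = 0; i < n; i++)
      if (holders[i].table == field_table)
        return &holders[i];
    return nullptr;
  }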
Remove unused methods and fields. Add comments.
Two methods are removed from spider_fields:
- reappend_tables_part()
- reappend_tables()
There are probably more conn_holder-related methods like these that can
be removed.
The direct aggregate (DA) mechanism seems to be intended to work only
when, otherwise, a full table scan query would be executed from the
spider node and the aggregation done at the spider node too. Typically
this happens in sub_select(). In the test spider.direct_aggregate_part,
direct aggregate allows COUNT statements to be sent directly to the
data nodes, and the results added up at the spider node, instead of
iterating over the rows one by one at the spider node.
By contrast, the group by handler (GBH) typically sends aggregated
queries directly to data nodes, in which case DA does not improve the
situation. That is why we fix the bug by disabling DA when GBH is
used.
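For illustration, the COUNT case from spider.direct_aggregate_part
boils down to this (schematic only, not the actual spider code):

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  int main() {
    // Each data node answers COUNT(*) for its own partition...
    std::vector<int64_t> per_node_counts = {100, 250, 42};
    int64_t total = 0;
    for (int64_t c : per_node_counts)
      total += c;  // ...and the spider node just adds the results up,
                   // instead of fetching and counting rows one by one.
    std::printf("COUNT(*) = %lld\n", (long long) total);
    return 0;
  }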
There are other reasons supporting this change. First, the creation of
a GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()), which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on the wrong items.
Second, the spider DA function only calls direct_add() on the items,
and the follow-up add() needs to be called by the SQL layer code. In
do_select(), after executing the query with the GBH, the required add()
would not necessarily be called.
Disabling DA when GBH is used does fix the bug. A few other things are
included in this commit to improve the situation with spider DA:
1. Add a session variable that allows the user to disable DA
completely; this will help as a temporary measure if/when further bugs
with DA emerge.
2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.
3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code already does a null check, but
the sentinel field ends up at an invalid address, causing segfaults.
With a correct implementation of the row creation we can avoid such
segfaults.
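A minimal sketch of the corrected allocation in point 3 (hypothetical
layout, assuming the row is an array of field pointers):

  #include <cstdlib>
  #include <cstring>

  // Allocate num_fields + 1 pointers and NULL-terminate, so the
  // existing null check stops at a valid sentinel instead of reading
  // past the end of the allocation.
  static char **make_row(char *const *src, size_t num_fields) {
    char **row = (char **) std::malloc((num_fields + 1) * sizeof(char *));
    if (!row)
      return nullptr;
    std::memcpy(row, src, num_fields * sizeof(char *));
    row[num_fields] = nullptr;  // the sentinel the null check relies on
    return row;
  }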
When generating a query to send to a remote server, spider generates
new aliases for all tables in the query (at least in the group by
handler). First it walks all the expressions and creates a list of new
table aliases to use for each field. Then, in init_scan(), it actually
generates the query, taking for each field the next alias from the
list.
It dives recursively into functions: for func(f1) it goes in, sees the
field f1, and appends to the list the new name for the table of f1.
This works fine for non-aggregate functions and for aggregate functions
in the SELECT list. But aggregate functions in the ORDER BY are always
references to the select list; they never need to be qualified with a
table name. That is, even if a field name appears as an argument of an
aggregate function in the ORDER BY, no table alias must be appended to
the list. Let's just skip aggregate functions when analyzing the ORDER
BY for table aliases.
This fixes spider/bugfix.mdev_29008
(was observed on aarch64, x86, ppc64le, and amd64 with --rr).
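A hedged model of the fix (invented item types, not the server's Item
hierarchy): the alias-collecting walk simply returns early on
aggregate items.

  #include <vector>

  enum ItemKind { FIELD, FUNC, SUM_FUNC };

  // Hypothetical expression item: FIELD leaves carry a table number,
  // FUNC/SUM_FUNC nodes have arguments.
  struct Item {
    ItemKind kind;
    int table_no;              // meaningful for FIELD only
    std::vector<Item *> args;  // for FUNC and SUM_FUNC
  };

  // Collect, in visit order, the tables whose aliases the query
  // generator will need, skipping aggregates in the ORDER BY case.
  static void collect_aliases(const Item *item, std::vector<int> *out) {
    if (item->kind == SUM_FUNC)
      return;  // the fix: ORDER BY aggregates reference the select list
    if (item->kind == FIELD) {
      out->push_back(item->table_no);
      return;
    }
    for (const Item *arg : item->args)  // dive into non-aggregate funcs
      collect_aliases(arg, out);
  }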
When executing a query like "select id, 0 as const, val from ...",
there are 3 columns (items) in Query->select at
handlerton->create_group_by(). After that, MariaDB makes a temporary
table with only 2 columns: the skipped items are const items, so fix
Spider to skip const items among the items in Query->select.
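A schematic of the fix under that assumption (hypothetical item type):
pair only non-const select items with temporary table columns.

  #include <vector>

  struct SelectItem { bool is_const; };  // "0 as const" has is_const = true

  // The temporary table has columns only for non-const items, so skip
  // const items when walking Query->select in create_group_by().
  static std::vector<const SelectItem *>
  items_with_tmp_columns(const std::vector<SelectItem> &select) {
    std::vector<const SelectItem *> out;
    for (const SelectItem &item : select)
      if (!item.is_const)
        out.push_back(&item);
    return out;
  }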
Change the following functions to be called once for all partitions
(batch call) instead of once per partition (a sketch of the idea
follows the list):
- store_lock
- external_lock
- start_stmt
- extra
- cond_push
- info_push
- top_table
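A hedged sketch of the batching idea (invented interfaces; the real
change is in ha_spider's partition handling): one entry point loops
over all partitions internally, replacing N separate calls from the
caller, one per partition.

  #include <vector>

  // Hypothetical per-partition handler.
  struct PartHandler {
    virtual ~PartHandler() {}
    virtual void start_stmt() = 0;
  };

  // One batched call covering every partition.
  struct BatchedSpiderTable {
    std::vector<PartHandler *> parts;
    void start_stmt_all() {
      for (PartHandler *p : parts)
        p->start_stmt();
    }
  };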
Add a system variable spider_slave_trx_isolation.
- spider_slave_trx_isolation
The transaction isolation level used when a Spider table is accessed by
a slave SQL thread.
-1 : OFF
 0 : READ UNCOMMITTED
 1 : READ COMMITTED
 2 : REPEATABLE READ
 3 : SERIALIZABLE
The default value is -1.
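A sketch of the documented value mapping (the SET statement text is
illustrative, not taken from the spider source):

  #include <cstdio>

  // Map spider_slave_trx_isolation to the statement spider would need
  // to send; -1 (OFF) means "do not override the isolation level".
  static const char *slave_trx_isolation_stmt(int v) {
    switch (v) {
      case 0: return "SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED";
      case 1: return "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED";
      case 2: return "SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ";
      case 3: return "SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE";
      default: return nullptr;  // -1: OFF
    }
  }

  int main() {
    std::printf("%s\n", slave_trx_isolation_stmt(2));
    return 0;
  }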
Miscellaneous Spider typos
The SELECT with the INNER JOIN is executed with one of the two tables being
optimized as a constant table, which is pre-read. Spider nevertheless attempts
to push down the join to the data node. The crash occurs because the constant
table is excluded from the optimized query that Spider attempts to push down.
In order for Spider to be able to push down a join, the following conditions
need to be met:
- All of the tables involved in the join need to be included in the optimized
query that Spider pushes down. When any of the tables involved in the join
is a constant table, it is excluded from the optimized query that Spider
attempts to push down.
- All fields involved in the query need to be members of tables included in the
optimized query.
I fixed the problem by preventing Spider from pushing down queries that include
a field that is not a member of a table included in the optimized query. This
solution fixes the reported problem and also fixes other potential problems.
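A schematic of that guard (hypothetical names): refuse pushdown when
any field's table, e.g. an excluded constant table, is missing from
the optimized query Spider would send.

  #include <set>
  #include <string>
  #include <vector>

  // field_tables: the table owning each field used in the query.
  // pushed_tables: tables present in the query Spider would push down.
  static bool can_push_down(const std::vector<std::string> &field_tables,
                            const std::set<std::string> &pushed_tables) {
    for (const std::string &t : field_tables)
      if (!pushed_tables.count(t))
        return false;  // fall back to normal execution, no crash
    return true;
  }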
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 4885baf on branch bb-10.3-MDEV-16889
The problem occurred because the Spider node was incorrectly handling
timestamp values sent to and received from the data nodes.
The problem has been corrected as follows:
- Added logic to set and maintain the UTC time zone on the data nodes.
To prevent timestamp ambiguity, it is necessary for the data nodes to use
a time zone such as UTC, which does not have daylight saving time.
- Removed the spider_sync_time_zone configuration variable, which did not
solve the problem and which interfered with the solution.
- Added logic to convert to the UTC time zone all timestamp values sent to
and received from the data nodes. This is done for both unique and
non-unique timestamp columns. It is done for WHERE clauses, applying to
SELECT, UPDATE and DELETE statements, and for UPDATE columns. (A sketch
of the conversion follows this list.)
- Disabled Spider's use of direct update when any of the columns to update is
a timestamp column. This is necessary to prevent false duplicate key value
errors.
- Added a new test spider.timestamp to thoroughly test Spider's handling of
timestamp values.
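The conversion step is roughly this (standalone sketch using libc; the
server uses its own time zone machinery):

  #include <cstdio>
  #include <ctime>

  int main() {
    // Render a timestamp in UTC before embedding it in a query sent to
    // a data node whose session time zone is pinned to UTC; with no
    // DST, every timestamp string maps to exactly one point in time.
    std::time_t t = std::time(nullptr);
    std::tm *utc = std::gmtime(&t);
    char buf[32];
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", utc);
    std::printf("... WHERE ts = '%s'\n", buf);
    return 0;
  }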
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Commit 97cc9d3 on branch bb-10.3-MDEV-16246
Includes Spider patches:
- 062_mariadb-10.2.0.direct_join_1and3.diff
- 063_mariadb-10.2.0.direct_join_for_single_partition.diff
- Test cases from Kentoku
Allows Spider to push full joins to the Spider engine through the
create_group_by interface.
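A much-simplified model of that interface (hypothetical types;
MariaDB's actual hook receives the session and a description of the
whole query): the engine either returns a handler that takes over
execution or nullptr to fall back to the normal row-by-row path.

  // Query facts the engine checks before accepting the pushdown.
  struct QueryInfo {
    bool all_tables_are_spider;  // every table must live on data nodes
  };

  struct GroupByHandler {};  // would execute the pushed-down query

  // Return a handler to take over execution, or nullptr to make the
  // SQL layer run the join as usual.
  static GroupByHandler *create_group_by(const QueryInfo &q) {
    if (!q.all_tables_are_spider)
      return nullptr;
    return new GroupByHandler();
  }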
Other things:
- Increased the MYSQL_VERSION_ID check to 10211 (latest 10.2 version)
- Fix for const_table when calling create_group_by().
Original author: Kentoku SHIBA