and failing spider partition test.
With some small datatype changes to the Linux/Solaris my_gethwaddr
implementation, the hardware address can now also be returned on AIX.
This is important for Spider (and UUID).
Spider test change reviewed by Nayuta Yanagisawa.
my_gethwaddr review by Monty in #2081
Creating a temporary table with Spider makes no sense: a Spider table
cannot hold any physical data, and it would require additional effort
to manage even if it were configured correctly.
Set HTON_TEMPORARY_NOT_SUPPORTED in spider_hton->flags.
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
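A minimal sketch of the flag change, assuming it is ORed in where Spider
initializes its handlerton (the surrounding init code is elided):

    /* spider_db_init() sketch: refuse temporary tables at the handlerton
       level, so CREATE TEMPORARY TABLE ... ENGINE=Spider fails cleanly. */
    spider_hton->flags|= HTON_TEMPORARY_NOT_SUPPORTED;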
The check on the SQL command type in ha_spider::external_lock() was deleted
by e954d9de. This resulted in wrong calls to spider_internal_start_trx()
(and thus to Ha_trx_info::register_ha()).
I reverted the check and refactored ha_spider::external_lock().
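Roughly, the restored shape is the following; only ha_spider::external_lock()
and spider_internal_start_trx() come from this commit, while the command
shown and the call signature are illustrative simplifications:

    /* ha_spider::external_lock() sketch: start the internal Spider
       transaction (which registers the handler through
       Ha_trx_info::register_ha()) only when the command needs it. */
    switch (thd_sql_command(thd))
    {
    case SQLCOM_DROP_TABLE:   /* illustrative: no data-node access needed */
      DBUG_RETURN(0);         /* skip spider_internal_start_trx() */
    default:
      break;
    }
    if ((error_num= spider_internal_start_trx(this)))
      DBUG_RETURN(error_num);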
The original query "SELECT IF(COUNT(a.`id`)>=0,'Y','N') FROM t" is
transformed by split_sum_func() into
"SELECT COUNT(a.`id`), IF(ref >= 0, 'Y', 'N') FROM t",
where ref is an Item_ref pointing to "COUNT(a.`id`)".
Spider walks the item list twice, invoking spider_db_print_item_type().
The first invocation is in spider_create_group_by_handler() with
str == NULL. The second one is in spider_group_by_handler::init_scan()
with str != NULL.
spider_db_print_item_type() prints nothing at the first invocation
and prints the items at the second one. However, at the second
invocation, the above-mentioned ref to "COUNT(a.`id`)" already points
to a field in the temporary table where the result will be stored. Thus,
to look behind the ref, Spider would have to generate the query earlier.
A possible fix would be to generate the query to send already in
spider_create_group_by_handler(). However, that would require
considerable changes to Spider's GROUP BY handler, which I'd like
to avoid.
So, I fix the problem by not using the GROUP BY handler when a
query contains an Item_ref whose table_name, name, and alias_name_used
are not set.
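A sketch of the added bail-out, assuming it runs while
spider_create_group_by_handler() walks the select items (the Item_ref
member names come from this commit; the surrounding walker is elided):

    /* A ref produced by split_sum_func() has none of these set; it points
       into the temporary table, so any query text printed from it is
       wrong. Fall back to the default execution path in that case. */
    if (item->type() == Item::REF_ITEM)
    {
      Item_ref *item_ref= (Item_ref *) item;
      if (!item_ref->table_name && !item_ref->name.str &&
          !item_ref->alias_name_used)
        return NULL;              /* do not create the GROUP BY handler */
    }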
The commit e954d9de gave different lifetimes to wide_share and
partition_handler_share. This introduced the possibility that
partition_handler_share could be accessed even after it was freed.
We fix the problem by no longer sharing partition_handler_share and
making it belong to a single wide_handler.
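A sketch of the resulting ownership; only wide_share, wide_handler, and
partition_handler_share appear in the commit, the type and field shown
are assumed names:

    /* Before: the partition handler state hung off the long-lived, shared
       wide_share, so one handler could free it while another still held a
       pointer to it. After: each wide_handler owns its own instance, giving
       the two objects a single, common lifetime. */
    struct SPIDER_WIDE_HANDLER
    {
      SPIDER_PARTITION_HANDLER *partition_handler; /* owned, not shared */
      /* ... */
    };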
The Spider storage engine ignored implicit grouping when an aggregate
was converted to a constant by the query optimizer.
As a result, the Spider SE returned more rows than expected.
To fix the problem, we notify the Spider SE of the existence of
the implicit grouping via Query::distinct.
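A sketch of the notification, assuming it happens where the server fills
in the Query structure handed to the group-by handler (Query::distinct is
named in this commit; the detection condition is an assumption):

    /* Implicit grouping (aggregates without GROUP BY) yields at most one
       row; report it via Query::distinct so Spider does not return one
       row per matching record after the optimizer constant-folds the
       aggregate. */
    query.distinct= join->select_distinct || join->implicit_grouping;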
The reason for the double lock was an extraneous ha_flush_logs().
Unlike in the upstream, it is unnecessary in MariaDB, which exploits a
binlog checkpoint mechanism so that PURGE or RESET MASTER cannot disturb
transaction recovery: should a transaction be prepared but its binlog
file be gone, the transaction is committed on disk too.
These facts have always been verified by the existing tests
binlog.binlog_{checkpoint,xa_recover}.test.
A regression test for the bug is nevertheless included.
The server crashes due to passing NULL to spider_free().
In some cases, this == pt_handler_share_handlers[0] at the label
error_get_share in ha_spider::open().
In such cases, nullifying pt_handler_share_handlers[0]->wide_handler
also nullifies this->wide_handler. We should not do that before
freeing this->wide_handler.
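In code, the hazard at error_get_share looks roughly like this
(spider_free() and the member names are from the commit; the trx
argument and flags are abbreviated):

    /* When this == pt_handler_share_handlers[0], the two statements below
       operate on the same pointer: */
    pt_handler_share_handlers[0]->wide_handler= NULL;  /* also nulls
                                                          this->wide_handler */
    spider_free(trx, wide_handler, MYF(0));            /* passes NULL: crash */
    /* Fix: free this->wide_handler first, clear the alias afterwards. */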
Spider accesses a freed connection in ha_spider::end_bulk_insert(),
which results in SIGSEGV.
The cause of the bug is that ha_spider::is_bulk_insert_exec_period()
wrongly returns TRUE when the bulk insertion has not yet started.
Spider decides whether it is inside a bulk insertion by the value of
insert_pos, but the variable is not reset on one code path, and this
results in the bug.
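A minimal sketch of the fix, assuming the reset belongs where the
bulk-insert state is torn down (insert_pos is named in the commit; the
exact code path is elided):

    /* Always reset the bulk-insert cursor, so a later call to
       is_bulk_insert_exec_period() cannot report TRUE for a bulk
       insertion that never started. */
    insert_pos= 0;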
The server crashes if ALTER TABLE, which accesses physical data
placed at data nodes, is performed on a Spider table.
The cause of the bug is that spider_check_trx_and_get_conn() does
not allocate connections if sql_command == SQLCOM_ALTER_TABLE.
Some ALTER TABLE statements, like ALTER TABLE ... CHECK PARTITION,
access data nodes. So, we need to allocate a new connection before
performing such ALTER TABLE statements.
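A sketch of the change in spider_check_trx_and_get_conn(); the old
special case is shown as a comment, and the connection call's arguments
are elided:

    /* Previously: allocation was skipped when
       sql_command == SQLCOM_ALTER_TABLE. Now: allocate unconditionally,
       because statements such as ALTER TABLE ... CHECK PARTITION do
       reach the data nodes. */
    if (!conn /* was: && sql_command != SQLCOM_ALTER_TABLE */)
      conn= spider_get_conn(/* ... */);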
- Handle stored function conditions correctly, with the same logic as with UDFs.
- When running queries on the Spider SE, we do not, by default, push down WHERE conditions that use UDFs or stored functions to the remote data nodes, unless the user demands it (by setting spider_use_pushdown_udf).
- Disable direct update/delete when a UDF condition is skipped (see the sketch below).
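A sketch of the decision, assuming it sits where Spider classifies
condition items for pushdown; spider_param_use_pushdown_udf() matches the
system variable named above, and the rest of the call site is elided:

    /* Treat stored functions (FUNC_SP) exactly like UDFs: push the
       condition down only when the user opted in; otherwise skip it
       (and direct update/delete is disabled for the statement). */
    case Item_func::UDF_FUNC:
    case Item_func::FUNC_SP:
      if (!spider_param_use_pushdown_udf(thd, share->use_pushdown_udf))
        DBUG_RETURN(ER_SPIDER_COND_SKIP_NUM);   /* not pushed down */
      break;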
The server crashed when the SPIDER_DIRECT_SQL UDF was called with a
non-existing temporary table.
The bug was introduced by 91ffdc8. That commit removed the check,
in THD::open_temporary_table(), which ensured that the target
temporary tables exist.
We can fix the bug by adding the check before the call of
THD::open_temporary_table().
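A sketch of the added guard (THD::open_temporary_table() is named in the
commit; the existence-check method used here is an assumption):

    /* Make sure the temporary table really exists before opening it;
       91ffdc8 removed this safety net from THD::open_temporary_table()
       itself. */
    if (!thd->find_temporary_table(table_list))
      goto error;   /* report "table doesn't exist" instead of crashing */
    if (thd->open_temporary_table(table_list))
      goto error;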