Change the following functions to be called once per batch instead of
once per partition (see the sketch after this list):
- store_lock
- external_lock
- start_stmt
- extra
- cond_push
- info_push
- top_table
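A standalone sketch of the intent (the types and the batched entry point
are hypothetical, not the actual ha_partition code):

  #include <vector>

  struct Part
  {
    int extra(int op) { (void) op; return 0; }   // per-partition work
  };

  /* Old: the SQL layer loops over partitions and calls into the
     engine N times */
  int call_extra_old(std::vector<Part> &parts, int op)
  {
    for (Part &p : parts)
      if (int err= p.extra(op))
        return err;
    return 0;
  }

  /* New: the SQL layer makes one call; the engine iterates (or batches)
     internally, e.g. sending one remote statement for all partitions */
  struct Engine
  {
    std::vector<Part> parts;
    int extra_batch(int op)                      // hypothetical batched API
    {
      for (Part &p : parts)
        if (int err= p.extra(op))
          return err;
      return 0;
    }
  };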
MDEV-22531 Remove maria::implicit_commit()
MDEV-22607 Assertion `ha_info->ht() != binlog_hton' failed in
MYSQL_BIN_LOG::unlog_xa_prepare
From the handler point of view, Aria now looks like a transactional
engine. One effect of this is that we don't need to call
maria::implicit_commit() anymore.
This change also forces the server to call trans_commit_stmt() after doing
any reads or writes to system tables. This work will also make it easier
to later allow users to have system tables in engines other than Aria.
To handle the case that Aria doesn't support rollback, a new
handlerton flag, HTON_NO_ROLLBACK, was added for engines that have
transactions without rollback (for the moment only binlog and Aria).
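A minimal sketch of how an engine sets the new flag in its handlerton
initialization (the init function is illustrative):

  static int example_hton_init(void *p)
  {
    handlerton *hton= (handlerton *) p;
    /* transactional (commit is supported) but changes cannot be rolled
       back; binlog and Aria are the current users of this flag */
    hton->flags|= HTON_NO_ROLLBACK;
    return 0;
  }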
Other things
- Moved freeing of MARIA_SHARE to a separate function, as the MARIA_SHARE
can still be part of a transaction even after the table has been closed.
- Changed the Aria checkpoint to use the new MARIA_SHARE free function. This
fixes a possible memory leak when using S3 tables.
- Changed testing of binlog_hton to instead test for HTON_NO_ROLLBACK.
- Removed the check of has_transaction_manager() in handler.cc, as we can
assume that an engine that started a transaction supports transactions.
- Added a new class 'start_new_trans' that can be used to start independent
sub-transactions, for example while reading mysql.proc or when using the
help or status tables (see the sketch after this list).
- open_system_tables...() and open_proc_table_for_read() no longer take an
Open_tables_backup list. This is now handled by 'start_new_trans'.
- Split thd::has_transactions() into thd::has_transactions() and
thd::has_transactions_and_rollback().
- Added handlerton code to free cached transactions objects.
Needed by InnoDB.
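A hedged usage sketch of start_new_trans (the surrounding function is
illustrative; restore_old_transaction() is the method added here):

  void read_proc_table_example(THD *thd)
  {
    start_new_trans new_trans(thd);       // suspend the user's transaction
    /* ... open and read mysql.proc here ... */
    trans_commit_stmt(thd);               // commits only the independent work
    new_trans.restore_old_transaction();  // resume the original transaction
  }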
All changes (except one) are of the form
thd->transaction. -> thd->transaction->
thd->transaction points by default to 'thd->default_transaction'
This allows us to 'easily' have multiple active transactions for a
THD object, for example when reading data from the mysql.proc table.
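Simplified sketch of the resulting layout (member types abbreviated and
assumed; only the pointer indirection is the point):

  class THD
  {
  public:
    transaction_t default_transaction;     // the normal transaction state
    transaction_t *transaction;            // the currently active one
    THD() : transaction(&default_transaction) {}
  };
  /* start_new_trans can point thd->transaction at a fresh transaction_t,
     run independent statements, and then point it back */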
When executing a query like "select id, 0 as const, val from ...", there
are three items in Query->select at the point handlerton->create_group_by()
is called, but MariaDB then creates a temporary table with only two
columns, because const items are not materialized. Fix Spider to likewise
skip const items when processing the items in Query->select.
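An illustrative fragment of the fix (loop body simplified): const items
in Query->select get no column in the temporary table, so they must not
be counted when mapping items to columns.

  List_iterator_fast<Item> it(*query->select);
  Item *item;
  while ((item= it++))
  {
    if (item->const_item())
      continue;                    /* no tmp-table column: skip it */
    /* ... map item to the next temporary-table column ... */
  }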
Prototype change:
- virtual ha_rows records_in_range(uint inx, key_range *min_key,
- key_range *max_key)
+ virtual ha_rows records_in_range(uint inx, const key_range *min_key,
+ const key_range *max_key,
+ page_range *res)
The handler can ignore the page_range parameter. If the handler
fills it in, the optimizer can deduce the following:
- Whether the previous range's last key is in the same block as the
next range's first key
- Whether the current key range fits within one block
- We can also assume that the first and last blocks read are cached!
This can be used for a better calculation of IO seeks when we
estimate the cost of a range index scan.
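A hedged sketch of an engine filling in the out-parameter (assuming
page_range carries the first and last page touched by the range; the
numbers are illustrative):

  ha_rows ha_example::records_in_range(uint inx,
                                       const key_range *min_key,
                                       const key_range *max_key,
                                       page_range *pages)
  {
    /* ... walk the index to estimate rows between min_key and max_key,
       remembering which blocks the first and last key fell into ... */
    pages->first_page= 17;           /* illustrative block numbers */
    pages->last_page= 42;
    return 25;                       /* illustrative row estimate */
  }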
The parameter is fully implemented for MyISAM, Aria and InnoDB.
A separate patch will update handler::multi_range_read_info_const() to
take advantage of this change and also remove the double
records_in_range() calls that are no longer needed.
This was done to remove a performance regression between 10.3 and 10.4.
In 10.5 we will have a better implementation of records_in_range
that will enable us to get more statistics.
This change was not done in 10.4 because in 10.5 it will be part of
a larger change that is not suitable for the GA 10.4 version.
Other things:
- Changed the default handler block_size to 8192 to fix statistics
for engines that don't set the block size.
- Fixed a bug in Spider when using multi-part const ranges
(Patch from Kentoku)
Add the following parameter.
- spider_sync_sql_mode
Whether to synchronize the local sql_mode to the remote server.
0 : Do not synchronize.
1 : Synchronize.
The default value is 1
Add the following parameters.
- spider_remote_wait_timeout
Sets wait_timeout on the remote server when connecting, to improve
connection performance if you know a suitable value.
-1, 0 : Not set.
1 or more : Seconds.
The default value is -1
- spider_wait_timeout
The wait_timeout used for connections to remote servers.
-1, 0 : Not set.
1 or more : Seconds.
The default value is 604800
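A hedged sketch of how such a session variable can be declared in the
Spider plugin (flags, range and help text illustrative; MYSQL_THDVAR_INT
is the standard plugin macro):

  static MYSQL_THDVAR_INT(
    remote_wait_timeout,             /* --spider-remote-wait-timeout */
    PLUGIN_VAR_RQCMDARG,
    "Wait timeout in seconds on the remote server; -1,0: not set",
    NULL,                            /* check function */
    NULL,                            /* update function */
    -1,                              /* default */
    -1,                              /* minimum */
    2147483647,                      /* maximum */
    0                                /* block size */
  );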
Change the default values of the following:
quick_mode 0 -> 3
quick_page_size 100 -> 1024
Add the following parameter for limiting the result page size in bytes.
- quick_page_byte (qpb)
Number of bytes in a page when fetching records one by one.
When quick_mode is 1 or 2, Spider stores at least 1 record even if
quick_page_byte is smaller than 1 record. When quick_mode is 3,
quick_page_byte is used for judging whether to use a temporary table.
Priority is given to the server parameter spider_quick_page_byte
when it is set.
The default value is 10485760
Fix "out of sync" issue when using quick_mode = 1 or 2
The problem occurred because the Spider node was incorrectly handling
timestamp values sent to and received from the data nodes.
The problem has been corrected as follows:
- Added logic to set and maintain the UTC time zone on the data nodes.
To prevent timestamp ambiguity, it is necessary for the data nodes to use
a time zone such as UTC which does not have daylight saving time.
- Removed the spider_sync_time_zone configuration variable, which did not
solve the problem and which interfered with the solution.
- Added logic to convert to the UTC time zone all timestamp values sent to
and received from the data nodes. This is done for both unique and
non-unique timestamp columns. It is done for WHERE clauses, applying to
SELECT, UPDATE and DELETE statements, and for UPDATE columns.
- Disabled Spider's use of direct update when any of the columns to update is
a timestamp column. This is necessary to prevent false duplicate key value
errors.
- Added a new test spider.timestamp to thoroughly test Spider's handling of
timestamp values.
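A hedged sketch of the conversion idea using the server's Time_zone API
(the wrapper name is illustrative; my_tz_UTC and the Time_zone methods
are existing server facilities):

  static void convert_ts_for_data_node(THD *thd, const MYSQL_TIME *local,
                                       MYSQL_TIME *utc)
  {
    uint error;
    /* interpret the value in the session time zone ... */
    my_time_t sec= thd->variables.time_zone->TIME_to_gmt_sec(local, &error);
    /* ... and re-render it in UTC for the data node */
    my_tz_UTC->gmt_sec_to_TIME(utc, sec);
  }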
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Commit 97cc9d3 on branch bb-10.3-MDEV-16246
Cherry-Picked:
Commit 97cc9d3 on branch bb-10.3-MDEV-16246
When an attempt to connect to the remote server fails, Spider retries
the connection up to 1000 times or until the connection attempt
succeeds. This is perceived as a hang if the remote server remains
unavailable.
I have introduced changes in Spider's table status handler to fix this problem.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 6ee6933 on branch bb-10.3-MDEV-15712
Includes Spider patches
- 062_mariadb-10.2.0.direct_join_1and3.diff
- 063_mariadb-10.2.0.direct_join_for_single_partition.diff
- Test cases from Kentoku
Allows full joins to be pushed down to the Spider engine through the
create_group_by interface.
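The hook involved, as a sketch (registration simplified; create_group_by
is the existing handlerton member):

  static group_by_handler *
  spider_create_group_by_handler(THD *thd, Query *query);

  static int spider_db_init(void *p)
  {
    handlerton *spider_hton= (handlerton *) p;
    /* return a group_by_handler when the whole join/group-by can be
       executed remotely, or NULL to fall back to row-by-row access */
    spider_hton->create_group_by= spider_create_group_by_handler;
    return 0;
  }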
Other things:
- Increased the MYSQL_VERSION_ID check to 10211 (latest 10.2 version)
- Fix for const_table when calling create_group_by().
Original author: Kentoku SHIBA
Add support for direct update and direct delete requests for Spider.
A direct update/delete request handles all qualified rows in a single
operation rather than one row at a time.
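A hedged sketch of the engine side (signatures simplified;
HA_CAN_DIRECT_UPDATE_AND_DELETE is the flag added below):

  ulonglong ha_example::table_flags() const
  {
    /* advertise that the engine can apply UPDATE/DELETE itself */
    return HA_CAN_DIRECT_UPDATE_AND_DELETE;
  }

  int ha_example::direct_update_rows(ha_rows *update_rows)
  {
    /* execute one remote "UPDATE ... WHERE ..." covering all qualified
       rows and report how many were changed */
    *update_rows= 0;                /* illustrative */
    return 0;
  }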
Contains Spiral patches:
006_mariadb-10.2.0.direct_update_rows.diff MDEV-7704
008_mariadb-10.2.0.partition_direct_update.diff MDEV-7706
010_mariadb-10.2.0.direct_update_rows2.diff MDEV-7708
011_mariadb-10.2.0.aggregate.diff MDEV-7709
027_mariadb-10.2.0.force_bulk_update.diff MDEV-7724
061_mariadb-10.2.0.mariadb-10.1.8.diff MDEV-12870
- The differences compared to the original patches:
- Most of the parameters of the new functions were unnecessary and
have been removed.
- Changed bit positions for new handler flags upon consideration of
handler flags not needed by other Spiral patches and handler flags
merged from MySQL.
- Added info_push() (Was originally part of bulk access patch)
- Didn't include code related to HandlerSocket
- Added HA_CAN_DIRECT_UPDATE_AND_DELETE
Original author: Kentoku SHIBA
First reviewer: Jacob Mathew
Second reviewer: Michael Widenius