Analysis: The problem is that punch hole does not know the actual page
size of the page, or whether the page belongs to a data file or a log file.
Fix: Pass down the file type and page size to the OS layer so they can be
used when trim is called. Also fix an unsafe null pointer access to
actual write_size.
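As an illustration only, here is a minimal sketch of the idea on Linux,
where trimming is done with fallocate(FALLOC_FL_PUNCH_HOLE); the function
and parameter names below are hypothetical, not the server's real OS-layer
API:

  // Hedged sketch: pass the file type and the actual page size down to the
  // OS layer so trim never punches a hole of the wrong size or into a log
  // file.  os_file_trim_sketch() and its arguments are illustrative names;
  // only fallocate() and its flags are the real system interface.
  #include <fcntl.h>      // fallocate() and the FALLOC_FL_* flags
  #include <cerrno>
  #include <cstdint>

  static bool os_file_trim_sketch(int fd, uint64_t page_offset,
                                  uint64_t page_size, uint64_t write_size,
                                  bool is_log_file)
  {
    if (is_log_file)                    // never punch holes in log files
      return true;
    if (write_size == 0 || write_size >= page_size)
      return true;                      // nothing to trim for this page

    // Free the unused tail of the page:
    // [page_offset + write_size, page_offset + page_size)
    int ret= fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                       (off_t) (page_offset + write_size),
                       (off_t) (page_size - write_size));
    return ret == 0 || errno == EOPNOTSUPP;
  }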
The assumption is that the engine should not need to
evaluate HAVING on table->record[0] - the engine either
can evaluate HAVING internally before writing the row to
table->record[0], or it should leave it to the server,
which will evaluate HAVING(table->record[0]).
Similarly, the engine should not need to evaluate ORDER BY
on table->record[0]. Either it returns the data already
sorted, or the server will sort the table.
This task is to allow storage engines that can execute GROUP BY or
summary queries efficiently to intercept a full query or subquery from
MariaDB and deliver the result either to the client or to a temporary
table for further processing.
- Added code in sql_select.cc to intercept GROUP BY queries.
Creation of the group_by_handler is done after all optimizations, to allow
the storage engine to benefit from an optimized WHERE clause and suggested
indexes to use.
- Added a group by handler to the sequence engine and a group_by test suite
as a way to test the new interface.
- Intercept EXPLAIN with the message "Storage engine handles GROUP BY"
libmysqld/CMakeLists.txt:
Added new group_by_handler files
sql/CMakeLists.txt:
Added new group_by_handler files
sql/group_by_handler.cc:
Implementation of group_by_handler functions
sql/group_by_handler.h:
Definition of group_by_handler class
sql/handler.h:
Added handlerton function to create a group_by_handler, if the storage
engine can intercept the query.
sql/item_cmpfunc.cc:
Allow one to evaluate item_equal any time.
sql/sql_select.cc:
Added code to intercept GROUP BY queries
- If all tables are from the same storage engine and the query is
using sum functions, call create_group_by() to check if the storage
engine can intercept the query.
- If yes:
- create a temporary table to hold a GROUP_BY row or result
- In do_select() intercept normal query execution by instead
calling the group_by_handler to get the result
- Intercept EXPLAIN
sql/sql_select.h:
Added handling of group_by_handler
Added caching of the original join tab (needed for cleanup after
group_by handler)
storage/sequence/mysql-test/sequence/group_by.result:
Test group_by_handler interface
storage/sequence/mysql-test/sequence/group_by.test:
Test group_by_handler interface
storage/sequence/sequence.cc:
Added a simple group_by_engine for handling COUNT(*) and
SUM(primary_key). This was done as a test of the group_by_handler
interface.
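To make the interface above concrete, here is a hedged, standalone sketch
in the spirit of the sequence engine's COUNT(*) support.  The
init_scan()/next_row()/end_scan() flow mirrors the description above, but
all types and names below are simplified stand-ins, not copies of
sql/group_by_handler.h:

  #include <cstdint>
  #include <cstdio>

  struct Fake_table { uint64_t count_field; };   // stand-in for TABLE/record[0]
  struct Fake_query { uint64_t sequence_rows; }; // stand-in for the pushed query

  class group_by_handler_sketch
  {
  public:
    Fake_table *table;                 // the engine writes result rows here
    explicit group_by_handler_sketch(Fake_table *t) : table(t) {}
    virtual ~group_by_handler_sketch() = default;

    virtual int init_scan()= 0;        // prepare to produce the result
    virtual int next_row()= 0;         // fill the result row, or report EOF
    virtual int end_scan()= 0;         // release resources
  };

  // A COUNT(*)-only handler: the whole GROUP BY result is a single row that
  // the engine can compute without scanning, as in the sequence engine test.
  class sequence_count_handler : public group_by_handler_sketch
  {
    Fake_query query;
    bool done= false;
  public:
    sequence_count_handler(Fake_table *t, Fake_query q)
      : group_by_handler_sketch(t), query(q) {}

    int init_scan() override { done= false; return 0; }
    int next_row() override
    {
      if (done)
        return -1;                     // HA_ERR_END_OF_FILE in the real API
      table->count_field= query.sequence_rows;  // COUNT(*) of seq_1_to_N is N
      done= true;
      return 0;
    }
    int end_scan() override { return 0; }
  };

  int main()
  {
    Fake_table result{};
    sequence_count_handler h(&result, Fake_query{1000});
    h.init_scan();
    while (h.next_row() == 0)
      std::printf("COUNT(*) = %llu\n", (unsigned long long) result.count_field);
    h.end_scan();
    return 0;
  }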
if we didn't write the CREATE TEMPORARY TABLE statement.
- Enable old code from one of my older changesets that didn't make it into 10.0
- Fix test cases that failed as they expected DROP TEMPORARY TABLE in the log.
- Turning get_mm_tree_for_const() from a static function into
a protected method in Item.
- Adding a new class Item_bool_func2_with_rev, for the functions and operators
that have a reverse function and can use the range optimizer
to optimize "value OP field" as "field REV_OP value". Deriving
Item_bool_rowready_func2 and Item_func_spatial_rel from the new class.
- Removing Item_bool_func2::have_rev_func().
- Moving code from Item_bool_func::get_mm_leaf() into Field_xxx::can_optimize_range().
This reduces the total amount of virtual calls. Also, it's a prerequisite
change for the pluggable data types.
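For illustration, a hedged sketch of the "value OP field" rewrite; the enum
and the rev_op() helper are hypothetical, only the idea of mapping each
operator to its mirror image comes from the change above:

  #include <cassert>

  enum cmp_op { LT_OP, LE_OP, GT_OP, GE_OP, EQ_OP };

  // Mirror image of a comparison: "10 < field" becomes "field > 10", which
  // the range optimizer can analyze as a range on the field.
  static cmp_op rev_op(cmp_op op)
  {
    switch (op)
    {
    case LT_OP: return GT_OP;
    case LE_OP: return GE_OP;
    case GT_OP: return LT_OP;
    case GE_OP: return LE_OP;
    case EQ_OP: return EQ_OP;          // '=' is its own reverse
    }
    return op;
  }

  int main()
  {
    assert(rev_op(LT_OP) == GT_OP);    // 10 <  f  <=>  f >  10
    assert(rev_op(GE_OP) == LE_OP);    // 10 >= f  <=>  f <= 10
    return 0;
  }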
Fixed a race condition in the code filling INFORMATION_SCHEMA.PROCESSLIST.INFO_BINARY.
When loading the query string/length of another connection, one must hold
LOCK_thd_data.
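A hedged sketch of the locking pattern, using plain pthread calls instead
of the server's mysql_mutex_* wrappers; Other_thd and its members are
illustrative stand-ins for THD:

  #include <pthread.h>
  #include <string>

  struct Other_thd
  {
    pthread_mutex_t LOCK_thd_data;     // protects query/query_length
    const char *query;
    size_t query_length;
  };

  // Copy another connection's query only while holding its LOCK_thd_data,
  // so the owning thread cannot free or swap the buffer mid-copy.
  static std::string copy_info_binary(Other_thd *tmp)
  {
    std::string result;
    pthread_mutex_lock(&tmp->LOCK_thd_data);
    if (tmp->query && tmp->query_length)
      result.assign(tmp->query, tmp->query_length);
    pthread_mutex_unlock(&tmp->LOCK_thd_data);
    return result;
  }

  int main()
  {
    Other_thd other;
    pthread_mutex_init(&other.LOCK_thd_data, nullptr);
    other.query= "SELECT 1";
    other.query_length= 8;
    return copy_info_binary(&other) == "SELECT 1" ? 0 : 1;
  }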
Analysis: When a page is read from an encrypted table and the page can't be
decrypted because of a bad key (or an incorrect encryption algorithm or
method), the page was incorrectly left in the buffer pool.
Fix: Remove the page from the buffer pool and from pending IO.
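A hedged sketch of the intended control flow only; every name below is an
illustrative stand-in, the real fix lives in the buffer pool read-completion
code:

  #include <cstdio>

  struct Buf_page { bool key_ok; bool in_pool; bool io_pending; };

  static bool decrypt_page(Buf_page *p)      { return p->key_ok; }
  static void remove_pending_io(Buf_page *p) { p->io_pending= false; }
  static void evict_from_pool(Buf_page *p)   { p->in_pool= false; }

  // Returns true if the page is usable, false if it was discarded.
  static bool complete_encrypted_read(Buf_page *page)
  {
    if (decrypt_page(page))
      return true;               // normal path: page stays in the buffer pool

    std::fprintf(stderr, "decryption failed: bad key or wrong algorithm\n");
    remove_pending_io(page);     // the read is treated as failed
    evict_from_pool(page);       // do not leave the undecryptable page cached
    return false;
  }

  int main()
  {
    Buf_page bad{false, true, true};
    complete_encrypted_read(&bad);
    return bad.in_pool || bad.io_pending;   // 0: page fully removed
  }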
Item_func_hybrid_field_type did not return correct field_type(), cmp_type()
and result_type() values in some cases, because cached_result_type and
cached_field_type were set in independent pieces of the code and
did not properly match each other.
Fix:
- Removing Item_func_hybrid_result_type
- Deriving Item_func_hybrid_field_type directly from Item_func
- Introducing a new class Type_handler which guarantees that
field_type(), cmp_type() and result_type() are always properly synchronized,
and using the new class in Item_func_hybrid_field_type.
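A hedged sketch of the Type_handler idea: one object answers all three
questions, so a hybrid item that caches a single handler pointer can never
return mismatched types.  The enums and classes below are simplified
stand-ins for the server's enum_field_types / Item_result:

  enum field_type_t  { FT_LONGLONG, FT_NEWDECIMAL, FT_DOUBLE, FT_VARCHAR };
  enum Item_result_t { INT_RESULT, DECIMAL_RESULT, REAL_RESULT, STRING_RESULT };

  class Type_handler_sketch
  {
  public:
    virtual ~Type_handler_sketch() = default;
    virtual field_type_t  field_type()  const= 0;
    virtual Item_result_t result_type() const= 0;
    virtual Item_result_t cmp_type()    const= 0;
  };

  class Type_handler_longlong_sketch : public Type_handler_sketch
  {
  public:
    field_type_t  field_type()  const override { return FT_LONGLONG; }
    Item_result_t result_type() const override { return INT_RESULT; }
    Item_result_t cmp_type()    const override { return INT_RESULT; }
  };

  class Type_handler_newdecimal_sketch : public Type_handler_sketch
  {
  public:
    field_type_t  field_type()  const override { return FT_NEWDECIMAL; }
    Item_result_t result_type() const override { return DECIMAL_RESULT; }
    Item_result_t cmp_type()    const override { return DECIMAL_RESULT; }
  };

  // The hybrid item stores one handler pointer instead of two independently
  // cached enums, so the three accessors always agree.
  class Item_func_hybrid_sketch
  {
    const Type_handler_sketch *m_type_handler;
  public:
    explicit Item_func_hybrid_sketch(const Type_handler_sketch *h)
      : m_type_handler(h) {}
    field_type_t  field_type()  const { return m_type_handler->field_type(); }
    Item_result_t result_type() const { return m_type_handler->result_type(); }
    Item_result_t cmp_type()    const { return m_type_handler->cmp_type(); }
  };

  int main()
  {
    Type_handler_longlong_sketch h_int;
    Item_func_hybrid_sketch item(&h_int);
    return item.field_type() == FT_LONGLONG &&
           item.cmp_type()   == INT_RESULT ? 0 : 1;
  }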
Added a per-table boolean IETF_QUOTES variable to the CSV storage engine. It
allows enabling IETF-compatible parsing of embedded quote and comma
characters. Disabled by default.
This patch is based on Percona revision:
b32fbf0276
Note that the original patch adds a server variable, while this patch adds a
per-table variable.
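For illustration, a hedged sketch of what IETF-compatible (RFC 4180 style)
quoting means for parsing: inside a quoted field a doubled quote stands for
a literal quote and commas are ordinary data.  This is an illustrative
parser, not the CSV engine's actual row reader:

  #include <cassert>
  #include <cstddef>
  #include <string>
  #include <vector>

  static std::vector<std::string> split_ietf_csv_line(const std::string &line)
  {
    std::vector<std::string> fields;
    std::string cur;
    bool in_quotes= false;

    for (std::size_t i= 0; i < line.size(); i++)
    {
      char c= line[i];
      if (in_quotes)
      {
        if (c == '"' && i + 1 < line.size() && line[i + 1] == '"')
        { cur+= '"'; i++; }                    // "" -> literal quote
        else if (c == '"')
          in_quotes= false;                    // closing quote
        else
          cur+= c;                             // commas etc. are data here
      }
      else if (c == '"')
        in_quotes= true;
      else if (c == ',')
      { fields.push_back(cur); cur.clear(); }  // field separator
      else
        cur+= c;
    }
    fields.push_back(cur);
    return fields;
  }

  int main()
  {
    std::vector<std::string> f=
      split_ietf_csv_line("1,\"say \"\"hi\"\", please\",x");
    assert(f.size() == 3);
    assert(f[1] == "say \"hi\", please");
    return 0;
  }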
Addendum:
* Before calling THD::init_for_queries(), flip current_thd to the wsrep
thread so that memory gets allocated for the right THD.
* Use wsrep_creating_startup_threads instead of plugins_are_initialized
as the condition for the execution of THD::init_for_queries() within
start_wsrep_THD(), as use of the latter could still leave some room for
a race.
Problem:
When mysqld starts as a galera node, it creates 2 system threads
(applier & rollbacker) using start_wsrep_THD(). These threads are
created before plugin initialization (plugin_init()) for SST methods
like rsync and xtrabackup.
The threads' initialization itself can proceed in parallel to mysqld's
main thread of execution. As a result, the thread initialization code
(start_wsrep_THD()) can end up accessing some un/partially initialized
structures (like maria_hton, in this particular case), resulting in a
segfault.
Solution:
Fixed by calling THD::init_for_queries() (which accesses maria_hton)
only after the plugins have been initialized.
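A hedged sketch of the ordering issue and the fix; the flag and function
names echo the text above, but the control flow below is a simplified
stand-in, not server code:

  #include <atomic>

  struct THD_sketch
  {
    // In the real server this touches storage engine structures such as
    // maria_hton, which only exist after plugin_init().
    void init_for_queries() {}
  };

  // Cleared once the early startup-thread phase is over and plugin
  // initialization has completed.
  static std::atomic<bool> wsrep_creating_startup_threads{true};

  static void start_wsrep_THD_sketch(THD_sketch *thd)
  {
    // Originally init_for_queries() could run here before plugin_init(),
    // racing with the main thread and hitting a half-initialized maria_hton.
    // The fix defers the call until plugin initialization is done (the real
    // patch uses wsrep_creating_startup_threads as the condition).
    if (!wsrep_creating_startup_threads.load())
      thd->init_for_queries();
  }

  int main()
  {
    THD_sketch applier;
    start_wsrep_THD_sketch(&applier);       // early: initialization deferred
    wsrep_creating_startup_threads= false;  // after plugin_init() in mysqld
    start_wsrep_THD_sketch(&applier);       // now safe to init for queries
    return 0;
  }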