JOIN_CACHE::alloc_buffer() used the wrong logic when calculating the
size of all join buffers, and then used that total to compute the ratio
by which JOIN::shrink_join_buffers() should shrink the buffers.
shrink_join_buffers() could thus end up in a situation where the buffers
still did not fit into the total quota after shrinking, which resulted
in negative buffer sizes. Because unsigned integers are used, this
caused very large buffers to be used instead.
Make JOIN_CACHE::alloc_buffer() use the same logic as
JOIN::shrink_join_buffers() when it calculates the total size of
all join buffers so far.
Also, add a safety check to JOIN::shrink_join_buffers().
This patch doesn't include a test case, because the original test
dataset is too big and fragile. We have dbt3_s001.inc, but I wasn't able
to demonstrate the issue with it.
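The failure mode is plain unsigned wrap-around. A minimal standalone
C++ sketch of it (sizes and variable names are illustrative, not the
actual server code):

  #include <cstdio>
  #include <cstddef>

  int main()
  {
    size_t total_quota= 256 * 1024;  // join_buffer_space_limit
    size_t used_so_far= 300 * 1024;  // miscounted size of earlier buffers

    // Intended: space remaining for the next buffer. If 'used_so_far'
    // was computed with different logic than the shrinking code uses,
    // it can exceed the quota and the subtraction wraps around.
    size_t remaining= total_quota - used_so_far;

    printf("remaining= %zu\n", remaining);  // huge value, not negative
    return 0;
  }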
Detailed description:
- Added more function comments and fixed types in some old comments
- Removed an outdated comment
- Cleaned up some functions in records.cc
- Replaced "while" with "if"
- Reused error code
- Made functions similar
- Added caching of pfs_batch_update()
- Simplified some rowid_filter code
- Only call build_range_rowid_filter() if rowid filter will be used
- Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter.
We now only have to test need_to_build_rowid_filter to know if we have
to build the filter; the old code needed two tests.
- Added function 'clear_range_rowid_filter' to disable rowid filter.
Made things simpler as we can now clear all rowid filter variables
in one place.
- Removed some 'if' statements in sub_select()
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
This makes it easy to apply different cost modifiers in ha_..time()
functions for io and cpu costs (a sketch of this interface follows
this list):
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
of keys with a given number of ranges and an optional number of blocks
that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned costs for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine-specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to recompile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that needed to
be fixed. These fixes are in the following commits. To avoid having to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
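For reference, the shape of the IO_AND_CPU_COST interface mentioned in
the list above, as a hedged sketch: the struct follows this commit's
description, but the cost function and its constants are invented for
illustration and are not the server's actual formulas:

  #include <cstdio>

  struct IO_AND_CPU_COST { double io; double cpu; };

  // Illustrative keyread-style cost: 'blocks' I/O units to fetch the
  // index blocks, plus per-range and per-row CPU work.
  IO_AND_CPU_COST example_keyread_cost(double rows, double ranges,
                                       double blocks)
  {
    const double KEY_LOOKUP_COST=    0.000435;  // made-up numbers
    const double ROW_NEXT_FIND_COST= 0.000082;
    IO_AND_CPU_COST cost;
    cost.io=  blocks;
    cost.cpu= ranges * KEY_LOOKUP_COST + rows * ROW_NEXT_FIND_COST;
    return cost;
  }

  int main()
  {
    IO_AND_CPU_COST c= example_keyread_cost(1000, 10, 12);
    // A ha_..time() wrapper can then scale io and cpu independently,
    // e.g. c.io * optimizer_disk_read_cost + c.cpu.
    printf("io=%g cpu=%g\n", c.io, c.cpu);
    return 0;
  }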
InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
prevents the optimizer from trying to use index-only scans on the
clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
ha_innobase::rnd_pos_time() as the default engine cost functions now
work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
the value that the user has given. This is used to change costs from
microseconds (user input) to milliseconds (what the server uses
internally).
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class
(see the sketch after this list).
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate the used space of keys in b-trees. (Before, we used numeric
constants; see the sketch after this list.)
- Removed code that assumed that b-trees have costs similar to binary
trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path().
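Two of the numeric conventions above, as a hedged standalone sketch
(function names and example values are illustrative): the
microsecond/millisecond cost adjustment and the b-tree fill-factor
estimate:

  #include <cstdio>

  // User-facing optimizer costs are microseconds; internally the server
  // works in milliseconds.
  static double cost_from_user(double us) { return us / 1000.0; }
  static double cost_to_user(double ms)   { return ms * 1000.0; }

  // Estimated key space in a b-tree, assuming blocks are about 3/4
  // full, hence the 4/3 multiplier.
  static const unsigned long long INDEX_BLOCK_FILL_FACTOR_MUL= 4;
  static const unsigned long long INDEX_BLOCK_FILL_FACTOR_DIV= 3;

  static unsigned long long key_space(unsigned long long rows,
                                      unsigned long long avg_entry_bytes)
  {
    return rows * avg_entry_bytes *
           INDEX_BLOCK_FILL_FACTOR_MUL / INDEX_BLOCK_FILL_FACTOR_DIV;
  }

  int main()
  {
    printf("stored= %g ms\n", cost_from_user(435));       // 0.435 ms
    printf("shown=  %g us\n", cost_to_user(0.435));       // 435 us
    printf("space=  %llu bytes\n", key_space(1000, 30));  // 40000
    return 0;
  }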
Bug fixes:
- Some queries did not update last_query_cost (it stayed 0). Fixed by
moving the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs    The default costs + changes from the
                             command line without an engine specifier.
  heap_optimizer_costs       Heap table costs, used for temporary tables.
  tmp_table_optimizer_costs  The costs for the default on-disk internal
                             temporary table (MyISAM or Aria).
- The engine costs for a table are stored in the table_share. To speed
up access, the handler has a pointer to them. The costs are copied
to the table on first access. If one wants to change the costs, one
must first update the global engine costs and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated the following way
(see sql/keycaches.cc for details, and the sketch after this list):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
- On handler open, copy the engine costs from the handlerton to the
TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
the costs for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
structure. (Idea/Suggestion by Igor)
- Updated comments
- Added some extra DEBUG
- Indentation changes and breaking up long lines
- Trivial code changes like:
- Combining 2 statements into one
- Reorder DBUG lines
- Use a variable to store a pointer that is used multiple times
- Moved declaration of variables to start of loop/function
- Removed dead or commented code
- Removed wrong DBUG_EXECUTE code in best_extension_by_limited_search()
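The cost initialization described above, as a hedged standalone sketch;
every type and step here is a simplified stand-in for the real
handlerton/TABLE_SHARE code (see sql/keycaches.cc for the actual logic):

  #include <cstdio>

  struct OPTIMIZER_COSTS { double disk_read_cost; double key_lookup_cost; };

  OPTIMIZER_COSTS default_optimizer_costs= { 0.01, 0.0005 };

  int main()
  {
    // 1. Start from the global defaults (plus engine-less command line
    //    changes).
    OPTIMIZER_COSTS engine_costs= default_optimizer_costs;

    // 2. Let the engine override its defaults (stand-in for
    //    hton->update_optimizer_costs()).
    engine_costs.key_lookup_cost= 0.0004;

    // 3. Apply user-specified per-engine overrides, e.g. from
    //    SET @@global.engine.optimizer_cost_variable= ...
    engine_costs.disk_read_cost= 0.008;

    // 4. On handler open the costs are copied to the TABLE_SHARE, and
    //    handler::update_optimizer_costs() may tune them per table.
    OPTIMIZER_COSTS share_costs= engine_costs;

    printf("disk_read=%g key_lookup=%g\n",
           share_costs.disk_read_cost, share_costs.key_lookup_cost);
    return 0;
  }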
This bug affected queries with nested left joins having the same last
inner table, such that not_exists optimization could be applied to the
innermost outer join when the optimizer chose to use join buffers. The
bug could lead to producing a wrong result set.
If the WHERE condition of a query contains a conjunctive IS NULL
predicate over a non-nullable column of an inner table of a non-nested
outer join, then not_exists optimization can be applied to the outer
join. With this optimization, when looking for matches for a certain
record from the outer table of the join, the records of the inner table
can be ignored right after the first match satisfying the ON condition
is found.
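A hedged sketch of this shortcut in isolation (plain C++, all names
illustrative; the real join-buffer code is far more involved):

  #include <cstdio>
  #include <vector>

  struct Row { int key; };

  int main()
  {
    std::vector<Row> outer= {{1}, {2}, {3}};
    std::vector<Row> inner= {{2}, {2}, {3}};

    for (const Row &o : outer)
    {
      bool matched= false;
      for (const Row &i : inner)
      {
        if (o.key == i.key)  // ON condition satisfied
        {
          matched= true;
          break;             // not_exists: further matches are useless
        }
      }
      // When the IS NULL predicate is over a non-nullable inner column,
      // only null-complemented (non-matching) rows can satisfy it.
      if (!matched)
        printf("outer %d, inner NULL\n", o.key);
    }
    return 0;
  }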
In the case of nested outer joins having the same last inner table,
this optimization can still be applied, but only if all ON conditions
of the embedding outer joins are satisfied. Such a check was missing in
the code that tried to apply not_exists optimization when join buffers
were used for outer join operations.
This problem had already been fixed by the patch for bug MDEV-7992,
yet there it was resolved only for the cases when join buffers were not
used for outer joins.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
by compound index
This typo bug may lead to wrong result sets for equi-join queries where
the join operation is supported by a compound index such that the order of
its components differs from the order of the corresponding columns in
the table the index belongs to. The bug manifests itself only when usage
of the BNLH algorithm is forced.
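A hedged standalone sketch of the pitfall (all names invented, not the
server's code): a hash key built over a compound index must copy key
parts in the index's component order, which may differ from the column
order in the table:

  #include <cstdio>
  #include <string>

  struct Row { int a; int b; };                // table column order: a, b
  static const int index_parts[2]= { 1, 0 };   // compound index on (b, a)

  // Build the lookup key in index order, not table order; mixing the
  // two up is exactly the kind of typo this fix addresses.
  static std::string make_key(const Row &r)
  {
    std::string key;
    for (int part : index_parts)
    {
      int v= (part == 0) ? r.a : r.b;
      key.append((const char *) &v, sizeof v);
    }
    return key;
  }

  int main()
  {
    Row r= { 1, 2 };
    printf("key length= %zu bytes\n", make_key(r).size());
    return 0;
  }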
The fix for the bug was provided by Chu Huaxing.
This bug could affect multi-way join queries with embedded outer joins
that contained a conjunctive IS NULL predicate over a non-nullable
column from an inner table of an outer join. The predicate could occur
in the WHERE condition or in an ON condition. Due to this bug a wrong
result set could be returned by the query. The bug manifested itself
only when join buffers were employed for join operations.
The problem appeared because:
- the function JOIN_CACHE::get_match_flag_by_pos() did not always
return proper match flags for embedding outer joins stored together
with table rows put into a join buffer;
- the function JOIN_CACHE::join_matching_records() did not always
correctly determine that a row from the buffer could be skipped due
to the applied 'not_exists' optimization.
Example:
SELECT * FROM t1 LEFT JOIN ((t2 LEFT JOIN t3 ON c = d) JOIN t4) ON b = e
WHERE e IS NULL;
The patch introduces a new function that finds the match flag for a
record from a join buffer, specifying the buffer where this flag has to
be found. The function is called
JOIN_CACHE::get_match_flag_by_pos_from_join_buffer().
Now this function, rather than JOIN_CACHE::get_match_flag_by_pos(), is
used in JOIN_CACHE::skip_if_matched() to check whether a record from
the join buffer must be ignored when extending the record by null
complements.
Also the code of the function JOIN_CACHE::skip_if_not_needed_match()
has been changed. The function checks whether a record from the join
buffer may still produce some useful extensions.
Also some clarifying comments have been added.
Approved by monty@mariadb.com.
This follows up commits 94a520ddbe and 7c5519c12d.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
This patch fixes the following defects/bugs:
1. If the BKA[H] algorithm was used to join a table for which the
optimizer had decided to employ a rowid filter, the filter actually
was not built.
2. The patch for bug MDEV-21356 that added the code canceling the push
of a rowid filter into an engine for a table joined with employment of
BKA[H] and MRR was not quite correct for the InnoDB engine, because
this cancellation was done after the InnoDB code had already bound the
pushed filter to internal InnoDB structures.
This bug could happen when queries with nested outer joins were
executed employing join buffers. In such an execution, if the method
JOIN_CACHE::join_records() is called when a join buffer has become
full, the 'first_unmatched' field must not be cleaned up in the
JOIN_TAB structure to which the join cache with this buffer is
attached.
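A hedged sketch of the intended guard (flag and field names are
hypothetical; the real fix is in JOIN_CACHE::join_records()):

  #include <cstdio>

  struct JOIN_TAB_SKETCH { const char *first_unmatched; };

  // 'buffer_full' means join_records() was invoked only because the
  // join buffer overflowed; the processing of outer-join matches is not
  // finished, so 'first_unmatched' must survive for the next refill.
  static void join_records_sketch(JOIN_TAB_SKETCH *tab, bool buffer_full)
  {
    if (!buffer_full)
      tab->first_unmatched= nullptr;  // clean up only on the final call
  }

  int main()
  {
    JOIN_TAB_SKETCH tab= { "nested outer join state" };
    join_records_sketch(&tab, true);  // buffer full: state is preserved
    printf("%s\n", tab.first_unmatched ? tab.first_unmatched : "(cleared)");
    return 0;
  }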
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message when
check_killed() was called, which could lead to aborted queries without
the error being properly set. Fixed by setting a default error message
if check_error() noticed that 'killed' had been set.
This allowed me to remove a lot of calls to thd->send_kill_message().
Handle string length as size_t, consistently (almost always :))
Change function prototypes to accept size_t, where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
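A minimal illustration of the hazard (standalone; the prototypes are
sketches, not actual server signatures):

  #include <cstdio>
  #include <cstdint>
  #include <cstddef>

  // Old style: the length is squeezed through a 32-bit parameter.
  static void take_uint(const char *, uint32_t len)
  { printf("uint32 len= %u\n", len); }

  // New style: the length is kept as size_t end to end.
  static void take_size_t(const char *, size_t len)
  { printf("size_t len= %zu\n", len); }

  int main()
  {
    size_t len= 5000000000ULL;       // > UINT32_MAX on 64-bit platforms
    take_uint("s", (uint32_t) len);  // silently truncates
    take_size_t("s", len);           // preserved
    return 0;
  }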
* get_rec_bits() was always reading two bytes, even if the
bit field consisted of only one byte
* In various places the code used field->pack_length() bytes
starting from field->ptr, while it should be field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
an argument to memcmp(), but field_length is the number of bits!
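The third item as a standalone sketch (names illustrative):
field_length of a BIT column counts bits, so a byte-wise compare must
first convert bits to bytes:

  #include <cstdio>
  #include <cstring>

  int main()
  {
    unsigned char a[2]= { 0xff, 0x01 };
    unsigned char b[2]= { 0xff, 0x02 };
    unsigned int field_length= 9;          // BIT(9): 9 bits, 2 bytes

    size_t bytes= (field_length + 7) / 8;  // correct: 2 bytes
    // Buggy version: memcmp(a, b, field_length) would read 9 bytes,
    // far past the end of the 2-byte values.
    printf("cmp= %d\n", memcmp(a, b, bytes));
    return 0;
  }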
Most "new" failures fixed in the following files:
- sql_select.cc
- item.cc
- item_func.cc
- opt_subselect.cc
Other things:
- Allocate udf_handler strings in mem_root
- Required changes in sql_string.h
- Add mem_root as argument to some new [] calls
- Mark udf_handler strings as thread specific
- Removed some comment blocks with code
The issue was that r_loops, r_rows and r_filtered in ANALYZE
FORMAT=JSON were not calculated for the table on which we were
performing the MRR scan in the BKA join.
Fixed this by adding the respective counters in
JOIN_TAB_SCAN_MRR::open() and JOIN_TAB_SCAN_MRR::next().
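A hedged sketch of the counting (the tracker and method names are
simplified stand-ins for the actual JOIN_TAB_SCAN_MRR code):

  #include <cstdio>

  struct Tracker { unsigned long long r_loops= 0, r_rows= 0; };

  struct Mrr_scan_sketch
  {
    Tracker *tracker;
    int open() { tracker->r_loops++; return 0; }  // one per scan start
    int next() { tracker->r_rows++;  return 0; }  // one per fetched row
  };

  int main()
  {
    Tracker t;
    Mrr_scan_sketch scan= { &t };
    scan.open();
    for (int i= 0; i < 3; i++)
      scan.next();
    // r_filtered can then be derived from rows before/after filtering.
    printf("r_loops=%llu r_rows=%llu\n", t.r_loops, t.r_rows);
    return 0;
  }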
- Added the sql/mariadb.h file that should be included first by files
in the sql directory, if sql_plugin.h is not used (sql_plugin.h adds
SHOW variables that must be declared before my_global.h is included)
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced includes of my_config.h with my_global.h