Commit graph

206 commits

Author SHA1 Message Date
Sergei Petrunia
e8c7222ba3 MDEV-30603: Wrong result with non-default JOIN_CACHE_LEVEL=[4|5] ...
JOIN_CACHE::alloc_buffer() used incorrect logic when calculating the total
size of all join buffers allocated so far, and then used that total to
compute the ratio by which JOIN::shrink_join_buffers() should shrink the
buffers.

shrink_join_buffers() could then end up in a situation where the buffers
would not fit into the total quota even after shrinking, which resulted in
negative buffer sizes. Because buffer sizes are unsigned integers, this
caused extremely large buffers to be used instead.

Make JOIN_CACHE::alloc_buffer() use the same logic as
JOIN::shrink_join_buffers() when it calculates the total size of
all join buffers so far.

Also, add a safety check in JOIN::shrink_join_buffers().

This patch doesn't include a test case, because the original test dataset
is too big and fragile. We have dbt3_s001.inc, but I wasn't able to
demonstrate the issue with it.
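
Illustration (a minimal sketch with made-up names, not the actual
JOIN_CACHE code) of how a "negative" size wraps around with unsigned
arithmetic, and the kind of safety check described above:

  #include <cstddef>
  #include <cstdio>

  // Hypothetical shrink step: fit a wanted buffer size into what is left
  // of the total quota.  Without the guard, used_so_far > total_quota
  // makes (total_quota - used_so_far) wrap to a huge size_t value.
  static size_t shrink_to_quota(size_t total_quota, size_t used_so_far,
                                size_t wanted)
  {
    if (used_so_far >= total_quota)        // safety check: quota exhausted
      return 0;
    size_t remaining= total_quota - used_so_far;
    return wanted < remaining ? wanted : remaining;
  }

  int main()
  {
    printf("%zu\n", shrink_to_quota(1024, 2048, 512));  // 0, not ~1.8e19
    printf("%zu\n", shrink_to_quota(4096, 1024, 512));  // 512
    return 0;
  }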
2023-02-15 11:36:12 +03:00
Monty
3316a54db3 Code cleanups and add some caching of functions to speed up things
Detailed description:
- Added more function comments and fixed types in some old comments
- Removed an outdated comment
- Cleaned up some functions in records.cc
  - Replaced "while" with "if"
  - Reused error code
  - Made functions similar
- Added caching of pfs_batch_update()
- Simplified some rowid_filter code
  - Only call build_range_rowid_filter() if the rowid filter will be used
  - Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter.
    We now only have to test need_to_build_rowid_filter to know whether we
    have to build the filter; the old code needed two tests
  - Added the function 'clear_range_rowid_filter' to disable the rowid
    filter. This made things simpler, as all rowid filter variables are now
    cleared in one place (see the sketch after this list)
- Removed some 'if' in sub_select()
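
A rough sketch of the flag change above (need_to_build_rowid_filter and
clear_range_rowid_filter are names from the commit; the surrounding
structure is made up for illustration):

  // Before (as described): two tests were needed to decide whether to
  // build the filter:
  //   if (tab->rowid_filter && !tab->is_rowid_filter_built) ...
  // After: one flag answers the question, and one helper clears
  // everything when the filter is dropped.
  struct JoinTabSketch                    // stand-in for JOIN_TAB
  {
    void *rowid_filter= nullptr;
    bool  need_to_build_rowid_filter= false;

    void clear_range_rowid_filter()       // disable the rowid filter
    {
      rowid_filter= nullptr;              // all filter state reset here
      need_to_build_rowid_filter= false;
    }
  };

  void maybe_build_filter(JoinTabSketch *tab)
  {
    if (tab->need_to_build_rowid_filter)  // single test
    {
      /* build_range_rowid_filter(tab); */
      tab->need_to_build_rowid_filter= false;
    }
  }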
2023-02-10 12:59:36 +02:00
Monty
b66cdbd1ea Changing all cost calculation to be given in milliseconds
This makes it easier to compare different costs and also allows
the optimizer to optimize for different storage engines more reliably.

- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
  - Most engine costs have been found with this program. All steps to
    calculate the new costs are documented in Docs/optimizer_costs.txt

- User optimizer_cost variables are given in microseconds (as individual
  costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard-disk cost
  (9 ms) to a common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for IO and CPU costs (see the sketch below).
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and an optional number of blocks
  that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
  thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
  Used heap table costs for json_table. The rest are using default engine
  costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
  recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h.
  This allows one to change costs without having to recompile a lot of
  files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
  for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
  using 'records_out' (min rows seen for table) to calculate filtering.
  This greatly simplifies the filtering code in
  JOIN_TAB::save_explain_data().
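
For orientation, a back-of-the-envelope sketch of the units above: the
io/cpu split and the SSD-based DISK_READ_COST come from this commit, but
the struct layout and the combining arithmetic below are only illustrative
assumptions, not the server's code.

  #include <cstdio>

  struct io_and_cpu_cost { double io; double cpu; };  // both in milliseconds

  int main()
  {
    // A 400MB/sec SSD reads one 4KB block in 4096 / (400 * 1024 * 1024)
    // seconds, i.e. roughly 0.01 ms.
    double disk_read_cost= 4096.0 / (400.0 * 1024 * 1024) * 1000;  // ms/block

    // Illustrative scan estimate: io part from blocks read, cpu part from
    // rows handled; the split lets an engine scale each part separately.
    io_and_cpu_cost scan= { 1000 * disk_read_cost, 100000 * 0.0001 };
    printf("DISK_READ_COST ~ %.4f ms/block, scan: io=%.2f cpu=%.2f total=%.2f ms\n",
           disk_read_cost, scan.io, scan.cpu, scan.io + scan.cpu);
    return 0;
  }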

This change caused a lot of queries to be optimized differently than
before, which exposed various issues in the optimizer that need to
be fixed.  These fixes are in the following commits.  To avoid having to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.

InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
  prevents the optimizer from trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
  ha_innobase::rnd_pos_time(), as the default engine cost functions now
  work well for InnoDB.

Other things:
- Added the --show-query-costs (\Q) option to mysql.cc to show the query
  cost after each query (useful when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
  the value that the user has given. This is used to convert costs from
  microseconds (user input) to milliseconds (what the server uses
  internally); see the sketch below.
- Added include/my_tracker.h, a useful include file for quickly testing
  the cost of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are entered and
  shown in microseconds for the user but stored as milliseconds.
  This makes the numbers easier for the user to read (fewer leading
  zeros).  Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
  for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
  check_index_intersect_extension() and similar functions to be able
  to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
  to calculate the space used by keys in B-trees (before we used numeric
  constants).
- Removed code that assumed that B-trees have costs similar to binary
  trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path()
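
A minimal sketch of the microsecond/millisecond handling described in the
GET_ADJUSTED_VALUE and SHOW_OPTIMIZER_COSTS items above (the factor of 1000
is from the commit; the helper names and the example values are made up):

  #include <cstdio>

  // Users enter and see optimizer costs in microseconds; the server keeps
  // them in milliseconds.  Hypothetical helpers for the two directions:
  static double cost_from_user(double microseconds) { return microseconds / 1000.0; }
  static double cost_to_user(double milliseconds)   { return milliseconds * 1000.0; }

  int main()
  {
    // e.g. SET @@global.innodb.optimizer_disk_read_cost= 10.24;  (microseconds)
    double stored= cost_from_user(10.24);             // 0.01024 ms internally
    printf("stored %.5f ms, shown %.2f us\n", stored, cost_to_user(stored));
    return 0;
  }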

Bug fixes:
- Some queries did not update last_query_cost (it stayed 0). Fixed by
  moving the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.

Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure.  When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not-yet-loaded handlerton, either with command
  line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the
                            command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables
  tmp_table_optimizer_costs The cost for the default on disk internal
                            temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
  access, the handler has a pointer to this. The cost is copied
  to the table on first access. If one wants to change the cost, one
  must first update the global engine cost and then do a FLUSH TABLES.
  This was done so that the costs of an open table can be accessed
  without any locks.
- When a handlerton is created, the costs are updated in the following way
  (see sql/keycaches.cc for details; a sketch of this flow follows the
  list):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from the handlerton to the
    TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update
    the cost for this particular table.
  - There are two costs stored in THD. These are copied to the handler
    when the table is used in a query:
    - optimizer_where_cost
    - optimizer_scan_setup_cost
- Simplify the code in best_access_path() by storing all cost results in
  a structure. (Idea/suggestion by Igor)
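
The layering under "Some internals" might be sketched roughly like this;
all of the types and bodies below are stand-ins and only the order of the
steps follows the commit text (the real logic lives in sql/keycaches.cc).

  #include <map>
  #include <string>

  struct optimizer_costs_sketch          // stand-in for OPTIMIZER_COSTS
  {
    double disk_read_cost, key_lookup_cost, row_lookup_cost;
  };

  optimizer_costs_sketch costs_for_engine(
      const optimizer_costs_sketch &defaults,               // default_optimizer_costs
      void (*engine_defaults)(optimizer_costs_sketch*),     // hton->update_optimizer_costs()
      const std::map<std::string, double> &user_overrides)  // SET / command line
  {
    optimizer_costs_sketch c= defaults;          // 1. start from the defaults
    if (engine_defaults)
      engine_defaults(&c);                       // 2. engine's own default costs
    for (const auto &ov : user_overrides)        // 3. user-specified costs win
    {
      if (ov.first == "disk_read_cost")       c.disk_read_cost= ov.second;
      else if (ov.first == "key_lookup_cost") c.key_lookup_cost= ov.second;
      else if (ov.first == "row_lookup_cost") c.row_lookup_cost= ov.second;
    }
    // 4. On handler open the result is copied to the TABLE_SHARE, and
    // 5. handler::update_optimizer_costs() may tune it per table; changing
    //    globals afterwards requires FLUSH TABLES to take effect.
    return c;
  }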
2023-02-02 23:54:45 +03:00
Monty
4062fc28bd Optimizer code cleanups, no logic changes
- Updated comments
- Added some extra DEBUG
- Indentation changes and break long lines
- Trivial code changes like:
  - Combining 2 statements in one
  - Reorder DBUG lines
  - Use a variable to store a pointer that is used multiple times
- Moved declaration of variables to start of loop/function
- Removed dead or commented code
- Removed wrong DBUG_EXECUTE code in best_extension_by_limited_search()
2023-01-30 15:22:21 +02:00
Marko Mäkelä
73ecab3d26 Merge 10.4 into 10.5 2023-01-13 10:18:30 +02:00
Sergei Golubchik
fdcfc25127 Merge branch '10.3' into 10.4 2023-01-10 21:04:17 +01:00
Igor Babaev
b21832ef15 MDEV-27624 Wrong result for nested left join using not_exists optimization
This bug affected queries with nested left joins sharing the same last
inner table, such that the not_exists optimization could be applied to the
innermost outer join when the optimizer chose to use join buffers. The bug
could lead to a wrong result set being produced.
If the WHERE condition of a query contains a conjunctive IS NULL predicate
over a non-nullable column of an inner table of a non-nested outer join,
then the not_exists optimization can be applied to the outer join. With
this optimization, when looking for matches for a certain record from the
outer table of the join, the records of the inner table can be ignored
right after the first match satisfying the ON condition is found.
In the case of nested outer joins having the same last inner table this
optimization can still be applied, but only if all ON conditions of the
embedding outer joins are satisfied. Such a check was missing in the code
that tried to apply the not_exists optimization when join buffers were used
for outer join operations.
This problem had already been fixed in the patch for bug MDEV-7992, yet
there it was resolved only for the cases when join buffers were not used
for outer joins.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
2023-01-06 14:20:42 -08:00
Marko Mäkelä
80459bcbd4 Merge 10.4 into 10.5 2021-03-27 17:37:42 +02:00
Marko Mäkelä
7ae37ff74f Merge 10.3 into 10.4 2021-03-27 17:12:28 +02:00
Marko Mäkelä
3157fa182a Merge 10.2 into 10.3 2021-03-27 16:11:26 +02:00
Igor Babaev
8f7a6cde58 MDEV-24767 Wrong result when forced BNLH is used for join supported
by compound index

This typo bug may lead to wrong result sets for equi-join queries where
the join operation is supported by a compound index such that the order of
its components differs from the order of the corresponding columns in
the table the index belongs to. The bug manifests itself only when usage
of the BNLH algorithm is forced.

The fix for the bug was provided by Chu Huaxing.
2021-03-22 22:04:54 -07:00
Marko Mäkelä
be881ec457 Merge 10.4 into 10.5 2021-03-19 13:09:21 +02:00
Marko Mäkelä
44d70c01f0 Merge 10.3 into 10.4 2021-03-19 11:42:44 +02:00
Marko Mäkelä
19052b6deb Merge 10.2 into 10.3 2021-03-18 12:34:48 +02:00
Igor Babaev
90780bb5a9 MDEV-21104 Wrong result (extra rows and wrong values) with incremental BNLH
This bug could affect multi-way join queries with embedded outer joins
that contained a conjunctive IS NULL predicate over a non-nullable column
from an inner table of an outer join. The predicate could occur in a WHERE
condition or in an ON condition. Due to this bug, a wrong result set could
be returned by the query. The bug manifested itself only when join buffers
were employed for join operations.

The problem appeared because of
- a bug in the function JOIN_CACHE::get_match_flag_by_pos that did not
  always return proper match flags for embedding outer joins stored
  together with table rows put into a join buffer;
- a bug in the function JOIN_CACHE::join_matching_records that did not
  always correctly determine that a row from the buffer could be skipped
  due to the applied 'not_exists' optimization.
Example:
  SELECT * FROM t1 LEFT JOIN ((t2 LEFT JOIN t3 ON c = d) JOIN t4) ON b = e
    WHERE e IS NULL;

The patch introduces a new function that finds the match flag for a record
from a join buffer, specifying the buffer where this flag has to be found.
The function is called JOIN_CACHE::get_match_flag_by_pos_from_join_buffer().
Now this function, rather than JOIN_CACHE::get_match_flag_by_pos(), is used
in JOIN_CACHE::skip_if_matched() to check whether a record from the join
buffer must be ignored when extending the record by null complements.
The code of the function JOIN_CACHE::skip_if_not_needed_match() has also
been changed; the function checks whether a record from the join buffer
may still produce some useful extensions.
Some clarifying comments have been added as well.

Approved by monty@mariadb.com.
2021-03-10 17:28:40 -08:00
Marko Mäkelä
133b4b46fe Merge 10.4 into 10.5 2020-11-03 16:24:47 +02:00
Marko Mäkelä
533a13af06 Merge 10.3 into 10.4 2020-11-03 14:49:17 +02:00
Marko Mäkelä
c7f322c91f Merge 10.2 into 10.3 2020-11-02 15:48:47 +02:00
Marko Mäkelä
8036d0a359 MDEV-22387: Do not violate __attribute__((nonnull))
This follows up
commit 94a520ddbe and
commit 7c5519c12d.

After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
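
For context, a tiny standalone example of the class of issue
-fsanitize=undefined reports here (not taken from the MariaDB source):
passing a null pointer to a parameter declared nonnull is undefined
behaviour.

  // Compile with: g++ -fsanitize=undefined nonnull_demo.cc
  #include <cstring>
  #include <cstdio>

  __attribute__((nonnull))
  static size_t my_len(const char *s) { return strlen(s); }

  int main()
  {
    const char *p= nullptr;
    // UBSAN flags a call such as my_len(p) ("null pointer passed as
    // argument 1, which is declared to never be null"); guard it instead:
    printf("%zu\n", p ? my_len(p) : 0);
    return 0;
  }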
2020-11-02 14:19:21 +02:00
Oleksandr Byelkin
fad47df995 Merge branch '10.4' into 10.5 2020-03-11 17:52:49 +01:00
Sergei Golubchik
0d837e8153 perfschema table io instrumentation related changes 2020-03-10 19:24:23 +01:00
Sergei Golubchik
7c58e97bf6 perfschema memory related instrumentation changes 2020-03-10 19:24:22 +01:00
Igor Babaev
2fb881df1d MDEV-21610 Different query results from 10.4.11 to 10.4.12
This patch fixes the following defects/bugs.
1. If the BKA[H] algorithm was used to join a table for which the optimizer
had decided to employ a rowid filter, the filter actually was not built.
2. The patch for bug MDEV-21356, which added code canceling the pushing of
a rowid filter into an engine for a table joined with BKA[H] and MRR, was
not quite correct for the InnoDB engine, because this cancellation was done
after the InnoDB code had already bound the pushed filter to internal
InnoDB structures.
2020-02-18 22:51:07 -08:00
Faustin Lammler
2df2238cb8 Lintian complains about a spelling error
The Lintian check complains about a spelling error:
https://salsa.debian.org/mariadb-team/mariadb-10.3/-/jobs/95739
2019-12-02 12:41:13 +02:00
Oleksandr Byelkin
4a3d51c76c Merge branch '10.2' into 10.3 2019-06-14 07:36:47 +02:00
Oleksandr Byelkin
50653e021f Merge branch '10.1' into 10.2 2019-06-13 16:42:21 +02:00
Oleksandr Byelkin
5b65d61d93 Merge branch '5.5' into 10.1 2019-06-12 22:54:46 +02:00
Igor Babaev
eb09580b67 MDEV-19588 Wrong results from query, using left join.
This bug could happen when queries with nested outer joins were
executed using join buffers. In such an execution, if the method
JOIN_CACHE::join_records() is called when a join buffer has become
full, the 'first_unmatched' field must not be cleaned up in the JOIN_TAB
structure to which the join cache with this buffer is attached.
2019-05-28 14:53:08 -07:00
Marko Mäkelä
be85d3e61b Merge 10.2 into 10.3 2019-05-14 17:18:46 +03:00
Marko Mäkelä
26a14ee130 Merge 10.1 into 10.2 2019-05-13 17:54:04 +03:00
Vicențiu Ciorbaru
f177f125d4 Merge branch '5.5' into 10.1 2019-05-11 19:15:57 +03:00
Michal Schorm
17b4f99928 Update FSF address
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.

The command line used to generate this diff was:

find ./ -type f \
  -exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
  -exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
  -exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
  -exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
  -exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
  -exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
2019-05-10 20:52:00 +03:00
Monty
58721c3e38 MDEV-16286 Killed CREATE SEQUENCE leaves sequence in unusable state
Fixed by deleting the sequence if we were not able to initialize it.

I also noticed that we didn't always set the error message when
check_killed() was triggered, which could lead to aborted queries without
the error being properly set. Fixed by setting a default error message if
check_error() noticed that killed had been called.
This allowed me to remove a lot of calls to thd->send_kill_message().
2018-05-27 19:47:17 +03:00
Monty
30ebc3ee9e Add likely/unlikely to speed up execution
Added to:
- if (error)
- Lex
- sql_yacc.yy and sql_yacc_ora.yy
- In header files to alloc() calls
- Added thd argument to thd_net_is_killed()
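
A small illustration of the pattern (the macro definitions below are the
usual __builtin_expect idiom, given here as an assumption; MariaDB's own
headers define their variants):

  #define likely(x)   __builtin_expect(!!(x), 1)
  #define unlikely(x) __builtin_expect(!!(x), 0)

  static int do_step(int error)
  {
    if (unlikely(error))      // error paths are rarely taken, so hint the
      return 1;               // compiler to favour the fall-through path
    return 0;
  }

  int main() { return do_step(0); }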
2018-05-07 00:07:32 +03:00
Marko Mäkelä
b006d2ead4 Merge bb-10.2-ext into 10.3 2018-02-15 10:22:03 +02:00
Sergei Golubchik
4771ae4b22 Merge branch 'github/10.1' into 10.2 2018-02-06 14:50:50 +01:00
Vladislav Vaintroub
6c279ad6a7 MDEV-15091 : Windows, 64bit: reenable and fix warning C4267 (conversion from 'size_t' to 'type', possible loss of data)
Handle string length as size_t, consistently (almost always:))
Change function prototypes to accept size_t, where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.

This fix excludes rocksdb, spider, sphinx and connect for now.
2018-02-06 12:55:58 +00:00
Sergei Golubchik
d4df7bc9b1 Merge branch 'github/10.0' into 10.1 2018-02-02 10:09:44 +01:00
Vicențiu Ciorbaru
d833bb65d5 Merge remote-tracking branch '5.5' into 10.0 2018-01-24 12:29:31 +02:00
Sergei Golubchik
444587d8a3 BIT field woes
* get_rec_bits() was always reading two bytes, even if the
  bit field consisted of only one byte
* In various places the code used field->pack_length() bytes
  starting from field->ptr, while it should have used
  field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
  an argument to memcmp(), but field_length is the number of bits!
  (See the sketch below.)
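
A sketch of the bits-versus-bytes mix-up behind the memcmp() item
(hypothetical code; only the rounding arithmetic is the point):

  #include <cstddef>
  #include <cstring>
  #include <cstdio>

  // field_length of a BIT(n) column is n *bits*; memcmp() wants bytes.
  static int cmp_bit_values(const unsigned char *a, const unsigned char *b,
                            unsigned field_length_bits)
  {
    size_t bytes= (field_length_bits + 7) / 8;   // round bits up to bytes
    return memcmp(a, b, bytes);                  // not field_length_bits bytes!
  }

  int main()
  {
    unsigned char x[2]= {0x81, 0x00}, y[2]= {0x81, 0x00};
    printf("%d\n", cmp_bit_values(x, y, 9));     // BIT(9): compares 2 bytes -> 0
    return 0;
  }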
2018-01-16 23:29:48 +01:00
Michael Widenius
87933d5261 Handle failures from malloc
Most "new" failures fixed in the following files:
- sql_select.cc
- item.cc
- item_func.cc
- opt_subselect.cc

Other things:
- Allocate udf_handler strings in mem_root
  - Required changes in sql_string.h
- Add mem_root as argument to some new [] calls
- Mark udf_handler strings as thread specific
- Removed some comment blocks with code
2017-11-17 07:30:05 +02:00
Alexander Barkov
835cbbcc7b Merge remote-tracking branch 'origin/bb-10.2-ext' into 10.3
TODO: enable MDEV-13049 optimization for 10.3
2017-10-30 20:47:39 +04:00
Vladislav Vaintroub
dc93ce8dea Windows : Fix truncation warnings in sql/ 2017-10-10 06:19:50 +00:00
Marko Mäkelä
2c1067166d Merge bb-10.2-ext into 10.3 2017-10-04 08:24:06 +03:00
Vladislav Vaintroub
7354dc6773 MDEV-13384 - misc Windows warnings fixed 2017-09-28 17:20:46 +00:00
Marko Mäkelä
4a32e2395e Merge bb-10.2-ext into 10.3 2017-09-25 22:05:56 +03:00
Marko Mäkelä
7dcb8816a1 Merge 10.1 into 10.2 2017-09-25 13:46:54 +03:00
Varun Gupta
f91eb71e1c MDEV-8840: ANALYZE FORMAT=JSON produces wrong data with BKA
The issue was that r_loops, r_rows and r_filtered in ANALYZE FORMAT=JSON
were not calculated for the table on which we were performing the MRR scan
in the BKA join. Fixed this by adding the respective counters in
JOIN_TAB_SCAN_MRR::open() and JOIN_TAB_SCAN_MRR::next().
2017-09-24 23:37:57 +05:30
Eugene Kosov
5dd8e1bf2d simplify READ_RECORD usage NFC
READ_RECORD read_record;
...
// this
// read_record.read_record(&read_record);
// becomes just
read_record.read_record();
2017-08-31 13:46:30 +04:00
Michael Widenius
4aaa38d26e Ensure that my_global.h is included first
- Added the sql/mariadb.h file that should be included first by files in
  the sql directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW
  variables that must be done before my_global.h is included)
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced includes of my_config.h with my_global.h
2017-08-24 01:05:44 +02:00