JOIN_CACHE::alloc_buffer() used wrong logic when calculating the total
size of all join buffers allocated so far, and then used that total to
compute the ratio by which JOIN::shrink_join_buffers() should shrink
the buffers.
shrink_join_buffers() could then end up in a situation where the
buffers still did not fit into the total quota after shrinking, which
resulted in negative buffer sizes. Due to the use of unsigned integers,
this caused very large buffers to be used instead.
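For illustration, the unsigned wrap-around in isolation (a standalone
sketch, not the actual server code):

  size_t quota=  8U*1024*1024;    /* total join buffer quota           */
  size_t wanted= 9U*1024*1024;    /* sum of buffer sizes so far        */
  size_t shrunk= quota - wanted;  /* "negative" result wraps around... */
  /* ...to ~1.8e19 on 64-bit, so a huge buffer size is used instead.   */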
Make JOIN_CACHE::alloc_buffer() use the same logic as
JOIN::shrink_join_buffers() when it calculates the total size of
all join buffers so far.
Also, add a safety check in JOIN::shrink_join_buffers().
This patch doesn't include a testcase, because the original test dataset
is too big and fragile. We have dbt3_s001.inc but I wasn't able to demonstrate
the issue with it.
Do not compile the wsrep_provider plugin if WITH_WSREP is not enabled.
We should not enable the wsrep_provider plugin if WSREP_ON=OFF; in
that case we can only print a note that the plugin 'wsrep-provider'
is disabled.
Make sure tests require Galera library 26.4.14 if needed.
- Provider options are read from the provider during
startup, before plugins are initialized.
- New wsrep_provider plugin for which sysvars are generated
dynamically from options read from the provider.
- The plugin is enabled by option plugin-wsrep-provider=ON.
If enabled, wsrep_provider_options can no longer be used
(an error is raised on attempts to do so).
- Each option is either a string, an integer, a double or a bool
(see the sketch after this list)
- Options can be dynamic or read-only
- Options can be deprecated
Limitations:
- We do not check that the value of a provider option falls
within a certain range. This type of validation is still
done on the Galera side.
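For illustration, a provider option as described above could be modeled
like this (hypothetical sketch, not the plugin's actual data structures):

  enum opt_type { OPT_STRING, OPT_INTEGER, OPT_DOUBLE, OPT_BOOL };

  struct provider_option
  {
    const char *name;    /* option name as read from the provider  */
    enum opt_type type;  /* string, integer, double or bool        */
    bool dynamic;        /* changeable at runtime, else read-only  */
    bool deprecated;     /* marked deprecated by the provider      */
  };
  /* During startup, one sysvar is generated dynamically per option. */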
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
- During a non-last batch of multi-batch recovery, InnoDB holds
log_sys.mutex and preallocates the block, which may initiate a
page flush, which may initiate a log flush, which requires acquiring
log_sys.mutex again. This leads to an assertion failure. So InnoDB
recovery should release log_sys.mutex before preallocating the block.
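The fix pattern, in outline (simplified; not the actual InnoDB code):

  mysql_mutex_unlock(&log_sys.mutex);
  block= buf_block_alloc();   /* may trigger page flush -> log flush */
  mysql_mutex_lock(&log_sys.mutex);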
Task:
=====
Update tests to reflect MDEV-20122, the deprecation of
master_use_gtid=current_pos.
CHANGE MASTER (CM) statements were either removed or modified
(current_pos --> slave_pos) based on the original intention of the test.
Reviewed by:
============
Brandon Nesterenko <brandon.nesterenko@mariadb.com>
For both Deb and RPM, create mariadb-client-compat and
mariadb-server-compat packages containing the mysql-named links to the
mariadb-named executables/scripts.
The mariadb-client-core mysqlcheck was moved to mariadb-client-compat.
The symlinks in MYSQL_ADD_EXECUTABLE are tagged as a
{Client,Server}Symlinks component and placed in
the symlinks packages.
Man pages are restructured to be installed into the compat package
if that matches the executable.
Columnstore has a workaround, as it doesn't use cmake/plugin.cmake.
Scripts likewise have compatibility symlinks in
the {server,client}-compat packages.
Co-author: Andrew Hutchings <andrew@linuxjedi.co.uk>
Closes #2390
* move them from ManPagesX component to X (works better for plugins),
but keep ManPagesDevelopment as C/C is using it
* move backup manpages to Backup
* move plugin manpages (s3, rocksdb) to plugins
DuplicateWeedout semi-join optimization requires that the tables in
the parent subquery provide rowids that can be compared across table
scans. Most engines support this, federated is the only exception.
DuplicateWeedout is the default catch-all semi-join strategy, which
must be always available. If it is not available for some edge case,
it's better to disable semi-join conversion altogether.
This is what was done in the fix for MDEV-30395. However, that fix
put the check before view processing, so it didn't detect
federated tables inside mergeable VIEWs.
This patch moves the check to be done at a later phase, when mergeable
views are already merged.
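A minimal sketch of the relocated check (simplified; assumes an engine
flag like HA_NON_COMPARABLE_ROWID marks engines whose rowids cannot be
compared across scans):

  /* Walk the leaf tables only after mergeable views were merged. */
  for (TABLE_LIST *tl= leaves; tl; tl= tl->next_leaf)
  {
    if (tl->table->file->ha_table_flags() & HA_NON_COMPARABLE_ROWID)
      return false;   /* disable semi-join conversion altogether */
  }
  return true;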
Make get_best_group_min_max() exit early if the table has
table->records()=0. Attempting to compute loose scan over 0
groups eventually causes an assert when trying to get the
cost of reading 0 ranges.
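The early exit amounts to (sketch; assuming a row-count accessor such
as stat_records(), and the exact placement inside
get_best_group_min_max() differs):

  if (table->stat_records() == 0)   /* empty table: no groups at all */
    DBUG_RETURN(NULL);              /* loose scan is not applicable  */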
Extended keys work by first checking if the engine supports extended
keys.
If yes, it extends the secondary keys with primary key components and
marks the secondary keys as HA_EXT_NOSAME (unique).
If we later notice that there was no primary key, the extended key
information for secondary keys in share->key_info is reset. However,
the HA_EXT_NOSAME flag in key_info was not reset (sketched below)!
This caused some strange things to happen:
- Tables that had no primary key, or a secondary index that contained
the primary key, could be wrongly optimized, as a secondary key could
be thought to be unique when it was not, and not unique when it was.
- The problem was not shown in EXPLAIN because of a bug in
create_ref_for_key() that caused EQ_REF to be displayed by EXPLAIN as
REF when extended keys were used and the secondary key contained the
primary key.
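A sketch of the invariant the fix restores (simplified; uses the KEY
fields ext_key_parts and ext_key_flags for illustration):

  /* No usable primary key: undo the extension AND the unique flag. */
  key->ext_key_parts=  key->user_defined_key_parts;
  key->ext_key_flags&= ~HA_EXT_NOSAME;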
This is fixed with:
- Removed a wrong test in make_join_select() which did not detect that
a key was unique when the secondary key contains the primary key.
- Moved initialization of extended keys from create_key_infos() to
init_from_binary_frm_image(), after we know if there is a usable primary
key or not. One disadvantage with this approach is that
key_info->key_parts may have unused slots (for keys we thought could
be extended but could not). Fixed by adding a check for unused key_parts
to copy_keys_from_share().
Other things:
- Simplified copying of first key part in create_key_infos().
- Added a lot of code comments in code that I had to check as part of
finding the issue.
- Fixed some indentation.
- Replaced a couple of loops using references to pointers in
contexts where the reference does not give any benefit.
- Updated Aria and Maria to not assume that all key_info->rec_per_key
values are in one memory block (this could happen when using derived
tables with many keys).
- Fixed a bug where key_info->rec_per_key was not allocated.
- Optimized TABLE::add_tmp_key() to only call alloc() once.
(No logic changes)
Test case changes:
- innodb_mysql.test: changed an index, as an index the optimizer
thought was unique was not (the table had no primary key).
TODO:
- Move the code that checks for partial or too-long keys to the earlier
primary loop that initially decides if we should add extended key fields.
This is needed to ensure that HA_EXT_NOSAME is not set for partial or
too-long keys. It will also shorten the current code notably.
Some tables were not eliminated when they could have been.
This was caused by HA_KEYREAD_ONLY not being set anymore for the InnoDB
clustered index, while the elimination code depended on
field->part_of_key_not_clustered, which was not set if HA_KEYREAD_ONLY
was not present.
Fixed by moving the setting of field->part_of_key and
field->part_of_key_not_clustered out from under HA_KEYREAD_ONLY (which
they should never have been part of).
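Conceptually the change is (condensed and illustrative;
key_is_clustered_primary is a made-up name):

  /* Before: only filled in under HA_KEYREAD_ONLY.                   */
  /* After: always filled in, so table elimination can rely on them. */
  field->part_of_key.set_bit(key_nr);
  if (!key_is_clustered_primary)
    field->part_of_key_not_clustered.set_bit(key_nr);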
Other things:
- Fixed a bug in make_join_select() that caused range to be used when
there were eliminated or constant tables present (this caused a wrong
change of plans in join_outer_innodb.test). It also affected
show_explain.test and subselect_sj_mat.test, where wrong 'range' plans
were replaced with index scans.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The original code was there to favor index search over table scan.
This is not needed anymore as the cost calculations for table scans
and index lookups are now more exact.
avoid contaminating my_getopt with sysvar implementation details.
adjust variable values after my_getopt, like it's done for others.
this fixes --help to show correct values.
matching_candidates_in_table() computes the number of rows one
gets from the current table after applying the WHERE clause on
just this table.
The function had a "found_constraint" heuristic which reduced the
number of rows after the WHERE check by 25% if there were comparisons
between key parts in table T and previous tables, like WHERE
T.keyXpartY= func(prev_table.cols)
Note that such comparisons can only be checked when the row of
table T is joined with rows of the previous tables. It is wrong
to apply the selectivity before the join operation.
Fixed by moving the 'found_constraint' code to a separate function
and only reducing the #rows in 'records_out'.
Renamed matching_candidates_in_table() to apply_selectivity_for_table() as
the function now either applies selectivity on the rows (depending
on the value of thd->variables.optimizer_use_condition_selectivity)
or uses the selectivity from the available range conditions.
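In sketch form (simplified names and signature):

  double rows= apply_selectivity_for_table(join, idx); /* WHERE on T only */
  /* The cross-table comparison can only filter rows once T is joined    */
  /* with the previous tables, so only records_out is reduced:           */
  if (found_constraint)           /* T.keyXpartY= func(prev_table.cols)  */
    records_out= rows * 0.75;     /* the 25% heuristic                   */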
The reason things fail in 10.5 and above is that test_quick_select()
returns -1 (impossible range) for empty tables if there are any
conditions attached.
This didn't happen in 10.4 as the cost for a range was more than for
a table scan with 0 rows and get_key_scan_params() did not create any
range plans and thus did not mark the range as impossible.
The code that checked the 'impossible range' conditions did not take
into account all cases of LEFT JOIN usage.
Adding an extra check if the table is used with an ON condition in case
of 'impossible range' fixes the issue.
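The extra check, in outline (illustrative; mark_table_as_empty_const is
a made-up helper name):

  /* An impossible range on the inner table of a LEFT JOIN must not    */
  /* make the result empty: NULL-complemented rows are still produced. */
  if (impossible_range && !table->pos_in_table_list->on_expr)
    mark_table_as_empty_const(join, table);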
Detailed description:
- Added more function comments and fixed types in some old comments
- Removed an outdated comment
- Cleaned up some functions in records.cc
- Replaced "while" with "if"
- Reused error code
- Made functions similar
- Added caching of pfs_batch_update()
- Simplified some rowid_filter code
- Only call build_range_rowid_filter() if rowid filter will be used
- Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter.
We only have to test need_to_build_rowid_filter to know if we have
to build the filter; the old code needed two tests.
- Added function 'clear_range_rowid_filter' to disable the rowid filter
(sketched after this list). This made things simpler, as we can now
clear all rowid filter variables in one place.
- Removed some 'if' in sub_select()
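Hypothetical shape of the new helper (member names follow the
description above; the actual code may differ):

  void JOIN_TAB::clear_range_rowid_filter()
  {
    delete rowid_filter;                /* free the filter, if any */
    rowid_filter= NULL;
    range_rowid_filter_info= NULL;
    need_to_build_rowid_filter= false;  /* single flag, single test */
  }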
The problem was that make_join_select() called test_quick_select()
outside of best_access_path(). This could use indexes that were not
taken into account before, which caused changes to selectivity and
'records_out'.
Fixed by updating records_out if test_quick_select() was called.
The assert was there to check that engines report sensible numbers
for IO. However, this does not work in the case of
optimizer_disk_read_ratio=0.
Fixed by removing the assert.
The bug was some old code that, without any explanation, reset
PART_KEY_FLAG from fields in temporary tables. This caused
join_tab->key_dependent to not be updated properly, which caused
an assert.
Added comments that unused keys of derived tables will be deleted.
Added some comments about checking if pos_in_table_list is 0.
Other things:
- Added a marker (DBTYPE_IN_PREDICATE) in TABLE_LIST->derived_type
to indicate that the table was generated from IN (list). This is
useful for debugging and can later be used by explain if needed.
- Removed an unneeded test of table->pos_in_table_list, as it should
always be valid at this point in time.
The problem was an assignment in test_quick_select() that flagged empty
tables with "Impossible where". This test was however wrong, as it
didn't work correctly for LEFT JOIN.
Removed the test, but added checking of empty tables in DELETE and UPDATE
to get similar EXPLAIN output as before.
The new test is a bit more strict (better) than before, as it catches all
cases of empty tables in single-table DELETE/UPDATE.
Fixes also:
MDEV-30104 Server crashes in handler_rowid_filter_check upon ANALYZE TABLE
cancel_pushed_rowid_filter() didn't inform the handler that the rowid
filter was canceled.
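The fix, in outline (sketch; assumes the handler exposes a call to
cancel a pushed rowid filter):

  void cancel_pushed_rowid_filter(JOIN_TAB *tab)
  {
    tab->rowid_filter= NULL;                        /* clear optimizer state */
    tab->table->file->cancel_pushed_rowid_filter(); /* inform the handler    */
  }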