Re-enable main.mysql_client_test on all builders, because
at the moment we do not run any --big-test on buildbot
due to resource constraints.
A number of tests were declared big in
commit eeee1832d7
in an attempt to save resources on buildbot.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
Also fixed get_sweep_read_cost() for HEAP tables.
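A hedged sketch of the kind of cost override involved (names, simplified
signatures and all constants are invented for illustration; the real
methods live on the server's handler class): a block-oriented default
prices key reads like disk block accesses, while an in-memory engine
should price them as cheap per-row lookups.

    #include <cstdio>

    struct handler_sketch
    {
      virtual ~handler_sketch() {}
      // Block-oriented default: key reads priced like block I/O
      // (constants purely illustrative).
      virtual double keyread_time(unsigned ranges, double rows) const
      {
        const double rows_per_block= 10;
        return ranges + rows / rows_per_block;
      }
    };

    struct ha_heap_sketch : public handler_sketch
    {
      // In-memory btree: each key read is a cheap pointer chase, so the
      // estimate scales with a small per-row cost instead of blocks.
      double keyread_time(unsigned ranges, double rows) const override
      {
        const double cost_per_row= 0.01;
        return ranges + rows * cost_per_row;
      }
    };

    int main()
    {
      handler_sketch block_based;
      ha_heap_sketch heap;
      std::printf("default: %.2f heap: %.2f\n",
                  block_based.keyread_time(1, 1000),
                  heap.keyread_time(1, 1000));
    }

With the overstated default, a range over a btree index looked more
expensive than a table scan; the override restores sane range costs for
HEAP.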
- Moved testing of my_writer to inline functions to avoid function calls
- Made more functions inline. Especially thd->trace_started()
is now very optimized!
- Moved the Opt_trace_stmt class to opt_trace_context.h to get critical
functions inline
- Added unlikely() to optimize for optimizer trace not being enabled
(see the sketch after this list)
- Made THD::trace_started() inline
- Added 'if (trace_enabled())' around some potentially expensive code
(not many found)
- Added asserts to ensure we don't make expensive optimizer trace calls
if optimizer trace is not enabled
- Added length to Json_writer functions to speed up buffer writes
when optimizer trace is enabled.
- Changed LEX_CSTRING argument handling to not send the full struct to the
writer function; on_add_str() now trusts its length argument
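A minimal, self-contained sketch of the guard pattern referenced above
(the names here are illustrative stand-ins, not the server's code): an
inline, branch-predicted check makes a disabled trace cost only one
well-predicted branch.

    #include <cstdio>

    #if defined(__GNUC__) || defined(__clang__)
    #define unlikely(x) __builtin_expect(!!(x), 0)
    #else
    #define unlikely(x) (x)
    #endif

    struct Trace_context
    {
      bool enabled= false;        // flipped when optimizer trace is enabled
      bool trace_started() const { return enabled; }   // inline guard
    };

    static void expensive_trace_output(const char *key, long value)
    {
      // Stands in for the expensive part: buffer writes, quoting, etc.
      std::printf("\"%s\": %ld\n", key, value);
    }

    static void optimize_step(Trace_context *trace, long cost)
    {
      // Fast path: one predictable branch when tracing is off.
      if (unlikely(trace->trace_started()))
        expensive_trace_output("cost", cost);
    }

    int main()
    {
      Trace_context trace;
      optimize_step(&trace, 42);  // tracing off: near-zero overhead
      trace.enabled= true;
      optimize_step(&trace, 42);  // tracing on: expensive path runs
    }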
Stop masking the Data_free values, because innodb_file_per_table=1
is the default.
Also, do mask Update_time after updating tables, because for some
reason it does appear to matter.
Clean up install_layout to account for multi-arch setups and remove
redundant defines in debian/rules.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
Status variable Threads_connected can temporarily be bigger than
max_connections+1.
If SHOW STATUS LIKE "Threads_connected" comes after
ER_CON_COUNT_ERROR is sent to the client, but before the counter is
decremented, Threads_connected can differ from the expected value.
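A standalone model of the race window (illustrative only; the structure
and names are not the server's): the counter is incremented before the
limit check and decremented only after the error is sent, so a concurrent
reader can observe the transient overshoot.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    static std::atomic<int> threads_connected{0};
    static const int max_connections= 1;

    static void handle_connection()
    {
      ++threads_connected;          // counted before the limit check
      if (threads_connected.load() > max_connections + 1)
      {
        // The real server sends ER_CON_COUNT_ERROR here. A concurrent
        // SHOW STATUS in this window still sees the raised value.
      }
      --threads_connected;          // decremented only afterwards
    }

    int main()
    {
      std::thread a(handle_connection), b(handle_connection),
                  c(handle_connection);
      // Models SHOW STATUS LIKE "Threads_connected" racing with the above.
      std::printf("Threads_connected: %d\n", threads_connected.load());
      a.join(); b.join(); c.join();
    }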
ha_partition: Remove redundant 'virtual' keywords and add
missing 'override'.
FIXME: handler::table_type() is not declared virtual, yet ha_partition
and ha_sequence are seemingly trying to override it.
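A self-contained illustration of why the FIXME matters (generic names, not
the server's classes): 'override' only compiles against a virtual base
member, so a non-virtual table_type() cannot actually be overridden, only
hidden.

    struct handler_base
    {
      virtual ~handler_base() {}
      virtual int open() { return 0; }              // virtual: overridable
      const char *table_type() { return "BASE"; }   // NOT virtual
    };

    struct ha_derived : public handler_base
    {
      int open() override { return 1; }   // OK: base open() is virtual
      // const char *table_type() override { return "DERIVED"; }
      // ^ would not compile: the base table_type() is not virtual, so a
      //   same-named method merely hides it and is never reached through
      //   a handler_base pointer.
    };

    int main()
    {
      handler_base *h= new ha_derived;
      h->open();          // dynamic dispatch: calls ha_derived::open()
      h->table_type();    // static binding: always handler_base's version
      delete h;
    }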
- Flag ALTER_STORED_COLUMN_TYPE was set while doing varchar extension
for a partitioned table. If all partitions support
can_be_converted_by_engine(), the flag should instead be
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
added cmake checks for the pam_ext.h and pam_appl.h headers
added a check for pam_syslog()
added a pam_syslog() implementation for when it doesn't exist
(sketched below)
all cmake checks are performed from inside the plugin
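A sketch of what such a fallback can look like (an assumption about the
approach, not the plugin's exact code; HAVE_PAM_SYSLOG stands in for the
cmake-detected macro):

    #include <security/pam_appl.h>
    #include <syslog.h>
    #include <stdarg.h>

    #ifndef HAVE_PAM_SYSLOG
    /* Minimal replacement matching Linux-PAM's pam_syslog() signature:
       forward the formatted message to syslog(), ignoring pamh. */
    static void pam_syslog(const pam_handle_t *pamh, int priority,
                           const char *fmt, ...)
    {
      va_list args;
      (void) pamh;
      va_start(args, fmt);
      vsyslog(priority, fmt, args);
      va_end(args);
    }
    #endif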
join_cache_level=6+
The patch fixes two similar bugs in the commit 8eeb689e9f
that added multi_range_read support to partitions. The commit opened
a possibility to join a partitioned table using BKA+MRR. However in some
cases it could lead to wrong results or even crashes.
This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the 'not exists'
optimization was applied, or
- the joined table was the inner table of a semi-join and the first match
optimization was applied
The bugs were in the code of the call-back functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another function,
yet a wrong parameter was passed at this invocation.
The fix was suggested by Sergey Petrunia and it is apparently in line
with the original design.
The corresponding comprehensive test cases demonstrating the problems
caused by the bugs were constructed by me.
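A generic sketch of the bug pattern (all names hypothetical; the real
functions belong to the partitioning MRR code): in a one-line forwarding
callback, passing the wrong one of two similar objects compiles cleanly
but consults the wrong state.

    #include <cstdio>

    struct Handler
    {
      bool skip;                            // per-handler decision state
      bool skip_record() const { return skip; }
    };

    struct Partitioned
    {
      Handler top;                          // the partitioned wrapper
      Handler *current_part;                // partition being scanned

      // Forwarding callbacks: a single call each, like the fixed functions.
      bool skip_cb_buggy() { return top.skip_record(); }            // wrong object
      bool skip_cb_fixed() { return current_part->skip_record(); }  // right object
    };

    int main()
    {
      Handler part= { true };               // this partition says "skip"
      Partitioned p= { { false }, &part };
      std::printf("buggy: %d fixed: %d\n",
                  p.skip_cb_buggy(), p.skip_cb_fixed());
    }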
If an async replication slave thread conflicts with cluster replication,
the async slave transaction should be BF aborted and, depending on the
state of its execution, possibly also replayed.
There were problems in this BF abort implementation, and the replaying was
not started.
This pull request contains fixes which make sure that if the async slave
thread is marked to abort and replay, it will carry out the rollback fully
and release all locks and resources before starting the replaying. After
replaying, the async slave transaction is treated as successful, so the
slave thread will continue as usual, handling the next replication event.
There is also a new mtr test, galera.galera_slave_replay, which stresses
both a certification failure for the async slave thread and a successful
BF abort followed by replaying.
* The `--defaults-file` option is shown in `--help --verbose` output only
if it was applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
previously it was treated as a directory with `my.cnf` appended
Part#2: cleanup:
In part 1 of the fix, the DS-MRR implementation would peek into
the JOIN_TAB to get the rowid filter from
table->reginfo.join_tab->rowid_filter
This doesn't look good from a code isolation standpoint (why should a
storage engine assume it is used through a JOIN_TAB?).
Fixed this by storing the 'un-pushed' rowid_filter in the DsMrr_impl
structure. The filter survives across multi_range_read_init() calls.
It is discarded when somebody calls index_end() or rnd_end() and cleans
up the DsMrr_impl.
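A self-contained sketch of the resulting ownership pattern (hypothetical
stand-ins for DsMrr_impl and the filter type): the engine-side object keeps
the un-pushed filter itself instead of reaching back into optimizer
structures, and the filter lives exactly as long as the scan.

    #include <cstdio>

    struct Rowid_filter { const char *name; };

    struct DsMrr_sketch                // stand-in for DsMrr_impl
    {
      Rowid_filter *rowid_filter= nullptr;

      void mrr_init(Rowid_filter *filter)
      {
        if (filter)                    // keep it across repeated inits
          rowid_filter= filter;
      }

      void scan_end()                  // from index_end()/rnd_end()
      {
        rowid_filter= nullptr;         // discarded with the scan
      }
    };

    int main()
    {
      Rowid_filter f= { "t1_filter" };
      DsMrr_sketch dsmrr;
      dsmrr.mrr_init(&f);
      dsmrr.mrr_init(nullptr);         // later init call: filter survives
      std::printf("filter: %s\n",
                  dsmrr.rowid_filter ? dsmrr.rowid_filter->name : "(none)");
      dsmrr.scan_end();
    }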
Detecting the CPUs based on sysconf() of the online CPUs can significantly
overestimate the number of CPUs available.
Whether via numactl, cgroups, taskset, systemd constraints, docker
containers or probably other mechanisms, the number of threads mysqld
can run on can be considerably smaller.
As such, we use the pthread_getaffinity_np() function on Linux and FreeBSD
(identical API) to get the number of CPUs (see the sketch below).
The number of CPUs is the default for thread_pool_size, and a too high
default will result in large memory usage and high context switching
overhead.
Closes PR #922
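A minimal standalone Linux example of the technique (a sketch, not the
server's code): count only the CPUs in the thread's affinity mask, falling
back to sysconf() if the affinity call fails.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static long usable_cpus(void)
    {
      cpu_set_t set;
      CPU_ZERO(&set);
      /* Honors taskset/cpuset/cgroup restrictions, unlike
         sysconf(_SC_NPROCESSORS_ONLN), which reports all online CPUs. */
      if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) == 0)
        return CPU_COUNT(&set);
      return sysconf(_SC_NPROCESSORS_ONLN);   /* fallback */
    }

    int main(void)
    {
      printf("usable CPUs: %ld\n", usable_cpus());
      return 0;
    }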
(Backport to 10.3)
The partitioning storage engine now supports MRR but doesn't support Index
Condition Pushdown (aka ICP). This causes counter-intuitive query plans for
queries that use BKA and conditions that depend on index fields:
- If the condition refers to other tables, BKA's variant of ICP is used
to handle it.
- If the condition depends on this table only, the optimizer will try to
use regular ICP for it, which will fail because the storage engine
doesn't support ICP.
Make the optimizer smarter in the second case: if we were not able to
use regular ICP, use BKA's variant of ICP.
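A condensed sketch of the resulting decision (helper and flag names are
hypothetical, distilled from the description above):

    #include <cstdio>

    enum icp_choice { PUSH_TO_ENGINE, CHECK_IN_BKA_BUFFER, CHECK_IN_SERVER };

    static icp_choice choose_icp(bool cond_refers_to_other_tables,
                                 bool engine_supports_icp, bool using_bka)
    {
      if (cond_refers_to_other_tables && using_bka)
        return CHECK_IN_BKA_BUFFER;   // BKA's variant of ICP
      if (engine_supports_icp)
        return PUSH_TO_ENGINE;        // regular ICP
      if (using_bka)
        return CHECK_IN_BKA_BUFFER;   // the fallback this patch adds
      return CHECK_IN_SERVER;         // evaluate after fetching the row
    }

    int main()
    {
      // Condition on this table only, engine (partitioning) lacks ICP, BKA:
      std::printf("choice: %d\n", choose_icp(false, false, true));
    }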
This patch fixes the following defects/bugs.
1. If the BKA[H] algorithm was used to join a table for which the optimizer
had decided to employ a rowid filter, the filter actually was not built.
2. The patch for bug MDEV-21356 that added the code canceling the pushing
of a rowid filter into an engine for a table joined with employment of
BKA[H] and MRR was not quite correct for the InnoDB engine, because this
cancellation was done after InnoDB code had already bound the pushed
filter to internal InnoDB structures.
Do not rebuild an index when its key part is converted from utf8mb3 to
utf8mb4 but the key part stays the same.
dict_index_add_to_cache(): assert that prefix_len is divisible by mbmaxlen.
ha_innobase::compare_key_parts(): compare key part length in characters
instead of bytes.
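A self-contained sketch of the character-vs-byte comparison (illustrative
names; mbmaxlen is 3 for utf8mb3 and 4 for utf8mb4): a prefix index stores
chars * mbmaxlen bytes, so byte lengths differ after conversion even when
the prefix covers the same characters.

    #include <cstdio>

    struct Key_part { unsigned prefix_len, mbmaxlen; };  // bytes, bytes/char

    // Equal character counts => the index need not be rebuilt.
    static bool same_prefix_in_chars(Key_part a, Key_part b)
    {
      return a.prefix_len / a.mbmaxlen == b.prefix_len / b.mbmaxlen;
    }

    int main()
    {
      Key_part old_part= { 30, 3 };  // INDEX(col(10)), utf8mb3: 10*3 bytes
      Key_part new_part= { 40, 4 };  // same prefix, utf8mb4:    10*4 bytes
      std::printf("bytes equal: %d  chars equal: %d\n",
                  old_part.prefix_len == new_part.prefix_len,
                  same_prefix_in_chars(old_part, new_part));
    }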