MDEV-17058: Test failure on wsrep.variables
MDEV-17060: Test failure on galera.galera_var_slave_threads
Fix incorrect calculation of increased applier (slave) threads.
Note that an increase takes effect "immediately", but we should
use a proper wait condition to wait for it. Reducing the number of
slave threads is not immediate, as a thread will only exit after a
replication event.
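A test can use a wait pattern along these lines (a sketch only; the
node setup and thread count are illustrative, not the actual test):

    --connection node_2
    SET GLOBAL wsrep_slave_threads = 4;
    # The increase takes effect asynchronously, so wait for the applier
    # threads to actually appear instead of checking the count right away.
    --let $wait_condition = SELECT COUNT(*) >= 4 FROM INFORMATION_SCHEMA.PROCESSLIST WHERE USER = 'system user'
    --source include/wait_condition.inc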
The problem was that during SST the log_bin_index name and directory
were not handled and passed to the rsync SST script.
wsrep_sst_common.sh
Read the binlog index dirname and filename if the --binlog-index
parameter is provided. On the donor, read the binlog file names
from that file; on the joiner, write the transferred binlog file
names to that file.
mysqld.cc, mysqld.h
Made opt_binlog_index_name global instead of static and added
an extern declaration for it.
wsrep_sst.cc
generate_binlog_index_opt_val
New function to generate the binlog index name if
opt_binlog_index_name is given in the configuration.
sst_prepare_other
Add binlog index configuration to SST command.
wsrep_sst.h
Add new SST parameter --binlog-index
Add test case.
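As a rough sanity check of the intended behaviour (a sketch; the actual
test case may verify this differently), after SST the joiner should
report the configured binlog index location and list the transferred
binlogs:

    SELECT @@log_bin, @@log_bin_index;
    SHOW BINARY LOGS;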
The regression that was introduced in
commit 723f87e9d3
was fixed as part of MDEV-13333
(commit 3b37edee1a)
without a test case, because the MDEV-13333 test case
is even less deterministic than these ones.
ALTER TABLE locks the table with TL_READ_NO_INSERT, to prevent
modifications of the source table while it's being copied. But there's
an indirect way of modifying a table: via cascade FK actions.
After previous commits, an attempt to modify an FK parent table
will cause FK children to be prelocked, so the table-being-altered
cannot be modified by a cascade FK action, because ALTER holds a
lock and prelocking will wait.
But if a new FK is being added by this very ALTER, then the target
table is not locked yet (it's a temporary table). So, we have to
lock FK parents explicitly.
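A sketch of the scenario (table and column names are illustrative):

    CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE child (id INT PRIMARY KEY, pid INT) ENGINE=InnoDB;
    # While this ALTER copies `child`, a cascading DELETE on `parent`
    # could modify the copy target; since the new FK is not yet part of
    # the prelock set, `parent` has to be locked by the ALTER itself.
    ALTER TABLE child
      ADD FOREIGN KEY (pid) REFERENCES parent (id) ON DELETE CASCADE;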
table_already_fk_prelocked() was looking for a table in the wrong
list (not in the complete list of prelocked tables, but only in its
tail, starting from the current table, which is always empty for the
last added table), so for circular FKs it kept adding the same tables
to the list indefinitely.
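A minimal circular-FK setup of the kind that triggered the loop
(illustrative):

    CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
    CREATE TABLE t2 (a INT PRIMARY KEY, b INT,
                     FOREIGN KEY (b) REFERENCES t1 (a)) ENGINE=InnoDB;
    # Closing the cycle: prelocking either table now reaches the other
    # one through FKs, and the prelock-list walk must terminate.
    ALTER TABLE t1 ADD FOREIGN KEY (b) REFERENCES t2 (a);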
Backport of d6d7e169fb
If an mtr test case has started two mysqld processes (replication tests),
then kills the first one and kills the second one before starting the
first again (so at some point both mysqlds are down), then the ./mtr
waiting process gets stuck and forgets to monitor the "expect" file of
the first mysqld, so it never gets started again, even when its contents
are changed to "restart".
A victim of this deficiency is at least galera.galera_gcache_recover.
The fix is to keep a list of all mysqlds we should wait to start, not
just one (the last one killed).
The problem was that the creation of the join_columns list was not
finished, due to an error about a column named in USING not being found,
but the next execution tried to use the half-built join_columns lists.
The solution is to clean up the lists on error. This can eat memory in
the statement MEM_ROOT, but it only happens on error, and the error will
either be fixed or the statement/procedure removed or altered.
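A sketch of the re-execution scenario (names are illustrative):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    CREATE PROCEDURE p() SELECT * FROM t1 JOIN t2 USING (a);
    CALL p();  # fails: 'a' is not a column of t2
    CALL p();  # must fail cleanly again, not reuse the half-built list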
The problem was that the SQL layer tried to read a record with rnd_pos()
that had already been deleted by the same statement.
In the case where the page for the record had been deleted, this
caused an assertion failure.
Fixed by extending the assertion to also handle empty pages and by
returning HA_ERR_RECORD_DELETED for reads from deleted pages.
Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepare statements missed
the DT_INIT flag in the last parameter of the open_normal_and_derived_tables()
calls. It made context analysis of derived tables dependent on the order in
which the derived tables were processed by mysql_handle_derived(). This
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different, and the
lack of the DT_INIT flag in some open_normal_and_derived_tables() calls
became critical, as some derived tables were not identified as such.
optimizer_use_condition_selectivity>=3
Selectivity analysis should be disabled for geometry columns
in cases like geometric_field = string_constant.
Currently, for the selectivity calculation we perform range analysis for
a column even when we don't have any statistics (EITS) for it.
This makes little sense, except to catch contradictions in the WHERE
condition.
So the solution is to not perform range analysis, for the purpose of
selectivity calculation, for columns that have no statistics.
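A sketch of the affected case (illustrative):

    CREATE TABLE t1 (g GEOMETRY NOT NULL);
    SET optimizer_use_condition_selectivity = 3;
    # No EITS statistics exist for g, so no range analysis should be
    # attempted for the geometry column during selectivity calculation.
    SELECT * FROM t1 WHERE g = 'text';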
The problem was that a parallel open of a table overwrote info->state,
which was in use by repair.
Fixed by changing _ma_tmp_disable_logging_for_table() to use
a new state buffer, state.no_logging, to store the temporary state.
Other things:
- Use original number of rows when retrying repair to get rid of a
potential warning "Number of rows changed from X to Y"
- Changed maria_commit() to make it easier to merge with 10.4
- If the table is not locked (as with SHOW commands), use the global
  number of rows, as the local number may not be up to date
  (a minor, not critical, fix).
- Added some missing DBUG_RETURN calls
Current versions of xtrabackup-v2 and mariabackup support the option
--innodb-data-home-dir, but this parameter is not passed to them from
the SST script, since the SST script does not receive this information
from mysqld. The transfer of this information to the SST is already
fixed by the MDEV-10754 patch, but we need to process it in the SST
script. Also, we should take into account that on the joiner side
the corresponding information has not yet been read from the configuration
file (on the mysqld side) when SST starts, so the script must
read it itself.
https://jira.mariadb.org/browse/MDEV-10756
The bug appears because of the Item_func_in::build_clone() method.
The 'array' field for the Item_func_in item that can be pushed into
the materialized view/derived table was built in the wrong way.
It became invalid after the pushdown of the condition into the first
SELECT that defines that view/derived table. The server crashed during
the pushdown into the next SELECT while trying to use the already
invalid 'array' field.
To fix this, Item_func_in::build_clone() was changed.
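A sketch of the crashing pattern (illustrative): the IN predicate is
pushed into both SELECTs of the union that defines the derived table,
and the clone built for the second pushdown must not reuse the lookup
array spoiled by the first one.

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    SELECT * FROM (SELECT a FROM t1 UNION SELECT a FROM t2) AS dt
    WHERE dt.a IN (1, 2, 3);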
The problem was that Create_field::create_length_to_internal_length()
calculated a different pack_length for NEWDECIMAL compared to the
Field_new_decimal constructor, which led to some unused bytes
in the middle of the record, which Aria didn't like.
The problem was that the number of NULL bits was recorded wrong in the
.frm file, because more fields could be marked NOT_NULL after the
number of not-null fields had been recorded.
Fixed by copying the test for virtual fields from prepare_create_field().
Only the test case, not the code change, has to be merged to 10.3,
as this is already fixed there.
and use_stat_tables= PREFERABLY
Currently the code that calculates selectivity for a table does not take
into account the case when we can use the GROUP BY optimization
(loose index scan).
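A sketch of the kind of query involved (illustrative), where the
GROUP BY can be resolved by a loose index scan:

    CREATE TABLE t1 (a INT, b INT, KEY (a, b));
    SET use_stat_tables = PREFERABLY;
    SET optimizer_use_condition_selectivity = 3;
    SELECT a, MIN(b) FROM t1 GROUP BY a;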
After iterating all fields and setting PART_INDIRECT_KEY_FLAG as
necessary, TABLE::mark_columns_used_by_virtual_fields() remembers
in TABLE_SHARE that this operation was done and need not be repeated.
But as the flag is set in TABLE_SHARE, PART_INDIRECT_KEY_FLAG must
be set in TABLE_SHARE::field[], not only in TABLE::field[].
Otherwise, new TABLEs opened from this TABLE_SHARE would
never get it.
The bug appears because the pushdown of conditions into the WHERE clause
of a materialized derived table/view worked incorrectly. The
excl_dep_on_grouping_fields() method, which checks whether a condition
can be pushed into the WHERE clause, was missing the case when an
Item_cond is used. For Item_cond elements this method always returned a
positive result (that the condition can be pushed), so such a condition
was pushed even when it shouldn't be.
To fix this, a new Item_cond::excl_dep_on_grouping_fields() method was
added.
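A sketch of the problematic shape (illustrative): the OR combines a
grouping field with an aggregate, so the whole condition must not be
pushed into the WHERE clause of the view's SELECT.

    CREATE TABLE t1 (a INT, b INT);
    CREATE VIEW v1 AS SELECT a, MAX(b) AS mx FROM t1 GROUP BY a;
    SELECT * FROM v1 WHERE a = 1 OR mx > 10;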
using INSERT INTO
This patch allows condition pushdown into a materialized derived table /
view when this table is used in INSERT SELECT, multi-table UPDATE, or
multi-table DELETE.
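For example (an illustrative sketch, not the actual test case):

    CREATE TABLE t1 (a INT, b INT);
    CREATE TABLE t2 (a INT, s INT);
    # The condition on dt can now be pushed into the derived table even
    # though it is used inside INSERT ... SELECT.
    INSERT INTO t2
    SELECT * FROM (SELECT a, SUM(b) AS s FROM t1 GROUP BY a) AS dt
    WHERE dt.s > 10;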
This patch introduces support for the system variable eq_range_index_dive_limit
that existed in MySQL starting from 5.6. The variable sets a limit for
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
a good estimate, but they are pretty expensive. Alternatively, statistical
data on indexes can be employed to estimate the number of rows in equality
ranges. This gives less accurate estimates, but it is cheap. So, if the
number of equality dives required by an index scan exceeds the set limit,
no dives for equality ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number of
index dives performed by the optimizer.
The patch partially uses the MySQL code for WL 5957
'Statistics-based Range optimization for many ranges'.
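For example (a sketch; following MySQL's semantics for this variable,
where 0 means no limit):

    CREATE TABLE t1 (a INT, KEY (a));
    # With the limit set to 2, the 5 equality ranges below are estimated
    # from index statistics instead of 5 separate index dives.
    SET eq_range_index_dive_limit = 2;
    EXPLAIN SELECT * FROM t1 WHERE a IN (1, 2, 3, 4, 5);
    # Back to the 'no limit' behaviour: always dive.
    SET eq_range_index_dive_limit = 0;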