Currently, for selectivity calculation we perform range analysis for a column even when we don't have any statistics (EITS).
This makes little sense, but it is used to catch contradictions in the WHERE condition.
So the solution is to not perform range analysis for selectivity calculation on columns that do not have statistics.
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
good estimates, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be employed instead. This
gives rougher estimates, but it is cheap. So if the number of equality dives
required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.
The patch partially uses the MySQL code for WL 5957
'Statistics-based Range optimization for many ranges'.
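To make the trade-off concrete, here is a minimal, self-contained C++ sketch of the
decision the variable controls. It is not the server code; the names
estimate_equality_ranges, rows_from_index_dive and rows_from_rec_per_key are invented
for illustration, and 0 stands in for the special "no limit" default mentioned above.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Stand-ins for the two estimation strategies: an index dive
    // (precise but expensive) and index statistics (cheap but approximate).
    static uint64_t rows_from_index_dive(uint64_t /*key*/)  { return 3; }
    static uint64_t rows_from_rec_per_key(uint64_t /*key*/) { return 10; }

    uint64_t estimate_equality_ranges(const std::vector<uint64_t> &eq_keys,
                                      std::size_t eq_range_index_dive_limit)
    {
      // 0 plays the role of the special "no limit" default described above.
      const bool use_dives= eq_range_index_dive_limit == 0 ||
                            eq_keys.size() < eq_range_index_dive_limit;
      uint64_t total= 0;
      for (uint64_t key : eq_keys)
        total+= use_dives ? rows_from_index_dive(key)
                          : rows_from_rec_per_key(key);
      return total;
    }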
for blob column
ANALYZE TABLE <table> does not collect statistical data on min/max values
for BLOB columns of <table>. However, these values can be added to
mysql.column_stats manually by executing the appropriate statements.
Unfortunately this led to a memory leak because the memory allocated
for these values was never freed.
This patch provides the server with a function to free memory allocated
for min/max statistical values of BLOB types.
Temporarily changed the test case until MDEV-16711 is fixed, as without
that fix the test case for MDEV-16757 only passed on 10.0.
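A minimal sketch of the kind of cleanup the patch adds, assuming a simplified
stand-in for the per-column statistics (the real values are held via Field
objects, and the struct and function names here are invented):

    #include <cstdlib>

    // Simplified per-column statistics that may own heap copies of min/max
    // values read from mysql.column_stats for a BLOB column.
    struct ColumnStatsSketch
    {
      char *min_value= nullptr;
      char *max_value= nullptr;
    };

    // The kind of helper the patch introduces: release the memory that was
    // allocated for the min/max values so that the table no longer leaks it.
    void free_blob_min_max(ColumnStatsSketch *stats)
    {
      free(stats->min_value);
      free(stats->max_value);
      stats->min_value= stats->max_value= nullptr;
    }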
Encountered illegal value '' when converting to DECIMAL
The issue was that EITS data was allocated but then not read for some reason (one being to avoid a deadlock),
and the optimizer then used these bzero'ed buffers as EITS statistics.
This should not be allowed: we should use statistics for a table only when we have successfully loaded/read
the stats from the statistical tables.
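The invariant the fix enforces can be illustrated with a small sketch; the struct,
flag and function names are made up, and the real server tracks this state differently:

    #include <optional>

    struct TableStatsSketch
    {
      double avg_frequency= 0.0;  // zero-filled right after allocation
      bool   stats_read= false;   // set only after a successful read
    };

    // Only hand statistics to the optimizer once they were actually read;
    // a merely allocated, bzero'ed buffer makes the caller fall back to
    // non-EITS estimates instead.
    std::optional<double> column_avg_frequency(const TableStatsSketch &s)
    {
      if (!s.stats_read)
        return std::nullopt;
      return s.avg_frequency;
    }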
Functions in sql/statistics.cc do not seem to expect
stat tables to fail to open or to have an inadequate structure.
Table open errors are now suppressed and some validity
checks have been added. Invalid tables are reported to the server
log.
Internal updates to system statistical tables could wrongly
trigger an additional total-order replication if
wsrep_replicate_myisam is enabled.
Fixed by adding a check to skip total-order replication for
stat tables.
Test: galera.galera_var_replicate_myisam_on
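A rough sketch of the added guard, with invented function names (this is not the
wsrep API); the three persistent statistics tables live in the mysql schema:

    #include <string_view>

    // The persistent statistics tables that internal updates write to.
    static bool is_stat_table(std::string_view db, std::string_view table)
    {
      return db == "mysql" &&
             (table == "table_stats" || table == "column_stats" ||
              table == "index_stats");
    }

    // Skip total-order replication for internal stat-table updates even when
    // wsrep_replicate_myisam is enabled; otherwise keep the existing behaviour.
    bool use_total_order_replication(bool wsrep_replicate_myisam,
                                     std::string_view db, std::string_view table)
    {
      if (is_stat_table(db, table))
        return false;
      return wsrep_replicate_myisam;
    }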
1. When a min/max value is provided, the null flag for it must be set to 0
in the bitmap Column_statistics::column_stat_nulls.
2. When the calculation of the selectivity of a range condition
over a column requires the min and max values for the column, we
have to check that these values have actually been provided.
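Both rules can be pictured with a tiny sketch; the bit positions and names below
are invented, while the real flags live in the bitmap Column_statistics::column_stat_nulls:

    #include <cstdint>

    struct ColumnStatNullsSketch
    {
      enum { MIN_VALUE_BIT= 0, MAX_VALUE_BIT= 1 };
      uint32_t nulls= ~0u;                    // every statistic starts out "absent"

      // Rule 1: providing a value clears ("sets to 0") its null flag.
      void mark_provided(unsigned bit)        { nulls&= ~(1u << bit); }
      bool is_provided(unsigned bit) const    { return !(nulls & (1u << bit)); }
    };

    // Rule 2: range selectivity may use min/max only when both were provided.
    bool can_use_min_max(const ColumnStatNullsSketch &s)
    {
      return s.is_provided(ColumnStatNullsSketch::MIN_VALUE_BIT) &&
             s.is_provided(ColumnStatNullsSketch::MAX_VALUE_BIT);
    }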
- Fix the crash by making get_column_range_cardinality()
handle the special case where the Column_stats object
is an all-zeros object (the question of what is the point
of having Field::read_stats point to such an object remains a
mystery)
- Added a few comments. Learning the code still.
mysql.column_stats wasn't stored/restored properly on big-endian
with histogram_type=DOUBLE_PREC_HB.
Store histogram values using int2store()/uint2korr().
Note that this patch invalidates previously calculated histogram
values on big-endian.
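For reference, the two macros fix a byte order for the 2-byte histogram values;
portable equivalents look roughly like this (simplified, not the server's actual
definitions):

    #include <cstdint>

    // Store/read a 16-bit value least-significant byte first, so the bytes
    // written to mysql.column_stats no longer depend on host endianness.
    inline void int2store_sketch(unsigned char *dst, uint16_t value)
    {
      dst[0]= static_cast<unsigned char>(value & 0xFF);
      dst[1]= static_cast<unsigned char>(value >> 8);
    }

    inline uint16_t uint2korr_sketch(const unsigned char *src)
    {
      return static_cast<uint16_t>(src[0] | (src[1] << 8));
    }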
- Histogram::find_bucket() should not walk off the end of the value range.
- Address review feedback in Histogram::point_selectivity(): different handling
for zero-width buckets, and explanations.
- Fix Histogram::point_selectivity() to work in the case where the
passed value_pos=0 (or 1) and the first (or the last) bucket in the
histogram has a zero value-range (i.e. one value).
[Attempt #2]
- Use a new selectivity calculation formula in Histogram::point_selectivity.
The formula is different from the old one because it was developed from scratch;
it doesn't have any possible division-by-zero problems.
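The following is only an illustrative sketch of point selectivity over a height-based
histogram whose bucket bounds are value positions scaled into [0, 1]; it is not the
formula from the patch, but it shows the two cases discussed above: a zero-width
bucket (one heavy value) contributes its full 1/N share, while a non-degenerate bucket
spreads its share over the distinct values it is assumed to hold, so no division by a
zero bucket width is ever needed. It also handles value_pos at 0 or 1 when the first
or last bucket is degenerate.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    double point_selectivity_sketch(const std::vector<double> &bucket_ends,
                                    double pos, double avg_distinct_per_bucket)
    {
      const std::size_t n= bucket_ends.size();
      if (n == 0)
        return 0.0;
      const double bucket_share= 1.0 / static_cast<double>(n); // equi-height share

      double sel= 0.0;
      double start= 0.0;                     // histogram covers [0, 1]
      for (std::size_t i= 0; i < n; i++)
      {
        const double end= bucket_ends[i];
        if (pos >= start && pos <= end)
        {
          if (end == start)                  // zero-width bucket: one heavy value
            sel+= bucket_share;
          else                               // spread the share over its values
            sel+= bucket_share / std::max(avg_distinct_per_bucket, 1.0);
        }
        start= end;
      }
      return sel;
    }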
This is a port of the fix for MySQL BUG#17647863.
revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM
Rename test() macro to MY_TEST() to avoid conflict with libc++.
includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share, now LOCK_ha_data is strictly for engines
* rea_create_table() always creates .par file (even in "frm-only" mode)
* fix a 5.6 bug, temp file leak on dummy ALTER TABLE
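For context, the renamed macro is just a boolean-normalising helper; its definition
is roughly the one-liner below (only the name changes, the body stays the same):

    // The old name collided with an identifier pulled in by libc++ headers
    // on OS X 10.9; only the macro name was changed.
    #define MY_TEST(a) ((a) ? 1 : 0)   /* previously: #define test(a) ((a) ? 1 : 0) */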
When calculating the selectivity of a range in the function
get_column_range_cardinality, a check whether NULL values are
included in the range must be done.
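A condensed sketch of the point being made, with invented names (the real logic sits
in get_column_range_cardinality): when the range can match NULLs, the NULL fraction
recorded in the column statistics has to be added on top of the estimate for the
value range.

    struct RangeSketch
    {
      bool includes_null;        // e.g. "col <= 10 OR col IS NULL"
    };

    double range_cardinality_sketch(const RangeSketch &range,
                                    double non_null_rows, double null_rows,
                                    double value_range_selectivity)
    {
      double rows= non_null_rows * value_range_selectivity;
      if (range.includes_null)   // the check this fix adds
        rows+= null_rows;
      return rows;
    }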
Don't try to use a histogram if it has not been read into the cache for statistical data.
This may happen if optimizer_use_condition_selectivity is set to 3, a
setting that tells the optimizer not to use histograms to calculate selectivity.
Made allocation of memory for statistical data in a table share thread-safe.
This memory is now allocated in a special MEM_ROOT that is created for each
table share.
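A sketch of the property being added, under the assumption of a simplified share
object (names invented; the real code uses a per-share MEM_ROOT rather than a
unique_ptr): two connections opening the same table concurrently must not both
initialise the share-level statistics memory.

    #include <cstddef>
    #include <memory>
    #include <mutex>

    struct TableShareSketch
    {
      std::mutex stats_lock;
      std::unique_ptr<char[]> stats_memory;   // lives as long as the share

      // Allocate the statistics block exactly once, no matter how many
      // connections ask for it at the same time.
      char *get_stats_memory(std::size_t size)
      {
        std::lock_guard<std::mutex> guard(stats_lock);
        if (!stats_memory)
          stats_memory= std::make_unique<char[]>(size);
        return stats_memory.get();
      }
    };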
that introduced engine independent persistent statistics.
In particular:
- added an enumeration type for possible values of the system
variable use_stat_tables
- renamed KEY::real_rec_per_key to KEY::actual_rec_per_key
- optimized the collection of statistical data for any primary
key defined only on one column.
- Moved the definitions of the classes to store data from persistent
statistical tables into statistics.h, leaving in other internal
data structures only references to the corresponding objects.
- Defined class Column_statistics_collected derived from the class
Column_statistics. This is a helper class to collect statistics
on columns.
- Moved references to read statistics to TABLE_SHARE, leaving
the reference to the collected statistics in TABLE.
- Added a new clone method for the class Field that allows cloning
fields attached to table shares. It was used to create
fields for min/max values in the memory of the table share (see the
sketch at the end of this entry).
Also:
- Added procedures to allocate memory for statistical data in
the table share memory and in table memory.
Also:
- Added a test case demonstrating how ANALYZE could work in parallel
to collect statistics on different indexes of the same table.
- Added a test to demonstrate how two connections working
simultaneously could allocate memory for statistical data in the
table share memory.
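As referenced in the Field clone bullet above, here is a hedged sketch of the cloning
idea; the types and names are illustrative, not the server's Field hierarchy, and a
std::pmr arena stands in for the memory owned by the table share.

    #include <memory_resource>
    #include <new>

    // A deliberately trivial stand-in for a field holding a min or max value.
    struct FieldSketch
    {
      unsigned length;
      unsigned char *ptr;      // points into the share's statistics buffer

      // Place the copy in memory owned by the table share so that it outlives
      // any single TABLE instance that happens to use it.
      FieldSketch *clone_into(std::pmr::memory_resource &share_arena) const
      {
        void *mem= share_arena.allocate(sizeof(FieldSketch), alignof(FieldSketch));
        return new (mem) FieldSketch(*this);
      }
    };

A monotonic arena owned by the share would release all such copies at once when the
share itself is freed, which mirrors the lifetime the commit describes.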