The following reasons caused mismatches:
- different handling of invalid values;
- different CAST results with fractional seconds;
- microseconds support in MariaDB;
- different algorithm of comparing temporal values;
- differences in error and warning texts and codes;
- different approach to truncating datetime values to time;
- additional collations;
- different record order for queries without ORDER BY;
- MySQL bug#66034.
More details in MDEV-369 comments.
Make CMakeLists.txt detect whether the installed Boost can be compiled with
the installed compiler and the specified set of compiler options.
Background: even a sufficiently new Boost cannot be compiled with a
sufficiently old gcc in the presence of -fno-rtti.
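A minimal sketch of how such a check can look: a CHECK_CXX_SOURCE_COMPILES-style
probe is built with the project's compiler and flags, and Boost is only used
when the probe succeeds. Which Boost headers the real check exercises is an
assumption here; the probe below is only illustrative.

    // Hypothetical probe source for a CHECK_CXX_SOURCE_COMPILES-style test.
    // The real check would include whichever Boost headers the server needs,
    // so that a too-old gcc, or -fno-rtti among the flags, makes this fail to
    // compile and the Boost-dependent code path is then disabled.
    #include <boost/version.hpp>

    int main()
    {
      // Touch the header so the compiler really has to parse it.
      return (BOOST_VERSION > 0) ? 0 : 1;
    }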
The mysql_rm_table_no_locks() function was modified.
When we construct the log record for the DROP TABLE, we now
check whether there is a comment before the first table name and,
if so, add it to the record.
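A minimal stand-alone model of the idea (the name comment_length_model and its
signature are illustrative; the actual helper in sql/sql_table.cc may differ):

    // Given the query text and the offset right after "DROP TABLE [IF EXISTS]",
    // return the length of a /* ... */ comment found there (0 if none), so the
    // comment can be copied into the DROP TABLE log record.
    #include <cstdio>
    #include <cstring>

    static size_t comment_length_model(const char *query, size_t pos,
                                       const char **comment_start)
    {
      const char *p= query + pos;
      while (*p == ' ')
        p++;                                   // skip whitespace
      if (p[0] != '/' || p[1] != '*')
        return 0;                              // no comment at this position
      const char *end= strstr(p + 2, "*/");
      if (!end)
        return 0;                              // unterminated comment: ignore
      *comment_start= p;
      return (size_t)(end + 2 - p);
    }

    int main()
    {
      const char *query= "DROP TABLE IF EXISTS /*!40005 TEMPORARY */ t1";
      const char *start= NULL;
      size_t len= comment_length_model(query, strlen("DROP TABLE IF EXISTS"),
                                       &start);
      if (len)
        printf("comment kept in the log record: %.*s\n", (int) len, start);
      return 0;
    }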
per-file comments:
sql/sql_table.cc
MDEV-340 Save replication comments for DROP TABLE.
Implemented the comment_length() function to find comments in the query;
it is called in mysql_rm_table_no_locks() and its result is used to form
the log record.
mysql-test/suite/binlog/r/binlog_drop_if_exists.result
MDEV-340 Save replication comments for DROP TABLE.
test result updated.
mysql-test/suite/binlog/t/binlog_drop_if_exists.test
MDEV-340 Save replication comments for DROP TABLE.
test case added.
- Moved the definitions of the classes to store data from persistent
statistical tables into statistics.h, leaving in other internal
data structures only references to the corresponding objects.
- Defined class Column_statistics_collected derived from the class
Column_statistics. This is a helper class to collect statistics
on columns (a rough stand-alone model of this class split is sketched
after this list).
- Moved references to read statistics to TABLE SHARE, leaving the
reference to the collected statistics in TABLE.
- Added a new clone method for the class Field that allows cloning
fields attached to table shares. It is used to create fields for
min/max values in the memory of the table share.
Also:
- Added procedures to allocate memory for statistical data in
the table share memory and in table memory.
Also:
- Added a test case demonstrating how ANALYZE could work in parallel
to collect statistics on different indexes of the same table.
- Added a test to demonstrate how two connections working
simultaneously could allocate memory for statistical data in the
table share memory.
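A rough stand-alone model of the class split and of the min/max fields kept in
table-share memory (member and method names here are illustrative assumptions,
not the actual contents of sql/statistics.h):

    struct Field;                       // stands in for the server's Field class

    // Data ultimately read from / written to the persistent statistical tables.
    class Column_statistics
    {
    public:
      Field *min_value;                 // fields cloned into table-share memory
      Field *max_value;
      double nulls_ratio;
      double avg_length;
      double avg_frequency;
    };

    // Helper used only while ANALYZE collects statistics on a column: it keeps
    // running counters and finally derives the Column_statistics values.
    class Column_statistics_collected : public Column_statistics
    {
    public:
      unsigned long long nulls;                 // counters updated per scanned row
      unsigned long long column_total_length;
      void add() {}                             // account for one column value
      void finish(unsigned long long rows) { (void) rows; }  // derive the ratios
    };

    int main() { return 0; }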
- index_merge/intersection is unable to work on GIS indexes, because:
1. index scans have no Rowid-Ordered-Retrieval property
2. When one does an index-only read over a GIS index, one does not
get the index tuple, because the index only contains the bounding box
of the geometry.
This is why the key_copy() call crashed.
This patch fixes #1, which makes the problem go away. Theoretically, it would
be nice to check #2 too, but the SE API semantics are not sufficiently precise
to do it.
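A stand-alone illustration of fix #1 (simplified stand-in structures, not the
server's actual range-optimizer code): R-tree (GIS) index scans do not return
rows in rowid order, so such keys are excluded when collecting ROR-capable
candidates for index_merge/intersection.

    #include <cstdio>
    #include <vector>

    enum key_alg { KEY_ALG_BTREE, KEY_ALG_RTREE };

    struct KeyInfo { const char *name; key_alg algorithm; };

    // Collect keys usable for ROR intersection, skipping spatial (R-tree) ones.
    static std::vector<const KeyInfo*>
    ror_capable_keys(const std::vector<KeyInfo> &keys)
    {
      std::vector<const KeyInfo*> out;
      for (const KeyInfo &k : keys)
      {
        if (k.algorithm == KEY_ALG_RTREE)
          continue;                     // GIS scan: no Rowid-Ordered-Retrieval
        out.push_back(&k);
      }
      return out;
    }

    int main()
    {
      std::vector<KeyInfo> keys= { {"PRIMARY", KEY_ALG_BTREE},
                                   {"gis_idx", KEY_ALG_RTREE} };
      for (const KeyInfo *k : ror_capable_keys(keys))
        printf("ROR candidate: %s\n", k->name);
      return 0;
    }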
Now the partition engine adds the underlying tables to the query cache (QC)
and asks the underlying tables' engines for permission to cache the query and
return the query result.
Fixed incorrect QC cleanup in case of table registration failure.
Unified the QC interface for the myisammrg and partitioned engines.
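A stand-alone model of that unified check (the types and method names below are
simplified assumptions; in the server this goes through the handler's
query-cache registration hook): the result is cacheable only if every
underlying table's engine permits it, and each underlying table is registered
with the QC.

    #include <cstdio>
    #include <vector>

    struct SubTableHandler
    {
      const char *name;
      bool cacheable;                     // what the underlying engine reports
      bool allows_query_cache() const { return cacheable; }
    };

    struct PartitionHandler
    {
      std::vector<SubTableHandler> parts;

      // True only if all underlying tables may be cached; each of them is
      // registered with the query cache along the way.
      bool register_in_query_cache() const
      {
        for (const SubTableHandler &p : parts)
        {
          if (!p.allows_query_cache())
            return false;                 // one engine refuses: do not cache
          printf("register %s in query cache\n", p.name);
        }
        return true;
      }
    };

    int main()
    {
      PartitionHandler h{ { {"t1#P#p0", true}, {"t1#P#p1", true} } };
      printf("cacheable: %d\n", (int) h.register_in_query_cache());
      return 0;
    }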
COUNT DISTINCT GROUP BY
PROBLEM:
To calculate the final result of count(distinct(select 1)),
we call the 'end_send' function instead of 'end_send_group'.
'end_send' cannot be called if we have aggregate functions
that need to be evaluated.
ANALYSIS:
While evaluating for a possible loose_index_scan option for
the query, the variable 'is_agg_distinct' is set to 'false'
as the item in the distinct clause is not a field. But, we
choose loose_index_scan by not taking this into
consideration.
So, while setting the final 'select_function' to evaluate
the result, 'precomputed_group_by' is set to TRUE as in
this case loose_index_scan is chosen and we do not have
agg_distinct in the query (which is clearly wrong as we
have one).
As a result, 'end_send' function is chosen as the final
select_function instead of 'end_send_group'. The difference
between the two being, 'end_send_group' evaluates the
aggregates while 'end_send' does not. Hence the wrong result.
FIX:
The variable 'is_agg_distinct' always represents whether
'loose_index_scan' can be chosen for the aggregate-distinct
functions present in the select.
So, we check for this variable to continue with
loose_index_scan option.
sql/opt_range.cc:
Do not continue if is_agg_distinct is not set in case
of agg_distinct functions.
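A stand-alone sketch of that check (simplified names; the real condition lives
in sql/opt_range.cc): loose index scan is kept for a query with
aggregate-distinct functions only when 'is_agg_distinct' was actually
established for it, so that 'end_send_group' evaluates the aggregates.

    #include <cstdio>

    struct QueryProps
    {
      bool have_agg_distinct_funcs;   // query contains count(distinct ...) etc.
      bool is_agg_distinct;           // loose index scan proven valid for them
    };

    // Decide whether to continue with the loose_index_scan option.
    static bool can_use_loose_index_scan(const QueryProps &q)
    {
      if (q.have_agg_distinct_funcs && !q.is_agg_distinct)
        return false;                 // bail out: aggregates must be evaluated
      return true;
    }

    int main()
    {
      QueryProps q= { true, false };  // the count(distinct (select 1)) case
      printf("loose index scan allowed: %d\n",
             (int) can_use_loose_index_scan(q));
      return 0;
    }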
Now when a table is dropped, the statistics on the table are removed
from the statistical tables. If the table is altered in such a way
that a column is dropped or the type of the column is changed, then
the statistics on the column are removed from the table column_stat.
This also triggers removal of the statistics on the indexes that use
this column as a component.
Added procedures that change the names of tables or columns
in the statistical tables.
These procedures are used when tables/columns are renamed.
Also partly re-factored the code that introduced the persistent
statistical tables.
Added test cases into statistics.test to cover the new code.
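A stand-alone model of the cascading cleanup on column drop/type change (table
and helper names are illustrative; the real code deletes rows from the
persistent statistical tables):

    #include <cstdio>
    #include <string>
    #include <vector>

    struct IndexDef { std::string name; std::vector<std::string> columns; };

    // When a column is dropped or its type changes, delete its column
    // statistics and the statistics of every index that uses the column.
    static void drop_column_statistics(const std::string &table,
                                       const std::string &column,
                                       const std::vector<IndexDef> &indexes)
    {
      printf("delete column stats for %s.%s\n", table.c_str(), column.c_str());
      for (const IndexDef &idx : indexes)
        for (const std::string &col : idx.columns)
          if (col == column)
          {
            printf("delete index stats for %s.%s\n",
                   table.c_str(), idx.name.c_str());
            break;
          }
    }

    int main()
    {
      std::vector<IndexDef> indexes= { {"idx_ab", {"a", "b"}},
                                       {"idx_c",  {"c"}} };
      drop_column_statistics("t1", "a", indexes);
      return 0;
    }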