The problem was in update_const_equal_items(), which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
the equality is connected by OR with another expression.
The solution is to mark as constant only top-level equalities connected by AND.
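A minimal self-contained sketch of that rule, with a toy condition tree standing in for the server's Item structures (all names below are illustrative, not the actual update_const_equal_items() code): equalities are marked constant only while the walk descends through AND nodes, so anything under an OR is left untouched.

  #include <iostream>
  #include <memory>
  #include <vector>

  struct Cond {
    enum Kind { AND_NODE, OR_NODE, EQUALITY } kind;
    std::vector<std::unique_ptr<Cond>> args;  // children of AND/OR nodes
    bool marked_const = false;                // meaningful only for EQUALITY
    explicit Cond(Kind k) : kind(k) {}
  };

  // Mark only "top" equalities, i.e. those reachable through AND nodes alone;
  // an equality below an OR may not hold for every matching row, so it is skipped.
  static void mark_top_equalities(Cond *c) {
    if (c->kind == Cond::EQUALITY) {
      c->marked_const = true;
      return;
    }
    if (c->kind == Cond::AND_NODE)
      for (auto &child : c->args)
        mark_top_equalities(child.get());
    // OR_NODE: stop here, nothing below it gets marked
  }

  int main() {
    // Build: eq1 AND (eq2 OR eq3)
    auto root = std::make_unique<Cond>(Cond::AND_NODE);
    root->args.push_back(std::make_unique<Cond>(Cond::EQUALITY));
    auto or_node = std::make_unique<Cond>(Cond::OR_NODE);
    or_node->args.push_back(std::make_unique<Cond>(Cond::EQUALITY));
    or_node->args.push_back(std::make_unique<Cond>(Cond::EQUALITY));
    root->args.push_back(std::move(or_node));

    mark_top_equalities(root.get());
    std::cout << "eq1 marked: " << root->args[0]->marked_const << "\n";          // 1
    std::cout << "eq2 marked: " << root->args[1]->args[0]->marked_const << "\n"; // 0
  }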
Create the Item_cache based on the item's cmp_type, not its result_type, in
subselect_engine.
Use result_field in Item_cache_temporal::cache_value(),
just as all the other Item_cache*::cache_value() methods do.
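A toy illustration of the cmp_type vs. result_type distinction (simplified stand-in types, not the server's Item hierarchy): a DATETIME value reports a string-like result_type but a temporal cmp_type, so choosing the cache class from result_type would cache it as a plain string and lose the temporal comparison semantics.

  #include <iostream>
  #include <string>

  enum ValueType { INT_T, REAL_T, STRING_T, TIME_T };

  struct ItemInfo {
    ValueType result_type;  // how the value is returned to the caller
    ValueType cmp_type;     // how the value should be compared
  };

  // Pick a cache class name from the chosen type (illustrative mapping only).
  static std::string cache_class_for(ValueType t) {
    switch (t) {
    case INT_T:    return "Item_cache_int";
    case REAL_T:   return "Item_cache_real";
    case TIME_T:   return "Item_cache_temporal";
    case STRING_T: return "Item_cache_str";
    }
    return "";
  }

  int main() {
    // A DATETIME column: string-ish result, temporal comparison.
    ItemInfo datetime_col{STRING_T, TIME_T};
    std::cout << cache_class_for(datetime_col.result_type) << "\n"; // Item_cache_str (old behavior)
    std::cout << cache_class_for(datetime_col.cmp_type)    << "\n"; // Item_cache_temporal (fix)
  }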
Points and lines should disappear if we get a negative D.
To make this work properly inside a GEOMETRYCOLLECTION,
we add the empty operation there.
bug #986977 Assertion `!cur_p->event' failed in Gcalc_scan_iterator::arrange_event(int, int).
The double->internal coord conversion produced -0 (minus zero) on some data.
That minus zero produces invalid comparison results when compared against plus zero.
So we fixed gcalc_set_double() to avoid it.
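A small standalone demonstration of the minus-zero pitfall (generic code, not the actual gcalc_set_double()): -0.0 and +0.0 compare equal with ==, but their sign bits differ, so code that looks at the raw representation can treat them as different values; normalizing to +0.0 sidesteps that.

  #include <cmath>
  #include <cstdio>

  // Normalize a coordinate: -0.0 compares equal to 0.0, so this turns it into +0.0.
  static double set_coord(double d) {
    if (d == 0.0)
      d = 0.0;
    return d;
  }

  int main() {
    double a = -0.0, b = 0.0;
    std::printf("a == b: %d  signbit(a): %d  signbit(b): %d\n",
                (int)(a == b), (int)std::signbit(a), (int)std::signbit(b));
    std::printf("signbit after normalization: %d\n", (int)std::signbit(set_coord(a)));
    return 0;
  }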
per-file comments:
mysql-test/r/gis-precise.result
result updated.
mysql-test/t/gis-precise.test
tests for #977021 and #986977 added.
sql/gcalc_slicescan.cc
bug #986977. gcalc_set_double() fixed so it does not produce minus zero.
sql/item_geofunc.cc
bug #977021. Add the NOOP for the disappearing features.
Analysis:
The reason for the wrong result is the interaction between constant
optimization (in this case 1-row table) and subquery optimization.
- First the outer query is optimized, and 'make_join_statistics' finds that
table t2 has one row, reads that row, and marks the whole table as constant.
This also means that all fields of t2 are constant.
- Next, we optimize the subquery at the end of the outer 'make_join_statistics'.
The field 'f2' is considered constant, with value '3'. The subquery predicate
is rewritten as the constant TRUE.
- The outer query execution detects early that the whole query result is empty
and calls 'return_zero_rows'. Since the query is with implicit grouping, we
have to produce one row with special values for the aggregates (depending on
each aggregate function), and NULL values for all non-aggregate fields. This
function calls 'no_rows_in_result' to set each aggregate function to the
default value when it aggregates over an empty result, and then calls
'send_data', which in turn evaluates each Item in the SELECT list.
- When evaluation reaches the subquery predicate, it executes the subquery
with field 'f2' having a constant value '3', and the subquery produces the
incorrect result '7'.
Solution:
Implement Item::no_rows_in_result for all subquery predicates. In order to
make this work, it is also necessary to make all val_* methods of all subquery
predicates respect the Item_subselect::forced_const flag. Otherwise subqueries
are executed anyway and override the default value set by no_rows_in_result
with whatever result the subquery evaluation produces.
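A simplified sketch of the pattern described above (a toy class, not the actual Item_subselect code): once no_rows_in_result() has forced the predicate to its default constant, every val_* accessor must honor that flag instead of re-running the subquery.

  #include <iostream>

  class SubqueryPredicate {
    bool forced_const = false;    // plays the role of Item_subselect::forced_const
    long long forced_value = 0;

    long long execute_subquery() {
      return 7;                   // placeholder for a real subquery execution
    }

  public:
    void no_rows_in_result() {    // called when the outer result is known to be empty
      forced_const = true;
      forced_value = 0;           // default value over an empty result
    }

    long long val_int() {
      if (forced_const)           // without this check the subquery would still run
        return forced_value;      // and overwrite the default with, e.g., 7
      return execute_subquery();
    }
  };

  int main() {
    SubqueryPredicate pred;
    pred.no_rows_in_result();
    std::cout << pred.val_int() << "\n";  // prints 0, not 7
  }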
As part of the derived tables redesign, the values of VIEW_ALGORITHM_MERGE and VIEW_ALGORITHM_TMPTABLE have changed (former values 1 and 2 respectively, new values 5 and 9 respectively).
This led to the problem that views created with version 5.2 or earlier would not work in all situations (e.g. "SHOW CREATE VIEW") or with mysqldump.
The fix is to restore backward compatibility for the .frm file: convert algorithm={1,2} in the .frm to {5,9} when reading the .frm from disk, and store backward compatible values when writing the .frm to disk.
Also allow correct processing of "invalid" .frms created with MariaDB 5.3/5.5 GA releases (where the algorithm stored in memory matched the one stored in the .frm).
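A hypothetical helper showing the mapping described above (the FRM_* constant names are made up for the sketch; only the 1/2 and 5/9 values come from this message): legacy on-disk values are promoted on read, written back on store, and already-new values from 5.3/5.5 GA .frms pass through unchanged.

  #include <cstdint>
  #include <cstdio>

  enum : std::uint8_t {
    FRM_ALGORITHM_MERGE_OLD    = 1,  // values written by 5.2 and earlier
    FRM_ALGORITHM_TMPTABLE_OLD = 2,
    VIEW_ALGORITHM_MERGE       = 5,  // new in-memory values
    VIEW_ALGORITHM_TMPTABLE    = 9,
  };

  // On read: old .frm values 1/2 become 5/9; anything else passes through,
  // which also covers "invalid" .frms that already contain 5 or 9.
  static std::uint8_t algorithm_from_frm(std::uint8_t v) {
    if (v == FRM_ALGORITHM_MERGE_OLD)    return VIEW_ALGORITHM_MERGE;
    if (v == FRM_ALGORITHM_TMPTABLE_OLD) return VIEW_ALGORITHM_TMPTABLE;
    return v;
  }

  // On write: store backward compatible values so older servers can still read them.
  static std::uint8_t algorithm_to_frm(std::uint8_t v) {
    if (v == VIEW_ALGORITHM_MERGE)    return FRM_ALGORITHM_MERGE_OLD;
    if (v == VIEW_ALGORITHM_TMPTABLE) return FRM_ALGORITHM_TMPTABLE_OLD;
    return v;
  }

  int main() {
    std::printf("%u %u\n", (unsigned)algorithm_from_frm(1), (unsigned)algorithm_from_frm(9)); // 5 9
    std::printf("%u %u\n", (unsigned)algorithm_to_frm(5),   (unsigned)algorithm_to_frm(9));   // 1 2
    return 0;
  }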
Fixed an incorrect type cast which made changes to all fields of the materialized table (except the very first) incorrect.
Saved the list of the view/derived table's used items after expanding '*'.
Part#1: make EXPLAIN's plan match the one used by actual execution:
Item_subselect::used_tables() should return the same value irrespective
of whether we're running an EXPLAIN or a SELECT.
The previous patch for the bug (which erroneously identified the bug as
bug 972973 in its comment) was incorrect.
It turned out that the code that triggered the abort complaint reported
for the bug was not needed at all.
When the function free_tmp_table deletes the handler object for
a temporary table, the field TABLE::file for this table should be
set to NULL. Otherwise an assertion failure may occur.
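A generic illustration of the rule (plain stand-in structs, not the server's TABLE and handler classes): after the handler object is deleted, the owning pointer must be reset so later checks or assertions don't operate on a stale address.

  #include <cassert>

  struct Handler { };          // stands in for a storage engine handler object

  struct Table {
    Handler *file = nullptr;   // analogous to TABLE::file
  };

  static void free_tmp_table(Table *t) {
    delete t->file;
    t->file = nullptr;         // the fix: without this, t->file dangles
  }

  int main() {
    Table t;
    t.file = new Handler;
    free_tmp_table(&t);
    assert(t.file == nullptr); // any later "is the handler gone?" check is now safe
    return 0;
  }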
This bug happened because the function find_field_in_view formed
autogenerated names of view columns with no possibility of rolling
them back. In some situations this could cause memory misuse reported
by valgrind, or even crashes.
An RB-tree index in a MEMORY table fails if it grows over 4G.
That happened because the old_allocated variable in hp_rb_write_key()
had the uint type. Changed it to 'size_t' to be the same as
'rb_tree.allocated'.
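A standalone demonstration of the 32-bit wraparound (generic code, not hp_rb_write_key() itself), assuming a 64-bit size_t: once the allocated size passes 4G, a uint counter wraps while a size_t counter keeps the real value.

  #include <cstddef>
  #include <cstdio>

  int main() {
    unsigned int old_allocated_uint = 4294967000u;  // just below UINT_MAX (4294967295)
    std::size_t  old_allocated_size = 4294967000u;

    old_allocated_uint += 1000;  // crosses 4G and wraps around to a small value
    old_allocated_size += 1000;  // keeps the true count on a 64-bit build

    std::printf("uint:   %u\n",  old_allocated_uint);  // 704
    std::printf("size_t: %zu\n", old_allocated_size);  // 4294968000
    return 0;
  }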
per-file comments:
storage/heap/hp_write.c
MDEV-80 Memory engine table full at much less than max_heap_table_size with btree index.
uint->size_t for the 'old_allocated'.
When a view/derived table is converted from merged to materialized, the
items from the used_item lists are substituted with items referring to
the fields of the result of the materialization. The problem appeared
with queries employing natural joins. Since the resolution of a natural
join was performed only once, the used_item list formed at the second
execution of the query lacked the references to the fields that were
used only in the equality predicates generated for the natural join.
The main problem was a bug in CSV where it provided wrong statistics (it claimed the table was empty when it wasn't).
I also fixed wrong freeing of blobs in the CSV handler. (Any call to handler::read_first_row() on a CSV table with blobs would fail.)
mysql-test/r/csv.result:
Added new test case
mysql-test/r/partition_innodb.result:
Updated test results after fixing bug with impossible partitions and const tables
mysql-test/t/csv.test:
Added new test case
sql/sql_select.cc:
Cleaned up code for handling of partitions.
Also fixed a bug where we didn't treat a table with impossible partitions as a const table.
storage/csv/ha_tina.cc:
Allocate blobroot once.