Invalid memory read if HANDLER ... READ NEXT is executed
after a failed HANDLER ... READ FIRST (e.g. on an empty table).
The problem was that we attempted to perform READ NEXT even
though a failed READ FIRST leaves no pivot position to continue from.
With this fix, READ NEXT after a failed READ FIRST behaves
like READ FIRST.
This bug affects MyISAM tables only.
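Minimal sketch of the failing sequence (table name and column are illustrative):
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;   -- empty MyISAM table
  HANDLER t1 OPEN;
  HANDLER t1 READ FIRST;                   -- finds no rows, so no pivot is set
  HANDLER t1 READ NEXT;                    -- used to read invalid memory; now acts like READ FIRST
  HANDLER t1 CLOSE;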
Spatial indexes were not checking for the out-of-record condition in
the handler next command when the previous command didn't find
rows.
Fixed by making the rtree index check for the end-of-rows condition
before re-using the key from the previous search.
Also fixed another crash that occurred if the tree had changed since the last search.
Added a test case for the other error.
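Hedged sketch of the kind of sequence involved (names are illustrative, not the actual test case):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY sp (g)) ENGINE=MyISAM;
  HANDLER t1 OPEN;
  HANDLER t1 READ sp FIRST;   -- finds no rows in the empty rtree
  HANDLER t1 READ sp NEXT;    -- previously re-used the stale key without checking end-of-rows
  HANDLER t1 CLOSE;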
Problem: using a spatial index for "non-spatial" queries
(ones that don't contain MBRXXX() functions) may lead to a failed assertion.
Fix: don't use spatial indexes in such cases.
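Hedged example of such a "non-spatial" query (table and column names are assumptions):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
  -- no MBRxxx() function here, so the spatial index must not be chosen
  SELECT COUNT(*) FROM t1 WHERE g = GeomFromText('POINT(1 1)');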
The Point() and Linestring() functions created a WKB representation of an
object instead of a real geometry object.
That produced bugs when their results were inserted into tables.
GIS tests fixed accordingly.
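For illustration, the kind of statement affected (table name is an assumption; before the fix the value had to be wrapped in GeomFromWKB()):
  CREATE TABLE t1 (g GEOMETRY NOT NULL) ENGINE=MyISAM;
  -- previously Point() returned WKB, so this needed GeomFromWKB(Point(1,1))
  INSERT INTO t1 VALUES (Point(1,1));
  INSERT INTO t1 VALUES (LineString(Point(0,0), Point(1,1)));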
per-file messages:
mysql-test/r/gis-rtree.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/r/gis.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/t/gis-rtree.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - GeomFromWKB invocations removed
mysql-test/t/gis.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - AsWKB invocations added
sql/item_geofunc.cc
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
Point() and similar functions changed to create a proper geometry object
Fixed the usage of spatial data (and Point in particular) with
non-spatial indexes.
Several problems:
- The length of the Point class was not updated to include the
  spatial reference system identifier. Fixed by increasing it by 4
  bytes.
- The storage length of the spatial columns was not accounting for
  the length that is prepended to it. Fixed by treating the
  spatial data columns as blobs (and thus increasing the storage
  length).
- When creating the key image for comparison in an index read, the
  wrong key image was created (the one needed for an r-tree search,
  not the one for a b-tree/other search). Fixed by treating the
  spatial data columns as blobs (and creating the correct kind of
  image based on the index type).
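Hedged sketch of the affected usage: a POINT column covered by an ordinary B-tree key (the prefix length is illustrative):
  CREATE TABLE t1 (p POINT NOT NULL, KEY (p(25))) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (Point(1,1)), (Point(2,2));
  -- an index read over the non-spatial key must build a b-tree key image,
  -- not an r-tree one, and must account for the 4 SRID bytes
  SELECT COUNT(*) FROM t1 WHERE p = Point(1,1);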
As the result of DOUBLE calculations can be bigger
than the DBL_MAX constant we use in the code, we shouldn't use this constant
as the biggest possible value.
In particular, the rtree_pick_key function set 'min_area= DBL_MAX', relying
on any rtree_area_increase result being smaller so that we return a valid
key. However, in the rtree_area_increase function we calculate the area
of the rectangle, so the result can be 'inf' if the rectangle is
big enough, and 'inf' is bigger than DBL_MAX.
The code of rtree_pick_key was modified so that we always return a valid key.
It was syntactically correct to define
spatial keys over parts of columns (e.g.
ALTER TABLE t1 ADD x GEOMETRY NOT NULL,
ADD SPATIAL KEY (x(32))).
This may lead to undefined results and/or
interpretation.
Fixed by not allowing partial column
specification in a SPATIAL index definition.
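For illustration (column names are illustrative), the prefix form is now rejected while the full-column form remains valid:
  -- now rejected: partial column specification in a SPATIAL index
  ALTER TABLE t1 ADD x GEOMETRY NOT NULL, ADD SPATIAL KEY (x(32));
  -- still accepted: the full column
  ALTER TABLE t1 ADD y GEOMETRY NOT NULL, ADD SPATIAL KEY (y);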
A different set of conditions was used to verify
the validity of index definitions over a GEOMETRY
column in ALTER TABLE and CREATE TABLE.
The difference was in how the validity of the sub-key
notation is checked.
Fixed by extending the CREATE TABLE condition to
support the cases allowed in ALTER TABLE.
Made SHOW CREATE TABLE not display spatial
indexes using the sub-key notation.
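Hedged example of the intended output (the prefix length 32 is shown only to illustrate what is no longer displayed):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  -- SHOW CREATE TABLE t1 should now print "SPATIAL KEY (`g`)"
  -- rather than "SPATIAL KEY (`g`(32))"
  SHOW CREATE TABLE t1;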
incorrect key file for table
In certain cases it could happen that deleting a row could
corrupt an RTREE index.
According to Guttman's algorithm, page underflow is handled
by storing the page in a list for later re-insertion. The
keys from the stored pages have to be inserted into the
remaining pages of the same level of the tree. Hence the
level number is stored in the re-insertion list together
with the page.
In the MySQL RTree implementation the level count starts at zero
at the root page and increases for levels further down the tree.
If during re-insertion of the keys the tree height grows, all
level numbers become invalid. The remaining keys will be
inserted at the wrong level.
The fix is to increment the level numbers stored in the
reinsert list after a split of the root block during reinsertion.
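Hedged sketch of the workload shape that exercises this path (not the actual test case; the point is to grow the tree beyond one level and then delete enough rows to cause page underflow and re-insertion):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  -- ... insert enough rows that the rtree grows several levels ...
  -- then delete a range so that pages underflow and their keys are re-inserted
  DELETE FROM t1 WHERE MBRContains(GeomFromText('POLYGON((0 0,0 100,100 100,100 0,0 0))'), g);
  CHECK TABLE t1;   -- must not report "incorrect key file for table"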
spatial index
While executing OPTIMIZE TABLE on MyISAM tables the server re-creates the
index file(s) in order to sort them physically by the key. This cannot be
done for R-tree indexes as it makes no sense.
The server was not checking the type of the index and was accessing an
R-tree index as if it were a B-tree.
Fixed by not sorting the index file if it contains an R-tree index.
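For illustration, the statement that triggered the problem on a table with a spatial index (names are illustrative):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)')), (GeomFromText('POINT(2 2)'));
  OPTIMIZE TABLE t1;   -- previously tried to sort the R-tree index as if it were a B-tree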
RTree keys are quite different from BTree keys and need specific
parameters to be set by the optimizer in order to work.
Sometimes the optimizer doesn't set them properly.
Here we decided just to add code to check that the parameters
are correct. We hope to fix the optimizer eventually.
When a default of '' was specified for TEXT/BLOB columns, the specification
was silently ignored. This is presumably to be nice to applications (or
people) who generate their column definitions in a not-very-clever fashion.
For clarity, doing this now results in a warning, or an error in strict
mode.
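For illustration (exact warning/error text not quoted here):
  CREATE TABLE t1 (b BLOB DEFAULT '');   -- previously silently ignored; now produces a warning
  SET sql_mode = 'STRICT_ALL_TABLES';
  CREATE TABLE t2 (t TEXT DEFAULT '');   -- now fails with an error in strict mode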
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
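Hedged repro shape (a table with exactly one row, so only the matching key is present):
  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
  CHECK TABLE t1;   -- used to complain although the spatial index is intact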
mysqldump / SHOW CREATE TABLE will show the NEXT available value for
the PK, rather than the *first* one that was available (the one named in
the original CREATE TABLE ... AUTO_INCREMENT = ... statement).
This should produce correct and robust behaviour for the obvious use
cases -- when no data were inserted, then we'll produce a statement
featuring the same value the original CREATE TABLE had; if we dump
with values, INSERTing the values on the target machine should set the
correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT =
... to do that). Lastly, just the CREATE statement (with no data) for
a table that saw inserts would still result in a table that new values
could safely be inserted into.
There seems to be no robust way however to see whether the next_ID
field is > 1 because it was set to something else with CREATE TABLE
... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column
in the table (but no initial value was set with AUTO_INCREMENT = ...)
and then one or more rows were INSERTed, counting up next_ID. This
means that in both cases, we'll generate an AUTO_INCREMENT =
... clause in SHOW CREATE TABLE / mysqldump. As we also show info on,
say, charsets even if the user did not explicitly give that info in
their own CREATE TABLE, this shouldn't be an issue.
As per above, the next_ID will be affected by any INSERTs that have
taken place, though. This /should/ result in correct and robust
behaviour, but it may look non-intuitive to some users if they CREATE
TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have
SHOW CREATE TABLE give them a different value (say, CREATE TABLE
... AUTO_INCREMENT = 1006), so the docs should possibly feature a
caveat to that effect.
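For illustration, using the numbers from the paragraph above (column name is an assumption):
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) AUTO_INCREMENT = 1000;
  INSERT INTO t1 VALUES (NULL), (NULL), (NULL), (NULL), (NULL), (NULL);   -- six inserts, ids 1000..1005
  -- SHOW CREATE TABLE t1 now reports AUTO_INCREMENT=1006, the next available value
  SHOW CREATE TABLE t1;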
It's not very intuitive the way it works now (with the fix), but it's
*correct*. We're not storing the original value anyway; if we wanted
that, we'd have to change the on-disk representation?
If we do dump/load cycles with empty DBs, nothing will change. This
changeset includes an additional test case that proves that tables
with rows will create the same next_ID for AUTO_INCREMENT = ... across
dump/restore cycles.
Confirmed by support as a likely solution for the client's problem.