If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
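A minimal sketch of the affected construct (tables and data are
hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- MAX(t1.a) has the outer reference t1.a, but it appears in the
  -- WHERE clause of the very query it would be aggregated in, so it
  -- cannot be aggregated there.  MySQL evaluates it like MAX(const);
  -- in ANSI mode this now raises an error instead.
  SELECT t1.a FROM t1 WHERE t1.a = (SELECT MAX(t1.a) FROM t2);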
- mysqldump executes a SHOW CREATE VIEW statement to generate the text
that it outputs. When a function name was retrieved, its database
name was unconditionally prepended. This change causes the function's
database name to be prepended only when it was used to define the
function.
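An illustrative pair of view definitions (names are made up; the
exact SHOW CREATE VIEW output format may differ):

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE VIEW v1 AS SELECT f1();          -- f1 not qualified
  CREATE VIEW v2 AS SELECT test.f1();     -- f1 qualified with test
  -- After the change, SHOW CREATE VIEW keeps the call unqualified
  -- for v1 (roughly: select `f1`() ...) and qualified for v2
  -- (roughly: select `test`.`f1`() ...), matching each definition.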
When creating a temporary table, the column type
of a string expression is decided based on its length:
- if its length is under 512, it is stored as either
VARCHAR or CHAR;
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field()
that, when > 0, forces creation of a VARCHAR if the
maximum blob length is under convert_blob_length.
However, it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a VARCHAR column.
While performing that check for expressions in
create_tmp_field_from_item(), the maximum length of the blob
was used instead. This caused BLOB columns to be created in the
HEAP temporary table used by GROUP_CONCAT (where BLOBs must not
be created in the temporary table because of the constant
convert_blob_length that is passed to create_tmp_field()).
Since these BLOB columns are not expected in that place,
wrong results were returned.
Fixed by checking that the value of the flag variable is
within the limits that fit into VARCHAR, instead of the maximum
length of the BLOB column.
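A sketch of the kind of query that was affected (schema is
hypothetical):

  CREATE TABLE t1 (a CHAR(255) CHARACTER SET utf8, b INT);
  INSERT INTO t1 VALUES (REPEAT('x', 250), 1), (REPEAT('y', 250), 1);
  -- GROUP_CONCAT materializes its argument in a HEAP temporary
  -- table; the wrong length check could turn the string column into
  -- a BLOB there, which HEAP cannot hold, leading to wrong results:
  SELECT b, GROUP_CONCAT(DISTINCT a) FROM t1 GROUP BY b;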
- 1.84e+19 converted to unsigned BIGINT should be
18400000000000000000 < 18446744073709551615.
- The test will still fail on Windows; that failure is extracted
into a new bug report.
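For illustration, assuming the fixed behaviour:

  -- 18400000000000000000 is below the BIGINT UNSIGNED maximum of
  -- 18446744073709551615, so the cast must not clamp to the maximum:
  SELECT CAST(1.84e+19 AS UNSIGNED);  -- expected: 18400000000000000000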
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, this made unique indexes
of this type unusable because many distinct
values were rejected as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters, so their key values became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
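A minimal repro sketch (hypothetical table):

  CREATE TABLE t1 (
    e ENUM('a','b','c') CHARACTER SET utf8,
    UNIQUE KEY USING BTREE (e)
  ) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('a');
  -- Before the fix this insert could be wrongly rejected as a
  -- duplicate: both values mapped to the same blank-filled key.
  INSERT INTO t1 VALUES ('b');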
Problem: GROUP BY on empty ucs2 strings crashed server.
Reason: sometimes mi_unique_hash() is executed with
ptr=null and length=0, which means "empty string".
The branch of code handling the UCS2 character set
was not safe against ptr=null and fell into an
endless loop even if length=0, because of pointer
arithmetic overflow.
Fix: adding special check for length=0 to avoid pointer arithmetic
overflow.
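A repro sketch (schema is hypothetical; grouping on a TEXT column is
one plausible path into mi_unique_hash()):

  CREATE TABLE t1 (a TEXT CHARACTER SET ucs2);
  INSERT INTO t1 VALUES ('');
  -- Grouping on a TEXT column goes through a MyISAM temporary table
  -- whose uniqueness check calls mi_unique_hash(); the empty string
  -- arrived there as ptr=NULL, length=0:
  SELECT a FROM t1 GROUP BY a;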
to 0 causes wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset(), which
is called when storing a NULL value into
the column.
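A minimal repro sketch (hypothetical table):

  CREATE TABLE t1 (p POINT);
  -- Storing NULL previously left the blob length bytes in the row
  -- unreset, so _mi_calc_blob_length() could later read a huge
  -- length and attempt an oversized allocation:
  INSERT INTO t1 VALUES (NULL);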
The fix is to rewrite the MBR::overlaps() function to compute the dimension of both
arguments and the dimension of the intersection, and to test that all three dimensions
are the same (e.g., all are Polygons).
Add tests for all MBR* functions for various combinations of shapes, lines and points.
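For illustration, the intended semantics (geometries are made up):

  SET @a = GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))');
  SET @b = GeomFromText('POLYGON((1 1, 1 3, 3 3, 3 1, 1 1))');
  SET @c = GeomFromText('POLYGON((2 0, 2 2, 4 2, 4 0, 2 0))');
  -- @a and @b intersect in a 2-dimensional region, like both MBRs:
  SELECT MBROverlaps(@a, @b);   -- expect 1
  -- @a and @c touch along an edge only; the intersection is
  -- 1-dimensional while both MBRs are 2-dimensional, so no overlap:
  SELECT MBROverlaps(@a, @c);   -- expect 0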
context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps being returned
by used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in an outer context when creation of a temporary table is
needed to execute the inner subquery.
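A sketch of the affected construct (schema is hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- MAX(t1.a) is aggregated in the outer select but used inside the
  -- subquery as an argument of GROUP_CONCAT, so the subquery's
  -- temporary table needs a field for it:
  SELECT (SELECT GROUP_CONCAT(MAX(t1.a)) FROM t2) FROM t1;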
Apply the following InnoDB snapshots:
innodb-5.0-ss1319
innodb-5.0-ss1331
innodb-5.0-ss1333
innodb-5.0-ss1341
Fixes:
- Bug #21409: Incorrect result returned when in READ-COMMITTED with query_cache ON
At low transaction isolation levels we let each consistent read set
its own snapshot.
- Bug #23666: strange Innodb_row_lock_time_% values in show status; also millisecs wrong
On Windows ut_usectime returns secs and usecs relative to the UNIX
epoch (which is Jan 1, 1970).
- Bug #25494: LATEST DEADLOCK INFORMATION is not always cleared
lock_deadlock_recursive(): When the search depth or length is exceeded,
rewind lock_latest_err_file and display the two transactions at the
point of aborting the search.
- Bug #25927: Foreign key with ON DELETE SET NULL on NOT NULL can crash server
Prevent ALTER TABLE ... MODIFY ... NOT NULL on columns for which
there is a foreign key constraint ON ... SET NULL.
- Bug #26835: Repeatable corruption of utf8-enabled tables inside InnoDB
The bug could be reproduced as follows (see the SQL sketch after
this list):
Define a table so that the first column of the clustered index is
a VARCHAR or a UTF-8 CHAR in a collation where sequences of bytes
of differing length are considered equivalent.
Insert and delete a record. Before the delete-marked record is
purged, insert another record whose first column is of different
length but equivalent to the first record. Under certain conditions,
the insertion can be incorrectly performed as update-in-place.
Likewise, an operation that could be done as update-in-place could
unnecessarily be performed as delete and insert; that would not
cause corruption, merely degraded performance.
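A rough SQL sketch of the Bug #26835 reproduction steps (names are
made up; utf8_general_ci collates 'a' and the two-byte 'á' as equal):

  CREATE TABLE t1 (c CHAR(2) CHARACTER SET utf8 COLLATE utf8_general_ci,
                   PRIMARY KEY (c)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES ('a');
  DELETE FROM t1;
  -- Before the delete-marked record is purged, insert an equivalent
  -- key of a different byte length; this could wrongly be applied as
  -- an update-in-place:
  INSERT INTO t1 VALUES ('á');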
another user.
When the DEFINER clause isn't specified in the ALTER VIEW statement, it is loaded
from the view definition. If that definer differs from the current user, an
error is thrown, because only a super user can set another user as the definer.
Now, if the DEFINER clause is omitted in the ALTER VIEW statement, the
definer from the original view is used without this check.
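An illustrative sequence (users are hypothetical):

  -- As a super user:
  CREATE DEFINER = 'root'@'localhost' VIEW v1 AS SELECT 1;
  -- Later, as a non-super user with sufficient privileges on v1,
  -- alter the view without a DEFINER clause.  Previously this failed
  -- because the retained definer differs from the current user; now
  -- the original definer is kept without the check:
  ALTER VIEW v1 AS SELECT 2;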
When searching an index, MySQL was not explicitly
suppressing warnings. If the context
happened to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
the data the key is compared to were
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
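An illustrative sketch (tables are hypothetical):

  CREATE TABLE t1 (a INT PRIMARY KEY);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (1);
  -- Probing the index on t1.a converts the string '1x' to a number;
  -- in an INSERT .. SELECT context the truncation warning from that
  -- internal conversion leaked out to the client before the fix:
  INSERT INTO t2 SELECT a FROM t1 WHERE a = '1x';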