precision > 0 && scale <= precision'.
The sign of the result item of the IFNULL function was not
updated, and the maximal length of this result was calculated
improperly. The correct algorithm was copied from the IF
function implementation.
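A minimal illustrative sketch (not the actual server code) of how a
two-argument function such as IFNULL can derive the result's sign and
maximal length from both arguments, the way the IF implementation already
did; the struct and field names here are invented:

    #include <algorithm>
    #include <cstdint>

    // Hypothetical, simplified per-argument metadata.
    struct ArgMeta {
      uint32_t max_length;   // maximal printable length of the value
      uint32_t decimals;     // digits after the decimal point
      bool     is_unsigned;  // sign of the numeric value
    };

    // The result is unsigned only if both arguments are, and its maximal
    // length must cover the widest integer part plus the widest fraction.
    ArgMeta aggregate_ifnull_meta(const ArgMeta &a, const ArgMeta &b) {
      ArgMeta r;
      r.decimals    = std::max(a.decimals, b.decimals);
      r.is_unsigned = a.is_unsigned && b.is_unsigned;
      uint32_t int_a = a.max_length - a.decimals;
      uint32_t int_b = b.max_length - b.decimals;
      r.max_length = std::max(int_a, int_b) + r.decimals + (r.decimals ? 1 : 0);
      return r;
    }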
Fixed the usage of spatial data (and Point specifically) with
non-spatial indexes.
Several problems:
- The length of the Point class was not updated to include the
spatial reference system identifier. Fixed by increasing it by
4 bytes.
- The storage length of spatial columns was not accounting for
the length that is prepended to the data (a size sketch follows
this list). Fixed by treating spatial data columns as blobs
(and thus increasing the storage length).
- When creating the key image for comparison in an index read,
the wrong key image was created (the one needed for an R-tree
search, not the one for a B-tree or other search). Fixed by
treating spatial data columns as blobs (and creating the correct
kind of image based on the index type).
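The storage-length arithmetic can be illustrated with a small sketch
(the sizes below are assumptions for illustration, not taken from the
server sources): a spatial value is stored like a blob, i.e. a length
prefix followed by the SRID and the geometry bytes, so every size
computation has to include both the prefix and the 4-byte identifier.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical layout of a stored Point value.
    struct PointStorage {
      static const uint32_t length_prefix = 4;   // blob-style length bytes
      static const uint32_t srid_bytes    = 4;   // spatial reference system id
      static const uint32_t wkb_point     = 21;  // WKB header + two doubles
    };

    // The bytes reserved for the column must cover the prefix as well,
    // otherwise key and record buffers end up too small.
    uint32_t point_storage_length() {
      return PointStorage::length_prefix + PointStorage::srid_bytes +
             PointStorage::wkb_point;
    }

    int main() {
      std::printf("bytes per Point column: %u\n",
                  (unsigned)point_storage_length());
      return 0;
    }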
Bug#29816 Syntactically wrong query fails with misleading error message
The core problem is that an SQL-invoked function name can be a <schema
qualified routine name> that contains no <schema name>, but the MySQL
parser insists that all stored routines (functions, procedures and
triggers) must have a <schema name>, which is not true for functions.
This problem is especially visible when trying to create a function,
or when a query contains a syntax error after a function call (in the
same query): both will fail with a "No database selected" message if
the session is not attached to a particular schema, although the first
one should succeed and the second should fail with a "syntax error"
message.
Part of the fix is to revamp the stored routine name handling so that
a schema name may be omitted for functions -- this means that the
internal function name representation may not contain a dot, which
indicates that the function doesn't have a schema name. The other part
is to place the schema checks after the type (function, trigger or
procedure) of the routine is known.
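A hedged sketch of the resolution order described above (the helper names
and types are invented for illustration): the schema part is filled in only
after the routine type is known, and a missing schema is an error only for
routines that actually require one.

    #include <optional>
    #include <stdexcept>
    #include <string>

    enum class RoutineType { FUNCTION, PROCEDURE, TRIGGER };

    // Parsed name: "db.f" keeps both parts, a bare "f" keeps only the
    // routine part, so no dot is ever stored for a schema-less function.
    struct RoutineName {
      std::optional<std::string> schema;
      std::string name;
    };

    std::string resolve_schema(const RoutineName &n, RoutineType type,
                               const std::optional<std::string> &current_db) {
      if (n.schema) return *n.schema;
      // Functions may legally omit the schema; defer to later lookup rules.
      if (type == RoutineType::FUNCTION) return "";
      if (!current_db) throw std::runtime_error("No database selected");
      return *current_db;
    }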
Even though it can return NULL, the MAKETIME function did not have
the nullability property set, causing a failed assertion (designed to
catch exactly this case).
Fixed by setting the nullability property of MAKETIME().
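A minimal sketch, assuming an Item-like interface with a nullability flag
(the names below are illustrative, not the server classes): any function
that can return NULL has to advertise it, otherwise NULL propagation trips
an assertion like the one mentioned above.

    #include <cassert>

    struct ItemFunc {
      bool maybe_null = false;  // must be true if the item can return NULL
      virtual bool is_null_result() const { return false; }
      virtual ~ItemFunc() = default;
    };

    struct ItemFuncMaketime : ItemFunc {
      // Sketch of the fix: declare that MAKETIME can return NULL
      // (e.g. for out-of-range arguments).
      ItemFuncMaketime() { maybe_null = true; }
      bool is_null_result() const override { return true; }
    };

    int main() {
      ItemFuncMaketime f;
      // The check that used to fail: a NULL result from an item that
      // claims it can never be NULL.
      assert(!f.is_null_result() || f.maybe_null);
      return 0;
    }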
bitmap_is_set(table->write_set, fiel
Problem: when creating a temporary table we allocate the group buffer (if
needed) followed by the table bitmaps (see create_tmp_table()). Reserving
less memory for the group buffer than is actually needed (used) for value
retrieval may lead to it overlapping with the bitmaps that follow it in the
memory pool, which in turn leads to unpredictable consequences.
As we sometimes use Item->max_length to calculate the group buffer size,
it must be set to a proper value. In this particular case
Item_datetime_typecast::max_length is too small.
Another problem is that we use max_length to calculate the group buffer
key length for items represented as DATE/TIME fields, which is superfluous.
Fix: set Item_datetime_typecast::max_length properly, and
accurately calculate the group buffer key length for items
represented as DATE/TIME fields in the buffer.
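The overlap can be pictured with a small hedged sketch (all sizes here
are invented): if max_length under-reports the bytes a group key actually
needs, writing the key spills into the bitmap area placed right after the
group buffer.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    int main() {
      const size_t reported_key_len = 8;   // what a too-small max_length yields
      const size_t actual_key_len   = 19;  // bytes 'YYYY-MM-DD HH:MM:SS' needs
      const size_t bitmap_len       = 16;

      // One pool: group buffer immediately followed by the table bitmaps.
      std::vector<uint8_t> pool(reported_key_len + bitmap_len, 0);
      uint8_t *group_buffer = pool.data();
      uint8_t *bitmaps      = pool.data() + reported_key_len;

      // Writing the real key value overruns the reserved area and
      // clobbers the bitmaps that follow it.
      std::memset(group_buffer, 0xAA, actual_key_len);
      return bitmaps[0] == 0xAA;  // 1: the bitmaps were overwritten
    }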
Item_sum_distinct::setup(THD*): Assertion
There was an assertion to detect a bug in the ROLLUP
implementation. However, the assertion does not hold
when the function is used in a subquery context with
non-cacheable statements.
Fixed by turning the asserted condition into an accepted case
(just as is done for the other aggregate functions).
when a divisor is less than 1 and its fractional part is very long.
For example:
1 % .123456789123456789123456789123456789123456789123456789123456789123456789123456789;
A stack buffer overflow has been fixed in the do_div_mod function.
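Why the long fraction matters can be sketched with assumed limits (the
word size and buffer length below are illustrative, not the server's real
constants): the intermediate remainder needs enough decimal "words" to
hold the divisor's fractional digits, so a fixed stack buffer has to be
checked before writing.

    #include <cstdio>

    const int DIG_PER_WORD = 9;  // decimal digits stored per word (assumed)
    const int MAX_WORDS    = 9;  // hypothetical fixed buffer on the stack

    // Words needed for a value with the given integer/fractional digits.
    bool fits_in_buffer(int int_digits, int frac_digits) {
      int words = (int_digits + DIG_PER_WORD - 1) / DIG_PER_WORD +
                  (frac_digits + DIG_PER_WORD - 1) / DIG_PER_WORD;
      return words <= MAX_WORDS;
    }

    int main() {
      // The divisor in the example above has 81 fractional digits.
      std::printf("fits: %s\n", fits_in_buffer(1, 81) ? "yes" : "no");
      return 0;
    }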
Problem: when creating an rb-tree key we store the length (2 bytes) before
the actual data for varchar key parts. This was missed for NULL key parts,
where we set the NULL byte and skip the rest.
Fix: take the length of varchar key parts into account for NULLs as well.
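A hedged sketch of the key layout (byte sizes assumed for illustration):
the slot reserved for a varchar key part must be the same size whether the
value is NULL or not, otherwise the key parts that follow it are read from
the wrong offset.

    #include <cstdint>

    const uint32_t NULL_BYTE  = 1;  // NULL indicator
    const uint32_t LENGTH_LEN = 2;  // stored length prefix for varchar

    // Space one varchar key part occupies inside a tree key; the length
    // prefix and data area are reserved even when the value is NULL.
    uint32_t key_part_store_length(uint32_t declared_len) {
      return NULL_BYTE + LENGTH_LEN + declared_len;
    }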
As the result of DOUBLE calculations can be bigger
than the DBL_MAX constant we use in the code, we shouldn't use this
constant as the biggest possible value.
In particular, the rtree_pick_key function set 'min_area= DBL_MAX',
relying on any rtree_area_increase result being less than that so that
a valid key is returned. However, in the rtree_area_increase function
we calculate the area of the rectangle, so the result can be 'inf' if
the rectangle is big enough, which is bigger than DBL_MAX.
The code of rtree_pick_key was modified so that we always return a
valid key.
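A minimal sketch of the defensive pattern (names are hypothetical, not the
actual R-tree code): instead of seeding the minimum with DBL_MAX and
trusting every candidate to be smaller, the first candidate is always
accepted, so an infinite area increase can no longer leave the function
without a chosen key.

    #include <vector>

    struct Key { double area_increase; int id; };

    // Choose the key with the smallest area increase. Seeding the minimum
    // with DBL_MAX and requiring 'increase < min' picks nothing when every
    // increase is 'inf'; taking the first candidate unconditionally
    // guarantees a valid key is returned.
    int pick_key(const std::vector<Key> &keys) {
      int best_id = -1;
      double best_increase = 0.0;
      for (const Key &k : keys) {
        if (best_id == -1 || k.area_increase < best_increase) {
          best_id = k.id;
          best_increase = k.area_increase;
        }
      }
      return best_id;  // -1 only when there are no keys at all
    }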
This is actually a fix for the patch for Bug#27354. The problem with
the patch was that Item_func_sp::used_tables() was updated, but
Item_func_sp::const_item() was not. So, for Item_func_sp, we had
the following inconsistency:
- used_tables() returned RAND_TABLE, which means that the item
can produce "random" results;
- but const_item() returned TRUE, which means that the item is
a constant one.
The fix is to change Item_func_sp::const_item() behaviour: it must
return TRUE (an item is a constant one) only if a stored function
is deterministic and each of its arguments (if any) is a constant
item.
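A hedged sketch of the consistency rule (simplified stand-ins, not the
server's Item classes): the two predicates must agree, so const_item() is
true only for a deterministic stored function whose arguments are all
constant.

    #include <vector>

    struct Arg { bool is_const; };

    struct StoredFunctionCall {
      bool deterministic;        // declared DETERMINISTIC by the routine
      std::vector<Arg> args;

      // Matches the "random results" reported by used_tables(): a
      // non-deterministic function must never claim to be constant.
      bool const_item() const {
        if (!deterministic) return false;
        for (const Arg &a : args)
          if (!a.is_const) return false;
        return true;
      }
    };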
Bug#28878: InnoDB tables with UTF8 character set and indexes cause wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill in the remaining space in the field's buffer. This
is what Field_string::store() does.
Fixed Field_string::get_key_image() to do the same.
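A sketch of the padding rule with assumed buffer handling (this is not the
actual Field_string code): the key image of a CHAR column is space-filled
to the full image length, the same way the store path fills the record
buffer.

    #include <algorithm>
    #include <cstring>

    // Build a fixed-length key image for a CHAR(n) value; trailing bytes
    // are filled with 0x20 so index comparisons see the same padding that
    // the row storage uses.
    void make_char_key_image(char *image, size_t image_len,
                             const char *value, size_t value_len) {
      size_t copy_len = std::min(image_len, value_len);
      std::memcpy(image, value, copy_len);
      if (copy_len < image_len)
        std::memset(image + copy_len, 0x20, image_len - copy_len);
    }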
This is for Bug#29446 "Specifying a myisam_sort_buffer > 4GB on 64 bit
machines not possible". Support for myisam_sort_buffer_size > 4 GB on
64-bit Windows will be looked at later in 5.2.
The function str_to_date has a field that says whether it is invoked with
constant arguments. But this member was not initialized, causing the
function to think that it could use a cache of the format type when said
cache was in fact not initialized.
Fixed by initializing the field to false.
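A minimal sketch of the initialization fix (the member name is invented):
the flag saying that the format argument is constant, and hence that a
cached parsed format may be reused, must start out false.

    // Simplified cache guard for a parsed format string, illustration only.
    struct StrToDateFunc {
      bool const_format_cached;  // was left uninitialized before the fix

      // Start from "no cache" so the format is parsed at least once
      // before the cached value is trusted.
      StrToDateFunc() : const_format_cached(false) {}
    };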