When creating a temporary table, the concrete column type
of a string expression is decided based on its length:
- if its length is under 512 it is stored as either
VARCHAR or CHAR.
- otherwise it is stored as a BLOB.
There is a parameter (convert_blob_length) to create_tmp_field
that, when > 0, forces creation of a VARCHAR if the
maximum blob length is under convert_blob_length.
However, it must be verified that convert_blob_length
(settable through an SQL option in some cases) does not exceed
the maximum length that can be stored in a VARCHAR column.
While performing that check for expressions in
create_tmp_field_from_item, the maximum length of the blob was
used instead. This caused BLOB columns to be created in the
HEAP temporary table used by GROUP_CONCAT (where BLOBs must not
be created, because of the constant convert_blob_length that is
passed to create_tmp_field()).
And since these BLOB columns are not expected in that place,
we get wrong results.
Fixed by checking that the value of convert_blob_length is
within the limits that fit into a VARCHAR, instead of checking
the maximum length of the blob column.
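A minimal sketch of the corrected decision, in the spirit of the fix
(choose_tmp_column_type, MAX_VARCHAR_LENGTH and the 65535 limit are
illustrative only, not MySQL's actual create_tmp_field_from_item code):
the point is that convert_blob_length itself, not the expression's maximum
blob length, must be checked against the VARCHAR limit.

  #include <cstdio>

  // Illustrative VARCHAR limit; the real limit is an assumption here.
  static const unsigned MAX_VARCHAR_LENGTH = 65535;

  enum Tmp_column_type { TMP_CHAR_OR_VARCHAR, TMP_VARCHAR_FORCED, TMP_BLOB };

  static Tmp_column_type choose_tmp_column_type(unsigned expr_max_length,
                                                unsigned convert_blob_length)
  {
    if (expr_max_length < 512)
      return TMP_CHAR_OR_VARCHAR;            // short strings: CHAR/VARCHAR as before

    // Force a VARCHAR only if the caller asked for it (convert_blob_length > 0),
    // the expression fits, and convert_blob_length itself fits in a VARCHAR.
    // The bug was comparing expr_max_length against the VARCHAR limit here.
    if (convert_blob_length > 0 &&
        expr_max_length <= convert_blob_length &&
        convert_blob_length <= MAX_VARCHAR_LENGTH)
      return TMP_VARCHAR_FORCED;

    return TMP_BLOB;
  }

  int main()
  {
    // With a constant convert_blob_length, as GROUP_CONCAT passes, the HEAP
    // temporary table gets a VARCHAR instead of an unsupported BLOB column.
    printf("%d\n", choose_tmp_column_type(1024, 1024) == TMP_VARCHAR_FORCED);
    return 0;
  }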
- Added PARAM::alloced_sel_args, which counts the number of SEL_ARG
objects created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and stop cloning as soon as
MAX_SEL_ARGS SEL_ARG objects have been created by cloning.
- Added comments about space complexity of SEL_ARG-graph
representation.
- Defined Sql_alloc::operator new() as throw() so that the C++ compiler
handles NULL return values.
(there is no testcase as there is no portable way to set a limit on the
amount of memory that a process can allocate)
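An illustrative sketch of both ideas together (Node, MAX_OBJECTS and
alloced_objects are made-up names standing in for SEL_ARG, MAX_SEL_ARGS and
PARAM::alloced_sel_args; this is not MySQL's Sql_alloc): a class-specific
operator new declared throw() returns NULL once an allocation budget is
exhausted, and because the allocation function is non-throwing the compiler
skips the constructor call, so the new-expression evaluates to NULL and the
caller can stop cloning.

  #include <cstddef>
  #include <cstdio>
  #include <cstdlib>

  static const unsigned MAX_OBJECTS = 16000;  // hypothetical cap (cf. MAX_SEL_ARGS)
  static unsigned alloced_objects = 0;        // hypothetical counter (cf. alloced_sel_args)

  struct Node
  {
    int key;
    Node *left, *right;
    Node(int k) : key(k), left(NULL), right(NULL) {}

    // Declared throw(): on failure we return NULL instead of throwing, and the
    // generated code then skips the constructor, so `new Node(...)` yields NULL.
    void *operator new(std::size_t size) throw()
    {
      if (alloced_objects >= MAX_OBJECTS)
        return NULL;                          // budget exceeded: refuse to clone further
      void *p = std::malloc(size);
      if (p != NULL)
        ++alloced_objects;
      return p;
    }
    void operator delete(void *ptr) throw() { std::free(ptr); }
  };

  int main()
  {
    Node *n = new Node(42);                   // may be NULL if the budget is exhausted
    if (n == NULL)
      printf("allocation refused, stopping cloning\n");
    else
      delete n;
    return 0;
  }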
Geometry fields have a string result type and a
special subclass to cater for the differences
between them and the base class (just like
DATE/TIME).
When creating temporary tables for results of
functions that return results of type GEOMETRY
we must construct fields of the derived class
instead of the base class.
Fixed by creating a GEOMETRY field (Field_geom)
instead of a generic BLOB (Field_blob) in temp
tables for the results of GIS functions that
have GEOMETRY return type (Item_geometry_func).
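A simplified sketch of the pattern (the class names mirror Field_blob and
Field_geom, but the code is illustrative, not MySQL's actual field
hierarchy): when materializing a GIS function result into a temporary table,
construct the derived geometry field rather than the generic blob base class.

  #include <cstdio>

  struct Field_blob
  {
    virtual ~Field_blob() {}
    virtual const char *type_name() const { return "BLOB"; }
  };

  struct Field_geom : Field_blob              // derived class, keeps geometry semantics
  {
    const char *type_name() const { return "GEOMETRY"; }
  };

  // Hypothetical stand-in for an Item that may be an Item_geometry_func.
  struct Item
  {
    bool is_geometry_func;                    // true for GIS functions returning GEOMETRY
  };

  static Field_blob *make_tmp_table_field(const Item &item)
  {
    if (item.is_geometry_func)
      return new Field_geom();                // GEOMETRY result: use the derived class
    return new Field_blob();                  // ordinary long string: generic blob
  }

  int main()
  {
    Item gis = { true };
    Field_blob *f = make_tmp_table_field(gis);
    printf("tmp table field type: %s\n", f->type_name());
    delete f;
    return 0;
  }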
If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
Not resetting the data pointer to 0 causes a wrong (large)
length to be read from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset(),
which is called when storing a NULL value into
the column.
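A simplified sketch of the shape of the fix (the record layout and sizes are
illustrative, not MyISAM's actual blob packing): the derived
Field_geom::reset() delegates to Field_blob::reset() so the length/pointer
bytes are zeroed before a NULL is stored, instead of leaving stale bytes that
are later read back as a huge blob length.

  #include <cstring>
  #include <cstdio>

  struct Field_blob
  {
    unsigned char rec[12];                    // packed length + data pointer; sizes illustrative

    virtual ~Field_blob() {}
    virtual int reset()
    {
      memset(rec, 0, sizeof(rec));            // zero the length and the pointer
      return 0;
    }
  };

  struct Field_geom : Field_blob
  {
    int reset()
    {
      // The crucial part of the fix: delegate to the base class so the
      // blob bytes in the record are cleared when a NULL is stored.
      return Field_blob::reset();
    }
  };

  int main()
  {
    Field_geom g;
    memset(g.rec, 0xAA, sizeof(g.rec));       // simulate stale garbage in the record
    g.reset();                                // storing NULL: record bytes are now zeroed
    printf("first byte after reset: %u\n", (unsigned) g.rec[0]);
    return 0;
  }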
Memory corruption was observed for a query
from func_group.test after the patch for bug #27229 had been applied.
The memory corruption happened because in some rare cases the function
count_field_types underestimated the number of elements in
the array param->items_to_copy.
The problem occurred when a set function aggregated in an outer
context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps returned by
used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in outer context when creation of a temporary table is
needed to execute the inner subquery.
A user without the SUPER privilege could not alter a view that had
originally been defined by another user.
When the DEFINER clause isn't specified in the ALTER statement, it is loaded
from the view definition. If that definer differs from the current user, an
error is thrown, because only a super-user can set other users as definers.
Now, if the DEFINER clause is omitted in the ALTER VIEW statement, the
definer from the original view is used without this check.
When converting the data that a key is compared against for use
in an index search, MySQL was not explicitly
suppressing warnings. If the context
happens to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
that data are
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
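A small illustrative sketch of the approach (Session and
Suppress_conversion_warnings are made-up stand-ins, not MySQL's actual THD
machinery): save the warning setting, disable it around the conversion of the
comparison value to the key's type, and restore it afterwards.

  #include <cstdio>

  struct Session
  {
    bool report_conversion_warnings;
  };

  // RAII guard: warnings are silenced for the duration of its scope.
  class Suppress_conversion_warnings
  {
    Session &s;
    bool saved;
  public:
    explicit Suppress_conversion_warnings(Session &sess)
      : s(sess), saved(sess.report_conversion_warnings)
    {
      s.report_conversion_warnings = false;   // silence warnings for this scope
    }
    ~Suppress_conversion_warnings()
    {
      s.report_conversion_warnings = saved;   // restore the caller's setting
    }
  };

  static long convert_to_key_type(Session &s, const char *value)
  {
    // A lossy conversion would normally produce a truncation warning; inside
    // the guard it stays silent because the converted copy is only used to
    // probe the index.
    long out = 0;
    for (const char *p = value; *p >= '0' && *p <= '9'; ++p)
      out = out * 10 + (*p - '0');
    if (s.report_conversion_warnings)
      printf("Warning: truncated incorrect value '%s'\n", value);
    return out;
  }

  int main()
  {
    Session s = { true };                     // e.g. INSERT .. SELECT enables warnings
    {
      Suppress_conversion_warnings guard(s);
      convert_to_key_type(s, "123abc");       // no warning reaches the client
    }
    convert_to_key_type(s, "123abc");         // outside the guard: warning reported
    return 0;
  }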
The problem in this bug is in how we create temporary tables. When
temporary tables are created for unions, some
inference is carried out regarding the type of the column.
Whenever this column type is inferred to be REAL (i.e. FLOAT or
DOUBLE), MySQL will always try to maintain exact precision, and
if that is not possible (there are hardware limits, since FLOAT
and DOUBLE are stored as approximate values) it will switch to
using approximate values. The problem here is that at this point
the information about the number of significant digits is not
available. Furthermore, the number of significant digits should
be increased for the AVG function; however, this was not properly
handled. There are 4 parts to the problem:
#1: DOUBLE and FLOAT fields don't display their proper display
lengths in max_display_length(). This is hard-coded as 53 for
DOUBLE and 24 for FLOAT. Now changed to instead return the
field_length.
#2: Type holders for temporary tables do not preserve the
max_length of the Item's from which they are created, and is
instead reverted to the 53 and 24 from above. This causes
*all* fields to get non-fixed significant digits.
#3: AVG function does not update max_length (display length)
when updating number of decimals.
#4: The function that switches to non-fixed number of
significant digits should use DBL_DIG + 2 or FLT_DIG + 2 as
cut-off values (since fixed precision does not use the 'e'
notation).
Of these points, #1 is the controversial one, but this
change is preferred and has been cleared with Monty. The
function causes quite a few unit tests to blow up and they had
to be changed, but each one is annotated and motivated. We
frequently see the magical 53 and 24 give way to more relevant
numbers.
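An illustrative sketch of point #4 only (Real_result and NOT_FIXED_DEC here
are simplified stand-ins, not MySQL's actual Item_type_holder logic): keep a
fixed number of decimals while the display length stays within DBL_DIG + 2 /
FLT_DIG + 2, and switch to a non-fixed number of significant digits beyond
that cut-off.

  #include <cfloat>
  #include <cstdio>

  static const unsigned NOT_FIXED_DEC = 31;   // illustrative marker for "non-fixed"

  struct Real_result
  {
    bool     is_float;    // FLOAT vs DOUBLE
    unsigned max_length;  // display length
    unsigned decimals;    // digits after the decimal point, or NOT_FIXED_DEC
  };

  static void adjust_precision(Real_result &r)
  {
    // DBL_DIG / FLT_DIG significant digits plus sign and decimal point;
    // anything wider cannot be shown with fixed precision (no 'e' notation).
    unsigned cutoff = (r.is_float ? FLT_DIG : DBL_DIG) + 2;
    if (r.decimals != NOT_FIXED_DEC && r.max_length > cutoff)
      r.decimals = NOT_FIXED_DEC;             // fall back to non-fixed significant digits
  }

  int main()
  {
    Real_result avg_of_doubles = { false, DBL_DIG + 5, 4 };
    adjust_precision(avg_of_doubles);
    printf("decimals after adjustment: %u\n", avg_of_doubles.decimals);
    return 0;
  }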
Fix for the cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method,
as is usually done in other classes.
Should be fixed more radically in 5.0.
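A simplified sketch of what implementing val() "as it is usually done in
other classes" amounts to (the Datetime struct and datetime_to_double() are
illustrative, not MySQL's Item_datetime_typecast code): in a numeric context
the DATETIME value is packed into a number of the form YYYYMMDDHHMMSS.

  #include <cstdio>

  struct Datetime
  {
    unsigned year, month, day, hour, minute, second;
  };

  static double datetime_to_double(const Datetime &t)
  {
    double date = t.year * 10000.0 + t.month * 100.0 + t.day;
    double time = t.hour * 10000.0 + t.minute * 100.0 + t.second;
    return date * 1000000.0 + time;           // YYYYMMDDHHMMSS as a double
  }

  int main()
  {
    Datetime t = { 2007, 6, 15, 12, 30, 45 };
    printf("%.0f\n", datetime_to_double(t));  // prints 20070615123045
    return 0;
  }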