conditions.
When allocating memory for KEY_FIELD/SARGABLE_PARAM structures, the
function update_ref_and_keys did not take into account the fact that
a single row equality could be replaced by several simple equalities.
Fixed by adjusting the counter cond_count accordingly for each subquery
when substituting simple equalities for a row equality.
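A minimal sketch of the corrected accounting, with a hypothetical
RowEquality stand-in (the real structures live in update_ref_and_keys):
a row equality of arity N turns into N simple equalities, so the
counter must grow by N - 1 for each one:

  #include <cstddef>

  struct RowEquality { std::size_t arity; };  /* stub for (a,b,...) = (x,y,...) */

  std::size_t adjusted_cond_count(std::size_t cond_count,
                                  const RowEquality *eqs, std::size_t n_eqs) {
    for (std::size_t i = 0; i < n_eqs; i++)
      cond_count += eqs[i].arity - 1;  /* one slot was already counted */
    return cond_count;
  }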
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls are
more than it makes sense to do in typical cases).
- Don't call sel_arg->test_use_count() if we've already allocated
more than MAX_SEL_ARGS elements; see the sketch below. The test would
succeed but would take too much time for the test suite (and not
provide much value).
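The guard could look like the following sketch; SEL_ARG is a stub
here, and the MAX_SEL_ARGS value is an assumption (the note above says
even 16K calls is too many, so the real constant may be lower):

  static const unsigned MAX_SEL_ARGS = 16000;  /* assumed cap */

  struct SEL_ARG {
    void test_use_count() { /* walks the whole SEL_ARG graph */ }
  };

  void verify_tree(SEL_ARG *root, unsigned alloced_sel_args) {
    if (alloced_sel_args < MAX_SEL_ARGS)
      root->test_use_count();  /* affordable only below the cap */
    /* above the cap the check would still pass, just too slowly */
  }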
NO_AUTO_VALUE_ON_ZERO mode.
In the NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null
variable is used to indicate that a non-NULL value was specified by the user
for an auto_increment column. When an INSERT ... ON DUPLICATE KEY UPDATE
statement updates the auto_increment field, this variable is set to true and
stays unchanged for the next insert operation. This sometimes makes the next
inserted row wrongly have 0 as the value of the auto_increment field.
Now the fill_record() function resets the table->auto_increment_field_not_null
variable before filling the record.
The table->auto_increment_field_not_null variable is also reset by the
open_table() function, in case we missed some auto_increment_field_not_null
handling bug.
Now the table->auto_increment_field_not_null is reset at the end of the
mysql_load() function.
Reset the table->auto_increment_field_not_null variable after each
write_row() call in the copy_data_between_tables() function.
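A minimal sketch of the reset pattern, with TABLE reduced to a stub
(the real fill_record() takes the field and value lists as well):

  struct TABLE { bool auto_increment_field_not_null; };  /* stub */

  void fill_record(TABLE *table /*, fields, values, ... */) {
    /* Reset first, so a flag left over from a previous
       INSERT ... ON DUPLICATE KEY UPDATE cannot leak into this row. */
    table->auto_increment_field_not_null = false;
    /* ... then copy the user-supplied values into the record,
       setting the flag only when the auto_increment column gets
       an explicit non-NULL value ... */
  }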
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any more cloning
if we've already created MAX_SEL_ARGS SEL_ARG objects in cloning
(see the sketch below).
- Added comments about space complexity of SEL_ARG-graph
representation.
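A sketch of the budgeted cloning, under the assumption that the
counter lives in the per-query PARAM and that callers treat a null
result as "no usable range" (stub types, not the server's):

  struct SEL_ARG { /* range-tree node (stub) */ };
  struct PARAM { unsigned alloced_sel_args = 0; };  /* per-query counter */

  static const unsigned MAX_SEL_ARGS = 16000;  /* assumed cap */

  SEL_ARG *clone_sel_arg(PARAM *param, const SEL_ARG *node) {
    if (++param->alloced_sel_args > MAX_SEL_ARGS)
      return nullptr;           /* shortcut: give up on precise ranges */
    return new SEL_ARG(*node);  /* plus recursive cloning of children */
  }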
Problem: SOUNDEX returned an invalid string for international
characters in multi-byte character sets.
For example: for a Chinese/Japanese 3-byte long character
_utf8 0xE99885 it took only the very first byte 0xE9,
put it into the output string and then appended three
DIGIT ZERO characters, so the result was 0xE9303030 - which
is an invalid utf8 string.
Fix: make SOUNDEX() multi-byte aware and put only complete
characters into the result, thus returning only valid strings.
This patch also makes SOUNDEX() compatible with UCS2.
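The real fix goes through the server's character set API; the
standalone sketch below only illustrates the principle for UTF-8
(utf8_char_len and append_full_char are illustrative names):

  #include <cstddef>
  #include <string>

  /* Byte length of the UTF-8 character whose lead byte is b. */
  static std::size_t utf8_char_len(unsigned char b) {
    if (b < 0x80)           return 1;  /* ASCII */
    if ((b & 0xE0) == 0xC0) return 2;
    if ((b & 0xF0) == 0xE0) return 3;  /* e.g. 0xE9 0x98 0x85 */
    if ((b & 0xF8) == 0xF0) return 4;
    return 1;                          /* invalid lead byte */
  }

  /* Append one complete character - never a bare lead byte. */
  void append_full_char(std::string &out, const char *src) {
    out.append(src, utf8_char_len((unsigned char)*src));
  }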
Geometry fields have a string result type and a
special subclass to cater for the differences
between them and the base class (just like
DATE/TIME).
When creating temporary tables for results of
functions that return results of type GEOMETRY
we must construct fields of the derived class
instead of the base class.
Fixed by creating a GEOMETRY field (Field_geom)
instead of a generic BLOB (Field_blob) in temp
tables for the results of GIS functions that
have GEOMETRY return type (Item_geometry_func).
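A stub-level sketch of the dispatch (the real decision is made in
the temp-table field creation code and keys off Item_geometry_func):

  struct Field { virtual ~Field() = default; };
  struct Field_blob : Field {};
  struct Field_geom : Field_blob {};  /* GEOMETRY-aware subclass */

  enum ItemKind { STRING_ITEM, GEOMETRY_FUNC_ITEM };

  Field *make_tmp_field(ItemKind kind) {
    if (kind == GEOMETRY_FUNC_ITEM)
      return new Field_geom();  /* keeps GEOMETRY semantics */
    return new Field_blob();    /* old behaviour: a generic blob */
  }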
If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
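The shape of the check might look like this sketch; the flag
plumbing and error reporting are assumptions (the real code raises
the error through the server's my_error() machinery):

  #include <cstdio>

  /* Returns true on error, following the server convention. */
  bool check_sum_func(bool aggregatable_in_outer, bool ansi_mode) {
    if (!aggregatable_in_outer && ansi_mode) {
      fprintf(stderr, "Invalid use of group function\n");  /* my_error() stand-in */
      return true;
    }
    return false;  /* non-ANSI: fall back to the s(const) interpretation */
  }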
When creating a temporary table the concise column type
of a string expression is decided based on its length:
- if its length is under 512 it is stored as either
varchar or char.
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field
that, when > 0, forces creation of a varchar if the
max blob length is under convert_blob_length.
However it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a varchar column.
While performing that check for expressions in
create_tmp_field_from_item, the max length of the blob was
used instead. This causes blob columns to be created in the
heap temp table used by GROUP_CONCAT (where blobs must not
be created in the temp table because of the constant
convert_blob_length that is passed to create_tmp_field()).
And since these blob columns are not expected in that place
we get wrong results.
Fixed by checking that the value of the flag variable is
within the limits that fit into VARCHAR, instead of the max
length of the blob column.
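A minimal sketch of the corrected bound check; the predicate shape
is an assumption, while MAX_FIELD_VARCHARLENGTH mirrors the server's
varchar capacity in bytes:

  static const unsigned MAX_FIELD_VARCHARLENGTH = 65535;  /* bytes */

  bool can_use_varchar(unsigned convert_blob_length) {
    /* The old code compared the expression's max blob length here. */
    return convert_blob_length > 0 &&
           convert_blob_length <= MAX_FIELD_VARCHARLENGTH;
  }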
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because it rejected many different
values as duplicates.
The problem was that multi-byte character detection was tried
on the internal numeric column value. Many values were not
identified as characters, and their key values became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
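In stub form the fix amounts to the assignment below; HA_KEYSEG and
the charset descriptor are reduced to stand-ins:

  struct CHARSET_INFO { /* charset descriptor (stub) */ };
  static CHARSET_INFO my_charset_bin;  /* pseudo-binary charset (stub) */

  struct HA_KEYSEG { CHARSET_INFO *charset; };  /* key segment (stub) */

  /* ENUM/SET keys store the internal number, so compare raw bytes;
     never attempt multi-byte character detection on them. */
  void fix_enum_set_keyseg(HA_KEYSEG *seg) {
    seg->charset = &my_charset_bin;
  }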
Problem: GROUP BY on empty ucs2 strings crashed the server.
Reason: sometimes mi_unique_hash() is executed with
ptr=NULL and length=0, which means "empty string".
The branch of code handling the UCS2 character set
was not safe against ptr=NULL and fell into an
endless loop, even if length=0, because of pointer
arithmetic overflow.
Fix: added a special check for length=0 to avoid the pointer
arithmetic overflow.
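A simplified sketch of the guard: with ptr=NULL and length=0, an
inclusive end pointer computed as something like ptr + length - 1
wraps around and the scan never terminates, so the empty case is
handled before any pointer arithmetic (the hash step here is
illustrative, not MyISAM's):

  typedef unsigned char uchar;

  unsigned long ucs2_hash(const uchar *ptr, unsigned length) {
    unsigned long h = 0;
    if (length == 0)  /* the added check: empty string */
      return h;
    for (const uchar *end = ptr + length; ptr < end; ptr++)
      h = h * 31 + *ptr;  /* simplified mixing step */
    return h;
  }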
to 0 causes a wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset()
that is called when storing a NULL value into
the column.
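In outline the fix is a one-line override; the layout below is a
simplified stand-in for the real blob header (length bytes plus a
data pointer):

  #include <cstring>

  struct Field_blob {
    unsigned char pack[4 + sizeof(void *)];  /* length bytes + pointer */
    virtual int reset() { memset(pack, 0, sizeof(pack)); return 0; }
    virtual ~Field_blob() = default;
  };

  struct Field_geom : Field_blob {
    /* The fix: zero the blob header too, so _mi_calc_blob_length()
       cannot read a stale, huge length for a NULL value. */
    int reset() override { return Field_blob::reset(); }
  };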
context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps returned by
used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in an outer context when creation of a temporary table is
needed to execute the inner subquery.
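A stub-level sketch of the bookkeeping; the OUTER_REF_TABLE_BIT
value and the class shapes are assumptions, kept only to show why a
null depended_from yields a wrong map:

  typedef unsigned long long table_map;
  static const table_map OUTER_REF_TABLE_BIT = 1ULL << 63;  /* assumed */

  struct SELECT_LEX { /* one query block (stub) */ };

  struct Item_outer_ref {
    SELECT_LEX *depended_from = nullptr;
    table_map used_tables() const {
      /* A reference aggregated in an outer select must report
         itself as outer-correlated. */
      return depended_from ? OUTER_REF_TABLE_BIT : 0;
    }
  };

  void bind_outer_ref(Item_outer_ref *ref, SELECT_LEX *aggr_select) {
    ref->depended_from = aggr_select;
  }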