- Change 'print_buffer_to_nt_event_log' to overwrite the tail of the
  string when the buffer is not long enough to hold the trailing CR/LF
  (see the sketch after this list)
- Make functions static
- Remove the "hack" intended to force 'print_buffer_to_nt_event_log'
never to use "new"
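A minimal sketch of that overwrite; append_crlf and the 3-byte reserve are
illustrative, not the actual server code:

  #include <cstddef>

  /*
    Hypothetical sketch: append the trailing CR/LF to the message,
    overwriting the end of the string when the buffer cannot grow.
    Assumes buffsize >= 3.
  */
  static void append_crlf(char *buff, size_t buffsize, size_t length)
  {
    if (length + 3 > buffsize)        /* no room for "\r\n\0": overwrite */
      length= buffsize - 3;
    buff[length]= '\r';
    buff[length + 1]= '\n';
    buff[length + 2]= '\0';
  }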
- Interpret the pointer passed to 'mysql_options' for
MYSQL_OPT_SSL_VERIFY_SERVER_CERT as a my_bool
- In 5.1 the mysql_options signature will be changed to take
  a 'void*' in order to further emphasize the need for a pointer
  to the correct type (see the sketch below)
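A sketch of the corrected client-side call; the wrapper function is
illustrative:

  #include <mysql.h>

  /*
    The client library reads the argument through the pointer as a
    my_bool, so passing a pointer to any other type (e.g. int) is
    not portable.
  */
  static void enable_cert_verification(MYSQL *mysql)
  {
    my_bool verify= 1;
    mysql_options(mysql, MYSQL_OPT_SSL_VERIFY_SERVER_CERT, (char*) &verify);
  }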
If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
- mysqldump executes a SHOW CREATE VIEW statement to generate the text
  that it outputs. When the function name is retrieved, its database
  name is unconditionally prepended. This change causes the function's
  database name to be prepended only when it was used to define the
  function.
When creating a temporary table, the concise column type
of a string expression is decided based on its length:
- if its length is under 512, it is stored as either
  varchar or char.
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field
that, when greater than 0, forces creation of a varchar if the
max blob length is under convert_blob_length.
However, it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a varchar column.
While performing that check for expressions in
create_tmp_field_from_item, the max length of the blob was
used instead. This caused blob columns to be created in the
heap temp table used by GROUP_CONCAT (where blobs must not
be created in the temp table because of the constant
convert_blob_length that is passed to create_tmp_field()).
And since these blob columns are not expected in that place,
we get wrong results.
Fixed by checking that the value of the flag variable is
within the limits that fit into VARCHAR instead of the max
length of the blob column.
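A condensed sketch of the corrected check; BLOB_THRESHOLD, VARCHAR_MAX and
the function are illustrative stand-ins, not the server's identifiers:

  /*
    Illustrative only: decide whether a string expression becomes a
    BLOB or a CHAR/VARCHAR column in the tmp table.
  */
  static const unsigned BLOB_THRESHOLD= 512;  /* under 512 => CHAR/VARCHAR */
  static const unsigned VARCHAR_MAX= 65535;   /* max bytes a VARCHAR holds */

  static bool store_as_blob(unsigned max_length, unsigned convert_blob_length)
  {
    if (max_length < BLOB_THRESHOLD)
      return false;                           /* short: CHAR/VARCHAR */
    /*
      The fix: the value checked against the VARCHAR limit is
      convert_blob_length itself, not the expression's max (blob) length.
    */
    if (convert_blob_length > 0 &&
        max_length < convert_blob_length &&
        convert_blob_length <= VARCHAR_MAX)
      return false;                           /* forced to VARCHAR */
    return true;                              /* stored as a BLOB  */
  }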
- 1.84e+19 converted to unsigned bigint should be
  18400000000000000000 < 18446744073709551615
  (see the sketch after this list).
- The test will still fail on Windows, and has been extracted
  into a new bug report.
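The expected arithmetic can be checked with a minimal standalone program
(1.84e19 is exactly representable as a double):

  #include <cstdio>

  int main()
  {
    double d= 1.84e19;                /* exactly representable as a double */
    /*
      In range: 1.84e19 < 2^64 - 1.  Note that some older Windows
      toolchains miscompile double-to-unsigned conversions above 2^63.
    */
    unsigned long long u= (unsigned long long) d;
    std::printf("%llu\n", u);         /* prints 18400000000000000000 */
    return 0;
  }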
Bug#27047[partial]: INFORMATION_SCHEMA table cannot have BIGINT fields
No INFORMATION_SCHEMA table has ever needed floating-point data
before. Transforming all floating-point data to a string and back to a
number causes a real data problem on Windows, where the libc may
pad the exponent with more leading zeroes than we expect and the
significant digits are truncated away.
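A minimal sketch of the libc mismatch; the value is arbitrary:

  #include <cstdio>

  int main()
  {
    char buf[32];
    /*
      glibc formats this as "1.5e+15"; older Microsoft C runtimes emit
      "1.5e+015".  A parser that assumes a fixed overall width can then
      truncate away significant digits.
    */
    std::snprintf(buf, sizeof(buf), "%g", 1.5e15);
    std::puts(buf);
    return 0;
  }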
This also makes interpreting an unimplemented type as a string
a fatal error in debug builds, so that we will catch problems when
we try to use those types in new I_S tables.
- Fixing utf8_general_cs according to recent changes.
- Compiling utf8_general_cs in pentium-debug-max configuration
to avoid these problems in the future.
BTREE index on a MEMORY table's ENUM or SET column
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because they rejected many distinct
values as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters. Their key value became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
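The essence of the patch as an illustrative sketch; CharsetInfo, KeySeg
and the field types are stand-ins for the server's declarations:

  /*
    ENUM and SET columns store internal numeric values, so their key
    segments must compare as binary, not through the column's utf8
    charset.
  */
  struct CharsetInfo { const char *name; };
  static CharsetInfo charset_bin= { "binary" };       /* pseudo binary */

  enum FieldType { FIELD_ENUM, FIELD_SET, FIELD_STRING };

  struct KeySeg { const CharsetInfo *charset; };

  static void set_keyseg_charset(KeySeg *seg, FieldType type,
                                 const CharsetInfo *column_charset)
  {
    if (type == FIELD_ENUM || type == FIELD_SET)
      seg->charset= &charset_bin;     /* compare the raw numeric bytes */
    else
      seg->charset= column_charset;   /* e.g. utf8 for string columns  */
  }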
Problem: GROUP BY on empty ucs2 strings crashed the server.
Reason: sometimes mi_unique_hash() is executed with
ptr=null and length=0, which means "empty string".
The branch of code handling the UCS2 character set
was not safe against ptr=null and fell into an
endless loop even if length=0 because of pointer
arithmetic overflow.
Fix: add a special check for length=0 to avoid the pointer
arithmetic overflow.
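A simplified sketch of the guard; ucs2_hash and its hashing step are
stand-ins for mi_unique_hash() internals:

  #include <cstddef>

  typedef unsigned char uchar;

  /*
    With ptr == NULL and length == 0, pointer arithmetic such as
    "end= ptr + length - 1" wraps below the start pointer, so a
    "pos < end" loop never terminates.  Checking for the empty string
    first avoids the overflow entirely.  Length is in bytes and even
    (UCS2 characters are two bytes wide).
  */
  static unsigned long ucs2_hash(const uchar *ptr, size_t length)
  {
    unsigned long nr= 1;
    if (length == 0)                  /* the fix: empty string, done */
      return nr;
    for (const uchar *end= ptr + length; ptr < end; ptr+= 2)
      nr= nr * 31 + (unsigned long) ((ptr[0] << 8) | ptr[1]);
    return nr;
  }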
Failing to reset the blob data pointer and length
to 0 causes a wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset(),
which is called when storing a NULL value into
the column.
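A simplified sketch of the fix; the real classes carry more state:

  /*
    The geometry field must reset the inherited blob bytes too,
    otherwise a stale length is later read by _mi_calc_blob_length().
  */
  class Field_blob
  {
  public:
    virtual ~Field_blob() {}
    virtual int reset()
    {
      /* zero the length bytes and the blob pointer stored in the row */
      return 0;
    }
  };

  class Field_geom : public Field_blob
  {
  public:
    int reset()
    {
      /* the fix: delegate to the base class instead of leaving stale bytes */
      return Field_blob::reset();
    }
  };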
Pass the ME_NOREFRESH flag to the error handler in my_malloc() and
_mymalloc() in case of memory allocation failure, so that the failure
gets logged to the error log.
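A sketch of the call shape inside the allocator (error-handling flags
abbreviated from the real code; my_malloc_sketch is illustrative):

  #include <my_global.h>
  #include <my_sys.h>
  #include <mysys_err.h>

  /*
    ME_NOREFRESH tells the handler to write the message to the error
    log instead of only sending it to the client.
  */
  static void *my_malloc_sketch(size_t size, myf my_flags)
  {
    void *point= malloc(size);
    if (!point && (my_flags & MY_WME))
      my_error(EE_OUTOFMEMORY, MYF(ME_BELL | ME_NOREFRESH), size);
    return point;
  }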