Time values were compared as strings. This led to a wrong comparison
result when one of the compared values was under 100 hours and the other
was over 100 hours.
Now, when the Arg_comparator::set_cmp_func function sees that both items to
compare are of the TIME type, it sets the comparator to the
Arg_comparator::compare_e_int or the Arg_comparator::compare_int_unsigned
function.
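A minimal sketch of the kind of comparison that went wrong (table and column names are hypothetical):
  CREATE TABLE t1 (a TIME, b TIME);
  INSERT INTO t1 VALUES ('99:00:00', '100:00:00');
  SELECT a < b FROM t1;
  -- the string comparison '99:00:00' < '100:00:00' gave 0; the integer comparator gives 1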
The special `zero' enum value was coerced to the normal
empty string enum value during a field-to-field copy.
This bug affected CREATE ... SELECT statements and aggregating SELECT
statements that group by an enum field.
This bug also caused unnecessary warnings during
the execution of CREATE ... SELECT statements:
Warning 1265 Data truncated for column...
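A sketch of an affected statement, assuming an ENUM column that also has an empty string as a regular member (all names are hypothetical):
  CREATE TABLE t1 (e ENUM('', 'a'));
  INSERT IGNORE INTO t1 VALUES ('x');   -- stores the special `zero' enum value
  CREATE TABLE t2 SELECT e FROM t1;     -- the copy coerced the zero value into the regular '' member and warned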
By default MyISAM overwrites the .MYD and .MYI files if no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then cannot be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When it is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file already exists for a MyISAM table.
The option is off by default (resulting in compatible behavior).
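Example (the table name is hypothetical):
  SET SESSION keep_files_on_create = ON;
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  -- fails with an error if an orphaned t1.MYD or t1.MYI file already exists in the database directory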
The Ctrl-C handler in mysql closes the console while ReadConsole()
waits for console input.
But the main thread was detecting that ReadConsole() hadn't read
anything and proceeded as if there were data in the buffer.
Fixed to handle this error condition correctly.
No test case added as the test relies on Ctrl-C sent to the client
from its console.
When the my_strnncollsp_simple function compares two strings and one is a prefix
of the other, it compares the characters in the remainder of the longer key
with the space character to find out whether the longer key is greater or less.
But the sort order of the collation wasn't used in this comparison. This could
lead to a wrong comparison result, a wrongly built index, or a wrong order of the
result set of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order to compare
the characters in the remainder of the longer key with the space character.
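A hedged sketch of the kind of query that could be affected (the table is hypothetical; the effect only shows up for simple 8-bit collations whose sort order differs from the code-point order):
  CREATE TABLE t1 (c CHAR(10) CHARACTER SET latin1 COLLATE latin1_general_ci, KEY (c));
  -- insert values where one is a prefix of the other, differing only in trailing characters
  SELECT c FROM t1 ORDER BY c;
  -- before the fix, the relative order of such values (and the index built on c) could be wrong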
An assertion abort could occur for some grouping queries that employed
decimal user variables with assignments to them.
The problem appeared in the constructors of the class Field_new_decimal
because the function my_decimal_length_to_precision did not guarantee
that the returned decimal precision would not exceed DECIMAL_MAX_PRECISION.
The cast operation ignored the cases when the precision and/or the scale exceeded
the limits, 65 and 30 respectively, and no errors were reported in these cases.
For some queries this could lead to an assertion abort.
Fixed by throwing errors for such cases.
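For example, out-of-range cast targets of this kind are now rejected with an error instead of being silently accepted:
  SELECT CAST(1 AS DECIMAL(66, 10));   -- precision 66 exceeds the limit of 65
  SELECT CAST(1 AS DECIMAL(65, 31));   -- scale 31 exceeds the limit of 30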
For GCov builds, if the server crashes, the normal exit handler for writing
coverage information is not executed due to the abnormal termination.
Fix this by explicitly calling the __gcov_flush function in our crash handler.
A SELECT ... INTO OUTFILE statement with a FIELDS ENCLOSED BY character
that is a digit or a minus sign, followed by the corresponding LOAD DATA
INFILE statement, used wrong encoding of non-string fields that contained
the ENCLOSED BY character in their text representation.
Example:
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
Old encoded result in the text file:
5155 595
^ was decoded as the 1st enclosing character of the 2nd field;
^ was skipped as garbage;
^ ^ was decoded as a pair of enclosing characters of the 1st field;
^ was decoded as trailing space of the first field;
^^ was decoded as a doubled enclosed character.
New encoded result in the text file:
51\55 595
^ ^ pair of enclosing characters of the 1st field;
^^ escaped enclosed character.
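A round-trip of the kind that failed to restore the original values before the fix (file and table names are hypothetical):
  CREATE TABLE t1 (a INT, b INT);
  SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
  LOAD DATA INFILE 'text' INTO TABLE t1 FIELDS ENCLOSED BY '5';
  SELECT * FROM t1;   -- previously did not give back 15 and 9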
AsText() needs to know the maximum number of
characters an IEEE double precision value can
occupy to make sure there's enough buffer space.
The number was too small to hold all possible
values, and this caused buffer overruns.
Fixed by correcting the calculation of the
maximum number of digits in the string representation
of an IEEE double precision value as printed by
String::qs_append(double).
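A hedged illustration of the kind of call that exercises this buffer (the coordinates are arbitrary extreme doubles, not the exact values from the bug report):
  SELECT AsText(Point(1e308, 1e308));
  -- the text form of doubles near the limits of the IEEE range needs more digits than the old estimate allowed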
Dropping a user-defined function could cause a server crash if
the function was still in use by another thread.
The problem was that our hash implementation didn't update the
hash link list properly when hash_update() was called.
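A sketch of the scenario (the UDF name and tables are hypothetical):
  -- connection 1: a long-running statement that calls the UDF
  SELECT my_udf(a) FROM t1;
  -- connection 2, issued while the statement above is still running:
  DROP FUNCTION my_udf;   -- previously could crash the server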
This bug could manifest itself for SELECT queries over a multi-table view
that includes an ORDER BY clause in its definition. If the select list of
the query contained references to the same view column with different
aliases, the names of the columns in the result output were nevertheless
the same, coinciding with one of the aliases.
The bug happened because the method Item_ref::get_tmp_table_item that
was inherited by the class Item_direct_view_ref ignored the fact that
the name of the view column reference must be inherited by the fields
of the temporary table that was created in order to get the result rows
sorted.
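Illustration (tables and columns are hypothetical):
  CREATE VIEW v1 AS SELECT t1.a FROM t1, t2 WHERE t1.a = t2.a ORDER BY t2.b;
  SELECT a AS x, a AS y FROM v1;
  -- both result columns were previously reported under the same name instead of x and y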
The statement SELECT 'r' INTO OUTFILE ... FIELDS ENCLOSED BY 'r'
encoded the 'r' string to a 4 byte string of value x'725c7272'
(a sequence of 4 characters: r\rr).
The LOAD DATA statement decoded this string to a 1 byte string of
value x'0d' (ASCII Carriage Return character) instead of the original
'r' character.
The same error also happened with the FIELDS ENCLOSED BY clause
followed by special characters: 'n', 't', 'r', 'b', '0', 'Z' and 'N'.
NOTE 1: This is a result of an undocumented feature: the LOAD DATA INFILE
statement recognises 2-byte input sequences like \n, \t, \r and \Z in addition
to the documented 2-byte sequences \0 and \N. This feature should be
documented (here the backslash character is the default ESCAPED BY character;
in a real-life example it may be any ESCAPED BY character).
NOTE 2, changed behaviour:
Now the `SELECT INTO OUTFILE' statement with a `FIELDS ENCLOSED BY'
clause using one of the 'n', 't', 'r', 'b', '0', 'Z' or 'N' characters
encodes this special character by doubling it ('r' --> 'rr'),
not by prepending it with an escape character.
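A round-trip example of the fixed behaviour (file and table names are hypothetical):
  CREATE TABLE t1 (c CHAR(1));
  SELECT 'r' INTO OUTFILE 'text' FIELDS ENCLOSED BY 'r';
  LOAD DATA INFILE 'text' INTO TABLE t1 FIELDS ENCLOSED BY 'r';
  SELECT c FROM t1;   -- previously gave back x'0D' (carriage return) instead of 'r'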