Time values were compared by the BETWEEN function as strings. This led to a
wrong result when some of the arguments are less than 100 hours and others
are greater.
Now, if all three arguments of the BETWEEN function are of the TIME type,
they are compared as integers.
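To illustrate the problem (a toy C++ sketch, not the server's code): a
lexicographic comparison ranks "99:00:00" above "100:00:00" because '9'
sorts after '1', while comparing the underlying number of seconds gives
the correct order.

  #include <cstdio>
  #include <cstring>

  // Buggy idea: compare TIME values by their string representation.
  static int cmp_as_string(const char *a, const char *b) {
    return strcmp(a, b);                 // "99:00:00" > "100:00:00" lexically
  }

  // Fixed idea: compare the values as plain integers (seconds here).
  static long to_seconds(int h, int m, int s) {
    return 3600L * h + 60L * m + s;
  }

  int main() {
    printf("%d\n", cmp_as_string("99:00:00", "100:00:00") > 0);  // 1: wrong order
    printf("%d\n", to_seconds(99, 0, 0) < to_seconds(100, 0, 0)); // 1: correct
    return 0;
  }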
A bug in the restore_prev_nj_state function allowed inner tables of
outer join operations to be interleaved with outer tables. With the
current implementation of the nested loops algorithm this could lead
to wrong result sets for queries with nested outer joins.
Another bug in this procedure effectively blocked evaluation of some
valid execution plans for queries with nested outer joins.
Time values were compared as strings. This led to a wrong comparison
result when one of the compared values is under 100 hours and the other is
over 100 hours.
Now, when the Arg_comparator::set_cmp_func function sees that both items to
be compared are of the TIME type, it sets the comparator to the
Arg_comparator::compare_e_int or the Arg_comparator::compare_int_unsigned
function.
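The shape of the fix can be sketched as follows (simplified, hypothetical
types; the real Arg_comparator and Item classes are more involved): the
comparator is chosen once, as a function pointer, based on the argument types.

  #include <cstdint>

  struct Item { bool is_time; int64_t int_val; };

  struct Comparator {
    int (*cmp)(const Item &, const Item &) = nullptr;

    // Integer comparison used when both sides are TIME values.
    static int compare_as_int(const Item &a, const Item &b) {
      return (a.int_val > b.int_val) - (a.int_val < b.int_val);
    }

    void set_cmp_func(const Item &a, const Item &b) {
      if (a.is_time && b.is_time)
        cmp = compare_as_int;            // compare as integers, not strings
      // otherwise fall back to the usual string/real comparators (omitted)
    }
  };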
The special `zero' enum value was coerced to the normal
empty string enum value during a field-to-field copy.
This bug affected CREATE ... SELECT statements and
SELECT statements that aggregate or group by an enum field.
This bug also produced unnecessary warnings during
the execution of CREATE ... SELECT statements:
Warning 1265 Data truncated for column...
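The effect can be illustrated with a small standalone sketch (hypothetical
ENUM('', 'small', 'large') column; not the server's field-copy code): copying
by the displayed string maps the special zero value to the declared
empty-string member, while copying the numeric index preserves it.

  #include <cstdio>
  #include <cstring>

  // A MySQL ENUM stores a 1-based index into its member list; index 0 is the
  // special "zero" value, displayed as an empty string.
  static const char *members[] = {"", "small", "large"};

  static int copy_by_string(int src_index) {
    const char *s = src_index == 0 ? "" : members[src_index - 1];
    for (int i = 0; i < 3; i++)
      if (strcmp(members[i], s) == 0)
        return i + 1;                    // '' matches the declared member #1
    return 0;
  }

  static int copy_by_index(int src_index) { return src_index; }

  int main() {
    printf("%d\n", copy_by_string(0));   // 1: the special zero value is lost
    printf("%d\n", copy_by_index(0));    // 0: the special zero value survives
    return 0;
  }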
- make merge_buffers():sort_length have type size_t, as this type is
expected by, e.g., ptr_compare_1, which receives a pointer to
sort_length as its comparison parameter.
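The reason for the type change can be sketched as follows (illustrative
comparator only, not the actual ptr_compare_1 code): a comparator that
receives the key length through an opaque pointer must read it with the same
type the caller stored, otherwise a 64-bit build reads garbage bytes.

  #include <cstring>

  // The caller passes &sort_length as the opaque argument; if sort_length is
  // declared with a 32-bit type but dereferenced here as size_t, the length
  // read on a 64-bit platform is wrong.
  int compare_keys(const void *length_arg,
                   const unsigned char *a, const unsigned char *b) {
    size_t length = *(const size_t *) length_arg;
    return memcmp(a, b, length);
  }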
By default MyISAM overwrites the .MYD and .MYI files when no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then can't be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When it is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file already exists for a MyISAM table.
The option is off by default (preserving the old, compatible behavior).
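The check can be sketched like this (an illustrative sketch, not the actual
MyISAM code; file_exists and allow_create are made-up helpers):

  #include <string>
  #include <sys/stat.h>

  static bool file_exists(const std::string &path) {
    struct stat st;
    return stat(path.c_str(), &st) == 0;
  }

  // Returns false (i.e. CREATE TABLE must fail) if keep_files_on_create is
  // enabled and either the data or the index file is already present.
  bool allow_create(const std::string &table_base, bool keep_files_on_create) {
    if (!keep_files_on_create)
      return true;                       // default: old overwrite behavior
    return !file_exists(table_base + ".MYD") &&
           !file_exists(table_base + ".MYI");
  }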
The Ctrl-C handler in mysql closes the console while ReadConsole()
is waiting for console input.
But the main thread did not handle the case where ReadConsole() hadn't read
anything and kept processing as if there were data in the buffer.
Fixed to handle this error condition correctly.
No test case is added, as the test would rely on Ctrl-C being sent to the
client from its console.
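The corrected handling can be sketched as follows (a simplified sketch of the
idea, not the client's actual code): treat a failed ReadConsole() call or a
call that read zero characters as end of input rather than parsing the stale
buffer.

  #include <windows.h>

  // Returns true only if ReadConsole() actually delivered input.
  bool read_console_line(HANDLE console, char *buf, DWORD buf_size,
                         DWORD *chars_read) {
    if (!ReadConsole(console, buf, buf_size, chars_read, NULL))
      return false;                      // the call failed (console closed)
    if (*chars_read == 0)
      return false;                      // interrupted: nothing was read
    return true;
  }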
When the my_strnncollsp_simple function compares two strings and one is a prefix
of the other, it compares the characters in the rest of the longer key
with the space character to find whether the longer key is greater or less.
But the sort order of the collation wasn't used in this comparison. This could
lead to a wrong comparison result, a wrongly built index, or a wrong order of
the result set of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order to compare
the characters in the rest of the longer key with the space character.
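A simplified sketch of the padding comparison (not the real
my_strnncollsp_simple, whose arguments differ): the tail of the longer key is
compared with the space character through the collation's sort-order map
instead of by raw character code.

  #include <cstddef>

  // sort_order maps a character code to its weight in the collation.
  int compare_tail_with_space(const unsigned char *tail, size_t len,
                              const unsigned char *sort_order) {
    const unsigned char space_weight = sort_order[(unsigned char) ' '];
    for (size_t i = 0; i < len; i++) {
      unsigned char w = sort_order[tail[i]];
      if (w != space_weight)
        return w < space_weight ? -1 : 1;  // longer key sorts before / after
    }
    return 0;                              // tail is all spaces: keys are equal
  }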
When a table is being updated it has two sets of fields - fields required for
condition checks and fields to be updated. A storage engine is allowed
not to retrieve columns marked for update. Because of this, records can't
be compared to see whether the data has been changed or not. This makes the
server always update records regardless of whether the data changed.
Now, when an auto-updatable timestamp field is present and the server sees that
the table handler isn't going to retrieve write-only fields, all such
fields are marked as to be read to force the handler to retrieve them.
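The idea can be sketched with hypothetical field bitmaps (the server uses its
own bitmap structures, not std::bitset):

  #include <bitset>

  constexpr int kMaxFields = 64;           // illustrative limit only

  // Add every field that will be written to the read set as well, so the
  // handler fetches its current value and old/new rows can be compared.
  void force_write_fields_into_read_set(std::bitset<kMaxFields> &read_set,
                                        const std::bitset<kMaxFields> &write_set) {
    read_set |= write_set;
  }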
An assertion abort could occur for some grouping queries that employed
decimal user variables with assignments to them.
The problem appeared in the constructors of the class Field_new_decimal,
because the function my_decimal_length_to_precision did not guarantee
that the returned decimal precision is not greater than DECIMAL_MAX_PRECISION.
The cast operation ignored the cases when the precision and/or the scale
exceeded the limits, 65 and 30 respectively. No errors were reported in
these cases. For some queries this could lead to an assertion abort.
Fixed by throwing errors for such cases.
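The added check is roughly of this form (hedged sketch; the real server
raises an SQL error rather than throwing a C++ exception, and the limits 65
and 30 come from the description above):

  #include <stdexcept>

  constexpr unsigned kMaxDecimalPrecision = 65;
  constexpr unsigned kMaxDecimalScale     = 30;

  // Reject precision/scale combinations that exceed the decimal type limits
  // instead of silently accepting them.
  void check_decimal_type(unsigned precision, unsigned scale) {
    if (precision > kMaxDecimalPrecision || scale > kMaxDecimalScale)
      throw std::invalid_argument("decimal precision/scale out of range");
  }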
For GCov builds, if the server crashes, the normal exit handler for writing
coverage information is not executed due to the abnormal termination.
Fix this by explicitly calling the __gcov_flush function in our crash handler.
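A minimal sketch of such a handler (the HAVE_GCOV guard and the handler name
are illustrative, not the server's actual symbols; __gcov_flush is provided by
GCC's coverage runtime):

  #include <csignal>

  #ifdef HAVE_GCOV
  extern "C" void __gcov_flush();          // GCC coverage runtime
  #endif

  extern "C" void handle_fatal_signal(int sig) {
  #ifdef HAVE_GCOV
    __gcov_flush();                        // write coverage data before dying
  #endif
    signal(sig, SIG_DFL);                  // restore default handling and
    raise(sig);                            // re-raise to get the usual crash exit
  }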