For each view the mysqldump utility creates a temporary table
with the same name and the same columns as the view,
in order to satisfy views that depend on this view.
After creating all tables, mysqldump drops the temporary
tables and creates the actual views.
However, the --skip-add-drop-table and --compact flags suppressed
the DROP TABLE statements for those temporary tables. Thus it was
impossible to create the views, because temporary tables with the
same names still existed.
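For example, the dump of a view v1 over a table t1 might contain
statements like these (a simplified sketch; the names and the exact
statement forms are illustrative):

  CREATE TABLE v1 (a int);              -- temporary placeholder table
  -- ... other tables and views that may reference v1 ...
  DROP TABLE v1;                        -- suppressed by --skip-add-drop-table / --compact
  CREATE VIEW v1 AS SELECT a FROM t1;   -- failed: table v1 still existed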
The get_time_value function is added. It is used to obtain TIME values
both from items that can return time as an integer and from items that
can return time only as a string.
The Arg_comparator::compare_datetime function now uses a pointer to a
getter function to obtain the values to compare. This function is now
also used for comparison of TIME values.
The get_value_func variable is added to the Arg_comparator class.
It points to the getter function for the DATE/DATETIME/TIME comparator.
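For example, both of the following comparisons must obtain the same
TIME value, once from a string item and once from an integer item
(t is a hypothetical table with a TIME column a):

  SELECT * FROM t WHERE a = '100:00:00';   -- time obtained from a string
  SELECT * FROM t WHERE a = 1000000;       -- time obtained from an integer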
The Field_newdate::store function, when storing a DATETIME value,
returned the 'value was cut' error even if the thd->count_cuted_fields
flag was set to CHECK_FIELD_IGNORE. This made the range optimizer think
that there was no appropriate data in the table and thus return an
empty set.
Now the Field_newdate::store function returns a conversion error only
if the thd->count_cuted_fields flag isn't set to CHECK_FIELD_IGNORE.
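For example (a hedged illustration; t is a hypothetical table with an
indexed DATE column d):

  SELECT * FROM t WHERE d BETWEEN '2007-01-01 00:00:00'
                              AND '2007-01-31 23:59:59';

Storing the DATETIME constants into a DATE field for range analysis cut
the time part and signalled the 'value was cut' error, so the query
wrongly returned an empty set.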
Time values were compared by the BETWEEN function as strings. This led
to a wrong result in cases when some of the arguments are less than
100 hours and others are greater.
Now if all 3 arguments of the BETWEEN function are of the TIME type,
they are compared as integers.
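For example (a hedged sketch; t is a hypothetical table with TIME
columns a, b and c holding '99:00:00', '20:00:00' and '101:00:00'):

  SELECT * FROM t WHERE a BETWEEN b AND c;

Compared as strings, '99:00:00' > '101:00:00' because '9' sorts after
'1', so the row was wrongly excluded; compared as integers it is
correctly included.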
A bug in the restore_prev_nj_state function allowed the inner tables
of outer join operations to be interleaved with outer tables. With the
current implementation of the nested loops algorithm this could lead
to wrong result sets for queries with nested outer joins.
Another bug in this function effectively blocked the evaluation of
some valid execution plans for queries with nested outer joins.
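For example, for a query like the following (a schematic example), the
faulty check could admit the join order t2, t1, t3, which interleaves
the tables of the inner nest (t2, t3) with the outer table t1:

  SELECT * FROM t1 LEFT JOIN (t2 LEFT JOIN t3 ON t2.b = t3.b)
                   ON t1.a = t2.a;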
Time values were compared as strings. This led to a wrong comparison
result when one of the compared values was under 100 hours and the
other was over 100 hours.
Now when the Arg_comparator::set_cmp_func function sees that both items
to compare are of the TIME type, it sets the comparator to the
Arg_comparator::compare_e_int or the Arg_comparator::compare_int_unsigned
function.
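For example (t1 and t2 are hypothetical tables with TIME columns a and
b holding '99:00:00' and '101:00:00'):

  SELECT * FROM t1, t2 WHERE t1.a < t2.b;

The string comparison yielded false because '9' sorts after '1'; the
integer comparison correctly yields true.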
The special `zero' enum value was coerced to the normal
empty string enum value during a field-to-field copy.
This bug affected CREATE ... SELECT statements and
SELECT statements with aggregate functions and GROUP BY
on an enum field.
This bug also produced unnecessary warnings during
the execution of CREATE ... SELECT statements:
Warning 1265 Data truncated for column...
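For example (a hedged illustration; a non-strict sql_mode is assumed so
that the invalid value is stored as the special zero value):

  SET sql_mode = '';
  CREATE TABLE t1 (e ENUM('a','b'));
  INSERT INTO t1 VALUES (0);          -- stores the special zero value, displayed as ''
  CREATE TABLE t2 SELECT e FROM t1;   -- coerced the zero value and raised Warning 1265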
By default MyISAM overwrites the .MYD and .MYI files if no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then can't be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When it is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file exists for a MyISAM table.
The option is off by default (preserving the old, compatible behavior).
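For example (a hedged sketch; assumes orphaned t1.MYD and t1.MYI files
are already present in the database directory):

  SET GLOBAL keep_files_on_create = ON;
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;   -- now fails instead of
                                           -- silently reusing the files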
When the my_strnncollsp_simple function compares two strings and one is
a prefix of the other, the function compares the characters in the rest
of the longer key with the space character to find out whether the
longer key is greater or less. But the sort order of the collation
wasn't used in this comparison. This could lead to a wrong comparison
result, a wrongly created index, or a wrong ordering of the result set
of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order to
compare the characters in the rest of the longer key with the space
character.
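For example (a hedged sketch; assumes a hypothetical single-byte
collation whose sort order assigns '!' a weight below the space
character, contrary to the raw character codes):

  SELECT 'a!' < 'a';   -- raw codes say false; the sort order says true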
An assertion abort could occur for some grouping queries that employed
decimal user variables with assignments to them.
The problem appeared in the constructors of the Field_new_decimal class
because the my_decimal_length_to_precision function did not guarantee
that the returned decimal precision was not greater than
DECIMAL_MAX_PRECISION.
The cast operation ignored the cases when the precision and/or the
scale exceeded the limits, 65 and 30 respectively. No errors were
reported in these cases. For some queries this could lead to an
assertion abort.
Fixed by throwing errors in such cases.
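For example, casts that exceed the limits now report an error instead
of silently proceeding:

  SELECT CAST(1 AS DECIMAL(66,10));   -- precision 66 > 65: error
  SELECT CAST(1 AS DECIMAL(36,31));   -- scale 31 > 30: error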
SELECT INTO OUTFILE with FIELDS ENCLOSED BY a digit or a minus sign,
followed by the matching LOAD DATA INFILE statement, used a wrong
encoding of non-string fields that contained the enclosed character in
their text representation.
Example:
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
Old encoded result in the text file:
5155 595
       ^   was decoded as the 1st enclosing character of the 2nd field;
      ^    was skipped as garbage;
^    ^     was decoded as a pair of enclosing characters of the 1st field;
    ^      was decoded as trailing space of the first field;
  ^^       was decoded as a doubled enclosed character.
New encoded result in the text file:
51\55 595
^   ^      pair of enclosing characters of the 1st field;
  ^^       escaped enclosed character.
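With the fix, the matching statement (t is a hypothetical two-column
table) decodes the file back into the original values 15 and 9:

  LOAD DATA INFILE 'text' INTO TABLE t FIELDS ENCLOSED BY '5';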
AsText() needs to know the maximum number of
characters an IEEE double precision value can
occupy to make sure there's enough buffer space.
The number was too small to hold all possible
values, and this caused buffer overruns.
Fixed by correcting the calculation of the
maximum number of digits in the string
representation of an IEEE double precision value
as printed by String::qs_append(double).
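For example, extreme coordinates produce some of the longest possible
textual representations (a hedged illustration):

  SELECT AsText(GeomFromText('POINT(1e308 -1e308)'));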
Problem: when logging queries not using indexes, we checked a special
flag which was set only at server startup and did not change together
with the corresponding server variable.
Fix: check the variable value instead of the flag.
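For example, the following now takes effect immediately, without a
server restart:

  SET GLOBAL log_queries_not_using_indexes = ON;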