There was an incomplete reset of the name resolution context that caused
INSERT ... SELECT ... JOIN statements to resolve columns not against the
joint row type calculated for the join.
Removed the redundant re-initialization of the context, because
mysql_insert_select_prepare() now correctly saves/restores the context.
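The affected statement shape (table and column names illustrative):

  INSERT INTO t3 (a, b)
  SELECT t1.a, t2.b
  FROM t1 JOIN t2 ON t1.id = t2.id;

Columns in the SELECT part must be resolved against the joint row type of
the t1/t2 join.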
BUG#18036 - update of table joined to self reports table as crashed
Set the exclude_from_table_unique_test value back to FALSE. It is needed for
the subsequent check in multi_update::prepare() that decides whether to use
the record cache.
tables
Currently, for INSERT ... SELECT ... LIMIT ... the server uses a
temporary table to store the results of SELECT ... LIMIT ... and then
uses that table as a source for INSERT. The problem is that in some cases
it actually skips the LIMIT clause when doing so and materializes the
whole SELECT result set regardless of the LIMIT.
This fix limits the filling of the temporary table to only as many rows
as will actually be used, by propagating the LIMIT value.
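A statement of the affected form (names illustrative):

  INSERT INTO t2 (a, b)
  SELECT a, b FROM t1 ORDER BY a LIMIT 10;

Before the fix the whole SELECT result could be materialized into the
temporary table; with the LIMIT propagated, at most 10 rows are stored.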
Certain updates of a table joined to itself resulted in unexpected
behavior.
The problem was that the record cache was mistakenly enabled for
self-joined table updates. Normally the record cache must be disabled
for such updates.
Fixed the wrong condition in the code that determines whether to use the
record cache for self-joined table updates.
Only MyISAM tables were affected.
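A multi-table UPDATE of the affected kind, joining a table to itself
(names illustrative):

  UPDATE t1 AS a, t1 AS b
  SET a.val = b.val + 1
  WHERE a.id = b.parent_id;

With the record cache wrongly enabled, reads through one alias could
return stale rows while the same table was being updated through the
other alias.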
Fix for bug#16716 for --ps-protocol mode.
item_cmpfunc.cc:
Fix for a memory allocation/freeing problem in agg_cmp_type() after the fix
for bug#16377. A few language corrections.
INSERT triggers".
In cases when REPLACE was internally executed via update and the table had
ON UPDATE (ON DELETE) triggers defined, we exposed the fact that such an
optimization was used by calling ON UPDATE (and not calling ON DELETE)
triggers. Such behavior contradicts our documentation, which describes
REPLACE as INSERT with an optional DELETE.
This fix simply disables this optimization for tables with ON DELETE
triggers. The optimization is still applied for tables which have ON UPDATE
but no ON DELETE triggers; we just don't invoke ON UPDATE triggers in this
case and thus don't expose information about the optimization to the user.
Also added test coverage for values returned by ROW_COUNT() function (and
thus for values returned by mysql_affected_rows()) for various forms of
INSERT.
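A sketch of the user-visible contract (table and trigger names
illustrative):

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  CREATE TRIGGER t1_bd BEFORE DELETE ON t1 FOR EACH ROW SET @del = @del + 1;
  SET @del = 0;
  REPLACE INTO t1 VALUES (1, 10);
  REPLACE INTO t1 VALUES (1, 20);
  SELECT @del;  -- must be 1: the second REPLACE fires the DELETE trigger

Because t1 has an ON DELETE trigger, the update optimization is now
skipped and REPLACE behaves as the documented INSERT plus DELETE.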
resulted in a wrong error message.
The nest_level counter indicates the depth of nesting for a subselect. It is
needed to properly resolve aggregate functions in nested subselects.
Obviously it shouldn't be incremented for UNION parts, because they have the
same level of nesting. This counter was incremented by 1 in the
mysql_new_select() function for any new select and wasn't decremented for
UNION parts. This resulted in wrongly reported error messages.
Now the nest_level counter is decremented by 1 for any UNION part.
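For illustration, both parts of the UNION below sit at the same nesting
depth (the subselect's), so neither may increment nest_level (names
illustrative):

  SELECT * FROM t1
  WHERE t1.a IN (SELECT a FROM t2 UNION SELECT a FROM t3);

Counting the second UNION part as one level deeper made aggregate
resolution, and hence error reporting, go wrong.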
The Field::eq() considered instances of Field_bit that differ only in
bit_ptr/bit_ofs to be equal. This caused the equality-conditions
optimization (build_equal_items_for_cond()) to make bad field substitutions
that resulted in wrong predicates.
Field_bit requires an overloaded eq() function that checks bit_ptr/bit_ofs
in addition to Field::eq().
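The distinction matters because several BIT columns of one table can be
packed into the same bytes, differing only in bit offset. A sketch (names
illustrative):

  CREATE TABLE t1 (b1 BIT(1), b2 BIT(1));
  INSERT INTO t1 VALUES (0, 1);
  SELECT * FROM t1 WHERE b1 = b2 AND b2 = 1;

If eq() treats b1 and b2 as the same field, equality propagation may
substitute one for the other and evaluate a wrong predicate.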
This bug in Field_string::cmp resulted in a wrong comparison
with keys in partial indexes over multi-byte character fields.
Given a field a declared as

  a VARCHAR(16) COLLATE utf8_unicode_ci

INDEX(a(4)) gives an example of such an index.
Wrong key comparisons could lead to wrong result sets if
the selected query execution plan used a range scan by
a partial index over a utf8 character field.
This also caused wrong results in many other cases.
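A sketch of a query that takes the affected path (data illustrative):

  CREATE TABLE t1 (
    a VARCHAR(16) COLLATE utf8_unicode_ci,
    INDEX (a(4))
  );
  SELECT * FROM t1 WHERE a BETWEEN 'aaa' AND 'ccc';

The BETWEEN range can be served by a range scan over the four-character
prefix of the index, which is where the wrong comparison happened.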
INSERT DELAYED should not maintain its own private auto-increment
counter, because that assumes other threads cannot insert
into the table while the INSERT DELAYED thread is inserting, which is
a wrong assumption.
So the start of processing of a batch of INSERT rows in the
INSERT DELAYED thread must be treated as the start of a new statement,
and the cached next_insert_id must be cleared.
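The interleaving that invalidates a private counter (two connections,
names illustrative):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=MyISAM;
  -- connection 1:
  INSERT DELAYED INTO t1 (v) VALUES (1);
  -- connection 2, before the delayed batch is flushed:
  INSERT INTO t1 (v) VALUES (2);

Both inserts must draw ids from the shared table counter; a cached
next_insert_id in the delayed thread could hand out a duplicate.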
can lead to a wrong result.
All date/time functions have the STRING result type, thus their results are
compared as strings. The string date representation allows a user to skip
some leading zeros. This can lead to a wrong comparison result if a
date/time function result is compared to such a string constant.
The idea behind this bug fix is to compare results of date/time functions
and date/time constants as ints, because that date/time representation is
more exact. To achieve this, agg_cmp_type() is changed to take into
account that a date/time field or a date/time item should be compared
as ints.
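The pitfall in its minimal form:

  SELECT CAST('2006-01-01' AS DATE) = '2006-1-1';

Compared as strings, '2006-01-01' and '2006-1-1' differ and the result is
0; compared as ints (20060101 on both sides) the dates are equal.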
This bug fix is partially backported from 5.0.
The agg_cmp_type() function now accepts THD as one of its parameters.
In addition, it now checks whether a date/time field/function is present in
the list. If so, it tries to coerce all constants to INT to make date/time
comparison return a correct result. The field for the constant coercion is
taken from the Item_field or constructed from the Item_func. In the latter
case the constructed field is freed after conversion of all constant items.
Otherwise the result is the same as before: aggregation with the help of
the item_cmp_type() function.
The part of the Item_func_between::fix_length_and_dec() function which
converted date/time constants to int when possible has been removed. Now
this is done by the agg_cmp_type() function.
The new function result_as_longlong() is added to the Item class.
It indicates that the item is a date/time item whose result can be
compared as int. Such items are date/time fields/functions.
Correct val_int() methods are implemented for the classes Item_date_typecast,
Item_func_makedate, Item_time_typecast and Item_datetime_typecast. All these
classes are derived from Item_str_func, and Item_str_func::val_int() converts
its string value to int without regard to the date/time type of these items.
The Arg_comparator::set_compare_func() and Arg_comparator::set_cmp_func()
functions are changed to substitute the result type of an item with
INT_RESULT if the item is a date/time item and the other item is a constant.
This is done to get a correct result for comparisons like
date_time_function() = string_constant.
There was a wrong determination of the DB name (which is
not always the one in TABLE_LIST, because derived tables
may be calculated using temp tables that have their db name
set to "").
The fix determines the database name according to the type
of table reference and calls the function check_access()
with the correct db name so that the correct set of grants is found.
The is_null value was initialized once and thereafter only set to indicate
NULL, never unset to indicate not-NULL.
Now is_null is set to false for not-NULL values, in addition to being set
to true when the value in question is NULL.
query
Problem:
A wrong context was assigned to the columns that were added in
insert_fields() when expanding a '*'. When this is done in a prepared
statement, it causes fix_fields() to fail to find the table that these
columns reference.
Actually the right context is set in setup_natural_join_row_types(), called
at the end of setup_tables(). However, when executed in the context of a
prepared statement, setup_tables() resets the context, but
setup_natural_join_row_types() was not setting it to the correct value,
assuming it had already done so.
Solution:
The top-most, left-most NATURAL/USING join must be set as the
first_name_resolution_table in the context even when operating on prepared
statements.
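The failing pattern is re-execution of a prepared statement whose select
list expands '*' over a NATURAL/USING join (names illustrative):

  PREPARE s FROM 'SELECT * FROM t1 NATURAL JOIN t2';
  EXECUTE s;
  EXECUTE s;  -- the expanded columns must be re-resolved in the right context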
read buffer
Setting the read buffer to values greater than SSIZE_MAX results in
unexpected behavior.
According to the read(2) manual:
  If count is greater than SSIZE_MAX, the result is unspecified.
Set the upper limit for read_buffer_size and read_rnd_buffer_size to
SSIZE_MAX.
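With the new limit, an over-large setting is clamped rather than passed
through to read(2) (the value here is illustrative):

  SET GLOBAL read_buffer_size = 4294967295;
  SHOW VARIABLES LIKE 'read_buffer_size';  -- capped at SSIZE_MAX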
The st_lex::which_check_option_applicable() function controls for which
statements the WITH CHECK OPTION clause should be taken into account.
REPLACE and REPLACE_SELECT weren't in the list, which allowed REPLACE to
insert wrong rows into such a view.
Now st_lex::which_check_option_applicable() includes REPLACE and
REPLACE_SELECT in the list of statements for which the WITH CHECK OPTION
clause is applicable.
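A sketch of a statement that was wrongly accepted before this fix (names
illustrative):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
  REPLACE INTO v1 VALUES (100);  -- must now fail the CHECK OPTION test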
To calculate its max_length, the CONCAT() function simply sums the
max_lengths of its arguments, but when the collation of an argument differs
from the collation of the CONCAT(), max_length will be wrong. This may lead
to data truncation when a tmp table is used, in UNIONs for example.
The Item_func_concat::fix_length_and_dec() function now recalculates the
max_length of an argument when the mbmaxlen of the argument differs from the
mbmaxlen of the CONCAT().
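A sketch of the truncation risk (names and data illustrative): a latin1
argument inside a utf8 CONCAT() may need more bytes per character after
conversion, so summing the arguments' byte lengths undercounts.

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
  INSERT INTO t1 VALUES ('zzzzzzzzzz');
  SELECT CONCAT(a, _utf8 'x') FROM t1
  UNION
  SELECT 'y';

The UNION materializes its result in a tmp table; with the old max_length
the CONCAT() column could be created too narrow and the value truncated.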