INSERT...ON DUPLICATE KEY UPDATE may cause error 1032
("Can't find record in ...") when inserting into an InnoDB table
that has a unique index over a partial key on an underlying UTF-8
string field.
The error occurs because INSERT...ON DUPLICATE KEY UPDATE uses a wrong
procedure to copy string fields of multi-byte character sets
for the index search.
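A minimal sketch of the failing scenario (table, column, and data are
hypothetical):

  CREATE TABLE t1 (
    a VARCHAR(255) CHARACTER SET utf8,
    b INT,
    UNIQUE KEY (a(10))            -- unique index on a partial key
  ) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (REPEAT(_utf8 x'D0B0', 20), 1);
  -- Before the fix, the index search could copy the multi-byte prefix
  -- incorrectly and fail with error 1032 instead of doing the update:
  INSERT INTO t1 VALUES (REPEAT(_utf8 x'D0B0', 20), 2)
    ON DUPLICATE KEY UPDATE b = 2;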
This bug occurs when the error message length exceeds the allowed limit:
the my_error() function outputs "%s" sequences instead of the long string
arguments.
Formats like %-.64s are very common in errmsg.txt files; however, the
my_error() function simply ignores the precision of those formats.
- The unsigned flag was not handled correctly for a number of mathematical
  functions, which led to incorrect results.
- Passing large values as the number of decimals to ROUND() resulted in
  incorrect results and even server crashes in some cases (see the sketch
  below).
- Reverted the fix and the testcase for bug #10083, as it violates the manual.
- Fixed some testcases which relied on the broken ROUND() behavior.
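A hypothetical illustration of both problems:

  -- Unsigned flag mishandled: a BIGINT UNSIGNED argument could be
  -- treated as signed and produce a negative result:
  SELECT ROUND(18446744073709551615, -1);
  -- A huge "number of decimals" argument could yield incorrect
  -- results or even crash the server:
  SELECT ROUND(1.5, 4294967295);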
Before this fix, the parser would sometimes change where a token starts by
altering Lex_input_stream::tok_start, which later confused the code in
sql_yacc.yy that needs to capture the source code of a SQL statement, for
example to represent the body of a stored procedure.
This line of code in sql_lex.cc:
  case MY_LEX_USER_VARIABLE_DELIMITER:
    lip->tok_start= lip->ptr; // Skip first `
would skip the first back quote, as the comment says, and cause the bug
reported.
In general, the responsibility of sql_lex.cc is to *find* where tokens are
in the SQL text, but *not* to make up fake or incomplete tokens.
With a quoted label like `my_label`, the token starts on the first quote.
Extracting the token value should not change that (it did).
With this fix, the lexical analysis has been cleaned up to not change
lip->tok_start (in the case found for this bug).
The functions get_token() and get_quoted_token() now have an extra
parameter, used when some characters from the beginning of the token need
to be skipped when extracting a token value, as when extracting 'AB' from
'0xAB' for a HEX_NUM token.
This exposed a bad assumption in Item_hex_string and Item_bin_string,
which has been fixed:
The assumption was that the string given, 'AB', was in fact preceded in
memory by '0x', which might be false (it can be preceded by "x'" and
followed by "'" -- or not be preceded by valid memory at all).
If a name is needed for Item_hex_string or Item_bin_string, the name is
taken from the original and true source code ('0xAB'), and assigned in
the select_item rule, instead of relying on assumptions related to how
memory is used.
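For illustration, the two source spellings of the same literal
(hypothetical statements):

  SELECT 0xAB;     -- HEX_NUM token; the item name must come from '0xAB'
  SELECT x'AB';    -- same value, but preceded by "x'" in the source
                   -- text, so assuming a '0x' prefix in memory is wrong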
The BETWEEN function was comparing DATE/DATETIME values either as ints or as
strings. Both methods have their disadvantages and may lead to a wrong
result.
Now the BETWEEN function checks whether all of its arguments have the STRING
result type and at least one of them is a DATE/DATETIME item. If so, it sets
up two Arg_comparator objects with the compare_datetime() comparator
and uses them to compare such items.
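A sketch of an affected query (table and data are hypothetical):

  CREATE TABLE t1 (d DATE);
  INSERT INTO t1 VALUES ('2001-02-10');
  -- As strings, '2001-02-10' < '2001-1-1' (because '0' < '1'), so a
  -- plain string comparison misses the row; the DATE/DATETIME
  -- comparator returns it as expected:
  SELECT * FROM t1 WHERE d BETWEEN '2001-1-1' AND '2001-3-1';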
Added two Arg_comparator object members and one flag to the
Item_func_between class for the correct DATE/DATETIME comparison.
The Item_func_between::fix_length_and_dec() function now detects whether it
is used for DATE/DATETIME comparison and sets up the newly added
Arg_comparator objects to do this.
The Item_func_between::val_int() now uses Arg_comparator objects to perform
correct DATE/DATETIME comparison.
The owner variable of the Arg_comparator class can now be set to NULL if the
caller wants to handle NULL values by itself.
Now the Item_date_add_interval::get_date() function adjusts the
cached_field_type according to the detected type.
DATE and DATETIME values can be compared either as strings or as ints. Both
methods have their disadvantages. A string can contain a valid DATETIME value
but with insignificant zeros omitted, which makes it non-comparable with
other DATETIME strings. Comparison as int usually requires conversion from
the string representation, and the automatic conversion is in most cases
carried out in a wrong way, thus producing a wrong comparison result. Another
problem occurs when one tries to compare a DATE field with a DATETIME
constant: the constant is converted to DATE, losing its precision, i.e. its
time part.
This fix addresses the problems described above by adding a special
DATE/DATETIME comparator. The comparator correctly converts DATE/DATETIME
string values to int when necessary and adds a zero time part (00:00:00)
to DATE values so that they compare correctly with DATETIME values. Thanks
to the correct conversion, malformed DATETIME string values are correctly
compared to other DATE/DATETIME values.
As of this patch, a DATE value is equal to a DATETIME value with a zero time
part. For example, '2001-01-01' equals '2001-01-01 00:00:00'.
The compare_datetime() function is added to the Arg_comparator class.
It implements the correct comparator for DATE/DATETIME values.
Two supplementary functions called get_date_from_str() and get_datetime_value()
are added. The first one extracts a DATE/DATETIME value from a string and the
second one retrieves the correct DATE/DATETIME value from an item.
The new Arg_comparator::can_compare_as_dates() function is added and used
to check whether two given items can be compared by the compare_datetime()
comparator.
Two caching variables were added to the Arg_comparator class to speed up the
DATE/DATETIME comparison.
One more store() method was added to the Item_cache_int class to cache int
values.
The new is_datetime() function was added to the Item class. It indicates
whether the item returns a DATE/DATETIME value.
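A sketch of the resulting behavior (hypothetical table):

  CREATE TABLE t1 (d DATE, dt DATETIME);
  INSERT INTO t1 VALUES ('2001-01-01', '2001-01-01 00:00:00');
  -- The comparator pads the DATE value with a 00:00:00 time part
  -- instead of truncating the DATETIME value, so both comparisons
  -- return 1:
  SELECT d = dt, d = '2001-01-01 00:00:00' FROM t1;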
Validity checks for nested set functions were not taking into account that
the enclosed set function may be on a nest level that is lower than the nest
level of the enclosing set function (see the sketch below).
Fixed by:
- propagating max_sum_func_level up the chain of enclosing set functions;
- updating the max_sum_func_level of the enclosing set function when the
  enclosed set function is aggregated above or on the same nest level as the
  level of the enclosing set function;
- updating the max_arg_level of the enclosing set function on a reference
  that refers to an item above or on the same nest level as the level of the
  enclosing set function;
- treating both Item_field and Item_ref as possibly referencing items from
  outer nest levels.
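A hypothetical query of the problematic shape: the inner SUM is textually
enclosed by the outer MAX, but it references only a column of the outer
query, so it is aggregated on a nest level lower than that of the subquery
it appears in:

  SELECT MAX((SELECT SUM(t1.a) FROM t2)) FROM t1;
  -- Before the fix, the validity check could miss that SUM(t1.a)
  -- aggregates in the outer SELECT, i.e. on the same level as MAX.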
INSERT into InnoDB table may cause "ERROR 1062 (23000): Duplicate entry..."
errors or lost records after multi-row INSERT of the form:
"INSERT INTO t (id...) VALUES (NULL...) ON DUPLICATE KEY UPDATE id=VALUES(id)",
where "id" is an AUTO_INCREMENT column.
It happens because the InnoDB handler forgets to save the next insert id
after updating an auto_increment column with new values. As a result, the
last insert id stored inside the InnoDB dictionary tables differs from the
cached thd->next_insert_id value.
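A minimal sketch of the failure (table is hypothetical):

  CREATE TABLE t1 (
    id INT AUTO_INCREMENT PRIMARY KEY,
    c INT
  ) ENGINE=InnoDB;
  INSERT INTO t1 (id, c) VALUES (NULL, 1), (NULL, 2)
    ON DUPLICATE KEY UPDATE id = VALUES(id);
  -- Before the fix, the next insert id was not saved after the update
  -- of the auto_increment column, so a later INSERT could reuse an
  -- already-taken id and fail with ERROR 1062:
  INSERT INTO t1 (id, c) VALUES (NULL, 3);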
When fields are inserted instead of '*' in the select list, they were not
marked for the ONLY_FULL_GROUP_BY check.
The Field_iterator_table::create_item() function now marks newly created
items for that check when in ONLY_FULL_GROUP_BY mode.
The setup_wild() and the insert_fields() functions now maintain the
cur_pos_in_select_list counter for the ONLY_FULL_GROUP_BY mode.
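For illustration (hypothetical table):

  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT, b INT);
  -- The fields produced by expanding '*' were not marked for the
  -- check, so this was wrongly accepted although b is not grouped:
  SELECT * FROM t1 GROUP BY a;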
In multi_update::send_data(), the counter of matched rows was not incremented
correctly when, during insertion of a new row into a temporary table, the
table had to be converted from HEAP to MyISAM (see the illustration below).
This fix changes the logic to increment the counter of matched rows in the following cases:
1. If the error returned from write_row() is zero.
2. If the error returned from write_row() is non-zero, is neither HA_ERR_FOUND_DUPP_KEY nor HA_ERR_FOUND_DUPP_UNIQUE, and a call to create_myisam_from_heap() succeeds.
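The affected statement shape, for illustration (tables are hypothetical):

  UPDATE t1, t2 SET t1.c = t2.c WHERE t1.id = t2.id;
  -- The "Rows matched" figure reported for such a multi-table UPDATE
  -- could be too low when the internal temporary table spilled from
  -- HEAP to MyISAM during the update.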
FORCE_INIT_OF_VARS was not defined for the debug builds on Windows. This
caused the LINT_INIT macro to be defined as a no-op, which triggers false
alarms about the use of uninitialized variables with the runtime libs of
some Visual Studio versions.
Fixed by defining FORCE_INIT_OF_VARS for the Windows debug builds as well.