connections
The problem is that a table can enter the open table cache for a thread
without being properly cleaned up. This can happen when
make_join_statistics() fails to read a const table because of, for example,
a deadlock. It sets a member of the TABLE structure to memory it allocates,
but it neither resets that member on error nor sets the remaining members
in JOIN that would allow automatic cleanup.
As a result, when such an error occurs and a later statement re-uses the
table from the open table cache, it gets the table with
TABLE::reginfo.join_tab pointing to a memory area that has already been
freed.
Fixed by making sure make_join_statistics() cleans up
TABLE::reginfo.join_tab on error.
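Below is a minimal standalone sketch (not the server code) of the failure
mode and the fix: a long-lived cached object must not keep a pointer into
per-statement memory once that memory has been freed. CachedTable, JoinTab
and run_statement() are hypothetical stand-ins for TABLE, JOIN_TAB and
make_join_statistics().

    /* Standalone sketch, hypothetical names; illustrates clearing a cached
     * pointer on the error path so later reuse does not see freed memory. */
    #include <cstdio>
    #include <cstdlib>

    struct JoinTab { int dummy; };                       /* stand-in for JOIN_TAB */
    struct CachedTable { JoinTab *join_tab = nullptr; }; /* stand-in for TABLE    */

    /* Simulates make_join_statistics(): allocates per-statement data, points
     * the cached table at it, and may fail half-way (e.g. on a deadlock). */
    static bool run_statement(CachedTable *tab, bool fail)
    {
      JoinTab *jt = static_cast<JoinTab *>(malloc(sizeof(JoinTab)));
      tab->join_tab = jt;              /* cached object now references jt    */
      if (fail)
      {
        free(jt);                      /* per-statement memory is released   */
        tab->join_tab = nullptr;       /* the fix: clear the cached pointer  */
        return true;                   /* error                              */
      }
      /* ... normal processing, then end-of-statement cleanup ... */
      free(jt);
      tab->join_tab = nullptr;
      return false;
    }

    int main()
    {
      CachedTable cached;              /* lives across statements, like an
                                          entry in the open table cache      */
      run_statement(&cached, /*fail=*/true);
      /* Without the "tab->join_tab = nullptr" line on the error path, this
       * reuse would see a pointer into freed memory. */
      std::printf("join_tab after failed statement: %p\n",
                  static_cast<void *>(cached.join_tab));
      return 0;
    }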
In the case of a ROW item, each compared pair did not check whether the
argument collations can be aggregated, so the appropriate item
conversion did not happen.
The fix is to add the collation check and conversion for ROW pairs.
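As a rough illustration of the added check, here is a standalone sketch
under simplified assumptions: the Item struct, aggregate_collations() and
rows_equal() are hypothetical stand-ins, not MySQL internals, and only a
toy subset of the real aggregation rules (which also involve derivation
and repertoire) is modelled.

    /* Standalone sketch: for a ROW comparison such as (a, b) = (x, y), each
     * pair must agree on a common collation before comparing. */
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct Item { std::string value; std::string collation; };

    /* Toy aggregation: only allows mixing when one side is "binary" and can
     * adopt the other side's collation; otherwise the mix is illegal. */
    static std::string aggregate_collations(const Item &a, const Item &b)
    {
      if (a.collation == b.collation) return a.collation;
      if (a.collation == "binary") return b.collation;
      if (b.collation == "binary") return a.collation;
      throw std::runtime_error("Illegal mix of collations");
    }

    /* Compare two ROWs pair by pair, aggregating collations per pair; the
     * conversion of both operands to the common collation would happen at
     * the marked spot. */
    static bool rows_equal(const std::vector<Item> &l, const std::vector<Item> &r)
    {
      if (l.size() != r.size()) return false;
      for (size_t i = 0; i < l.size(); i++)
      {
        std::string common = aggregate_collations(l[i], r[i]); /* added check */
        (void)common;           /* conversion to `common` would happen here   */
        if (l[i].value != r[i].value) return false;
      }
      return true;
    }

    int main()
    {
      std::vector<Item> left  = {{"a", "utf8_general_ci"}, {"b", "binary"}};
      std::vector<Item> right = {{"a", "utf8_general_ci"}, {"b", "utf8_general_ci"}};
      std::cout << (rows_equal(left, right) ? "equal" : "not equal") << "\n";
      return 0;
    }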
returns short string value.
Multibyte character sets were not taken into account when
calculating max_length in Item_param::convert_str_value(). As a
result, string parameters of a prepared statement could be
truncated later when calculating string length in characters by
dividing length in bytes by the charset's mbmaxlen value (e.g. in
Field_varstring::store()).
Fixed by taking the charset's mbmaxlen into account when calculating
max_length in Item_param::convert_str_value().
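The arithmetic behind the fix can be sketched as follows; the numbers are
assumptions for illustration (utf8 with mbmaxlen = 3 and a 10-character
parameter value), and this is not the actual Item_param code.

    /* Sketch of the length arithmetic: if max_length is computed without
     * mbmaxlen, later code that derives a character count from a byte
     * length (bytes / mbmaxlen) truncates the parameter. */
    #include <cstdio>

    int main()
    {
      const unsigned mbmaxlen   = 3;   /* e.g. utf8: up to 3 bytes per character */
      const unsigned char_count = 10;  /* hypothetical parameter length in chars */

      /* Buggy: max_length ignores the multibyte factor. */
      unsigned max_length_wrong = char_count;
      /* Fixed: account for the widest possible multibyte encoding. */
      unsigned max_length_fixed = char_count * mbmaxlen;

      /* Later code derives a length in characters by dividing by mbmaxlen
       * (as happens e.g. in Field_varstring::store()). */
      std::printf("chars from wrong max_length: %u\n", max_length_wrong / mbmaxlen);
      std::printf("chars from fixed max_length: %u\n", max_length_fixed / mbmaxlen);
      return 0;
    }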
Cleaned up the SQL code in the test.
The FLUSH TABLES statement had to be moved before DROP TABLE t1 to prevent
a 'Table open on delete' warning and a test failure.
Bug#38435 - LONG Microseconds cause MySQL to fail a CAST to DATETIME or DATE
Parsing of the optional microsecond part of a datetime value did not
fail gracefully when the field width was larger than the allowed
six places.
The parser now handles up to the correct six places and disregards
any extra digits without corrupting the value parsed so far.
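A minimal sketch of the intended behaviour follows; parse_microseconds() is
a hypothetical helper, not the server's actual parser: it reads at most six
fractional digits, scales shorter fields up to microseconds, and consumes
but ignores any extra digits instead of failing.

    /* Standalone sketch of graceful handling of over-long fractional parts. */
    #include <cctype>
    #include <cstdio>

    /* Parses the fractional part that follows the '.' in "...SS.ffffff...".
     * Returns microseconds; *pos is advanced past all fractional digits. */
    static unsigned long parse_microseconds(const char *str, const char **pos)
    {
      unsigned long value = 0;
      int digits = 0;
      const char *p = str;
      while (std::isdigit(static_cast<unsigned char>(*p)))
      {
        if (digits < 6)               /* keep only the first six places */
        {
          value = value * 10 + static_cast<unsigned long>(*p - '0');
          digits++;
        }
        p++;                          /* but always consume extra digits */
      }
      while (digits < 6)              /* scale e.g. ".5" up to 500000 usec */
      {
        value *= 10;
        digits++;
      }
      *pos = p;
      return value;
    }

    int main()
    {
      const char *rest = nullptr;
      /* Nine digits given: the first six are used, the rest are skipped. */
      std::printf("%lu\n", parse_microseconds("123456789", &rest)); /* 123456 */
      std::printf("%lu\n", parse_microseconds("5", &rest));         /* 500000 */
      return 0;
    }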