A user variable assignment expression that is
evaluated in a logical expression context
(Item::val_bool()) can be pre-calculated in a
temporary table for GROUP BY.
However, when the expression value was used after the
temp table had been created, it was re-evaluated instead
of being read from the temp table because the
val_bool_result() method was missing.
Fixed by implementing the method.
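A minimal sketch of the added method, modeled on the existing
val_int_result() variant (exact body assumed, not the verbatim patch):

    bool Item_func_set_user_var::val_bool_result()
    {
      DBUG_ASSERT(fixed == 1);
      check(TRUE);
      update();                                  /* refresh the variable entry */
      return entry->val_int(&null_value) != 0;   /* read back, don't re-evaluate */
    }
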
The server was not checking for errors raised during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking that error code at the places
where copy_funcs() is called.
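A rough sketch of the copy_funcs() change (signature and loop simplified;
the real patch may differ in details):

    static bool copy_funcs(Item **func_ptr, const THD *thd)
    {
      for (; *func_ptr; func_ptr++)
      {
        (*func_ptr)->save_in_result_field(1);
        if (thd->is_error())      /* an Item::val_xxx() raised an error */
          return true;            /* caller aborts instead of storing a bad row */
      }
      return false;
    }
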
Test case added.
variable assignments
The assert() that is firing checks whether expressions that cannot
be NULL return NULL when evaluated.
The MAKEDATE() function can return NULL if the second argument is
less than or equal to 0. Thus its nullability depends not only on
the nullability of its arguments but also on their values.
Fixed by (conservatively) setting MAKEDATE() to be nullable
regardless of the nullability of its arguments.
Test added.
Had to update one test result to reflect the metadata change.
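The essence of the fix is one flag; a sketch assuming it is set in the
item's fix_length_and_dec() (the exact hook may differ):

    void Item_func_makedate::fix_length_and_dec()
    {
      /* length/decimals setup as before (omitted in this sketch) */
      maybe_null= 1;   /* nullable regardless of argument nullability */
    }
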
feature
The test for bug #50939 was put in range.test, which isn't a good idea
since it requires partitioning. Fixed by moving the test case to
partitioning_range.test.
The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached
but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.
The solution is to also cache the latter call and forward it when moving
to a new partition to scan.
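A simplified sketch of the idea (member and variable names hypothetical):
remember the request and replay it, together with the already-cached
HA_EXTRA_CACHE, on the partition the scan moves to:

    /* in ha_partition::extra(): remember the request */
    case HA_EXTRA_PREPARE_FOR_UPDATE:
      m_prepare_for_update= TRUE;
      break;

    /* when the scan moves to a new partition, forward the cached calls */
    if (m_extra_cache)
      file->extra(HA_EXTRA_CACHE);
    if (m_prepare_for_update)
      file->extra(HA_EXTRA_PREPARE_FOR_UPDATE);
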
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. When the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The assert was triggered because lex->no_error was set to TRUE during
JOIN::optimize() because of IGNORE, which caused all errors to be ignored.
However, not all errors can be ignored. Some, such as ER_FIELD_SPECIFIED_TWICE,
cause the INSERT to fail no matter what. But since lex->no_error was set,
these critical errors were ignored, the INSERT failed, and neither OK nor the
error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
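The pattern is a plain save/clear/restore around the step that must not
swallow errors; a sketch with a hypothetical placeholder for the guarded step:

    bool saved_no_error= lex->no_error;
    lex->no_error= 0;                  /* critical errors must be reported */
    bool failed= do_guarded_step();    /* placeholder, e.g. field setup */
    lex->no_error= saved_no_error;     /* restore IGNORE behaviour */
    if (failed)
      DBUG_RETURN(TRUE);               /* e.g. ER_FIELD_SPECIFIED_TWICE */
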
The CONVERT_TZ function crashes the server when the
timezone argument is an empty SET field value.
1) CONVERT_TZ may look up a timezone string in the
tz_names hash.
2) The string representation of an empty SET value is a
String of zero length whose buffer pointer is NULL.
3) If the key argument length is zero, the hash functions
do the comparison using the length of the record being
compared against.
I.e. a zero-length String buffer is an invalid
argument for the hash search functions, and if the String
points to a NULL buffer, hashcmp() fails with SEGV
accessing that memory.
The my_tz_find function has been modified to
treat empty Strings as invalid timezone values
and skip the unnecessary hash search.
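A sketch of the added guard at the top of my_tz_find() (exact form assumed):

    if (!name || name->length() == 0)
      DBUG_RETURN(NULL);   /* empty string: invalid timezone, skip hash search */
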
Currently we do a full validation of the AHI (adaptive hash index)
whenever CHECK TABLE is called on any table. This patch fixes this by
only doing the full check in debug builds.
bug#55716
rb://423
approved by: Marko
file .\item_subselect.cc, line 836
IN quantified predicates are never executed directly. They are instead wrapped
inside nodes called IN optimizers (Item_in_optimizer) which take care of the
execution. However, this wrapping is not done during query preparation.
Unfortunately, the LIKE predicate pre-evaluates constant right-hand-side
arguments even during name resolution; this is likely meant as an optimization.
Fixed by not pre-evaluating LIKE arguments in view prepare mode.
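A sketch of the guard in Item_func_like::fix_fields() (exact condition assumed):

    /* don't pre-evaluate the constant pattern while preparing a view */
    if (!thd->lex->view_prepare_mode && args[1]->const_item())
    {
      /* evaluate the pattern once and set up the pattern matcher */
    }
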
Handle overflow when reading the value from SELECT MAX(C) FROM T;
call ha_innobase::info() after initializing the autoinc value
in ha_innobase::open().
Fix for both the built-in InnoDB and the plugin.
rb://402
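A self-contained illustration of the overflow handling described (not
InnoDB's actual code): when seeding the counter from MAX(C), the increment
must not wrap past the column type's maximum value.

    /* illustration only */
    static unsigned long long
    next_autoinc(unsigned long long current_max, unsigned long long col_max)
    {
      return current_max >= col_max ? col_max : current_max + 1;
    }
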
The bug is due to a double delete of a BLOB, once via:
rollback -> btr_cur_pessimistic_delete()
and a second time via purge.
The bug is in row_upd_clust_rec_by_insert(). There we relinquish ownership
of the non-updated BLOB columns in btr_cur_mark_extern_inherited_fields()
before building the row entry that will be inserted and whose contents will
be logged in the UNDO log. However, we don't later set the BLOB columns to
INHERITED so that a possible rollback will not free the original row's
non-updated BLOB entries. This is because the code doing that is guarded by
the condition:
if (node->upd_ext) {}
and node->upd_ext is non-NULL only if a BLOB column was updated and that
column is part of some key ordering (see row_upd_replace()). This results in
the non-updated BLOB columns being deleted during a rollback and subsequently
by purge again.
rb://413
Reverted the ulong->uint diff
Re-applied the first diff.
The original commit message follows:
enum plugin system variables are ulong internally, not int.
On systems where a long is not the same size as an int this
causes problems.
Fixed by correct typecasting. Removed the test from the
experimental list.
The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms where
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
Removed the test from the experimental list.
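A self-contained illustration of why the int/ulong mix only breaks where
sizeof(int) != sizeof(long); this shows the failing access pattern, not
server code:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
      unsigned long stored= 2;     /* enum value kept internally as a ulong */
      unsigned int  read_back;     /* ...but read back as if it were an int */
      memcpy(&read_back, &stored, sizeof(read_back));
      /* Works when sizeof(int) == sizeof(long); on big-endian LP64 platforms
         this picks up the wrong 4 bytes and prints 0 instead of 2. */
      printf("%u\n", read_back);
      return 0;
    }
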
if() treated any non-numeric string as false
Fixed to treat those as true instead
Added some test cases
Fixed missing $ in variable name in include/mix2.inc
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we switch to row-based logging for
LOAD DATA INFILE. For statement-based logging, we document the behaviour
to remind users that the temporary file is left in a temporary directory.
A CREATE...SELECT that fails is written to the binary log if a non-transactional
table is updated. If the logging format is ROW, the CREATE statement and the
changes are written to the binary log as distinct events, and as a consequence
the created table is not rolled back on the slave.
In this patch, we opted to let the slave go out of sync by not writing the
CREATE statement to the binary log. We do this by simply resetting the binary
log's cache.
Bug #54461: crash with longblob and union or update with subquery
Queries may crash, if
1) the GREATEST or the LEAST function has a mixed list of
numeric and LONGBLOB arguments and
2) the result of such a function goes through an intermediate
temporary table.
An Item that references a LONGBLOB field has max_length of
UINT_MAX32 == (2^32 - 1).
The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (which
contradicts the current documentation; this contradiction
was discussed and it was decided to update the documentation).
The max_length of such a function call was calculated as the
maximum of the argument max_length values (i.e. UINT_MAX32).
That max_length value of UINT_MAX32 was used as the length of
the intermediate temporary table's Field_double that holds the
GREATEST/LEAST function result.
The Field_double::val_str() method call on that field
allocates a String value.
Since an allocation of a String reserves an additional byte
for zero-termination, the size of the String buffer was
set to (UINT_MAX32 + 1), which caused an integer overflow:
in effect, an empty buffer of size 0 was allocated.
Initializing the "first" byte of that zero-size
buffer with '\0' caused the crash.
Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result as
we do for arithmetic operators.
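The overflow itself is easy to reproduce in isolation (illustration only,
not server code):

    #include <stdio.h>

    int main(void)
    {
      unsigned int max_length= 4294967295U;    /* UINT_MAX32: LONGBLOB item */
      unsigned int alloc_size= max_length + 1; /* +1 for the trailing '\0'  */
      printf("%u\n", alloc_size);              /* wraps around to 0 */
      return 0;
    }
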
In order to be able to check whether the set of grouping fields in a
GROUP BY has changed (and thus a new group starts), the optimizer
caches the current values of these fields in a set of Cached_item
derived objects.
Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
In Cached_item_str's constructor, a String buffer is allocated to store
the value, with an alloced length of either the maximum length of the
string or the value of max_sort_length (whichever is smaller).
Then, at compare time, the value of the string to compare against was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as the values assigned are
not near or equal to the alloced length of this buffer, because when
such values are assigned the alloced length is rounded up, and as a
result the next set of data no longer matches the group buffer, leading
to wrong results because of the changed alloced_length.
Fixed by preserving the original maximum length in
Cached_item_str's constructor and using it, instead of
alloced_length, to limit the string to compare against.
Test case added.
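A self-contained sketch of the idea (names hypothetical; std::string stands
in for the server's String class): keep the requested limit separately and
never derive it from the allocated buffer size, which may be rounded up.

    #include <cstddef>
    #include <string>

    struct CachedStr
    {
      std::string value;
      std::size_t limit;                        /* original maximum length */

      explicit CachedStr(std::size_t max_len) : limit(max_len)
      { value.reserve(max_len); }               /* capacity may be rounded up */

      /* Returns true when the (truncated) new value starts a new group. */
      bool cmp_and_store(const std::string &next)
      {
        std::string truncated= next.substr(0, limit);  /* truncate to limit, */
        if (truncated == value)                        /* not to capacity()  */
          return false;
        value= truncated;
        return true;
      }
    };
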
Fix a regression (due to a typo) which caused spurious incorrect
argument errors for long data stream parameters if all forms of
logging were disabled (binary, general and slow logs).