After the fix for bug 39653, the shortest available secondary index was used
for a full table scan. The clustered primary key was used only if no secondary
index could be used. However, when the chosen secondary index includes all
fields of the table being scanned, it is better to use the primary index: the
amount of data to scan is the same, but the primary index is clustered.
The find_shortest_key function now takes this into account.
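A minimal standalone sketch of the adjusted heuristic; the IndexInfo fields
and the covers_table test are illustrative stand-ins for what the real
find_shortest_key() reads from the server's TABLE and KEY structures:

  #include <limits>

  // Illustrative stand-in for the per-index metadata the server keeps.
  struct IndexInfo {
    unsigned length;        // total key length in bytes
    bool     is_clustered;  // e.g. the InnoDB primary key
    bool     covers_table;  // index contains every column of the table
  };

  // Choose the cheapest index for a full scan: the shortest usable key,
  // but prefer the clustered primary key when the chosen secondary index
  // covers all table columns anyway (same data volume, clustered order).
  int find_shortest_key(const IndexInfo *keys, int n_keys, int primary) {
    int best = -1;
    unsigned best_len = std::numeric_limits<unsigned>::max();
    for (int i = 0; i < n_keys; i++) {
      if (keys[i].length < best_len) {
        best_len = keys[i].length;
        best = i;
      }
    }
    if (best >= 0 && best != primary &&
        keys[best].covers_table && keys[primary].is_clustered)
      best = primary;  // the fix: same amount of data, but clustered
    return best;
  }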
The added code resulted in a strange linking problem for the embedded server
on Windows. Avoided by not doing this in embedded mode; it is irrelevant for
the embedded server anyway, since --protocol is ignored there.
Queries involving predicates of the form "const NOT BETWEEN
not_indexed_column AND indexed_column" could return wrong data
due to incorrect handling by the range optimizer.
For "c NOT BETWEEN f1 AND f2" predicates, get_mm_tree()
produces a disjunction of the SEL_ARG trees for "f1 > c" and
"f2 < c". If one of the trees is empty (i.e. one of the
arguments is not sargable) the resulting tree should be empty
as well, since the whole expression in this case is not
sargable.
The above logic is implemented in get_mm_tree() as follows. The
initial state of the resulting tree is NULL (aka empty). We
then iterate through arguments and compute the corresponding
SEL_ARG tree (either "f1 > c" or "f2 < c"). If the resulting
tree is NULL, it is simply replaced by the generated
tree. Otherwise it is replaced by a disjunction of itself and
the generated tree. The obvious flaw in this implementation is
that if the first argument is not sargable and thus produces a
NULL tree, the resulting tree will simply be replaced by the
tree for the second argument. As a result, "c NOT BETWEEN f1
AND f2" will end up as just "f2 < c".
Fixed by adding a check so that when the first argument
produces an empty tree for the NOT BETWEEN case, the loop is
aborted with an empty tree as a result. The whole idea of using
a loop for 2 arguments does not make much sense, but it was
probably used to avoid code duplication for several BETWEEN
variants.
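A simplified sketch of the corrected loop; build_arg_tree() and tree_or() are
hypothetical stand-ins for the SEL_ARG machinery inside get_mm_tree():

  struct SEL_TREE;  // opaque range-condition tree, as in sql/opt_range.cc

  // Hypothetical helpers: build the tree for "f1 > c" / "f2 < c" (NULL if
  // the argument is not sargable), and OR two trees together.
  SEL_TREE *build_arg_tree(int arg_no);
  SEL_TREE *tree_or(SEL_TREE *left, SEL_TREE *right);

  SEL_TREE *get_not_between_tree() {
    SEL_TREE *tree = nullptr;
    for (int i = 0; i < 2; i++) {
      SEL_TREE *new_tree = build_arg_tree(i);
      if (new_tree == nullptr)
        return nullptr;  // the fix: one non-sargable argument makes the
                         // whole disjunction non-sargable, so abort the
                         // loop instead of letting the next iteration
                         // silently overwrite the NULL result
      tree = (tree == nullptr) ? new_tree : tree_or(tree, new_tree);
    }
    return tree;
  }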
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour
BUG#55474, BUG#55499, BUG#55598, BUG#55616 and BUG#55777 are fixed
in this patch too.
This is the 5.1 part.
It implements:
- if the table exists, binlog two events: CREATE TABLE IF NOT EXISTS
and INSERT ... SELECT
- if the existing object is a view, insert nothing and binlog nothing
on the master; only generate a warning that the table already exists
(see the sketch below)
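A minimal sketch of that decision; the enum and the decide() helper are
illustrative, not the actual 5.1 code path:

  // What already exists under the name given to CREATE TABLE IF NOT EXISTS.
  enum ExistingObject { NOTHING, BASE_TABLE, VIEW };

  struct BinlogDecision {
    bool log_create_if_not_exists;  // binlog a CREATE TABLE IF NOT EXISTS event
    bool log_insert_select;         // binlog an INSERT ... SELECT event
    bool warn_table_exists;         // push a "table already exists" warning
  };

  BinlogDecision decide(ExistingObject obj) {
    switch (obj) {
      case BASE_TABLE: return {true, true, true};    // two events, plus warning
      case VIEW:       return {false, false, true};  // warn only, log nothing
      case NOTHING:    return {true, true, false};   // normal path (not this fix)
    }
    return {false, false, false};
  }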
The server was not checking for errors generated during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code at the places
where copy_funcs() is called.
Test case added.
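A simplified sketch of the new error propagation, with the Item interface
reduced to a single method:

  // Reduced Item interface: evaluating and storing a value can now fail.
  struct Item {
    virtual bool save_in_result_field() = 0;  // true on evaluation error
    virtual ~Item() {}
  };

  // Before the fix this returned void, so errors raised inside
  // Item::val_xxx() were silently dropped while filling the temp table row.
  bool copy_funcs(Item **items, unsigned count) {
    for (unsigned i = 0; i < count; i++)
      if (items[i]->save_in_result_field())
        return true;  // caller aborts writing the row and reports the error
    return false;
  }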
The assert() that is firing checks whether expressions that cannot be
NULL return a NULL when evaluated.
The MAKEDATE() function can return NULL if the second argument is
less than or equal to 0. Thus its nullability depends not only on
the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting MAKEDATE() to be nullable
regardless of the nullability of its arguments.
Test added.
Had to update one test result to reflect the metadata change.
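In sketch form (the member and method names follow the server's Item
convention, but the snippet is illustrative):

  // Illustrative: nullability of MAKEDATE() cannot be derived from its
  // arguments alone, because MAKEDATE(year, day) returns NULL for day <= 0
  // even when both arguments are NOT NULL.
  struct Item_func_makedate_sketch {
    bool maybe_null;
    void fix_length_and_dec() {
      maybe_null = true;  // the fix: always nullable (overoptimistic but safe)
    }
  };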
The test for bug #50939 was put in range.test, which is not a good idea
since it requires partitioning. Fixed by moving the test case to
partitioning_range.test.
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This causes all errors to be ignored.
However, not all errors can be ignored. Some, such as ER_FIELD_SPECIFIED_TWICE
will cause the INSERT to fail no matter what. But since lex->no_error was set,
the critical errors were ignored, the INSERT failed and neither OK nor the
error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
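A hedged sketch of the save/clear/restore pattern; the helper is
hypothetical, as the actual fix adjusts the flag directly at the affected
spots:

  struct LEX { bool no_error; };  // reduced: only the flag we care about

  // Run a check whose errors must reach the client even under IGNORE,
  // e.g. the duplicate-column check raising ER_FIELD_SPECIFIED_TWICE.
  template <class Check>
  bool run_non_ignorable_check(LEX *lex, Check check) {
    bool saved = lex->no_error;
    lex->no_error = false;  // critical errors must not be suppressed
    bool error = check();
    lex->no_error = saved;  // restore IGNORE semantics afterwards
    return error;
  }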
file .\item_subselect.cc, line 836
IN quantified predicates are never executed directly. Rather, they are
wrapped inside nodes called IN Optimizers (Item_in_optimizer) which take
care of the execution. However, this wrapping is not done during query
preparation. Unfortunately, the LIKE predicate pre-evaluates constant
right-hand side arguments even during name resolution, likely as an
optimization.
Fixed by not pre-evaluating LIKE arguments in view prepare mode.
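In outline, with illustrative names (the real change guards the
pre-evaluation in the LIKE item's fix_fields() path):

  // Only pre-evaluate a constant LIKE argument when not preparing a view,
  // where IN predicates are not yet wrapped in their Item_in_optimizer
  // nodes and so must not be executed.
  bool may_preevaluate_like_arg(bool arg_is_const, bool view_prepare_mode) {
    return arg_is_const && !view_prepare_mode;
  }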
Reverted the ulong->uint diff and re-applied the first diff.
The original commit message follows:
Enum plugin system variables are ulong internally, not int.
On systems where a long is not the same size as an int, this
causes problems.
Fixed by correct typecasting. Removed the test from the
experimental list.
The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms on which
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
Removed the test from the experimental list.
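A sketch of the homogenized handling; the variable and helper names are
illustrative:

  // Keep enum system variables in one storage type, size-compatible with
  // a C enum, and cast explicitly at boundaries that expect other widths.
  typedef unsigned int sys_var_enum_t;

  static sys_var_enum_t enum_var_storage;  // illustrative variable

  unsigned long enum_var_as_ulong() {
    // Widen explicitly; aliasing int-sized storage through an unsigned
    // long lvalue reads stray bytes where sizeof(int) != sizeof(long).
    return static_cast<unsigned long>(enum_var_storage);
  }

  void set_enum_var(unsigned long parsed_value) {
    // Narrow explicitly; enum value sets are small, so this cannot truncate.
    enum_var_storage = static_cast<sys_var_enum_t>(parsed_value);
  }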
if() treated any non-numeric string as false.
Fixed to treat such strings as true instead.
Added some test cases.
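A hedged sketch of the corrected coercion; the helper name and the
empty-string case are assumptions, and only the non-numeric-strings-are-true
rule comes from the fix:

  #include <cstdlib>
  #include <string>

  bool string_to_bool(const std::string &s) {
    char *end = nullptr;
    double num = std::strtod(s.c_str(), &end);
    if (end != s.c_str() && *end == '\0')
      return num != 0.0;  // fully numeric string: keep numeric truthiness
    // Before the fix strtod() turned any non-numeric string into 0, hence
    // false; such strings now count as true (empty-string case assumed).
    return !s.empty();
  }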
Also fixed a missing $ in a variable name in include/mix2.inc.
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we switch to row-based logging. For
statement-based logging, we document the behaviour to remind users that
the temporary file is left in a temporary directory.
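In sketch form (the enum and helper are illustrative):

  enum BinlogFormat { BINLOG_STATEMENT, BINLOG_MIXED, BINLOG_ROW };

  // Decide the event family used to binlog LOAD DATA INFILE.  MIXED now
  // switches to row events so mysqlbinlog never needs to re-create the
  // data file; STATEMENT keeps the old events, with the leftover
  // temporary file documented.
  bool load_data_uses_row_events(BinlogFormat fmt) {
    return fmt == BINLOG_MIXED || fmt == BINLOG_ROW;
  }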
A CREATE...SELECT that fails is written to the binary log if a non-transactional
table is updated. If the logging format is ROW, the CREATE statement and the
changes are written to the binary log as distinct events, and as a consequence
the created table is not rolled back on the slave.
In this patch, we opted to let the slave go out of sync by not writing the
CREATE statement to the binary log. We do this by simply resetting the binary
log's cache.
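A minimal sketch of that cache reset; the BinlogCache type is a hypothetical
stand-in for the server's IO_CACHE-based statement cache:

  #include <cstddef>
  #include <vector>

  // Hypothetical stand-in for the statement's pending binlog cache.
  struct BinlogCache {
    std::vector<unsigned char> events;
    void truncate(std::size_t pos) { events.resize(pos); }
  };

  // On a failed CREATE ... SELECT, drop everything the statement queued
  // (the CREATE event and the row events) so nothing reaches the binary
  // log and the slave never creates the table.
  void rollback_failed_create_select(BinlogCache *cache,
                                     std::size_t stmt_start_pos) {
    cache->truncate(stmt_start_pos);
  }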
Bug #54461: crash with longblob and union or update with subquery
Queries may crash, if
1) the GREATEST or the LEAST function has a mixed list of
numeric and LONGBLOB arguments and
2) the result of such a function goes through an intermediate
temporary table.
An Item that references a LONGBLOB field has max_length of
UINT_MAX32 == (2^32 - 1).
The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (this
contradicts the current documentation; the contradiction
was discussed and it was decided to update the documentation).
The max_length of such a function call was calculated as a
maximum of argument max_length values (i.e. UINT_MAX32).
That max_length value of UINT_MAX32 was used as a length for
the intermediate temporary table Field_double to hold
GREATEST/LEAST function result.
The Field_double::val_str() method call on that field
allocates a String value.
Since an allocation of String reserves an additional byte
for a zero-termination, the size of String buffer was
set to (UINT_MAX32 + 1), that caused an integer overflow:
actually, an empty buffer of size 0 was allocated.
An initialization of the "first" byte of that zero-size
buffer with '\0' caused a crash.
Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result
the way it is done for arithmetic operators.
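Sketched with a float_length()-style formula of the kind arithmetic
operators use (DBL_DIG comes from <cfloat>; NOT_FIXED_DEC mirrors the
server's no-fixed-decimals marker, assumed here to be 31):

  #include <cfloat>

  const unsigned NOT_FIXED_DEC = 31;  // marker: no fixed decimal count

  // max_length for a REAL result: a bounded display width instead of the
  // UINT_MAX32 inherited from a LONGBLOB argument (which overflowed to a
  // zero-size String buffer).
  unsigned real_result_max_length(unsigned decimals) {
    return decimals == NOT_FIXED_DEC
               ? DBL_DIG + 8               // free-form double
               : DBL_DIG + 2 + decimals;   // digits + sign + point + fraction
  }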
In order to be able to check if the set of the grouping fields in a
GROUP BY has changed (and thus to start a new group) the optimizer
caches the current values of these fields in a set of Cached_item
derived objects.
The Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
A String buffer to store the value is allocated in Cached_item_str's
constructor, with an alloced length of either the maximum length of the
string or the value of max_sort_length (whichever is smaller).
Then, at compare time the value of the string to compare to was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as no values near or equal
to the alloced length of the buffer are assigned. When such values are
assigned, the alloced length is rounded up, and as a result the next set
of data no longer matches the group buffer, leading to wrong results
because of the changed alloced_length.
Fixed by preserving the original maximum length in the
Cached_item_str's constructor and using this instead of the
alloced_length to limit the string to compare to.
Test case added.
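A standalone sketch of the fix, with std::string standing in for the
server's String class:

  #include <algorithm>
  #include <string>

  struct Cached_item_str_sketch {
    std::string::size_type value_max_length;  // the fix: preserved limit
    std::string value;                        // cached group value

    explicit Cached_item_str_sketch(std::string::size_type max_len)
        : value_max_length(max_len) {
      value.reserve(max_len);  // capacity may be rounded up internally
    }

    // Returns true when the (truncated) current value starts a new group.
    // Truncating to value_max_length, not to the rounded-up capacity,
    // keeps successive rows of one group comparing equal.
    bool changed(const std::string &current) {
      std::string truncated =
          current.substr(0, std::min(current.size(), value_max_length));
      if (truncated == value) return false;
      value = truncated;
      return true;
    }
  };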
Fix a regression (due to a typo) that caused spurious "incorrect
argument" errors for long data stream parameters when all forms of
logging were disabled (binary, general, and slow logs).