Objects of the class Item_equal contain an auxiliary member
eval_item of the type cmp_item that is used only for direct
evaluation of multiple equalities. Currently a multiple equality
is evaluated directly only in cases where the equality holds
for at most one row in the result set.
The compare collation of eval_item was determined incorrectly.
It could lead to returning incorrect results for some queries.
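A minimal sketch of the kind of query affected (hypothetical tables
and collations, not taken from the original test case): the constant
restricts the multiple equality to at most one row, so it may be
evaluated directly via eval_item, and the compare collation then
decides whether 'a' and 'A' match.

  CREATE TABLE t1 (a VARCHAR(10) COLLATE latin1_general_ci, KEY (a));
  CREATE TABLE t2 (a VARCHAR(10) COLLATE latin1_general_ci, KEY (a));
  INSERT INTO t1 VALUES ('a');
  INSERT INTO t2 VALUES ('A');
  -- t1.a, t2.a and 'A' form one multiple equality; eval_item's
  -- compare collation must be the aggregated (case-insensitive)
  -- one for 'a' = 'A' to match.
  SELECT * FROM t1, t2 WHERE t1.a = t2.a AND t1.a = 'A';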
View check option clauses were ignored for updates of multi-table
views when the updates could not be performed on the fly and the rows
to update had to be put into temporary tables first.
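A minimal sketch of the scenario (hypothetical schema; whether the
update actually goes through a temporary table depends on the
optimizer):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT);
  CREATE TABLE t2 (id INT PRIMARY KEY, b INT);
  CREATE VIEW v1 AS
    SELECT t1.id, t1.a, t2.b
    FROM t1 JOIN t2 ON t1.id = t2.id
    WHERE t1.a > 0
    WITH CHECK OPTION;
  -- Even if the rows to update are buffered in a temporary table
  -- first, the CHECK OPTION must still reject rows violating a > 0.
  UPDATE v1 SET a = -1 WHERE id = 1;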
A query over a view with a column of the DATETIME type could return a wrong
result set if the WHERE clause included a BETWEEN condition
on the column.
Fixed the method Item_func_between::fix_length_and_dec
where the aggregation type for BETWEEN predicates was calculated
incorrectly if the first argument was a view column of the
DATETIME type.
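A sketch of the affected pattern (hypothetical table and view):

  CREATE TABLE t1 (dt DATETIME);
  INSERT INTO t1 VALUES ('2007-01-01 00:00:00'), ('2007-06-15 12:00:00');
  CREATE VIEW v1 AS SELECT dt FROM t1;
  -- The first BETWEEN argument is a view column of DATETIME type;
  -- the aggregation type must be temporal, not string, for the
  -- bounds below to be compared as dates.
  SELECT * FROM v1 WHERE dt BETWEEN '2007-01-01' AND '2007-12-31';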
INSERT ... ON DUPLICATE KEY UPDATE reported that a record was updated
when the duplicate key occurred even if the record wasn't actually
changed, because the update values were the same as those in the record.
Now the compare_record() function is used to check whether the record
was changed, and the update of a record is reported only if the record
differs from the original one.
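A minimal sketch of the change in the reported result (hypothetical
table):

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  INSERT INTO t1 VALUES (1, 10);
  -- Duplicate key, but the assignment leaves the row unchanged;
  -- compare_record() now detects this, so the statement no longer
  -- reports the row as updated.
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;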
Ignoring error codes from type conversion allows default (wrong) values to
go unnoticed in the formation of index search conditions.
Fixed by correctly checking for conversion errors.
This performance degradation for UPDATEs could be observed in update
statements whose search key cannot be converted to any valid
value of the type of the search column, as in the condition
int_fld=99999999999999999999999999, even though it is guaranteed
here that there is no row with such a key value.
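A sketch of such a statement (hypothetical table):

  CREATE TABLE t1 (int_fld INT, KEY (int_fld));
  -- The constant does not fit into INT, so the conversion error
  -- now lets the optimizer treat the range as impossible instead
  -- of searching with a silently truncated key value.
  UPDATE t1 SET int_fld = 0 WHERE int_fld = 99999999999999999999999999;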
The bug could cause choosing a sub-optimal execution plan for
a single-table query if a unique index with many null keys was
defined for the table.
It happened because the code of the check_quick_keys function
made the assumption that any key may occur in a unique index
only once. Yet this is not true for keys with nulls, which may
occur multiple times in the index.
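A sketch of the triggering pattern (hypothetical table):

  CREATE TABLE t1 (a INT, b INT, UNIQUE KEY (a));
  -- A UNIQUE index still admits any number of NULL keys.
  INSERT INTO t1 (a, b) VALUES (NULL, 1), (NULL, 2), (NULL, 3);
  -- Estimating one row for a IS NULL, as check_quick_keys did,
  -- can make the range plan look far cheaper than it really is.
  EXPLAIN SELECT * FROM t1 WHERE a IS NULL;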
Two problems here:
Problem 1:
While constructing the join columns list, the optimizer proceeds as follows:
1. Sets the join_using_fields/natural_join members of the right JOIN
operand.
2. Makes a "table reference" (TABLE_LIST) to parent the two tables.
3. Assigns the join_using_fields/is_natural_join of the wrapper table
using the join_using_fields/natural_join of the rightmost table.
4. Sets join_using_fields to NULL for the right JOIN operand.
5. Passes the parent table up to the same procedure on the upper
level.
Step 1 overrides the join_using_fields that are set for a nested
join wrapping table in step 4.
Fixed by introducing a designated variable, SELECT_LEX::prev_join_using,
to pass the data from step 1 to step 4 without destroying the wrapping
table data.
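A sketch of a query shape with USING at two nesting levels, the
situation in which the inner level's data could be clobbered
(hypothetical tables):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (b INT, c INT);
  -- The inner USING (b) list belongs to the wrapping table of the
  -- nested join; processing the outer USING (a) must not overwrite it.
  SELECT * FROM t1 JOIN (t2 JOIN t3 USING (b)) USING (a);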
Problem 2:
The optimizer checks for ambiguous columns while transforming
NATURAL JOIN/JOIN USING to JOIN ON. While doing that there was no
distinction between columns that are used in the generated join
condition (where ambiguity can be checked) and the other columns
(where ambiguity can be checked only when resolving references
coming from outside the JOIN construct itself).
Fixed by allowing the non-USING columns to be present in multiple
copies on both sides of the join and moving the ambiguity check
to the place where unqualified references to the join columns are
resolved (find_field_in_natural_join()).
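A sketch of the distinction (hypothetical tables): column b appears
on both sides but is not part of the join condition, so it is
ambiguous only when referenced without a qualifier.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  -- Allowed: b is not in the USING list and both copies are
  -- qualified, so no ambiguity arises.
  SELECT t1.b, t2.b FROM t1 JOIN t2 USING (a);
  -- Rejected when the reference is resolved in
  -- find_field_in_natural_join(): the unqualified b is ambiguous.
  SELECT b FROM t1 JOIN t2 USING (a);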
When a MERGE table is opened, compare the column and key definitions of
the underlying tables against the column and key definitions of the
MERGE table. If any of the underlying tables has a different column/key
definition, refuse to open the MERGE table.
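A sketch of a definition mismatch that is now rejected (hypothetical
tables):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a VARCHAR(10)) ENGINE=MyISAM;  -- differs from t1
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t2);
  -- Opening m1 now fails because t2's column definition does not
  -- match the MERGE table's definition.
  SELECT * FROM m1;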
The optimizer removes columns from GROUP BY/DISTINCT if they constitute
all the parts of a unique index.
However, if some of the columns can contain NULLs this cannot be done
(because a UNIQUE index can have multiple rows with NULL values).
Fixed by not using UNIQUE indexes with nullable columns to remove
grouping columns from GROUP BY/DISTINCT.
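A sketch of why the nullable case must be excluded (hypothetical
table):

  CREATE TABLE t1 (a INT, UNIQUE KEY (a));
  INSERT INTO t1 VALUES (NULL), (NULL), (1);
  -- The UNIQUE index admits multiple NULLs, so the NULL group has
  -- COUNT(*) = 2; dropping the GROUP BY would wrongly return each
  -- row separately.
  SELECT a, COUNT(*) FROM t1 GROUP BY a;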
If inserting a GPL header, include a complete one
Makefile.am, mysql.dsw, mysql.sln:
Removed C version of mysql-test-run
mysql.spec.sh:
Specify GPL license only in GPL sources
.del-my_manage.h:
Delete: mysql-test/my_manage.h
mysql.spec.sh:
Added GPL header
.del-mysql_test_run_new.c:
Delete: mysql-test/mysql_test_run_new.c
.del-mysql_test_run_new.dsp:
Delete: VC++Files/mysql-test/mysql_test_run_new.dsp
.del-my_create_tables.c:
Delete: mysql-test/my_create_tables.c
.del-mysql_test_run_new_ia64.dsp:
Delete: VC++Files/mysql-test/mysql_test_run_new_ia64.dsp
msql2mysql.sh:
Use up-to-date GPL header
.del-mysql_test_run_new.vcproj:
Delete: VC++Files/mysql-test/mysql_test_run_new.vcproj
.del-my_manage.c:
Delete: mysql-test/my_manage.c
Made the function opt_sum_query return HA_ERR_KEY_NOT_FOUND when
no matches are found (instead of the -1 it returned prior to this patch).
This change allows us to avoid possible conflicts with return values
from user-defined handler methods, which may also return -1.
No particular test cases are provided with this fix.
Checking for NULL before calling the val_xxx()
methods only catches arguments that are
known to be NULL at compile time.
Arguments that may or may not contain
NULLs (e.g. function calls and possibly others)
were not checked at all.
Fixed by first calling the val_xxx() method and
then checking for null in SEC_TO_TIME().
In addition, QUARTER() was not returning 0 (as all the
val_int() functions do when processing a NULL value).
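A sketch of a NULL that is only known at run time (hypothetical
table), the case the old compile-time check missed:

  CREATE TABLE t1 (s INT, d DATE);
  INSERT INTO t1 VALUES (NULL, NULL);
  -- The NULLs come from columns, not literals, so they are not
  -- detectable when the items are fixed; both calls must yield
  -- SQL NULL.
  SELECT SEC_TO_TIME(s), QUARTER(d) FROM t1;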