The optimizer needs to determine whether predicates are better
evaluated using an index. IN is one such predicate.
To qualify, an IN predicate must have a field of the index on the
left and constant arguments on the right.
However, whether an expression is constant can be determined only
by knowing the preceding tables in the join order.
Assuming that only IN predicates whose right-hand expressions are
constant for the whole query qualify limits the scope of possible
optimizations of the IN predicate; more specifically, it rules out
the "Range checked for each record" optimization for such an IN
predicate.
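For illustration, in a hypothetical query like the following (tables
t1 and t2 and an index on t2.key_col are assumed), the right-hand
operands become constant only once t1 precedes t2 in the join order:

  SELECT * FROM t1, t2 WHERE t2.key_col IN (t1.a, t1.b);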
Fixed by not pre-determining the optimizability of the IN predicate
in the case when the right-hand IN operands are not all SQL constant
expressions.
of untouched rows in full table scans".
SELECT ... FOR UPDATE/LOCK IN SHARE MODE statements, as well as
UPDATE/DELETE statements which were executed using a full table
scan, were not releasing locks on rows which did not satisfy the
WHERE condition.
This bug surfaced in 5.0 and affected NDB tables. (InnoDB tables
intentionally don't support such unlocking in default mode).
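A hypothetical statement of the affected kind (assuming an NDB table t
with a non-indexed column c):

  SELECT * FROM t WHERE c = 10 FOR UPDATE;

Rows scanned but not matching c = 10 should have their locks released.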
This problem occurred because the code implementing joins did not call
handler::unlock_row() for rows which did not satisfy the part of the
condition attached to that particular table/level of the nested loop.
We solve the problem by adding this call.
Note that we already had this call in place in 4.1, but it was lost
(actually, not quite correctly placed) when nested joins were
introduced.
Also note that additional QA should be requested once this patch is
pushed, as the interaction between handler::unlock_row() and many recent
MySQL features such as subqueries, unions and views is not tested enough.
for queries using 'range checked for each record'.
The problem was fixed in 5.0 by the patch for bug 12291.
This patch down-ported the corresponding code from 5.0 into
QUICK_SELECT::init() and added a new test case.
in a select list.
The objects of the Item_trigger_field class inherited the implementations
of the methods copy_or_same, get_tmp_table_item and get_tmp_table_field
from the class Item_field, while they should rather have used the default
implementations defined for the base class Item.
It could cause catastrophic problems for triggers that used SELECTs
with a select list containing trigger fields such as NEW.<table column>
under DISTINCT.
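A hypothetical trigger of the affected kind (table and column names are
illustrative):

  CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW
    INSERT INTO log SELECT DISTINCT NEW.id, t2.note FROM t2;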
- Make the code produce correct results: use an array of triggers to turn
on/off equalities for each compared column, and also turn on/off
optimizations based on those equalities.
- Make EXPLAIN output show "Full scan on NULL key" for tables for which we switch between
ref/unique_subquery/index_subquery and ALL access.
- The index_subquery engine now has a HAVING clause when it is needed, and
it is displayed in EXPLAIN EXTENDED.
- Fix the incorrect presence of "Using index" for index/unique-based
subqueries (BUG#22930)
// bk trigger note: this commit refers to BUG#24127
When transforming "oe IN (SELECT ie ...)" wrap the pushed-down predicates
iff "oe can be null", not "ie can be null".
The fix doesn't cover row-based subqueries, those will be fixed in #24127.
Currently in the ONLY_FULL_GROUP_BY mode no hidden fields are allowed in the
select list. To ensure this each expression in the select list is checked
to be a constant, an aggregate function or to occur in the GROUP BY list.
The last two requirements are wrong and don't allow valid expressions like
"MAX(b) - MIN(b)" or "a + 1" in a query with grouping by a.
The correct check implemented by the patch will ensure that:
any field reference in the [sub]expressions of the select list
is under an aggregate function or
is mentioned as member of the group list or
is an outer reference or
is part of the select list element that coincides with a grouping element.
The Item_field objects can now contain the position of the select list
expression to which they belong. The position is saved during the
field's Item_field::fix_fields() call.
The non_agg_fields list for non-aggregated fields is added to the SELECT_LEX
class. The SELECT_LEX::cur_pos_in_select_list now contains the position in the
select list of the expression being currently fixed.
aliases ignored
When a column reference to a column in JOIN USING was resolved and a new
Item was created for this column, the user-defined name was lost.
This fix preserves the alias by setting the name of the new Item to the
original alias.
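A hypothetical example (tables t1 and t2 both have a column a):

  SELECT a AS my_alias FROM t1 JOIN t2 USING (a);

The result column should be named my_alias, not a.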
correctly.
The Item_func::print method was used to print the Item_func_encode and the
Item_func_decode objects. The last argument to the ENCODE and DECODE
functions is a plain C string and thus Item_func::print wasn't able to
print it.
The print() method is added to the Item_func_encode class. It correctly
prints the Item_func_encode and the Item_func_decode objects.
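One context where the printed form matters is a view definition, as in
this hypothetical example:

  CREATE VIEW v AS SELECT DECODE(ENCODE('secret', 'key_str'), 'key_str') AS c;
  SHOW CREATE VIEW v;

The stored view definition is produced via print(), so the last string
argument must be printed correctly.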
WHERE is present.
If a DELETE statement with ORDER BY and LIMIT contains a WHERE clause
with conditions that certainly cannot be used for index access (like
WHERE @var:= field), the execution always follows the filesort path.
Currently this happens even when there is an index that can be used to
speed up sorting by the ORDER BY list.
Now, if a DELETE statement with ORDER BY and LIMIT contains WHERE
clause conditions that cannot be used to build any quick select,
mysql_delete() tries to use an index as if there were no WHERE clause at all.
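An illustrative statement (a hypothetical table t with an index on
key_col and a non-indexed column c):

  DELETE FROM t WHERE @v := c ORDER BY key_col LIMIT 10;

The WHERE condition cannot be used for index access, but the index on
key_col can still be used to avoid filesort for the ORDER BY.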
In the method Item_field::fix_fields we try to resolve the name of
the field against the names of the aliases that occur in the select
list. This is done by a call of the function find_item_in_list.
When this function finds several occurrences of the field name
it sends an error message to the error queue and returns 0.
Yet the code did not take into account that find_item_in_list
could return 0 and tried to dereference the returned value.
used.
The Item::save_in_field() function is called from fill_record() to fill the
new row with data during execution of the CREATE TABLE ... SELECT statement.
Item::save_in_field() calls val_xxx() methods in order to get values.
The val_xxx() methods do not take into account the result field. Due to this,
the Item_func_set_user_var::val_xxx() methods return values from the original
table, not from the temporary one.
The save_in_field() member function is added to the Item_func_set_user_var
class. It detects whether the result field should be used and properly updates
the value of the user variable.
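An illustrative statement (hypothetical tables):

  CREATE TABLE t2 SELECT @v := a AS a FROM t1;

The value of @v should be taken from the result field of the new table
t2, not from the original table t1.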
A BINARY field is represented by the Field_string class. The space character
is used as the filler for unused characters in such a field. But a BINARY field
should use \x00 instead.
Field_string::reset() now detects whether the current field is a BINARY one
and if so uses the \x00 character as the default value filler.
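An illustrative case (a hypothetical table, assuming non-strict sql_mode
so the implicit default is used for the omitted column):

  CREATE TABLE t (b BINARY(4) NOT NULL, i INT);
  INSERT INTO t (i) VALUES (1);
  SELECT HEX(b) FROM t;  -- expected 00000000, not 20202020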
correctly in some cases".
In short, calls to a stored function located in a database other than
the default database may fail to replicate if the call was made
by SET, SELECT, or DO.
Longer: when a stored function is called from a statement which does not go
to binlog ("SET @a=somedb.myfunc()", "SELECT somedb.myfunc()",
"DO somedb.myfunc()"), this crafted statement is binlogged:
"SELECT myfunc();" (accompanied with a mention of the default database
if there is one). So, if "somedb" is not the default database,
the slave would fail to find myfunc(). The fix is to specify the
function's database name in the crafted binlogged statement, like this:
"SELECT somedb.myfunc();". Test added in rpl_sp.test.
The optimizer removes expressions from GROUP BY/DISTINCT
if they happen to participate in <expression> = <const>
predicates of the WHERE clause (the idea being that if
the expression is always equal to a constant it can't have
multiple values).
However, for predicates where the expression and the
constant item are of different result types this is not
valid (e.g. a string column compared to 0).
Fixed by additionally checking the result types of the
expression and the constant: if they differ, the
expression does not get removed from the GROUP BY list.
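An illustrative query (a hypothetical table t with a string column s):

  SELECT DISTINCT s FROM t WHERE s = 0;

Many different string values compare equal to 0 when converted to
numbers, so s must not be removed from the DISTINCT (GROUP BY)
processing.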
The function mi_get_pointer_length() computed a pointer size that was
too small for very large tables.
Inserted the missing 'else' between the branches for very
large tables.
This bug appeared after the patch for bug 21390 that had added some code
to handle outer joins with no matches after substitution of a const
table in an efficient way. That code, as it is, cannot be applied to
nested outer join operations. Applied to queries with
nested outer joins, the code can cause crashes or wrong result sets.
The fix blocks row substitution for const inner tables of an outer join
if the inner operand is not a single table.
With MySQL 3.23 and 4.0, the syntax 'LIMIT N, -1' is accepted, and returns
all the rows located after row N. This behavior, however, is not the
intended result, and defeats the purpose of LIMIT, which is to constrain
the size of a result set.
With MySQL 4.1 and later, this construct is correctly detected as a syntax
error.
This fix does not change the production code; it only adds a new test case
to improve test coverage in this area and to enforce the intended behavior
in the test suite.
repair it
Multi-table delete that is optimized with QUICK_RANGE reports table
corruption.
A DELETE statement must not use the KEYREAD optimization, and sets
table->no_keyread to 1. This was ignored in the QUICK_RANGE optimization.
With this fix the QUICK_RANGE optimization honors the table->no_keyread
value and does not enable KEYREAD when table->no_keyread is set.
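A hypothetical multi-table delete of the affected kind (assuming an
index on t1.id usable for the range condition):

  DELETE t1 FROM t1, t2 WHERE t1.id = t2.id AND t1.id BETWEEN 1 AND 100;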
When only one row was present, the subtraction of nearly the same number
resulted in catastrophic cancellation, introducing an error in the
VARIANCE calculation near 1e-15. When that was sqrt()ed to get STDDEV,
the error was escalated to near 1e-8.
The simple fix of testing for a row count of 1 and forcing that to yield
0.0 is insufficient, as two rows of the same value should also have a
variance of 0.0, yet the error would be about the same.
So, this patch changes the formula that computes the VARIANCE to be one
that is not subject to catastrophic cancellation.
In addition, it now uses only (faster-than-decimal) floating point numbers
for the calculation, and renders the result to other types on demand.
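A recurrence of the following form avoids the cancellation (a sketch
assuming a Welford-style update; the exact formula used by the patch
may differ):

  M(1) = x(1)      M(k) = M(k-1) + (x(k) - M(k-1)) / k
  S(1) = 0         S(k) = S(k-1) + (x(k) - M(k-1)) * (x(k) - M(k))

  VARIANCE = S(n) / n

Each term added to S(k) is non-negative, so no large, nearly equal
quantities are ever subtracted.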
An UPDATE that used a join of a table to itself and modified the
table on one side of the join reported the table as crashed or
updated wrong rows.
Fixed by creating a temporary table for self-joined multi-table UPDATE
statements.
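A hypothetical statement of the affected kind (the table t is joined to
itself and modified on one side):

  UPDATE t AS a, t AS b SET a.val = b.val + 1 WHERE a.parent_id = b.id;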
Handling of large signed/unsigned values was not consistent, so some
string functions could return bogus results.
The current fix is to simply patch up the val_str() methods for those
string items.
It would be good to clean this code up in general, to make similar
problems much harder to introduce. This is left as an exercise for the
reader.
spatial index
While executing OPTIMIZE TABLE on MyISAM tables the server re-creates the
index file(s) in order to sort them physically by the key. This cannot be
done for R-tree indexes as it makes no sense.
The server was not checking the type of the index and was accessing an
R-tree index as if it were a B-tree.
Fixed by not sorting the index file if it contains an R-tree index.
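An illustrative scenario (a hypothetical MyISAM table with a spatial
index):

  CREATE TABLE t (g GEOMETRY NOT NULL, SPATIAL INDEX (g)) ENGINE=MyISAM;
  OPTIMIZE TABLE t;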
If the SELECT part of a CREATE VIEW statement contains '\Z',
it is not handled correctly.
The problem was in String::print().
The symbol with code 032 (26 decimal) was replaced with '\z',
which is not supported by the lexer.
The fix is to replace the symbol with '\Z'.
the UDF
When deleting a user defined function MySQL must remove it from both the
in-memory hash table and the mysql.proc system table.
Finding (and therefore removing) the function in the internal hash table
is case-insensitive (or follows whatever the default charset is), whereas
finding and removing it in the system table is case-sensitive.
As a result, if you supply a function name in a different character case
to DROP FUNCTION, the server will remove the function only from the
in-memory hash table and will keep the row in the mysql.proc system table.
This causes an inconsistency between the two structures (which is fixed
only by restarting the server).
Fixed by using the name in the precise case (from the in-memory hash table)
to delete the row in the mysql.proc system table.
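An illustrative sequence (the function name is hypothetical):

  CREATE FUNCTION myfunc() RETURNS INT RETURN 1;
  DROP FUNCTION MYFUNC;
  SELECT name FROM mysql.proc WHERE name LIKE 'myfunc';

Before the fix, the row for myfunc could remain in mysql.proc even
though the function had been removed from the in-memory hash table.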
Problem:
When creating a temporary field for a temporary table in
create_tmp_field_from_field(), the resulting field is created as an exact
copy of the original one (in Field::new_field()). However, Field_enum and
Field_set contain a pointer (typelib) to memory allocated in the parent
table's MEM_ROOT, which under some circumstances may already have been
deallocated by the time the temporary table is used.
Solution:
Override the new_field() method for Field_enum and Field_set and create a
separate copy of the typelib structure there.
- When this bug was corrected it changed the behavior
for data/index directory in the myisam test case.
- This patch moves the OS-dependent tests to a non-Windows
test file.
Blocked evaluation of constant objects of the classes
Item_func_is_null and Item_is_not_null_test at the
prepare phase in the cases when the objects used subqueries.
Removed an assertion that was not valid for the cases where the query
in a prepared statement contained a single-row non-correlated
subquery that was used as an argument of the IS NULL predicate.
Bug#4968 "Stored procedure crash if cursor opened on altered table"
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Test cases for bugs 4968, 19733, 6895 will be added in 5.0.
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE ...
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem of bugs 4968, 19733, 19182 and 6895 was that the functions
mysql_prepare_table, mysql_create_table and mysql_alter_table were not
re-execution friendly: during their operation they used to modify contents
of LEX (members create_info, alter_info, key_list, create_list),
thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. Search the code in sql_table.cc
for drop_it.remove() and similar patterns to find evidence.
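An illustrative case of the re-execution problem (cf. Bug#6895; table
and column names are hypothetical):

  PREPARE s FROM 'ALTER TABLE t1 DROP COLUMN b';
  EXECUTE s;                       -- drops column b
  ALTER TABLE t1 ADD COLUMN b INT; -- restore the column
  EXECUTE s;                       -- used to do nothing: the drop list in LEX had been consumed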
The fix is to supply to these functions a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were added to LEX::alter_info, a fresh copy of which is created for
every execution.
The problem of the crashing bug 22060 stemmed from the fact that the above
mentioned functions were not only modifying the HA_CREATE_INFO structure
in LEX, but also were changing it to point to areas in the volatile memory
of the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO (note that code in 5.1 already creates and
uses a copy of this structure in mysql_create_table()/alter_table(),
but this approach didn't work well for CREATE TABLE SELECT statement).
- Using DATA/INDEX DIRECTORY option on Windows put data/index file into
default directory because the OS doesn't support readlink().
- The procedure for changing data/index file directory is
different under Windows.
- With this fix we report a warning if the DATA/INDEX DIRECTORY option is
used but the OS doesn't support readlink().
table
ROW_FORMAT option is lost during CREATE/DROP INDEX.
This fix forces CREATE/DROP INDEX to retain ROW_FORMAT by instructing
mysql_alter_table() that ROW_FORMAT is not used during creating/dropping
indexes.
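An illustrative sequence (a hypothetical MyISAM table):

  CREATE TABLE t (a INT NOT NULL, b CHAR(10)) ENGINE=MyISAM ROW_FORMAT=FIXED;
  CREATE INDEX i ON t (a);
  SHOW TABLE STATUS LIKE 't';  -- Row_format should still be Fixed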
The problem is that a GEOMETRY NOT NULL column can't automatically set
any value as a default one. We always tried to complete the LOAD DATA
command even if there was not enough data in the file. That doesn't work
for GEOMETRY NOT NULL. Now Field_*::reset() returns an error indication
and it is checked in mysql_load().
Problem: replication of LC_TIME_NAMES didn't work.
Thus, INSERTs or UPDATEs using date_format() always
worked with en_US on the slave side.
Fix: adding a ONE_SHOT implementation for LC_TIME_NAMES.
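An illustrative statement that must replicate correctly (a hypothetical
table t):

  SET lc_time_names = 'es_ES';
  INSERT INTO t VALUES (DATE_FORMAT('2007-01-01', '%M'));  -- 'enero', not 'January'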
Problem: ``SET PASSWORD FOR foo@localhost'' was written into the
binary log using double quotes: ``SET PASSWORD FOR "foo"@"localhost"...''.
If sql_mode was set to ANSI_QUOTES, the parser on the slave considered
"foo" and "localhost" to be identifiers instead of string constants,
so it failed to parse, generated a syntax error and the slave then stopped.
Fix: changing binary log entries to use single quotes:
``SET PASSWORD FOR 'foo'@'localhost'...'', so as not to depend on ANSI_QUOTES.