away.
During the optimization stage WHERE conditions can be changed, or even
removed entirely, if they are known for sure to be true or false. Thus they
aren't shown in the EXPLAIN EXTENDED output, which prints conditions after
optimization.
Now if all elements of an Item_cond were removed, this Item_cond is replaced
by an Item_int with the int value of the Item_cond.
If there were conditions that were optimized away entirely, the values of the
saved cond_value and having_value are printed instead.
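For illustration, a query of the affected kind (hypothetical table, not
from the original report):
CREATE TABLE t1 (a INT);
EXPLAIN EXTENDED SELECT a FROM t1 WHERE 1 = 1 AND a > 0;
-- the always-true conjunct 1 = 1 is optimized away, so the note shown
-- by EXPLAIN EXTENDED contains only the remaining condition; if the
-- whole WHERE clause were optimized away, the saved cond_value would
-- be printed instead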
Functions over sum functions weren't set up correctly for the ORDER BY
clause, which led to a wrong order of the result set.
The split_sum_func() function is now called for each ORDER BY item that
contains a sum function, to set it up correctly.
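For illustration, a query of the affected shape (hypothetical schema):
CREATE TABLE t1 (a INT, b INT);
SELECT a, SUM(b) FROM t1 GROUP BY a ORDER BY SUM(b) + 1;
-- the ORDER BY item is a function over a sum function; before the fix
-- it was not set up via split_sum_func() and the result set could come
-- back in the wrong order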
The alias_name_used flag was not set for outer references in subqueries.
As a result, when the frm representation of a view with a subquery was
generated, any outer reference resolved against an alias was replaced
with a full field name.
If the subquery and the outer query referenced the same table in their
FROM lists, this replacement effectively changed the meaning of the view
and led to wrong results for selects from this view.
Several functions were modified to ensure that the right value of the
alias_name_used flag is set for outer references resolved against
aliases.
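A sketch of the affected pattern, with a hypothetical view (not the
original test case):
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS
  SELECT a FROM t1 x
  WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.a = x.a);
-- with the bug, the outer reference x.a could be written into the
-- view's frm as t1.a, silently pointing it at the subquery's table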
When the ORDER BY clause is fixed, it is allowed to search the current
item_list in order to find aliased fields and expressions. This is fine for
a SELECT but wrong for an UPDATE statement. If the ORDER BY clause
contains a non-existing field which is mentioned in the UPDATE SET list,
the server crashes due to the use of a non-existing (0x0) field.
Now, when an Item_field is being fixed, searching the item list for
aliased expressions and fields is allowed only for SELECTs.
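A statement of the crashing shape (MissingCol is assumed not to be a
column of t1):
CREATE TABLE t1 (val INT);
UPDATE t1 SET MissingCol = MissingCol ORDER BY MissingCol;
-- before the fix, resolving the ORDER BY item against the SET list
-- could dereference a non-existing (0x0) field and crash the server;
-- now an unknown-column error is returned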
created for sorting.
Any outer reference in a subquery was represented by an Item_field object.
If the outer select employs a temporary table, all such fields should be
replaced with fields from that temporary table in order to point to the
actual data. This replacement wasn't done, which resulted in a wrong
subquery evaluation and a wrong result of the whole query.
Now any outer field is represented by two objects: an Item_field placed in
the outer select and an Item_outer_ref in the subquery. The Item_field
object is processed as a normal field, and the reference to it is saved in
the ref_pointer_array. Thus the Item_outer_ref always references the
correct field. The original field is substituted with a reference in the
Item_field::fix_outer_field() function.
A new function, fix_inner_refs(), is added to fix fields referenced from
inner selects and to fix references (Item_ref objects) to these fields.
The new Item_outer_ref class is a descendant of the Item_direct_ref class.
It additionally stores a reference to the original field and is designed to
behave more like a field.
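A sketch of the affected pattern (hypothetical schema):
CREATE TABLE t1 (a INT, b INT);
SELECT a, (SELECT MAX(b) FROM t1 i WHERE i.a = o.a) FROM t1 o
GROUP BY a;
-- the GROUP BY makes the outer select use a temporary table; the
-- correlated reference o.a must then point into that table, which the
-- Item_outer_ref / ref_pointer_array indirection now guarantees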
UPDATE contains wrong data if the SELECT employs a temporary table.
If the UPDATE values of an INSERT .. SELECT .. ON DUPLICATE KEY UPDATE
statement contain fields from the SELECT part, and the select employs a
temporary table, then those fields will contain wrong values because they
aren't adjusted to get their data from the temporary table.
The solution is to add these fields to the select's all_fields list,
to store pointers to those fields in the select's ref_pointer_array, and
to access them via Item_ref objects.
The substitution with Item_ref objects is done in the new function
Item_field::update_value_transformer(). It is called through the
item->transform() mechanism at the end of the select_insert::prepare()
function.
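A statement of the affected shape (hypothetical schema, not the original
test case):
CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
CREATE TABLE t2 (a INT, b INT);
INSERT INTO t1
  SELECT t2.a, MAX(t2.b) FROM t2 GROUP BY t2.a
  ON DUPLICATE KEY UPDATE b = t2.b;
-- the GROUP BY makes the select use a temporary table; before the fix
-- t2.b in the UPDATE part still read from the original table instead
-- of from that temporary table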
were evaluated.
According to the new rules for string comparison, partial indexes on TEXT
columns can be used in the same cases in which partial indexes on VARCHAR
columns can be used.
The crash happened because the same I_S table was filled a second time in
the case of a subselect with ORDER BY. The table->sort.io_cache previously
allocated in create_sort_index() was deleted during the second filling
(in the function get_schema_tables_result). There are two places where an
I_S table can be filled: JOIN::exec and create_sort_index().
To fix the bug we check whether the table was already filled in one of
these places and skip processing of the table in the second.
The function make_unireg_sortorder ignored the fact that any view field
is represented by a 'ref' object.
This could lead to wrong results for queries containing both GROUP BY
and ORDER BY clauses.
The optimizer removes columns from GROUP BY/DISTINCT if they constitute
all the parts of a unique index.
However, if some of the columns can contain NULLs this cannot be done
(because a UNIQUE index can have multiple rows with NULL values).
Fixed by not using UNIQUE indexes with nullable columns to remove
grouping columns from GROUP BY/DISTINCT.
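For illustration (hypothetical table):
CREATE TABLE t1 (a INT, b INT, UNIQUE KEY (a, b));
INSERT INTO t1 VALUES (1, NULL), (1, NULL);
SELECT DISTINCT a, b FROM t1;
-- the unique index does not prevent the duplicate (1, NULL) rows, so
-- DISTINCT must not be optimized away; the correct result is one row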
Depending on the query we use different data processing methods and can
lose some data in the case of double (and, in 4.1, decimal) fields.
The fix consists of two parts:
1. Double comparison is changed: now double a is equal to double b
if (a-b) is less than 5*0.1^(1 + max(a->decimals, b->decimals)).
For example, if a->decimals==1 and b->decimals==2, then a==b if (a-b)<0.005.
2. If we use a temporary table, double values are stored there as is
to avoid any data conversion (rounding).
Made the function opt_sum_query() return HA_ERR_KEY_NOT_FOUND when no
matches were found (prior to this patch it returned -1).
This change allows us to avoid possible conflicts with return values
from user-defined handler methods, which may also return -1.
No particular test cases are provided with this fix.
The bug report demonstrated the following two problems.
1. If an ORDER/GROUP BY list includes a constant expression that is
optimized away and, at the same time, contains single-row subselects
that return more than one row, no error is reported.
Strictly speaking, the standard allows ignoring the error in this case.
Nevertheless, a corresponding fatal error is now reported in this case.
2. If a query requires sorting by expressions containing single-row
subselects that, however, return more than one row, then the execution
of the query may cause a server crash.
To fix this, some code has been added that blocks execution of a subselect
item in case of a fatal error in the method Item_subselect::exec.
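A query exercising the second problem (hypothetical tables):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (1), (2);
SELECT a FROM t1 ORDER BY (SELECT a FROM t2);
-- the single-row subselect returns two rows; a fatal error must be
-- reported instead of crashing during sorting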
After the fix for bug#21798, JOIN stores the pointer to the buffer for
sorting fields. It is used while sorting for grouping and for ordering. If
the ORDER BY clause has more elements than the GROUP BY clause, a memory
overrun occurs.
Now the length of the ORDER BY list is always passed to the
make_unireg_sortorder() function, and it allocates a buffer big enough for
the bigger list.
of untouched rows in full table scans".
SELECT ... FOR UPDATE/LOCK IN SHARE MODE statements, as well as
UPDATE/DELETE statements which were executed using a full table scan, were
not releasing locks on rows which didn't satisfy the WHERE condition.
This bug surfaced in 5.0 and affected NDB tables. (InnoDB tables
intentionally don't support such unlocking in the default mode.)
This problem occurred because the code implementing joins didn't call
handler::unlock_row() for rows which didn't satisfy the part of the
condition attached to this particular table/level of the nested loop. So
we solve the problem by adding this call.
Note that we already had this call in place in 4.1, but it was lost
(actually, not quite correctly placed) when we introduced nested joins.
Also note that additional QA should be requested once this patch is
pushed, as the interaction between handler::unlock_row() and many recent
MySQL features such as subqueries, unions, and views is not tested enough.
- Make the code produce correct results: use an array of triggers to turn
on/off equalities for each compared column, and also turn on/off
optimizations based on those equalities.
- Make the EXPLAIN output show "Full scan on NULL key" for tables for which
we switch between ref/unique_subquery/index_subquery and ALL access.
- The index_subquery engine now has a HAVING clause when it is needed, and
it is displayed in EXPLAIN EXTENDED.
- Fix incorrect presence of "Using index" for index/unique-based subqueries
(BUG#22930)
// bk trigger note: this commit refers to BUG#24127
Currently, in the ONLY_FULL_GROUP_BY mode no hidden fields are allowed in
the select list. To ensure this, each expression in the select list is
checked to be a constant, an aggregate function, or to occur in the GROUP
BY list.
The last two requirements are wrong and don't allow valid expressions like
"MAX(b) - MIN(b)" or "a + 1" in a query with grouping by a.
The correct check implemented by the patch ensures that:
any field reference in the [sub]expressions of the select list
is under an aggregate function, or
is mentioned as a member of the group list, or
is an outer reference, or
is part of a select list element that coincides with a grouping element.
Item_field objects can now contain the position of the select list
expression to which they belong. The position is saved during the
field's Item_field::fix_fields() call.
The non_agg_fields list for non-aggregated fields is added to the
SELECT_LEX class. SELECT_LEX::cur_pos_in_select_list now contains the
position in the select list of the expression currently being fixed.
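For illustration, a query that the corrected check accepts (hypothetical
table):
SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
CREATE TABLE t1 (a INT, b INT);
SELECT a + 1, MAX(b) - MIN(b) FROM t1 GROUP BY a;
-- every field reference is either under an aggregate function or a
-- member of the group list, so the query is now accepted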
The optimizer removes expressions from GROUP BY/DISTINCT if they happen to
participate in <expression> = <const> predicates of the WHERE clause (the
idea being that if the expression is always equal to a constant it can't
have multiple values).
However, for predicates where the expression and the constant item are of
different result types this is not valid (e.g. a string column compared
to 0).
Fixed by an additional check of the result types of the expression and the
constant: if they differ, the expression doesn't get removed from the
group by list.
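For illustration (hypothetical table):
CREATE TABLE t1 (a VARCHAR(10));
INSERT INTO t1 VALUES ('x'), ('y');
SELECT DISTINCT a FROM t1 WHERE a = 0;
-- the comparison is numeric, so many distinct strings satisfy a = 0
-- ('x' and 'y' both convert to 0); a must stay in the grouping list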
This bug appeared after the patch for bug 21390, which added some code to
handle, in an efficient way, outer joins with no matches after
substitution of a const table. That code as it is cannot be applied to the
case of nested outer join operations. Applied to queries with nested outer
joins, the code can cause crashes or wrong result sets.
The fix blocks row substitution for const inner tables of an outer join
if the inner operand is not a single table.
Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines
that were not just variable name changes.
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added missing argument to store(longlong), which caused the wrong store
method to be called.
(Mostly in DBUG_PRINT() and unused arguments)
Fixed a bug in the query cache when used with tracing (--with-debug)
Fixed a memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
When compiling GROUP BY, Item_ref instances are dereferenced in
setup_copy_fields(), i.e. replaced with the corresponding Item_field
(if they point to one) or with Item_copy_string in the other cases.
Since the Item_ref (in the Item_field case) is no longer used, the
information about the aliases stored in it was lost.
Fixed by preserving the column, table and DB aliases when dereferencing
Item_ref.
The problem was that all VIEW columns always had implicit derivation.
Fix: the derivation is now copied from the original expression
given in the VIEW definition.
For example:
- a VIEW column which comes from a string constant
in the CREATE VIEW definition now has coercible derivation;
- a VIEW column having a COLLATE clause
in the CREATE VIEW definition now has explicit derivation.
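A sketch of both cases (hypothetical view):
CREATE VIEW v1 AS
  SELECT 'abc' AS c1,
         'abc' COLLATE latin1_bin AS c2;
-- c1 now has coercible derivation and c2 explicit derivation, matching
-- the expressions in the view definition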
Don't assume that a condition that was pushed down into a subquery has
produced exactly one KEY_FIELD element - it could produce several or
none at all; handle all of those cases.
This is a performance issue for queries with subqueries whose evaluation
requires filesort.
Allocating memory for the sort buffer at each evaluation of a subquery may
take a significant amount of time if the buffer is rather big.
With the fix we allocate the buffer at the first evaluation of the
subquery and reuse it at each subsequent evaluation.
Evaluate "NULL IN (SELECT ...)" in a special way: Disable pushed-down
conditions and their "consequences":
= Do full table scans instead of unique_[index_subquery] lookups.
= Change appropriate "ref_or_null" accesses to full table scans in
subquery's joins.
Also cache value of NULL IN (SELECT ...) if the SELECT is not correlated
wrt any upper select.
We sometimes miss some records when using the RANGE method if we have
partial key segments.
Example:
Create table t1(a char(2), key(a(1)));
insert into t1 values ('a'), ('xx');
select a from t1 where a > 'x';
We call index_read(), passing the 'x' key and the HA_READ_AFTER_KEY flag,
in handler::read_range_first(), which is wrong because we have a partial
key segment for the field and might miss records like 'xx'.
Fix: don't use open segments in such a case.
list using a function
When executing dependent subqueries, they are re-initialized and
re-executed for each row of the outer context.
The cause of the bug is that during subquery reinitialization/re-execution
the optimizer reallocates JOIN::join_tab and JOIN::table in
make_simple_join(), and the local variable 'sortorder' in
create_sort_index(), which is allocated by make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory pool
while re-initializing query plan structures between subquery re-executions.
All such items must be cached and reused, because the thread's memory pool
is freed at the end of the whole query.
Note that they must be cached and reused even for queries that are not
otherwise cacheable, because otherwise the thread's memory pool will grow
every time the query is re-executed.
We provide additional members to the JOIN structure to store references
to the items that need to be cached.
account predicates that become sargable after reading const tables.
In some cases this resulted in choosing non-optimal execution plans.
Now info about such potentially sargable predicates is saved in
an array, and after reading const tables we check whether these
predicates have become sargable.
Examined rows are counted for every join part. The per-join-part counter
was incremented over all iterations. The result variable was replaced at
the end of every iteration. Hence the final result was the number of rows
examined by the join part that ended its execution last; the numbers of
the other join parts were lost.
Now we reset the per-join-part counter before every iteration and add it
to the result variable at the end of the iteration. That way we get the
sum over all iterations of all join parts.
No test case. Testing this needs a look into the slow query log, and I
don't know of a way to do this portably with the test suite.
Currently SQL_BIG_RESULT is checked only at compile time.
However, additional optimizations may take place after
this check that change the sort method from 'filesort'
to sorting via index. As a result the actual plan
executed is not the one specified by the SQL_BIG_RESULT
hint. Similarly, there is no such test when executing
EXPLAIN, resulting in incorrect output.
The patch corrects the problem by testing for
SQL_BIG_RESULT both during the explain and execution
phases.
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
invocations of LAST_INSERT_ID.
Reading of LAST_INSERT_ID inside a stored function wasn't noticed by the
caller, and no LAST_INSERT_ID_EVENT was issued for the binary log.
The solution is to add THD::last_insert_id_used_bin_log, which is much
like THD::last_insert_id_used, but is reset only for upper-level
statements. This new variable is used to issue LAST_INSERT_ID_EVENT.
Non-upper-level INSERTs (the ones in the body of a stored procedure,
stored function, or trigger) into a table with an AUTO_INCREMENT
column didn't affect the result of LAST_INSERT_ID() on this level.
The problem was introduced with the fix of bug 6880, which in turn was
introduced with the fix of bug 3117, where the current insert_id value was
remembered on the first call to LAST_INSERT_ID() (bug 3117) and was
returned from that function until it was reset before the next
_upper-level_ statement (bug 6880).
The fix for bug#21726 brings back the behaviour of version 4.0 and
implements the following: remember the insert_id value at the beginning
of the statement or expression (at that point it equals the first
insert_id value generated by the previous statement), and return that
remembered value from LAST_INSERT_ID() or @@LAST_INSERT_ID.
Thus, the value returned by LAST_INSERT_ID() is affected neither by values
generated by the current statement nor by LAST_INSERT_ID(expr) calls in
this statement.
Version 5.1 does not have this bug (it was fixed by WL 3146).
The fix for bug 7894 replaces fields in a non-aggregate function with an
item reference if such a field was specified in the GROUP BY clause, in
order to get a correct result.
When ROLLUP is involved this led to a wrong result, because the value of
such a field is obtained through a copy function, and the copying happens
after the function is evaluated.
Such a replacement isn't needed if grouping is also done by such a
function. The change_group_ref() function is now not called for a function
present in the group list.
while space allocation
Under some circumstances a DISTINCT clause can be converted to grouping.
In such cases grouping is performed by all items in the select list.
If an ORDER clause is present, items from it are prepended to the group
list. But the case with ORDER wasn't taken into account when allocating
the array for sum functions. This led to memory corruption and a crash.
The JOIN::alloc_func_list() function now allocates additional space if an
ORDER BY clause is specified and the DISTINCT -> GROUP BY optimization is
possible.
create_tmp_table()".
The fix for bug 21787 "COUNT(*) + ORDER BY + LIMIT returns wrong result"
introduced valgrind warnings which occurred during execution of
information_schema.test and sp-prelocking.test in version 5.0.
There were no user-visible effects.
The latter fix made create_tmp_table() dependent on the
THD::lex::current_select value. Valgrind warnings occurred when this
function was executed while the THD::lex::current_select member pointed
to an uninitialized SELECT_LEX instance.
This fix tries to remove this dependency by moving some logic
outside of the create_tmp_table() function.
Post-review fixes as indicated by Serg.
Manual testing of the error cases was done in 5.0, using its support for
DBUG_EXECUTE_IF to insert errors.
A test case for mysql-test cannot be written until 5.1, which adds support
for setting debug options at runtime.
statement that uses an aggregating IN subquery with
HAVING clause.
A wrong order of the call of split_sum_func2 for the HAVING clause of the
subquery and of the transformation for the subquery resulted in the
creation of an andor structure that could not be restored at an execution
of the prepared statement.
containing a select statement that uses an aggregating IN subquery.
Added a parameter to the function fix_prepare_information to correctly
restore the HAVING clause for the second execution.
Saved the andor structure of the HAVING conditions at the proper moment,
before any calls of split_sum_func2 that could modify the HAVING structure
by adding new Item_ref objects. (These additions are produced not with the
statement mem_root, but rather with the execution mem_root.)
equal constant under any circumstances.
In fact this substitution can be allowed if the field is not of a string
type, or if the field reference serves as an argument of a comparison
predicate.
if join is used
For procedures with selects that use complicated joins with an ON
expression, re-execution could erroneously ignore this ON expression,
giving incorrect results.
The problem was that the optimized ON expression wasn't saved for
re-execution. The solution is to properly save it.
Select_type in the EXPLAIN output for the query SELECT * FROM t1 was
'SIMPLE', while for the query SELECT * FROM v1, where the view v1
was defined as SELECT * FROM t1, the EXPLAIN output contained 'PRIMARY'
for the select_type column.
The problem was due to a prior fix for BUG 9676, which limited
the rows stored in a temporary table to the LIMIT clause. This
optimization is not applicable to non-group queries with aggregate
functions. The fix disables the optimization in this case.
account by the optimizer.
Now all row equalities are converted into conjunctions of
equalities between row elements. They are taken into account
by the optimizer together with the original regular equality
predicates.
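For illustration (hypothetical tables):
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, b INT);
SELECT * FROM t1, t2 WHERE (t1.a, t1.b) = (t2.a, t2.b);
-- the row equality is treated as t1.a = t2.a AND t1.b = t2.b and now
-- takes part in the optimizer's equality handling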
const tables. This resulted in choosing extremely inefficient execution
plans in some cases when the distribution of data in the joined tables was
skewed (see the customer test case for the bug).
GROUP BY/DISTINCT pruning optimization must be done before the ORDER BY
optimization, because ORDER BY may be removed when GROUP BY/DISTINCT sorts
as a side effect. E.g. in
SELECT DISTINCT <non-key-col>, <pk> FROM t1 ORDER BY <non-key-col>
the DISTINCT must be removed before the ORDER BY; if done the other way
around, both would be removed.
used.
Sorting by RAND() uses a temporary table in order to get correct results.
A user-defined variable was set while the temporary table was filled, and
later it was substituted with its value from the temporary table. Due to
this it contained the last value stored in the temporary table.
Now, if the result_field is set for the Item_func_set_user_var object, it
updates the variable from the result_field value when being sent to a
client.
Item_func_set_user_var::check() now accepts a use_result_field parameter.
Depending on its value, the result_field or args[0] is used to get the
current value.
A date can be represented as an int (like 20060101) and as a string (like
"2006.01.01"). When a DATE/TIME field is compared in one SELECT against
both representations, the constant propagation mechanism leads to a
comparison of the DATE as a string with the DATE as an int. In this
example it compares the integers 2006 and 20060101. Obviously the
comparison fails, although they represent the same date.
Now the Item_bool_func2::fix_length_and_dec() function sets the comparison
context for the items being compared, i.e. if the items are compared as
strings the comparison context is STRING.
The constant propagation mechanism now doesn't mix items used in different
comparison contexts. The context check is done in
Item_field::equal_fields_propagator() and in the
change_cond_ref_to_const() function.
Also a better fix for bug 21159 is introduced.
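A sketch of the failing comparison (hypothetical table):
CREATE TABLE t1 (d DATE);
SELECT * FROM t1 WHERE d = 20060101 AND d = '2006.01.01';
-- constant propagation previously mixed the two contexts and ended up
-- comparing the integers 20060101 and 2006 (the numeric value of the
-- string), which fails although both literals denote the same date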
The SELECT right instead of the INSERT right was required for an insert
into a view.
This wrong behaviour appeared after the fix for bug #20989, whose intention
was to require only the SELECT right for all tables except the very first
of a complex INSERT query. But that patch did it in a wrong way and led to
requiring a wrong access right for an insert into a view.
The setup_tables_and_check_access() function now accepts two want_access
parameters. One is used for the first table and the second for the other
tables.
Disable const propagation for Item_hex_string.
This must be done because Item_hex_string->val_int() is not
the same as (Item_hex_string->val_str() in BINARY column)->val_int().
We cannot simply disable the replacement in a particular context
(e.g. <bin_col> = <int_col> AND <bin_col> = <hex_string>), since
Items don't know the context they are in and there are functions like
IF (<hex_string>, 'yes', 'no').
Note that this will disable some valid cases as well
(e.g. <bin_col> = <hex_string> AND <bin_col2> = <bin_col>), but
there's no way to distinguish the valid cases without having the
Item's parent say something like Item->set_context(Item::STRING_RESULT)
and having all the Items that contain other Items do that consistently.
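A concrete instance of the schematic case above (hypothetical schema and
values):
CREATE TABLE t1 (bin_col VARBINARY(2), int_col INT);
SELECT * FROM t1 WHERE bin_col = int_col AND bin_col = 0x31;
-- as a string 0x31 is '1', but Item_hex_string->val_int() is 49, so
-- propagating the hex constant into the numeric comparison would
-- change the result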
optimizer does not honor IGNORE INDEX
- Allow an index to be used for sorting the table
instead of filesort only if it is not disabled by
IGNORE INDEX.
table in a join
The optimizer removes redundant columns from ORDER BY. It considers
redundant every reference to a const table column, e.g. b in:
create table t1 (a int, b int, primary key(a));
select 1 from t1 where a = 1 order by b;
But it must not remove references to const table columns if the const
table is an outer table, because there can still be two values,
the const value and NULL, e.g.:
create table t1 (a int, b int, primary key(a));
select t2.b c from t1 left join t1 t2 on (t1.a = t2.a and t2.a = 5)
order by c;
Treat queries with no FROM clause and aggregate functions as normal
queries, so the aggregate functions get correctly calculated as if there
is one row. This means that such queries are considered to have one row,
so COUNT(*) returns 1 instead of 0. Other aggregates behave in a
compatible manner.
Renamed variable, to avoid name clash with macro "rem_size"
on AIX 5.3 and "/usr/include/sys/xmem.h" (bug#17648)
asn.cpp, asn.hpp:
Avoid name clash with NAME_MAX
When processing aggregate functions, all table values are reset to NULL
at the end of each group.
When doing that, if no rows were found for a group, the const tables must
not be reset, as they are not recalculated by do_select()/sub_select() for
each group.
Too many cursors (more than 1024) could lead to memory corruption.
This affects both stored routines and C API cursors, and the threshold is
per-server, not per-connection. The corruption could similarly happen when
the server was under heavy load (executing more than 1024 simultaneous
complex queries), and this is the reason why this bug is also fixed in
4.1, which doesn't support cursors.
The corruption was caused by a bug in the temporary tables code, where an
attempt to create a table could lead to a write beyond the allocated
space. Note that only internal tables were affected (tables created
internally by the server to resolve the query), not tables created with
CREATE TEMPORARY TABLE. Another precondition for the bug is a TRUE value
of the --temp-pool startup option, which, however, is the default.
The cause of the bug was that random memory was overwritten in
bitmap_set_next() due to an out-of-bounds memory access.
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that
they're united into a single condition on the key and checked together,
the server must check which value is the NULL value in a correct way: not
only by using ->is_null, but also by checking that the expression doesn't
depend on any tables referenced in the current statement.
This additional check must be performed because the optimization takes
place before the actual execution of the statement, so if the field was
initialized to NULL by a previous statement the optimization would be
applied incorrectly.
The problem was that opt_sum_query() replaced MIN/MAX functions with the
corresponding constant found in a key, but due to the imprecise
representation of float numbers, this comparison failed when the WHERE
clause was evaluated.
When MIN/MAX optimization detects that all tables can be removed,
also remove all conjuncts in a where clause that refer to these
tables. As a result of this fix, these conditions are not evaluated
twice, and in the case of float number comparisons we do not discard
result rows due to imprecise float representation.
As a side-effect this fix also corrects an unnoticed problem in
bug 12882.
When there is no index defined, filesort is used to sort the result of a
query. If there is a function in the select list and the result set should
be ordered by its value, then this function will be evaluated twice: the
first time to get the value of the sort key, and the second time to send
its value to the user. This happens because filesort, when it sorts a
table, remembers only the values of the table's fields, not the values of
functions.
All functions are affected, but taking into account that SP and UDF
functions can be both expensive and non-deterministic, a temporary table
should be used to store their results and then be sorted, to avoid
evaluating the SP twice and to get a correct result.
If an expression referenced in an ORDER clause contains an SP or UDF
function, the use of a temporary table is now forced.
A new Item_processor function called func_type_checker_processor is added
to check whether the expression contains a function of a particular type.
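A sketch of the affected pattern (the function and table are
hypothetical):
CREATE TABLE t1 (a INT);
CREATE FUNCTION f1(x INT) RETURNS INT NOT DETERMINISTIC
  RETURN x + FLOOR(RAND() * 100);
SELECT a, f1(a) AS v FROM t1 ORDER BY v;
-- without a temporary table, f1() would be evaluated once to produce
-- the sort key and again when the row is sent, so the displayed value
-- could disagree with the sort order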
When calculating GROUP_CONCAT, all blob fields were transformed to
varchar when making the temp table.
However, a varchar has at most 2 bytes for the length.
This fix makes the conversion only for blobs whose max length is below
that limit.
Otherwise the blob field is created by a make_string_field() call.
When making a place to store field values at the start of each group, the
real item (not the reference) must be used when deciding which column to
copy.
An aggregate function reference was resolved incorrectly and caused a
crash in count_field_types.
We must use real_item() to get to the real Item instance through the
reference.
The bug was due to a loss that happened during a refactoring made on
May 30 2005 that modified the function JOIN::reinit.
As a result of it, for any subquery the value of offset_limit_cnt was not
restored for the following executions, yet the first execution of the
subquery set it to 0.
The fix restores this value in the function JOIN::reinit.
DESCRIBE returned the type BIGINT for a column of a view if the column
was specified by an expression over values of the type INT.
E.g. for the view defined as follows:
CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1
DESCRIBE returned type BIGINT for the only column of the view if f1, f2
are columns of the INT type.
At the same time, DESCRIBE returned type INT for the only column of the
table defined by the statement:
CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1
This inconsistency was removed by the patch.
Now the code chooses between INT/BIGINT depending on the precision of the
aggregated column type.
Thus both DESCRIBE commands above return type INT for v1 and t2.
* don't use join cache when the incoming data set is already ordered
for ORDER BY
This choice must be made because join cache will effectively
reverse the join order and the results will be sorted by the index
of the table that uses join cache.
may return a wrong result.
An Item_sum_hybrid object has the was_values flag, which indicates whether
any values were added to the sum function. By default it is set to true
and reset to false on any no_rows_in_result() call. This method is called
only in the return_zero_rows() function. An ALL/ANY subquery can be
optimized by the MIN/MAX optimization. The was_values flag is used to
indicate whether the subquery has returned at least one row. This bug
occurs because return_zero_rows() is called only when we know before
starting any scans that the select will return zero rows, and often such
information is not known.
In the reported case the return_zero_rows() function is not called, so the
was_values flag is not reset to false even though the subquery returns no
rows. As a consequence, the Item_func_not_all and Item_func_nop_all
functions return a wrong comparison result.
The end_send_group() function now calls no_rows_in_result() for each item
in the fields_list if no rows were found for the (sub)query.
To make MySQL compatible with some ODBC applications, you can find the
AUTO_INCREMENT value for the last inserted row with the following query:
SELECT * FROM tbl_name WHERE auto_col IS NULL.
This is done by special code that replaces 'auto_col IS NULL' with
'auto_col = LAST_INSERT_ID'.
However, this also reset LAST_INSERT_ID to 0, as it was used as a flag to
ensure that only the first SELECT ... WHERE auto_col IS NULL after an
INSERT gets this special behaviour.
In order to avoid resetting LAST_INSERT_ID, a special flag is introduced
in the THD class. This flag is used instead of LAST_INSERT_ID to restrict
the second and subsequent SELECTs.
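For illustration (hypothetical table and values):
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
INSERT INTO t1 (v) VALUES (10);
SELECT * FROM t1 WHERE id IS NULL;  -- rewritten to id = LAST_INSERT_ID()
SELECT LAST_INSERT_ID();            -- must still return 1, not 0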