Currently in INSERT ... SELECT ... LIMIT ... the compiler uses a
temporary table to store the result of SELECT ... LIMIT ... and then
uses that table as a source for INSERT. The problem is that in some cases
it actually skips the LIMIT clause when doing this and materializes the
whole SELECT result set regardless of the LIMIT.
This fix limits the filling of the temp table to only as many rows
as will actually be used, by propagating the LIMIT value.
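For illustration, a statement of the following shape (t1 and t2 are
hypothetical tables) used to materialize all of t2 into the temp table
before inserting; with the fix only the 10 rows selected by the LIMIT
are stored:

  INSERT INTO t1 (a)
  SELECT a FROM t2 ORDER BY a LIMIT 10;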
The bug report revealed two problems related to min/max optimization:
1. If the length of a constant key used in a SARGable condition for
the MIN/MAX fields is greater than the length of the field, an
unwanted warning on key truncation is issued;
2. If MIN/MAX optimization is applied to a partial index, like INDEX(b(4)),
it can lead to returning a wrong result set.
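A minimal sketch of the second problem, assuming a hypothetical table
with a prefix index:

  CREATE TABLE t1 (b VARCHAR(10), INDEX(b(4)));
  INSERT INTO t1 VALUES ('abcde'), ('abcdz');
  -- Only the first 4 characters are indexed, so the index alone
  -- cannot distinguish the two values:
  SELECT MAX(b) FROM t1;  -- must return 'abcdz', not a prefix-based guess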
A query with GROUP BY and HAVING clauses could return a wrong
result set if the HAVING condition contained a constant conjunct
that evaluated to FALSE.
It happened because the pushdown condition for the table with
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table,
which ignores constant conjuncts. This is apparently not correct when
constant false conjuncts are present.
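A sketch of the affected query shape (t1 is a hypothetical table):

  SELECT a, MIN(b) FROM t1 GROUP BY a HAVING a > 0 AND 1 = 0;
  -- The constant conjunct 1 = 0 makes the whole HAVING condition FALSE,
  -- so the result must be empty; before the fix the conjunct was lost
  -- when the rest of the condition was pushed down.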
The bug was as follows: when merge_key_fields() encounters "t.key=X OR t.key=Y", it
tries to join them into ref_or_null access via "t.key=X OR NULL". In order to make this
inference it checks whether Y<=>NULL, ignoring the fact that the value of Y may not be
known yet. The fix is that the check whether Y<=>NULL is made only if the value of Y is
known (i.e. it is a constant).
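A sketch of the distinction, with hypothetical tables t1 and t2:

  -- Safe: the right-hand side is the literal NULL, a known constant,
  -- so ref_or_null access may be used:
  SELECT * FROM t1 WHERE t1.key_col = 5 OR t1.key_col IS NULL;
  -- Not safe before the fix: t2.a is not a constant, so the Y<=>NULL
  -- check must not be made at this point:
  SELECT * FROM t1, t2 WHERE t1.key_col = 5 OR t1.key_col = t2.a;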
TODO: When merging to 5.0, replace used_tables() with const_item() everywhere in merge_key_fields().
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the LIMIT clause was
completely neglected.
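For example, with a hypothetical table t1 the inner ORDER BY ... LIMIT
must be applied before the outer ordering:

  (SELECT a FROM t1 ORDER BY a DESC LIMIT 3) ORDER BY a;
  -- Expected: the 3 largest values of a, sorted ascending.
  -- Before the fix the LIMIT was dropped and the lists were merged
  -- as if the query were ORDER BY a DESC, a.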
In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 chars and a
temporary table is used during the select, then the result is converted to a blob,
due to the policy of not storing fields longer than 512 chars in the tmp table as
varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is now
always converted to a blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
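A sketch of the former inconsistency, assuming a hypothetical table t1
whose concatenated values exceed 512 chars:

  SET group_concat_max_len = 2048;
  -- Simple query: the result used to be reported as varchar:
  SELECT GROUP_CONCAT(txt) FROM t1;
  -- The same aggregate, when the select needs a temporary table
  -- (e.g. grouping without a usable index), was reported as blob:
  SELECT a, GROUP_CONCAT(txt) FROM t1 GROUP BY a ORDER BY a;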
GROUP_CONCAT uses its own temporary table. When ROLLUP is present,
it creates a second copy of Item_func_group_concat. This copy receives the
same list of arguments as the original group_concat does. When the copy is
set up, the result_fields of the functions from the argument list are reset to the
temporary table of this copy.
As a result of this action, data from the functions flows directly to the ROLLUP copy
and the original group_concat functions show a wrong result.
Since queries with COUNT(DISTINCT ...) use temporary tables to store
the results of the COUNT function, they are also affected by this bug.
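A sketch of an affected query shape (t1 hypothetical; the argument is an
expression so that its result_field is involved):

  SELECT a, GROUP_CONCAT(b + 0) FROM t1 GROUP BY a WITH ROLLUP;
  -- Before the fix the per-group concatenations could come out wrong
  -- because the argument's data was redirected to the ROLLUP copy.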
The idea of the fix is to copy the content of the result_field for the function
under GROUP_CONCAT/COUNT from the first temporary table to the second one,
rather than setting result_field to point to the second temporary table.
To achieve this goal a force_copy_fields flag is added to the Item_func_group_concat
and Item_sum_count_distinct classes. This flag is initialized to 0 and set to 1
in the make_unique() member function of both classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, the create_tmp_field() function will use result_field
as a source field and will not reset that result field to the newly created
field for Item_func_result_field and its descendants. Due to this, a copy
function will be created to copy data from the old result_field to the newly
created field.
Remove wrong fix for Bug#14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
Safety fix for bug #13855 "select distinct with group by caused server crash"
Initialized usable_keys from table->keys_in_use instead of ~0
in test_if_skip_sort_order(). It was possible that a disabled
index was used for sorting.
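A sketch of the guarded situation, assuming a hypothetical MyISAM table
with an index on column a:

  ALTER TABLE t1 DISABLE KEYS;
  -- Before the fix test_if_skip_sort_order() could still choose the
  -- disabled index on a to avoid the filesort:
  SELECT a FROM t1 ORDER BY a;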
Procedure analyse() redefines the select's fields_list. setup_copy_fields() assumes
that fields_list is a part of all_fields_list. Because the select has only
3 columns and analyse() redefines it to have 10 columns, an int overrun in
setup_copy_fields() occurs and the server goes into an almost infinite loop.
Because fields_list is used not only to send data but also field types, it is wrong
to allow a procedure to redefine it. This patch separates the select's fields_list
from the procedure's one. Now, if a procedure is present, a copy of fields_list is
created in procedure_fields_list and is used for sending data and fields.
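The loop could be triggered by a query shape like this (t1 is a
hypothetical table; PROCEDURE ANALYSE() is the built-in procedure):

  SELECT a, b, c FROM t1 PROCEDURE ANALYSE();
  -- analyse() replaces the 3 select columns with its own 10 columns.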
The date field was declared as NOT NULL, thus the expression 'datefield is null'
was always false. For SELECT, special handling of such cases is used:
there 'datefield is null' is converted to 'datefield eq "0000-00-00"'.
In mysql_update(), a remove_eq_conds() call was added before the creation of the
select. It performs some optimization of the conds and in particular the conversion
from 'is null' to 'eq'.
remove_eq_conds() also does some evaluation of the conds, and if it finds that
the conds are always false, then the update statement is not processed further.
All this allows some update statements to be processed faster due to the
optimized conds, and avoids wasting resources if the conds are known to be false.
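A sketch of the statement shape this fixes, assuming a hypothetical table
with a NOT NULL date column:

  CREATE TABLE t1 (d DATE NOT NULL);
  INSERT INTO t1 VALUES ('0000-00-00');
  -- Now handled like the equivalent SELECT: 'd IS NULL' is rewritten
  -- to d = '0000-00-00', so the row above is updated:
  UPDATE t1 SET d = '2005-01-01' WHERE d IS NULL;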
DISTINCT wasn't optimized away and caused the creation of a tmp table in a wrong
case. This resulted in an integer overrun and running out of memory.
Fix backported from 4.1. Now, if the optimizer finds that the result can contain
only 1 row, it removes the DISTINCT.
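For example, with a hypothetical table whose primary key is pk, the query
below is known to produce at most one row, so DISTINCT is redundant:

  SELECT DISTINCT a FROM t1 WHERE pk = 1;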
When fixing Item_func_plus in the ORDER BY clause, the field c is searched for in
all opened tables, but because c is an alias it wasn't found there.
This patch adds a flag to select_lex which allows Item_field::fix_fields()
to look in the select's item_list to find aliased fields.
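The query shape in question (t1 hypothetical):

  SELECT a + b AS c FROM t1 ORDER BY c + 1;
  -- c exists only as an alias in the select list, so the ORDER BY
  -- expression must be resolved against the item_list.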
For queries with GROUP BY and without hidden GROUP BY fields, DISTINCT is
optimized away because such queries produce a result set without duplicates.
But ROLLUP can add rows which may be identical to existing rows, and this fact was
ignored.
Added a check so that if ROLLUP is present, DISTINCT can't be optimized away.
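An illustration with a hypothetical table t1:

  SELECT DISTINCT SUM(b) FROM t1 GROUP BY a WITH ROLLUP;
  -- The rollup total can equal one of the group sums (e.g. when there
  -- is a single group), so DISTINCT must still be applied.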
Fixed bug #12885.
Forced inheritance of the maybe_null flag for the expressions
containing GROUP BY attributes in selects with ROLLUP.
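For instance (t1 hypothetical), the rollup row sets a to NULL, so any
expression containing a must be nullable:

  SELECT a + 1, SUM(b) FROM t1 GROUP BY a WITH ROLLUP;
  -- In the rollup row a is NULL, hence a + 1 must also be NULL.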
olap.test, olap.result:
Added test case for bug #12885.
Item::tmp_table_field_from_field_type() and create_tmp_field_from_item()
were converting a string field to a blob depending on the byte-wise length instead
of the character length, which resulted in converting a valid varchar string with
length == 86 to longtext.
Made the functions above take into account the maximum width of a character when
converting string fields to blobs.
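A sketch of the boundary, assuming a multi-byte character set:

  CREATE TABLE t1 (s VARCHAR(86) CHARACTER SET utf8);
  -- 86 characters may take up to 258 bytes in utf8; the byte-wise
  -- check wrongly pushed the field over the blob threshold, turning
  -- it into longtext:
  SELECT s FROM t1 UNION SELECT s FROM t1;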
Added test cases for bug #12625.
sql_select.cc:
Fixed bug #12625.
Fixed invalid removal of constant items from the DISTINCT
list in the function create_distinct_group.
create_tmp_field_from_item() was creating a tmp field without regard to the
original field type of the Item. This resulted in a wrong type being reported to
the client.
Special handling for Items with DATE/TIME field types was added to
create_tmp_field_from_item() to preserve their type.
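For example (t1 hypothetical, d a DATE column), a query that is evaluated
through a temporary table should still report the column as DATE:

  SELECT d FROM t1 UNION SELECT d FROM t1;
  -- Before the fix the result column could be reported with a plain
  -- string type instead of DATE.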
Fixed bug #11479.
The JOIN::reinit method cannot call setup_tables
after the optimization phase since this function
removes some optimization settings for joined
tables. E.g. it resets values of the null_row flag to 0.
subselect.result, subselect.test:
Added a test case for bug #11479.
- Fixed some error conditions when handling dates with 'T'
- Added extra test for bug #11867 (Wrong result with "... WHERE ROW( a, b ) IN ( SELECT DISTINCT a, b WHERE ...)" to show it's not yet fixed
- Safety fixes and cleanups
When creating a temporary table for a UNION, pass the TMP_TABLE_FORCE_MYISAM flag
to create_tmp_table() if we will be using fulltext function(s) when reading from
the temp table.
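A sketch of the kind of query affected, assuming a hypothetical MyISAM
table with a fulltext index on s:

  SELECT s FROM t1 WHERE MATCH(s) AGAINST ('word')
  UNION
  SELECT s FROM t1 WHERE MATCH(s) AGAINST ('word')
  ORDER BY MATCH(s) AGAINST ('word');
  -- The final ORDER BY reads from the UNION's temporary table, which
  -- therefore must be MyISAM for the fulltext function to work.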
Added a test case for bug #12095.
sql_class.h:
Fixed bug #12095: a join query with GROUP_CONCAT over a single row table.
Added a flag to the TMP_TABLE_PARAM class forcing constant items
generated after the elimination of a single-row table to be put into the temp
table in some cases (e.g. when GROUP_CONCAT is calculated over a single-row
table).
item_sum.cc:
Fixed bug #12095: a join query with GROUP_CONCAT over a single row table.
If GROUP_CONCAT is calculated we always put its argument into a temp
table, even when the argument is a constant item.
sql_select.cc:
Fixed bug #12095: a join query with GROUP_CONCAT over a single-row table.
If a temp table is used to calculate GROUP_CONCAT, the argument should
always be put into this table, even when it is a constant item.
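A sketch of the affected join shape, assuming hypothetical tables where
t2 contains exactly one row:

  SELECT GROUP_CONCAT(t2.a) FROM t1, t2 GROUP BY t1.b;
  -- t2 is a single-row (const) table, so t2.a becomes a constant item;
  -- it must nevertheless be stored in GROUP_CONCAT's temp table.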
Fixed bug #12144.
Added an optimization that avoids key access with null keys for the 'ref'
method when used in outer joins. The regular optimization of adding
IS NOT NULL expressions is not applied to outer join ON expressions, as
the predicates of these expressions are not pushed down in 4.1.
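A sketch of a join that benefits (hypothetical tables; t2.a indexed):

  SELECT * FROM t1 LEFT JOIN t2 ON t2.a = t1.a;
  -- When t1.a is NULL the ref lookup into t2 cannot match anything,
  -- so the key access is now skipped for such rows.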
null_key.result, null_key.test:
Added a test case for bug #12144.
(This is because conds may not be a constant)
This bug caused a failure in the test suite for select.test because, for prepared
statements, in one case a 'constant sub query' was not regarded as a constant.
Sanja will, as a separate task, check whether we can make the sub query be
recognized as a constant.
Bug #12217
Added a test case for bug #11745.
sql_select.cc:
Fixed bug #11745.
Added support of the WHERE clause for queries with FROM DUAL.
sql_yacc.yy:
Fixed bug #11745.
Added an optional WHERE clause for queries with FROM DUAL.
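The newly accepted shape (DUAL is MySQL's dummy table name):

  SELECT 1 FROM DUAL WHERE 1 = 1;
  -- Previously the parser rejected any WHERE clause after FROM DUAL.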
The add_not_null_conds() optimization is now disabled if ref is built with a key
from the updated table.
The problem was in the add_not_null_conds() optimization function.
It contains the following code:
JOIN_TAB *referred_tab= not_null_item->field->table->reginfo.join_tab;
...
add_cond_and_fix(&referred_tab->select_cond, notnull);
For the UPDATE described in the bug report, referred_tab is 0 and
dereferencing it crashes the server.
A query could return fewer rows than in previous 4.1.x versions:
a wrongly applied optimization added a NOT NULL constraint, which resulted in
rejecting valid rows and a reduced result set.
The problem was that add_not_null_conds(), while checking a subquery, added a
NOT NULL constraint to a left-joined table, to which, normally, this optimization
must not be applied.
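A sketch of the shape, with hypothetical tables; the LEFT JOIN means t3's
columns may be NULL-complemented, so no NOT NULL constraint may be added
to them:

  SELECT * FROM t1
  WHERE t1.a IN (SELECT t2.a FROM t2 LEFT JOIN t3 ON t3.b = t2.b);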