into moonlight.intranet:/home/tomash/src/mysql_ab/mysql-5.0-merge
configure.in:
Auto merged
man/Makefile.am:
Auto merged
mysys/my_bitmap.c:
Auto merged
scripts/make_binary_distribution.sh:
Auto merged
sql/field.cc:
Auto merged
sql/sql_locale.cc:
Auto merged
support-files/mysql.spec.sh:
Auto merged
mysql-test/t/mysqlbinlog.test:
Manual merge.
sql/sql_select.cc:
Manual merge.
Renamed variable, to avoid name clash with macro "rem_size"
on AIX 5.3 and "/usr/include/sys/xmem.h" (bug#17648)
asn.cpp, asn.hpp:
Avoid name clash with NAME_MAX
sql/sql_select.cc:
Renamed variable, to avoid name clash with macro "rem_size"
on AIX 5.3 and "/usr/include/sys/xmem.h" (bug#17648)
extra/yassl/taocrypt/src/asn.cpp:
Avoid name clash with NAME_MAX
extra/yassl/taocrypt/include/asn.hpp:
Avoid name clash with NAME_MAX
Too many cursors (more than 1024) could lead to memory corruption.
This affects both stored routines and C API cursors, and the
threshold is per-server, not per-connection. Similarly, the
corruption could happen when the server was under heavy load
(executing more than 1024 simultaneous complex queries), which is
why this bug is also fixed in 4.1, even though 4.1 doesn't support
cursors.
The corruption was caused by a bug in the temporary tables code:
an attempt to create a table could lead to a write beyond allocated
space. Note that only internal tables were affected (the tables
created internally by the server to resolve the query), not tables
created with CREATE TEMPORARY TABLE. Another precondition for the
bug is a TRUE value of the --temp-pool startup option, which, however,
is the default.
The cause of the bug was that random memory was overwritten in
bitmap_set_next() due to an out-of-bounds memory access.
mysys/my_bitmap.c:
Local 'bitmap_size' is measured in bytes, no need to multiply it by 8.
sql/sql_select.cc:
Clear the temp_pool_slot bit only if we have set it previously.
The bug was due to a loss that happened during a refactoring made
on May 30, 2005 that modified the function JOIN::reinit.
As a result, for any subquery the value of offset_limit_cnt
was not restored for the following executions, yet the first
execution of the subquery set it to 0.
The fix restores this value in the function JOIN::reinit.
mysql-test/r/subselect.result:
Added a test case for bug #20519.
mysql-test/t/subselect.test:
Added a test case for bug #20519.
into macbook.gmz:/Users/kgeorge/mysql/work/B17212-5.0-opt
mysql-test/r/innodb_mysql.result:
Merge 4.1->5.0 for bug #17212
mysql-test/t/innodb_mysql.test:
Merge 4.1->5.0 for bug #17212
sql/sql_select.cc:
Merge 4.1->5.0 for bug #17212
DESCRIBE returned the type BIGINT for a column of a view if the column
was specified by an expression over values of the type INT.
E.g. for the view defined as follows:
CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1
DESCRIBE returned type BIGINT for the only column of the view if f1 and f2
are columns of type INT.
At the same time DESCRIBE returned type INT for the only column of the table
defined by the statement:
CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1.
This inconsistency was removed by the patch.
Now the code chooses between INT/BIGINT depending on the
precision of the aggregated column type.
Thus both DESCRIBE commands above return type INT for v1 and t2.
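A minimal sketch of the behaviour described above (table, view, and column names are hypothetical, not taken from the committed test case):
  CREATE TABLE t1 (f1 INT, f2 INT);
  CREATE VIEW v1 AS SELECT COALESCE(f1,f2) AS c FROM t1;
  CREATE TABLE t2 SELECT COALESCE(f1,f2) AS c FROM t1;
  DESCRIBE v1;   -- before the fix: column c reported as BIGINT
  DESCRIBE t2;   -- column c reported as INT
  -- after the fix both report INT, since INT precision is sufficient here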
mysql-test/r/analyse.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/bigint.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/create.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/olap.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_2myisam.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_3innodb.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_4heap.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_5merge.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_6bdb.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/ps_7ndb.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/sp.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/subselect.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/type_ranges.result:
Adjusted the results after having fixed bug #19714.
mysql-test/r/view.result:
Added a test case for bug #19714.
mysql-test/t/view.test:
Added a test case for bug #19714.
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
sql/item_subselect.cc:
Auto merged
sql/sql_class.cc:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_select.cc:
Auto merged
mysql-test/r/subselect.result:
Manual merge
mysql-test/t/subselect.test:
Manual merge
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
mysql-test/r/rpl_insert_id.result:
Auto merged
mysql-test/t/rpl_insert_id.test:
Auto merged
sql/item_strfunc.cc:
Auto merged
sql/item_strfunc.h:
Auto merged
sql/sql_class.cc:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_select.cc:
Auto merged
* don't use the join cache when the incoming data set is already ordered
for ORDER BY
This choice must be made because the join cache will effectively
reverse the join order and the results will be sorted by the index
of the table that uses the join cache.
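A hedged sketch of the affected query shape (hypothetical tables; the committed test case is in innodb_mysql.test):
  CREATE TABLE t1 (a INT, KEY (a)) ENGINE=InnoDB;
  CREATE TABLE t2 (b INT) ENGINE=InnoDB;
  -- rows from t1 already arrive in index order on a; buffering them in the
  -- join cache for t2 would emit results in t2-driven order and break ORDER BY
  SELECT t1.a, t2.b FROM t1, t2 WHERE t2.b = t1.a ORDER BY t1.a;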
mysql-test/r/innodb_mysql.result:
Bug #17212 results not sorted correctly by ORDER BY when using index
* Test suite for the bug
mysql-test/t/innodb_mysql.test:
Bug #17212 results not sorted correctly by ORDER BY when using index
* Test suite for the bug
sql/sql_select.cc:
Bug #17212 results not sorted correctly by ORDER BY when using index
* don't use join cache when the incoming data set is already sorted
Fixed bug#18503: Queries with a quantified subquery returning empty set may return a wrong result.
An Item_sum_hybrid object has the was_values flag which indicates whether any
values were added to the sum function. By default it is set to true and reset
to false on any no_rows_in_result() call. This method is called only in
return_zero_rows() function. An ALL/ANY subquery can be optimized by MIN/MAX
optimization. The was_values flag is used to indicate whether the subquery
has returned at least one row. This bug occurs because return_zero_rows() is
called only when we know, before starting any scans, that the select will
return zero rows, but often such information is not known.
In the reported case the return_zero_rows() function is not called, so
the was_values flag is not reset to false even though the subquery returns no
rows, and the Item_func_not_all and Item_func_nop_all functions return a wrong
comparison result.
The end_send_group() function now calls no_rows_in_result() for each item
in the fields_list if no rows were found for the (sub)query.
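A minimal sketch of the symptom, assuming hypothetical tables (the committed test case is in subselect.test):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  CREATE TABLE t2 (b INT);
  INSERT INTO t2 VALUES (100);
  -- the subquery returns an empty set, so the ALL comparison is vacuously
  -- true and both rows of t1 must be returned
  SELECT a FROM t1 WHERE a > ALL (SELECT b FROM t2 WHERE b > 500);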
mysql-test/t/subselect.test:
Added test case for bug#18503: Queries with a quantified subquery returning empty set may return a wrong result.
mysql-test/r/subselect.result:
Added test case for bug#18503: Queries with a quantified subquery returning empty set may return a wrong result.
sql/sql_select.cc:
Fixed bug#18503: Queries with a quantified subquery returning empty set may return a wrong result.
The end_send_group() function now calls no_rows_in_result() for each item
in the fields_list if no matching rows were found.
into macbook.gmz:/Users/kgeorge/mysql/work/B14553-5.0-opt
mysql-test/r/odbc.result:
Auto merged
sql/sql_select.cc:
Auto merged
mysql-test/r/rpl_insert_id.result:
merge the test at the end of 4.1 test
mysql-test/t/rpl_insert_id.test:
merge the test at the end of 4.1 test
sql/sql_class.cc:
merged
sql/sql_class.h:
merged
To make MySQL compatible with some ODBC applications, you can find
the AUTO_INCREMENT value for the last inserted row with the following query:
SELECT * FROM tbl_name WHERE auto_col IS NULL.
This is done with special code that replaces 'auto_col IS NULL' with
'auto_col = LAST_INSERT_ID'.
However, this also resets LAST_INSERT_ID to 0, as it is used as a flag
to ensure that only the first SELECT ... WHERE auto_col IS NULL
after an INSERT gets this special behaviour.
In order to avoid resetting LAST_INSERT_ID, a special flag is introduced
in the THD class. This flag, instead of LAST_INSERT_ID, is used to restrict
the second and subsequent SELECTs.
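A rough sketch of the behaviour being preserved (table and column names are hypothetical):
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
  INSERT INTO t1 (v) VALUES (10);
  -- rewritten internally to id = LAST_INSERT_ID(); returns the row just inserted
  SELECT * FROM t1 WHERE id IS NULL;
  -- with the new THD flag, LAST_INSERT_ID() is no longer reset to 0 by the SELECT above
  SELECT LAST_INSERT_ID();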
mysql-test/r/odbc.result:
test suite for the bug
mysql-test/r/rpl_insert_id.result:
test for the fix in replication
mysql-test/t/odbc.test:
test suite for the bug
mysql-test/t/rpl_insert_id.test:
test for the fix in replication
sql/sql_class.cc:
initialize the flag
sql/sql_class.h:
the flag's declaration and the code that sets it when setting last_insert_id
sql/sql_select.cc:
the special flag is used instead of last_insert_id
into chilla.local:/home/mydev/mysql-5.0-bug16218
sql/field.cc:
Auto merged
sql/field.h:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_select.cc:
Auto merged
sql/sql_trigger.cc:
Auto merged
sql/table.cc:
Auto merged
mysql-test/r/distinct.result:
4.1->5.0 merge for bug #16458
* 5.0 is better in detecting duplicate columns
sql/sql_select.cc:
4.1->5.0 merge for bug #16458
* Should not do the optimization if using index for group by
* changed structures in 5.0
The problem was that the SQL_CACHE and SQL_NO_CACHE flags in a SELECT
statement were restored from internal structures based on a value set later
at runtime, not on the original value set by the user.
The solution is to remember that original value.
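One way the symptom could show up, as a hedged, illustrative sketch (the committed test case is in show_check.test; names here are hypothetical):
  CREATE TABLE t1 (a INT);
  -- NOW() makes the statement uncacheable at runtime; before the fix that
  -- runtime decision could leak back into the reconstructed statement text
  -- as an SQL_NO_CACHE the user never wrote
  CREATE VIEW v1 AS SELECT a, NOW() FROM t1;
  SHOW CREATE VIEW v1;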
mysql-test/r/auto_increment.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/func_compress.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/func_math.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/func_system.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/func_time.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/information_schema.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/query_cache.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/rpl_get_lock.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/rpl_master_pos_wait.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/show_check.result:
Add result for bug#17203.
mysql-test/r/subselect.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/type_blob.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/variables.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/r/view.result:
Update result to not report SQL_NO_CACHE if it wasn't there in the first place.
mysql-test/t/show_check.test:
Add test case for bug#17203.
sql/sql_lex.cc:
Reset SELECT_LEX::sql_cache together with SELECT_LEX::options.
sql/sql_lex.h:
Add SELECT_LEX::sql_cache field to store original user setting.
sql/sql_select.cc:
Output SQL_CACHE and SQL_NO_CACHE depending on stored original user setting.
sql/sql_yacc.yy:
Make effect of SQL_CACHE and SQL_NO_CACHE mutually exclusive. Ignore
SQL_CACHE if SQL_NO_CACHE was used. Remember what was set by the user.
Reset SELECT_LEX::sql_cache together with SELECT_LEX::options.
'SELECT DISTINCT a,b FROM t1' should not use a temp table if there is a unique
index (or primary key) on a.
There are a number of other similar cases that can be calculated without the
use of a temp table: multi-part unique indexes, primary keys, or using GROUP BY
instead of DISTINCT.
When a GROUP BY/DISTINCT clause contains all key parts of a unique
index, it is guaranteed that the fields of the clause will be
unique, so we can optimize away GROUP BY/DISTINCT altogether.
This optimization has two effects:
* there is no need to create a temporary table to compute the
GROUP/DISTINCT operation (or the temporary table will be smaller if only GROUP
is removed and DISTINCT stays, or if DISTINCT is removed and GROUP BY stays)
* this causes the statement in effect to become updatable in Connector/Java,
because the result set columns will be direct references to the primary key of
the table (instead of to the temporary table that it currently references).
Implemented a check that will optimize away GROUP BY/DISTINCT for queries like
the above.
Currently it works only for a single non-constant table in the FROM clause.
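A minimal sketch of the optimization (hypothetical table; the committed test case is in distinct.test):
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  -- every row already has a distinct value of a, so DISTINCT (or GROUP BY a)
  -- can be optimized away and no temporary table is needed
  EXPLAIN SELECT DISTINCT a, b FROM t1;
  EXPLAIN SELECT a, b FROM t1 GROUP BY a;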
mysql-test/r/distinct.result:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
mysql-test/t/distinct.test:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
sql/sql_select.cc:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- disable GROUP BY if it contains the fields of a unique index.
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
INSERT DELAYED crashed in 5.0 on a table with a varchar that
could be NULL and was created pre-5.0 (Bugs 16218 and 13707).
INSERT DELAYED corrupted data in 5.0 on a table with varchar
fields that was created pre-5.0 (Bugs 17294 and 16611).
In the case of INSERT DELAYED the open table is copied from the
delayed insert thread to be able to create a record for the
queue. When copying the fields, a method was used that converted
old varchar fields to new varchar fields and did not set up
some pointers into the record buffer of the table.
The field conversion was responsible for the misinterpretation of
the record contents by the delayed insert thread. The wrong
pointer setup was responsible for the crashes.
For Bug 13707 (Server crash with INSERT DELAYED on MyISAM table)
I fixed the above mentioned method to set up one of the pointers.
For Bug 16218 I set up the other pointers too.
But when looking at the corruptions I became aware that converting
the field type was totally wrong for INSERT DELAYED. The copied
table is used to create a record that is to be sent to the
delayed insert thread. Of course it can interpret the record
correctly only if all field types are the same in both table
objects.
So I revoked the fix for Bug 13707 and changed the new_field()
method so that it can suppress conversions.
No test case as this is a migration problem. One needs to
create a table with 4.x and use it with 5.x. I added two
test scripts to the bug report.
sql/field.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/field.h:
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
sql/sql_insert.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added comments. Made small style fixes.
Used the new parameter 'keep_type' of Field::new_field()
to avoid field type conversion. The table copy must have
exactly the same types of fields as the original table.
Otherwise the record contents created by the foreground
thread could be misinterpreted by the delayed insert thread.
sql/sql_select.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/sql_trigger.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/table.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
Additional fix for #16377 for bigendian platforms
sql_select.cc, select.result, select.test:
After merge fix
mysql-test/t/select.test:
After merge fix
mysql-test/r/select.result:
After merge fix
sql/sql_select.cc:
After merge fix
sql/field.h:
Additional fix for #16377 for bigendian platforms
sql/field.cc:
Additional fix for #16377 for bigendian platforms
Bug #9676: INSERT INTO x SELECT .. FROM x LIMIT 1; slows down with big tables
Currently, for INSERT ... SELECT ... LIMIT ... the server uses a
temporary table to store the results of SELECT ... LIMIT .. and then
uses that table as a source for the INSERT. The problem is that in some cases
it actually skips the LIMIT clause in doing that and materializes the
whole SELECT result set regardless of the LIMIT.
This fix limits the filling of the temp table to only as many rows as
will actually be used, by propagating the LIMIT value.
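A hedged sketch of the statement shape (hypothetical table; the committed test case is in insert_select.test):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3), (4);
  -- source and target are the same table, so the SELECT is buffered in a
  -- temporary table first; with the fix that buffer holds at most 1 row
  INSERT INTO t1 SELECT a FROM t1 LIMIT 1;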
mysql-test/r/insert_select.result:
* Bug #9676: INSERT INTO x SELECT .. FROM x LIMIT 1; slows down with big
tables
- a test demonstrating the code path
mysql-test/t/insert_select.test:
* Bug #9676: INSERT INTO x SELECT .. FROM x LIMIT 1; slows down with big
tables
- a test demonstrating the code path
sql/sql_select.cc:
* Bug #9676: INSERT INTO x SELECT .. FROM x LIMIT 1; slows down with big
tables
- pass through the real LIMIT number if the temp table is created for
buffering results.
- set the counter for all the cases when the temp table is not used for
grouping
into rurik.mysql.com:/home/igor/mysql-5.0-opt
sql/opt_sum.cc:
Auto merged
sql/sql_select.cc:
Auto merged
sql/sql_select.h:
Auto merged
mysql-test/r/func_group.result:
SCCS merged
mysql-test/t/func_group.test:
SCCS merged
The bug report revealed two problems related to MIN/MAX optimization:
1. If the length of a constant key used in a SARGable condition for
the MIN/MAX fields is greater than the length of the field, an
unwanted warning on key truncation is issued;
2. If MIN/MAX optimization is applied to a partial index, like INDEX(b(4)),
this can lead to returning a wrong result set.
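An illustrative sketch of the two problems, assuming hypothetical tables:
  -- 1) constant longer than the key field: no truncation warning should be issued
  CREATE TABLE t1 (a VARCHAR(4), KEY (a));
  SELECT MAX(a) FROM t1 WHERE a = 'abcdefgh';
  -- 2) prefix index: the index orders values only by their first 4 characters,
  --    so MIN/MAX optimization over it is blocked to avoid a wrong result
  CREATE TABLE t2 (b VARCHAR(16), KEY (b(4)));
  INSERT INTO t2 VALUES ('abcdxx'), ('abcdef');
  SELECT MAX(b) FROM t2 WHERE b >= 'abcd';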
mysql-test/r/func_group.result:
Added test cases for bug #18206.
mysql-test/t/func_group.test:
Added test cases for bug #18206.
sql/opt_sum.cc:
Fixed bug #18206.
Suppressed the warning about data truncation when store_val_in_field
was used to store keys for the field used in MIN/MAX optimization.
Blocked MIN/MAX optimization for partial keys, such as in INDEX(b(4)).
sql/sql_select.cc:
Fixed bug #18206.
Added a parameter to the function store_val_in_field that allows
controlling whether warnings about data truncation are set in the function.
sql/sql_select.h:
Fixed bug #18206.
Added a parameter to the function store_val_in_field that allows
controlling whether warnings about data truncation are set in the function.
into mysql.com:/home/kgeorge/mysql/5.0/B18681
sql/mysql_priv.h:
Auto merged
sql/sql_acl.cc:
Auto merged
sql/sql_base.cc:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_select.cc:
Auto merged
The check for view security was lacking several points:
1. Checking with the right set of permissions: each table ref that
participates in a view carries the right credentials to use in its
security_ctx member, but these weren't used for checking the credentials.
This makes it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, the security checking for views was simply disabled
in explicit ways in several places.
3. Security was checked only for the columns of the tables that are
brought into the query from a view. So if there is no column reference
outside of the view definition, the lack of access to the tables in the view
was not detected in SQL SECURITY INVOKER mode.
The fix below addresses the above three points.
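A hedged sketch of point 3 (database, view, and account names are hypothetical):
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT);
  CREATE SQL SECURITY INVOKER VIEW db1.v1 AS SELECT a FROM db1.t1;
  GRANT SELECT ON db1.v1 TO 'u1'@'localhost';
  -- then, connected as u1 (who has no privileges on db1.t1): no view column
  -- is referenced, yet in SQL SECURITY INVOKER mode the access check against
  -- db1.t1 must still be performed and the statement rejected
  SELECT 1 FROM db1.v1;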
mysql-test/r/grant.result:
removed nondeterminism (unspecified order) in some test output
mysql-test/r/view_grant.result:
Somewhat extended test case for the bug and similar queries.
mysql-test/t/grant.test:
removed nondeterminism (unspecified order) in some test output
mysql-test/t/view_grant.test:
Somewhat extended test case for the bug and similar queries.
sql/mysql_priv.h:
A wrapper for setup_tables that also checks access to the tables
sql/sql_acl.cc:
removed artificial security check stop and used the table ref's credentials.
sql/sql_base.cc:
a wrapper for setup_tables to check access to the tables
sql/sql_delete.cc:
wrapper called.
sql/sql_insert.cc:
wrapper called
sql/sql_load.cc:
wrapper called
sql/sql_parse.cc:
wrapper called and artificial check stop removed
sql/sql_select.cc:
wrapper called
sql/sql_update.cc:
wrapper called
sql/table.cc:
Mask table access to the view error as well.
Fixed compiler warnings
Fixed valgrind warning in Item_date_add_intervall::eq. (Recoding of bugfix #19490)
sql/field.cc:
remove dflt_field from field structure (not needed)
Simple cleanup of code that had been copied elsewhere
sql/field.h:
remove dflt_field from field structure (not needed)
sql/item.h:
Removed compiler warnings
sql/item_timefunc.cc:
Fixed Item_date_add_intervall::eq
The problem was that when we call 'eq', 'this' is not fixed, which means we can't call const_item() or a value function.
I fixed this so that we check eq for all arguments and that the sign and type are identical.
(The original code gave an 'accessing uninitialized data' warning in valgrind.)
sql/mysql_priv.h:
Added default fields to create_tmp_field
sql/sql_insert.cc:
New default_field parameter to create_tmp_field()
sql/sql_select.cc:
New default_field parameter to create_tmp_field()
Use this in create_tmp_table() to set the right default value for a field
and to transfer the NO_DEFAULT_VALUE_FLAG flag to the new field
mysql-test/r/strict.result:
Fix for bug#17626 CREATE TABLE ... SELECT failure with TRADITIONAL SQL mode
test case
mysql-test/r/type_ranges.result:
Fix for bug#17626 CREATE TABLE ... SELECT failure with TRADITIONAL SQL mode
result fix
mysql-test/t/strict.test:
Fix for bug#17626 CREATE TABLE ... SELECT failure with TRADITIONAL SQL mode
test case
When a CREATE TABLE command created a table from a materialized
view, it did not inherit default values from the underlying table.
Moreover, the temporary table used for the view materialization
did not inherit those default values.
In the case when the underlying table contained ENUM fields this caused
misleading error messages. In other cases the created table contained
wrong default values.
The code was modified to ensure inheritance of default values for
materialized views.
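A rough sketch of the scenario, assuming hypothetical tables (the committed test case is in view.test):
  CREATE TABLE t1 (a INT DEFAULT 3, e ENUM('a','b') NOT NULL DEFAULT 'b');
  -- ALGORITHM=TEMPTABLE forces materialization of the view
  CREATE ALGORITHM=TEMPTABLE VIEW v1 AS SELECT a, e FROM t1;
  -- with the fix, t2 (and the temporary table behind v1) inherits the
  -- defaults 3 and 'b' from t1
  CREATE TABLE t2 SELECT * FROM v1;
  SHOW CREATE TABLE t2;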
mysql-test/r/view.result:
Added a test case for bug #19089.
mysql-test/t/view.test:
Added a test case for bug #19089.
sql/field.cc:
Fixed bug #19089.
Added field dflt_field to the class Field.
This field is set for temp table fields that inherit
default values from the items from which they are created.
sql/field.h:
Fixed bug #19089.
Added field dflt_field to the class Field.
This field is set for temp table fields that inherit
default values from the items from which they are created.
sql/sql_select.cc:
Fixed bug #19089.
When a CREATE TABLE command created a table from a materialized
view, it did not inherit default values from the underlying table.
Moreover, the temporary table used for the view materialization
did not inherit those default values.
The code was modified to ensure inheritance of default values for
materialized views.
Re-work best_access_path() and find_best() to reuse E(#rows(range access)) as
E(#rows(ref[_or_null](const) access)) only when it is appropriate.
[This is the final cumulative patch]
mysql-test/r/select.result:
BUG#17379: Testcase
mysql-test/r/subselect.result:
BUG#17379: Updated test results
mysql-test/t/select.test:
BUG#17379: Testcase
sql/opt_range.cc:
BUG#17379: Wrong reuse of E(#rows(range)) as E(#rows(ref(const))):
Make range optimizer together with TABLE::quick_* also return TABLE::quick_n_ranges
sql/sql_select.cc:
BUG#17379: Wrong reuse of E(#rows(range)) as E(#rows(ref(const))):
Re-work best_access_path() to reuse E(#rows(range access)) as
E(#rows(ref[_or_null](const) access)) only when it is appropriate.
sql/table.h:
BUG#17379: Wrong reuse of E(#rows(range)) as E(#rows(ref(const))):
Make range optimizer together with TABLE::quick_* also return TABLE::quick_n_ranges
When converting DISTINCT to GROUP BY where the columns are from the covering
index and they are quoted twice in the SELECT list, the optimizer was creating
an improper processing sequence. This is because the columns
of the covering index are not recognized as such and are treated as non-index
columns.
Generally speaking duplicate columns can safely be removed from the GROUP
BY/DISTINCT list because this will not add or remove new rows in the
resulting set. Duplicates can be removed even if they are not consecutive
(as is the case for ORDER BY, where the duplicate columns can be removed
only if they are consecutive).
So we can safely transform "SELECT DISTINCT a,a FROM ... ORDER BY a" to
"SELECT a,a FROM ... GROUP BY a ORDER BY a" instead of
"SELECT a,a FROM .. GROUP BY a,a ORDER BY a". We can even transform
"SELECT DISTINCT a,b,a FROM ... ORDER BY a,b" to
"SELECT a,b,a FROM ... GROUP BY a,b ORDER BY a,b".
The fix to this bug consists of checking for duplicate columns in the SELECT
list when constructing the GROUP BY list while transforming DISTINCT to GROUP
BY, and skipping the ones that are already in the list.
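A small sketch of the affected query shape (hypothetical table; the committed cases are in distinct.test and group_min_max.test):
  CREATE TABLE t1 (a INT, b INT, KEY (a,b));
  -- the duplicate a is skipped when building the GROUP BY list, so the
  -- covering index (a,b) can still drive the GROUP BY/DISTINCT processing
  EXPLAIN SELECT DISTINCT a, a FROM t1 ORDER BY a;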
mysql-test/r/distinct.result:
test case for the bug without loose index scan
mysql-test/r/group_min_max.result:
test case for the bug
mysql-test/t/distinct.test:
test case for the bug without loose index scan
mysql-test/t/group_min_max.test:
test case for the bug
sql/sql_select.cc:
duplicates check and removal
A query with GROUP BY and HAVING clauses could return a wrong
result set if the HAVING condition contained a constant conjunct
evaluated to FALSE.
It happened because the pushdown condition for the table with
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table,
which ignores constant conjuncts. This is apparently not correct when
constant false conjuncts are present.
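A minimal sketch of the symptom (hypothetical table; the committed test case is in having.test):
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 3);
  -- the constant conjunct 1 = 0 is always false, so the result set must be
  -- empty; EXPLAIN now reports "Impossible HAVING"
  SELECT a, SUM(b) FROM t1 GROUP BY a HAVING 1 = 0 AND a > 0;
  EXPLAIN SELECT a, SUM(b) FROM t1 GROUP BY a HAVING 1 = 0 AND a > 0;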
mysql-test/r/having.result:
Added a test case for bug #14927.
mysql-test/t/having.test:
Added a test case for bug #14927.
sql/sql_lex.cc:
Fixed bug #14927.
Initialized fields for having conditions in st_select_lex::init_query().
sql/sql_lex.h:
Fixed bug #14927.
Added a field to restore HAVING conditions for execution in SP and PS.
sql/sql_prepare.cc:
Fixed bug #14927.
Added code to restore HAVING conditions for execution in SP and PS.
sql/sql_select.cc:
Fixed bug #14927.
Performed evaluation of constant expressions in HAVING clauses.
If the HAVING condition contains a constant conjunct that is always false,
an empty result set is returned after the optimization phase.
In this case the corresponding EXPLAIN command now returns
"Impossible HAVING" in the last column.
The bug was as follows: when merge_key_fields() encounters "t.key=X OR t.key=Y" it will
try to join them into ref_or_null access via "t.key=X OR NULL". In order to make this
inference it checks whether Y<=>NULL, ignoring the fact that the value of Y may not yet be known.
The fix is that the check whether Y<=>NULL is made only if the value of Y is known (i.e. it is a
constant).
TODO: When merging to 5.0, replace used_tables() with const_item() everywhere in merge_key_fields().
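A hedged sketch of the query shape involved, with hypothetical tables (the committed test case is in innodb_mysql.test):
  CREATE TABLE t1 (k INT, KEY (k)) ENGINE=InnoDB;
  CREATE TABLE t2 (x INT, y INT) ENGINE=InnoDB;
  -- here t2.y plays the role of Y: it is not a constant, so whether it is NULL
  -- is not known at optimization time and must not be tested by merge_key_fields()
  SELECT * FROM t2, t1 WHERE t1.k = t2.x OR t1.k = t2.y;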
mysql-test/r/innodb_mysql.result:
Testcase for BUG16798
mysql-test/t/innodb_mysql.test:
Testcase for BUG16798
sql/sql_select.cc:
BUG#16798: Inapplicable ref_or_null query plan and bad query result on random occasions
In merge_key_fields() don't call val->is_null() if the value of val is not known.
into rurik.mysql.com:/home/igor/dev/mysql-5.0-0
mysql-test/r/subselect.result:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/sql_select.cc:
Auto merged
Fixed bug #14292: performance degradation for a benchmark query.
This performance degradation was due to the fact that some
cost evaluation code added into 4.1 in the function find_best was
not merged into the code of the function best_access_path added
together with other code for the greedy optimizer.
Added a parameter to the function print_plan. The parameter contains
the accumulated cost for a given partial join.
The patch does not include a special test case since this performance
degradation is hard to reproduce with a simple example.
TODO: make the function find_best use the function best_access_path
in order to remove duplication of code which might result in incomplete
merges in the future.
mysql-test/r/delete.result:
Fixed bug #14292: performance degradation for a benchmark query.
Adjusted test results.
mysql-test/r/subselect.result:
Fixed bug #14292: performance degradation for a benchmark query.
Adjusted test results.
sql/mysql_priv.h:
Fixed bug #14292: performance degradation for a benchmark query.
Added a parameter to the function print_plan. The parameter contains
accumulated cost for a given partial join.
sql/sql_select.cc:
Fixed bug #14292: performance degradation for a benchmark query.
This performance degradation was due to the fact that some
cost evaluation code added into 4.1 in the function find_best was
not merged into the code of the function best_access_path added
together with other code for greedy optimizer.
sql/sql_test.cc:
Fixed bug #14292: performance degradation for a benchmark query.
Added a parameter to the function print_plan. The parameter contains
accumulated cost for a given partial join.
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the LIMIT clause was
completely neglected.
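A minimal sketch of the construct (hypothetical table; the committed test case is in order_by.test):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (3), (1), (2);
  -- the inner ORDER BY ... LIMIT must pick the two largest values (3, 2)
  -- before the outer ORDER BY re-sorts them: expected result 2, 3
  (SELECT a FROM t1 ORDER BY a DESC LIMIT 2) ORDER BY a;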
mysql-test/r/order_by.result:
Added a test case for bug #18767.
mysql-test/t/order_by.test:
Added a test case for bug #18767.
sql/sql_lex.h:
Fixed bug #18767.
Placed the code that created a fake SELECT_LEX into a separate function.
sql/sql_parse.cc:
Fixed bug #18767.
Placed the code that created a fake SELECT_LEX into a separate function.
sql/sql_select.cc:
Fixed bug #18767.
Changed the condition on which a SELECT is treated as part of a UNION.
The SELECT in
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2
now is handled in the same way as the first SELECT in a UNION
sequence.
sql/sql_union.cc:
Fixed bug #18767.
Changed the condition at which a SELECT is treated as part of a UNION.
The SELECT in
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2
now is handled in the same way as the first SELECT in a UNION
sequence.
sql/sql_yacc.yy:
Fixed bug #18767.
Changed the condition at which a SELECT is treated as part of a UNION.
The SELECT in
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2
now is handled in the same way as the first SELECT in a UNION
sequence. The SELECT in
(SELECT ... LIMIT n) ORDER BY order_list
is handled in the same way.
Yet if there is neither ORDER BY nor LIMIT in the single-select
union construct
(SELECT ...) ORDER BY order_list
then it is still handled as a simple select with an order clause.
The SQL standard doesn't allow the HAVING clause to use fields that are not
present in the GROUP BY clause and are not under any aggregate function in the
HAVING clause. However, MySQL allows using such fields. This extension assumes
that the non-grouping fields will have the same group-wise values. Otherwise,
the result will be unpredictable. Allowing this extension in the strict
MODE_ONLY_FULL_GROUP_BY SQL mode results in misunderstanding of HAVING
capabilities.
The new ER_NON_GROUPING_FIELD_USED error message is added. It says
"non-grouping field '%-.64s' is used in %-.64s clause". This message is
supposed to be used for reporting errors when some field is not found in the
GROUP BY clause but has to be present there. Use cases for this message are
this bug and the case when a field is present in the SELECT item list not under
any aggregate function and there is a GROUP BY clause present which doesn't
mention that field. Being more descriptive, it renders the
ER_WRONG_FIELD_WITH_GROUP error message obsolete.
The resolve_ref_in_select_and_group() function now reports the
ER_NON_GROUPING_FIELD_USED error if the strict mode is set and the field for
the HAVING clause is found in the SELECT item list only.
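A short sketch of the rejected case (hypothetical table; the committed test case is in having.test):
  SET sql_mode = 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT, b INT);
  -- b is neither in the GROUP BY list nor under an aggregate function, so in
  -- this strict mode the statement now fails with ER_NON_GROUPING_FIELD_USED
  SELECT a FROM t1 GROUP BY a HAVING b > 10;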
sql/share/errmsg.txt:
Added the new ER_NON_GROUPING_FIELD_USED error message for the bug#14169.
mysql-test/t/having.test:
Added test case for the bug#18739: non-standard HAVING extension was allowed in strict ANSI sql mode.
mysql-test/r/having.result:
Added test case for the bug#18739: non-standard HAVING extension was allowed in strict ANSI sql mode.
sql/sql_select.cc:
Added TODO comment to change the ER_WRONG_FIELD_WITH_GROUP to more detailed ER_NON_GROUPING_FIELD_USED message.
sql/item.cc:
Fixed bug#18739: non-standard HAVING extension was allowed in strict ANSI sql mode.
The resolve_ref_in_select_and_group() function now reports the
ER_NON_GROUPING_FIELD_USED error if the strict MODE_ONLY_FULL_GROUP_BY mode
is set and the field for the HAVING clause is found in the SELECT item list only.