- Debug build: make DBUG_ASSERT statements really look at the first table
Cursors can't be expected to work with semi-join subqueries anyway,
which isn't a problem because cursors are not used anywhere.
mysql-test/r/alter_table_online.result:
Test new feature
mysql-test/t/alter_table_online.test:
Test new feature
sql/handler.cc:
Added comment
sql/lex.h:
Added ONLINE keyword
sql/mysql_priv.h:
Added option to alter table to require online operation
sql/share/errmsg.txt:
Added error message if ONLINE can't be done
sql/sql_lex.h:
Added online option
sql/sql_parse.cc:
Added online option to mysql_alter_table()
sql/sql_table.cc:
Added a check that gives an error if the table can't be altered instantly when ONLINE is requested.
Fixed an incorrect check for tables that include a VARCHAR column.
Fixed incorrect (but unlikely to be hit) handling of error conditions in ALTER TABLE.
sql/sql_yacc.yy:
Added ALTER ONLINE TABLE syntax
storage/maria/ha_maria.cc:
Fixed a bug where 'start_bulk_insert' used a too-small buffer when called with an unknown number of rows.
This makes it possible to take safe multi-volume snapshots, as long as the volume with the transaction logs is snapshotted last (a sketch of the mechanism follows this entry).
include/mysql_com.h:
Added REFRESH_CHECKPOINT
mysql-test/r/flush.result:
Added a test of the new FLUSH TABLES syntax + calls to the checkpoint_status handler
mysql-test/t/flush.test:
Added a test of the new FLUSH TABLES syntax + calls to the checkpoint_status handler
sql/handler.cc:
Added code to call checkpoint_state for all handlertons that support it
sql/handler.h:
Added a new checkpoint_state() handlerton call to temporarily disable checkpoints.
sql/lex.h:
Added CHECKPOINT keyword
sql/sql_yacc.yy:
Added support for FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT
storage/maria/ha_maria.cc:
Added handlerton call to disable checkpoints.
storage/maria/ma_checkpoint.c:
Don't do checkpoint if checkpoints are disabled.
storage/maria/ma_static.c:
Added maria_checkpoint_disabled
storage/maria/maria_def.h:
Added maria_checkpoint_disabled
storage/xtradb/handler/ha_innodb.cc:
Added handlerton call to disable checkpoints.
storage/xtradb/include/log0log.h:
Added an option to log_checkpoint() that allows non-critical checkpoints to be ignored while checkpoints are disabled.
storage/xtradb/log/log0log.c:
Added code that allows checkpoints to be disabled during FLUSH TABLES ... DISABLE CHECKPOINT
This was done by adding a new argument to log_checkpoint() which tells it when the checkpoint is requested by srv_master_thread (such checkpoints are safe to ignore)
storage/xtradb/srv/srv0srv.c:
Tell log_checkpoint() that checkpoints from srv_master_thread() are safe to ignore (ignoring them just delays recovery a bit).
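Below is a minimal, self-contained sketch of the mechanism described in this entry. The names (checkpoint_state, do_checkpoint, checkpoint_disabled, safe_to_ignore) are illustrative stand-ins, not the engine code; the real implementation lives in the handlerton call and in the checkpoint code listed above.

  #include <atomic>
  #include <cassert>

  static std::atomic<bool> checkpoint_disabled{false};

  // Models the new handlerton checkpoint_state() call.
  static void checkpoint_state(bool disable)
  {
    checkpoint_disabled= disable;
  }

  // Models the background checkpoint; 'safe_to_ignore' is true when the caller
  // is the master thread, whose checkpoints only shorten recovery time.
  static bool do_checkpoint(bool safe_to_ignore)
  {
    if (checkpoint_disabled && safe_to_ignore)
      return false;               // skipped while a snapshot is being taken
    return true;                  // checkpoint performed
  }

  int main()
  {
    checkpoint_state(true);       // FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT
    assert(!do_checkpoint(true));
    checkpoint_state(false);      // checkpoints allowed again
    assert(do_checkpoint(true));
    return 0;
  }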
Analysis:
Build_equal_items_for_cond() rewrites the WHERE clause in such a way
that it may merge the list join->cond_equal->current_level with the
list of child Items of an AND condition in the WHERE clause.
The place where this is done is:
  static COND *build_equal_items_for_cond(THD *thd, COND *cond,
                                          COND_EQUAL *inherited)
  {
    ...
    if (and_level)
    {
      args->concat(&eq_list);
      args->concat((List<Item> *)&cond_equal.current_level);
    }
    ...
  }
As a result, later transformations on the WHERE clause may change the
structure of the list join->cond_equal->current_level without knowing this.
Specifically in this bug, Item_in_subselect::inject_in_to_exists_cond
creates a new AND of the old WHERE clause and the IN->EXISTS conditions.
It then calls fix_fields() for the new AND. Among other things, fix_fields
flattens all nested ANDs into one by merging the AND argument lists.
When there is a cond_equal for the JOIN, its list of Item_equal objects
is attached to the end of the original AND. When a lower-level AND is
merged into the top-level one, the argument list of the lower-level AND
is concatenated to the list of multiple equalities in the upper-level AND.
As a result, when substitute_for_best_equal_field processes the
multiple equalities, it turns out that the multiple equality list contains
the Items from the lower-level AND which were concatenated to the end of
the join->cond_equal->current_level list. This results in a crash, because
that list must contain nothing but the previously found Item_equal objects.
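A minimal, self-contained sketch of the aliasing problem (Node and ListSketch are illustrative stand-ins, not the server's List<Item>; only the link-don't-copy behaviour of concat() is modelled):

  #include <cassert>

  struct Node { const char *text; Node *next; };

  struct ListSketch
  {
    Node *first= nullptr;
    Node *last= nullptr;

    void push_back(Node *n)
    {
      n->next= nullptr;
      if (last) last->next= n; else first= n;
      last= n;
    }

    // Like List<Item>::concat(): link the other list's nodes in, don't copy them.
    void concat(ListSketch *other)
    {
      if (!other->first) return;
      if (last) last->next= other->first; else first= other->first;
      last= other->last;
    }
  };

  int main()
  {
    Node cond {"t1.a = 1", nullptr};
    Node equal{"multiple-equal(t1.a, t2.a)", nullptr};
    Node extra{"t3.b = 7", nullptr};

    ListSketch and_args, current_level;
    and_args.push_back(&cond);
    current_level.push_back(&equal);   // models cond_equal->current_level

    and_args.concat(&current_level);   // what build_equal_items_for_cond() does
    and_args.push_back(&extra);        // what flattening nested ANDs does later

    // The shared tail was extended: walking current_level now also reaches
    // 'extra', which is not an Item_equal -- the crash scenario described above.
    int n_items= 0;
    for (Node *n= current_level.first; n; n= n->next)
      n_items++;
    assert(n_items == 2);
    return 0;
  }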
Solution:
When performing the IN->EXISTS predicate injection, and the WHERE clause is an
AND, detach the list of Item_equal objects before calling fix_fields on
the injected WHERE clause.
After fix_fields is done, reattach the multiple equalities list to
the end of the argument list of the new AND.
By mistake, the function test_quick_select did not update the value of
table->quick_condition_rows for index intersection scans, even though the
specification explicitly requires this to be done for any table access
plan that provides a better upper bound on the number of rows selected
from the table. As a result, a bogus, usually very large number was saved
as the cost of the table access, which in many cases forced the optimizer
into a bad choice of execution plan for join queries.
The calc_daynr() function returns a negative result
if a malformed date with a zero year and month is used.
Attempting to calculate the week day from a negative value
leads to a crash. The fix is to return NULL for the
'W', 'a', 'w' specifiers if a zero year and month is used.
Additional fixes for calc_daynr():
--added an assertion that the result cannot be negative
--return 0 if a zero year and month is used
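A self-contained sketch of the guarded day-number computation, modelled on calc_daynr() (simplified; calc_daynr_sketch and the checks in main are illustrative, not the server code):

  #include <cassert>

  static long calc_daynr_sketch(unsigned int year, unsigned int month,
                                unsigned int day)
  {
    long delsum;
    int temp;
    int y= (int) year;

    if (y == 0 && month == 0)          // malformed "zero" date: never go negative
      return 0;

    delsum= (long) (365 * y + 31 * ((int) month - 1) + (int) day);
    if (month <= 2)
      y--;
    else
      delsum-= (long) ((int) month * 4 + 23) / 10;
    temp= (int) ((y / 100 + 1) * 3) / 4;
    assert(delsum + y / 4 - temp >= 0); // the new assertion: result is non-negative
    return delsum + y / 4 - temp;
  }

  int main()
  {
    assert(calc_daynr_sketch(0, 0, 1) == 0);         // malformed date handled
    assert(calc_daynr_sketch(1970, 1, 1) == 719528); // matches TO_DAYS('1970-01-01')
    return 0;
  }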
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql-common/my_time.c:
--added an assertion that the result cannot be negative
--return 0 if a zero year and month is used
sql/item_timefunc.cc:
return NULL for the 'W', 'a', 'w' specifiers
if a zero year and month is used.
Both bugs happened due to the following problem.
When a view column is referenced in the query, an Item_direct_view_ref
object is created that refers to the Item_field for the column.
All references to the same view column refer to the same Item_field.
Different references can belong to different AND/OR levels and,
as a result, can be included in different Item_equal objects.
These Item_equal objects may include different constant objects.
If these constant objects are substituted for the Item_field created
for a view column, we have a conflict situation where the second
substitution annuls the first one. This leads to
wrong result sets returned by the query. Bug #724942 demonstrates
such erroneous behaviour.
The test case of bug #717577 produces wrong result sets because the best
equal fields of the multiple equalities built for different OR levels
of the WHERE condition differ. The substitution for the best equal field
in the second OR branch overwrites the substitution made for the
first branch.
To avoid such conflicts we have to substitute for the references
to the view columns rather than for the underlying field items.
To make such substitutions possible we have to include references
to view columns, rather than the field items created for such columns,
in the multiple equalities.
This patch modifies the Item_equal class to include references
to view columns in multiple equality objects. It also performs
a cleanup of the class methods and adds more comments. The methods
of the Item_direct_view_ref class that assist substitutions for
references to view columns have also been added by this patch.
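A minimal, self-contained model of the conflict (FieldSketch and RefSketch are illustrative stand-ins for the shared Item_field and the per-branch Item_direct_view_ref objects, not the server classes):

  #include <cassert>
  #include <string>

  struct FieldSketch { std::string value; };
  struct RefSketch   { FieldSketch *field; std::string own_value; };

  int main()
  {
    FieldSketch shared{"v.a"};                 // one Item_field per view column
    RefSketch ref_branch1{&shared, "v.a"};     // reference in the first OR branch
    RefSketch ref_branch2{&shared, "v.a"};     // reference in the second OR branch

    // Substituting constants through the shared field: the second OR branch
    // overwrites (annuls) the substitution made for the first one.
    ref_branch1.field->value= "1";
    ref_branch2.field->value= "2";
    assert(ref_branch1.field->value == "2");

    // Substituting at the reference level keeps the branches independent.
    ref_branch1.own_value= "1";
    ref_branch2.own_value= "2";
    assert(ref_branch1.own_value == "1" && ref_branch2.own_value == "2");
    return 0;
  }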
Before sorting, the HAVING condition is split into two parts:
the first part is the table-related condition and the rest remains
as the HAVING part. The extraction of the HAVING part does not take into
account the fact that some conditions might be non-const but
have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by
the make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting the conditions that belong to the sorted
table and, in addition, the conditions that are independent
subqueries.
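A self-contained sketch of the extraction rule behind the fix (keep_for_table is an illustrative stand-in for the relevant check inside make_cond_for_table(), under the assumption that a non-zero used_table prunes conditions whose used_tables() bitmap does not intersect it):

  #include <cassert>
  #include <cstdint>

  using table_map= std::uint64_t;

  // Simplified model of the pruning check: with a non-zero 'used_table',
  // a condition that does not reference it is dropped as "already checked".
  static bool keep_for_table(table_map cond_used_tables, table_map used_table)
  {
    if (used_table && !(cond_used_tables & used_table))
      return false;
    return true;
  }

  int main()
  {
    const table_map subquery_cond= 0;   // independent subquery: uses no outer tables
    assert(!keep_for_table(subquery_cond, /*used_table=*/ 1));  // old call: cut off
    assert( keep_for_table(subquery_cond, /*used_table=*/ 0));  // fixed call: kept
    return 0;
  }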
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting the conditions that belong to the sorted
table and, in addition, the conditions that are independent
subqueries.
Update for the previous patch according to the reviewers' comments.
Updated the ha_partition constructors to use the common
init_handler_variables function.
Added defines for sizes and offsets to improve the readability of the code that reads
and writes the .par file. Also refactored the get_from_handler_file function.
Analysis:
The wrong result is a consequence of sorting the subquery
result and then selecting only the first row, due to the
artificial LIMIT 1 introduced by the fix_fields phase.
Normally, if there is an ORDER BY in a subquery, the ORDER
is removed (Item_in_subselect::select_in_like_transformer).
However, if a GROUP BY is transformed into an ORDER, this happens
later, after the removal of the ORDER clauses of subqueries, so
we end up with a subquery with an ORDER clause and an artificially
added LIMIT 1.
The reason the same query works in the main 5.3 tree without MWL#89 is
that 5.3 performs all subquery transformations, including
IN->EXISTS, before JOIN::optimize(). The beginning of JOIN::optimize
does:
  if (having || (select_options & OPTION_FOUND_ROWS))
    select_limit= HA_POS_ERROR;
which sets the limit back to infinity, thus 5.3 sorts the whole
subquery result, and IN performs the lookup into all subquery result
rows.
Solution:
Sorting of subqueries without a LIMIT is meaningless. Since LIMIT in
subqueries is not supported, the patch removes the sorting by setting
join->skip_sort_order= true
for each subquery JOIN object. This also improves a number of execution
plans so that they no longer perform unnecessary sorting at all.
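A self-contained illustration of why the sort can be dropped: an IN predicate only probes the subquery result for membership, so (with no LIMIT to truncate the result) its row order cannot affect the outcome.

  #include <algorithm>
  #include <cassert>
  #include <vector>

  int main()
  {
    std::vector<int> subquery_rows= {7, 3, 9, 3};
    const int outer_value= 3;

    // Membership before sorting.
    bool found_unsorted= std::count(subquery_rows.begin(), subquery_rows.end(),
                                    outer_value) > 0;
    // Membership after sorting: identical result, so the sort is wasted work.
    std::sort(subquery_rows.begin(), subquery_rows.end());
    bool found_sorted= std::count(subquery_rows.begin(), subquery_rows.end(),
                                  outer_value) > 0;
    assert(found_unsorted == found_sorted);
    return 0;
  }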
FAILS ON SOLARIS
This assertion was triggered if gethostbyaddr_r could not do a
reverse lookup on an IP address. The reason was a missing
DBUG_RETURN macro. The problem affected only debug versions of
the server.
This patch fixes the problem by replacing return with DBUG_RETURN.
No test case added.
Bug#11764671 57533: UNINITIALISED VALUES IN COPY_AND_CONVERT (SQL_STRING.CC) WITH CERTAIN CHA
When ROUND evaluates a decimal result, it uses the Item::decimals
value as the fraction (scale) of the result. In some cases
Item::decimals is greater than the real fraction of the result,
and uninitialised memory of the result (decimal) buffer can be
used in further calculations. The issue was introduced by the
fix for Bug#33143. The fix is to remove the erroneous assignment.
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
remove erroneous assignment
Some multibyte sequences could be considered by the my_mbcharlen() function
to be a multibyte character, while the more exact my_ismbchar() check
disagrees. In such a case the multibyte sequence is pushed into the 'stack'
buffer, which is too small to accommodate the sequence.
The fix is to allocate the stack buffer in
accordance with the maximum character length.
mysql-test/r/loaddata.result:
test case
mysql-test/t/loaddata.test:
test case
sql/sql_load.cc:
allocate stack buffer in compliance with max character length.
Valgrind warnings were caused by comparing index values to an uninitialized field.
mysql-test/r/subselect.result:
New test cases.
mysql-test/t/subselect.test:
New test cases.
sql/opt_sum.cc:
Add thd to opt_sum_query enabling it to test for errors.
If we have a non-nullable index, we cannot use it to match null values,
since set_null() will be ignored, and we might compare uninitialized data.
sql/sql_select.cc:
Add thd to opt_sum_query, enabling it to test for errors.
sql/sql_select.h:
Add thd to opt_sum_query, enabling it to test for errors.
There are two problems with ANALYSE():
1. Memory leak
It happens because do_select() can overwrite the
JOIN::procedure field (with a zero value in our case) and
the JOIN destructor doesn't free the memory allocated for
JOIN::procedure. The fix is to save the original JOIN::procedure
before the do_select() call and restore it after do_select
has executed (see the sketch after this list).
2. Wrong result
If the ANALYSE() procedure is used for a statement with a LIMIT clause,
it could return an empty result set. This happens because of a missing
analyse::end_of_records() call: the first end_send() call
returns NESTED_LOOP_QUERY_LIMIT, and the second call of end_send() with
the end_of_records flag enabled never happens. The fix is to return
NESTED_LOOP_OK from end_send() if a procedure is active.
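A minimal, self-contained sketch of the save/restore idea for problem 1 (JoinSketch, ProcedureSketch and do_select_sketch are illustrative stand-ins, not the server's JOIN, Procedure and do_select):

  #include <cassert>

  struct ProcedureSketch { /* models the ANALYSE() procedure state */ };

  struct JoinSketch
  {
    ProcedureSketch *procedure= nullptr;
    ~JoinSketch() { delete procedure; }   // frees only what 'procedure' points to
  };

  // Models do_select() clobbering JOIN::procedure (with zero in the bug's case).
  static int do_select_sketch(JoinSketch *join)
  {
    join->procedure= nullptr;
    return 0;
  }

  int main()
  {
    JoinSketch join;
    join.procedure= new ProcedureSketch;

    ProcedureSketch *save_procedure= join.procedure;  // the fix: remember it
    int error= do_select_sketch(&join);
    join.procedure= save_procedure;                   // restore so the destructor frees it
    assert(error == 0 && join.procedure == save_procedure);
    return 0;                                         // no leak on scope exit
  }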
mysql-test/r/analyse.result:
test case
mysql-test/t/analyse.test:
test case
sql/sql_select.cc:
--save original JOIN::procedure before do_select() call and
restore it after do_select execution.
--return NESTED_LOOP_OK from end_send() if procedure is active
When we create the temporary result table for a UNION,
an incorrect max_length for the YEAR field is used, and
it leads to an incorrect field value and an incorrect
result string length, as the YEAR field value calculation
depends on the field length.
The fix is to use the underlying item's max_length for the
Item_sum_hybrid::max_length initialization.
mysql-test/r/func_group.result:
test case
mysql-test/t/func_group.test:
test case
sql/field.cc:
added assert
sql/item_sum.cc:
init Item_sum_hybrid::max_length with the
underlying item's max_length for the
INT result type.
The Valgrind warning happens due to an early NULL value check
in Item_func_in::fix_length_and_dec() (before item evaluation).
As a result, NULL value items with uninitialized values are
placed into the array, and this leads to Valgrind warnings during
value array sorting.
The fix is to check the NULL value after item evaluation; the item
is evaluated in the in_array::set() method.
mysql-test/r/func_in.result:
test case
mysql-test/t/func_in.test:
test case
sql/item_cmpfunc.cc:
The fix is to check null value after item evaluation.
Select from a view with the underlying HAVING clause failed with a
message: "1356: View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights to use them"
The bug is a regression of the fix for bug 11750328 - 40825 (a similar
case, but the HAVING clause references an aliased field).
In the old fix for bug 40825 the Item_field::name_length value was
used in place of the real length of Item_field::name. However,
in some cases Item_field::name_length is not in sync with the
actual name length (TODO: combine name and name_length into a
solid String field).
The Item_ref::print() method has been modified to calculate the actual
name length every time.
mysql-test/r/view.result:
Test case for bug #11829681
mysql-test/t/view.test:
Test case for bug #11829681
sql/item.cc:
Bug #11829681 - 60295: ERROR 1356 ON VIEW THAT EXECUTES FINE AS A QUERY
The Item_ref::print() method has been modified to calculate actual
name length every time.
sql/item.h:
Minor commentary.
- Let advance_sj_state() save the value of JOIN::cur_dups_producing_tables
in POSITION::prefix_dups_producing_tables, and restore_sj_state() restore
it.
mysql-test/t/loaddata.test:
test for bug; without fix, running the test with --valgrind would show the leak
and make the test fail.
sql/sql_load.cc:
* In the READ_INFO class, 'need_end_io_cache' is true if init_io_cache() was called,
so if it's true we need to call end_io_cache() to free the memory allocated
by init_io_cache(), no matter the value of 'error'. In the bug's scenario,
'error' was set to true in read_sep_field() because
'1' (read from the file) isn't suitable to load into a geometric column. Because of
'error', end_io_cache() was not called.
Note: end_io_cache() calls my_b_flush_io_cache(), which will do nothing wrong given
that the file is opened for reads only; see the init_io_cache() call, which uses
only these read-only types:
(get_it_from_net) ? READ_NET : (is_fifo ? READ_FIFO : READ_CACHE).
If the cache were instead used to write to the file, my_b_flush_io_cache() might
write to it, and it might be questionable to write to the file
if 'error' is true. But here there's no problem.
* Now that 'need_end_io_cache' is checked even if 'error' is true, it needs
to be initialized in all cases.
* Bonus: move some variables to the initialization list.
on lctn2 systems
There was a local variable in get_all_tables() to store the
"original" value of the database name, as it can get lowercased
depending on the lower_case_table_names value.
get_all_tables() iterates over database names and for each
database iterates over the tables in it.
The "original" db name was assigned inside the table-names loop.
Thus the first table is OK, but the second and subsequent tables
get the lowercased name left over from processing the first table.
Fixed by moving the assignment of the original database name
from the inner (table name) loop to the outer (database name) loop.
Test suite added.
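A self-contained sketch of the loop fix described above (a hypothetical, simplified loop rather than the server's get_all_tables(); it only models moving the assignment out of the inner loop):

  #include <algorithm>
  #include <cassert>
  #include <cctype>
  #include <string>
  #include <vector>

  int main()
  {
    std::vector<std::string> databases= {"MyDb"};
    std::vector<std::string> tables=    {"T1", "T2"};

    for (std::string &db : databases)
    {
      std::string orig_db_name= db;              // the fix: remember it once per database
      for (const std::string &table : tables)
      {
        // With lower_case_table_names=2, per-table processing lowercases the
        // shared database-name buffer.
        std::transform(db.begin(), db.end(), db.begin(),
                       [](unsigned char c) { return (char) std::tolower(c); });
        // Every table, not only the first one, still sees the original spelling.
        assert(orig_db_name == "MyDb");
        (void) table;
      }
    }
    return 0;
  }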
- "Using MRR" is no longer shown with range access.
- Instead, both range and BKA accesses will show one of the following:
= "Rowid-ordered scan"
= "Key-ordered scan"
= "Key-ordered Rowid-ordered scan"
depending on whether the DS-MRR implementation will scan keys in order, rowids in order,
or both.
- The patch also introduces a way for other storage engines/MRR implementations to
pass information to EXPLAIN output about the properties of employed MRR scans.