Fixed timing test failures.
Fixed a failure in the Aria engine's page cache and log handler (found with the maria.maria-big test)
- This could cause a core dump when deleting big blobs.
- Added test to end_pagecache() to verify that the page cache was correctly used:
- inc_counter_for_resize_op and dec_counter_for_resize_op are called the same number of times.
- All page cache blocks were properly deallocated (empty).
mysql-test/suite/innodb/t/innodb_bug38231.test:
Fixed timing issue (code comment says it all)
mysql-test/suite/innodb_plugin/t/innodb_bug38231.test:
Fixed timing issue (code comment says it all)
sql/debug_sync.cc:
Fixed compiler warning
storage/maria/ma_loghandler.c:
Fixed bug found by maria.maria-big test:
- Fixed race condition between an update thread logging a very big blob and the checkpoint thread.
storage/maria/ma_pagecache.c:
Added assert to ensure mutex was properly locked.
Added test to end_pagecache() to verify that the page cache was correctly used:
- inc_counter_for_resize_op and dec_counter_for_resize_op are called the same number of times.
- All page cache blocks were properly deallocated (empty).
In pagecache_delete_internal(), properly reset counters and pins if the function aborts.
Added missing inc_counter_for_resize_op() to pagecache_wait_lock().
Added missing dec_counter_for_resize_op() to pagecache_delete()
- Make sure the creation of t1 is replicated before trying to create a trigger on it on the slave.
- Use the same #ifdef for the declaration as for the definition, to avoid a warning about an unused static function.
mysql-test/suite/innodb_plugin/t/innodb_bug38231.test:
Sometimes we get a timeout here; disable the non-fatal error message.
storage/xtradb/sync/sync0rw.c:
Disable compiler warning
An INSERT query log event is preceded by an INSERT_ID intvar event if the
INSERT allocates a new auto_increment value. But if we ignore the INSERT
due to --replicate-ignore-table or similar, then the INSERT_ID event is
still executed, and the set value of INSERT_ID lingers around in the
slave SQL thread's THD object indefinitely until the next INSERT that
happens to need allocation of a new auto_increment value.
Normally this does not cause problems, as such a following INSERT would
come with its own INSERT_ID event. In this bug, the user had
a trigger on the slave which was missing on the master, and this
trigger had an INSERT which could be affected. In any case, it seems
better to not leave a stray INSERT_ID hanging around in the sql thread
THD indefinitely.
Note that events can also be skipped from apply_event_and_update_pos();
however it is not possible in that code to skip the INSERT without also
skipping the INSERT_ID event.
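For illustration, a sketch of the kind of setup described above; the database, table, and trigger names are hypothetical, and the slave is assumed to run with --replicate-ignore-table=db1.t1:
--
# On the master:
CREATE TABLE db1.t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
CREATE TABLE db1.t2 (v INT);
CREATE TABLE db1.t3 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);

# On the slave only (the master has no such trigger):
CREATE TRIGGER db1.t2_ai AFTER INSERT ON db1.t2
  FOR EACH ROW INSERT INTO db1.t3 (v) VALUES (NEW.v);

# On the master:
INSERT INTO db1.t1 (v) VALUES (1);  # binlogged as an INSERT_ID intvar event + INSERT;
                                    # the INSERT is ignored on the slave, but the
                                    # INSERT_ID value stays set in the SQL thread
INSERT INTO db1.t2 VALUES (2);      # carries no INSERT_ID of its own; the slave-only
                                    # trigger's INSERT into db1.t3 can pick up the
                                    # stale INSERT_ID
--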
configure.in:
Added comment
mysql-test/suite/innodb_plugin/t/innodb_bug56680.test:
Disable test when run with valgrind, as we get errors from buf_buddy_relocate() on the host 'work' for this test.
(Should probably be investigated as this may be an issue in xtradb, but probably harmless.)
'work' is an amd64 machine running openSUSE 11.1 and valgrind 3.4.1.
mysys/charset.c:
Remove static function if not used (to remove compiler warning)
storage/xtradb/srv/srv0srv.c:
Added casts to get rid of compiler warnings
- Removed SCCS rule from Makefile.am
- Made a dummy rule in sql_yacc.yy to get rid of a compiler warning about an unused label.
Don't use maintainer mode with valgrind (as we don't want to initialize all variables)
config/ac-macros/maintainer.m4:
Don't use maintainer mode with valgrind (as we don't want to initialize all variables)
Force initialization of variables when using -Werror (To get rid of compiler warnings)
configure.in:
Don't use maintainer mode with valgrind (as we don't want to initialize all variables)
sql/sql_yacc.yy:
Made a dummy rule in sql_yacc.yy to get rid of a compiler warning about an unused label.
mysql-test/include/have_not_innodb_plugin.inc:
Also detect if xtradb is installed
mysql-test/suite/innodb/t/innodb_bug56143.test:
Disabled test case that doesn't work for innodb_plugin/xtradb.
mysql-test/suite/innodb_plugin/r/innodb_bug56632.result:
Updated result (key_block_size is lower case in MariaDB, as are all other options)
mysql-test/suite/pbxt/r/partition_hash.result:
Updated results (after partition row count optimization changes)
mysql-test/suite/pbxt/r/partition_pruning.result:
Updated results (after partition row count optimization changes)
mysql-test/suite/pbxt/r/partition_range.result:
Updated results (after partition row count optimization changes)
mysql-test/suite/pbxt/r/subselect.result:
Updated result after ROW() changes.
Open issues:
- A better fix for #57688; Igor is working on this
- Test failure in index_merge_innodb.test ; Igor promised to look at this
- Some InnoDB tests fail (need to merge with latest xtradb); Kristian promised to look at this.
- Failing tests: innodb_plugin.innodb_bug56143 innodb_plugin.innodb_bug56632 innodb_plugin.innodb_bug56680 innodb_plugin.innodb_bug57255
- -Werror is disabled; should be enabled after the merge with xtradb.
Registration of a pointer change if we assign it to another pointer which should be identical after statement execution (PS/SP).
mysql-test/r/subselect.result:
Test suite.
mysql-test/t/subselect.test:
Test suite.
sql/sql_class.cc:
The procedure of the pointer registration.
sql/sql_class.h:
The procedure of the pointer registration.
sql/sql_lex.cc:
Registration of a pointer change if we assign it to another pointer which should be identical after statement execution (PS/SP).
- Fixed bug in pagecache_delete_internal() when deleting a block that was flushed by another thread (block->next_used was unexpectedly NULL).
Fixed some 'using uninitialized memory' warnings found by valgrind.
mysql-test/t/information_schema_all_engines-master.opt:
Added options to make slow test run faster
sql/sp.cc:
Fixed valgrind warning.
sql/sql_show.cc:
Fixed valgrind warning.
storage/maria/ma_bitmap.c:
Fixed wrong call parameter to pagecache_unlock_by_link() that caused assert crash in pagecache
storage/maria/ma_pagecache.c:
Fixed possible error in pagecache_wait_lock() when hash_link was not set. (We never hit this issue, but it could happen when two threads update the same page.)
Fixed bug in pagecache_delete_internal() when deleting a block that was flushed by another thread (block->next_used was unexpectedly NULL).
Cleanup: moved pagecache_pthread_mutex_unlock() over comments and asserts to be just before pagecache_fwrite()
Implement os_event_wait_time() for POSIX systems.
In the purge thread, use os_event_wait_time() when sleeping rather than sleep(),
and signal the event when the server shuts down, so we do not need to wait for
up to 10 seconds until the purge thread wakes up.
Also fix a bug where warnings that were pushed after we call set_ok_status() were
not included in the warning count sent to the client in the result packet.
Also in mysqltest, in recursive die() invocation at least print the message
before aborting.
client/mysqltest.cc:
If we detect recursive die(), at least print the message before aborting.
mysql-test/r/warnings_debug.result:
Test case.
mysql-test/t/warnings_debug.test:
Test case.
sql/handler.cc:
Force generation of a warning with specific debug option, for testing.
sql/protocol.cc:
Fix wrong DBUG_ENTER
sql/sql_class.h:
Add method to count warnings pushed after set_ok_status() is called.
sql/sql_error.cc:
Also count warnings pushed after set_ok_status() is called.
storage/xtradb/include/os0sync.h:
Implement working os_event_wait_time() for POSIX.
storage/xtradb/include/srv0srv.h:
Make the purge thread wait for an event when sleeping, so we can signal it to
wake up immediately at shutdown.
storage/xtradb/log/log0log.c:
Make the purge thread wait for an event when sleeping, so we can signal it to
wake up immediately at shutdown.
storage/xtradb/os/os0sync.c:
Implement working os_event_wait_time() for POSIX.
storage/xtradb/srv/srv0srv.c:
Make the purge thread wait for an event when sleeping, so we can signal it to
wake up immediately at shutdown.
Added locking of lock mutex when updating status in external_unlock() for Aria and MyISAM tables.
Fixed so that the 'source' command doesn't cause the mysql command line tool to exit on error.
DEBUG_EXECUTE() and DEBUG_EVALUATE_IF() should not execute things based on wildcards. (Allows one to run --debug with mysql-test-run scripts that use @debug)
Fixed several core dump, deadlock and crashed table bugs in handling of LOCK TABLE with MERGE tables:
- Added priority of locks to avoid crashes with MERGE tables.
- Added thr_lock_merge() to allow one to merge two results of thr_lock().
Fixed 'not found row' bug in REPLACE with Aria tables.
Mark MyISAM tables that are part of MERGE with HA_OPEN_MERGE_TABLE and set the locks to have priority THR_LOCK_MERGE_PRIV.
- By sorting MERGE tables last in thr_multi_unlock() it's safer to release and relock them many times (can happen when TRIGGERS are created)
Avoid printing (null) in the debug file (to make it easier to find wrong NULL pointer usage with %s).
client/mysql.cc:
Fixed that 'source' command doesn't cause mysql command line tool to exit on error.
client/mysqltest.cc:
Don't send NULL to fn_format(). (Can cause crash on Solaris when using --debug)
dbug/dbug.c:
DEBUG_EXECUTE() and DEBUG_EVALUATE_IF() should not execute things based on wildcards.
include/my_base.h:
Added flag to signal if one opens a MERGE table.
Added extra() command to signal that a table is no longer part of a MERGE table.
include/thr_lock.h:
Added priority for locks (needed to fix bug in thr_lock when using MERGE tables)
Added an option to thr_unlock() to specify whether get_status() should be called.
Added prototype for thr_merge_locks().
mysql-test/mysql-test-run.pl:
Ignore crashed table warnings for tables named 'crashed'.
mysql-test/r/merge.result:
Renamed triggers to make debugging easier.
Added some CHECK TABLES to catch errors earlier.
Additional tests.
mysql-test/r/merge_debug.result:
Test of error handling when reopening MERGE tables.
mysql-test/r/udf_query_cache.result:
Added missing flush status
mysql-test/suite/parts/r/partition_repair_myisam.result:
Update results
mysql-test/t/merge.test:
Renamed triggers to make debugging easier.
Added some CHECK TABLES to catch errors earlier.
Additional tests.
mysql-test/t/merge_debug.test:
Test of error handling when reopening MERGE tables.
mysql-test/t/udf_query_cache.test:
Added missing flush status
mysys/my_getopt.c:
Removed not used variable
mysys/my_symlink2.c:
Changed (null) to (NULL) to make it easier to find NULL arguments to DBUG_PRINT() functions.
(On Linux, a NULL passed to sprintf for %s is printed as '(null)')
mysys/thr_lock.c:
Added priority of locks to avoid crashes with MERGE tables.
Added thr_lock_merge() to allow one to merge two results of thr_lock().
- This is needed for MyISAM as all locked tables must share the same status. If not, you will not see newly inserted rows in other instances of the table.
If calling thr_unlock() with THR_UNLOCK_UPDATE_STATUS, call update_status() and restore_status() for the locks. This is needed in some rare cases where we call thr_unlock() followed by thr_lock() without calling external_unlock/external_lock in between.
Simplify loop in thr_multi_lock().
Added 'start_trans', which is called at the end of thr_multi_lock() when all locks are taken.
- This was needed by Aria to ensure that the transaction is started when we have got all locks, not at get_status(). Without this, some rows would not be visible when we lock two tables at the same time, causing REPLACE using two tables to fail unexpectedly.
sql/handler.cc:
Add an assert() in handler::print_error() for "impossible errors" (like table is crashed) when --debug-assert-if-crashed-table is used.
sql/lock.cc:
Simplify mysql_lock_tables() code if get_lock_data() returns 0 locks.
Added new parameter to thr_multi_unlock()
In mysql_unlock_read_tables(), first call external_unlock(), then thr_multi_unlock(); this is the same order as in mysql_unlock_tables().
Don't abort locks in mysql_lock_abort() for merged tables when a MERGE table is deleted; this would cause a spin lock.
Added call to thr_merge_locks() in mysql_lock_merge() to ensure consistency in thr_locks().
- New locks of the same type and table are stored after the old lock to ensure that we get the status from the original lock.
sql/mysql_priv.h:
Added debug_assert_if_crashed_table
sql/mysqld.cc:
Added --debug-assert-if-crashed-table
sql/parse_file.cc:
Don't print '(null)' in DBUG_PRINT if no dir is given
sql/set_var.cc:
Increase default size of buffer for @debug variable.
sql/sql_base.cc:
In case of error from reopen_table() in reopen_tables(), call unlock_open_table() and restart loop.
- This fixes a bug where we deleted the same table twice from open_cache.
Don't take name lock for already name locked table in open_unireg_entry().
- Fixed bug when doing repair in reopen_table().
- In detach_merge_children(), always detach if 'clear_refs' is given. We can't trust parent->children_attached as this function can be called twice, first time with clear_refs set to 0.
sql/sql_class.cc:
Changed printing of (null) to "" in set_thd_proc_info()
sql/sql_parse.cc:
Added DBUG
sql/sql_trigger.cc:
Don't call unlink_open_table() if reopen_table() fails as the table may already be freed.
storage/maria/ma_bitmap.c:
Fixed DBUG_ASSERT() in allocate_tail()
storage/maria/ma_blockrec.c:
Fixed wrong calculation of row length for very small rows in undo_row_update().
- Fixes ASSERT() when doing undo.
storage/maria/ma_blockrec.h:
Added _ma_block_start_trans() and _ma_block_start_trans_no_versioning()
storage/maria/ma_locking.c:
Call _ma_update_status_with_lock() when releasing write locks.
- Fixes potential problem with updating status without the proper lock.
storage/maria/ma_open.c:
Changed to use start_trans() instead of get_status() to ensure that we see all rows in all locked tables when we got the locks.
- Fixed 'not found row' bug in REPLACE with Aria tables.
storage/maria/ma_state.c:
Added _ma_update_status_with_lock() and _ma_block_start_trans().
This is to ensure that we see all rows in all locked tables when we got the locks.
storage/maria/ma_state.h:
Added _ma_update_status_with_lock()
storage/maria/ma_write.c:
More DBUG_PRINT
storage/myisam/mi_check.c:
Fixed error message
storage/myisam/mi_extra.c:
Added HA_EXTRA_DETACH_CHILD:
- Detach MyISAM table to not be part of MERGE table (remove flag & lock priority).
storage/myisam/mi_locking.c:
Call mi_update_status_with_lock() when releasing write locks.
- Fixes potential problem with updating status without the proper lock.
Change to use new HA_OPEN_MERGE_TABLE flag to test if MERGE table.
Added mi_fix_status(), called by thr_merge().
storage/myisam/mi_open.c:
Added marker if part of MERGE table.
Call mi_fix_status() in thr_lock() for transactional tables.
storage/myisam/myisamdef.h:
Change my_once_flag to uint, as it stored different values than just 0/1
Added 'open_flag' to store state given to mi_open()
storage/myisammrg/ha_myisammrg.cc:
Add THR_LOCK_MERGE_PRIV to THR_LOCK_DATA to get MERGE locks sorted after other types of locks.
storage/myisammrg/myrg_locking.c:
Remove windows specific code.
storage/myisammrg/myrg_open.c:
Use HA_OPEN_MERGE_TABLE to mi_open().
Set HA_OPEN_MERGE_TABLE for linked MyISAM tables.
storage/xtradb/buf/buf0buf.c:
Fixed compiler warning
storage/xtradb/buf/buf0lru.c:
Initialize a variable that could otherwise be used uninitialized.
Lines below which were added in the patch for Bug#56814 cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the tmp table field
using create_tmp_field_from_item() instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause we calculate
the group buffer length; see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real tables and has LONG type.
So the group buffer length becomes insufficient for storing the
LONGLONG value. This leads to overwriting a wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
can use a Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
- remove wrong code
- use a Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join
The problem was caused by the fix for bug49487 and became visible
after the fix for bug56679.
Items are cleaned up and set to the unfixed state after filling a derived table.
So we cannot rely on the item::fixed state in Item_func_group_concat::print,
and we cannot use the 'args' array, as the items there may be cleaned up.
The fix is to always use the orig_args array of items, as it
should always contain the correct data.
mysql-test/r/func_gconcat.result:
test case
mysql-test/t/func_gconcat.test:
test case
sql/item_sum.cc:
The fix is always to use orig_args array of items.
The problem is dividing by a const value when
the result is out of the supported range.
The fix:
- return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
- return 0 if the divisor is -1 for the MOD operator.
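For illustration, statements of the kind affected (a sketch, not the recorded test case; the value used is LONGLONG_MIN):
--
CREATE TABLE t1 (a BIGINT NOT NULL);
INSERT INTO t1 VALUES (-9223372036854775808);
SELECT a DIV -1 FROM t1;  # result would be out of the BIGINT range; returns LONGLONG_MIN with the fix
SELECT a MOD -1 FROM t1;  # divisor is -1; returns 0 with the fix
DROP TABLE t1;
--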
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
-return LONGLONG_MIN if the result is out of supported range for DIV operator.
-return 0 if divisor is -1 for MOD operator.
The main.ps_ddl test does SELECT * FROM mysql.general_log; this can be really
expensive with --valgrind if previous test cases have put lots of data in the
general log since the last server restart. Fix by truncating the log at test start.
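A sketch of the kind of statement added at test start (assuming the standard general_log table):
--
TRUNCATE TABLE mysql.general_log;   # empty the log so the later SELECT * FROM mysql.general_log stays cheap
--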
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
With this patch, the current user is put in the query log event for all GRANT and REVOKE
statements, and the SQL thread uses the user in the query log event as the grantor.
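For illustration, a table-level grant of this kind records the grantor in mysql.tables_priv (database, table, and user names are hypothetical):
--
GRANT SELECT ON db1.t1 TO 'someuser'@'%';
SELECT Grantor FROM mysql.tables_priv WHERE Db = 'db1' AND Table_name = 't1';
# On the slave, before this patch the Grantor column showed the SQL thread's
# default user ''@''; with the patch it shows the user stored in the query log event.
--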
mysql-test/suite/rpl/r/rpl_do_grant.result:
Add test for this bug.
mysql-test/suite/rpl/t/rpl_do_grant.test:
Add test for this bug.
sql/log_event.cc:
Refactoring of THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.cc:
Refactoring of THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.h:
Refactoring of THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_parse.cc:
Call binlog_invoker() for GRANT and REVOKE statements.
Rows events were wrongly applied to a temporary table with the same name.
But rows events are generated only for base tables, since a temporary
table's data is never binlogged in row mode. Normally, a base table of the
same name cannot be updated if a temporary table has the same name,
but there are two cases which can generate rows events on
the base table of the same name.
Case1: 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate rows events if it is unsafe.
Case2: Drop a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
main.mysqltest was skipped on Windows because a perl snippet intentionally does exit(1).
Use exit(2), as exit(1) on Windows is indistinguishable from failing to
execute perl.
data dictionary confusion
On file systems with case-insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on Mac OS X, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e a lower case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
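An illustrative sequence of the kind affected (names hypothetical), on a case-insensitive file system with lower_case_table_names=2:
--
CREATE DATABASE db1;
CREATE TABLE db1.MyTable (a INT);   # mixed-case name, case preserved on disk
DROP DATABASE db1;                  # before the fix, cache removal keys were built
                                    # from the case-preserved name 'MyTable'
CREATE DATABASE db1;
CREATE TABLE db1.MyTable (a INT);   # before the fix, the cache lookup also used the
                                    # case-preserved name, so a stale entry could be
                                    # missed, leading to the inconsistency
--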
sql/sql_db.cc:
Normalize table name (i.e lower case it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.
ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().
ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.
btr_cur_update_alloc_zip(): Make the function public.
mysql_row_templ_t: Add clust_rec_field_no.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug #55626)
[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.
REC_INFO_DELETED_FLAG: Move the definition from rem0rec.ic to rem0rec.h.
ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().
ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.
mysql_row_templ_t: Add clust_rec_field_no.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug #55626)
buf_LRU_free_block(): Refactored from
buf_LRU_search_and_free_block(). This is needed for the
innodb_change_buffering_debug diagnostics.
[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
The create_sort_index() function overwrites the original JOIN_TAB::type field.
At re-execution of the subquery the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order() and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on re-execution
for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is outer
(see the setup_table_map() function).
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make the subquery uncacheable in case of EXPLAIN. This allows keeping the
original JOIN_TAB::type (see JOIN::save_join_tab) and restoring it
on re-execution.
sql/sql_select.cc:
- restore the JOIN_TAB structures for the subselect on re-execution for EXPLAIN
- update the TABLE::maybe_null field as it could have
the value maybe_null==TRUE based on the assumption
that this join is outer (see the setup_table_map() function).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
This is a merge from 5.1/builtin to 5.1/plugin of:
--------------
revision-id: vasil.dimov@oracle.com-20101018104811-nwqhg9vav17kl5s1
committer: Vasil Dimov <vasil.dimov@oracle.com>
timestamp: Mon 2010-10-18 13:48:11 +0300
message:
Fix Bug#57252 disabling innobase_stats_on_metadata disables ANALYZE
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
Approved by: Jimmy (rb://487)
--------------
Subquery executes twice, at top level JOIN::optimize and ::execute stages.
At the first execution the create_sort_index() function is called and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, and at the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinit the HANDLER::ft_handler field before re-execution of the subquery.
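A sketch of the affected query shape (illustrative only, not the recorded test case): a fulltext MATCH inside a subquery that needs a filesort and is evaluated both during JOIN::optimize and during JOIN::execute:
--
CREATE TABLE t1 (a VARCHAR(100), FULLTEXT KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES ('aaaa'), ('bbbb');
SELECT (SELECT a FROM t1 WHERE MATCH (a) AGAINST ('aaaa') ORDER BY a LIMIT 1) FROM t1;
# Without the fix, the second execution of such a subquery could fail because
# ft_handler had already been cleaned up by the FT_SELECT destructor.
DROP TABLE t1;
--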
mysql-test/r/fulltext.result:
test case
mysql-test/t/fulltext.test:
test case
sql/item_func.cc:
reinit ft_handler before re-execution of subquery
sql/item_func.h:
Fixed method name
sql/sql_select.cc:
reinit ft_handler before re-execution of subquery
replication aborts
When receiving a 'SLAVE STOP' command, the slave SQL thread will roll back the
transaction and stop immediately if only transactional tables were updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements are in it. But these
statements can never be rolled back. Because the temporary-table-to-user-session
mapping remains until 'RESET SLAVE', the SQL thread will abort
with an error that the table already exists or doesn't exist when it restarts
and executes the whole transaction again.
After this patch, the SQL thread always waits until the transaction ends and then stops,
if 'CREATE|DROP TEMPORARY TABLE' statements are in it.
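An illustrative transaction shape (table names hypothetical) that the SQL thread must now finish before stopping:
--
BEGIN;
INSERT INTO innodb_t1 VALUES (1);      # transactional update
CREATE TEMPORARY TABLE tmp1 (a INT);   # cannot be rolled back
COMMIT;
# If the SQL thread stopped and rolled back between the CREATE and the COMMIT,
# re-executing the transaction after restart would fail because tmp1 already exists.
--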
mysql-test/extra/rpl_tests/rpl_stop_slave.test:
Auxiliary file which is used to test this bug.
mysql-test/suite/rpl/t/rpl_stop_slave.test:
Test case for this bug.
sql/slave.cc:
Checking if OPTION_KEEP_LOG is set. If it is set, SQL thread should wait
until the transaction ends.
sql/sql_parse.cc:
Add a debug point for testing this bug.