The test case for this bug relies on getting an ER_LOCK_WAIT_TIMEOUT
error. However, with the introduction of MDL, the test would hang
forever since the metadata locks would not time out.
MDL timeouts are now introduced in the scope of Bug#45225. This
patch changes the test case for Bug#34604 to set the new server
variable "lock_wait_timeout" to one second, which makes the test
generate the necessary timeout again.
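A minimal sketch of the change, in mysqltest syntax (the blocked statement shown here is only illustrative; the actual statements are those in the Bug#34604 test case):

  # Make metadata lock waits give up after one second instead of one year.
  SET SESSION lock_wait_timeout = 1;
  # The statement that waits on a metadata lock now fails with a timeout.
  --error ER_LOCK_WAIT_TIMEOUT
  DROP TABLE t1;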
This patch introduces timeouts for metadata locks.
The timeout is specified in seconds using the new dynamic system
variable "lock_wait_timeout", which has both GLOBAL and SESSION
scopes. Allowed values range from 1 to 31536000 seconds (= 1 year).
The default value is 1 year.
The new server parameter "lock-wait-timeout" can be used to set
the default value upon server startup.
"lock_wait_timeout" applies to all statements that use metadata locks.
These include DML and DDL operations on tables, views, stored procedures
and stored functions. They also include LOCK TABLES, FLUSH TABLES WITH
READ LOCK and HANDLER statements.
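For illustration, the variable can be set at either scope, or given a startup default (the numeric values below are arbitrary):

  -- per-session value, affects only the current connection
  SET SESSION lock_wait_timeout = 5;
  -- global default, picked up by new connections
  SET GLOBAL lock_wait_timeout = 3600;
  -- inspect the current values
  SELECT @@session.lock_wait_timeout, @@global.lock_wait_timeout;
  -- startup default (command line or my.cnf):
  --   mysqld --lock-wait-timeout=86400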
The patch also changes thr_lock.c code (table data locks used by MyISAM
and other simplistic engines) to use the same system variable.
InnoDB row locks are unaffected.
One exception to the handling of the "lock_wait_timeout" variable
is delayed inserts. All delayed inserts are executed with a timeout
of 1 year regardless of the setting for the global variable. As the
connection issuing the delayed insert gets no notification of
delayed insert timeouts, we want to avoid unnecessary timeouts.
It's important to note that the timeout value is used for each lock
acquired and that one statement can take more than one lock.
A statement can therefore block for longer than the lock_wait_timeout
value before reporting a timeout error. When a lock wait timeout occurs,
ER_LOCK_WAIT_TIMEOUT is reported.
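A hedged example of triggering the error (table name and statements are arbitrary):

  -- connection 1: holds the table and its metadata lock
  LOCK TABLES t1 WRITE;

  -- connection 2: waits for the metadata lock and gives up after one second
  SET SESSION lock_wait_timeout = 1;
  DROP TABLE t1;
  -- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction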
Test case added to lock_multi.test.
added:
include/ctype_numconv.inc
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/t/ctype_binary.test
Adding tests
modified:
mysql-test/r/bigint.result
mysql-test/r/case.result
mysql-test/r/create.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
mysql-test/r/func_gconcat.result
mysql-test/r/func_str.result
mysql-test/r/metadata.result
mysql-test/r/ps_1general.result
mysql-test/r/ps_2myisam.result
mysql-test/r/ps_3innodb.result
mysql-test/r/ps_4heap.result
mysql-test/r/ps_5merge.result
mysql-test/r/show_check.result
mysql-test/r/type_datetime.result
mysql-test/r/type_ranges.result
mysql-test/r/union.result
mysql-test/suite/ndb/r/ps_7ndb.result
mysql-test/t/ctype_cp1251.test
mysql-test/t/ctype_latin1.test
mysql-test/t/ctype_ucs.test
mysql-test/t/func_str.test
Fixing tests
@ sql/field.cc
- Return str result using my_charset_numeric.
- Using real multi-byte aware str_to_XXX functions
to handle tricky charset values properly (e.g. UCS2)
@ sql/field.h
- Changing derivation of non-string field types to DERIVATION_NUMERIC.
- Changing binary() for numeric/datetime fields to always
return TRUE even if charset is not my_charset_bin. We need
this to keep ha_base_keytype() return HA_KEYTYPE_BINARY.
- Adding BINARY_FLAG into some fields, because it's not
being set automatically anymore with the
"my_charset_bin to my_charset_numeric" change.
- Changing derivation for numeric/datetime datatypes to a weaker
value, to make "SELECT concat('string', field)" use character
set of the string literal for the result of the function.
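A hedged illustration of the intended effect (table and column are made up; the point is only which character set the result takes):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (10);
  -- with the weaker numeric derivation, the result character set should
  -- follow the string literal instead of being binary
  SELECT CHARSET(CONCAT('string', a)) FROM t1;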
@ sql/item.cc
- Implementing generic val_str_ascii().
- Using max_char_length() instead of direct read of max_length
to make "tricky" charsets like UCS2 work.
NOTE: in the future we'll possibly remove all direct reads of max_length
- Fixing Item_num::safe_charset_converter().
Previously it aligned the binary string to a
character string (for example by adding a leading 0x00
when doing binary->UCS2 conversion). Now it just
converts from my_charset_numeric to "tocs".
- Using val_str_ascii() in Item::get_time() to make UCS2 arguments work.
- Other misc changes
@ sql/item.h
- Changing MY_COLL_CMP_CONV and MY_COLL_ALLOW_CONV to
bit operations instead of hard-coded bit masks.
- Adding new method DTCollation.set_numeric().
- Adding new methods to Item.
- Adding helper functions to make code look nicer:
agg_item_charsets_for_string_result()
agg_item_charsets_for_comparison()
- Changing charset for Item_num-derived items
from my_charset_bin to my_charset_numeric
(which is an alias for latin1).
@ sql/item_cmpfunc.cc
- Using new helper functions
- Other misc changes
@ sql/item_cmpfunc.h
- Fixing strcmp() to set max_length=2.
Previously it was set to 1, which was wrong
because the result '-1' did not fit.
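For reference, STRCMP() returns -1, 0 or 1, so the result needs two characters of display width:

  SELECT STRCMP('a', 'b'), STRCMP('a', 'a'), STRCMP('b', 'a');
  -- returns -1, 0, 1; a max_length of 1 cannot represent '-1'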
@ sql/item_func.cc
- Using new helper functions
- Other minor changes
@ sql/item_func.h
- Removing unused functions
- Adding helper functions
agg_arg_charsets_for_string_result()
agg_arg_charsets_for_comparison()
- Adding set_numeric() into constructors of numeric items.
- Using fix_length_and_charset() and fix_char_length()
instead of direct write to max_length.
@ sql/item_geofunc.cc
- Changing class for Item_func_geometry_type and
Item_func_as_wkt from Item_str_func to
Item_str_ascii_func, to make them return UCS2 result
properly (when character_set_connection=ucs2).
@ sql/item_geofunc.h
- Changing class for Item_func_geometry_type and
Item_func_as_wkt from Item_str_func to
Item_str_ascii_func, to make them return UCS2 result
properly (when @@character_set_connection=ucs2).
@ sql/item_strfunc.cc
- Implementing Item_str_func::val_str().
- Renaming val_str to val_str_ascii for some items,
to make them work with UCS2 properly.
- Using new helper functions
- All single-argument functions that expect a string
result now call this method:
agg_arg_charsets_for_string_result(collation, args, 1);
This enables character set conversion to @@character_set_connection
in case of pure numeric input.
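A hedged illustration (HEX() is used only as an example of a single-argument string function; the exact set of affected functions is whatever calls the helper, and the example assumes the server is built with UCS2 support):

  SET character_set_connection = ucs2;
  -- with a pure numeric argument, the string result should now be
  -- converted to @@character_set_connection
  SELECT CHARSET(HEX(255));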
@ sql/item_strfunc.h
- Introducing Item_str_ascii_func - for functions
which return pure ASCII data, for performance purposes,
as well as for the cases when the old implementation
of val_str() was heavily 8-bit oriented and implementing
a UCS2-aware version is tricky.
@ sql/item_sum.cc
- Using new helper functions.
@ sql/item_timefunc.cc
- Using my_charset_numeric instead of my_charset_bin.
- Using fix_char_length(), fix_length_and_charset()
and fix_length_and_charset_datetime()
instead of direct write to max_length.
- Using tricky-charset aware function str_to_time_with_warn()
@ sql/item_timefunc.h
- Using new helper functions for charset and length initialization.
- Changing base class for Item_func_get_format() to make
it return UCS2 properly (when character_set_connection=ucs2).
@ sql/item_xmlfunc.cc
- Using new helper function
@ sql/my_decimal.cc
- Adding a new DECIMAL to CHAR converter
with real multibyte support (e.g. UCS2)
@ sql/mysql_priv.h
- Introducing a new derivation level for numeric/datetime data types.
- Adding macros for my_charset_numeric and MY_REPERTOIRE_NUMERIC.
- Adding prototypes for str_set_decimal()
- Adding prototypes for character-set aware str_to_xxx() functions.
@ sql/protocol.cc
- Changing charsetnr to "binary" client-side metadata for
numeric/datetime data types.
@ sql/time.cc
- Adding to_ascii() helper function, to convert a string
in any character set to an ASCII representation. In the
future it can be extended to understand digits written
in various non-Latin scripts.
- Adding real multi-byte character set aware versions of str_to_XXXX,
to make these types of queries work correctly:
INSERT INTO t1 SET datetime_column=ucs2_expression;
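A hedged concrete instance of the statement shape above (table and literal are made up):

  CREATE TABLE t1 (datetime_column DATETIME);
  -- the value arrives in a multi-byte ("tricky") character set such as UCS2
  INSERT INTO t1 SET datetime_column = CONVERT('2010-02-11 10:20:30' USING ucs2);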
@ strings/ctype-ucs2.c
- endptr was not calculated correctly. INSERTing UCS2
values into numeric columns returned warnings about
truncated incorrect data.
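A hedged example of the kind of statement that previously produced spurious warnings (schema is made up):

  CREATE TABLE t1 (a INT);
  -- before the fix, the miscalculated endptr caused a spurious
  -- "truncated incorrect value" warning on this insert
  INSERT INTO t1 VALUES (CONVERT('123' USING ucs2));
  SHOW WARNINGS;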
We found that there are some tests that are not cleaning
up properly:
1. rpl_tmp_table_and_DDL
2. rpl_do_grant
3. rpl_sync
For #1 and #2 we found that the slave would not, in some
cases, replicate all the instructions the master processed
in the cleanup section. We fix these by deploying some
synchronization commands in the test cases, sketched below,
so that the slave processes all cleanup instructions.
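A hedged sketch of the synchronization used, in mysqltest syntax (the exact statements and placement differ per test):

  --connection master
  # cleanup done on the master, e.g. dropping objects created by the test
  DROP TABLE IF EXISTS t1;
  # wait until the slave has applied everything before the test ends
  --sync_slave_with_master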
As for #3, this is tracked as part of another bug
(BUG#50442).
Problem was that in mysql-trunk the ER() macro is now dependent on current_thd
and the InnoDB monitor thread has no binding to that thd object. This caused
the crash because of bad dereferencing.
Solution was to add a new macro which takes the thd as an argument (which the InnoDB
thread uses for the call).
(Updated according to reviewers' comments, i.e. added ER_THD_OR_DEFAULT and
moved the test to the parts suite.)
As part of the BUG#39934 fix, the public:
- THD::current_stmt_binlog_row_based
variable had been removed and replaced by a private variable:
- THD::current_stmt_binlog_format.
THD was refactored and some modifiers and accessors were
implemented for the new variable.
However, due to a bad merge, the
THD::current_stmt_binlog_row_based variable is back as a public
member of THD. This in itself is already potentially
harmful. What's even worse is that while merging some more
patches and resolving conflicts, the variable started being used
again, which is obviously wrong.
To fix this we:
1. remove the extraneous variable from sql_class.h
2. revert a bad merge for BUG#49132
3. merge BUG#49132 properly again (actually, making use of the
cset used to merge the original patch to mysql-pe).
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/suite/rpl/r/rpl_slow_query_log.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Conflict adding files to server-tools. Created directory.
Conflict because server-tools is not versioned, but has versioned children. Versioned directory.
Conflict adding files to server-tools/instance-manager. Created directory.
Conflict because server-tools/instance-manager is not versioned, but has versioned children. Versioned directory.
Contents conflict in server-tools/instance-manager/options.cc
Text conflict in sql/mysqld.cc
into slow log
While processing a statement, down the mysql_parse execution
stack, thd->enable_slow_log can be assigned the value of
opt_log_slow_admin_statements, depending on whether one is executing
administrative statements, such as ALTER TABLE, OPTIMIZE,
ANALYZE, etc., or not. This can have an impact on slow logging for
statements that are executed after an administrative statement
execution is completed.
When executing statements directly from the user this is fine
because thd->enable_slow_log is reset right at the beginning
of the dispatch_command function, i.e., every time a new statement
is set to execute.
On the other hand, for slave SQL thread (sql_thd) the story is a
bit different. When in SBR the sql_thd applies statements by
calling mysql_parse. Right after, it calls the log_slow_statement
function to log them if they take too long. Calling mysql_parse
directly is fine, but it also means that the dispatch_command function
is bypassed. As a consequence, thd->enable_slow_log does not get
a chance to be reset before the next statement to be executed by
the sql_thd. If the statement just executed by the sql_thd was an
administrative statement and logging of admin statements was
disabled, this means that sql_thd->enable_slow_log will be set to
0 (disabled) from that moment on. End result: sql_thd stops
logging slow statements.
We fix this by resetting the value of sql_thd->enable_slow_log to
the value of opt_log_slow_slave_statements right after
log_slow_statement is called by the sql_thd.
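A hedged sketch of the failure scenario (the statements are only illustrative; the slave runs with --log-slow-slave-statements and without --log-slow-admin-statements):

  -- executed on the master and applied by the slave SQL thread
  ALTER TABLE t1 ADD COLUMN b INT;        -- admin statement: leaves enable_slow_log = 0 on the slave
  UPDATE t1 SET b = 1 WHERE SLEEP(2) = 0; -- slow DML: should reach the slave slow log,
                                          -- but did not before the fix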
To 5.x Release
Notes
=====
This is a backport of BUG#23300 into 5.1 GA.
Original cset revid (in betony):
luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h
Description
===========
When using replication, the slave will not write to its slow query
log any queries replicated from the master, even if the
option "--log-slow-slave-statements" is set and these take more
than "long_query_time" to execute.
In order to log slow queries from the replication thread one needs to
set --log-slow-slave-statements, so that the SQL thread is
initialized with the correct switch.
correctly configures the slave thread option to log slow queries,
there is an issue with the condition that is used to check
whether to log the slow query or not. When replaying binlog
events the statement contains the SET TIMESTAMP clause which will
force the slow logging condition check to fail. Consequently, the
slow query logging will not take place.
This patch addresses this issue by removing the second condition
from log_slow_statement, as it prevented slow queries from being
logged and seems to be deprecated.
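For context, statements replayed from the binary log are preceded by an explicit SET TIMESTAMP statement, roughly as mysqlbinlog renders them (the values are illustrative); it is this explicitly set timestamp that made the old slow-logging condition fail:

  SET TIMESTAMP=1264520400;
  UPDATE t1 SET b = 1 WHERE SLEEP(2) = 0;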
Cherry-pick the fix for Bug#37148 from next-mr, to preserve the
file ids of the added files, and ensure that all the necessary
changes have been pulled.
Since initially Bug#37148 was null-merged into 6.0,
the changeset that is now being cherry-picked was likewise
null merged into next-4284.
Now that Bug#37148 has been reapplied to 6.0, try to make
it work with next-4284. This is also necessary to be able
to pull other changes from 5.1-rep into next-4284.
To resolve the merge issues use this changeset applied
to 6.0:
revid:jperkin@sun.com-20091216103628-ylhqf7s6yegui2t9
revno: 3776.1.1
committer: He Zhenxing <zhenxing.he@sun.com>
branch nick: 6.0-codebase-bugfixing
timestamp: Thu 2009-12-17 17:02:50 +0800
message:
Fix merge problem with Bug#37148
fails in PB sporadically)
The IO thread can concurrently access the relay log IO_CACHE
while another thread is performing a FLUSH LOGS procedure.
FLUSH LOGS closes and reopens the relay log and, while doing so,
(re)initializes its IO_CACHE. During this procedure the IO_CACHE
mutex is also reinitialized, which can cause problems if some
other thread (namely the IO thread) is concurrently accessing it
at the time.
This patch fixes the problem by extending the interface of the
flush_master_info function to also include a second parameter,
"need_relay_log_lock", stating whether the thread should grab the
relay log lock or not before actually flushing the relay log.
Also, the IO thread now calls flush_master_info with this flag set
when it flushes the master info within the read_event loop.
Finally, we also increase the loop time in the rpl_heartbeat_basic test
case, so that the number of calls to FLUSH LOGS doubles, stressing
this part of the code a little more.