VARIABLES
Analysis:
-------------
After the query is executed, the new values of the user-defined
variables are set in the function "select_dumpvar::send_data".
"select_dumpvar::send_data" first calls
"Item_func_set_user_var::save_item_result()". This function
checks the nullness of the Item_field passed to it as a
parameter and saves the result. The nullness of the item is
stored in args[0]'s null_value flag. Then
"select_dumpvar::send_data" calls
"Item_func_set_user_var::update()", which notices the saved
null result and calls "Item_func_set_user_var::update_hash()".
But at this point null_value is not set, and args[0] is
different from the item that was given to
"Item_func_set_user_var::save_item_result()". This makes
"Item_func_set_user_var::update_hash()" believe that it is
getting a non-null value. "user_var_entry::length" is set to 0,
and hence "user_var_entry::value" is made to point to the
extra_area allocated in "user_var_entry". For result type
DECIMAL, "Item_func_set_user_var::update_hash()" then tries to
write beyond extra_area, and Valgrind reports this invalid write.
Before this bug was introduced, the problem was avoided by
creating the "Item_func_set_user_var" object with the same
Item_field both as args[0] and as the parameter to
"Item_func_set_user_var::save_item_result()". Now they refer
to different items, so the null_value flag set on the parameter
Item_field in "Item_func_set_user_var::save_item_result()" is
not reflected in the "Item_func_set_user_var" object.
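To make the flag mismatch concrete, here is a minimal sketch in
plain C++ (simplified stand-in types, not the actual server classes)
of how saving the result through one item while update_hash()
consults a different args[0] loses the NULL marker:

#include <cassert>

struct Item { bool null_value; };

struct SetUserVarSketch {
  Item *args0;          // the item the update_hash() path consults
  bool saved_is_null;

  // save_item_result(): nullness ends up on the item passed in.
  void save_item_result(Item *result) {
    saved_is_null = result->null_value;
  }
  // update_hash() path: decides NULL-ness from args0 instead.
  bool update_hash_sees_null() const { return args0->null_value; }
};

int main() {
  Item source = { false };   // the object's args[0]
  Item result = { true };    // the SELECT actually produced NULL

  SetUserVarSketch v = { &source, false };
  v.save_item_result(&result);   // NULL flag recorded on 'result'
  // args[0] was never marked NULL, so update_hash() treats the
  // value as non-NULL; in the server this leads to the
  // out-of-bounds write for DECIMAL results.
  assert(!v.update_hash_sees_null());
  return 0;
}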
Fix:
------------
This issue is reported on version 5.5.24. The issue does not exist
in 5.5.23, 5.1, 5.6, or trunk.
It was introduced by
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
bug #12408412), which was pushed into 5.5 and later releases. That patch
was later reversed in 5.6 and trunk by
revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
bug #14664077). The reversal is now backported to 5.5 as well, which
fixes this issue.
PROBLEM:
MySQL provides a feature whereby a session that stays idle for
the period specified by the wait_timeout variable (whose value
is in seconds) is closed. This feature was not available when
the thread pool plugin was used.
FIX:
This patch implements the interface functions that are required
to support the wait_timeout functionality in the thread pool
plugin.
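For illustration, a minimal sketch (hypothetical names; the actual
plugin interface functions are not shown in this log) of the kind of
idle check such a plugin has to perform:

#include <time.h>

struct PooledConnection {
  time_t last_active;      // updated on every command from the client
  unsigned wait_timeout;   // the session's wait_timeout, in seconds
};

// Returns true when the pool should close this idle connection.
bool idle_timeout_expired(const PooledConnection &c, time_t now)
{
  return now - c.last_active >= (time_t) c.wait_timeout;
}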
KEY UPDATES WITH A LIMIT OF 1
Problem: The unsafety warning for statements such as
UPDATE ... WHERE pk=1 LIMIT 1 is thrown when binlog-format
= STATEMENT, despite the fact that such statements are
actually safe. This leads to filling up the disk space
with false warnings.
Solution: This is not a complete fix for the problem, but it
prevents the disks from getting filled up. It should
therefore be regarded as a workaround. In the future it
should be superseded by the server's general suppress/filtering
framework. It should also be noted that another worklog is
supposed to remove this case's artificial unsafety.
We use a warning suppression mechanism to detect a warning flood,
enable the suppression, and disable it again when the average
number of warnings per second has dropped to acceptable limits.
Activation: The suppression for LIMIT unsafe statements is
activated when the last 50 warnings were logged in less
than 50 seconds.
Suppression: Once activated, this suppression prevents the
individual warnings from being logged in the error log, but
prints a summary for every 50 warnings with the note:
"The last warning was repeated N times in last S seconds"
Note that this suppression works only on the error log;
the warnings seen by clients remain as they are
(i.e. one warning per unsafe statement).
Deactivation: The suppression is deactivated once the average
number of warnings per second has gone down to the acceptable
limit.
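A minimal sketch of this rate-based throttling, with hypothetical
names (the real implementation lives in the server's logging code):
suppression switches on when 50 warnings arrive within 50 seconds,
individual warnings are then swallowed, and each further batch of 50
emits one summary line.

#include <stdio.h>
#include <time.h>

class WarningThrottle {
  static const unsigned WINDOW = 50;  // warnings per decision window
  time_t window_start;
  unsigned count;
  bool suppressing;

public:
  WarningThrottle() : window_start(0), count(0), suppressing(false) {}

  // Returns true if this warning should be written to the error log.
  bool should_log(time_t now)
  {
    if (count == 0)
      window_start = now;
    if (++count < WINDOW)
      return !suppressing;           // suppressed warnings are swallowed
    // A window of 50 warnings is complete: summarize and re-decide.
    time_t elapsed = now - window_start;
    if (suppressing)
      printf("The last warning was repeated %u times in last %ld seconds\n",
             count, (long) elapsed);
    suppressing = (elapsed < 50);    // 50 warnings in under 50 seconds
    count = 0;
    return false;
  }
};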
Analysis:
-------------
If the server is started with limits of MAX_CONNECTIONS and
MAX_USER_CONNECTIONS, then at most MAX_USER_CONNECTIONS connections
from any particular user, and at most MAX_CONNECTIONS client
connections in total, can be connected to the server.
The server maintains a counter of total connections and a counter of
connections from each particular user.
In this scenario, MAX_CONNECTIONS connections are created to the
server. Among these are connections from a particular user (say
USER1), fewer than MAX_USER_CONNECTIONS of them. Then one more
connection request arrives from USER1. Since USER1 has not yet
reached MAX_USER_CONNECTIONS, the server increments the per-user
connection counter. But as the server already has MAX_CONNECTIONS
connections, the next check, against the total connection count,
fails. In this case control is returned WITHOUT decrementing the
per-user counter. So the per-user counter keeps incrementing on
every attempt until the current connections are closed, and
eventually reaches MAX_USER_CONNECTIONS.
From then on, every connection attempt from USER1 returns the
MAX_USER_CONNECTIONS limit error, even when the total number of
connections to the server is less than MAX_CONNECTIONS.
Fix:
-------------
This issue occurred because the counters were not handled properly
in the server. The code is changed to maintain the per-user
connection counters correctly.
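A minimal sketch of the accounting bug and its cure, using
hypothetical names (the server's real counters are more involved):
the per-user counter is bumped before the global check, so the
failure path must roll that increment back.

#include <map>
#include <string>

static const int MAX_CONNECTIONS = 100;
static const int MAX_USER_CONNECTIONS = 10;

static int total_connections = 0;
static std::map<std::string, int> per_user;

bool try_connect(const std::string &user)
{
  int &user_count = per_user[user];
  if (user_count >= MAX_USER_CONNECTIONS)
    return false;
  ++user_count;                      // per-user counter bumped first
  if (total_connections >= MAX_CONNECTIONS) {
    --user_count;                    // the fix: undo the increment here;
    return false;                    // the bug was returning without it
  }
  ++total_connections;
  return true;
}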
BUG#11761686 insert_id event is not filtered.
Two issues are covered.
An INSERT into an autoincrement column that is not the first column
of a composite primary key is unsafe by the autoincrement logging
design. The case is specific to the MyISAM engine, because InnoDB
does not allow such a table definition.
However, no warning was issued and no row-format logging was done in
MIXED mode; that is fixed.
Intvar, Rand, and User_var log events were not filtered along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.
Fixed by deferring their execution until the parent query executes.
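A minimal sketch, with simplified stand-in types, of the deferral
idea: events that only set session state (Intvar, Rand, User_var)
are queued and replayed just before their parent query runs, so
filtering out the query also filters out its helper events.

#include <functional>
#include <vector>

struct SessionState { long long insert_id; };

class DeferredEvents {
  std::vector<std::function<void(SessionState &)> > pending;

public:
  // Called when an Intvar/Rand/User_var event is read from the relay
  // log, e.g. defer([](SessionState &s) { s.insert_id = 5; });
  void defer(std::function<void(SessionState &)> ev)
  {
    pending.push_back(ev);
  }

  // Called only when the parent query is actually executed. If the
  // query is filtered out, apply() never runs, and the deferred
  // events cannot corrupt the context of the following query.
  void apply(SessionState &s)
  {
    for (size_t i = 0; i < pending.size(); ++i)
      pending[i](s);
    pending.clear();
  }
};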
******
Bug#11754117
Post review fixes.
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in
the SELECT part of SELECT INTO, it will be properly marked as
non-const for this query.
Test case added.
Used proper fast iterator.
KEY HANDLING ON SUBSEQUENT CREATE TABLE IF NOT EXISTS
PROBLEM:
--------
Consider an SP routine which does CREATE TABLE
with a REFERENCES clause. The first call to this routine
invokes the parser, and the parsed items are cached, so as
to avoid parsing for the second execution of the routine.
It is observed that Valgrind reports a warning
upon read of the thd->lex->alter_info->key_list->Foreign_key
object, which seems to point to an invalid memory address
during the second execution of the routine. Accessing this
object could theoretically cause a crash.
ANALYSIS:
---------
The problem stems from the fact that
elements of the ref_columns list in the thd->lex->alter_info->
key_list->Foreign_key object are changed to point to
objects allocated on the runtime memory root.
During the first execution of the routine we create
a copy of the thd->lex->alter_info object.
As part of this process we create clones of the objects in
Alter_info::key_list, and of the Foreign_key object in particular.
When the Foreign_key object is cloned, we
perform shallow copies of both the Foreign_key::ref_columns
and Foreign_key::columns lists. So the new instance of the
Foreign_key object starts to SHARE the contents of the ref_columns
and columns lists with the original instance.
After that, as part of the cloning process, we call
list_copy_and_replace_each_value() for the elements of the
ref_columns list. As a result, the ref_columns lists in both the
original and the cloned Foreign_key objects start to contain
pointers to Key_part_spec objects allocated on the runtime
memory root, because of the shallow copy.
So when we start copying the thd->lex->alter_info object
during the second execution of the stored routine, we indeed
encounter a pointer to a Key_part_spec object allocated
on the runtime mem-root which was cleared at the end
of the previous execution. This is done in sp_head::execute(),
by a call to free_root(&execute_mem_root, MYF(0));
As a result we get Valgrind warnings about accessing
unreferenced memory.
FIX:
----
The safest solution to this problem is to
fix the Foreign_key(Foreign_key, MEM_ROOT) constructor to do
a deep copy of the columns lists, similar to the Key(Key, MEM_ROOT)
constructor.
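A hedged sketch of the deep-copy idea, using simplified standard
library types rather than the server's MEM_ROOT-based lists: the
copy constructor clones every element instead of sharing the
source's list nodes, so nothing in the copy can point at memory
that is freed after one execution.

#include <list>
#include <string>

struct KeyPartSpec { std::string field_name; };

struct ForeignKeySpec {
  std::list<KeyPartSpec> columns;
  std::list<KeyPartSpec> ref_columns;

  ForeignKeySpec() {}

  // Deep copy, element by element: the analogue of cloning each
  // Key_part_spec onto the statement memory root. A shallow copy
  // (sharing the list nodes) is what produced the dangling
  // pointers described above.
  ForeignKeySpec(const ForeignKeySpec &src)
    : columns(src.columns),          // std::list copies its nodes
      ref_columns(src.ref_columns)
  {}
};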
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that bypassed the deallocation of a dyn-array.
The same bug (BUG#58915) was actually fixed in trunk by relocating
the dyn-array destruction into THD::cleanup_after_query(), which
cannot be bypassed.
The current patch backports
revid:magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: the dyn-array is now based on a static
buffer, and the array initialization is invoked precisely when
necessary rather than for each CHANGE MASTER as before.
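A minimal sketch of the pattern behind the trunk fix, with
hypothetical names: releasing the allocation in a cleanup hook that
every statement passes through means an early return from the
command handler can no longer leak it.

#include <vector>

struct StatementContext {
  // Stands in for the dyn-array of server ids used by CHANGE MASTER.
  std::vector<int> *ignore_server_ids;

  StatementContext() : ignore_server_ids(0) {}

  // Analogue of THD::cleanup_after_query(): it always runs at the
  // end of a statement, so the allocation is released even when the
  // command handler bailed out through an early return.
  void cleanup_after_query()
  {
    delete ignore_server_ids;
    ignore_server_ids = 0;
  }
};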
Blind attempt to fix BUG 12881278 - MAIN.MYISAM TEST FAILS ON LINUX
The printed text is truncated at char 63:
"MySQL thread id 1236, OS thread handle 0x7ff187b96700, query id"
I still do not understand how this truncation could have caused the
main.myisam failure, but anyway - the buffer needs to be increased.
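For illustration, a self-contained sketch of that truncation (the
real buffer lives in the server's monitoring output code, not shown
here): snprintf() into a 64-byte buffer cuts the line off after 63
characters, exactly as in the quoted output.

#include <stdio.h>

int main()
{
  char buf[64];  // too small: keeps 63 characters plus the NUL
  snprintf(buf, sizeof(buf),
           "MySQL thread id %lu, OS thread handle 0x%llx, query id %lu ...",
           1236UL, 0x7ff187b96700ULL, 42UL);
  puts(buf);     // prints exactly the truncated text quoted above
  return 0;
}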
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), and
this leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
Issue:
While running the embedded server, if the client issues the TEE
command (\T foo/bar) and the "foo/bar" directory doesn't exist, it
is supposed to give an error, but it was aborting instead. This was
happening because the wrong error handler was being called.
Solution:
Modified the calls to invoke the correct error handler. In the
embedded server case there are two error handlers (client and
server), which are supposed to be called based on which context the
code is in. If it is in client context, the client error handler
should be called, otherwise the server one.
Test case:
Test case automation is not possible, as the current (following)
code doesn't allow '\T' to be executed from the command line (or a
command read from a file):
[client/mysql.cc]
...
static int
com_tee(String *buffer __attribute__((unused)),
        char *line __attribute__((unused)))
{
  char file_name[FN_REFLEN], *end, *param;

  if (status.batch)   /* TRUE while executing from the command line */
    return 0;
...
So, no test case is added in GA. Will add a test case in mysql-trunk
after removing the above code, so that this can be properly tested
before GA.
In sql_class.cc, 'row_count', of type 'ha_rows', was used as the
last argument for ER_TRUNCATED_WRONG_VALUE_FOR_FIELD, which is
"Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld".
So 'ha_rows' was used as 'long'.
On SPARC32 Solaris builds, 'long' is 4 bytes and 'ha_rows' is
'longlong', i.e. 8 bytes.
So the printf-like code was reading only the first 4 bytes.
Because the CPU is big-endian, 1LL is 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01
so the first four bytes yield 0. So the warning message had "row 0" instead of
"row 1" in test outfile_loaddata.test:
-Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 1
+Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 0
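A minimal reproduction of this class of mistake (illustrative only;
the original call site is in sql_class.cc): passing an 8-byte value
where the format expects 'long' is undefined behavior, and on a
32-bit big-endian machine the high-order half - all zero bytes for
1LL - is what gets read.

#include <stdio.h>

int main()
{
  long long row_count = 1;  // same width as 'ha_rows'
  /* WRONG: undefined behavior; prints 0 on 32-bit big-endian SPARC,
     because only the first four (zero) bytes of 1LL are consumed. */
  printf("at row %ld\n", row_count);
  /* RIGHT: cast to the exact type the format specifier expects. */
  printf("at row %ld\n", (long) row_count);
  return 0;
}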
All error-messaging functions which internally invoke some
printf-like function are potential candidates for such mistakes.
One apparently easy way to catch such mistakes is to use
ATTRIBUTE_FORMAT (from my_attribute.h).
But this works only when the call site has both:
a) the format as a string literal
b) the types of the arguments.
So:
func(ER(ER_BLAH), 10);
will silently not be checked, because ER(ER_BLAH) is not known at
compile time (it is known at run-time, and depends on the chosen
language).
And
func("%s", a va_list argument);
has the same problem, as the *real* type of arguments is not
known at this site at compile time (it's known in some caller).
Moreover,
func(ER(ER_BLAH));
though possibly correct (if ER(ER_BLAH) has no '%' markers), will not
compile (gcc says "error: format not a string literal and no format
arguments").
Consequences:
1) ATTRIBUTE_FORMAT is here added only to functions which in practice
take "string literal" formats: "my_error_reporter" and "print_admin_msg".
2) it cannot be added to the other functions: my_error(),
push_warning_printf(), Table_check_intact::report_error(),
general_log_print().
To do a one-time check of functions listed in (2), the following
"static code analysis" has been done:
1) replace
my_error(ER_xxx, arguments for substitution in format)
with the equivalent
my_printf_error(ER_xxx,ER(ER_xxx), arguments for substitution in
format),
so that we have ER(ER_xxx) and the arguments *in the same call site*
2) add ATTRIBUTE_FORMAT to push_warning_printf(),
Table_check_intact::report_error(), general_log_print()
3) replace ER(xxx) with the hard-coded English text found in
errmsg.txt (like: ER(ER_UNKNOWN_ERROR) is replaced with
"Unknown error"), so that a call site has the format as string literal
4) this way, ATTRIBUTE_FORMAT can effectively do its job
5) compile, fix errors detected by ATTRIBUTE_FORMAT
6) revert steps 1-2-3.
The present patch produces no compiler errors when submitted again
to the static code analysis above.
It cannot catch all problems though: see Field::set_warning(), in
which a call to push_warning_printf() has a variable error code
(thus, not replaceable by a string literal); I checked set_warning()
calls by hand though.
See also WL 5883 for one proposal to avoid such bugs from appearing
again in the future.
The issues fixed in the patch are:
a) mismatch in types (like 'int' passed to '%ld')
b) more arguments passed than specified in the format.
This patch resolves the mismatches by changing the type/number of
arguments, not by changing the error messages of
sql/share/errmsg.txt. The latter would be wrong, per the following
old rule: errmsg.txt must be as stable as possible; no insertions or
deletions of messages, and no changes of the type or number of
printf-like format specifiers, are allowed, as long as the change
impacts a message already released in a GA version.
If this rule is not followed:
- Connectors, which use error message numbers, will be confused (by insertions/deletions
of messages)
- using errmsg.sys of MySQL 5.1.n with mysqld of MySQL 5.1.(n+1)
could produce wrong messages or crash; such usage can easily happen if
installing 5.1.(n+1) while /etc/my.cnf still has --language=/path/to/5.1.n/xxx;
or if copying mysqld from 5.1.(n+1) into a 5.1.n installation.
When fixing b), I have verified that the superfluous arguments were not used in the format
in the first 5.1 GA (5.1.30 'bteam@astra04-20081114162938-z8mctjp6st27uobm').
Had they been used, then passing them today, even if the message doesn't use them
anymore, would have been necessary, as explained above.
GRADUALLY IF A TRIGGER EXISTS".
This bug manifested itself in two ways:
- Firstly, execution of any data-changing statement which
required prelocking (i.e. involved a stored function or
trigger) as part of a transaction slowed down a bit all
subsequent statements in this transaction. So performance
in transactions which periodically involved such statements
gradually degraded over time.
- Secondly, execution of any data-changing statement which
required prelocking as part of a transaction prevented
a concurrent FLUSH TABLES WITH READ LOCK from proceeding
until the end of the transaction instead of the end of the
particular statement.
The problem was caused by incorrect handling of the metadata
lock used in the FTWRL implementation for statements requiring
prelocked mode.
Each statement which changes data acquires a global IX lock
with STATEMENT duration. This lock is supposed to block
concurrent FTWRL from proceeding until the statement ends.
When entering prelocked mode, the durations of all metadata
locks acquired so far were changed to EXPLICIT, to prevent
substatements from releasing these locks. When prelocked mode
was left, the durations of the metadata locks were changed to
TRANSACTIONAL (with a few exceptions) so that they could be
properly released at the end of the transaction.
Unfortunately, this meant that the global IX lock blocking
FTWRL, with its STATEMENT duration, was moved to TRANSACTIONAL
duration after execution of a statement requiring prelocking.
Since each subsequent statement that required prelocking and
tried to acquire the global IX lock with STATEMENT duration got
a new instance of MDL_ticket, which was later moved to
TRANSACTIONAL duration, this led to unwarranted growth of the
number of tickets with TRANSACTIONAL duration in this
connection's MDL_context. As a result, searching for other
tickets in it became slow, and acquisition of other metadata
locks by this transaction started to hog the CPU.
Moreover, this also meant that after execution of a statement
requiring prelocking, a concurrent FTWRL was blocked
until the end of the transaction instead of the end of the
statement.
This patch solves the problem by not moving locks to EXPLICIT
duration when the thread enters prelocked mode (unless it is a real
LOCK TABLES mode). This step turned out not to be really
necessary, as substatements don't try to release metadata locks.
Consequently, the global IX lock blocking FTWRL keeps its
STATEMENT duration and is properly released at the end of the
statement, and the above issue goes away.
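A toy model in plain C++ (not the server's MDL classes) of why
tickets piled up: each prelocked statement contributes one
STATEMENT-duration ticket, and moving it to TRANSACTIONAL duration
keeps it alive until commit, so the per-connection list grows with
every statement.

#include <stdio.h>
#include <vector>

enum Duration { STATEMENT, TRANSACTIONAL };

int main()
{
  std::vector<Duration> tickets;      // the connection's MDL_context
  const bool buggy = true;            // pre-fix behavior
  for (int stmt = 0; stmt < 1000; ++stmt) {
    tickets.push_back(STATEMENT);     // global IX lock acquired
    if (buggy)                        // leaving prelocked mode moved
      tickets.back() = TRANSACTIONAL; // the ticket to txn duration
    // End of statement: only STATEMENT-duration tickets are released.
    std::vector<Duration> kept;
    for (size_t i = 0; i < tickets.size(); ++i)
      if (tickets[i] == TRANSACTIONAL)
        kept.push_back(tickets[i]);
    tickets.swap(kept);
  }
  // Buggy: 1000 tickets linger until commit; fixed: zero.
  printf("tickets still held at commit time: %u\n",
         (unsigned) tickets.size());
  return 0;
}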
result set when SQLEXCEPTION is active.
The problem was in the hackish THD::no_warnings_for_error attribute.
When it was set, an error was not written to Warning_info -- only
the Diagnostics_area state was changed. That means the
Diagnostics_area might contain an error state which is not present
in Warning_info.
The user-visible problem was that in some cases SHOW WARNINGS
returned an empty result set (i.e. there were no warnings) while
the previous SQL statement had failed. According to the MySQL
protocol, errors must be presented in the warning list.
The main idea of this patch is to remove THD::no_warnings_for_error.
There were a few places where it was used:
- sql_admin.cc, handling of REPAIR TABLE USE_FRM.
- sql_show.cc, when calling fill_schema_table_from_frm().
- sql_show.cc, when calling fill_table().
The fix is to either use internal error handlers, or to use a
temporary Warning_info storing warnings which might then be ignored.
This patch is needed to fix Bug 11763162 (55843).
causes future shutdown hang
InnoDB would hang on shutdown if any XA transactions exist in the
system in the PREPARED state. This has been masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug #12161 Xa recovery and client disconnection.
[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.
xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.
innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.
trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts to roll back connected transactions that are in the PREPARED
state from another connection, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.
trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.
logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.
trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.
trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.
The problem is a race between a session closing its vio
(i.e. after a COM_QUIT) at the same time it is being killed by
another thread. This could trigger an assertion in vio_close(),
as the two threads could end up closing the same vio at the
same time. This could happen due to the implementation of
SIGNAL_WITH_VIO_CLOSE, which closes the vio of the thread
being killed.
The solution is to serialize the close of the Vio under
LOCK_thd_data, which protects THD data.
No regression test is added, as this is essentially a debug
issue and the test case would be quite convoluted: we would
need to synchronize a session that is being killed, which
is a bit difficult since the debug sync point code does not
synchronize killed sessions.
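A minimal sketch of the serialization pattern with stand-in types
(a plain std::mutex playing the role of LOCK_thd_data): both the
owning session and the killer go through the same guarded close, so
whichever arrives second finds a null pointer instead of a
half-closed vio.

#include <mutex>

struct Vio { int fd; };

struct ThreadCtx {
  std::mutex lock_thd_data;   // analogue of LOCK_thd_data
  Vio *vio;

  ThreadCtx() : vio(new Vio()) {}

  // Called both by the session itself (on COM_QUIT) and by the
  // thread executing KILL.
  void close_vio()
  {
    std::lock_guard<std::mutex> guard(lock_thd_data);
    if (vio != 0) {           // the second closer simply returns
      // vio_close(vio) would go here in the server
      delete vio;
      vio = 0;
    }
  }
};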
Problem: Extended characters outside of the ASCII range were not
displayed properly in SHOW PROCESSLIST, because thd_info->query was
always sent as system_character_set (utf8). This was wrong, because
the query buffer is never converted to utf8 - it always has the
client character set.
Fix: send the query buffer using the query's own character set.
@ sql/sql_class.cc
@ sql/sql_class.h
Introducing a new class CSET_STRING, a LEX_STRING bundled with its
character set (see the sketch at the end of this change list).
Adding set_query(&CSET_STRING).
Adding reset_query(), to be used instead of set_query(0, NULL).
@ sql/event_data_objects.cc
Using reset_query()
@ sql/log_event.cc
Using reset_query()
Adding charset argument to set_query_and_id().
@ sql/slave.cc
Using reset_query().
@ sql/sp_head.cc
Changing backing up and restore code to use CSET_STRING.
@ sql/sql_audit.h
Using CSET_STRING.
In the "else" branch it's OK not to use
global_system_variables.character_set_client.
&my_charset_latin1, which is set in constructor, is fine
(verified with Sergey Vojtovich).
@ sql/sql_insert.cc
Using set_query() with proper character set: table_name is utf8.
@ sql/sql_parse.cc
Adding character set argument to set_query_and_id().
(This is the main point where thd->charset() is stored
into thd->query_string.cs, for use in "SHOW PROCESSLIST".)
Using reset_query().
@ sql/sql_prepare.cc
Storing client character set into thd->query_string.cs.
@ sql/sql_show.cc
Using CSET_STRING to fetch and send charset-aware query information
from threads.
@ storage/myisam/ha_myisam.cc
Using set_query() with proper character set: table_name is utf8.
@ mysql-test/r/show_check.result
@ mysql-test/t/show_check.test
Adding tests
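As referenced above, a hedged sketch of the idea behind CSET_STRING
(simplified: std::string stands in for LEX_STRING, and a stub type
for CHARSET_INFO): the query buffer travels together with the
character set it is encoded in, so SHOW PROCESSLIST no longer has
to assume utf8.

#include <string>

struct CharsetInfoStub { const char *name; };  // stand-in for CHARSET_INFO

class CsetString {
  std::string str_;              // plays the role of LEX_STRING
  const CharsetInfoStub *cs_;    // character set the query arrived in

public:
  CsetString() : cs_(0) {}
  CsetString(const std::string &s, const CharsetInfoStub *cs)
    : str_(s), cs_(cs) {}

  const std::string &str() const { return str_; }
  const CharsetInfoStub *charset() const { return cs_; }
};

// SHOW PROCESSLIST can now convert from exactly the charset the
// query was received in, instead of assuming system_character_set.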