Analysis:
-------------
If the server is started with limits of MAX_CONNECTIONS and
MAX_USER_CONNECTIONS, then at most MAX_USER_CONNECTIONS connections
from any particular user and at most MAX_CONNECTIONS connections in
total can be open at any time.
The server maintains a counter of total connections and a counter of
connections per user.
Here, MAX_CONNECTIONS connections are created to the server. Among
them are connections from a particular user (say USER1), fewer than
MAX_USER_CONNECTIONS of them. Then one more connection request arrives
from USER1. Since USER1 has not yet reached MAX_USER_CONNECTIONS, the
server increments the per-user connection counter. But as the server
already holds MAX_CONNECTIONS connections, the next check, against the
total connection count, fails. In this case control is returned
WITHOUT decrementing the per-user connection counter. So the per-user
counter keeps incrementing on each attempt until existing connections
are closed, and eventually it reaches MAX_USER_CONNECTIONS. From then
on, every connection attempt from USER1 returns the
MAX_USER_CONNECTIONS limit error, even when the total number of
connections to the server is below MAX_CONNECTIONS.
Fix:
-------------
This issue occurred because the connection counters were not handled
properly in the server. Changed the code to maintain the per-user
connection counters correctly.
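The invariant behind the fix can be illustrated with a minimal,
hypothetical sketch (the structure and names below are not the actual
server code): whichever admission check fails, every counter
incremented so far must be rolled back on the failure path.

  #include <map>
  #include <mutex>
  #include <string>

  // Illustrative admission logic; the real server uses its own
  // per-user hash and locking, but the invariant is the same.
  struct Admission
  {
    std::mutex lock;
    unsigned total= 0;
    unsigned max_connections= 100;
    unsigned max_user_connections= 10;
    std::map<std::string, unsigned> per_user;

    bool admit(const std::string &user)
    {
      std::lock_guard<std::mutex> guard(lock);
      unsigned &user_count= per_user[user];
      if (user_count >= max_user_connections)
        return false;                   // per-user limit reached
      ++user_count;                     // step 1: per-user counter
      if (total >= max_connections)     // step 2: global counter
      {
        --user_count;                   // the missing rollback: without
        return false;                   // it, the per-user count leaks
      }
      ++total;
      return true;
    }
  };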
BUG#11761686 insert_id event is not filtered.
Two issues are covered.
An INSERT into an autoincrement column that is not the first column of
a composite primary key is unsafe by the autoincrement logging design.
The case is specific to the MyISAM engine, because InnoDB does not
allow such a table definition.
However, no warning was issued and no switch to row-format logging in
MIXED mode was done; that is fixed.
Int-, Rand- and User-var log events were not filtered out along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.
Fixed by deferring their execution until the parent query executes, as
sketched below.
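A rough illustration of the deferral idea (purely hypothetical types;
the real replication code works with its own Log_event hierarchy and
memory management, and ownership handling is elided here):

  #include <cstddef>
  #include <vector>

  struct Log_event_stub
  {
    virtual void apply()= 0;        // side effect: set insert_id, a
    virtual ~Log_event_stub() {}    // user variable, the rand seed, ...
  };

  struct Deferred_events
  {
    std::vector<Log_event_stub*> buffered;

    void defer(Log_event_stub *ev) { buffered.push_back(ev); }

    // Called right before the parent query executes.  If the query
    // is filtered out, this is never called, so the buffered events
    // cannot leak into the execution context of the next query.
    void execute_deferred()
    {
      for (std::size_t i= 0; i < buffered.size(); i++)
        buffered[i]->apply();
      buffered.clear();
    }
  };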
******
Bug#11754117
Post review fixes.
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output of
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO, it is properly marked as non-const
for this query.
Test case added.
Used proper fast iterator.
KEY HANDLING ON SUBSEQUENT CREATE TABLE IF NOT EXISTS
PROBLEM:
--------
Consider an SP routine which does CREATE TABLE
with a REFERENCES clause. The first call to this routine
invokes the parser, and the parsed items are cached so as
to avoid parsing on the second execution of the routine.
It is observed that valgrind reports a warning
upon reading the thd->lex->alter_info->key_list->Foreign_key object,
which seems to point to an invalid memory address
during the second execution of the routine. Accessing this object
could theoretically cause a crash.
ANALYSIS:
---------
The problem stems from the fact that elements of the ref_columns
list in the thd->lex->alter_info->key_list->Foreign_key object
are changed to point to objects allocated on the runtime memory root.
During the first execution of the routine we create
a copy of the thd->lex->alter_info object.
As part of this process we create clones of the objects in
Alter_info::key_list, and of the Foreign_key object in particular.
When the Foreign_key object is cloned, we for some reason
perform shallow copies of both the Foreign_key::ref_columns
and Foreign_key::columns lists. So the new instance of the
Foreign_key object starts to SHARE the contents of the ref_columns
and columns lists with the original instance.
After that, as part of the cloning process, we call
list_copy_and_replace_each_value() for the elements of the
ref_columns list. As a result, because of the shallow copy, the
ref_columns lists in both the original and the cloned Foreign_key
objects start to contain pointers to Key_part_spec objects allocated
on the runtime memory root.
So when we start copying the thd->lex->alter_info object
during the second execution of the stored routine, we indeed
encounter a pointer to a Key_part_spec object allocated
on the runtime mem-root which was cleared at the end
of the previous execution. This is done in sp_head::execute(),
by a call to free_root(&execute_mem_root, MYF(0));
As a result we get valgrind warnings about accessing
unreferenced memory.
FIX:
----
The safest solution to this problem is to
fix the Foreign_key(Foreign_key, MEM_ROOT) constructor to do
a deep copy of the columns lists, similar to the Key(Key, MEM_ROOT)
constructor.
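The shape of such a deep copy, sketched with simplified stand-in
types (the real constructor takes a MEM_ROOT, allocates on it, and
uses the server's List type; cleanup is elided):

  #include <list>
  #include <string>

  struct Key_part_spec_stub { std::string field_name; };

  struct Foreign_key_stub
  {
    std::list<Key_part_spec_stub*> columns, ref_columns;

    Foreign_key_stub() {}

    // Deep copy: clone every element so the copy never shares
    // storage with the original.  The shallow pointer copy is what
    // left the cached clone pointing into a memory root freed at the
    // end of the first execution.
    Foreign_key_stub(const Foreign_key_stub &rhs)
    {
      std::list<Key_part_spec_stub*>::const_iterator it;
      for (it= rhs.columns.begin(); it != rhs.columns.end(); ++it)
        columns.push_back(new Key_part_spec_stub(**it));
      for (it= rhs.ref_columns.begin(); it != rhs.ref_columns.end(); ++it)
        ref_columns.push_back(new Key_part_spec_stub(**it));
    }
  };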
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that was supposed to deallocate a dyn-array.
Actually the same bug58915 was fixed in trunk by relocating the
dyn-array destruction into THD::cleanup_after_query(), which cannot be
bypassed. The current patch backports
magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: a static buffer for the dyn-array to base
on, and calling the array initialization precisely when it is
necessary rather than on each CHANGE MASTER as before.
Blind attempt to fix BUG 12881278 - MAIN.MYISAM TEST FAILS ON LINUX
The printed text is truncated at char 63:
"MySQL thread id 1236, OS thread handle 0x7ff187b96700, query id"
I still do not understand how this truncation could have caused the
main.myisam failure, but in any case the buffer needs to be increased.
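The truncation itself is easy to reproduce with a fixed-size buffer;
a minimal standalone example (buffer size and format are illustrative):

  #include <stdio.h>

  int main(void)
  {
    char buf[64];   /* room for 63 characters plus the terminating NUL */
    snprintf(buf, sizeof(buf),
             "MySQL thread id %lu, OS thread handle 0x%llx, query id %lu ...",
             1236UL, 0x7ff187b96700ULL, 42UL);
    puts(buf);      /* output is silently cut off after 63 characters */
    return 0;
  }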
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
Issue:
While running the embedded server, if the client issues a TEE command
(\T foo/bar) and the "foo/bar" directory doesn't exist, it is supposed
to give an error, but it was aborting instead. This was happening
because the wrong error handler was being called.
Solution:
Modified the calls to use the correct error handler. In the embedded
server case there are two error handlers (client and server) which are
supposed to be called based on which context the code is in. If it is
in client context, the client error handler should be called,
otherwise the server one (a sketch of the idea follows the test-case
note below).
Test case:
Test-case automation is not possible, as the current (following) code
doesn't allow '\T' to be executed from the command line (or a command
read from a file):
[client/mysql.cc]
...
static int
com_tee(String *buffer __attribute__((unused)),
        char *line __attribute__((unused)))
{
  char file_name[FN_REFLEN], *end, *param;
  if (status.batch)   /* <- this is TRUE while executing from the command line */
    return 0;
...
So, not adding a test case in GA. Will add a test case in mysql-trunk
after removing the above code so that this can be properly tested
before GA.
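For illustration, the handler-selection idea looks roughly like this
(all names below are hypothetical; the embedded server's real
mechanism differs in detail):

  typedef void (*error_handler_t)(unsigned int code, const char *msg);

  static error_handler_t client_error_handler;  /* prints to the client */
  static error_handler_t server_error_handler;  /* feeds the server's
                                                   diagnostics area */
  static bool in_server_context= false;  /* toggled when crossing the
                                            client/server boundary */

  static void report_error(unsigned int code, const char *msg)
  {
    /* The bug was invoking the server handler for a client-side
       failure (a TEE file that cannot be created), which aborted. */
    if (in_server_context)
      server_error_handler(code, msg);
    else
      client_error_handler(code, msg);
  }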
In sql_class.cc, 'row_count', of type 'ha_rows', was used as the last
argument for ER_TRUNCATED_WRONG_VALUE_FOR_FIELD, which is
"Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld".
So 'ha_rows' was used as 'long'.
On SPARC32 Solaris builds, 'long' is 4 bytes and 'ha_rows' is
'longlong', i.e. 8 bytes.
So the printf-like code was reading only the first 4 bytes.
Because the CPU is big-endian, 1LL is
0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01,
so the first four bytes yield 0. Thus the warning message had "row 0"
instead of "row 1" in test outfile_loaddata.test:
-Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 1
+Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 0
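The mismatch boils down to a few lines; a minimal reproduction with a
stand-in type (ha_rows itself is an 8-byte unsigned type):

  #include <stdio.h>

  typedef unsigned long long ha_rows_stub;   /* 8 bytes, like ha_rows */

  int main(void)
  {
    ha_rows_stub row_count= 1;
    /* "%ld" makes printf read only sizeof(long) bytes of the 8-byte
       argument -- undefined behaviour.  On 32-bit big-endian SPARC
       those are the high-order bytes of 1, i.e. zero: "row 0". */
    printf("at row %ld\n", row_count);
    /* Correct: pass a value of the type the format specifier expects. */
    printf("at row %ld\n", (long) row_count);
    return 0;
  }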
All error-messaging functions which internally invoke some printf-like
function are potential candidates for such mistakes.
One apparently easy way to catch such mistakes is to use
ATTRIBUTE_FORMAT (from my_attribute.h).
But this works only when the call site has both:
a) the format as a string literal
b) the types of the arguments.
So:
func(ER(ER_BLAH), 10);
will silently not be checked, because ER(ER_BLAH) is not known at
compile time (it is known at run-time, and depends on the chosen
language).
And
func("%s", a va_list argument);
has the same problem, as the *real* type of arguments is not
known at this site at compile time (it's known in some caller).
Moreover,
func(ER(ER_BLAH));
though possibly correct (if ER(ER_BLAH) has no '%' markers), will not
compile (gcc says "error: format not a string literal and no format
arguments").
Consequences:
1) ATTRIBUTE_FORMAT is here added only to functions which in practice
take "string literal" formats: "my_error_reporter" and "print_admin_msg".
2) it cannot be added to the other functions: my_error(),
push_warning_printf(), Table_check_intact::report_error(),
general_log_print().
To do a one-time check of functions listed in (2), the following
"static code analysis" has been done:
1) replace
my_error(ER_xxx, arguments for substitution in format)
with the equivalent
my_printf_error(ER_xxx,ER(ER_xxx), arguments for substitution in
format),
so that we have ER(ER_xxx) and the arguments *in the same call site*
2) add ATTRIBUTE_FORMAT to push_warning_printf(),
Table_check_intact::report_error(), general_log_print()
3) replace ER(xxx) with the hard-coded English text found in
errmsg.txt (like: ER(ER_UNKNOWN_ERROR) is replaced with
"Unknown error"), so that a call site has the format as string literal
4) this way, ATTRIBUTE_FORMAT can effectively do its job
5) compile, fix errors detected by ATTRIBUTE_FORMAT
6) revert steps 1-2-3.
The present patch produces no compiler error when submitted again to
the static code analysis above.
It cannot catch all problems though: see Field::set_warning(), in
which the call to push_warning_printf() receives a variable error code
(thus not replaceable by a string literal); I checked set_warning()
calls by hand though.
See also WL 5883 for one proposal to prevent such bugs from appearing
again in the future.
The issues fixed in the patch are:
a) mismatch in types (like 'int' passed to '%ld')
b) more arguments passed than specified in the format.
This patch resolves mismatches by changing the type/number of arguments,
not by changing error messages of sql/share/errmsg.txt. The latter would be wrong,
per the following old rule: errmsg.txt must be as stable as possible; no insertions
or deletions of messages, no changes of type or number of printf-like format specifiers,
are allowed, as long as the change impacts a message already released in a GA version.
If this rule is not followed:
- Connectors, which use error message numbers, will be confused (by insertions/deletions
of messages)
- using errmsg.sys of MySQL 5.1.n with mysqld of MySQL 5.1.(n+1)
could produce wrong messages or crash; such usage can easily happen if
installing 5.1.(n+1) while /etc/my.cnf still has --language=/path/to/5.1.n/xxx;
or if copying mysqld from 5.1.(n+1) into a 5.1.n installation.
When fixing b), I have verified that the superfluous arguments were not used in the format
in the first 5.1 GA (5.1.30 'bteam@astra04-20081114162938-z8mctjp6st27uobm').
Had they been used, then passing them today, even if the message doesn't use them
anymore, would have been necessary, as explained above.
GRADUALLY IF A TRIGGER EXISTS".
This bug manifested itself in two ways:
- Firstly, execution of any data-changing statement which
required prelocking (i.e. involved a stored function or
trigger) as part of a transaction slowed down a bit all
subsequent statements in this transaction. So performance
in transactions which periodically involved such statements
gradually degraded over time.
- Secondly, execution of any data-changing statement which
required prelocking as part of a transaction prevented a
concurrent FLUSH TABLES WITH READ LOCK from proceeding
until the end of the transaction instead of the end of the
particular statement.
The problem was caused by incorrect handling of the metadata lock
used in the FTWRL implementation for statements requiring prelocked
mode.
Each statement which changes data acquires the global IX lock
with STATEMENT duration. This lock is supposed to block
concurrent FTWRL from proceeding until the statement ends.
When entering prelocked mode, the durations of all metadata locks
acquired so far were changed to EXPLICIT, to prevent
substatements from releasing these locks. When prelocked mode
was left, the durations of metadata locks were changed to
TRANSACTIONAL (with a few exceptions) so that they could be properly
released at the end of the transaction.
Unfortunately, this meant that the global IX lock blocking
FTWRL with STATEMENT duration was moved to TRANSACTIONAL
duration after execution of a statement requiring prelocking.
Since each subsequent statement that required prelocking and
tried to acquire the global IX lock with STATEMENT duration got
a new instance of MDL_ticket, which was later moved to
TRANSACTIONAL duration, this led to unwarranted growth of the
number of tickets with TRANSACTIONAL duration in this
connection's MDL_context. As a result, searching for other
tickets in it became slow and acquisition of other metadata
locks by this transaction started to hog CPU.
Moreover, this also meant that after execution of a statement
requiring prelocking, a concurrent FTWRL was blocked
until the end of the transaction instead of the end of the statement.
This patch solves the problem by not moving locks to EXPLICIT
duration when a thread enters prelocked mode (unless it is a real
LOCK TABLES mode). This step turned out not to be really
necessary, as substatements don't try to release metadata locks.
Consequently, the global IX lock blocking FTWRL keeps its
STATEMENT duration and is properly released at the end of the
statement, and the above issue goes away.
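In terms of the metadata locking API the change is small; a
simplified sketch with stub types (loosely mirroring the MDL
subsystem, not its exact interface):

  #include <stddef.h>

  enum enum_mdl_duration_stub { MDL_STATEMENT, MDL_TRANSACTION, MDL_EXPLICIT };

  struct MDL_ticket_stub { enum_mdl_duration_stub duration; };

  /* Before the fix, entering prelocked mode moved every ticket to
     EXPLICIT duration and leaving it moved them to TRANSACTIONAL
     duration -- including the global IX ticket that should keep its
     STATEMENT duration.  After the fix the move happens only for a
     real LOCK TABLES. */
  void enter_locked_tables_mode(MDL_ticket_stub *tickets, size_t n,
                                bool real_lock_tables)
  {
    if (!real_lock_tables)
      return;   /* prelocked mode: substatements never release locks */
    for (size_t i= 0; i < n; i++)
      tickets[i].duration= MDL_EXPLICIT;
  }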
result set when SQLEXCEPTION is active.
The problem was in a hackish THD::no_warnings_for_error attribute.
When it was set, an error was not written to Warning_info -- only
Diagnostics_area state was changed. That means the Diagnostics_area
might contain an error state which is not present in the Warning_info.
The user-visible problem was that in some cases SHOW WARNINGS
returned an empty result set (i.e. there were no warnings) while
the previous SQL statement had failed. According to the MySQL
protocol, errors must be presented in the warning list.
The main idea of this patch is to remove THD::no_warnings_for_error.
There were a few places where it was used:
- sql_admin.cc, handling of REPAIR TABLE USE_FRM.
- sql_show.cc, when calling fill_schema_table_from_frm().
- sql_show.cc, when calling fill_table().
The fix is to either use internal error handlers, or to use a
temporary Warning_info storing warnings which might be ignored.
This patch is needed to fix Bug 11763162 (55843).
causes future shutdown hang
InnoDB would hang on shutdown if any XA transactions exist in the
system in the PREPARED state. This has been masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug #12161 Xa recovery and client disconnection.
[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.
xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.
innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.
trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts to roll back connected transactions that are in the PREPARED
state from another connection, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.
trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.
logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.
trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.
trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.
The problem is a race between a session closing its vio
(i.e. after a COM_QUIT) at the same time it is being killed by
another thread. This could trigger an assertion in vio_close(),
as the two threads could end up closing the same vio at the
same time. This could happen due to the implementation of
SIGNAL_WITH_VIO_CLOSE, which closes the vio of the thread
being killed.
The solution is to serialize the closing of the vio under
LOCK_thd_data, which protects THD data.
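A minimal sketch of the serialization (LOCK_thd_data is the real
mutex name; the surrounding types here are simplified stubs):

  #include <cstddef>
  #include <mutex>

  struct Vio_stub { void close() { /* release the socket */ } };

  struct THD_stub
  {
    std::mutex LOCK_thd_data;   // protects THD data, including the vio
    Vio_stub *active_vio;

    // Both the owner thread (on COM_QUIT) and a killing thread must
    // go through this helper, so the vio is closed exactly once.
    void close_active_vio()
    {
      std::lock_guard<std::mutex> guard(LOCK_thd_data);
      if (active_vio != NULL)
      {
        active_vio->close();
        active_vio= NULL;
      }
    }
  };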
No regression test is added as this is essentially a debug
issue, and the test case would be quite convoluted as we would
need to synchronize a session that is being killed -- which
is a bit difficult since the debug sync point code does not
synchronize killed sessions.
Problem: Extended characters outside of the ASCII range were not
displayed properly in SHOW PROCESSLIST, because thd_info->query was
always sent as system_character_set (utf8). This was wrong, because
the query buffer is never converted to utf8 - it always has the client
character set.
Fix: send the query buffer using the query character set.
@ sql/sql_class.cc
@ sql/sql_class.h
Introducing a new class CSET_STRING, a LEX_STRING with a character
set (a minimal sketch follows the file list below).
Adding set_query(&CSET_STRING)
Adding reset_query(), to use instead of set_query(0, NULL).
@ sql/event_data_objects.cc
Using reset_query()
@ sql/log_event.cc
Using reset_query()
Adding charset argument to set_query_and_id().
@ sql/slave.cc
Using reset_query().
@ sql/sp_head.cc
Changing backing up and restore code to use CSET_STRING.
@ sql/sql_audit.h
Using CSET_STRING.
In the "else" branch it's OK not to use
global_system_variables.character_set_client.
&my_charset_latin1, which is set in constructor, is fine
(verified with Sergey Vojtovich).
@ sql/sql_insert.cc
Using set_query() with proper character set: table_name is utf8.
@ sql/sql_parse.cc
Adding character set argument to set_query_and_id().
(This is the main point where thd->charset() is stored
into thd->query_string.cs, for use in "SHOW PROCESSLIST".)
Using reset_query().
@ sql/sql_prepare.cc
Storing client character set into thd->query_string.cs.
@ sql/sql_show.cc
Using CSET_STRING to fetch and send charset-aware query information
from threads.
@ storage/myisam/ha_myisam.cc
Using set_query() with proper character set: table_name is utf8.
@ mysql-test/r/show_check.result
@ mysql-test/t/show_check.test
Adding tests
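A minimal sketch of the idea behind CSET_STRING (stub names; the
server's real class has more to it):

  #include <cstddef>

  typedef struct charset_info_st CHARSET_INFO;   // forward declaration

  // A string that carries the character set it is encoded in, so that
  // consumers such as SHOW PROCESSLIST no longer have to guess.
  class CSET_STRING_stub
  {
    char *str;
    std::size_t length;
    CHARSET_INFO *cs;
  public:
    CSET_STRING_stub() : str(NULL), length(0), cs(NULL) {}
    CSET_STRING_stub(char *s, std::size_t len, CHARSET_INFO *charset)
      : str(s), length(len), cs(charset) {}
    char *ptr() const { return str; }
    std::size_t len() const { return length; }
    CHARSET_INFO *charset() const { return cs; }
  };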
during EXPLAIN
Before the patch, send_eof() of some subclasses of
select_result (e.g., select_send::send_eof()) could
handle being called after an error had occurred, while others
could not. The methods that were not well-behaved would trigger
an ASSERT on debug builds. Release builds were not affected.
Consider the following query as an example of how the ASSERT
could be triggered:
A user without execute privilege on f() does
SELECT MAX(key1) INTO @dummy FROM t1 WHERE f() < 1;
resulting in "ERROR 42000: execute command denied to user..."
The server would end the query by calling send_eof(). The
fact that the error had occurred would make the ASSERT trigger.
select_dumpvar::send_eof() was the offending method in the
bug report, but the problem also applied to other
subclasses of select_result. This patch makes send_eof()
of all subclasses of select_result uniformly handle being called
after an error has occurred, as sketched below.
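The uniform behaviour can be sketched as follows (a simplified
stand-in hierarchy, not the actual select_result classes):

  struct THD_stub
  {
    bool error_raised;
    bool is_error() const { return error_raised; }
  };

  struct select_result_stub
  {
    THD_stub *thd;
    virtual ~select_result_stub() {}

    // May legitimately be called after an error was raised; report
    // failure in that case instead of asserting.
    virtual bool send_eof()
    {
      if (thd->is_error())
        return true;            // error already recorded: just fail
      /* ... normal end-of-data processing ... */
      return false;
    }
  };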
bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".
The first bug manifested itself as a deadlock which occurred
when a connection which had some table open through a HANDLER
statement tried to update some data through a DML statement
while another connection tried to execute FLUSH TABLES WITH
READ LOCK concurrently.
What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked all
upcoming DML. After that it started to wait for the table opened
through the HANDLER statement to be flushed. When the first connection
tried to execute DML, it started to wait for the GRL/the second
connection, creating a deadlock.
The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements in cases when there was a constant
stream of concurrent DML statements (in two or more
connections).
This happened because requests for protection against GRL
which were acquired by DML statements ignored the presence of a
pending GRL, and thus the latter was starved.
This patch solves both these problems by re-implementing GRL
using metadata locks.
Similar to the old implementation, acquisition of the GRL in the new
implementation is two-step. During the first step we block
all concurrent DML and DDL statements by acquiring a global S
metadata lock (each DML and DDL statement acquires a global IX
lock for its duration). During the second step we block commits
by acquiring a global S lock in the COMMIT namespace (the commit code
acquires a global IX lock in this namespace).
Note that unlike in the old implementation, acquisition of
protection against GRL in DML and DDL is semi-automatic.
We assume that any statement which should be blocked by GRL
will either open and acquire write-locks on tables or acquire
metadata locks on the objects it is going to modify. For any such
statement a global IX metadata lock is automatically acquired
for its duration.
The first problem is solved because waits for the GRL become
visible to the deadlock detector in the metadata locking subsystem,
and thus deadlocks like the one in the first bug become impossible.
The second problem is solved because the global S locks which
are used for the GRL implementation are given preference over the
IX locks which are acquired by concurrent DML (and we can
switch to fair scheduling in the future if needed).
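Schematically, the two-step acquisition looks like this (stub enums
and a stub acquisition helper; not the real MDL API):

  #include <cstdio>

  enum mdl_namespace_stub { NS_GLOBAL, NS_COMMIT };
  enum mdl_type_stub { TYPE_S, TYPE_IX };

  /* Stand-in for metadata lock acquisition. */
  static void mdl_acquire(mdl_namespace_stub ns, mdl_type_stub type)
  {
    std::printf("acquire: ns=%d type=%d\n", (int) ns, (int) type);
  }

  /* Step 1 blocks DML/DDL, since each such statement holds a GLOBAL
     IX lock for its duration; step 2 blocks commits, since the commit
     code holds a COMMIT IX lock around committing. */
  void flush_tables_with_read_lock()
  {
    mdl_acquire(NS_GLOBAL, TYPE_S);   /* no new DML/DDL may proceed */
    /* ... flush and close all open tables ... */
    mdl_acquire(NS_COMMIT, TYPE_S);   /* no transaction may commit */
  }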
Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch the behavior was not consistent in this respect:
in some cases DML/DDL statements on temporary tables were
blocked while in others they were not. Since the main use cases
for FTWRL are various forms of backups, and temporary tables are
not preserved during backups, we have opted for consistently
allowing DML/DDL on temporary tables during FTWRL/GRL.
Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. It is now
either "Waiting for global read lock" or "Waiting for commit
lock", depending on which stage FTWRL is at.
Incompatible change:
To solve a deadlock in the events code which was exposed by this
patch, we have to replace the LOCK_event_metadata mutex with
metadata locks on events. As a result we have to prohibit
DDL on events under LOCK TABLES.
This patch also adds extensive test coverage for interaction
of DML/DDL and FTWRL.
The performance of the new and old global read lock implementations
was compared in sysbench tests. There was no significant
difference between the two implementations.