Commit graph

62 commits

Author SHA1 Message Date
Kent Boortz
9da00ebec9 Updated/added copyright headers 2011-06-30 17:46:53 +02:00
Mikael Ronstrom
a41873670c More review fixes 2011-03-04 13:12:31 +01:00
Mikael Ronstrom
c1986098bf merge 2011-01-04 18:46:01 +01:00
Dmitry Lenev
378cdc58c1 Patch that refactors global read lock implementation and fixes
bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".

The first bug manifested itself as a deadlock which occurred
when a connection, which had some table open through a HANDLER
statement, tried to update some data through a DML statement
while another connection tried to execute FLUSH TABLES WITH
READ LOCK concurrently.

What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked all
upcoming DML. After that it started to wait for the table opened
through the HANDLER statement to be flushed. When the first
connection tried to execute DML, it started to wait for the
GRL/the second connection, creating a deadlock.

The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements in cases when there was a constant
stream of concurrent DML statements (in two or more
connections).

This happened because requests for protection against GRL
which were acquired by DML statements ignored the presence of
a pending GRL, and thus the latter was starved.

This patch solves both these problems by re-implementing GRL
using metadata locks.

As in the old implementation, acquisition of GRL in the new
implementation is a two-step process. During the first step we
block all concurrent DML and DDL statements by acquiring a global
S metadata lock (each DML and DDL statement acquires a global IX
lock for its duration). During the second step we block commits
by acquiring a global S lock in the COMMIT namespace (the commit
code acquires a global IX lock in this namespace).

Note that unlike in the old implementation, acquisition of
protection against GRL in DML and DDL is semi-automatic.
We assume that any statement which should be blocked by GRL
will either open and write-lock tables or acquire
metadata locks on the objects it is going to modify. For any such
statement a global IX metadata lock is automatically acquired
for its duration.

The first problem is solved because waits for GRL become
visible to the deadlock detector in the metadata locking subsystem,
and thus deadlocks like the one in the first bug become impossible.

The second problem is solved because the global S locks which
are used for the GRL implementation are given preference over
the IX locks which are acquired by concurrent DML (and we can
switch to fair scheduling in the future if needed).

Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch the behavior was not consistent in this respect:
in some cases DML/DDL statements on temporary tables were
blocked while in others they were not. Since the main use cases
for FTWRL are various forms of backups, and temporary tables are
not preserved during backups, we have opted for consistently
allowing DML/DDL on temporary tables during FTWRL/GRL.

Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. It is now
either "Waiting for global read lock" or "Waiting for commit
lock" depending on the stage FTWRL is at.

Incompatible change:
To solve a deadlock in the events code which was exposed by this
patch we had to replace the LOCK_event_metadata mutex with
metadata locks on events. As a result we had to prohibit
DDL on events under LOCK TABLES.

This patch also adds extensive test coverage for interaction
of DML/DDL and FTWRL.

The performance of the new and old global read lock implementations
was compared in sysbench tests. There was no significant
difference between the new and old implementations.
2010-11-11 20:11:05 +03:00
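Illustration (a sketch added for clarity, not part of the commit above; t1 and t2 stand for ordinary base tables) of the bug #57006 scenario and how the MDL-based GRL changes it:

  connection 1:                      connection 2:
  HANDLER t1 OPEN;
                                     FLUSH TABLES WITH READ LOCK;
                                     -- completes the first GRL step, then
                                     -- waits for the HANDLER table to be flushed
  INSERT INTO t2 VALUES (1);
  -- waits for the GRL held by connection 2
  -- before the patch: an undetected deadlock
  -- after the patch: the wait happens on metadata locks, so the MDL
  -- deadlock detector can see and break the cycle
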
Mikael Ronstrom
1c0d998a12 Added more wait states for THD wait service 2010-10-27 20:29:09 +02:00
Dmitry Lenev
8322cad015 Reverted a temporary workaround for bug #56405 "Deadlock
in the MDL deadlock detector".

It is no longer needed as a better fix for this bug has
been pushed.
2010-09-30 17:29:12 +04:00
Dmitry Lenev
ac35157899 A temporary workaround for bug #56405 "Deadlock in the
MDL deadlock detector".

A deadlock could occur when a workload containing a mix
of DML, DDL and FLUSH TABLES statements affecting the same
set of tables was executed in a heavily concurrent environment.

This deadlock occurred when several connections tried to
perform deadlock detection in the metadata locking subsystem.
The first connection started traversing the wait-for graph,
encountered a sub-graph representing a wait for flush, acquired
LOCK_open and dived into sub-graph inspection. It then
encountered a sub-graph corresponding to a wait for a metadata
lock and blocked while trying to acquire an rd-lock on the
MDL_lock::m_rwlock protecting this sub-graph, since some
other thread held a wr-lock on it. When this wr-lock was released
it could happen (if there was another pending wr-lock
against this rwlock) that the rd-lock request from the first
connection was left unsatisfied, while at the same time a new
rd-lock request from a second connection sneaked in and was
satisfied (for this to be possible the second rd-lock request has
to come exactly after the wr-lock is released but before the
pending wr-lock manages to grab the rwlock, which is possible
both on Linux and in our own rwlock implementation). If this
second connection then continued traversing the wait-for graph
and encountered a sub-graph representing a wait for flush, it
tried to acquire LOCK_open and thus a deadlock was created.

This patch tries to work around this problem by not allowing the
deadlock detector to lock the LOCK_open mutex if some other thread
doing deadlock detection already owns it and the current search
depth is greater than 0. Instead a deadlock is reported.

Other possible solutions are either known to have negative
effects on performance or require much more time for proper
implementation and testing.

No test case is provided as this bug is very hard to repeat
in the MTR environment but is repeatable with the help of RQG
tests.
2010-09-06 21:29:02 +04:00
Konstantin Osipov
8673d2b20f Commit on behalf of Dmitry Lenev.
Merge his patch for Bug#52044 into 5.5, and apply 
review comments.
2010-08-12 17:50:23 +04:00
Konstantin Osipov
f8bfa3287d A fix for Bug#41158 "DROP TABLE holds LOCK_open during unlink()".
Remove acquisition of LOCK_open around file system operations,
since such operations are now protected by metadata locks.
Rework table discovery algorithm to not require LOCK_open.

No new tests added since all MDL locking operations are covered
in lock.test and mdl_sync.test, and as long as these tests
pass despite the increased concurrency, consistency must be
unaffected.
2010-08-09 22:33:47 +04:00
Dmitry Lenev
a6c00c276e Part of the fix for bug#52044 "FLUSH TABLES WITH READ LOCK and
FLUSH TABLES <list> WITH READ LOCK are incompatible" to
be pushed as a separate patch.

Replaced the thread state name "Waiting for table", which was
used by threads waiting for a metadata lock or table flush,
with a set of names which better reflect the types of resources
being waited for.

Also replaced the "Table lock" thread state name, which was used
by threads waiting on a thr_lock.c table-level lock, with the more
elaborate "Waiting for table level lock", to make it
more consistent with other thread state names.

Updated test cases and their results according to these
changes.

Fixed the sys_vars.query_cache_wlock_invalidate_func test to not
wait for the timeout of the wait_condition.inc script.
2010-08-06 15:29:37 +04:00
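Illustration (a sketch added for clarity, not part of the commit above; t1 is an arbitrary InnoDB table) of how the more specific states can be observed:

  connection 1:                      connection 2:
  LOCK TABLES t1 WRITE;
                                     ALTER TABLE t1 ADD COLUMN c INT;
                                     -- blocks on t1's metadata lock
  SHOW PROCESSLIST;
  -- connection 2 should now report a resource-specific state such as
  -- "Waiting for table metadata lock" rather than the old generic
  -- "Waiting for table"
  UNLOCK TABLES;
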
Dmitry Lenev
5fff906edd Fix for bug #52044 "FLUSH TABLES WITH READ LOCK and FLUSH
TABLES <list> WITH READ LOCK are incompatible".

The problem was that FLUSH TABLES <list> WITH READ LOCK,
when issued while another connection had acquired the global
read lock using FLUSH TABLES WITH READ LOCK, was blocked
and had to wait until the global read lock was released.

This issue stemmed from the fact that the FLUSH TABLES <list>
WITH READ LOCK implementation acquired X metadata locks
on the tables to be flushed. Since these locks required acquiring
the global IX lock, this statement was incompatible with the global
read lock.

This patch addresses the problem by using the SNW metadata lock
type for the tables to be flushed by FLUSH TABLES <list> WITH
READ LOCK. It is OK to acquire them without the global IX lock
as long as we don't try to upgrade those locks. Since SNW
locks allow concurrent statements using the same table, FLUSH
TABLES <list> WITH READ LOCK now has to wait until old
versions of the tables to be flushed go away after acquiring
the metadata locks. Since such waiting can lead to a deadlock,
the MDL deadlock detector was extended to take into account
waits for flush and resolve such deadlocks.

As a bonus, the code in open_tables() which was responsible for
waiting for old versions of tables to go away was refactored.
Now when we encounter an old version of a table in open_table()
we don't back off and wait for all old versions to go away,
but instead wait for this particular table to be flushed.
This approach, supported by deadlock detection, should reduce the
number of scenarios in which FLUSH TABLES aborts concurrent
multi-statement transactions.

Note that an active FLUSH TABLES <list> WITH READ LOCK still
blocks a concurrent FLUSH TABLES WITH READ LOCK statement,
as the former keeps tables open and thus prevents the
latter statement from doing the flush.
2010-07-27 17:34:58 +04:00
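Illustration (a sketch added for clarity, not part of the commit above; t1 is an arbitrary table) of the two statements made compatible by this fix:

  connection 1:                      connection 2:
  FLUSH TABLES WITH READ LOCK;
                                     FLUSH TABLES t1 WITH READ LOCK;
                                     -- before the fix: blocked until connection 1
                                     -- releases the global read lock (UNLOCK TABLES)
                                     -- after the fix: proceeds, since only SNW
                                     -- metadata locks are taken and no global IX
                                     -- lock is needed
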
Jon Olav Hauglid
131095149f Bug #55223 assert in Protocol::end_statement during CREATE DATABASE
The problem was that a statement could cause an assert if it was aborted by
KILL QUERY while it waited on a metadata lock. This assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of
ER_QUERY_INTERRUPTED.

The root cause of the problem was that there are two separate ways to tell if a
statement is killed: thd->killed and mysys_var->abort. KILL QUERY causes both
to be set, thd->killed before mysys_var->abort. Also, both values are reset
at the end of statement execution. This means that it is possible for
KILL QUERY to first set thd->killed, then have the killed statement reset
both thd->killed and mysys_var->abort and finally have KILL QUERY set
mysys_var->abort. This means that the connection with the killed statement
will start executing the next statement with the two values out of sync - i.e.
thd->killed not set but mysys_var->abort set.

Since mysys_var->abort is used to check if a wait for a metadata lock should
be aborted, the next statement would immediately abort any such waiting.
When waiting is aborted, no OK message is sent and thd->killed is checked to
see if ER_QUERY_INTERRUPTED should be sent to the client. But since
thd->killed had been reset, neither OK nor an error message was sent to the
client. This then triggered the assert.

This patch fixes the problem by changing the metadata lock waiting code to
check thd->killed.

No test case added as reproducing the assert is dependent on very exact timing
of two (or more) threads. The patch has been checked using RQG and the grammar
posted on the bug report.
2010-07-22 10:00:32 +02:00
Dmitry Lenev
cf93bc7181 A pre-requisite for the patch fixing bug #52044 "FLUSH TABLES
WITH READ LOCK and FLUSH TABLES <list> WITH READ LOCK are
incompatible", which adds information about waits caused by
the FLUSH TABLES statement to the deadlock detector in the MDL subsystem.

Remove the API supporting caching of pointers to TABLE_SHARE
objects in the MDL subsystem and all code related to it.

The problem was that the locking requirements of the code
implementing this API conflicted with the locking requirements
of the code which adds information about waits caused by flushes
to the deadlock detector in the MDL subsystem (the former needed to
lock LOCK_open or its future equivalent while holding a
write-lock on MDL_lock's rwlock, and the latter needs to be
able to read-lock the MDL_lock rwlock while owning LOCK_open or
its future equivalent).

Since caching of pointers to TABLE_SHARE objects in the MDL
subsystem didn't bring the expected performance benefits, we
decided to remove the caching API rather than try to come up
with some complex solution to this problem.
2010-07-13 22:01:54 +04:00
Jon Olav Hauglid
a5d72c498c merge from mysql-trunk-bugfixing 2010-07-13 10:39:24 +02:00
Davi Arnaut
a10ae35328 Bug#34043: Server loops excessively in _checkchunk() when safemalloc is enabled
Essentially, the problem is that safemalloc is excruciatingly
slow as it checks all allocated blocks for overrun at each
memory management primitive, yielding an almost exponential
slowdown for the memory management functions (malloc, realloc,
free). The overrun check basically consists of verifying some
bytes of a block for certain magic keys, which catches some
simple forms of overrun. Other minor problems are a violation
of aliasing rules and the fact that its own internal list of blocks
is prone to corruption.

Another issue with safemalloc is the maintenance cost,
as the tool has a significant impact on the server code.
Given the range of memory debuggers available nowadays,
especially those that are provided with the platform malloc
implementation, maintenance of an in-house and largely obsolete
memory debugger becomes a burden that is not worth the effort
due to its slowness and lack of support for detecting more
common forms of heap corruption.

Since there are third-party tools that can provide the same
functionality at a lower or comparable performance cost, the
solution is to simply remove safemalloc.

The removal of safemalloc also allows a simplification of the
malloc wrappers, removing quite a bit of kludge: the redefinition
of my_malloc and my_free, and the removal of the unused second
argument of my_free. Since free() always checks whether the
supplied pointer is null, redundant checks are also removed.

Also, this patch adds unit testing for my_malloc and moves
the my_realloc implementation into the same file as the other
memory allocation primitives.
2010-07-08 18:20:08 -03:00
Jon Olav Hauglid
41a3dfe490 A 5.5 version of the fix for Bug #54360 "Deadlock DROP/ALTER/CREATE
DATABASE with open HANDLER"

Remove LOCK_create_db, database name locks, and use metadata locks instead.
This exposes CREATE/DROP/ALTER DATABASE statements to the graph-based
deadlock detector in MDL, and paves the way for a safe, deadlock-free
implementation of RENAME DATABASE.

Database DDL statements will now take exclusive metadata locks on
the database name, while table/view/routine DDL statements take
intention exclusive locks on the database name. This prevents race
conditions between database DDL and table/view/routine DDL.
(e.g. DROP DATABASE with concurrent CREATE/ALTER/DROP TABLE)

By adding database name locks, this patch implements
WL#4450 "DDL locking: CREATE/DROP DATABASE must use database locks" and
WL#4985 "DDL locking: namespace/hierarchical locks".

The patch also changes code to use init_one_table() where appropriate.
The new lock_table_names() function requires TABLE_LIST::db_length to
be set correctly, and this is taken care of by init_one_table().

This patch also adds a simple template to help work with 
the mysys HASH data structure.

Most of the patch was written by Konstantin Osipov.
2010-07-01 15:53:46 +02:00
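Illustration (a sketch added for clarity, not part of the commit above; db1 and t1 are placeholders) of the race that database name locks close:

  -- connection 1: table DDL takes an intention exclusive lock on the name "db1"
  CREATE TABLE db1.t1 (i INT);
  -- connection 2: database DDL takes an exclusive lock on the name "db1",
  -- so it now waits for the CREATE TABLE above instead of racing with it
  DROP DATABASE db1;
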
Konstantin Osipov
94174db16b A new implementation for the TABLE_SHARE cache in the MDL
subsystem. Fix a number of caveats that the previous
implementation suffered from, including unprotected
access to shared data and lax resource accounting
(share->ref_count) that could lead to deadlocks.

The new implementation still suffers from a number
of potential deadlocks in some edge cases, and is therefore
still not enabled by default, especially since performance
testing has shown that it gives only marginal improvements
(not even exceeding measurement accuracy).

@todo: 
- Remove calls to close_cached_tables() with REFRESH_FAST,
and have_lock, because they break the MDL cache. 
- rework FLUSH TABLES <list> to not use close_cached_tables()
- make sure that whenever we set TABLE_SHARE::version to
0 we free MDL cache references to it.
2010-06-18 20:14:10 +04:00
Dmitry Lenev
571acc878b Patch that changes the approach to how we acquire metadata
locks for DML statements and changes the way MDL locks
are acquired/granted in the contended case.

Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the open_tables()
process, we now wait for the lock to be released without releasing
any previously acquired locks. If the conflicting lock goes away,
we resume opening tables. If waiting leads to a deadlock, we
try to resolve it by backing off and restarting open_tables()
immediately.

As a result, both waiting for the possibility to acquire a
metadata lock and acquiring it now always happen within the
same MDL API call. This has made it possible to make releasing a lock
and granting it to the most appropriate pending request an
atomic operation.
Thanks to this it became possible to wake up, during the release
of a lock, only those waiters whose requests can be satisfied
at the moment, as well as to wake up only one waiter in the case
when granting its request would prevent all other requests
from being satisfied. This solves the thundering herd problem
which occurred when we released some lock and
woke up many waiters for SNRW or X locks (this was the issue
in bug#52289 "performance regression for MyISAM in sysbench
OLTP_RW test").
This also allowed implementing fairer (FIFO) scheduling
among waiters with the same priority.
It also opens the door for introducing new types of requests
for metadata locks, such as a low-prio SNRW lock, which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.

Notice that after this change we can sometimes report an ER_LOCK_DEADLOCK
error in cases in which it did not happen before.
In particular, we will always report this error if waiting for a
conflicting lock happened in the middle of a transaction
and resulted in a deadlock. Before this patch the error was
not reported if the deadlock could have been resolved by backing
off all metadata locks acquired by the current statement.
2010-06-07 11:06:55 +04:00
Konstantin Osipov
ec97ce952e A follow-up for the previous patch, titled:
A code review comment for Bug#52289.
Encapsulate the deadlock detection functionality into
a visitor class...

Remove a race introduced by omission: 
initialize iterators under a read lock on the object.
2010-06-03 18:32:56 +04:00
Konstantin Osipov
f29298d91c A code review comment for Bug#52289.
Encapsulate the deadlock detection functionality into 
a visitor class, and separate it from the wait-for graph
traversal code.

Use "Internal iterator" and "Visitor" patterns to 
achieve the desired separation of responsibilities.

Add comments.
2010-06-03 18:08:22 +04:00
Konstantin Osipov
f3e12c7567 Add comments to a few MDL deadlock-search related variables
and methods.
2010-06-02 12:06:07 +04:00
Dmitry Lenev
c070e5a1ed Fix for bug #51263 "Deadlock between transactional
SELECT and ALTER TABLE ...  REBUILD PARTITION".

ALTER TABLE on an InnoDB table (including partitioned tables)
acquired exclusive locks on rows of the table being altered.
In cases when there was a concurrent transaction which did
locking reads from this table, this sometimes led to a
deadlock which was detected neither by the MDL subsystem nor by
the InnoDB engine (and was reported only after exceeding
innodb_lock_wait_timeout).

This problem stemmed from the fact that ALTER TABLE acquired a
TL_WRITE_ALLOW_READ lock on the table being altered. This lock
was interpreted as a write lock and thus the
handler::external_lock() method was called for the table being
altered with F_WRLCK as an argument. As a result the InnoDB engine
treated ALTER TABLE as an operation which is going to change data
and acquired LOCK_X locks on rows being read from the old
version of the table.

In a case when there was a transaction which had already acquired
an SR metadata lock on the table and some LOCK_S locks on its rows
(e.g. by using it in a subquery of a DML statement), a concurrent
ALTER TABLE was blocked at the moment when it tried to
acquire a LOCK_X lock before reading one of these rows.
The transaction's attempt to acquire an SW metadata lock on
the table being altered then led to a deadlock, since it had to wait
for ALTER TABLE to release its SNW lock. This deadlock was not
detected and got resolved only after the timeout expired,
because the waits were happening in two different subsystems.

Similar deadlocks could have occurred in other situations.
This patch tries to solve the problem by changing the ALTER TABLE
implementation to use a TL_READ_NO_INSERT lock instead of
TL_WRITE_ALLOW_READ. After this step handler::external_lock()
is called with F_RDLCK as an argument and the InnoDB engine
correctly interprets ALTER TABLE as an operation which only
reads data from the original version of the table. Thanks to this,
ALTER TABLE acquires only LOCK_S locks on the rows it reads.
This, in turn, causes inter-subsystem deadlocks to go
away, as all potential lock conflicts and thus deadlocks will
be limited to the metadata locking subsystem:

- When ALTER TABLE reads rows from the table being altered it
  can't encounter any locks which conflict with LOCK_S row
  locks. There should be no concurrent transactions holding
  LOCK_X row locks: such a transaction would have had to
  acquire an SW metadata lock on the table first, which would have
  conflicted with ALTER's SNW lock.
- Vice versa, when DML which runs concurrently with ALTER
  TABLE tries to lock a row, it should be requesting only a LOCK_S
  lock which is compatible with the locks acquired by ALTER,
  as otherwise such DML must own an SW metadata lock on the table,
  which would be incompatible with ALTER's SNW lock.
2010-05-26 16:18:08 +04:00
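Illustration (a sketch added for clarity, not part of the commit above; t1 is a partitioned InnoDB table with partition p0 and an integer column i, t2 another InnoDB table) of the scenario described in the commit:

  -- connection 1: a transaction that takes shared (LOCK_S) row locks on t1,
  -- for example by reading it as the source of a DML statement
  BEGIN;
  INSERT INTO t2 SELECT i FROM t1;
  -- connection 2: before the patch the rebuild read t1's rows under LOCK_X
  -- and could block on connection 1's shared row locks
  ALTER TABLE t1 REBUILD PARTITION p0;
  -- connection 1: a later update needs an SW metadata lock on t1 and waits
  -- for ALTER's SNW lock; before the patch this cycle spanned two subsystems
  -- and was only broken by innodb_lock_wait_timeout
  UPDATE t1 SET i = i + 1;
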
Marc Alff
2399b61443 Bug#51295 Build warnings in mdl.cc
Before this fix, the performance schema instrumentation
in mdl.h / mdl.cc was incomplete, causing:
- build warnings,
- no data collection for the performance schema

This fix:
- added instrumentation helpers for the new preferred
  reader read write lock, mysql_prlock_*
- completely implemented the performance schema
  instrumentation of mdl.h / mdl.cc
2010-03-07 10:50:47 -07:00
Dmitry Lenev
dcaa144852 Fix for bug #51105 "MDL deadlock in rqg_mdl_stability test
on Windows".

On platforms where the read-write lock implementation does not
prefer readers by default (Windows, Solaris), the server might
have deadlocked while detecting an MDL deadlock.

The MDL deadlock detector relies on the fact that the read-write
locks which are used in its implementation prefer readers
(see the new comment for MDL_lock::m_rwlock for details).
So far the MDL code assumed that the default implementation of
read/write locks on the system has this property.
This turned out to be wrong, for example, on
Windows and Solaris. Thus the MDL deadlock detector might have
deadlocked on these systems.

This fix simply adds a portable implementation of a read/write
lock which prefers readers and changes the MDL code to use this
new type of synchronization primitive.

No test case is added as the existing rqg_mdl_stability test can
serve as one.
2010-02-28 07:35:09 +03:00
Dmitry Lenev
aab777ca1e Fix for bug #51093 "Crash (possibly stack overflow) in
MDL_lock::find_deadlock".

On some platforms the deadlock detector in the metadata locking
subsystem might, under certain conditions, have exhausted
stack space, causing server crashes.

In particular, this caused failures of the rqg_mdl_stability
test on Solaris in PushBuild.

During a search for a deadlock the MDL deadlock detector could
sometimes encounter a loop in the waiters graph in which the
MDL_context which started the search for a deadlock
does not participate. In such a case our algorithm will
continue looping, assuming that either this deadlock will
be resolved by the MDL_context which created it (i.e.
by one of the loop participants) or the maximum search depth
will be reached.
Since the maximum search depth was set to 1000, in the latter case
we might have exhausted stack space on platforms where each
iteration of the deadlock search algorithm needs more than
DEFAULT_STACK_SIZE/1000 bytes of stack (around 192 bytes for
32-bit and around 256 bytes for 64-bit platforms).

This patch solves this problem by reducing the maximum search
depth for the MDL deadlock detector to 32. This should be safe
at the moment, as it is unlikely that each iteration of the
current deadlock detector algorithm will consume more than
1K of stack (thus the total amount of stack required can't be
more than 32K), and we require at least 80K of stack in order
to open any table. Also, this value should (hopefully) be big
enough not to cause too many false deadlock errors (there
is anecdotal evidence that real-life deadlocks are
typically shorter than that).

Additional research should be conducted in the future in order
to determine a more optimal value for the maximum search depth.

This patch does not include a test case as the existing
rqg_mdl_stability test can serve as one.
2010-02-15 15:37:48 +03:00
Dmitry Lenev
22bc48b280 Fix for bug #51134 "Crash in MDL_lock::destroy on a concurrent
DDL workload".

When a RENAME TABLE or LOCK TABLE ... WRITE statement which
mentioned the same table several times was aborted during
the process of acquiring metadata locks (due to a discovered
deadlock or because of a KILL statement), the server
might have crashed.

When an attempt to acquire all requested locks failed, we
went through the list of requests and released, one by one,
the locks we had managed to acquire by that moment. Since
in the scenario described above the list of requests contained
duplicates, this led to releasing the same ticket twice and,
as a result, a crash.

This patch solves the problem by employing a different approach
to releasing locks in case of a failure to acquire all of the
requested locks.
Now we take an MDL savepoint before starting to acquire locks
and simply roll back to it if things go bad.
2010-02-15 13:23:34 +03:00
Jon Olav Hauglid
5bb67f34b8 Bug #45225 Locking: hang if drop table with no timeout
This patch introduces timeouts for metadata locks. 

The timeout is specified in seconds using the new dynamic system 
variable  "lock_wait_timeout" which has both GLOBAL and SESSION
scopes. Allowed values range from 1 to 31536000 seconds (= 1 year). 
The default value is 1 year.

The new server parameter "lock-wait-timeout" can be used to set
the default value upon server startup.

"lock_wait_timeout" applies to all statements that use metadata locks.
These include DML and DDL operations on tables, views, stored procedures
and stored functions. They also include LOCK TABLES, FLUSH TABLES WITH
READ LOCK and HANDLER statements.

The patch also changes thr_lock.c code (table data locks used by MyISAM
and other simplistic engines) to use the same system variable.
InnoDB row locks are unaffected.

One exception to the handling of the "lock_wait_timeout" variable
is delayed inserts. All delayed inserts are executed with a timeout
of 1 year regardless of the setting for the global variable. As the
connection issuing the delayed insert gets no notification of 
delayed insert timeouts, we want to avoid unnecessary timeouts.

It's important to note that the timeout value is used for each lock
acquired and that one statement can take more than one lock.
A statement can therefore block for longer than the lock_wait_timeout 
value before reporting a timeout error. When lock timeout occurs, 
ER_LOCK_WAIT_TIMEOUT is reported.

Test case added to lock_multi.test.
2010-02-11 11:23:39 +01:00
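Illustration (a sketch added for clarity, not part of the commit above; t1 is an arbitrary table) of the new variable in use:

  -- connection 1:
  LOCK TABLES t1 WRITE;
  -- connection 2:
  SET SESSION lock_wait_timeout = 5;
  SELECT * FROM t1;
  -- blocks on the metadata lock held by connection 1 and, after about
  -- 5 seconds, fails with ER_LOCK_WAIT_TIMEOUT instead of waiting
  -- for the default timeout of one year
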
Dmitry Lenev
f229ac073a Fix for bug #50998 "Deadlock in MDL code during test
rqg_mdl_stability".

When the start of a statement's wait on a metadata lock
created more than one loop in the waiters graph, the server might
have entered a deadlock condition.

The problem was that in the case described above the MDL deadlock
detector had to perform several searches for a deadlock but
forgot to reset the Deadlock_detection_context before performing
a new search.
Failure to do so broke the assumption, in the code responsible for
choosing a victim, that if Deadlock_detection_context::victim
is set we also have a read lock on m_waiting_for_lock for this
context. As a result this lock could be unlocked more
times than it was acquired, which corrupted the rwlock's state
and led to a server deadlock.

This fix ensures that such a reset is done before each attempt
to find a deadlock.
2010-02-10 18:46:03 +03:00
Jon Olav Hauglid
a4819c5d04 Bug #50912 Assertion `ticket->m_type >= mdl_request->type'
failed on HANDLER + I_S

This assert was triggered when an I_S query tried to acquire a
metadata lock on a table which was already locked by a HANDLER
statement in the same connection.

First the HANDLER took an MDL_SHARED lock. Afterwards, the I_S query
requested an MDL_SHARED_HIGH_PRIO lock. The existing MDL_SHARED ticket
is found in find_ticket() since it satisfies 
ticket->has_stronger_or_equal_type(mdl_request->type) as MDL_SHARED
and MDL_SHARED_HIGH_PRIO have equal strengths, just different priority.

However, two asserts later check lock type strengths using relational
operators (>= and <=) rather than MDL_ticket::has_stronger_or_equal_type().
These asserts are triggered since MDL_SHARED >= MDL_SHARED_HIGH_PRIORITY
is false (mapped to 1 and 2 respectively).

This patch updates the asserts to use MDL_ticket::has_stronger_or_equal_type()
rather than relational operators to check lock type strength.

Test case added to include/handler.inc.
2010-02-06 10:44:03 +01:00
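Illustration (a sketch added for clarity, not part of the commit above; t1 stands for any base table) of the statement pattern behind the assert:

  HANDLER t1 OPEN;
  -- an I_S query that needs to open t1 requests a high-priority shared
  -- metadata lock while the same connection already holds a shared lock
  -- through the HANDLER
  SELECT * FROM information_schema.tables WHERE table_name = 't1';
  HANDLER t1 CLOSE;
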
Konstantin Osipov
5deaf55a1c A post-merge fix for next-mr -> next-4284 merge:
Make all mutexes and conditions of type mysql_mutex_t, mysql_cond_t,
since it's now the expectation of THD::awake().
2010-02-05 01:37:44 +03:00
Dmitry Lenev
967bd206ae Improve concurrency in the metadata locking subsystem by
moving the calculation of the hash value used when looking up
MDL_lock objects in MDL_map out of the critical section.
2010-02-04 09:25:32 +03:00
Dmitry Lenev
bcf70096e0 A follow-up for the patch which implemented new
type-of-operation-aware metadata locks and added a
wait-for graph based deadlock detector to the MDL
subsystem (this patch fixed bug #46272 "MySQL 5.4.4,
new MDL: unnecessary deadlock" and bug #37346
"innodb does not detect deadlock between update and
alter table").

Removed unused and redundant method.
2010-02-03 22:55:46 +03:00
Konstantin Osipov
2c6015e8dc Merge next-mr -> next-4284. 2010-02-02 02:22:16 +03:00
Dmitry Lenev
19940fa7fc Fix for sporadic crashes of lock_multi_bug38499.test
caused by the patch which implemented new type-of-operation-aware
metadata locks and added a wait-for graph based deadlock
detector to the MDL subsystem (this patch fixed bug #46272
"MySQL 5.4.4, new MDL: unnecessary deadlock" and bug #37346
"innodb does not detect deadlock between update and alter
table").

The crashes were caused by a race in MDL_context::try_acquire_lock().
This method added the MDL_ticket to the list of granted tickets and
released the lock protecting the list before setting MDL_ticket::m_lock.
Thus some other thread was able to see the ticket without a properly
set m_lock member for a short period of time. If this thread
called a method involving this member during this period, a crash
happened.

This fix ensures that MDL_ticket::m_lock is set in all cases
when a ticket is added to the granted/pending lists in MDL_lock.
2010-02-01 17:38:50 +03:00
Konstantin Osipov
c5b48ab365 Fix a Windows compilation warning (req_count is later used
in a pointer arithmetics expression).
2010-02-01 17:12:56 +03:00
Dmitry Lenev
afd15c43a9 Implement new type-of-operation-aware metadata locks.
Add a wait-for graph based deadlock detector to the
MDL subsystem.

Fixes bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock" and
bug #37346 "innodb does not detect deadlock between update and
alter table".

The first bug manifested itself as an unwarranted abort of a
transaction with an ER_LOCK_DEADLOCK error by a concurrent ALTER
statement, when this transaction tried to repeat the use of a
table which it had already used in a similar fashion before the
ALTER started.

The second bug showed up as a deadlock between table-level
locks and InnoDB row locks, which was "detected" only after
innodb_lock_wait_timeout timeout.

A transaction would start using the table and modify a few
rows.
Then ALTER TABLE would come in, and start copying rows
into a temporary table. Eventually it would stumble on
the modified records and get blocked on a row lock.
The first transaction would try to do more updates, and get
blocked on thr_lock.c lock.
This situation of circular wait would only get resolved
by a timeout.

Both these bugs stemmed from inadequate solutions to the
problem of deadlocks occurring between different
locking subsystems.

In the first case we tried to avoid deadlocks between the metadata
locking and table-level locking subsystems when upgrading a shared
metadata lock to an exclusive one.
Transactions holding the shared lock on the table and waiting for
some table-level lock used to be aborted too aggressively.

We also allowed ALTER TABLE to start in the presence of transactions
that modify the subject table. ALTER TABLE acquires a
TL_WRITE_ALLOW_READ lock at start, and that blocks all writes
against the table (naturally, we don't want any writes to be lost
when switching the old and the new table). The TL_WRITE_ALLOW_READ
lock, in turn, would block the started transactions on a thr_lock.c
lock, should they do more updates. This, again, led to the need
to abort such transactions.

The second bug occurred simply because we didn't have any
mechanism to detect deadlocks between the table-level locks
in thr_lock.c and row-level locks in InnoDB, other than
innodb_lock_wait_timeout.

This patch solves both these problems by moving lock conflicts
which are causing these deadlocks into the metadata locking
subsystem, thus making it possible to avoid or detect such
deadlocks inside MDL.

To do this we introduce new type-of-operation-aware metadata
locks, which allow the MDL subsystem to know not only the fact that
a transaction has used or is going to use some object, but also what
kind of operation it has carried out or is going to carry out on the
object.

This, along with the addition of a special kind of upgradable
metadata lock, allows ALTER TABLE to wait until all
transactions which have updated the table have gone away.
This solves the second issue.
Another special type of upgradable metadata lock is acquired
by LOCK TABLE WRITE. This second lock type allows us to solve the
first issue, since aborting table-level locks in the event of
DDL under LOCK TABLES also becomes unnecessary.

Below follows the list of incompatible changes introduced by
this patch:

- From now on, ALTER TABLE and CREATE/DROP TRIGGER SQL (i.e. those
  statements that acquire a TL_WRITE_ALLOW_READ lock)
  wait for all transactions which have *updated* the table to
  complete.

- From now on, LOCK TABLES ... WRITE and REPAIR/OPTIMIZE TABLE
  (i.e. all statements which acquire a TL_WRITE table-level lock) wait
  for all transactions which *updated or read* from the table
  to complete.
  As a consequence, the innodb_table_locks=0 option no longer applies
  to LOCK TABLES ... WRITE.

- DROP DATABASE, DROP TABLE, RENAME TABLE no longer abort
  statements or transactions which use tables being dropped or
  renamed, and instead wait for these transactions to complete.

- Since LOCK TABLES WRITE now takes a special metadata lock,
  which is transaction-wide and not compatible with reads or writes
  against the subject table, the thr_lock.c deadlock avoidance algorithm
  that used to ensure the absence of deadlocks between LOCK TABLES
  WRITE and other statements is no longer sufficient, even for
  MyISAM. The wait-for graph based deadlock detector of the MDL
  subsystem may sometimes be necessary and is now involved. This may
  lead to an ER_LOCK_DEADLOCK error being produced for multi-statement
  transactions even if they only use MyISAM:

  session 1:         session 2:
  begin;

  update t1 ...      lock table t2 write, t1 write;
                     -- gets a lock on t2, blocks on t1

  update t2 ...
  (ER_LOCK_DEADLOCK)

- Finally,  support of LOW_PRIORITY option for LOCK TABLES ... WRITE
  was abandoned.
  LOCK TABLE ... LOW_PRIORITY WRITE from now on has the same
  priority as the usual LOCK TABLE ... WRITE.
  SELECT HIGH PRIORITY no longer trumps LOCK TABLE ... WRITE  in
  the wait queue.

- We do not take upgradable metadata locks on implicitly
  locked tables. So if one has, say, a view v1 that uses
  table t1, and issues:
  LOCK TABLE v1 WRITE;
  FLUSH TABLE t1; -- (or just 'FLUSH TABLES'),
  an error is produced.
  In order to be able to perform DDL on a table under LOCK TABLES,
  the table must be locked explicitly in the LOCK TABLES list.
2010-02-01 14:43:06 +03:00
Dmitry Lenev
a63f8480db Patch that changes the metadata locking subsystem to use a mutex per lock and
a condition variable per context instead of one mutex and one condition
variable for the whole subsystem.

This should increase concurrency in this subsystem.

It also opens the way for further changes which are necessary to solve
such bugs as bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock"
and bug #37346 "innodb does not detect deadlock between update and alter
table".

Two other notable changes done by this patch:

- The MDL subsystem no longer implicitly acquires the global intention exclusive
  metadata lock when a per-object metadata lock is acquired. Now this has
  to be done by explicit calls outside of the MDL subsystem.
- Instead of using a separate MDL_context for opening system tables/tables
  for the purposes of I_S, we now create an MDL savepoint in the main context
  before opening tables and roll back to this savepoint after closing
  them. This means that it is now possible to get an ER_LOCK_DEADLOCK error
  even outside a transaction. This might happen in the unlikely case when
  one runs DDL on one of the system tables while also running DDL on some
  other tables. Cases when this ER_LOCK_DEADLOCK error is not justified
  will be addressed by the advanced deadlock detector for the MDL subsystem which
  we plan to implement.
2010-01-21 23:43:03 +03:00
Dmitry Lenev
236539b471 Implementation of simple deadlock detection for metadata locks.
This change is supposed to reduce the number of ER_LOCK_DEADLOCK
errors which occur when a multi-statement transaction encounters
a conflicting metadata lock in cases when waiting is possible.

The idea is not to fail with an ER_LOCK_DEADLOCK error immediately when
we encounter a conflicting metadata lock. Instead we release all
metadata locks acquired by the current statement and start to wait
until the conflicting lock goes away. To avoid deadlocks we use a simple
heuristic which aborts waiting with an ER_LOCK_DEADLOCK error if it
turns out that somebody is waiting for metadata locks owned by
this transaction.

This patch also fixes bug #46273 "MySQL 5.4.4 new MDL: Bug#989
is not fully fixed in case of ALTER".

The bug was that concurrent execution of an UPDATE or MULTI-UPDATE
statement, as a part of a multi-statement transaction that had already
used the table being updated, and an ALTER TABLE statement might have
resulted in a loss of isolation between this transaction and the ALTER
TABLE statement, which manifested itself as changes performed by
ALTER TABLE becoming visible in the transaction and a wrong binary log
order as a consequence.

This problem occurred when the UPDATE or MULTI-UPDATE's wait in the
mysql_lock_tables() call was aborted due to a metadata lock
upgrade performed by the concurrent ALTER TABLE. After such an abort all
metadata locks held by the transaction were released, but the transaction
silently continued to be executed as if nothing had happened.

We solve this problem by changing our code not to release all
locks in such a case. Instead we release only the locks which were
acquired by the current statement and then try to reacquire them
by restarting the open/lock tables process. We piggyback on the simple
deadlock detector implementation since this change has to be
done for it anyway.
2009-12-30 20:53:30 +03:00
Konstantin Osipov
39a1a50dfb A prerequisite patch for the fix for Bug#46224
"HANDLER statements within a transaction might lead to deadlocks".
Introduce a notion of a sentinel to MDL_context. A sentinel
is a ticket that separates all tickets in the context into two
groups: before and after it. Currently we can have (and need) only
one designated sentinel -- it separates the locks taken by a LOCK
TABLE or HANDLER statement, which must survive COMMIT and ROLLBACK,
from all other locks, which must be released at COMMIT or ROLLBACK.
The tricky part is keeping the sentinel up to date when
someone releases its corresponding ticket. This can happen, e.g.
if someone issues DROP TABLE under LOCK TABLES (generally,
see all calls to release_all_locks_for_name()).
MDL_context::release_ticket() is modified to take care of it.

******
A fix and a test case for Bug#46224 "HANDLER statements within a
transaction might lead to deadlocks".

An attempt to mix HANDLER SQL statements, which are transaction-
agnostic, an open multi-statement transaction,
and DDL against the involved tables (in a concurrent connection) 
could lead to a deadlock. The deadlock would occur when
HANDLER OPEN or HANDLER READ would have to wait on a conflicting
metadata lock. If the connection that issued the HANDLER statement
also had other metadata locks (say, acquired in the scope of a
transaction), a classical deadlock situation of mutual wait
could occur.

Incompatible change: entering LOCK TABLES mode automatically
closes all open HANDLERs in the current connection.

Incompatible change: previously an attempt to wait on a lock
in a connection that has an open HANDLER statement could wait
indefinitely/deadlock. After this patch, an error ER_LOCK_DEADLOCK
is produced.

The idea of the fix is to merge thd->handler_mdl_context
with the main mdl_context of the connection, used for transactional
locks. This makes deadlock detection possible, since all waits
with locks are "visible" and available to analysis in a single
MDL context of the connection.

Since HANDLER locks and transactional locks have a different life
cycle -- HANDLERs are explicitly open and closed, and so
are HANDLER locks, explicitly acquired and released, whereas
transactional locks "accumulate" till the end of a transaction
and are released only with COMMIT, ROLLBACK and ROLLBACK TO SAVEPOINT,
a concept of "sentinel" was introduced to MDL_context.
All locks, HANDLER and others, reside in the same linked list.
However, a selected element of the list separates locks with
different life cycle. HANDLER locks always reside at the
end of the list, after the sentinel. Transactional locks are
prepended to the beginning of the list, before the sentinel.
Thus ROLLBACK, COMMIT or ROLLBACK TO SAVEPOINT only
release those locks that reside before the sentinel. HANDLER locks
must be released explicitly as part of a HANDLER CLOSE statement,
or an implicit close.
The same approach with a sentinel
is also employed for LOCK TABLES locks. Since HANDLER and LOCK TABLES
statements have never worked together, the implementation is
kept simple and only maintains one sentinel, which is used either
for HANDLER locks or for LOCK TABLES locks.
2009-12-22 19:09:15 +03:00
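Illustration (a sketch added for clarity, not part of the commit above; t1 is a placeholder table) of the kind of mutual wait that is now detected:

  connection 1:                      connection 2:
  HANDLER t1 OPEN;
                                     ALTER TABLE t1 ADD COLUMN c INT;
                                     -- waits for the shared metadata lock
                                     -- held by the HANDLER in connection 1
  SELECT * FROM t1;
  -- this new shared request queues behind the pending exclusive ALTER lock;
  -- with the HANDLER and transactional locks merged into one MDL context,
  -- the circular wait is detected and reported as ER_LOCK_DEADLOCK instead
  -- of hanging indefinitely
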
Jon Olav Hauglid
ddea504a7d Bug #48541 Deadlock between LOCK_open and LOCK_mdl
The reason for the deadlock was an improper exit from
MDL_context::wait_for_locks() which caused mysys_var->current_mutex to remain
LOCK_mdl even though LOCK_mdl was no longer held by that connection. 

This could for example lead to a deadlock in the following way:
1) INSERT DELAYED tries to open a table but fails, and trying to recover it
calls wait_for_locks().
2) Due to a pending exclusive request, wait_for_locks() fails and exits without
resetting mysys_var->current_mutex for the delayed insert handler thread. So it
continues to point to LOCK_mdl.
3) The handler thread manages to open a table.
4) A different connection takes LOCK_open and tries to take LOCK_mdl.
5) FLUSH TABLES from a third connection notices that the handler thread has a
table open, and tries to kill it. This involves locking mysys_var->current_mutex
while having LOCK_open locked. Since current_mutex mistakenly points to LOCK_mdl,
we have a deadlock.

This patch makes sure MDL_EXIT_COND() is called before exiting wait_for_locks().
This clears mysys_var->current_mutex, which resolves the issue.

An assert is added to recover_from_failed_open_table_attempt() after
wait_for_locks() is called, to check that current_mutex is indeed reset.
With this assert in place, existing tests in (e.g.) mdl_sync.test will fail
without this patch.
2009-12-16 12:32:11 +01:00
Jon Olav Hauglid
b20a409c38 Backport of revno: 2617.71.1
Bug#42546 Backup: RESTORE fails, thinking it finds an existing table

The problem occurred when an MDL locking conflict happened for a non-existent
table between a CREATE and an INSERT statement. The code for CREATE
interpreted this lock conflict to mean that the table existed, 
which meant that the statement failed when it should not have.
The problem could occur for CREATE TABLE, CREATE TABLE LIKE and
ALTER TABLE RENAME.

This patch fixes the problem for CREATE TABLE and CREATE TABLE LIKE.
It is based on code backported from the mysql-6.1-fk tree written
by Dmitry Lenev. CREATE now uses normal open_and_lock_tables() code 
to acquire exclusive locks. This means that for the test case in the bug 
description, CREATE will wait until INSERT completes so that it can 
get the exclusive lock. This resolves the reported bug.

The patch also prohibits CREATE TABLE and CREATE TABLE LIKE under 
LOCK TABLES. Note that this is an incompatible change and must 
be reflected in the documentation. Affected test cases have been
updated.

mdl_sync.test contains tests for CREATE TABLE and CREATE TABLE LIKE.

Fixing the issue for ALTER TABLE RENAME is beyond the scope of this
patch. ALTER TABLE cannot be prohibited from working under LOCK TABLES
as this could seriously impact customers and a proper fix would require
a significant rewrite.
2009-12-10 11:53:20 +01:00
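Illustration (a sketch added for clarity, not part of the commit above; t1 and t2 are placeholders) of the newly prohibited case:

  LOCK TABLES t1 WRITE;
  CREATE TABLE t2 (i INT);   -- now rejected with an error while LOCK TABLES is active
  UNLOCK TABLES;
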
Konstantin Osipov
2e73ea7ea8 Backport of:
------------------------------------------------------------
revno: 2617.68.25
committer: Dmitry Lenev <dlenev@mysql.com>
branch nick: mysql-next-bg-pre2-2
timestamp: Wed 2009-09-16 18:26:50 +0400
message:
  Follow-up for one of pre-requisite patches for fixing bug #30977
  "Concurrent statement using stored function and DROP FUNCTION
  breaks SBR".

  Made the enum_mdl_namespace enum part of the MDL_key class and removed the MDL_
  prefix from the names of the enum members. In order to do the latter, the
  name of the PROCEDURE symbol was changed to PROCEDURE_SYM (otherwise the macro
  which was automatically generated for this symbol conflicted with the
  MDL_key::PROCEDURE enum member).
2009-12-10 11:21:38 +03:00
Konstantin Osipov
4f85df4b95 Backport of:
------------------------------------------------------------
revno: 2617.68.24
committer: Dmitry Lenev <dlenev@mysql.com>
branch nick: mysql-next-bg-pre2-2
timestamp: Wed 2009-09-16 17:25:29 +0400
message:
  Pre-requisite patch for fixing bug #30977 "Concurrent statement
  using stored function and DROP FUNCTION breaks SBR".

  Added an MDL_request for the stored routine as a member of Sroutine_hash_entry
  in order to be able to perform metadata locking for stored routines in
  the future (Sroutine_hash_entry is an equivalent of the TABLE_LIST class for
  stored routines).
(WL#4284, follow up fixes).
2009-12-09 19:11:26 +03:00
Jon Olav Hauglid
e1992b3026 Backport of revno: 2617.68.39
Bug #47249 assert in MDL_global_lock::is_lock_type_compatible

This assert could be triggered if LOCK TABLES were used to lock
both a table and a view that used the same table. The table would have
to be first WRITE locked and then READ locked. So "LOCK TABLES v1
WRITE, t1 READ" would eventually trigger the assert, "LOCK TABLES
v1 READ, t1 WRITE" would not. The reason is that the ordering of locks
in the interal representation made a difference when executing 
FLUSH TABLE on the table.

During FLUSH TABLE, a lock was upgraded to exclusive. If this lock
was of type MDL_SHARED and not MDL_SHARED_UPGRADABLE, an internal
counter in the MDL subsystem would get out of sync. This would happen
if the *last* mention of the table in LOCK TABLES was a READ lock.

The counter in question is the number of exclusive locks (active or intention).
This is used to make sure a global metadata lock is only taken when the
counter is zero (= no conflicts). The counter is increased when an
MDL_EXCLUSIVE or MDL_SHARED_UPGRADABLE lock is taken, but not when
upgrade_shared_lock_to_exclusive() is used to upgrade directly
from MDL_SHARED to MDL_EXCLUSIVE. 

This patch fixes the problem by searching for a TABLE instance locked
with MDL_SHARED_UPGRADABLE or MDL_EXCLUSIVE before calling
upgrade_shared_lock_to_exclusive(). The patch also adds an assert checking
that only MDL_SHARED_UPGRADABLE locks are upgraded to exclusive.

Test case added to lock_multi.test.
2009-12-09 10:44:01 +01:00
Jon Olav Hauglid
4592dd2d81 Backport of revno: 2617.69.40
A pre-requisite patch for Bug#30977 "Concurrent statement using 
stored function and DROP FUNCTION breaks SBR".

This patch changes the MDL API by introducing a namespace for
lock keys: MDL_TABLE for tables and views and MDL_PROCEDURE
for stored procedures and functions. The latter is needed for
the fix for Bug#30977.
2009-12-09 09:51:20 +01:00
Konstantin Osipov
a66a2608ae Backport of:
----------------------------------------------------------
revno: 2617.69.20
committer: Konstantin Osipov <kostja@sun.com>
branch nick: 5.4-4284-1-assert
timestamp: Thu 2009-08-13 18:29:55 +0400
message:
  WL#4284 "Transactional DDL locking"
  A review fix.
  Since the WL#4284 implementation separated MDL_request and MDL_ticket,
  MDL_request became a utility object necessary only to get a ticket.
  Store it by-value in TABLE_LIST with the intent to merge
  MDL_request::key with table_list->table_name and table_list->db
  in the future.
  Change the MDL subsystem to not require MDL_requests to
  stay around till close_thread_tables().
  Remove the list of requests from the MDL context.
  Requests for shared metadata locks acquired in open_tables()
  are only used as a list in recover_from_failed_open_table_attempt(),
  which calls mdl_context.wait_for_locks() for this list.
  To keep such a list for recover_from_failed_open_table_attempt(),
  introduce a context class (Open_table_context) that collects
  all requests.
  A lot of minor cleanups and simplifications that became possible
  with this change.
2009-12-08 12:57:07 +03:00
Konstantin Osipov
0b39c189ba Backport of revno ## 2617.31.1, 2617.31.3, 2617.31.4, 2617.31.5,
2617.31.12, 2617.31.15, 2617.31.15, 2617.31.16, 2617.43.1
- initial changeset that introduced the fix for 
Bug#989 and follow up fixes for all test suite failures
introduced in the initial changeset. 
------------------------------------------------------------
revno: 2617.31.1
committer: Davi Arnaut <Davi.Arnaut@Sun.COM>
branch nick: 4284-6.0
timestamp: Fri 2009-03-06 19:17:00 -0300
message:
Bug#989: If DROP TABLE while there's an active transaction, wrong binlog order
WL#4284: Transactional DDL locking

Currently the MySQL server does not keep metadata locks on
schema objects for the duration of a transaction, thus failing
to guarantee the integrity of the schema objects being used
during the transaction and to protect them from concurrent
DDL operations. This also poses a problem for replication as
a DDL operation might be replicated even though there are
active transactions using the object being modified.

The solution is to defer the release of metadata locks until
an active transaction is either committed or rolled back. This
prevents other statements from modifying the table for the
entire duration of the transaction. This provides commitment
ordering for guaranteeing serializability across multiple
transactions.

- Incompatible change:

If MySQL's metadata locking system encounters a lock conflict,
the usual scheme is to use the try-and-back-off technique to
avoid deadlocks -- this scheme consists of releasing all locks
and trying to acquire them all in one go.

But in a transactional context this algorithm can't be utilized
as it's not possible to release locks acquired during the course
of the transaction without breaking the transaction commitments.
To avoid deadlocks in this case, ER_LOCK_DEADLOCK will be
returned if a lock conflict is encountered during a transaction.

Let's consider an example:

A transaction has two statements that modify table t1, then table
t2, and then commits. The first statement of the transaction will
acquire a shared metadata lock on table t1, and it will be kept
until COMMIT to ensure serializability.

At the moment when the second statement attempts to acquire a
shared metadata lock on t2, a concurrent ALTER or DROP statement
might have locked t2 exclusively. The prescription of the current
locking protocol is that the acquirer of the shared lock backs off
-- gives up all his current locks and retries. This implies that
the entire multi-statement transaction has to be rolled back.

- Incompatible change:

FLUSH commands such as FLUSH PRIVILEGES and FLUSH TABLES WITH READ
LOCK won't cause locked tables to be implicitly unlocked anymore.
2009-12-05 02:02:48 +03:00
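Illustration (a sketch added for clarity, not part of the commit above; t1 and t2 are placeholders) of the example transaction from the message:

  connection 1:                      connection 2:
  BEGIN;
  UPDATE t1 SET i = 1;
  -- holds a shared metadata lock on t1 until COMMIT
                                     ALTER TABLE t2 ADD COLUMN c INT;
                                     -- holds an exclusive metadata lock on t2
                                     -- while it runs
  UPDATE t2 SET i = 2;
  -- hits the conflicting lock; since the transaction cannot back off without
  -- losing its accumulated locks, ER_LOCK_DEADLOCK is returned
  ROLLBACK;
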
Konstantin Osipov
fea5e79941 Backport of (WL#4284):
------------------------------------------------------------
revno: 2617.23.23
committer: Davi Arnaut <Davi.Arnaut@Sun.COM>
branch nick: mysql-6.0-runtime
timestamp: Thu 2009-03-05 18:39:58 -0300
message:
Fix for broken build: SHARED is defined by Solaris headers.
2009-12-04 02:55:26 +03:00
Konstantin Osipov
a3a23ec4d3 Backport of:
----------------------------------------------------------
revno: 2617.23.20
committer: Konstantin Osipov <kostja@sun.com>
branch nick: mysql-6.0-runtime
timestamp: Wed 2009-03-04 16:31:31 +0300
message:
  WL#4284 "Transactional DDL locking"
  Review comments: "Objectify" the MDL API.
  MDL_request and MDL_context still need manual construction and
  destruction, since they are used in an environment that is averse
  to constructors/destructors.
2009-12-04 02:52:05 +03:00
Konstantin Osipov
195adcd201 Backport of:
----------------------------------------------------------
revno: 2617.23.19
committer: Konstantin Osipov <kostja@sun.com>
branch nick: mysql-6.0-runtime
timestamp: Tue 2009-03-03 01:20:44 +0300
message:
  Metadata locking: realign comments. No semantic changes,
  only enforce a bit of the coding style.
This is a review fix for WL#4284 "Transactional DDL locking".
2009-12-04 02:34:19 +03:00