Bug#46527 COMMIT AND CHAIN RELEASE does not make sense
Bug#53343 completion_type=1, COMMIT/ROLLBACK AND CHAIN don't
preserve the isolation level
Bug#53346 completion_type has strange effect in a stored
procedure/prepared statement
Added test cases to verify the expected behaviour of:
SET SESSION TRANSACTION ISOLATION LEVEL,
SET TRANSACTION ISOLATION LEVEL,
@@completion_type,
COMMIT AND CHAIN,
ROLLBACK AND CHAIN
...and some combinations of the above (a brief sketch follows).
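A minimal sketch of the kind of scenario the new tests cover (hypothetical table t1; the exact statements and expected results are in the added test files):
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- applies to the next transaction only
  START TRANSACTION;
  SELECT * FROM t1;
  COMMIT AND CHAIN;   -- per Bug#53343, the chained transaction must keep SERIALIZABLE
  SELECT * FROM t1;
  COMMIT;             -- once the chain ends, later transactions use the session level again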
This crash happened if a table was listed twice in a DROP TABLE statement,
and the statement was executed while in LOCK TABLES mode. Since the two
elements of the table list were identical, they were assigned the same TABLE object.
During processing of the first table element, the TABLE instance was destroyed
and the second table list element was left with a dangling reference.
When this reference was later accessed, the server crashed.
Listing the same table twice in DROP TABLE should give an ER_NONUNIQ_TABLE
error. However, this did not happen because the check for unique table names
was skipped: the lock type for the table list elements was set to TL_IGNORE.
Previously TL_UNLOCK was used and the uniqueness check was performed.
This bug was a regression introduced by a pre-requisite patch for
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ...
REBUILD PARTITION". The regression only existed in an internal team
tree and never in any released code.
This patch reverts DROP TABLE (and DROP VIEW) to the old behavior of
using TL_UNLOCK locks. Test case added to drop.test.
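A small sketch of the added coverage (hypothetical table name t1):
  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE;
  DROP TABLE t1, t1;   -- must fail with ER_NONUNIQ_TABLE instead of crashing
  UNLOCK TABLES;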
locks for DML statements and changes the way MDL locks
are acquired/granted in the contended case.
Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the
open_tables() process, we now wait for the lock to be released
without releasing any previously acquired locks. If the
conflicting lock goes away, we resume opening tables. If
waiting leads to a deadlock, we try to resolve it by backing
off and restarting open_tables() immediately.
As a result, both waiting for the possibility to acquire a
metadata lock and actually acquiring it now always happen
within the same MDL API call. This has made it possible to
turn releasing a lock and granting it to the most appropriate
pending request into an atomic operation.
Thanks to this, it became possible to wake up during release
of a lock only those waiters whose requests can be satisfied
at the moment, as well as to wake up only one waiter when
granting its request would prevent all other requests from
being satisfied. This solves the thundering herd problem
which occurred when we released some lock and woke up many
waiters for SNRW or X locks (this was the issue in bug#52289
"performance regression for MyISAM in sysbench OLTP_RW test").
This also made it possible to implement fairer (FIFO)
scheduling among waiters with the same priority.
It also opens the door for introducing new types of metadata
lock requests, such as a low-priority SNRW lock, which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.
Notice that after this change the server can sometimes report
an ER_LOCK_DEADLOCK error in cases in which it did not before.
In particular, we will always report this error if waiting for
a conflicting lock happens in the middle of a transaction
and results in a deadlock. Before this patch the error was
not reported if the deadlock could have been resolved by
backing off all metadata locks acquired by the current statement.
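A rough two-session sketch of a case where ER_LOCK_DEADLOCK can now be reported (hypothetical table t1; not taken from any specific test):
  -- session 1
  BEGIN;
  SELECT * FROM t1;                  -- holds a shared metadata lock on t1 until commit
  -- session 2
  ALTER TABLE t1 ADD COLUMN b INT;   -- waits for a stronger metadata lock on t1
  -- session 1
  INSERT INTO t1 VALUES (1);         -- may now fail with ER_LOCK_DEADLOCK instead of the
                                     -- deadlock being resolved by backing off this statement's locks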
Conflicts:
Text conflict in mysql-test/r/archive.result
Contents conflict in mysql-test/r/innodb_bug38231.result
Text conflict in mysql-test/r/mdl_sync.result
Text conflict in mysql-test/suite/binlog/t/disabled.def
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_binlog_format_errors.result
Text conflict in mysql-test/t/archive.test
Contents conflict in mysql-test/t/innodb_bug38231.test
Text conflict in mysql-test/t/mdl_sync.test
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_show.cc
Text conflict in sql/table.cc
Text conflict in sql/table.h
Problems:
- regression (compared to version 5.1) in metadata for BLOB types
- inconsistency between length metadata in server and embedded for BLOB types
- wrong max_length calculation in items derived from BLOB columns
@ libmysqld/lib_sql.cc
Calculating length metadata in the embedded server similarly to the
server version, using the new function char_to_byte_length_safe().
@ mysql-test/r/ctype_utf16.result
Adding tests
@ mysql-test/r/ctype_utf32.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ mysql-test/r/ctype_utf8mb4.result
Adding tests
@ mysql-test/t/ctype_utf16.test
Adding tests
@ mysql-test/t/ctype_utf32.test
Adding tests
@ mysql-test/t/ctype_utf8.test
Adding tests
@ mysql-test/t/ctype_utf8mb4.test
Adding tests
@ sql/field.cc
Overriding char_length() for Field_blob:
unlike in the generic Item::char_length(), we don't
divide by mbmaxlen for BLOBs.
@ sql/field.h
- Making Field::char_length() virtual
- Adding prototype for Field_blob::char_length()
@ sql/item.h
- Adding new helper function char_to_byte_length_safe()
- Using new function
@ sql/protocol.cc
Using new function char_to_byte_length_safe().
modified:
libmysqld/lib_sql.cc
mysql-test/r/ctype_utf16.result
mysql-test/r/ctype_utf32.result
mysql-test/r/ctype_utf8.result
mysql-test/r/ctype_utf8mb4.result
mysql-test/t/ctype_utf16.test
mysql-test/t/ctype_utf32.test
mysql-test/t/ctype_utf8.test
mysql-test/t/ctype_utf8mb4.test
sql/field.cc
sql/field.h
sql/item.h
sql/protocol.cc
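A small mysqltest-style sketch of how the reported length metadata can be checked (hypothetical table; --enable_metadata makes the test client print column metadata with each result set, so the lengths can be compared between the server and the embedded server):
  --enable_metadata
  CREATE TABLE t1 (t TINYTEXT CHARACTER SET utf8, b BLOB);
  SELECT t, b, CONCAT(t, '') FROM t1;
  DROP TABLE t1;
  --disable_metadata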
Due to a BZR bug, that merge was done by the following command:
bzr merge -r 'revid:tor.didriksen@sun.com-20100527074248-6qtv0p1ugy6o1hjo..' <mysql-trunk-bugfixing path>
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080324111246-00461
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080414125521-40866
BUG#35274 - merge table doesn't need any base tables, gives
error 124 when key accessed
SELECT queries that use an index against a merge table with an empty
underlying table list may return the error "Got error 124 from
storage engine".
The problem was that the wrong error was being returned.
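A sketch of the failing shape (hypothetical table name; a MERGE table created without any underlying tables):
  CREATE TABLE m1 (a INT, KEY(a)) ENGINE=MERGE;
  SELECT a FROM m1 WHERE a = 1;   -- indexed read; used to fail with "Got error 124"
  DROP TABLE m1;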
when it should use index
Sometimes the LEFT/RIGHT JOIN with an empty table caused an
unnecessary filesort.
Sample query, where t1.i1 is indexed and t3 is empty:
SELECT t1.*, t2.* FROM t1 JOIN t2 ON t1.i1 = t2.i2
LEFT JOIN t3 ON t2.i2 = t3.i3
ORDER BY t1.i1 LIMIT 5;
The server erroneously used an item of an empty outer-joined
table as the common constant of an Item_equal (multiple-equality
expression).
Due to the fix for bug 16590, the constant status of such
an item was propagated to the st_table::const_key_parts
map bits related to the key parts of the other Item_equal
arguments (which are obviously not constant in our case).
Since the test_if_skip_sort_order function skips constant
prefixes of the keys being tested, this caused available
indexes to be ignored, because some prefixes were marked as
constant by mistake.
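A sketch of how the effect can be checked with EXPLAIN, using the schema implied by the sample query above (t1.i1 indexed, t3 empty; the column types are assumptions):
  CREATE TABLE t1 (i1 INT, KEY(i1));
  CREATE TABLE t2 (i2 INT);
  CREATE TABLE t3 (i3 INT);
  EXPLAIN SELECT t1.*, t2.* FROM t1 JOIN t2 ON t1.i1 = t2.i2
          LEFT JOIN t3 ON t2.i2 = t3.i3
          ORDER BY t1.i1 LIMIT 5;
  -- with the fix, the ORDER BY should be able to use the index on t1.i1 instead of a filesort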
The problem is that if a NULL is stored in an Item_cache_decimal object,
the associated my_decimal object is not initialized. However, it is still
accessed when val_int() is called. The fix is to check for null_value
within val_int(), and return without accessing the my_decimal object when
the cached value is NULL.
Bug#52122 reports the same issue for val_real(), and this patch also includes
fixes for val_real() and val_str() and corresponding test cases from that
bug report.
Also, NULL is returned from val_decimal() when the value is NULL. This
prevents callers from accessing an uninitialized my_decimal object.
Made similar changes to all other Item_cache classes. Now all val_*
methods should return a well-defined value when the actual value is NULL.
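A purely illustrative sketch of the kind of query that caches a NULL decimal value (hypothetical table; not the repro from the bug reports):
  CREATE TABLE t1 (d DECIMAL(10,2));
  INSERT INTO t1 VALUES (NULL);
  SELECT 1 FROM t1 WHERE d = (SELECT MAX(d) FROM t1);
  -- the cached NULL subquery result must not be read back as an uninitialized my_decimal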
is allowed on views (not documented, broken)".
Remove support of ALTER TABLE RENAME for views as:
a) this feature was not documented,
b) it does not add any compatibility with other databases,
c) its implementation doesn't follow the metadata locking
protocol, accessing the .FRM without holding any
metadata lock,
d) its implementation complicates ALTER TABLE's code
by introducing yet another separate branch into it.
After this patch a view can be renamed using the
documented way, the RENAME TABLE statement.
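For illustration (hypothetical view name):
  CREATE VIEW v1 AS SELECT 1;
  -- ALTER TABLE v1 RENAME TO v2;   -- no longer accepted for views after this patch
  RENAME TABLE v1 TO v2;            -- the documented way to rename a view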
We should avoid any assignments to SHARE fields, as
this is a shared structure and assignments may
affect other threads. To avoid this,
a copy of the SHARE struct is created and
stored into the TABLE struct, which is
used in get_schema_coulumns_record later.
The problem was that mdl_sync.test was failing sporadically
due to the fact that part of the test didn't take into account
the effects of MyISAM's concurrent insert.
This patch solves the problem by making the test case robust
against concurrent insert.
SELECT and ALTER TABLE ... REBUILD PARTITION".
ALTER TABLE on an InnoDB table (including partitioned tables)
acquired exclusive locks on the rows of the table being altered.
When there was a concurrent transaction which did
locking reads from this table, this sometimes led to a
deadlock which was detected neither by the MDL subsystem nor
by the InnoDB engine (and was reported only after exceeding
innodb_lock_wait_timeout).
This problem stemmed from the fact that ALTER TABLE acquired
a TL_WRITE_ALLOW_READ lock on the table being altered. This lock
was interpreted as a write lock, and thus the
handler::external_lock() method was called with F_WRLCK as an
argument for the table being altered. As a result, the InnoDB
engine treated ALTER TABLE as an operation which is going to
change data and acquired LOCK_X locks on the rows being read
from the old version of the table.
In the case when there was a transaction which had already
acquired an SR metadata lock on the table and some LOCK_S locks
on its rows (e.g. by using it in a subquery of a DML statement),
a concurrent ALTER TABLE was blocked at the moment when it tried
to acquire a LOCK_X lock before reading one of these rows.
The transaction's attempt to acquire an SW metadata lock on the
table being altered then led to a deadlock, since it had to wait
for ALTER TABLE to release its SNW lock. This deadlock was not
detected and got resolved only after the timeout expired,
because the waiting happened in two different subsystems.
Similar deadlocks could have occurred in other situations.
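A rough two-session sketch of the pre-patch deadlock described above (hypothetical InnoDB tables t1 and t2; a plain table rebuild is used for simplicity, and it is assumed that INSERT ... SELECT takes shared row locks on the source table, as with the default binlog settings):
  -- session 1: the SELECT part takes LOCK_S row locks and an SR metadata lock on t1
  BEGIN;
  INSERT INTO t2 SELECT a FROM t1;
  -- session 2: before the patch, treated as a writer and requesting LOCK_X row locks on t1
  ALTER TABLE t1 ENGINE=InnoDB;     -- blocks inside InnoDB on session 1's LOCK_S row locks
  -- session 1: needs an SW metadata lock on t1, which conflicts with ALTER's SNW lock
  UPDATE t1 SET a = a + 1;          -- deadlock spanning the two subsystems, previously undetected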
This patch tries to solve the problem by changing the ALTER TABLE
implementation to use a TL_READ_NO_INSERT lock instead of
TL_WRITE_ALLOW_READ. After this change handler::external_lock()
is called with F_RDLCK as an argument and the InnoDB engine
correctly interprets ALTER TABLE as an operation which only
reads data from the original version of the table. Thanks to this,
ALTER TABLE acquires only LOCK_S locks on the rows it reads.
This, in turn, makes the inter-subsystem deadlocks go
away, as all potential lock conflicts and thus deadlocks will
be limited to the metadata locking subsystem:
- When ALTER TABLE reads rows from the table being altered, it
can't encounter any locks which conflict with its LOCK_S row
locks. There should be no concurrent transactions holding
LOCK_X row locks: such a transaction would have had to acquire
an SW metadata lock on the table first, which would have
conflicted with ALTER's SNW lock.
- Vice versa, when DML which runs concurrently with ALTER
TABLE tries to lock a row, it should be requesting only a LOCK_S
lock, which is compatible with the locks acquired by ALTER,
as otherwise such DML would have to own an SW metadata lock on the
table, which would be incompatible with ALTER's SNW lock.
can now view the content of the InnoDB system tables through the following
information schema tables:
information_schema.INNODB_SYS_TABLES
information_schema.INNODB_SYS_INDEXES
information_schema.INNODB_SYS_COLUMNS
information_schema.INNODB_SYS_FIELDS
information_schema.INNODB_SYS_FOREIGN
information_schema.INNODB_SYS_FOREIGN_COLS
information_schema.INNODB_SYS_TABLESTATS
rb://330 Approved by Marko
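For example, the table and index definitions known to the InnoDB data dictionary can now be inspected directly:
  SELECT * FROM information_schema.INNODB_SYS_TABLES;
  SELECT * FROM information_schema.INNODB_SYS_INDEXES;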
The problem was that TRUNCATE TABLE didn't take an exclusive
lock on a table if it resorted to truncating via a delete of
all rows in the table. Specifically for InnoDB tables, this
could break proper isolation, as InnoDB ends up aborting some
granted locks when truncating a table.
The solution is to take an exclusive metadata lock before
TRUNCATE TABLE can proceed. This guarantees that no other
transaction is using the table.
Incompatible change: Truncate via delete no longer fails
if sql_safe_updates is activated (this was an undocumented
side effect).
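A brief two-session sketch of the new behaviour (hypothetical table t1):
  -- session 1
  BEGIN;
  SELECT * FROM t1;      -- holds a shared metadata lock on t1 until commit
  -- session 2
  TRUNCATE TABLE t1;     -- now waits for an exclusive metadata lock instead of proceeding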
data directory name command
The check_db_name function has been modified to validate the tails of
#mysql50#-prefixed database names for compliance with MySQL 5.0
database name encoding rules (the check_table_name function call
has been reused).
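For illustration, with a hypothetical 5.0-style encoded name (the part after the #mysql50# prefix is what is now validated):
  ALTER DATABASE `#mysql50#a-b-c` UPGRADE DATA DIRECTORY NAME;
  -- names whose tail does not comply with the encoding rules are now rejected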
FOR UPDATE is causing a lock".
This patch tries to address problems which were exposed
during the backporting of the original patch to the 5.1 tree.
- It ensures that we don't change the locking behavior of simple
SELECT statements on InnoDB tables when they are executed
under LOCK TABLES ... READ and with @@innodb_table_locks=0
(see the sketch after this list).
Also, we no longer pass the TL_READ_DEFAULT/TL_WRITE_DEFAULT
lock types, which are supposed to be parser-only, to the
handler::start_stmt() method.
- It makes the check_/no_concurrent_insert.inc auxiliary scripts
more robust against changes in the test cases that use them,
and also ensures that they don't unnecessarily change the
environment of the caller.
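A sketch of the behaviour that is preserved (hypothetical InnoDB table t1):
  SET @@innodb_table_locks = 0;
  LOCK TABLES t1 READ;
  SELECT * FROM t1;      -- the locking behaviour of a simple SELECT is unchanged in this mode
  UNLOCK TABLES;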
The server crashes on 64-bit Linux with a 'double free or corruption'
message; on 32-bit, mysql-test-run silently fails at the bootstrap
stage. The problem is that FreeState() is called twice
for the init_settings struct in the _db_end_ function.
The fix is to remove the superfluous FreeState() call.
Additional fix:
fixed a discrepancy in the result file when the
debug & valgrind options are enabled for MTR.
The problem was that OPTIMIZE TABLE was allowed to run on a table
in use by a transaction in a different connection. This caused
repeatable read to break.
This bug was fixed by the introduction of metadata locking, WL#4284.
OPTIMIZE TABLE will now be blocked until the transaction using the
table has ended.
This patch contains a regression test added to innodb_mysql_lock.test
and no code changes.
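The added test checks a scenario along these lines (hypothetical table t1):
  -- connection 1
  BEGIN;
  SELECT * FROM t1;      -- keeps a metadata lock on t1 until the transaction ends
  -- connection 2
  OPTIMIZE TABLE t1;     -- now blocks until connection 1 commits or rolls back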
Bug #50087 Interval arithmetic for Event_queue_element is not portable.
Subtraction of two unsigned months yielded a (very large) positive value.
Conversion of this to a signed value was not necessarily well defined.
Solution: do the subtraction on signed values.
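An SQL analogue of the underlying problem (the behaviour of the first statement depends on server version and sql_mode):
  SELECT CAST(2 AS UNSIGNED) - CAST(5 AS UNSIGNED);   -- out-of-range error or a huge wrapped value
  SELECT CAST(2 AS SIGNED)   - CAST(5 AS SIGNED);     -- -3, as intended for the interval arithmetic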
ha_myisam::index_first(uchar*)") at assert.c:81
Single-table DELETE crash/assertion similar to single-table
UPDATE bug 14272.
Same resolution as for bug 14272:
Don't run index scan when we should use quick select.
This could cause failures because there are table handlers (like federated)
that support quick select scanning but do not support index scanning.
for ALTER TABLE, LOAD DATA).
ROW_COUNT is now assigned according to the following rules (a sketch follows the list):
- In my_ok():
- for DML statements: to the number of affected rows;
- for DDL statements: to 0.
- In my_eof(): to -1 to indicate that there was a result set.
We derive this semantics from the JDBC specification, where int
java.sql.Statement.getUpdateCount() is defined to (sic) "return the
current result as an update count; if the result is a ResultSet
object or there are no more results, -1 is returned".
- In my_error(): to -1 to be compatible with the MySQL C API and
MySQL ODBC driver.
- For SIGNAL statements: to 0 per WL#2110 specification. Zero is used
since that's the "default" value of ROW_COUNT in the diagnostics area.
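A sketch of these rules as seen from SQL (hypothetical tables):
  INSERT INTO t1 VALUES (1), (2);   -- DML
  SELECT ROW_COUNT();               -- 2
  CREATE TABLE t2 (a INT);          -- DDL
  SELECT ROW_COUNT();               -- 0
  SELECT * FROM t1;                 -- statement producing a result set
  SELECT ROW_COUNT();               -- -1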
NULL from outer join query
Problem: when optimising MIN/MAX() queries without a GROUP BY clause
by replacing the aggregate expression with a constant, we may set it
to NULL, disregarding the fact that there may be outer joins involved.
Fix: don't replace MIN/MAX() with NULL if there are outer joins.
Note: the fix itself is just
- if (!count)
+ if (!count && !outer_tables)
set to NULL
The rest of the patch eliminates repeated code, to improve speed
and make the code easier to maintain.
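An illustrative shape (hypothetical tables; with an outer join present the optimizer no longer substitutes a NULL constant for the aggregate):
  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  SELECT MIN(t1.a) FROM t1 LEFT JOIN t2 ON t1.a = t2.b;   -- evaluated normally, returns 1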
update statements
Only SELECT statements report any examined rows in the slow
log. Slow UPDATE, DELETE and INSERT statements report 0 rows
examined, unless the statement has a condition that includes a
SELECT substatement.
This patch adds counting of examined rows for the UPDATE and
DELETE statements. An INSERT ... VALUES statement will still
not report any rows as examined.
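A sketch of how the change shows up (hypothetical table; setting long_query_time to 0 makes every statement go to the slow log):
  SET GLOBAL slow_query_log = 1;
  SET SESSION long_query_time = 0;
  UPDATE t1 SET a = a + 1 WHERE b < 10;   -- Rows_examined is now counted for UPDATE/DELETE
  INSERT INTO t1 (a) VALUES (1);          -- a plain INSERT ... VALUES still reports 0 rows examined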