Bug #22909 Using CREATE ... LIKE is possible to create
field with invalid default value
Bug #35935 CREATE TABLE under LOCK TABLES ignores FLUSH
TABLES WITH READ LOCK
Bug #37371 CREATE TABLE LIKE merge loses UNION parameter
These bugs were originally fixed in the 6.1-fk tree and the fixes
were backported as part of the fix for Bug #42546 "Backup: RESTORE
fails, thinking it finds an existing table". This patch backports
test coverage missing in the original backport. The patch contains
no code changes.
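As a hedged illustration of the Bug #37371 scenario (the table names t1/m1/m2 are
made up and the exact statements in the backported tests may differ):
  # Illustrative only: CREATE TABLE ... LIKE on a MERGE table and its UNION list.
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  CREATE TABLE m2 LIKE m1;
  SHOW CREATE TABLE m2;   # before the fix the copy reportedly lost the UNION=(t1) clause
  DROP TABLE m2, m1, t1;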
Bug#46527 COMMIT AND CHAIN RELEASE does not make sense
Bug#53343 completion_type=1, COMMIT/ROLLBACK AND CHAIN don't
preserve the isolation level
Bug#53346 completion_type has strange effect in a stored
procedure/prepared statement
Added test cases to verify the expected behaviour of:
SET SESSION TRANSACTION ISOLATION LEVEL,
SET TRANSACTION ISOLATION LEVEL,
@@completion_type,
COMMIT AND CHAIN,
ROLLBACK AND CHAIN
... and some combinations of the above.
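As a hedged sketch of one such combination (table name t1 is made up; the exact
test statements may differ), COMMIT AND CHAIN is expected to start a new
transaction that keeps the isolation level of the transaction just committed:
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;   # applies to the next transaction only
  START TRANSACTION;
  INSERT INTO t1 VALUES (1);
  COMMIT AND CHAIN;   # the chained transaction is expected to keep SERIALIZABLE
  INSERT INTO t1 VALUES (2);
  COMMIT;             # afterwards the session level (REPEATABLE READ) applies again
  DROP TABLE t1;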
This crash happened if a table was listed twice in a DROP TABLE statement,
and the statement was executed while in LOCK TABLES mode. Since the two
elements of the table list were identical, they were assigned the same TABLE object.
During processing of the first table list element, the TABLE instance was destroyed
and the second table list element was left with a dangling reference.
When this reference was later accessed, the server crashed.
Listing the same table twice in DROP TABLE should give an ER_NONUNIQ_TABLE
error. However, this did not happen because the check for unique table names was
skipped, since the lock type for the table list elements was set to TL_IGNORE.
Previously TL_UNLOCK was used and the unique check was performed.
This bug was a regression introduced by a pre-requisite patch for
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ...
REBUILD PARTITION". The regression only existed in an internal team
tree and never in any released code.
This patch reverts DROP TABLE (and DROP VIEW) to the old behavior of
using TL_UNLOCK locks. Test case added to drop.test.
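A hedged sketch of the crashing scenario (table name t1 is illustrative, not
necessarily the exact statements added to drop.test):
  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE;
  DROP TABLE t1, t1;     # expected: ER_NONUNIQ_TABLE; the regression crashed the server here
  UNLOCK TABLES;
  DROP TABLE IF EXISTS t1;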
This patch changes how metadata locks are taken for DML
statements and the way MDL locks
are acquired/granted in the contended case.
Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the open_tables()
process, we now wait for the lock to be released without releasing
any previously acquired locks. If the conflicting lock goes away,
we resume opening tables. If waiting leads to a deadlock, we
try to resolve it by backing off and restarting open_tables()
immediately.
As a result, both waiting for the possibility to acquire a
metadata lock and the acquisition itself now always happen
within the same MDL API call. This has made it possible to make
the release of a lock and its granting to the most appropriate
pending request an atomic operation.
Thanks to this, it became possible to wake up, when releasing
a lock, only those waiters whose requests can be satisfied
at that moment, as well as to wake up only one waiter in cases
where granting its request would prevent all other requests
from being satisfied. This solves the thundering herd problem
which occurred when we released some lock and
woke up many waiters for SNRW or X locks (this was the issue
in bug#52289 "performance regression for MyISAM in sysbench
OLTP_RW test").
This also allowed implementing fairer (FIFO) scheduling
among waiters with the same priority.
It also opens the door for introducing new types of requests
for metadata locks, such as a low-priority SNRW lock, which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.
Notice that after this change the server can sometimes report an
ER_LOCK_DEADLOCK error in cases in which it did not before.
In particular, we will always report this error if waiting for a
conflicting lock happens in the middle of a transaction
and results in a deadlock. Before this patch the error was
not reported if the deadlock could be resolved by backing
off all metadata locks acquired by the current statement.
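A hedged sketch of such a case (two sessions; the table name is made up): a
shared metadata lock taken by an earlier statement of the transaction cannot be
backed off, so the deadlock is now reported.
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  # session 1:
  BEGIN;
  SELECT * FROM t1;                 # takes a shared metadata lock on t1, held until commit
  # session 2:
  ALTER TABLE t1 ADD COLUMN b INT;  # blocks, waiting for an exclusive metadata lock on t1
  # session 1:
  INSERT INTO t1 VALUES (1);        # needs a write metadata lock and waits for the ALTER;
                                    # this wait cycle is now reported as ER_LOCK_DEADLOCK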
Conflicts:
Text conflict in mysql-test/r/archive.result
Contents conflict in mysql-test/r/innodb_bug38231.result
Text conflict in mysql-test/r/mdl_sync.result
Text conflict in mysql-test/suite/binlog/t/disabled.def
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_binlog_format_errors.result
Text conflict in mysql-test/t/archive.test
Contents conflict in mysql-test/t/innodb_bug38231.test
Text conflict in mysql-test/t/mdl_sync.test
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_show.cc
Text conflict in sql/table.cc
Text conflict in sql/table.h
Problems:
- regression (compared to version 5.1) in metadata for BLOB types
- inconsistency between length metadata in the server and the embedded server for BLOB types
- wrong max_length calculation in items derived from BLOB columns
(an illustrative metadata check is sketched after the file list below)
@ libmysqld/lib_sql.cc
Calculating length metadata in the embedded server similarly to the server
version, using the new function char_to_byte_length_safe().
@ mysql-test/r/ctype_utf16.result
Adding tests
@ mysql-test/r/ctype_utf32.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ mysql-test/r/ctype_utf8mb4.result
Adding tests
@ mysql-test/t/ctype_utf16.test
Adding tests
@ mysql-test/t/ctype_utf32.test
Adding tests
@ mysql-test/t/ctype_utf8.test
Adding tests
@ mysql-test/t/ctype_utf8mb4.test
Adding tests
@ sql/field.cc
Overriding char_length() for Field_blob:
unlike the generic Item::char_length(), we don't
divide by mbmaxlen for BLOBs.
@ sql/field.h
- Making Field::char_length() virtual
- Adding prototype for Field_blob::char_length()
@ sql/item.h
- Adding new helper function char_to_byte_length_safe()
- Using new function
@ sql/protocol.cc
Using new function char_to_byte_length_safe().
modified:
libmysqld/lib_sql.cc
mysql-test/r/ctype_utf16.result
mysql-test/r/ctype_utf32.result
mysql-test/r/ctype_utf8.result
mysql-test/r/ctype_utf8mb4.result
mysql-test/t/ctype_utf16.test
mysql-test/t/ctype_utf32.test
mysql-test/t/ctype_utf8.test
mysql-test/t/ctype_utf8mb4.test
sql/field.cc
sql/field.h
sql/item.h
sql/protocol.cc
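A hedged, mysqltest-style sketch of the kind of check the added ctype tests
perform (the actual test files may use different statements); --enable_metadata
makes mysqltest print column metadata such as max_length:
  CREATE TABLE t1 (b TINYTEXT CHARACTER SET utf8);
  --enable_metadata
  # For BLOB/TEXT columns max_length is a byte length and must not be divided
  # by mbmaxlen; server and embedded builds are expected to report the same value.
  SELECT b, CONCAT(b) FROM t1;
  --disable_metadata
  DROP TABLE t1;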
innodb.innodb [ fail ]
Test ended at 2010-06-02 15:04:06
CURRENT_TEST: innodb.innodb
--- /usr/w/mysql-trunk-innodb/mysql-test/suite/innodb/r/innodb.result 2010-05-23 23:10:26.576407000 +0300
+++ /usr/w/mysql-trunk-innodb/mysql-test/suite/innodb/r/innodb.reject 2010-06-02 15:04:05.000000000 +0300
@@ -2648,7 +2648,7 @@
create table t4 (s1 char(2) binary,primary key (s1)) engine=innodb;
insert into t1 values (0x41),(0x4120),(0x4100);
insert into t2 values (0x41),(0x4120),(0x4100);
-ERROR 23000: Duplicate entry 'A\x00' for key 'PRIMARY'
+ERROR 23000: Duplicate entry 'A' for key 'PRIMARY'
insert into t2 values (0x41),(0x4120);
The change in the printout was introduced in:
------------------------------------------------------------
revno: 3008.6.2
revision-id: sergey.glukhov@sun.com-20100527160143-57nas8nplzpj26dz
parent: sergey.glukhov@sun.com-20100527155443-24vqi9o8rpnkyci7
committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
branch nick: mysql-trunk-bugfixing
timestamp: Thu 2010-05-27 20:01:43 +0400
message:
Bug#52430 Incorrect key in the error message for duplicate key error involving BINARY type
For BINARY(N) strip trailing zeroes to make the error message nice-looking
@ mysql-test/r/errors.result
test case
@ mysql-test/r/type_binary.result
result fix
@ mysql-test/t/errors.test
test case
@ sql/key.cc
For BINARY(N) strip trailing zeroes to make the error message nice-looking
and its author (Sergey) did not notice the test failure because that test
had been disabled in his tree.
------------------------------------------------------------
revno: 3495
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Wed 2010-06-02 13:37:14 +0300
message:
Bug#53674: InnoDB: Error: unlock row could not find a 4 mode lock on the record
In semi-consistent read, only unlock freshly locked non-matching records.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
enum db_err: Add DB_SUCCESS_LOCKED_REC for indicating a successful
operation where a record lock was created.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
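A hedged two-session sketch of the semi-consistent read case this affects
(table, values and isolation level are chosen for illustration only):
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1,1),(2,2),(3,3);
  # session 1:
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  BEGIN;
  UPDATE t1 SET b = 10 WHERE a = 1;   # holds an X lock on the row a=1
  # session 2:
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  BEGIN;
  # Semi-consistent read: the row a=1 (locked by session 1) is evaluated against
  # its last committed version and skipped without waiting; the row a=2 is locked,
  # found not to match, and only that freshly acquired lock must be released again.
  UPDATE t1 SET b = 30 WHERE b = 3;
  COMMIT;
  # session 1:
  COMMIT;
  DROP TABLE t1;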
errors
In the fix of BUG#39934 in 5.1-rep+3, errors are generated when
binlog_format=row and a statement modifies a table restricted to
statement-logging (ER_BINLOG_ROW_MODE_AND_STMT_ENGINE); or if
binlog_format=statement and a statement modifies a table restricted to
row-logging (ER_BINLOG_STMT_MODE_AND_ROW_ENGINE).
However, some DDL statements that lock tables (e.g. ALTER TABLE,
CREATE INDEX and CREATE TRIGGER) were causing spurious errors,
although no rows could be written to the binary log by them.
To fix the problem, we tagged statements that may write rows
to the binary log, so that the error/warning messages are
only generated when the appropriate conditions hold and rows
might actually be changed.
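A hedged sketch of the spurious-error case, using NDB as an example of an engine
limited to row-based logging (as in rpl_ndb_binlog_format_errors); the statements
and outcomes are illustrative and assume binary logging is enabled:
  SET SESSION binlog_format = STATEMENT;
  CREATE TABLE t1 (a INT, b INT) ENGINE=NDB;
  INSERT INTO t1 VALUES (1,1);   # DML: ER_BINLOG_STMT_MODE_AND_ROW_ENGINE is expected
  CREATE INDEX i1 ON t1 (b);     # DDL: used to fail spuriously, now expected to succeed
  DROP TABLE t1;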
breaks
When a "CREATE TEMPORARY TABLE SELECT * FROM" was executed the OPTION_KEEP_LOG was
not set into the thd->variables.option_bits. For that reason, if the transaction
had updated only transactional engines and was rolled back at the end (.e.g due to
a deadlock) the changes were not written to the binary log, including the creation
of the temporary table.
To fix the problem, we have set the OPTION_KEEP_LOG into the
thd->variables.option_bits when a "CREATE TEMPORARY TABLE
SELECT * FROM" is executed.
Due to a BZR bug, that merge was done by the following command:
bzr merge -r 'revid:tor.didriksen@sun.com-20100527074248-6qtv0p1ugy6o1hjo..' <mysql-trunk-bugfixing path>
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080324111246-00461
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080414125521-40866
BUG#35274 - merge table doesn't need any base tables, gives
error 124 when key accessed
SELECT queries that use an index against a merge table with an empty
underlying tables list could fail with the error "Got error 124 from
storage engine".
The problem was that the wrong error was being returned.
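A hedged sketch of the BUG#35274 scenario (table name made up; the expected
post-fix outcome, an empty result instead of error 124, is inferred from the
bug title):
  CREATE TABLE m1 (a INT, KEY(a)) ENGINE=MERGE;   # no underlying tables
  SELECT a FROM m1 WHERE a = 1;   # index read: used to return "Got error 124 from storage engine"
  DROP TABLE m1;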
when it should use index
Sometimes the LEFT/RIGHT JOIN with an empty table caused an
unnecessary filesort.
Sample query, where t1.i1 is indexed and t3 is empty:
SELECT t1.*, t2.* FROM t1 JOIN t2 ON t1.i1 = t2.i2
LEFT JOIN t3 ON t2.i2 = t3.i3
ORDER BY t1.i1 LIMIT 5;
The server erroneously used an item of the empty outer-joined
table as a common constant of an Item_equal (multi-equivalence
expression).
Because of the fix for bug 16590, the constant status of such
an item was propagated to the st_table::const_key_parts
map bits related to the other Item_equal arguments'
key parts (which are obviously not constant in this case).
Since the test_if_skip_sort_order() function skips constant
prefixes of the keys being tested, available indexes were
ignored, because some prefixes were marked as
constant by mistake.
The problem is that if a NULL is stored in an Item_cache_decimal object,
the associated my_decimal object is not initialized. However, it is still
accessed when val_int() is called. The fix is to check for null_value
within val_int(), and return without accessing the my_decimal object when
the cached value is NULL.
Bug#52122 reports the same issue for val_real(), and this patch also includes
fixes for val_real() and val_str() and corresponding test cases from that
bug report.
Also, NULL is returned from val_decimal() when the value is null. This
avoids callers accessing an uninitialized my_decimal object.
Similar changes were made to all other Item_cache classes. Now all val_*
methods should return a well-defined value when the actual value is NULL.
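A hedged sketch of a query shape where a DECIMAL NULL ends up in an Item_cache
(not the original test cases; a scalar subquery result is cached before being
used):
  CREATE TABLE t1 (d DECIMAL(10,2));
  INSERT INTO t1 VALUES (NULL);
  # The single-row subquery value is cached; with a NULL value the cache's
  # val_int()/val_real()/val_str() must not touch the uninitialized my_decimal.
  SELECT 1 FROM t1 WHERE (SELECT d FROM t1) > 0;
  SELECT (SELECT d FROM t1) + 0;
  DROP TABLE t1;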
is allowed on views (not documented, broken)".
Remove support for ALTER TABLE RENAME for views, as:
a) this feature was not documented,
b) it does not add any compatibility with other databases,
c) its implementation doesn't follow the metadata locking
protocol, accessing the .FRM without holding any
metadata lock,
d) its implementation complicates ALTER TABLE's code
by introducing yet another separate branch into it.
After this patch one can rename a view using the
documented way, the RENAME TABLE statement.
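A hedged sketch of the behaviour change (view and table names are made up):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  ALTER TABLE v1 RENAME TO v2;   # expected to be rejected after this patch: v1 is not a base table
  RENAME TABLE v1 TO v2;         # the documented way to rename a view
  DROP VIEW v2;
  DROP TABLE t1;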
We should avoid assigning to any SHARE fields, as this is a
shared structure and assignments may
affect other threads. To avoid this, a
copy of the SHARE struct is created and
stored in the TABLE struct, which is
used later in get_schema_columns_record().
The thd->variables.option_bits & OPTION_BIN_LOG flag is currently abused:
it is both a system variable and an implementation switch. The current
approach to this option bit breaks session variable encapsulation.
Besides, it is allowed to change @@session.sql_log_bin within a
transaction, which may lead to a transaction not being logged correctly.
To fix the problems, we created a thd->variables member to represent
"sql_log_bin" and prohibited its update inside a transaction or
sub-statement.
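A hedged sketch of the new restriction (error names omitted; setting
sql_log_bin requires the SUPER privilege):
  BEGIN;
  SET SESSION sql_log_bin = 0;   # expected to be rejected inside a transaction after this change
  ROLLBACK;
  SET SESSION sql_log_bin = 0;   # still allowed outside a transaction
  SET SESSION sql_log_bin = 1;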
The problem was that mdl_sync.test was failing sporadically,
due to the fact that part of the test didn't take into account
the effects of MyISAM's concurrent insert.
This patch solves the problem by making the test case robust
against concurrent insert.
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ... REBUILD PARTITION".
ALTER TABLE on an InnoDB table (including partitioned tables)
acquired exclusive locks on the rows of the table being altered.
When there was a concurrent transaction which did
locking reads from this table, this sometimes led to a
deadlock which was detected neither by the MDL subsystem nor
by the InnoDB engine (and was reported only after exceeding
innodb_lock_wait_timeout).
This problem stemmed from the fact that ALTER TABLE acquired
a TL_WRITE_ALLOW_READ lock on the table being altered. This lock
was interpreted as a write lock, and thus the
handler::external_lock() method was called for the table with
F_WRLCK as an argument. As a result, the InnoDB engine treated
ALTER TABLE as an operation which was going to change data
and acquired LOCK_X locks on the rows being read from the old
version of the table.
When there was a transaction which had already acquired an
SR metadata lock on the table and some LOCK_S locks on its rows
(e.g. by using it in a subquery of a DML statement), a concurrent
ALTER TABLE was blocked at the moment when it tried to
acquire a LOCK_X lock before reading one of these rows.
The transaction's subsequent attempt to acquire an SW metadata
lock on the table being altered led to a deadlock, since it had
to wait for ALTER TABLE to release its SNW lock. This deadlock
was not detected and was resolved only after the timeout expired,
because the waiting was happening in two different subsystems.
Similar deadlocks could have occurred in other situations.
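A hedged two-session sketch of that deadlock (table name and statements are
made up; ALTER TABLE t1 ENGINE=InnoDB stands in for a rebuilding ALTER):
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  # session 1:
  BEGIN;
  SELECT * FROM t1 WHERE a = 1 LOCK IN SHARE MODE;  # SR metadata lock + LOCK_S row lock
  # session 2:
  ALTER TABLE t1 ENGINE=InnoDB;     # before the patch: blocked inside InnoDB,
                                    # waiting for LOCK_X on the row held by session 1
  # session 1:
  UPDATE t1 SET a = 2 WHERE a = 1;  # waits for ALTER's SNW metadata lock:
                                    # a cross-subsystem deadlock, formerly resolved
                                    # only by innodb_lock_wait_timeout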
This patch solves the problem by changing the ALTER TABLE
implementation to use a TL_READ_NO_INSERT lock instead of
TL_WRITE_ALLOW_READ. After this change handler::external_lock()
is called with F_RDLCK as an argument and the InnoDB engine
correctly interprets ALTER TABLE as an operation which only
reads data from the original version of the table. Thanks to
this, ALTER TABLE acquires only LOCK_S locks on the rows it
reads. This, in turn, makes the inter-subsystem deadlocks go
away, as all potential lock conflicts, and thus deadlocks, are
limited to the metadata locking subsystem:
- When ALTER TABLE reads rows from the table being altered, it
can't encounter any locks which conflict with its LOCK_S row
locks: there can be no concurrent transactions holding
LOCK_X row locks, since such a transaction would have had to
acquire an SW metadata lock on the table first, which would
have conflicted with ALTER's SNW lock.
- Vice versa, when DML which runs concurrently with ALTER
TABLE tries to lock a row, it can be requesting only a LOCK_S
lock, which is compatible with the locks acquired by ALTER,
as otherwise such DML would have to own an SW metadata lock on
the table, which would be incompatible with ALTER's SNW lock.
The content of the InnoDB system tables can now be viewed through the following
information schema tables (example queries are sketched after the list):
information_schema.INNODB_SYS_TABLES
information_schema.INNODB_SYS_INDEXES
information_schema.INNODB_SYS_COLUMNS
information_schema.INNODB_SYS_FIELDS
information_schema.INNODB_SYS_FOREIGN
information_schema.INNODB_SYS_FOREIGN_COLS
information_schema.INNODB_SYS_TABLESTATS
rb://330 Approved by Marko
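For example (a hedged sketch; column names such as TABLE_ID and NAME are
assumed to match the released definitions of these tables):
  SELECT * FROM information_schema.INNODB_SYS_TABLES;
  SELECT t.NAME AS table_name, i.NAME AS index_name
    FROM information_schema.INNODB_SYS_TABLES t
    JOIN information_schema.INNODB_SYS_INDEXES i ON i.TABLE_ID = t.TABLE_ID;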
The problem was that TRUNCATE TABLE didn't take an exclusive
lock on a table if it resorted to truncating via deleting
all rows in the table. Specifically for InnoDB tables, this
could break proper isolation, as InnoDB ends up aborting some
granted locks when truncating a table.
The solution is to take an exclusive metadata lock before
TRUNCATE TABLE can proceed. This guarantees that no other
transaction is using the table.
Incompatible change: truncate via delete no longer fails
if sql_safe_updates is activated (this was an undocumented
side effect).
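A hedged two-session sketch of the new behaviour (names are made up):
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  # session 1:
  BEGIN;
  SELECT * FROM t1;    # holds a shared metadata lock until commit
  # session 2:
  TRUNCATE TABLE t1;   # now waits for an exclusive metadata lock instead of
                       # proceeding and aborting session 1's InnoDB locks
  # session 1:
  COMMIT;              # releases the metadata lock; TRUNCATE proceeds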