Commit graph

888 commits

Author SHA1 Message Date
Praveenkumar Hulakund
5411549732 Bug#17474166 - EXECUTING STATEMENT LIKE 'SHOW ENGINE INNODB'
AND 'KILL SESSION' LEAD TO CRASH               

Analysis:
--------
This situation occurs when a connection executing the query
"show engine innodb status" is killed by another connection
executing "kill <con>".

In function "innodb_show_status", function "stat_print"
is called to print the status but return value of function
is not checked.  After killing connection, if write to 
connection fails then error is returned and same is set
in Diagnostic area. Since FALSE is returned from
"innodb_show_status" now, assert to check no error
is set in function "set_eof_status" (called from
my_eof) is failing. 

Fix:
----
Changed the code to check the return value of "stat_print"
in "innodb_show_status".
2013-10-09 13:32:31 +05:30
Dmitry Lenev
b07ec61f85 Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR
STATUS OF ROLLED BACK TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".

The problem in the first bug report was that although a deadlock
involving metadata locks was reported using the same error code and
message as an InnoDB deadlock, it didn't roll back the transaction
like the latter. This confused users, as in some cases the
transaction could be restarted immediately after ER_LOCK_DEADLOCK
and in other cases a rollback was required.

The problem in the second bug report was that although an InnoDB
deadlock caused transaction rollback in all storage engines, it
didn't cause release of metadata locks. So concurrent DDL on the
tables used in the transaction was blocked until an implicit or
explicit COMMIT or ROLLBACK was issued in the connection which got
the InnoDB deadlock.

The former issue stemmed from the fact that when support for
detecting and reporting metadata lock deadlocks was added, we
erroneously assumed that InnoDB doesn't roll back the transaction on
deadlock but only the last statement (while that is actually what
happens on an InnoDB lock timeout), and so didn't implement rollback
of transactions on MDL deadlocks.

The latter issue was caused by the fact that rollback of a
transaction due to deadlock is carried out by setting the
THD::transaction_rollback_request flag at the point where the
deadlock is detected and performing the rollback inside the
trans_rollback_stmt() call when this flag is set. And
trans_rollback_stmt() is not aware of MDL locks, so no MDL locks
are released.

This patch solves these two problems in the following way:

- When an MDL deadlock is detected, transaction rollback is requested
  by setting the THD::transaction_rollback_request flag.

- Code performing rollback of the transaction when
  THD::transaction_rollback_request is set is moved out of
  trans_rollback_stmt(). Now we handle the rollback request on the
  same level as we call trans_rollback_stmt() and release statement/
  transaction MDL locks.
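A sketch of the now-uniform behavior, using a classic InnoDB deadlock
(table and values are illustrative):

CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1), (2);

-- Session A
BEGIN;
SELECT * FROM t1 WHERE id = 1 FOR UPDATE;

-- Session B
BEGIN;
SELECT * FROM t1 WHERE id = 2 FOR UPDATE;
SELECT * FROM t1 WHERE id = 1 FOR UPDATE;  -- blocks on session A

-- Session A
SELECT * FROM t1 WHERE id = 2 FOR UPDATE;  -- one session gets
                                           -- ER_LOCK_DEADLOCK

-- The deadlock victim's whole transaction is rolled back, and with
-- this patch its metadata locks are released too, so a concurrent
-- ALTER TABLE t1 ... no longer waits for an explicit COMMIT/ROLLBACK.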
2013-08-20 13:12:34 +04:00
Mattias Jonsson
89681f6dc6 Bug#16274455: CAN NOT ACCESS PARTITIONED TABLES WHEN
DOWNGRADED FROM 5.6.11 TO 5.6.10

The problem was that the new syntax was not accepted by previous
versions.

Fixed by adding a version comment of /*!50531 around the new syntax.

Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also changing the output from SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */

It will always add the ALGORITHM into the .frm for KEY [sub]partitioned
tables, but for SHOW CREATE TABLE it will only add it in case it is the non
default ALGORITHM = 1.

Also notice that for 5.5 it will say /*!50531 instead of /*!50611,
which will make an upgrade from 5.5 (> 5.5.31) to 5.6 (< 5.6.11) fail!
If one downgrades a fixed version within the same major version (5.5
or 5.6), bug 14521864 will be visible again, but unless the .frm is
updated, it will work again after upgrading again.

Also fixed so that the .frm does not get an updated version
if a single partition check passes.
2013-02-14 17:03:49 +01:00
Mattias Jonsson
f693203e80 Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).

Also, ENUM/SET changed from latin1 ci based hash calculation to
binary hash between 5.1 and 5.5 (bug#11759782).

These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.

Also since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.

The solution for this situation is:

1) The partitioning engine will check that a delete/update will go to
the partition the row was read from and give an error otherwise,
consisting of the row's partitioning fields. This will avoid asserts
in InnoDB and also alert the user that there is a misplaced row. A
detailed error message will be given, including an entry in the error
log consisting of the table name, partition and row content (PK if it
exists, otherwise all partitioning columns).


2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields
use binary hashing, ENUM/SET use charset hashing) and N = 2 uses the
same hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it will
default to 2.

This new syntax should probably be ignored by NDB.
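For illustration, a hypothetical table that must keep 5.1-compatible
row placement could be created with the new clause:

CREATE TABLE t_compat (a INT, b DATETIME)
PARTITION BY KEY ALGORITHM = 1 (a, b)
PARTITIONS 4;

-- ALGORITHM = 1 hashes a and b the 5.1 way (binary hashing for
-- numeric and date/time fields), so rows land in the same partitions
-- as on 5.1.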


3) Since there is a demand for avoiding a scan through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except
ALGORITHM is the same and ALGORITHM was not set before. This allows
manually upgrading such a table by something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()


4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)

CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type,
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1`  PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. the current partitioning clause, with the addition of
ALGORITHM = 1)

CHECK without FOR UPGRADE:
If MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each one is in the correct partition.
Fails on the first misplaced row.

REPAIR:
If default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.


5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.

This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.


Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
is not set, it is not possible to know if it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
2013-01-30 17:51:52 +01:00
Nisha Gopalakrishnan
62e8f25677 Bug#11757464: SERVER CRASH IN RECURSIVE CALL WHEN OOM
Analysis:
---------

When the server is out of memory, an error is raised
to indicate this. Handling the error requires more
memory to be allocated, which also fails; hence the
error handling recurses and causes the server to crash.

Fix:
---
a) Prevent pushing the 'out of memory' error condition
to the diagnostics area, as doing so requires memory allocation.
GET DIAGNOSTICS, SHOW WARNINGS and SHOW ERRORS statements
will not show information about this error. However, the
'out of memory' error is returned to the client.
b) Set the ME_FATALERROR flag when 'out of memory' errors
are reported (for places where the flag is not already set).
This flag prevents activation of SP error handlers, which also
require memory allocation and therefore are likely to fail.
2013-01-15 15:30:26 +05:30
Annamalai Gurusami
bd7c9815ce Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH
CONSISTENT SNAPSHOT OPTION

A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".

This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).

Hence the following check in Protocol::end_statement()
fails.

  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;

The fix is to backport the fix of bugs 14365043, 11761652 
and 11746399. 

14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED

rb://1227 approved by guilhem and mattiasj.
2012-10-08 19:40:30 +05:30
Annamalai Gurusami
e231e7f8eb Merging from mysql-5.1 to mysql-5.5. 2012-07-12 16:48:21 +05:30
Annamalai Gurusami
e612f55237 Bug #11765218 58157: INNODB LOCKS AN UNMATCHED ROW EVEN THOUGH USING
RBR AND RC

Description: When scanning and locking rows with < or <=, InnoDB locks
the next row even though row-based binary logging and read committed
are used.

Solution: In the handler, when the row is identified as falling
outside of the range (as specified in the query predicates), request
the storage engine to unlock the row (if possible). This is done in
handler::read_range_first() and handler::read_range_next().
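A sketch of the scenario (table contents are illustrative;
binlog_format=ROW is assumed):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
SELECT * FROM t WHERE id <= 10 FOR UPDATE;
-- Before the fix, InnoDB kept a lock on the first row with id > 10
-- that the scan touched; with the fix that unmatched row is unlocked
-- again via handler::read_range_first()/read_range_next().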
2012-07-12 16:42:07 +05:30
Annamalai Gurusami
0b6a2e75bf Merge from mysql-5.1 to mysql-5.5 2012-05-21 17:27:21 +05:30
Annamalai Gurusami
3fcd55f687 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  
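An illustrative check of the intended behavior (assumes the server is
running with innodb_autoinc_lock_mode=1):

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO t (v) VALUES (1), (2), (3);
SELECT id FROM t;
-- With the fix the three ids are contiguous (e.g. 1, 2, 3) even
-- though the auto-inc lock is released before the statement ends.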

Modified the fix based on review comment by Svoj.
2012-05-21 17:25:40 +05:30
Annamalai Gurusami
f23215ee9c Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

rb://1074 approved by Alexander Nozdrin
2012-05-16 11:17:48 +05:30
Annamalai Gurusami
0de21532cb Merge from mysql-5.1 to mysql-5.5. 2012-05-16 14:10:18 +05:30
Georgi Kodinov
d59986d974 merge mysql-5.5->mysql-5.5-security 2012-04-12 14:04:12 +03:00
gopal.shankar@oracle.com
796fad1424 Bug#11815557 60269: MYSQL SHOULD REJECT ATTEMPTS TO CREATE SYSTEM
TABLES IN INCORRECT ENGINE

PROBLEM:
  CREATE/ALTER TABLE can currently move system tables like
mysql.db, user, host etc. to engines other than MyISAM. This is not
completely supported by mysqld as of now. When some of the system
tables like plugin, servers, event, func, *_priv, time_zone* are
moved to InnoDB, a mysqld restart crashes. Currently system tables
can even be moved to BLACKHOLE!

ANALYSIS:
  The problem is that there is no check before creating or moving
a system table to some particular engine.

  System tables are supposed to reside in MyISAM. We could think
of restricting system tables to exist only in MyISAM. But there could
be future needs for these system tables to be part of other engines
by design. For example, NDB Cluster expects some tables to be in the
InnoDB or NDB engine. This calls for a solution by which system
tables can be supported by any desired engine with minimal effort.

FIX:
  The solution provides a handlerton interface using which the
mysqld server can query a particular storage engine's handlerton for
the system tables that it supports. This way each storage engine
layer can define its own system database and system tables.

  The check_engine() function uses the new handlerton function
ha_check_if_supported_system_table() to check if the db.tablename
provided in the DDL is supported by the SE.
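A sketch of the resulting behavior (the exact rejection is an
assumption based on the description above):

ALTER TABLE mysql.plugin ENGINE = BLACKHOLE;
-- Rejected with the fix: BLACKHOLE does not list mysql.plugin as a
-- supported system table, so check_engine() refuses the DDL instead
-- of letting the server crash on the next restart.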

Note: This fix has modified a test in help.test, which was moving
mysql.help_* to innodb. The primary intention of the test was not
to move them between engines.
2012-04-11 15:53:17 +05:30
Annamalai Gurusami
27ecea534c Bug#13635833: MULTIPLE CRASHES IN FOREIGN KEY CODE WITH CONCURRENT DDL/DML
There are two threads. In one thread, a DML operation involving a
cascaded update is in progress. In another thread, an ALTER TABLE
adding a foreign key constraint is executing. Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case will crash the server (because of a
newly added assert). With the fix, the ALTER TABLE stmt will return
an error message.
      
Backporting the fix from MySQL 5.5 to 5.1

rb:961
rb:947
2012-03-01 11:05:51 +05:30
Annamalai Gurusami
152bb4c17d Bug#13635833: MULTIPLE CRASHES IN FOREIGN KEY CODE WITH CONCURRENT DDL/DML
There are two threads. In one thread, a DML operation involving a
cascaded update is in progress. In another thread, an ALTER TABLE
adding a foreign key constraint is executing. Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case will crash the server (because of a
newly added assert). With the fix, the ALTER TABLE stmt will return
an error message.
      
rb:947
approved by Jimmy Yang
2012-02-27 17:23:56 +05:30
Vasil Dimov
c89cbbc6d3 Merge mysql-5.1-security -> mysql-5.5-security (Fix Bug#12661768) 2011-10-25 16:48:23 +03:00
Vasil Dimov
7312f83cb9 Fix Bug#12661768 UPDATE IGNORE CRASHES SERVER IF TABLE IS INNODB AND IT IS
PARENT FOR OTHER ONE

Do not try to look up the key_nr'th key in 'table' because there may
not be such a key there. key_nr is the number of the key in the
_child_ table, not in the parent table.

Instead, just print the fields of the record that are covered by the
first key defined on the parent table.
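A minimal reproduction sketch (schema is illustrative):

CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (id INT PRIMARY KEY, pid INT,
  FOREIGN KEY (pid) REFERENCES parent (id)) ENGINE=InnoDB;
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);

UPDATE IGNORE parent SET id = 3 WHERE id = 1;
-- The FK violation is downgraded to a warning; composing that warning
-- used to look up the child's key number in `parent` and crashed.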

This bug gets a better fix in MySQL 5.6, which is too risky for 5.1 and 5.5.

Approved by:	Jon Olav Hauglid (via IM)
2011-10-25 16:46:38 +03:00
Marko Mäkelä
4c57188c9c Bug#12547647 UPDATE LOGGING COULD EXCEED LOG PAGE SIZE
This fix was accidentally pushed to mysql-5.1 after the 5.1.59 clone-off in
bzr revision id marko.makela@oracle.com-20110829081642-z0w992a0mrc62s6w
with the fix of Bug#12704861 Corruption after a crash during BLOB update
but not merged to mysql-5.5 and upwards.

In the Barracuda formats, the clustered index record no longer
contains a prefix of off-page columns. Because of this, the undo log
must contain these prefixes, so that purge and multi-versioning will
continue to work. However, this also means that an undo log record can
become too big to fit in an undo log page. (It is a limitation of the
undo log that undo records cannot span across multiple pages.)

In case the checks for undo log size fail when CREATE TABLE or CREATE
INDEX is executed, we need a fallback that blocks a modification
operation when the undo log record would exceed the maximum size.

trx_undo_free_last_page_func(): Renamed from trx_undo_free_page_in_rollback().
Define the trx_t parameter only in debug builds.

trx_undo_free_last_page(): Wrapper for trx_undo_free_last_page_func().
Pass the trx_t parameter only in debug builds.

trx_undo_truncate_end_func(): Renamed from trx_undo_truncate_end().
Define the trx_t parameter only in debug builds. Rewrite a for(;;) loop
as a while loop for clarity.

trx_undo_truncate_end(): Wrapper for trx_undo_truncate_end_func().
Pass the trx_t parameter only in debug builds.

trx_undo_erase_page_end(): Return TRUE if the page was non-empty
to begin with. Refuse to erase empty pages.

trx_undo_report_row_operation(): If the page for which the undo log
was too big was empty, free the undo page and return DB_TOO_BIG_RECORD.

rb:749 approved by Inaam Rana
2011-09-01 21:48:04 +03:00
Jimmy Yang
177d8b0c12 Fix bug #11830883, SUPPORT "CORRUPTED" BIT FOR INNODB TABLES AND INDEXES.
Also addressed issues in bug #11745133, where we could mark a table
corrupted instead of crashing the server when a corrupted buffer/page
is found, if the table was created with innodb_file_per_table on.
2011-08-16 18:07:59 -07:00
Guilhem Bichot
25221cccd2 Fix for BUG#11755168 '46895: test "outfile_loaddata" fails (reproducible)'.
In sql_class.cc, 'row_count', of type 'ha_rows', was used as the last
argument for ER_TRUNCATED_WRONG_VALUE_FOR_FIELD, which is
"Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld".
So 'ha_rows' was used as 'long'.
On SPARC32 Solaris builds, 'long' is 4 bytes and 'ha_rows' is
'longlong', i.e. 8 bytes.
So the printf-like code was reading only the first 4 bytes.
Because the CPU is big-endian, 1LL is 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01
so the first four bytes yield 0. So the warning message had "row 0" instead of
"row 1" in test outfile_loaddata.test:
-Warning	1366	Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 1
+Warning	1366	Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 0

All error-messaging functions which internally invoke some printf-like
function are potential candidates for such mistakes.
One apparently easy way to catch such mistakes is to use
ATTRIBUTE_FORMAT (from my_attribute.h).
But this works only when the call site has both:
a) the format as a string literal
b) the types of the arguments.
So:
  func(ER(ER_BLAH), 10);
will silently not be checked, because ER(ER_BLAH) is not known at
compile time (it is known at run-time, and depends on the chosen
language).
And
  func("%s", a va_list argument);
has the same problem, as the *real* type of arguments is not
known at this site at compile time (it's known in some caller).
Moreover,
  func(ER(ER_BLAH));
though possibly correct (if ER(ER_BLAH) has no '%' markers), will not
compile (gcc says "error: format not a string literal and no format
arguments").

Consequences:
1) ATTRIBUTE_FORMAT is here added only to functions which in practice
take "string literal" formats: "my_error_reporter" and "print_admin_msg".
2) it cannot be added to the other functions: my_error(),
push_warning_printf(), Table_check_intact::report_error(),
general_log_print().

To do a one-time check of functions listed in (2), the following
"static code analysis" has been done:
1) replace
  my_error(ER_xxx, arguments for substitution in format)
with the equivalent
  my_printf_error(ER_xxx,ER(ER_xxx), arguments for substitution in
format),
so that we have ER(ER_xxx) and the arguments *in the same call site*
2) add ATTRIBUTE_FORMAT to push_warning_printf(),
Table_check_intact::report_error(), general_log_print()
3) replace ER(xxx) with the hard-coded English text found in
errmsg.txt (like: ER(ER_UNKNOWN_ERROR) is replaced with
"Unknown error"), so that a call site has the format as string literal
4) this way, ATTRIBUTE_FORMAT can effectively do its job
5) compile, fix errors detected by ATTRIBUTE_FORMAT
6) revert steps 1-2-3.
The present patch has no compiler error when submitted again to the
static code analysis above.
It cannot catch all problems though: see Field::set_warning(), in
which a call to push_warning_printf() has a variable error
(thus, not replaceable by a string literal); I checked set_warning()
calls by hand though.

See also WL 5883 for one proposal to avoid such bugs from appearing
again in the future.

The issues fixed in the patch are:
a) mismatch in types (like 'int' passed to '%ld')
b) more arguments passed than specified in the format.
This patch resolves mismatches by changing the type/number of arguments,
not by changing error messages of sql/share/errmsg.txt. The latter would be wrong,
per the following old rule: errmsg.txt must be as stable as possible; no insertions
or deletions of messages, no changes of type or number of printf-like format specifiers,
are allowed, as long as the change impacts a message already released in a GA version.
If this rule is not followed:
- Connectors, which use error message numbers, will be confused (by insertions/deletions
of messages)
- using errmsg.sys of MySQL 5.1.n with mysqld of MySQL 5.1.(n+1)
could produce wrong messages or crash; such usage can easily happen if
installing 5.1.(n+1) while /etc/my.cnf still has --language=/path/to/5.1.n/xxx;
or if copying mysqld from 5.1.(n+1) into a 5.1.n installation.
When fixing b), I have verified that the superfluous arguments were not used in the format
in the first 5.1 GA (5.1.30 'bteam@astra04-20081114162938-z8mctjp6st27uobm').
Had they been used, then passing them today, even if the message doesn't use them
anymore, would have been necessary, as explained above.
2011-05-16 22:04:01 +02:00
Jimmy Yang
bd708b4240 Implement worklog #5743 InnoDB: Lift the limit of index key prefixes.
With this change, the index prefix column length is lifted from 767
bytes to 3072 bytes if "innodb_large_prefix" is set to "true".
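An illustrative use of the lifted limit (also assumes the Barracuda
file format and a DYNAMIC or COMPRESSED row format):

SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_large_prefix = ON;
CREATE TABLE t (c VARCHAR(1024),
  KEY (c(1000))) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
-- A 1000-byte index prefix, beyond the old 767-byte limit.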

rb://603 approved by Marko
2011-05-31 02:12:32 -07:00
Guilhem Bichot
12f651ac9d Merge from 5.1. 2011-05-21 10:21:08 +02:00
Mattias Jonsson
f6641998be Bug#11766249 bug#59316: PARTITIONING AND INDEX_MERGE MEMORY LEAK
Update for the previous patch according to reviewers' comments.

Updated the constructors for ha_partition to use the common
init_handler_variables function.

Added use of defines for size and offset to get better readability for
the code that reads and writes the .par file. Also refactored the
get_from_handler_file function.
2011-04-20 17:52:33 +02:00
Mattias Jonsson
0d857c2c84 Manual merge from 5.1 2011-04-20 19:53:08 +02:00
Mattias Jonsson
a6b70da9a3 Bug#11766249 bug#59316: PARTITIONING AND INDEX_MERGE MEMORY LEAK
When executing row-ordered-retrieval index merge,
the handler was cloned, but it used the wrong
memory root: instead of allocating memory
on the thread/query's mem_root, it used the table's
mem_root, resulting in memory that was not released
in the table object until the table was closed.

The solution was to ensure that memory used during cloning
of a handler was allocated from the correct memory root.

This was implemented by fixing handler::clone() to also
take a name argument, so it can be used with partitioning.
In ha_partition, only the ha_partition's ref is allocated,
and the original partitions' clone() is called, with the
results set as the cloned partitions.

Fix of .bzrignore on Windows with VS 2010
2011-03-25 12:36:02 +01:00
Jon Olav Hauglid
984988cfbd Bug #11755431 (former 47205)
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR

Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.

The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.

This patch fixes the problem by first changing 'CHECK TABLE FOR
UPGRADE' to return a different error message if the engine does not
support REPAIR. Instead of "Table upgrade required. Please do 'REPAIR
TABLE ...'", it will report "Table rebuild required. Please do 'ALTER
TABLE ... FORCE ...'".

Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.

This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (See Bug#11746162)
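A sketch of the new flow for an InnoDB table (table name
illustrative):

CHECK TABLE t FOR UPGRADE;
-- For an engine without native REPAIR this now reports:
--   "Table rebuild required. Please do 'ALTER TABLE ... FORCE ...'"
ALTER TABLE t FORCE;
-- Rebuilds the table; mysqlcheck --auto-repair now issues this
-- instead of REPAIR TABLE.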

Test case added to mysqlcheck.test
2011-03-08 09:41:57 +01:00
Jorgen Loland
44b41979bd BUG#11762751: UPDATE STATEMENT THROWS AN ERROR, BUT STILL
UPDATES THE TABLE ENTRIES (formerly 55385)
BUG#11764529: MULTI UPDATE+INNODB REPORTS ER_KEY_NOT_FOUND 
              IF A TABLE IS UPDATED TWICE (formerly 57373)
            
If a multiple-table update updates a row through two aliases and
the first update physically moves the row, the second update will
fail to locate the row. This results in different errors
depending on the storage engine:
  * MyISAM: Got error 134 from storage engine
  * InnoDB: Can't find record in 'tbl'
Neither of these errors accurately describes the problem.
      
Furthermore, since MyISAM is non-transactional, the update
executed first will be performed while the second will not.
In addition, for two equal multiple-table update statements,
one could succeed and the other fail based on whether or not
the record actually moved. This was inconsistent.
      
Two update operations may physically move a row:
  1) Update of a column in a clustered primary key
  2) Update of a column used to calculate which partition the 
     row belongs to
           
BUG#11764529 is about case 1) above, BUG#11762751 was about case 2).
      
The fix for these bugs is to return with an error if a multiple-table
update is about to:
  a) Update a table through multiple aliases, and
  b) Perform an update that may physically move the row
     in at least one of these aliases

This avoids
  * partial updates as described for MyISAM above,
  * inconsistent error messages (the same error message, describing
    the actual problem, is now given for all SEs), and
  * inconsistent behavior where a statement fails or succeeds based on
    e.g. the partitioning algorithm of the table.
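A minimal sketch of case 1) above (clustered PK moved through two
aliases; schema illustrative):

CREATE TABLE t (pk INT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 10);

UPDATE t AS a, t AS b SET a.pk = 2, b.v = 20
WHERE a.pk = 1 AND b.pk = 1;
-- With the fix this statement is rejected up front, since updating
-- the primary key may physically move the row while t is referenced
-- through two aliases.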
2011-02-21 16:49:03 +01:00
Karen Langford
a3acdfacd1 Updating header copyright/README in source for 2011 2011-01-25 15:42:40 +01:00
Georgi Kodinov
1c32b8ee3c weave merge from mysql-5.1 to mysql-5.5
Resolved an innodb conflict thanks to vasil.
2011-02-08 17:47:33 +02:00
Jan Wedvik
f4adb7c6e4 Fix for bug#58553, "Queries with pushed conditions causes 'explain extended'
to crash mysqld". 
      
handler::pushed_cond was not always properly reset when table objects
were recycled via the table cache.

handler::pushed_cond is now set to NULL in handler::ha_reset(). This
should prevent pushed conditions from (incorrectly) re-appearing in
later queries.
2011-01-11 12:09:54 +01:00
Jan Wedvik
b7e3f45011 Merge of fix for bug#58553, "Queries with pushed conditions causes 'explain
extended' to crash mysqld" (see http://lists.mysql.com/commits/128409).
2011-01-11 12:33:28 +01:00
Luis Soares
905e21a0cf BUG#46166
Merging to latest mysql-5.5-bugteam.
2010-12-16 19:12:31 +00:00
Luis Soares
60f650069b BUG#46166
Merging to latest mysql-5.1-bugteam.
2010-12-16 19:11:08 +00:00
Sergey Glukhov
0e77c3295a Bug#39828: Autoinc wraps around when offset and increment > 1
The auto-increment value wraps when performing a bulk insert with
auto_increment_increment and auto_increment_offset greater than
one.
The fix:
If overflow happened, then return the MAX_ULONGLONG value as an
indication of overflow and check for this before storing the
value into the field in update_auto_increment().
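A reproduction sketch (values illustrative; the id column starts near
the top of the unsigned 64-bit range):

CREATE TABLE t (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, v INT);
INSERT INTO t VALUES (18446744073709551610, 0);
SET SESSION auto_increment_increment = 4;
INSERT INTO t (v) VALUES (1), (2);
-- Before the fix, the computed next value wrapped around and a small
-- id could be silently stored; with the fix the overflow is detected
-- and the statement fails instead.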
2010-12-13 14:48:12 +03:00
Sergey Glukhov
92c303ced6 5.1-bugteam->5.5-bugteam merge 2010-12-13 15:11:16 +03:00
Luis Soares
5d6e142b2b BUG#46166
Manual merge from mysql-5.1-bugteam into mysql-5.5-bugteam.

Conflicts
=========

Text conflict in sql/log.cc
Text conflict in sql/log.h
Text conflict in sql/slave.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_priv.h
2010-12-07 16:11:13 +00:00
Luis Soares
aaefb52df8 BUG#46166: MYSQL_BIN_LOG::new_file_impl is not propagating error
when generating new name.
      
If find_uniq_filename returns an error, then this error is not
propagated upwards, and execution does not report the error to
the user (although an entry in the error log is generated).
                  
Additionally, some more errors were ignored in new_file_impl:
- when writing the rotate event
- when reopening the index and binary log file
                  
This patch addresses this by propagating the error up the
execution stack. Furthermore, when rotation of the binary log
fails, an incident event is written, because there is a
chance that some changes for a given statement were not properly
logged. For example, in SBR, a LOAD DATA INFILE statement requires
more than one event to be logged; should rotation fail while
logging part of the LOAD DATA events, the logged data would
become inconsistent with the data in the storage engine.
2010-11-30 23:32:51 +00:00
Jon Olav Hauglid
b23c19e82f Merge from mysql-5.5-bugteam to mysql-5.5-runtime
No conflicts
2010-11-17 17:42:28 +01:00
Davi Arnaut
2495e10c11 Merge of mysql-5.1-bugteam into mysql-5.5-bugteam. 2010-11-16 07:45:07 -02:00
Dmitry Lenev
378cdc58c1 Patch that refactors global read lock implementation and fixes
bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".

The first bug manifested itself as a deadlock which occurred
when a connection, which had some table open through HANDLER
statement, tried to update some data through DML statement
while another connection tried to execute FLUSH TABLES WITH
READ LOCK concurrently.

What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked all
upcoming DML. After that, it started to wait for the table opened
through the HANDLER statement to be flushed. When the first
connection tried to execute DML, it started to wait for the GRL/the
second connection, creating a deadlock.
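A sketch of that deadlock (table names illustrative):

-- Session 1
HANDLER t1 OPEN;

-- Session 2
FLUSH TABLES WITH READ LOCK;   -- blocks new DML, then waits for t1
                               -- to be flushed

-- Session 1
INSERT INTO t2 VALUES (1);     -- used to wait for the GRL: deadlock.
-- With the GRL re-implemented over metadata locks, the MDL deadlock
-- detector now sees this cycle and resolves it.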

The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements in cases when there was a constant
stream of concurrent DML statements (in two or more
connections).

This happened because requests for protection against the GRL
which were acquired by DML statements ignored the presence of a
pending GRL, and thus the latter was starved.

This patch solves both these problems by re-implementing GRL
using metadata locks.

Similar to the old implementation acquisition of GRL in new
implementation is two-step. During the first step we block
all concurrent DML and DDL statements by acquiring global S
metadata lock (each DML and DDL statement acquires global IX
lock for its duration). During the second step we block commits
by acquiring global S lock in COMMIT namespace (commit code
acquires global IX lock in this namespace).

Note that unlike in the old implementation, acquisition of
protection against the GRL in DML and DDL is semi-automatic.
We assume that any statement which should be blocked by the GRL
will either open and acquire write locks on tables or acquire
metadata locks on the objects it is going to modify. For any such
statement the global IX metadata lock is automatically acquired
for its duration.

The first problem is solved because waits for the GRL become
visible to the deadlock detector in the metadata locking subsystem,
and thus deadlocks like the one in the first bug become impossible.

The second problem is solved because the global S locks which
are used for the GRL implementation are given preference over
the IX locks which are acquired by concurrent DML (and we can
switch to fair scheduling in the future if needed).

Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch behavior was not consistent in this respect:
in some cases DML/DDL statements on temporary tables were
blocked while in others they were not. Since the main use cases
for FTWRL are various forms of backups and temporary tables are
not preserved during backups we have opted for consistently
allowing DML/DDL on temporary tables during FTWRL/GRL.

Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. It is now
either "Waiting for global read lock" or "Waiting for commit
lock" depending on the stage FTWRL is at.

Incompatible change:
To solve a deadlock in the events code which was exposed by this
patch, we have to replace the LOCK_event_metadata mutex with
metadata locks on events. As a result, we have to prohibit
DDL on events under LOCK TABLES.

This patch also adds extensive test coverage for interaction
of DML/DDL and FTWRL.

Performance of the new and old global read lock implementations
was compared in sysbench tests. There was no significant
difference between the new and old implementations.
2010-11-11 20:11:05 +03:00
Davi Arnaut
80246ac8b8 Bug#58057: 5.1 libmysql/libmysql.c unused variable/compile failure
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Bug#57994: Compiler flag change build error : my_redel.c
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc

Fix assorted compiler generated warnings.
2010-11-10 19:14:47 -02:00
Tor Didriksen
367549ab2b Bug#52172 test binlog.binlog_index needs --skip-core-file to avoid leaving core files
For crash testing: kill the server without generating core file.

include/my_dbug.h
  Use kill(getpid(), SIGKILL) which cannot be caught by signal handlers.
  All DBUG_XXX macros should be no-ops in optimized mode, do that for DBUG_ABORT as well.
sql/handler.cc
  Kill server without generating core.
sql/log.cc
  Kill server without generating core.
2010-10-18 13:27:52 +02:00
Tor Didriksen
0853153346 Bug#52172 test binlog.binlog_index needs --skip-core-file to avoid leaving core files
For crash testing: kill the server without generating core file.

include/my_dbug.h
  Use kill(getpid(), SIGKILL) which cannot be caught by signal handlers.
  All DBUG_XXX macros should be no-ops in optimized mode, do that for DBUG_ABORT as well.
sql/handler.cc
  Kill server without generating core.
sql/log.cc
  Kill server without generating core.
2010-10-18 13:24:34 +02:00
Davi Arnaut
5f911fa874 Bug#49938: Failing assertion: inode or deadlock in fsp/fsp0fsp.c
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock

- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.

- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key
checks with SET foreign_key_checks=0 before the truncate.
Alternatively, if foreign key checks are necessary, please use a
DELETE statement without a WHERE condition.
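A sketch of the new behavior and the workaround (schema
illustrative):

CREATE TABLE p (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE c (pid INT,
  FOREIGN KEY (pid) REFERENCES p (id)) ENGINE=InnoDB;

TRUNCATE TABLE p;              -- now fails: p is an FK parent
SET foreign_key_checks = 0;
TRUNCATE TABLE p;              -- workaround described above
SET foreign_key_checks = 1;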

Problem description:

The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated from the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.

Solution:

The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.

Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
2010-10-06 11:34:28 -03:00
Konstantin Osipov
f8bfa3287d A fix for Bug#41158 "DROP TABLE holds LOCK_open during unlink()".
Remove acquisition of LOCK_open around file system operations,
since such operations are now protected by metadata locks.
Rework table discovery algorithm to not require LOCK_open.

No new tests added since all MDL locking operations are covered
in lock.test and mdl_sync.test, and as long as these tests
pass despite the increased concurrency, consistency must be
unaffected.
2010-08-09 22:33:47 +04:00
Konstantin Osipov
2abe7b9d4e Merge trunk-bugfixing -> trunk-runtime. 2010-07-27 18:32:42 +04:00
Konstantin Osipov
ec2c3bf2c1 A pre-requisite patch for the fix for Bug#52044.
This patch also fixes Bug#55452 "SET PASSWORD is
replicated twice in RBR mode".

The goal of this patch is to remove the release of 
metadata locks from close_thread_tables().
This is necessary to not mistakenly release
the locks in the course of a multi-step
operation that involves multiple close_thread_tables()
or close_tables_for_reopen().

By the same token, move statement commit outside
close_thread_tables().

Other cleanups:
Cleanup COM_FIELD_LIST.
Don't call close_thread_tables() in COM_SHUTDOWN -- there
are no open tables there that can be closed (we leave
the locked tables mode in THD destructor, and this
close_thread_tables() won't leave it anyway).

Make open_and_lock_tables() and open_and_lock_tables_derived()
call close_thread_tables() upon failure.
Remove the calls to close_thread_tables() that are now
unnecessary.

Simplify the back off condition in Open_table_context.

Streamline metadata lock handling in LOCK TABLES 
implementation.

Add asserts to ensure correct life cycle of 
statement transaction in a session.

Remove a piece of dead code that has also become redundant
after the fix for Bug 37521.
2010-07-27 14:25:53 +04:00
Davi Arnaut
60ab2b9283 WL#5498: Remove dead and unused source code
Remove unused macros or macro which are always defined.
2010-07-23 17:16:29 -03:00
Davi Arnaut
9fd9857e0b WL#5498: Remove dead and unused source code
Remove code that has been disabled for a long time.
2010-07-23 17:09:27 -03:00