The problem was that the server did not robustly handle a
unilateral rollback issued by the Resource Manager (RM)
due to a resource deadlock within the transaction branch.
By not acknowledging the rollback, the server (acting as the
Transaction Manager, TM) would eventually corrupt the XA
transaction state and crash.
The solution is to mark the transaction as rollback-only
if the RM indicates that it rolled back its branch of the
transaction.
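A minimal sketch of the scenario, assuming InnoDB as the RM; the table, xid and statements are illustrative, not taken from the original report:

  -- Session 1: work inside an XA transaction branch.
  XA START 'xid1';
  UPDATE t SET c = c + 1 WHERE id = 1;

  -- Session 2: an ordinary transaction locks row 2, then waits for row 1.
  BEGIN;
  UPDATE t SET c = c + 1 WHERE id = 2;
  UPDATE t SET c = c + 1 WHERE id = 1;   -- blocks on session 1

  -- Session 1: requesting row 2 closes the cycle; InnoDB picks a victim and
  -- unilaterally rolls back that branch (ER_LOCK_DEADLOCK).
  UPDATE t SET c = c + 1 WHERE id = 2;

  -- With the fix, the XA transaction is marked rollback-only at this point,
  -- so the client is expected to finish with XA ROLLBACK rather than XA COMMIT.
  XA END 'xid1';
  XA ROLLBACK 'xid1';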
The REPAIR TABLE ... USE_FRM query silently corrupts the data of tables
with an old .FRM file version.
The mysql_upgrade client program and the REPAIR TABLE query (without
the USE_FRM clause) cannot prevent this problem, because in the
common case they do not upgrade the .FRM file to a compatible structure.
1. Evaluation of the REPAIR TABLE ... USE_FRM query has been
modified to reject such tables with the message:
"Failed repairing incompatible .FRM file".
2. Evaluation of the REPAIR TABLE query (without the USE_FRM clause) has
been modified to upgrade .FRM files to the current version.
3. Evaluation of the CHECK TABLE ... FOR UPGRADE query has been modified
to return an error status when the .FRM file has an incompatible version.
4. The mysql_upgrade and mysqlcheck client programs call the CHECK TABLE
... FOR UPGRADE and REPAIR TABLE queries, so their behavior has been
changed too: they now upgrade .FRM files with incompatible version numbers.
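An illustrative sketch of the intended flow after this change (the table name is arbitrary):

  -- Returns an error status when the .FRM file has an incompatible version.
  CHECK TABLE t1 FOR UPGRADE;

  -- Upgrades the .FRM file to the current version and repairs the table.
  REPAIR TABLE t1;

  -- Rebuilding from an incompatible .FRM file alone is now rejected with
  -- "Failed repairing incompatible .FRM file".
  REPAIR TABLE t1 USE_FRM;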
The bug was that a handler::clone()/handler::ha_open() call caused allocation of
cloned_copy->ref on the handler->table->mem_root. The allocated memory could not
be reclaimed until the table was flushed, so it was possible to exhaust memory by
repeatedly running index_merge queries without doing table flushes.
The fix:
- make handler::clone() allocate new_handler->ref on the passed mem_root
- make handler::ha_open() not allocate this->ref if it has already been allocated
There is no test case, as it is not possible to check for small leaks from the test suite.
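A sketch of the kind of workload that used to leak; the schema is illustrative and assumes the optimizer chooses index_merge for the query:

  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
  -- ... populate t1 with enough rows for index_merge to be chosen ...

  -- Each execution clones the handler; before the fix the clone's ref buffer
  -- was allocated on table->mem_root and stayed there until FLUSH TABLES,
  -- so repeating this query kept growing the table's memory root.
  SELECT * FROM t1 WHERE a = 1 OR b = 2;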
The range analysis module did not correctly signal to the
handler that a range represents a ref (the EQ_RANGE flag). This caused
non-range queries like
SELECT ... FROM ... WHERE keypart_1=const AND ... AND keypart_n=const
ORDER BY ... FOR UPDATE
to wait for a lock unnecessarily if another running transaction uses
SELECT ... FOR UPDATE on the same table.
Fixed by setting EQ_RANGE for all range accesses that represent
an equality predicate.
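A sketch of the symptom, assuming InnoDB and a composite index over the keyparts (schema and data are hypothetical):

  CREATE TABLE t1 (k1 INT, k2 INT, v INT, KEY(k1, k2)) ENGINE=InnoDB;

  -- Session 1 locks only the rows matching its equality condition:
  BEGIN;
  SELECT * FROM t1 WHERE k1 = 1 AND k2 = 1 FOR UPDATE;

  -- Session 2 touches different key values, but the ORDER BY makes the
  -- optimizer use range access, and without EQ_RANGE the handler could not
  -- treat it as an equality lookup, so before the fix this could block on
  -- session 1's locks even though the matched rows do not overlap:
  BEGIN;
  SELECT * FROM t1 WHERE k1 = 2 AND k2 = 2 ORDER BY k2 FOR UPDATE;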
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag might later get reset inside a stored routine
execution stack.
The reason was that there was no check whether a new statement had started
at the time of resetting.
The problem affects most binloggable DML queries. Note that multi-update
is covered by the fix for bug@27716, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement modified
a non-transactional table, and merging (unioning) that value with the one
obtained in mysql_execute_command().
The thd->no_trans_update members are moved into thd->transaction.`member`,
and assertions are added.
Effectively, the following properties hold:
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table was modified by the substatement.
That also respects the requirements of THD::really_abort_on_warning().
2. Eventually, thd->transaction.stmt.modified_non_trans_table is computed as
the union of the values of all invoked sub-statements.
That fixes this bug#27417.
The computation of thd->transaction.all.modified_non_trans_table is refined to be
based on the statement's value in all cases, including the INSERT .. SELECT
statement, which before the patch had an extra issue (bug@28960).
Minor issues are covered in mysql_load(), mysql_delete(), and the binlogging of
INSERT INTO temp_table SELECT.
The supplied test verifies the fix only to a limited extent, mostly through
assertions. The ultimate testing is deferred to bug@13270 and bug@23333.
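A sketch of the affected pattern (hypothetical tables and function; the MyISAM table provides the non-transactional side effect):

  CREATE TABLE t_innodb (a INT) ENGINE=InnoDB;
  CREATE TABLE t_myisam (a INT) ENGINE=MyISAM;

  DELIMITER //
  CREATE FUNCTION f1(x INT) RETURNS INT
  BEGIN
    INSERT INTO t_myisam VALUES (x);  -- modifies a non-transactional table
    RETURN x;
  END//
  DELIMITER ;

  -- The statement below touches InnoDB directly, but the function it calls
  -- modified MyISAM; with the fix, modified_non_trans_table still reflects
  -- that at the end of the statement, so warning handling and binlogging
  -- treat the statement as having a non-transactional side effect.
  INSERT INTO t_innodb VALUES (f1(1));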
When InnoDB detects a deadlock it calls ha_rollback_trans() to roll back the
main transaction. But such an action is not allowed from inside triggers and
functions. When it happens, the 'Explicit or implicit commit' error is thrown
even though there are no commit/rollback statements in the trigger/function,
which confuses users.
Now the convert_error_code_to_mysql() function does not call
ha_rollback_trans() directly but rather calls the
mark_transaction_to_rollback() function and returns an error.
The sp_rcontext::find_handler() now doesn't allow errors to be caught by the
trigger/function error handlers when the thd->is_fatal_sub_stmt_error flag
is set. Procedures are still allowed to catch such errors.
The sp_rcontext::find_handler function now accepts a THD handle as a parameter.
The transaction_rollback_request and the is_fatal_sub_stmt_error flags are
added to the THD class. They are initialized by the THD class constructor.
Now the ha_autocommit_or_rollback function rolls back the main transaction
when not in a sub-statement and thd->transaction_rollback_request
is set.
The THD::restore_sub_statement_state function now resets the
thd->is_fatal_sub_stmt_error flag on exit from a sub-statement.
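An illustrative scenario (hypothetical tables and trigger), just to show where the misleading error used to appear:

  CREATE TABLE t1 (id INT PRIMARY KEY, c INT) ENGINE=InnoDB;
  CREATE TABLE t2 (id INT PRIMARY KEY, c INT) ENGINE=InnoDB;

  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW UPDATE t2 SET c = c + 1 WHERE id = NEW.id;

  -- If two transactions updating t1 and t2 in opposite order deadlock while
  -- the trigger body is running, InnoDB used to roll back the whole
  -- transaction from inside the trigger, producing the misleading
  -- 'Explicit or implicit commit' error. Now the deadlock is reported as an
  -- error, the transaction is only marked for rollback, and a handler inside
  -- the trigger/function cannot swallow it.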
CHECK/REPAIR TABLE reports a "File not found" error when issued
against a temporary table.
Fixed by disabling a branch of code (in case it gets a temporary table)
that is responsible for updating the .frm version, as that is not needed
for temporary tables.
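A minimal repro sketch (table name and engine are illustrative):

  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=MyISAM;
  -- Reported "File not found" before the fix; now succeeds.
  CHECK TABLE tmp1;
  REPAIR TABLE tmp1;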
This bug affects multi-row INSERT ... ON DUPLICATE KEY UPDATE into a table
with a PRIMARY KEY on an AUTO_INCREMENT field and some additional UNIQUE indices.
If the first row in the multi-row INSERT contains duplicated values for the UNIQUE
indices, then the following rows of the multi-row INSERT (with either duplicated or
unique key field values) may be applied to _arbitrary_ records of the table as
updates.
This bug was introduced in 5.0. The related code was extensively rewritten in 5.1,
and 5.1 is already free of this problem. 4.1 was not affected either.
When updating the row during INSERT ON DUPLICATE KEY UPDATE, we called
restore_auto_increment(), which set next_insert_id back to 0, but we
forgot to set clear_next_insert_id back to 0.
The restore_auto_increment() function has been fixed.
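A sketch of the affected pattern (schema and data are illustrative):

  CREATE TABLE t1 (
    id INT AUTO_INCREMENT PRIMARY KEY,
    u  INT,
    v  INT,
    UNIQUE KEY (u)
  );
  INSERT INTO t1 (u, v) VALUES (1, 10), (2, 20);

  -- The first row duplicates u=1 and becomes an update; before the fix the
  -- stale auto-increment state could cause the following rows to be applied
  -- as updates to arbitrary existing records instead of being inserted.
  INSERT INTO t1 (u, v) VALUES (1, 11), (3, 30)
    ON DUPLICATE KEY UPDATE v = VALUES(v);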
In the NO_AUTO_VALUE_ON_ZERO mode, the table->auto_increment_field_not_null
variable is used to indicate that a non-NULL value was specified by the user
for an auto_increment column. When an INSERT .. ON DUPLICATE KEY UPDATE updates
the auto_increment field, this variable is set to true and stays unchanged for
the next insert operation. This sometimes makes the next inserted row wrongly
have 0 as the value of the auto_increment field.
Now the fill_record() function resets the table->auto_increment_field_not_null
variable before filling the record.
The table->auto_increment_field_not_null variable is also reset by the
open_table() function, in case we have missed some auto_increment_field_not_null
handling bug.
Now table->auto_increment_field_not_null is reset at the end of the
mysql_load() function.
The table->auto_increment_field_not_null variable is also reset after each
write_row() call in the copy_data_between_tables() function.
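A sketch of the visible symptom (illustrative table and values):

  SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, u INT, UNIQUE KEY (u));

  INSERT INTO t1 (u) VALUES (1);            -- id = 1
  -- The duplicate on u turns this into an update of the auto_increment
  -- column, which set auto_increment_field_not_null and left it set...
  INSERT INTO t1 (u) VALUES (1) ON DUPLICATE KEY UPDATE id = id + 10;
  -- ...so before the fix the next plain insert could wrongly end up with 0
  -- as its id instead of a generated value.
  INSERT INTO t1 (u) VALUES (2);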
The OPTION_STATUS_NO_TRANS_UPDATE bit of thd->options was not restored at the end
of a stored function (SF) invocation in which the SF modified a non-transactional table.
As a result of this, it was not possible to detect whether there were any side effects
when the top-level query ended.
If the top-level query's table was not modified and the bit was lost, there would be no
binlogging.
Fixed by preserving the bit inside the thd->no_trans_update struct. The struct aggregates
two bool flags telling whether the current query and the current transaction modified any
non-transactional table.
The stmt and all flags are reset at the end of the query and of the transaction, respectively.
Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
Ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added missing argument to store(longlong); its absence caused the wrong store method to be called.
(Mostly in DBUG_PRINT() and unused arguments)
Fixed a bug in the query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
- Detect if a table has a field of type MYSQL_TYPE_VAR_STRING while running
"CHECK TABLE t FOR UPGRADE" and indicate that it needs to be fixed
with "REPAIR TABLE t".
- When running "REPAIR TABLE t" or "ALTER TABLE t FORCE" on such a
table, install a special copy function to trim off the trailing spaces
that we can safely say the pre-5.0 mysqld did not put there.
Only MyISAM tables locked with LOCK TABLES ... WRITE were affected.
A query that is optimized with index_merge does not reflect rows
inserted within LOCK TABLES.
MyISAM does not flush its state within LOCK TABLES, and the index_merge
optimization creates a copy of the handler, which thus gets an
outdated MyISAM state.
A new handler->clone() method is introduced to fix this problem.
For non-MyISAM storage engines it allocates a handler and opens
it with ha_open(). For MyISAM it additionally copies the MyISAM state
pointer to the cloned handler.
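An illustrative repro sketch (schema is hypothetical and assumes index_merge is chosen for the query):

  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
  -- ... populate t1 so that index_merge is chosen for the query below ...

  LOCK TABLES t1 WRITE;
  INSERT INTO t1 VALUES (1, 2);
  -- Before the fix, the cloned handler saw the MyISAM state from before the
  -- INSERT, so an index_merge plan could miss the row just inserted:
  SELECT * FROM t1 WHERE a = 1 OR b = 2;
  UNLOCK TABLES;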