HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
An attempt to update an InnoDB temporary table under LOCK TABLES
led to an assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.
The assertion failure was caused by a discrepancy between the lock
requested on the rows of the temporary table at LOCK TABLES time
and the lock requested by the update operation. Since the SQL
layer requested a read lock at LOCK TABLES time, InnoDB assumed
that the upcoming statements executed under LOCK TABLES would
only read the table and therefore acquired only an S-lock.
The update operation broke this assumption by requesting an X-lock.
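A minimal repro sketch (table and column names are illustrative):

    CREATE TEMPORARY TABLE t1 (a INT) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1);
    LOCK TABLES t1 READ;
    # allowed by the SQL layer because the table is connection-local,
    # but InnoDB only holds an S-lock and asserts on the X-lock request
    UPDATE t1 SET a = 2;
    UNLOCK TABLES;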
Possible approaches to fixing this problem are:
1) Skip locking of temporary tables as locking doesn't make any
sense for connection-local objects.
2) Prohibit changing of temporary table locked by LOCK TABLES ...
READ.
Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of the server.
So this patch takes another approach and changes the code in such
a way that LOCK TABLES for a temporary table will always request a
write lock. In the 5.1 version of this patch the switch from read
lock to write lock is done inside InnoDB's handler methods, as
doing it on the SQL layer causes compatibility troubles with
FLUSH TABLES WITH READ LOCK.
Problem: MYSQL_BIN_LOG::reset_logs acquires mutexes in wrong order.
The correct order is first LOCK_thread_count and then LOCK_log. This function
does it the other way around. This leads to a deadlock when run in parallel
with a thread that takes the two locks in the correct order. For example, a
thread that disconnects will take the locks in the correct order.
Fix: change order of the locks in MYSQL_BIN_LOG::reset_logs:
first LOCK_thread_count and then LOCK_log.
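The deadlock can be provoked along these lines (a sketch, not a
deterministic test; connection names are illustrative):

    connect (con1,localhost,root,,);
    connect (con2,localhost,root,,);
    connection con1;
    # reset_logs() used to take LOCK_log first, then LOCK_thread_count
    send RESET MASTER;
    connection con2;
    # the disconnect path takes LOCK_thread_count first, then LOCK_log
    disconnect con2;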
Manually merged mysql-5.1-gca into the latest mysql-5.5.
Conflicts
=========
Text conflict in mysql-test/suite/rpl/r/rpl_relayspace.result
Text conflict in mysql-test/suite/rpl/t/rpl_relayspace.test
VM-WIN2003-32-A, SLES10-IA64-A
The test case waits for master_pos_wait not to time out, which
means that the deadlock between the SQL and I/O threads was
successfully and automatically dealt with.
However, very rarely, master_pos_wait reports a timeout. This
happens because the time master_pos_wait was given to wait was
too small (6 seconds). On a slow test environment this could be a
problem.
We fix this by setting the timeout in line with the one used
in sync_slave_with_master (300 seconds). In addition, we
refactored the test case and refined some comments.
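The wait now looks roughly like this (a sketch; the log file name
and position are placeholders):

    connection slave;
    # 300 seconds, matching the timeout used by sync_slave_with_master
    SELECT MASTER_POS_WAIT('master-bin.000001', 4, 300);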
.-> USING PASSWORD: NO
The server was always setting the flag for using password to NO and
then relying on the server authentication plugin to update it if it uses
a password.
This creates compatibility problems with 5.1 when rejecting a
nonexistent user login.
For compatibility, set the password-supplied flag for non-existing
users the way the default plugin (native password authentication)
would set it.
Test case added.
federated.result updated with the correct error message.
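A sketch of the rejected login (the user name is illustrative):

    # no_such_user does not exist; since a password is supplied, the
    # error should say "using password: YES", as in 5.1
    --error ER_ACCESS_DENIED_ERROR
    connect (fail_con,localhost,no_such_user,some_password,);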
Before this fix, the test performance_schema.relaylog would fail
sporadically, with failures related to statistics on update_cond.
The reason for these failures is that thread scheduling makes it
impossible to predict whether instrumented conditions will be used or not.
The fix is to relax the test case, to not collect statistics about:
- wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond
- wait/synch/cond/sql/MYSQL_RELAY_LOG::update_cond
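A sketch of the relaxed collection (the query shape is illustrative,
not the exact test text):

    SELECT event_name, count_star > 0 AS used
      FROM performance_schema.events_waits_summary_global_by_event_name
     WHERE event_name LIKE 'wait/synch/cond/sql/%'
       AND event_name NOT IN
           ('wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond',
            'wait/synch/cond/sql/MYSQL_RELAY_LOG::update_cond');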
DB_COL_APPEARS_TWICE_IN_INDEX: Remove. This condition is already
checked and reported by MySQL before passing the index definition to
the storage engine.
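For example (a sketch), the duplicate column is already rejected at
the SQL layer:

    CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
    # reported by MySQL itself, before InnoDB sees the definition
    --error ER_DUP_FIELDNAME
    CREATE INDEX idx ON t1 (a, a);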
row_create_index_for_mysql(): Remove the redundant check for
DB_COL_APPEARS_TWICE_IN_INDEX. When enforcing the column prefix index
limit, invoke dict_mem_index_free(index) to plug the memory leak. In
the loop, use index->n_def instead of dict_index_get_n_fields(index),
because the latter would be 0 for indexes that have not been copied to
the data dictionary cache.
innodb-use-sys-malloc.test:
Add test cases for attempting to trigger the error checks in
row_create_index_for_mysql(). Before MySQL 5.5 and WL#5743, the leak
was only reproducible if ha_innobase::max_supported_key_part_length()
returned a higher limit than the one used in
row_create_index_for_mysql().
In MySQL 5.5 and later, the leak is reproducible with
innodb_large_prefix=true.
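In 5.5 the error path can be reached along these lines (a sketch of
the idea; names, lengths and the exact error are illustrative):

    SET GLOBAL innodb_large_prefix=ON;
    # with the default COMPACT row format the server allows a long
    # prefix, while row_create_index_for_mysql() still enforces the
    # 767-byte limit, exercising the error path that used to leak
    CREATE TABLE t1 (c VARCHAR(1024)) ENGINE=InnoDB;
    --error ER_INDEX_COLUMN_TOO_LONG
    CREATE INDEX idx ON t1 (c(1000));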
rb:688 approved by Jimmy Yang
Fix a failure of the re-enabled innodb-index.test in the embedded server.
Apparently, the embedded server does not default to ENGINE=InnoDB when
copying an InnoDB table by CREATE TABLE t2 SELECT * FROM t1;
Re-enable a test that was disabled as collateral damage.
Starting with MySQL 5.5, queries will acquire and hold a shared meta-data lock
(MDL) on tables they process, until the transaction is committed or
rolled back. This will prevent DDL operations on the tables, such as creating
an index.
innodb-index.test: Use a second table for creating the index. The index
would still be "too new" for the transaction that was started before
the index creation began.
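Roughly, the two behaviors combine like this (a sketch; t1/t2 mirror
the test's tables):

    # make the engine explicit, since the embedded server does not
    # default to InnoDB here
    CREATE TABLE t2 ENGINE=InnoDB SELECT * FROM t1;
    BEGIN;
    SELECT * FROM t2;
    # the open transaction holds a shared MDL on t2, so DDL such as
    # CREATE INDEX would block until COMMIT or ROLLBACK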
Bug 12430414 - THE TEST PERFSCHEMA.SELECTS.TEST CAN AFFECT SUCCEEDING TESTS
Bug 12430599 - THE TEST PERFSCHEMA.ONE_THREAD_PER_CON. CAN AFFECT SUCCEEDING TESTS
Bug 12431153 - THE TEST PERFSCHEMA.PFS_UPGRADE CAN AFFECT SUCCEEDING TEST
This assert could be triggered during two phase commit if binary
log was used as transaction coordinator log. The triggered assert
checks that the same number of transaction IDs are processed in
the prepare and commit phases.
The reason it was triggered was that the transaction consisted
of an INSERT/UPDATE IGNORE that had an ignorable error. Since it
had an error, no row log events were made and therefore
prepared_xids was 0. However, since it was an IGNORE statement,
the statement started a read/write statement transaction, committed
it, and completed successfully.
This patch fixes the problem by adjusting the assert to take
this possibility into account.
Test case added to binlog.binlog_innodb_row.test.
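A sketch of the triggering scenario (the table name is illustrative):

    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1);
    BEGIN;
    # the duplicate is ignored: no row events are logged, so
    # prepared_xids stays 0, yet a read/write statement transaction
    # is started and committed
    INSERT IGNORE INTO t1 VALUES (1);
    COMMIT;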
If LOAD DATA INFILE featured a SET clause, the name=value pairs
would be regenerated using item::print. Unfortunately, that code
is mostly optimized for EXPLAIN EXTENDED output and such, and can
not be relied on to return valid SQL.
We now keep each name=value pair in its original, user-supplied
form and use that to create LOAD DATA INFILE statements for
statement-based replication.
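For example (a sketch; file, table and column names are illustrative):

    LOAD DATA INFILE 'data.txt' INTO TABLE t1
      (a, @v)
      SET b = CONCAT(@v, '-suffix');
    # the SET clause is now logged in its original, user-supplied
    # form instead of being re-generated through item::print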
- add support for choosing the engine of the test
table (t1) with $engine_type (see the sketch after this list)
- add a primary key to the test table (t1) to support
replication of BLOB/TEXT (also with ENGINE=ndb)
- change the suppression, since the warning printed to the error log
now says "Column 1"
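A sketch of the parameterization (mysqltest; names are illustrative):

    --let $engine_type= InnoDB
    --eval CREATE TABLE t1 (pk INT PRIMARY KEY, b BLOB) ENGINE=$engine_type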
IN WAIT_SHOW_CONDITION)
There was a typo in the name of one of the parameters to the
include file wait_show_condition. The parameter name was being
set to "connection" instead of "condition".
We fix this typo, improve one instruction in the test case and
deploy parameter checks inside wait_show_condition.inc.
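Corrected usage looks roughly like this (a sketch; the statement and
condition are illustrative, parameter names as used by
wait_show_condition.inc):

    --let $show_statement= SHOW PROCESSLIST
    --let $field= State
    --let $condition= = 'Has sent all binlog to slave; waiting for binlog to be updated'
    --source include/wait_show_condition.inc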
Manual merge from mysql-5.1 into mysql-5.5.
Conflicts
=========
Text conflict in mysql-test/suite/rpl/t/rpl_row_until.test
Text conflict in sql/handler.h
Text conflict in storage/archive/ha_archive.cc
The InnoDB global variable srv_lower_case_table_names is set, in
ha_innodb.cc, to the value of lower_case_table_names declared in
mysqld.h. Since this variable can change at runtime, it is reset on
each handler call to ::create, ::open, ::rename_table & ::delete_table.
But it is possible for tables to be implicitly opened before an
explicit handler call is made, when an engine is first started or
restarted. I was able to reproduce that with the testcase in this
patch on a version of InnoDB from 2 weeks ago. It seemed like the
change buffer entries for the secondary key were getting put into
pages after the restart. (But I am not sure; I did not write down
the call stack while it was reproducing.) In the current code, the
implicit open, which is actually a call to dict_load_foreigns(),
does not occur with this testcase.
The change is to replace srv_lower_case_table_names by an interface
function in ha_innodb.cc that retrieves the server global variable
when it is needed.
causes future shutdown hang
InnoDB would hang on shutdown if any XA transactions exist in the
system in the PREPARED state. This has been masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug #12161 Xa recovery and client disconnection.
[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.
xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.
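The recovery scenario has this shape (a sketch):

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    XA START 'xa1';
    INSERT INTO t1 VALUES (1);
    XA END 'xa1';
    XA PREPARE 'xa1';
    # kill the server without a shutdown (--shutdown_server 0) and
    # restart it
    XA RECOVER;
    # must not report a bogus error for the recovered transaction
    XA ROLLBACK 'xa1';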
innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.
trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts by another connection to roll back connected transactions
that are in the PREPARED state, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.
trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.
logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.
trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.
trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.