The problem was that an in-place online ALTER TABLE was used on a table
that had a mismatch between the MySQL .frm file and the InnoDB data
dictionary. Fixed so that the traditional "Copy" method is used if the
MySQL .frm and the InnoDB data dictionary are not consistent.
Step 2:
-- Introduce a temporary memory array in the buffer pool from which to
allocate temporary memory for encryption/compression
-- Rename PAGE_ENCRYPTION -> ENCRYPTION
-- Rename PAGE_ENCRYPTION_KEY -> ENCRYPTION_KEY
-- Rename innodb_default_page_encryption_key -> innodb_default_encryption_key
-- Allow enabling/disabling encryption for tables by changing
ENCRYPTION to an enum with the values DEFAULT, ON, OFF
-- On CREATE TABLE, store crypt_data if ENCRYPTION is ON or OFF
-- Do not encrypt tablespaces having ENCRYPTION=OFF (see the sketch
below)
-- Store the encryption mode in crypt_data and in the redo log
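A minimal sketch of the resulting per-tablespace decision, assuming a
three-valued mode; the names fil_encryption_t and srv_encrypt_tables
mirror MariaDB naming conventions but are used here only for
illustration:

    enum fil_encryption_t {
        FIL_ENCRYPTION_DEFAULT, /* follow the server-wide setting */
        FIL_ENCRYPTION_ON,      /* ENCRYPTION=ON: always encrypt */
        FIL_ENCRYPTION_OFF      /* ENCRYPTION=OFF: never encrypt */
    };

    /* Decide whether a tablespace should be encrypted, given its
    ENCRYPTION mode and the server-wide default. */
    bool should_encrypt(fil_encryption_t mode, bool srv_encrypt_tables)
    {
        switch (mode) {
        case FIL_ENCRYPTION_ON:  return true;
        case FIL_ENCRYPTION_OFF: return false;
        default:                 return srv_encrypt_tables;
        }
    }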
Merged lp:maria/maria-10.0-galera up to revision 3880.
Added new functions to the handler API to forcefully abort_transaction,
produce a fake_trx_id, and get_checkpoint and set_checkpoint for XA.
These were added for the future possibility of adding more storage
engines that could use Galera replication.
Update InnoDB to 5.6.14
Apply MySQL-5.6 hack for MySQL Bug#16434374
Move Aria-only HA_RTREE_INDEX from my_base.h to maria_def.h (breaks an assert in InnoDB)
Fix InnoDB memory leak
SYNTAX: ATOMIC_WRITES=['DEFAULT','ON','OFF']
The idea is to be able to set innodb_doublewrite = 1 but with the following rules:
ATOMIC_WRITES='DEFAULT' - if innodb_use_atomic_writes = 1, we do not write the changes to the doublewrite buffer;
if innodb_use_atomic_writes = 0, we write to the doublewrite buffer
ATOMIC_WRITES='ON' - do not write to the doublewrite buffer
ATOMIC_WRITES='OFF' - write to the doublewrite buffer
Note that the doublewrite buffer cannot be used if innodb_doublewrite = 0.
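A minimal sketch of the resulting doublewrite decision, assuming the
table option is a three-valued enum; the identifiers below are
illustrative stand-ins for the real server variables and table option:

    enum atomic_writes_t {
        ATOMIC_WRITES_DEFAULT,
        ATOMIC_WRITES_ON,
        ATOMIC_WRITES_OFF
    };

    /* Decide whether a page write should go through the doublewrite
    buffer, per the rules above. */
    bool use_doublewrite(bool innodb_doublewrite,
                         bool innodb_use_atomic_writes,
                         atomic_writes_t table_option)
    {
        if (!innodb_doublewrite) {
            return false;   /* doublewrite is disabled globally */
        }
        switch (table_option) {
        case ATOMIC_WRITES_ON:  return false;
        case ATOMIC_WRITES_OFF: return true;
        default:                return !innodb_use_atomic_writes;
        }
    }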
The ha_innobase table handler contained two search key buffers
(srch_key_val1, srch_key_val2) of fixed size used to store the search
key. The size of these buffers was fixed at
REC_VERSION_56_MAX_INDEX_COL_LEN + 2. But this size is not sufficient
to hold the search key, so the following assertion in
row_sel_convert_mysql_key_to_innobase() failed:
    /* Storing may use at most data_len bytes of buf */

    if (UNIV_LIKELY(!is_null)) {
        ut_a(buf + data_len <= original_buf + buf_len);
        row_mysql_store_col_in_innobase_format(
            dfield, buf,
            FALSE, /* MySQL key value format col */
            key_ptr + data_offset, data_len,
            dict_table_is_comp(index->table));
        buf += data_len;
    }
The buffer size is now calculated with the formula
MAX_KEY_LENGTH + MAX_REF_PARTS*2. This properly takes into account
the extra bytes needed to store the length of each column. An index
can contain at most MAX_REF_PARTS columns, and for each column
2 bytes are needed to store the length.
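Illustrative sizing per this formula (a sketch; the constant values
shown are assumptions, not quoted from the server headers):

    enum {
        MAX_KEY_LENGTH = 3072,  /* assumed maximum key length */
        MAX_REF_PARTS  = 16     /* assumed maximum columns per index */
    };

    /* 2 length bytes per key column, on top of the key data itself */
    static unsigned char srch_key_val1[MAX_KEY_LENGTH + MAX_REF_PARTS * 2];
    static unsigned char srch_key_val2[MAX_KEY_LENGTH + MAX_REF_PARTS * 2];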
rb://1238 approved by Marko and Vasil Dimov.
BY A CONCURRENT TRANSACTION
The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other
index we did a non-locking (consistent) read.
The patch introduces a ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(). It calls the
base class handler::clone(), performs any additional operations
required, and sets ha_innobase->prebuilt->select_lock_type
correctly.
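A sketch of the new member function as described (the surrounding
class definitions are omitted and the signature is approximate):

    handler*
    ha_innobase::clone(const char* name, MEM_ROOT* mem_root)
    {
        ha_innobase* new_handler = static_cast<ha_innobase*>(
            handler::clone(name, mem_root));

        if (new_handler) {
            /* Keep both handlers of a ROR-merged scan in the same
            read mode (locking vs. consistent). */
            new_handler->prebuilt->select_lock_type
                = prebuilt->select_lock_type;
        }

        return new_handler;
    }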
rb://1060 approved by Marko
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. don't forget to my_ok it.
sql/sql_table.cc:
small cleanup
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update .FRM. If this happened, InnoDB
and the server got out of sync with regard to which indexes
actually existed. Therefore the patch for Bug#11815600 again
disabled concurrent reads.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(). The former for creating indexes without
making them visible and the latter for committing (i.e. making
visible) new indexes or reverting the changes.
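From the caller's side, the two-phase flow can be sketched as follows
(signatures are approximate and error handling is elided):

    int
    add_index_two_phase(TABLE* table, KEY* key_info, uint num_keys)
    {
        handler_add_index* add_ctx = NULL;

        /* Phase 1: build the index; concurrent reads still allowed. */
        int error = table->file->add_index(table, key_info, num_keys,
                                           &add_ctx);

        /* ... lock upgrade and .FRM update happen here ... */

        /* Phase 2: publish the new index, or revert on failure. */
        return table->file->final_add_index(add_ctx,
                                            /* commit */ error == 0);
    }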
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
- Added a lot of code comments
- Updated get_best_ror_intersect() to prefer index scans on non-clustered keys over clustered keys.
- Use HA_CLUSTERED_INDEX to decide whether one should use HA_MRR_INDEX_ONLY
- For the test of whether to use an index or filesort to resolve ORDER BY, use the HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered()
- Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide whether ALTER TABLE ... ORDER BY will have any effect.
sql/ha_partition.h:
Added a comment warning that the code is unsafe to use with multiple storage engines at the same time
sql/handler.h:
Added HA_CLUSTERED_INDEX.
Documented primary_key_is_clustered()
sql/opt_range.cc:
Added code comments
Updated get_best_ror_intersect() to ignore clustered keys.
Optimized away cpk_scan_used and one instance of current_thd (simpler code)
Use HA_CLUSTERED_INDEX to decide whether one should use HA_MRR_INDEX_ONLY
sql/sql_select.cc:
Changed comment to #ifdef
For the test of whether to use an index or filesort to resolve ORDER BY, use the HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered()
(The change is smaller than it looks because of an indentation change)
sql/sql_table.cc:
Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide whether ALTER TABLE ... ORDER BY will have any effect.
storage/innobase/handler/ha_innodb.h:
Added support for HA_CLUSTERED_INDEX
storage/innodb_plugin/handler/ha_innodb.cc:
Added support for HA_CLUSTERED_INDEX
storage/xtradb/handler/ha_innodb.cc:
Added support for HA_CLUSTERED_INDEX
CMakeLists.txt: Remove the checks for mysql_storage_engine.cmake
and MYSQL_VERSION_ID.
ha_innodb.cc, ha_innodb.h: Remove the checks for MYSQL_VERSION_ID.
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze(), instead
of ::info(), call ::info_low(true => called from analyze).
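The resulting call structure, sketched (flags and return handling are
simplified and should not be read as the exact patch):

    int ha_innobase::info(uint flag)
    {
        /* Regular statistics request: not called from ::analyze(). */
        return info_low(flag, /* called_from_analyze */ false);
    }

    int ha_innobase::analyze(THD* thd, HA_CHECK_OPT* check_opt)
    {
        /* ANALYZE TABLE: tell ::info_low() to recompute statistics. */
        if (info_low(HA_STATUS_TIME | HA_STATUS_CONST
                     | HA_STATUS_VARIABLE,
                     /* called_from_analyze */ true)) {
            return HA_ADMIN_FAILED;
        }
        return HA_ADMIN_OK;
    }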
Approved by: Jimmy (rb://487)
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key
checks with SET foreign_key_checks=0 before the truncate. Alternatively, if
foreign key checks are necessary, please use a DELETE statement
without a WHERE condition.
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated in the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
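The new method can be sketched as follows (cf. sql/handler.h in this
patch; the default body shown here is an assumption, not the exact
code):

    class handler {
    public:
        /*
          Quickly remove all rows from a table. Invoked under an
          exclusive metadata lock, so only one instance of the
          table is open. Unlike delete_all_rows(), no count of
          affected rows is maintained.
        */
        virtual int truncate() { return HA_ERR_WRONG_COMMAND; }
        /* ... */
    };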
mysql-test/suite/innodb/t/innodb-truncate.test:
Add test cases for truncate and foreign key checks.
Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
FK is not necessary, test is related to auto-increment.
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
Use delete instead of truncate, test is used to check
the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/t/mdl_sync.test:
Modify test case to reflect and ensure that truncate takes
an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
sql/ha_partition.cc:
Reorganize the various truncate methods. delete_all_rows is now
passed directly to the underlying engines, as is truncate. The
code responsible for truncating individual partitions is moved
to ha_partition::truncate_partition, which is invoked when an
ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.
Since the partition truncate can no longer be invoked via
delete, the bitmap operations are not necessary anymore. The
explicit reset of the auto-increment value is also removed,
as the underlying engines are now responsible for resetting
the value.
sql/handler.cc:
Wire up the handler truncate method.
sql/handler.h:
Introduce and document the truncate handler method. It assumes
certain use cases of delete_all_rows.
Add method to retrieve the list of foreign keys referencing a
table. Method is used to avoid truncating tables that are
parent in a foreign key relationship.
sql/share/errmsg-utf8.txt:
Add error message for truncate and FK.
sql/sql_lex.h:
Introduce a flag so that the partition engine can detect when
a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
Change the truncate table implementation to use the new truncate
handler method and to not rely on row-by-row delete anymore.
The truncate handler method is always invoked with an exclusive
metadata lock. Also, it is no longer possible to truncate a
table that is a parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc:
Rename method as the description indicates that in the future
this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
Implement truncate as no operation for the blackhole engine in
order to remain compatible with older releases.
storage/federated/ha_federated.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/heap/ha_heap.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/innobase/handler/ha_innodb.cc:
Rename delete_all_rows to truncate. InnoDB now does truncate
under an exclusive metadata lock.
Introduce and reorganize the methods used to retrieve the list
of foreign keys referenced by or referencing a table.
storage/myisammrg/ha_myisammrg.cc:
Introduce truncate method that invokes delete_all_rows.
This is required in order to remain compatible with earlier
releases where truncate would resort to a row-by-row delete.
errors
In the fix of BUG#39934 in 5.1-rep+3, errors are generated when
binlog_format=row and a statement modifies a table restricted to
statement-logging (ER_BINLOG_ROW_MODE_AND_STMT_ENGINE); or if
binlog_format=statement and a statement modifies a table restricted to
row-logging (ER_BINLOG_STMT_MODE_AND_ROW_ENGINE).
However, some DDL statements that lock tables (e.g. ALTER TABLE,
CREATE INDEX and CREATE TRIGGER) were causing spurious errors,
although no rows might be written to the binary log.
To fix the problem, we tag statements that may generate
rows in the binary log, so that the error and warning messages
are only printed out when the appropriate conditions hold and rows
might be changed.
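A sketch of the flag-based check (the bit position is an assumed
value; the helper mirrors the style of sql_parse.cc):

    #define CF_CAN_GENERATE_ROW_EVENTS (1U << 9)    /* assumed bit */

    /* Check whether the current statement may write rows to the
    binary log, i.e. whether its command flag is set. */
    bool sqlcom_can_generate_row_events(const THD* thd)
    {
        return sql_command_flags[thd->lex->sql_command]
            & CF_CAN_GENERATE_ROW_EVENTS;
    }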
sql/log_event.cc:
Reorganized the Query_log_event constructor based on the
CF_CAN_GENERATE_ROW_EVENTS flag, so that any statement
that has the associated flag set goes through a cache
before being written to the binary log.
sql/share/errmsg-utf8.txt:
Improved the error message ER_BINLOG_UNSAFE_MIXED_STATEMENT according to Paul's
suggestion.
sql/sql_class.cc:
Created a hook to be used by InnoDB that checks whether a statement
may write rows to the binary log; in other words, whether it has
the CF_CAN_GENERATE_ROW_EVENTS flag set.
sql/sql_class.h:
Defined the CF_CAN_GENERATE_ROW_EVENTS flag.
sql/sql_parse.cc:
Updated the sql_command_flags and added a function to check the
CF_CAN_GENERATE_ROW_EVENTS.
sql/sql_parse.h:
Added a function to check the CF_CAN_GENERATE_ROW_EVENTS.
storage/innobase/handler/ha_innodb.cc:
Added a call to the hook thd_generates_rows().
storage/innobase/handler/ha_innodb.h:
Defined an external reference to the hook thd_generates_rows().
Post-merge fixes: Remove the MYSQL_VERSION_ID checks, because they only
apply to the InnoDB Plugin. Fix potential race condition accessing
trx->op_info and trx->detailed_error.
------------------------------------------------------------
revno: 3466
revision-id: marko.makela@oracle.com-20100514130815-ym7j7cfu88ro6km4
parent: marko.makela@oracle.com-20100514130228-n3n42nw7ht78k0wn
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: mysql-5.1-innodb2
timestamp: Fri 2010-05-14 16:08:15 +0300
message:
Make the InnoDB FOREIGN KEY parser understand multi-statements. (Bug #48024)
Also make InnoDB think that /*/ only starts a comment. (Bug #53644).
This fixes the bugs in the InnoDB Plugin.
ha_innodb.h: Use trx_query_string() instead of trx_query() when
available (MySQL 5.1.42 or later).
innobase_get_stmt(): New function, to retrieve the currently running
SQL statement.
struct trx_struct: Remove mysql_query_str. Use innobase_get_stmt() instead.
dict_strip_comments(): Add and observe the parameter sql_length. Treat
/*/ as the start of a comment.
dict_create_foreign_constraints(), row_table_add_foreign_constraints():
Add the parameter sql_length.
Also make InnoDB think that /*/ only starts a comment. (Bug #53644).
struct trx_struct: Add mysql_query_len.
ha_innodb.cc: Use trx_query_string() instead of trx_query() and
initialize trx->mysql_query_len.
INNOBASE_COPY_STMT(thd, trx): New macro, to initialize
trx->mysql_query_str and trx->mysql_query_len.
dict_strip_comments(): Add and observe the parameter sql_length. Treat
/*/ as the start of a comment.
dict_create_foreign_constraints(), row_table_add_foreign_constraints():
Add the parameter sql_length.
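The corrected scan can be sketched as follows: after the opening
delimiter, the search for the terminator must start at the next byte,
so that /*/ opens a comment rather than opening and immediately
closing one (illustrative only, not the patch's exact code):

    static const char*
    skip_comment(const char* ptr, const char* end)
    {
        /* Precondition: ptr points at the opening slash-star. */
        for (ptr += 2; ptr + 1 < end; ptr++) {
            if (ptr[0] == '*' && ptr[1] == '/') {
                return ptr + 2; /* past the terminator */
            }
        }
        return end;     /* unterminated: strip to end of input */
    }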
Detailed revision comments:
r6424 | marko | 2010-01-12 12:22:19 +0200 (Tue, 12 Jan 2010) | 16 lines
branches/5.1: In innobase_initialize_autoinc(), do not attempt to read
the maximum auto-increment value from the table if
innodb_force_recovery is set to at least 4, because writes are
disabled at that level. (Bug #46193)
innobase_get_int_col_max_value(): Move the function definition before
ha_innobase::innobase_initialize_autoinc(), because that function now
calls this function.
ha_innobase::innobase_initialize_autoinc(): Change the return type to
void. Do not attempt to read the maximum auto-increment value from
the table if innodb_force_recovery is set to at least 4. Issue
ER_AUTOINC_READ_FAILED to the client when the auto-increment value
cannot be read.
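A sketch of the guard (SRV_FORCE_NO_IBUF_MERGE corresponds to level 4
on InnoDB's force-recovery scale; the body is simplified and the
helper call is illustrative):

    void
    ha_innobase::innobase_initialize_autoinc()
    {
        ulonglong auto_inc = 0;

        if (srv_force_recovery >= SRV_FORCE_NO_IBUF_MERGE) {
            /* Writes are disabled: leave auto_inc at 0; a read of
            the value will raise ER_AUTOINC_READ_FAILED. */
        } else {
            /* ... read 1 + MAX(autoinc column) from the index ... */
        }

        dict_table_autoinc_initialize(prebuilt->table, auto_inc);
    }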
rb://144 by Sunny, revised by Marko
Detailed revision comments:
r6422 | marko | 2010-01-12 11:34:27 +0200 (Tue, 12 Jan 2010) | 3 lines
branches/5.1: Non-functional change:
Make innobase_get_int_col_max_value() a static function.
It does not access any fields of class ha_innobase.