large table gives server crash": make sure that when a MyISAM temporary
table is created for a cursor, it's created in the cursor's memory root,
not the memory root of the current query.
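A minimal sketch of the lifetime issue, using a simplified arena in place of the server's real MEM_ROOT machinery (all names below are hypothetical):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Hypothetical simplified memory root: everything allocated from it is
    // freed in one shot when its owner (a query or a cursor) goes away.
    struct MemRootSketch {
        std::vector<void*> blocks;
        void* alloc(size_t n) { void* p = std::malloc(n); blocks.push_back(p); return p; }
        ~MemRootSketch() { for (void* p : blocks) std::free(p); }
    };

    struct CursorSketch {
        MemRootSketch mem_root;   // lives exactly as long as the cursor
        void* tmp_table;          // materialized MyISAM temporary table
    };

    // Allocating the temporary table from the statement's root would free it
    // at end of statement while the cursor still points at it; allocating
    // from the cursor's own root makes the lifetimes match.
    void materialize_cursor(CursorSketch* cursor, size_t bytes) {
        cursor->tmp_table = cursor->mem_root.alloc(bytes);
    }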
A handlerton array is now created instead of using sys_table_types_st. All storage engines can now have init functions, and the giant #ifdef blocks are gone from startup. Not completely clean yet; handlertons will next be merged with sys_table_types. Federated and archive now have real cleanup if their inits fail.
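A rough sketch of the shape of the change, with stripped-down, hypothetical types (the real handlerton struct has many more members): engines register an init hook in one array, so startup iterates the array instead of a wall of #ifdef blocks.

    #include <cstdio>

    // Hypothetical, stripped-down handlerton: just a name and an init hook.
    struct HandlertonSketch {
        const char* name;
        bool (*init)();          // returns false on failure
        bool initialized;
    };

    static bool myisam_init()    { return true; }
    static bool federated_init() { return true; }   // real code cleans up on failure
    static bool archive_init()   { return true; }

    static HandlertonSketch handlertons[] = {
        { "MyISAM",    myisam_init,    false },
        { "FEDERATED", federated_init, false },
        { "ARCHIVE",   archive_init,   false },
    };

    // One loop at startup replaces per-engine #ifdef blocks.
    void init_storage_engines() {
        for (HandlertonSketch& h : handlertons) {
            h.initialized = h.init();
            if (!h.initialized)
                std::fprintf(stderr, "init failed for %s, engine disabled\n", h.name);
        }
    }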
"SELECT ... FOR UPDATE executed as consistent read inside LOCK TABLES"
Do not discard the lock_type information, as handler::start_stmt() may require it.
(fixed by Antony)
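A hedged sketch of the idea (the enum values and member names are illustrative, not the server's thr_lock_type or handler API): the lock type requested by the statement is kept and handed to start_stmt(), so the engine can tell a locking read from a plain consistent read under LOCK TABLES.

    // Illustrative lock types; the server's thr_lock_type has more values.
    enum LockTypeSketch { TL_READ_SKETCH, TL_WRITE_SKETCH };

    struct HandlerSketch {
        // Called once per statement under LOCK TABLES.  Keeping the requested
        // lock type lets the engine start a locking read for
        // SELECT ... FOR UPDATE instead of silently downgrading to a
        // consistent (snapshot) read.
        void start_stmt(LockTypeSketch lock_type) {
            if (lock_type == TL_WRITE_SKETCH) {
                /* begin a locking read / acquire row locks */
            } else {
                /* begin a consistent read */
            }
        }
    };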
The alias structure is now a bit simpler and just uses a pointer to the correct replacement name. The giant case statement should go away in the next patch.
cursors. This should fix Bug#11813 when the InnoDB part is in
(tested with a draft patch).
The idea of the patch is that if a storage engine supports
consistent read views, we open one when opening a cursor,
set it as the active view when fetching from the cursor, and close
it together with the cursor.
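A hedged sketch of that lifecycle (the engine hooks shown are hypothetical placeholders, not the actual handlerton interface):

    // Hypothetical engine hooks standing in for the storage-engine API.
    struct ReadViewSketch;
    struct EngineSketch {
        bool supports_read_views;
        ReadViewSketch* (*open_read_view)();
        void (*set_active_read_view)(ReadViewSketch*);
        void (*close_read_view)(ReadViewSketch*);
    };

    struct ReadCursorSketch {
        EngineSketch* engine;
        ReadViewSketch* view = nullptr;

        void open()  { if (engine->supports_read_views) view = engine->open_read_view(); }
        void fetch() { if (view) engine->set_active_read_view(view); /* then read rows */ }
        void close() { if (view) { engine->close_read_view(view); view = nullptr; } }
    };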
The idea of the patch
is that every cursor gets its own lock id for table level locking.
Thus cursors are protected from updates performed within the same
connection. Additionally, a list of transient cursors (those that must
be closed at commit) is maintained, and all transient cursors are closed
when necessary. Lastly, this patch adds support for deadlock
timeouts to TLL locking when using cursors.
+ post-review fixes.
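A rough sketch of the bookkeeping described above (types and names are illustrative only):

    #include <list>

    // Illustrative only: each cursor carries its own lock owner id, so the
    // table-level lock manager treats it as a separate owner even inside
    // the same connection, protecting it from that connection's own updates.
    struct CursorLockSketch {
        unsigned long lock_id;     // distinct per cursor, not per connection
        bool transient;            // must be closed at transaction commit
        void close() { /* release table locks held under lock_id */ }
    };

    struct ConnectionSketch {
        std::list<CursorLockSketch*> transient_cursors;

        // Called at COMMIT: transient cursors cannot survive the transaction.
        void close_transient_cursors() {
            for (CursorLockSketch* c : transient_cursors)
                c->close();
            transient_cursors.clear();
        }
    };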
- Added better error messages when trying to open a table that can't be discovered or unpacked. The most likely cause is that the table has no frm data, probably because it was created from NdbApi or is an NDB system table.
- Separated the functionality that was in ha_create_table_from_engine into two functions: one that checks whether the table exists and one that tries to create the table from the engine.
Post-review version.
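A hedged outline of that split (the function names below follow the message but carry a _sketch suffix; bodies and return codes are placeholders, not the server's):

    // Placeholder return codes; the server uses its own error numbers.
    enum { SKETCH_OK = 0, SKETCH_NOT_FOUND = 1, SKETCH_NO_FRM_DATA = 2 };

    // Step 1: only ask the engine whether the table exists.
    int ha_check_if_table_exists_sketch(const char* db, const char* name, bool* exists) {
        *exists = /* query the engine's dictionary, e.g. NDB */ false;
        return SKETCH_OK;
    }

    // Step 2: fetch the serialized .frm data from the engine and unpack it.
    // If the engine has no frm data (table created via NdbApi, or an NDB
    // system table), report that specifically instead of a generic failure.
    int ha_create_table_from_engine_sketch(const char* db, const char* name) {
        const void* frm_data = nullptr;  // would be read from the engine
        if (frm_data == nullptr)
            return SKETCH_NO_FRM_DATA;   // "table can't be discovered/unpacked"
        /* unpack frm_data and create the local table definition */
        return SKETCH_OK;
    }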
Added a condition for MERGE tables: these do not declare unique
indexes, but every key could be a unique key on an underlying
MyISAM table. So get the maximum key length for MERGE tables
instead of the maximum unique key length. This is used for
buffer allocation in write_record().
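A small sketch of that buffer-sizing rule (the member names are illustrative, not the exact TABLE fields):

    // Illustrative table metadata.
    struct TableSketch {
        bool is_merge;               // MRG_MyISAM table
        unsigned max_key_length;     // longest key, unique or not
        unsigned max_unique_length;  // longest unique key (0 for MERGE: none declared)
    };

    // write_record() needs a key buffer big enough for duplicate-key handling.
    // MERGE declares no unique keys, but any key may be unique on an underlying
    // MyISAM table, so fall back to the overall maximum key length.
    unsigned key_buffer_size(const TableSketch& t) {
        return t.is_merge ? t.max_key_length : t.max_unique_length;
    }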
Count null_bits separately from field offsets and adjust them in case of primary key parts.
(Previously a CREATE TABLE with many null fields that were part of a primary key caused MySQL to miscount the number of bytes needed to store the null bits.)
This is a more complete bug fix for #6236.
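A minimal sketch of the corrected accounting (simplified; the server also reserves bits for other purposes, and the field/flag names here are illustrative):

    // Count NULL bits separately from field offsets: fields that become part
    // of the primary key are stored NOT NULL, so they must not contribute a
    // null bit even if declared nullable in the CREATE TABLE.
    struct FieldSketch {
        bool maybe_null;
        bool in_primary_key;
    };

    unsigned null_bytes_needed(const FieldSketch* fields, unsigned count) {
        unsigned null_bits = 0;
        for (unsigned i = 0; i < count; i++)
            if (fields[i].maybe_null && !fields[i].in_primary_key)
                null_bits++;
        return (null_bits + 7) / 8;   // round up to whole bytes
    }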
Semi-synchronous replication for InnoDB type tables. Before telling the client that a commit has been processed, wait until the replication thread has returned from my_net_send(), where it sends the binlog to the slave. Note that TCP/IP, even with the TCP_NODELAY option, does not guarantee that the slave has RECEIVED the data; this is just a heuristic at the moment. It is useful in failover: in almost all cases, every transaction that has returned from the commit has been sent to and processed by the slave, which makes failover to the slave simpler if the master crashes. The code does not work yet as is, because MySQL should call innobase_report_binlog_offset_and_commit() in a commit; we will most probably return that call to 5.0.x, to make InnoDB Hot Backup and group commit work again, since the XA code broke them temporarily in 5.0.3.
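A hedged sketch of the commit-side wait described above, as a condition-variable pattern (the names and the exact handshake with the replication thread are illustrative, not the server's implementation):

    #include <condition_variable>
    #include <mutex>

    // Illustrative semi-sync state shared between the committing session and
    // the replication (binlog sender) thread.
    struct SemiSyncSketch {
        std::mutex m;
        std::condition_variable sent;
        unsigned long long last_sent_offset = 0;   // binlog offset handed to my_net_send()

        // Replication thread: after my_net_send() returns for this chunk.
        void report_sent(unsigned long long offset) {
            { std::lock_guard<std::mutex> g(m); last_sent_offset = offset; }
            sent.notify_all();
        }

        // Committing session: do not reply OK to the client until the binlog
        // up to this transaction's offset has been handed to the network layer.
        // Note: "sent" only, not "received by the slave".
        void wait_until_sent(unsigned long long my_offset) {
            std::unique_lock<std::mutex> g(m);
            sent.wait(g, [&] { return last_sent_offset >= my_offset; });
        }
    };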
Fixed warnings reported by valgrind for sum_distinct.test
Enable buffered record reads after filesort for InnoDB tables with a short primary key
Enabled sort-with-data for MyISAM temporary files
Windows to call CreateFileMapping() with correct arguments, and
propagating the introduction of query_id_t to everywhere query ids are
passed around. (Bug #8826)
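For the query_id_t part, a small hedged sketch of what propagating a single id type looks like (the typedef mirrors the message; the surrounding functions are purely illustrative):

    // One dedicated type for query ids everywhere they are passed around,
    // instead of a mix of integer widths that can silently truncate.
    typedef unsigned long long query_id_t;   // illustrative stand-in for the server's typedef

    // Illustrative call chain: every layer takes and returns query_id_t.
    static query_id_t global_query_id = 0;
    query_id_t next_query_id() { return ++global_query_id; }
    void set_statement_query_id(query_id_t id) { (void)id; /* store on the statement */ }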