Bug #31956 auto increment bugs in MySQL Cluster: Added utility method and constant for internal prefetch default
ndb_auto_increment.result:
BitKeeper file /home/marty/MySQL/mysql-5.0-ndb/mysql-test/r/ndb_auto_increment.result
mysqld.cc:
Bug #25176 Trying to set ndb_autoincrement_prefetch_sz always fails: Changed pointer to max value
Bug #31956 auto increment bugs in MySQL Cluster: Changed meaning of ndb_autoincrement_prefetch_sz to specify prefetch between statements, changed default to 1 (with internal prefetch to at least 32 inside a statement)
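A minimal sketch of the new semantics (the helper and constant are illustrative, not the actual ha_ndbcluster.cc code): the variable now only sizes the cache kept between statements, while an internal floor keeps round trips cheap within a statement.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative constant: inside a statement, NDB still prefetches at
// least this many values per round trip, regardless of the variable.
static const unsigned NDB_AUTO_INCREMENT_MIN_PREFETCH = 32;

// ndb_autoincrement_prefetch_sz now only controls how many values are
// cached between statements (new default: 1), so connections no longer
// hold large unused ranges of the sequence across statements.
static unsigned effective_prefetch(unsigned prefetch_sz, bool inside_statement)
{
  return inside_statement
    ? std::max(prefetch_sz, NDB_AUTO_INCREMENT_MIN_PREFETCH)
    : prefetch_sz;
}

int main()
{
  std::printf("between statements: %u\n", effective_prefetch(1, false)); // 1
  std::printf("inside a statement: %u\n", effective_prefetch(1, true));  // 32
  return 0;
}
```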
ndb_insert.test, ndb_insert.result:
Moved auto_increment tests to ndb_auto_increment.test
ndb_auto_increment.test:
BitKeeper file /home/marty/MySQL/mysql-5.0-ndb/mysql-test/t/ndb_auto_increment.test
ha_ndbcluster.cc:
Bug #31956 auto increment bugs in MySQL Cluster: Changed meaning of ndb_autoincrement_prefetch_sz to specify prefetch between statements, changed default to 1 (with internal prefetch to at least 32 inside a statement), added handling of updates of pk/unique key with auto_increment
Bug #32055 Cluster does not handle auto inc correctly with insert ignore statement
In certain cases, AFTER UPDATE/DELETE triggers on NDB tables that referenced
the subject table didn't see the results of the operation that caused their
invocation. In other words, an AFTER trigger invoked as a result of the update
(or deletion) of a particular row saw the version of that row from before the
update (or deletion).
The problem occurred because in those cases the NDB handler postponed the
actual update/delete operations in order to perform them later as one batch.
This fix solves the problem by disabling that optimization for a particular
operation if the subject table has an AFTER trigger defined for it.
To achieve this, two new flags are introduced for the handler::extra() method:
HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
These are passed to the handler if AFTER DELETE/UPDATE triggers exist during a
statement that can potentially generate calls to delete_row()/update_row().
This includes multi_delete/multi_update statements as well as INSERT statements
that delete/update rows as part of an ON DUPLICATE KEY UPDATE clause, as
sketched below.
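A self-contained sketch of the shape of the fix; the struct and helper names are stand-ins for the real server types, while the two flag names come from the change itself:

```cpp
#include <cstdio>

// Stand-ins for the real server types; only the shape of the fix is shown.
enum ha_extra_function
{
  HA_EXTRA_DELETE_CANNOT_BATCH, // new flag: execute deletes immediately
  HA_EXTRA_UPDATE_CANNOT_BATCH  // new flag: execute updates immediately
};

struct handler_sketch
{
  bool batch_deletes;
  bool batch_updates;
  handler_sketch() : batch_deletes(true), batch_updates(true) {}
  void extra(ha_extra_function op)
  {
    if (op == HA_EXTRA_DELETE_CANNOT_BATCH) batch_deletes = false;
    if (op == HA_EXTRA_UPDATE_CANNOT_BATCH) batch_updates = false;
  }
};

struct table_sketch
{
  handler_sketch file;
  bool has_after_delete_trigger;
  bool has_after_update_trigger;
};

// Called while setting up any statement that may reach
// delete_row()/update_row(): an AFTER trigger must see the row already
// modified, so batching is switched off for that operation only.
static void prepare_row_operations(table_sketch &t)
{
  if (t.has_after_delete_trigger)
    t.file.extra(HA_EXTRA_DELETE_CANNOT_BATCH);
  if (t.has_after_update_trigger)
    t.file.extra(HA_EXTRA_UPDATE_CANNOT_BATCH);
}

int main()
{
  table_sketch t;
  t.has_after_delete_trigger = true;
  t.has_after_update_trigger = false;
  prepare_row_operations(t);
  std::printf("batch deletes: %d, batch updates: %d\n",
              t.file.batch_deletes ? 1 : 0, t.file.batch_updates ? 1 : 0);
  return 0;
}
```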
Bug #18184 SELECT ... FOR UPDATE does not work..: New test case
ha_ndbcluster.h, ha_ndbcluster.cc, NdbConnection.hpp:
Fix for Bug #21059 Server crashes on join query with large dataset with NDB tables: Releasing operations for each intermediate batch, before the next call to trans->execute(NoCommit);
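The loop shape of this fix, as a minimal sketch; the type and method names are illustrative stand-ins, not the exact NdbConnection/NdbTransaction API:

```cpp
#include <cstdio>

// Stand-in for the transaction object.
struct trans_sketch
{
  int attached_ops;
  trans_sketch() : attached_ops(0) {}
  int execute_no_commit()                 // ships the current batch
  { std::printf("execute batch of %d ops\n", attached_ops); return 0; }
  void release_completed_operations()     // the fix: per-batch cleanup
  { attached_ops = 0; }
};

// Without the release, every operation of a large join stays attached
// to the transaction until commit, so memory use grows with the data
// set; with it, each intermediate batch is freed before the next one.
static int run_batches(trans_sketch &trans, int batches, int ops_per_batch)
{
  for (int i = 0; i < batches; i++)
  {
    trans.attached_ops += ops_per_batch;  // ... define next batch ...
    if (trans.execute_no_commit() != 0)
      return -1;
    trans.release_completed_operations();
  }
  return 0;
}

int main() { trans_sketch t; return run_batches(t, 3, 1000); }
```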
Bug #17257 ndb, update fails for inner joins if tables do not have Primary Key
change: the area allocated by setValue() may not still be around later; store the hidden key in a special member variable instead
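A sketch of the idea behind this change (the member and helper names are hypothetical): NDB tables without an explicit primary key get a hidden 8-byte key, and keeping only a pointer into a transient buffer breaks once that buffer is reused, so the handler takes its own copy up front.

```cpp
#include <cstring>

struct handler_sketch
{
  unsigned long long m_hidden_key; // owned copy, safe across batching

  // Copy the hidden key out of the transient buffer instead of
  // remembering a pointer into memory the NDB API may reuse.
  void remember_hidden_key(const char *transient_buf)
  {
    std::memcpy(&m_hidden_key, transient_buf, sizeof(m_hidden_key));
  }
};

int main()
{
  char buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
  handler_sketch h;
  h.remember_hidden_key(buf);
  std::memset(buf, 0, sizeof(buf)); // buffer reused: the copy survives
  return 0;
}
```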
A handlerton array is now created instead of using sys_table_types_st. All storage engines can now have init functions, and the giant #ifdef blocks are gone from startup. Not completely clean yet; handlertons will next be merged with sys_table_types. Federated and Archive now have real cleanup if their inits fail.
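A minimal sketch of the new startup shape; the struct, hook names, and return conventions are illustrative only:

```cpp
#include <cstdio>

// Stand-in handlerton; the real struct carries many more members.
struct handlerton_sketch
{
  const char *name;
  int (*init)(); // may be 0; nonzero return means init failed
};

static int archive_init()   { return 0; }
static int federated_init() { return 1; } // pretend this engine fails

static handlerton_sketch engines[] =
{
  { "ARCHIVE",   archive_init },
  { "FEDERATED", federated_init },
};

int main()
{
  // One loop over the array replaces the per-engine #ifdef blocks; an
  // engine whose init fails is cleaned up and disabled, not fatal.
  const int n = sizeof(engines) / sizeof(engines[0]);
  for (int i = 0; i < n; i++)
  {
    handlerton_sketch &e = engines[i];
    int failed = e.init ? e.init() : 0;
    std::printf("%s: %s\n", e.name,
                failed ? "init failed, disabled" : "enabled");
  }
  return 0;
}
```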
"SELECT ... FOR UPDATE executed as consistent read inside LOCK TABLES"
Do not discard lock_type information, as handler::start_stmt() may require this knowledge.
(fixed by Antony)
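A sketch of the idea, with stand-in types and enum values: under LOCK TABLES, external_lock() is not called per statement, so start_stmt() is the only place the handler can learn that SELECT ... FOR UPDATE wants an exclusive read rather than a consistent read.

```cpp
enum lock_type_sketch { TL_READ, TL_WRITE_SKETCH };

struct handler_sketch
{
  lock_type_sketch m_lock_type;
  handler_sketch() : m_lock_type(TL_READ) {}

  int start_stmt(lock_type_sketch lock_type)
  {
    m_lock_type = lock_type; // keep it instead of discarding it
    return 0;
  }

  bool use_exclusive_read() const { return m_lock_type != TL_READ; }
};

int main()
{
  handler_sketch h;
  h.start_stmt(TL_WRITE_SKETCH); // SELECT ... FOR UPDATE inside LOCK TABLES
  return h.use_exclusive_read() ? 0 : 1;
}
```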
- Added better error messages when trying to open a table that can't be discovered or unpacked. The most likely cause is that the table has no frm data, probably because it was created from NdbApi or is an NDB system table.
- Separated the functionality that was in ha_create_table_from_engine into two functions: one that checks whether the table exists, and another that tries to create the table from the engine.
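A sketch of the resulting split; the function names and result codes are illustrative stand-ins for the two new entry points:

```cpp
#include <cstdio>

enum discover_result { TABLE_ABSENT, TABLE_EXISTS_NO_FRM, TABLE_OK };

// Step 1: existence check only (stub body for illustration). A table
// created from NdbApi, or an NDB system table, can exist in the engine
// without any frm data.
static discover_result check_table_exists_in_engine(const char *, const char *)
{
  return TABLE_EXISTS_NO_FRM;
}

// Step 2: fetch the frm data and build the local table definition
// (stub body for illustration).
static int create_table_from_engine(const char *, const char *)
{
  return 0;
}

// Splitting the steps lets each failure produce a specific message
// instead of one generic "cannot open table" error.
static int open_with_discovery(const char *db, const char *name)
{
  switch (check_table_exists_in_engine(db, name))
  {
  case TABLE_ABSENT:
    std::printf("table %s.%s does not exist in the engine\n", db, name);
    return 1;
  case TABLE_EXISTS_NO_FRM:
    std::printf("table %s.%s exists but has no frm data "
                "(created from NdbApi, or an NDB system table?)\n", db, name);
    return 1;
  case TABLE_OK:
    return create_table_from_engine(db, name);
  }
  return 1;
}

int main() { return open_with_discovery("test", "t1"); }
```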