Using more than 16 GB of memory can cause record-pool ptr.i values to overflow.
Fix by splitting memory into two zones: lo (the first 16 GB) and hi (the rest),
where record pools only use zone lo, while data memory, buffers etc. can use any zone.
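For reference, a minimal standalone sketch of the arithmetic behind the 16 GB boundary, assuming the usual 32 KB global page size (the real constants live in Ndbd_mem_manager):

  /* Why 16 GB is the limit, assuming 32 KB global pages
     (the real constants live in Ndbd_mem_manager / ndbd_malloc_impl). */
  #include <stdint.h>
  #include <stdio.h>

  int main()
  {
    const unsigned PAGE_ID_BITS = 19;                 /* page-id part of ptr.i */
    const uint64_t PAGE_SIZE    = 32 * 1024;          /* assumed 32 KB pages   */
    const uint64_t MAX_LO_PAGES = (uint64_t)1 << PAGE_ID_BITS;

    /* 2^19 pages * 32 KB = 16 GB: pages above this cannot be addressed
       by a record-pool ptr.i, hence record pools must stay in ZONE_LO. */
    printf("ZONE_LO limit: %llu GB\n",
           (unsigned long long)((MAX_LO_PAGES * PAGE_SIZE) >> 30));
    return 0;
  }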
storage/ndb/src/kernel/blocks/lgman.cpp:
adapt to the changed Ndbd_mem_manager interface
storage/ndb/src/kernel/vm/Pool.cpp:
Always use ZONE_LO for record pools,
as they use ptr.i == 19-bit page id + 13-bit page index
storage/ndb/src/kernel/vm/ndbd_malloc_impl.cpp:
Add zones to Ndbd_mem_manager
ZONE_LO = lower 16 GB
ZONE_HI = rest
storage/ndb/src/kernel/vm/ndbd_malloc_impl.hpp:
Add zones to Ndbd_mem_manager
ZONE_LO = lower 16 GB
ZONE_HI = rest
During TC take-over (after a node failure) the new TC builds up a new transaction state
and commits operations according to this state.
However, in the rebuilt state the operations do not have to be in the same order as in the "real" state.
In the multi-update case this means that operations can be committed in an "incorrect" order,
i.e. update A, delete A, insert A is normally committed in the same order as it was prepared,
but can now be committed in any order.
This patch changes the TUP handling of these out-of-order commits; the previous implementation
could confuse the TUX triggers.
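A minimal sketch of the idea with hypothetical, heavily simplified types (the real logic lives in DbtupCommit.cpp against the actual Operationrec/Tuple_header definitions):

  /* Hypothetical, heavily simplified types -- not the real Dbtup code. */
  #include <stdint.h>
  typedef uint32_t Uint32;

  struct Operationrec {        /* one prepared operation on a tuple        */
    Uint32 nextActiveOp;       /* chain of operations on the same tuple    */
    bool   committed;
  };

  struct Tuple_header {        /* tuple header, as stored in DataMemory    */
    Uint32 m_operation_ptr_i;  /* the last ("real") operation on the tuple */
  };

  /* Called once per TUP_COMMIT; after TC take-over these calls may
     arrive in any order within the operation chain. */
  void commit_operation(Tuple_header* tuple_ptr, Uint32 opPtrI,
                        Operationrec* op_pool)
  {
    op_pool[opPtrI].committed = true;

    /* Only the operation recorded on the tuple decides the final tuple
       state; other operations in the chain merely mark themselves as
       committed.  TUX triggers would run here, before any disk
       timeslice can interleave further commits. */
    if (tuple_ptr->m_operation_ptr_i == opPtrI)
    {
      /* apply the end state of the whole chain (update/delete/insert) */
    }
  }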
storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
new method
storage/ndb/src/kernel/blocks/dbtup/DbtupAbort.cpp:
move removeActiveOpList, since it is now only used by DbtupAbort
storage/ndb/src/kernel/blocks/dbtup/DbtupCommit.cpp:
- move TUX trigger execution *before* the disk check, since ops can be committed during a disk timeslice
- allow out-of-order commits and use tuple_ptr->m_operation_ptr_i for determining the "real" commit
(instead of re-ordering operations on the fly, which confused the TUX triggers)
storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp:
use a constant instead of a hard-coded number
storage/ndb/test/run-test/daily-basic-tests.txt:
"old-51" does not yet support --nologging
testcases
storage/ndb/src/kernel/blocks/ERROR_codes.txt:
new error codes
storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
new error codes
storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp:
new error codes
storage/ndb/src/kernel/blocks/dbtup/DbtupCommit.cpp:
remove assert
storage/ndb/test/ndbapi/testNodeRestart.cpp:
new testcase
1) -n Bug34216
Tests a node dying during a multi-op commit.
Very controlled.
2) -n mixedmultiop
Runs several threads of "load" with the same scenario; not very controlled.
storage/ndb/test/run-test/daily-basic-tests.txt:
new testcases
into sama.ndb.mysql.com:/export/space/pekka/ndb/version/my51-ndb
storage/ndb/src/common/debugger/SignalLoggerManager.cpp:
Auto merged
storage/ndb/src/common/debugger/signaldata/ScanTab.cpp:
Auto merged
storage/ndb/src/kernel/vm/pc.hpp:
Auto merged
into sama.ndb.mysql.com:/export/space/pekka/ndb/version/my51-ndb
storage/ndb/src/ndbapi/Ndb.cpp:
Auto merged
storage/ndb/test/ndbapi/testOIBasic.cpp:
Auto merged
make sure to allocate log space and set bits
when doing a delete after a previous update without touching the DD part
mysql-test/suite/ndb/r/ndb_dd_basic.result:
testcase
mysql-test/suite/ndb/t/ndb_dd_basic.test:
testcase
Updated with the new support function from Magnus' push to dbutil
storage/ndb/test/ndbapi/acrt/NdbRepStress.cpp:
Updated with the new support function from Magnus' push to dbutil
into sama.ndb.mysql.com:/export/space/pekka/ndb/version/my51-bug34107
storage/ndb/test/ndbapi/testInterpreter.cpp:
Auto merged
storage/ndb/test/run-test/daily-basic-tests.txt:
ul, fix next
into sama.ndb.mysql.com:/export/space/pekka/ndb/version/my51-bug34107
mysql-test/suite/ndb/r/ndb_condition_pushdown.result:
Auto merged
mysql-test/suite/ndb/t/ndb_condition_pushdown.test:
Auto merged
storage/ndb/include/ndbapi/ndbapi_limits.h:
Auto merged
storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
Auto merged
storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp:
silly stuff
storage/ndb/src/kernel/blocks/dbtup/DbtupStoredProcDef.cpp:
a name was improved in 5.1
storage/ndb/src/ndbapi/ndberror.c:
use the local version due to a huge bogus diff
into perch.ndb.mysql.com:/home/jonas/src/51-ndb
storage/ndb/src/kernel/blocks/backup/Backup.cpp:
Auto merged
storage/ndb/src/kernel/vm/DLHashTable.hpp:
Auto merged
storage/ndb/src/kernel/vm/DLHashTable2.hpp:
Auto merged
storage/ndb/src/kernel/blocks/backup/Backup.hpp:
merge
mysql-test/suite/ndb/r/ndb_dd_basic.result:
bug#34118 hash index trigger disk flag
mysql-test/suite/ndb/t/ndb_dd_basic.test:
bug#34118 hash index trigger disk flag
storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
bug#34118 hash index trigger disk flag
storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp:
bug#34118 hash index trigger disk flag
storage/ndb/src/kernel/blocks/dbtup/DbtupTrigger.cpp:
bug#34118 hash index trigger disk flag
storage/ndb/src/kernel/vm/NdbdSuperPool.cpp:
rename Ndbd_mem_manager::log2 to ndb_log2
storage/ndb/src/kernel/vm/ndbd_malloc_impl.cpp:
rename Ndbd_mem_manager::log2 to ndb_log2
storage/ndb/src/kernel/vm/ndbd_malloc_impl.hpp:
rename Ndbd_mem_manager::log2 to ndb_log2
into sama.ndb.mysql.com:/export/space/pekka/ndb/version/my51-bug31477
storage/ndb/include/ndbapi/Ndb.hpp:
Auto merged
storage/ndb/src/common/util/NdbOut.cpp:
Auto merged
storage/ndb/src/kernel/blocks/dbtux/DbtuxScan.cpp:
Auto merged
storage/ndb/test/ndbapi/testOIBasic.cpp:
Auto merged
storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
mindless merge
storage/ndb/src/kernel/blocks/dbtup/DbtupIndex.cpp:
mindless merge
Changed to use information_schema to check auto_increment
Ndb.cpp:
Bug #33534 Bad performance of INSERTs in auto_incremented tables: save the highest seen value when setting auto_increment fields
ndb_auto_increment.result:
Regenerated result
mysql-test/suite/ndb/r/ndb_auto_increment.result:
Regenerated result
mysql-test/suite/ndb/r/ndb_restore.result:
Changed to use information_schema to check auto_increment
mysql-test/suite/ndb/t/ndb_restore.test:
Changed to use information_schema to check auto_increment
storage/ndb/src/ndbapi/Ndb.cpp:
Bug #33534 Bad performance of INSERTs in auto_incremented tables: save the highest seen value when setting auto_increment fields (see the sketch below)
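A hypothetical sketch of the "save the highest seen value" idea; AutoIncCache and its methods are illustrative only and not the real NDB API:

  /* Hypothetical helper -- illustrative only, not the real NDB API. */
  #include <stdint.h>

  struct AutoIncCache {
    uint64_t highest_seen;
    AutoIncCache() : highest_seen(0) {}

    /* Remember an explicitly supplied auto_increment value. */
    void note_explicit_value(uint64_t v) {
      if (v > highest_seen)
        highest_seen = v;
    }

    /* Next value to hand out; the real code pushes this back into the
       shared tuple-id range via the NDB API when needed. */
    uint64_t next_value() { return ++highest_seen; }
  };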
remove the LCP snapshot from MM tables,
removing the possibility of spurious error 899 on MM tables
storage/ndb/src/kernel/blocks/dbtup/DbtupScan.cpp:
don't run the LCP snapshot on pure MM tables;
this is implemented by not setting frag.m_lcp_scan_op,
which makes TUP_COMMIT do nothing (see the sketch below)
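A hypothetical sketch of this mechanism with simplified structures; names like RNIL_SKETCH, has_disk_attributes and tup_commit_lcp_hook are stand-ins, not the real Dbtup code:

  /* Hypothetical, simplified structures -- not the real Dbtup code. */
  #include <stdint.h>
  typedef uint32_t Uint32;
  static const Uint32 RNIL_SKETCH = 0xffffff00;  /* stand-in "no record" marker */

  struct Fragrecord {
    Uint32 m_lcp_scan_op;            /* "no record" when no LCP scan is active */
    bool   has_disk_attributes;
  };

  void start_lcp_scan(Fragrecord& frag, Uint32 scanOpPtrI)
  {
    if (!frag.has_disk_attributes)
    {
      /* pure MM table: never assign an LCP scan op, which also closes
         the window for spurious error 899 during the scan */
      frag.m_lcp_scan_op = RNIL_SKETCH;
      return;
    }
    frag.m_lcp_scan_op = scanOpPtrI;
  }

  void tup_commit_lcp_hook(const Fragrecord& frag)
  {
    if (frag.m_lcp_scan_op == RNIL_SKETCH)
      return;                        /* nothing to do for MM-only tables */
    /* ... take the LCP copy of the row for disk-data tables ... */
  }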
Add a check whether setting an auto_increment field will actually change its next value
before retrieving the tuple_id_range lock. This avoids hitting locks when updating
auto_increment values to a value lower than the current maximum, which is useful when
loading a table with auto_increment by inserting the highest-numbered primary keys first
and then proceeding backwards to the first; that can then be achieved with the same
performance as a normal insert without auto_increment.
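A minimal sketch of the check with assumed, simplified names (the real logic sits in ha_ndbcluster.cc and Ndb.cpp and works against the cached tuple-id range):

  /* Simplified/assumed names -- the real logic works against the cached
     tuple-id range in ha_ndbcluster.cc and Ndb.cpp. */
  #include <stdint.h>

  struct TupleIdRange {        /* simplified */
    uint64_t next_value;       /* next id this node would hand out */
  };

  /* Returns true only when the supplied value would actually move the
     next auto_increment value forward, i.e. only then is it worth
     taking the tuple_id_range lock and updating the shared range. */
  bool auto_increment_needs_update(const TupleIdRange& cached,
                                   uint64_t supplied)
  {
    return supplied >= cached.next_value;
  }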
ndb_restore.result:
Updated result file
mysql-test/suite/ndb/r/ndb_restore.result:
Updated result file
sql/ha_ndbcluster.cc:
Add the check whether setting an auto_increment field will change its next value before retrieving the tuple_id_range lock.
storage/ndb/include/ndbapi/Ndb.hpp:
Add the check whether setting an auto_increment field will change its next value before retrieving the tuple_id_range lock.
storage/ndb/src/ndbapi/Ndb.cpp:
Add the check whether setting an auto_increment field will change its next value before retrieving the tuple_id_range lock.
into perch.ndb.mysql.com:/home/jonas/src/51-telco-gca
storage/ndb/include/util/Bitmask.hpp:
Auto merged
storage/ndb/src/common/util/Bitmask.cpp:
merge
storage/ndb/test/ndbapi/testBitfield.cpp:
merge
into mysql.com:/home/marty/MySQL/mysql-5.1-new-ndb
storage/ndb/src/ndbapi/Ndb.cpp:
Using local, will merge manually.
storage/ndb/include/ndbapi/Ndb.hpp:
Changed parameter name to better reflect meaning.
storage/ndb/test/include/DbUtil.hpp:
Add support for SqlResultSet
storage/ndb/test/ndbapi/Makefile.am:
Add testNDBT
storage/ndb/test/src/DbUtil.cpp:
Add support for SqlResultSet
storage/ndb/test/src/Makefile.am:
Build AtrtClient
storage/ndb/test/include/AtrtClient.hpp:
New BitKeeper file ``storage/ndb/test/include/AtrtClient.hpp''
storage/ndb/test/ndbapi/testNDBT.cpp:
New BitKeeper file ``storage/ndb/test/ndbapi/testNDBT.cpp''
storage/ndb/test/src/AtrtClient.cpp:
New BitKeeper file ``storage/ndb/test/src/AtrtClient.cpp''
- The errno variable should only be used when the previous socket
write has failed; it should be regarded as undefined at other times
(see the sketch below)
OutputStream.cpp:
Only use "errno" after the attempt to write to the socket has failed
storage/ndb/src/common/util/OutputStream.cpp:
Only use "errno" after the attempt to write to the socket has failed