make fragment logpart in DIH "global"
storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Store the logpart id in FRAGMENTATION, since things go very wrong if
the ids differ between nodes, as they are copied from the master
on node/system restart
make each fragment use its own LCP file, so that system/node restart can use a different LCP number for different fragments
storage/ndb/include/kernel/signaldata/FsOpenReq.hpp:
Add fragment id to LCP filename
storage/ndb/src/kernel/blocks/ERROR_codes.txt:
Add new error code
storage/ndb/src/kernel/blocks/backup/Backup.cpp:
put each fragment in its own LCP file
storage/ndb/src/kernel/blocks/backup/Backup.hpp:
put each fragment in its own LCP file
storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp:
Use fifo lists
storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
1) use fifo lists
2) restore each fragment separately
3) add error codes
storage/ndb/src/kernel/blocks/restore.cpp:
Add fragment id to LCP filename
storage/ndb/src/kernel/blocks/ndbfs/Filename.cpp:
Add fragment id to LCP filename
storage/ndb/test/ndbapi/testNodeRestart.cpp:
Add testcase
storage/ndb/test/run-test/daily-basic-tests.txt:
add testcase
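The per-fragment naming scheme can be sketched like this (illustrative Python; the exact path format lives in Filename.cpp, and lcp_file_name is a hypothetical helper):

```python
def lcp_file_name(lcp_no, table_id, frag_id):
    # Hypothetical sketch of the Filename.cpp change: the fragment id
    # becomes part of the LCP file name, so each fragment can be
    # checkpointed and restored with its own LCP number.
    return "LCP/%d/T%dF%d.Data" % (lcp_no, table_id, frag_id)

# Two fragments of the same table now checkpoint into distinct files:
assert lcp_file_name(0, 3, 0) != lcp_file_name(0, 3, 1)
```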
make dump state args list const
make node lists const
storage/ndb/include/mgmapi/mgmapi_debug.h:
make dump state args list const
storage/ndb/src/mgmapi/mgmapi.cpp:
make dump state args list const
storage/ndb/test/include/NdbRestarter.hpp:
make node lists const
storage/ndb/test/src/NdbRestarter.cpp:
make node lists const
into mysql.com:/home/cps/mysql/devel/5.1-curs-bug
sql/ha_myisam.cc:
Auto merged
sql/handler.h:
Auto merged
sql/sql_table.cc:
Auto merged
mysql-test/r/log_tables.result:
SCCS merged
mysql-test/t/log_tables.test:
SCCS merged
report only once on "all dump 1000"
storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp:
report DM only once
storage/ndb/src/kernel/blocks/dbtup/DbtupDebug.cpp:
report DM only once
Make sure that tupkeyErrorLab is run if an interpreted update fails, so that the entry is not inserted into the index.
Otherwise subsequent DML on the tuple crashes.
ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp:
Make sure that tupkeyErrorLab is run if an interpreted update fails, so that the entry is not inserted into the index.
Otherwise subsequent DML on the tuple crashes.
gets deadlocked when dropping w/ log on"
Log tables rely on the concurrent insert machinery to add data.
This means that log tables are always opened and locked by
special (artificial) logger threads. Because of this, a thread
that tries to drop a log table starts waiting for the table to be
unlocked, which happens only once the log table is disabled.
A similar situation occurs if one tries to alter a log table.
However, in addition to the problem above, ALTER TABLE calls the
check_if_locking_is_allowed() routine for the engine. The
routine does not allow altering log tables, so ALTER does not
start waiting forever for logs to be disabled, but returns
with an error.
Another problem is that not all engines can be used for
log tables, because log tables need concurrent insert.
In this patch we:
(1) explicitly disallow dropping/altering a log table if it
is currently in use by the logger;
(2) update MyISAM to support log tables;
(3) allow dropping/altering log tables if the log is disabled;
and at the same time (4) disallow altering a log table to an
unsupported engine (after this patch CSV and MyISAM are
allowed).
Recommit with review fixes.
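The rule in points (1) and (3) can be sketched as follows (Python model with hypothetical names; the real check lives in the handler-level check_if_locking_is_allowed() routine):

```python
from dataclasses import dataclass

TL_CONCURRENT_INSERT = "TL_CONCURRENT_INSERT"  # lock mode the logger holds

@dataclass
class Table:            # hypothetical stand-in for a TABLE object
    name: str
    is_log_table: bool

def check_if_locking_is_allowed(table, requested_lock, log_enabled):
    # While the logger keeps a log table open (log enabled), any
    # incompatible lock attempt fails with an error instead of
    # blocking forever; with the log disabled, DROP/ALTER proceed.
    if table.is_log_table and log_enabled:
        if requested_lock != TL_CONCURRENT_INSERT:
            raise RuntimeError("cannot lock log table while log is enabled")
    return True
```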
mysql-test/r/log_tables.result:
Update result file.
Note: there are warnings in result file. This is because of CSV
bug (Bug #21328). They should go away after it is fixed.
mysql-test/t/log_tables.test:
Add a test for the bug
sql/ha_myisam.cc:
Add log table handling to MyISAM: as log tables
use concurrent insert, they are typically
locked with a TL_CONCURRENT_INSERT lock. So,
disallow other threads from attempting to lock
the log tables in incompatible modes, because
otherwise those threads would wait forever for
the tables to be unlocked.
sql/handler.cc:
Add a function to check whether a table we're going to lock
is a log table and whether the lock mode we want is allowed
sql/handler.h:
Add a new function to check compatibility of the locking
sql/log.cc:
only close the log table if this particular
table is not already closed
sql/log.h:
add new functions to check if a log is enabled
sql/share/errmsg.txt:
add new error messages
sql/sql_table.cc:
DROP and ALTER TABLE should not work on log
tables if the log tables are enabled
storage/csv/ha_tina.cc:
move function to check if the locking for the log
tables allowed to handler class, so that we can
reuse it in other engines.
storage/myisam/mi_extra.c:
add new ::extra() flag processing to myisam
storage/myisam/mi_open.c:
init log table flag
storage/myisam/mi_write.c:
update status after each write if it's a log table
storage/myisam/myisamdef.h:
Add new log table flag to myisam share.
We need it to distinguish between usual
and log tables, as for the log tables we
should provide concurrent insert in a
different way than for usual tables: we
want new rows to be immediately visible
to other threads.
Due to incorrect handling of FLUSH TABLES, log tables were marked for flush,
but not reopened. Later we started to wait for the log table to be closed
(disabled) after the flush, and as nobody disabled logs in concurrent threads,
the command lasted forever.
After internal consultations it was decided to skip logs during FLUSH TABLES.
The reasoning is that logging is done in the "log device", whatever it is,
which is always active and controlled by FLUSH LOGS. So, to flush the logs
one should use FLUSH LOGS, not FLUSH TABLES.
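A minimal model of the sql_base.cc fix (Python sketch with hypothetical names):

```python
from dataclasses import dataclass

@dataclass
class OpenTable:        # hypothetical stand-in for an open TABLE
    name: str
    is_log_table: bool

def flush_tables(open_tables):
    # FLUSH TABLES marks every open table for reopen *except* the log
    # tables: those belong to the always-active "log device" and are
    # flushed only via FLUSH LOGS.
    return [t.name for t in open_tables if not t.is_log_table]
```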
mysql-test/r/log_tables.result:
update result file
mysql-test/t/log_tables.test:
add a test for the bug
sql/sql_base.cc:
Skip log tables during FLUSH TABLES
Correction of bug#19852 (which also revealed another bug)
Don't grow noOfPagesToGrow by more than was actually allocated
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp:
Don't grow "noOfPagesToGrow" by more than was actually allocated
(as it will otherwise grow indefinitely)
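The fix can be modelled like this (Python sketch; pool and its keys are hypothetical names):

```python
def grow_page_pool(pool, requested):
    # The allocator may hand back fewer pages than requested; the fix
    # is to grow noOfPagesToGrow by the number actually allocated.
    # Growing it by `requested` makes each following grow request even
    # larger, so the counter grows indefinitely.
    allocated = min(requested, pool["free"])  # allocator may come up short
    pool["free"] -= allocated
    pool["noOfPagesToGrow"] += allocated      # fixed: was += requested
    return allocated
```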
Fix bug in the tup buddy allocator, which made it perform an invalid access to cfreepagelist[16] (which is not defined)
ndb/src/kernel/blocks/dbtup/DbtupPagMan.cpp:
loop from firstListToCheck - 1 (as firstListToCheck has already been checked) when looking for fewer than the requested pages;
add an if-statement for firstListToCheck == 0
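In Python terms the corrected free-list search looks roughly like this (hypothetical sketch; only the list-walking logic is modelled):

```python
MAX_FREE_LIST = 15  # cfreepagelist has entries 0..15; [16] is out of bounds

def find_free_list(free_lists, first_list_to_check):
    # First look in the requested size class and larger ones, staying
    # within bounds (the buggy code could walk off the end to [16]).
    for i in range(first_list_to_check, MAX_FREE_LIST + 1):
        if free_lists[i]:
            return i
    # Fall back to smaller lists, starting at first_list_to_check - 1
    # (first_list_to_check itself was already examined), guarding the
    # first_list_to_check == 0 case.
    if first_list_to_check == 0:
        return None
    for i in range(first_list_to_check - 1, -1, -1):
        if free_lists[i]:
            return i
    return None
```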
make Dblqh use OM_AUTO_SYNC
storage/ndb/include/kernel/signaldata/FsOpenReq.hpp:
make Dblqh use OM_AUTO_SYNC
storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp:
make Dblqh use OM_AUTO_SYNC
storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
make Dblqh use OM_AUTO_SYNC
storage/ndb/src/kernel/vm/pc.hpp:
remove unused #defines
3 new parameters:
DiskSyncSize - outstanding disk writes before sync (default 4M)
DiskCheckpointSpeed - write speed of LCP in bytes/sec (default 10M)
DiskCheckpointSpeedInRestart - as above, but during node restart (default 100M)
Deprecated the old NoOfDiskPagesToDisk* parameters
- Changed NoOfFragmentLogFiles default to 16 (1Gb)
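In config.ini terms the new parameters would look like this (values are just the defaults quoted above):

```ini
[NDBD DEFAULT]
# Outstanding disk writes before a sync is forced (default 4M)
DiskSyncSize=4M
# LCP write speed in bytes/second (default 10M)
DiskCheckpointSpeed=10M
# Same limit, but applied during a node restart (default 100M)
DiskCheckpointSpeedInRestart=100M
# New default: 16 fragment log files (1Gb of redo log)
NoOfFragmentLogFiles=16
```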
storage/ndb/include/kernel/signaldata/BackupContinueB.hpp:
Add possibility to limit disk write speed in backup
storage/ndb/include/mgmapi/mgmapi_config_parameters.h:
Add possibility to limit disk write speed in backup
storage/ndb/src/kernel/blocks/backup/Backup.cpp:
Add possibility to limit disk write speed in backup
storage/ndb/src/kernel/blocks/backup/Backup.hpp:
Add possibility to limit disk write speed in backup
storage/ndb/src/kernel/blocks/backup/BackupInit.cpp:
Add possibility to limit disk write speed in backup
storage/ndb/src/mgmsrv/ConfigInfo.cpp:
Add possibility to limit disk write speed in backup
Changed NoOfFragmentLogFiles default to 16 (1Gb)
DiskSyncSize
DiskCheckpointSpeed
DiskCheckpointSpeedInRestart
storage/ndb/src/mgmsrv/InitConfigFileParser.cpp:
Handle deprecation warning also in my.cnf format
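A sketch of the warning logic (Python; the mapping from old to new parameter names is an assumption based on the names above, not taken from the source):

```python
def warn_if_deprecated(param, warnings):
    # Hypothetical sketch: parameters matching the deprecated
    # NoOfDiskPagesToDisk* pattern get a warning pointing at the new
    # parameter, regardless of whether they came from config.ini or a
    # my.cnf-style file.
    if param.startswith("NoOfDiskPagesToDisk"):
        if "During" in param:
            replacement = "DiskCheckpointSpeedInRestart"  # assumption
        else:
            replacement = "DiskCheckpointSpeed"           # assumption
        warnings.append("deprecated: %s, use %s" % (param, replacement))
```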
add sync-flag to FsAppendReq
storage/ndb/include/kernel/signaldata/FsAppendReq.hpp:
Add sync flag to FsAppend
storage/ndb/include/kernel/signaldata/FsOpenReq.hpp:
Add auto sync flag to FSOPEN
storage/ndb/src/kernel/blocks/ndbfs/AsyncFile.cpp:
Add append_synch and auto sync
storage/ndb/src/kernel/blocks/ndbfs/AsyncFile.hpp:
Add variables for auto sync
storage/ndb/src/kernel/blocks/ndbfs/Ndbfs.cpp:
Add append_sync and auto sync
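The auto-sync behaviour can be sketched as follows (hypothetical Python model of the AsyncFile changes; class and attribute names are illustrative):

```python
class AutoSyncFile:
    # Model of OM_AUTO_SYNC: the file layer counts unsynced bytes and
    # forces a sync once a configured amount (cf. DiskSyncSize) of
    # writes is outstanding, so the block need not send explicit syncs.
    def __init__(self, sync_size):
        self.sync_size = sync_size
        self.unsynced = 0
        self.syncs = 0

    def append(self, nbytes, sync_flag=False):
        # sync_flag models the new flag added to FsAppendReq
        self.unsynced += nbytes
        if sync_flag or self.unsynced >= self.sync_size:
            self.sync()

    def sync(self):
        self.syncs += 1
        self.unsynced = 0
```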
- Remove the defines for strings and use STRING_WITH_LEN directly when calling 'append'
sql/ha_federated.cc:
Remove the defines for strings and their lengths and use STRING_WITH_LEN directly when calling append().
sql/ha_federated.h:
Remove the defines for strings and their lengths and use STRING_WITH_LEN directly when calling append().
Make possible to build both debug/release from compile-ndb-autotest
BUILD/compile-ndb-autotest:
Make possible to build both debug/release from compile-ndb-autotest