{{{test_log10.recover}}} now works. Fixes #548.

The final fix for this bug involves writing zeros into the unused bytes of the disk when serializing nodes.  
This is important for two reasons:
 1. It makes the files the same at the bit level.  (The problem showed up because a node of size near 1MB was written; then the node split, causing the node to shrink, and when the node was written again, some leftover bits from the previous node were still on disk.  The file compare then failed after recovery.)
 2. It causes the file system to actually allocate the space for a node, so that when the node grows, the space will all be contiguous on disk.

It has the disadvantage of writing more to disk than we did before, possibly reducing performance.  It probably doesn't matter much, however. 


git-svn-id: file:///svn/tokudb@2916 c7de825b-a66e-492c-adef-691d508d4ae1
Bradley C. Kuszmaul 2008-03-18 12:08:56 +00:00
parent fba345a3e9
commit 7dcf06384a


@@ -186,11 +186,13 @@ void toku_serialize_brtnode_to(int fd, DISKOFF off, DISKOFF size, BRTNODE node)
     wbuf_int(&w, w.crc32);
 #endif
+    memset(w.buf+w.ndone, 0, size-w.ndone); // fill with zeros
     //write_now: printf("%s:%d Writing %d bytes\n", __FILE__, __LINE__, w.ndone);
     {
-	ssize_t r=pwrite(fd, w.buf, w.ndone, off);
+	ssize_t r=pwrite(fd, w.buf, size, off); // write the whole buffer, including the zeros
+	if (r<0) printf("r=%ld errno=%d\n", (long)r, errno);
-	assert((size_t)r==w.ndone);
+	assert(r==size);
     }
     if (calculated_size!=w.ndone)