OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair works with one thread per index; the first
of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. But the other
threads didn't know the new record positions and put the positions
from the old data file into the index instead.
The new design uses a shared io cache that is filled by the first
thread (the data file writer) with the new contiguous records and
read by the other threads. Now they know the new record positions.
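A minimal sketch of the idea, with illustrative names (the real code
shares an IO_CACHE; this only shows how the writer can publish the
new record positions to the index threads):

    #include <pthread.h>

    typedef unsigned long long pos_t;          /* stand-in for my_off_t */

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cache_cond = PTHREAD_COND_INITIALIZER;
    static pos_t write_end = 0;                /* filled up to here */

    /* Data file writer: called after appending a record ending at 'end'. */
    static void publish_new_end(pos_t end)
    {
      pthread_mutex_lock(&cache_lock);
      write_end = end;
      pthread_cond_broadcast(&cache_cond);     /* wake the index threads */
      pthread_mutex_unlock(&cache_lock);
    }

    /* Index thread: block until the record at 'pos' has been written. */
    static void wait_for_record(pos_t pos)
    {
      pthread_mutex_lock(&cache_lock);
      while (write_end <= pos)
        pthread_cond_wait(&cache_cond, &cache_lock);
      pthread_mutex_unlock(&cache_lock);
    }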
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
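A sketch of the per-thread buffer change, with hypothetical names
(the actual buffers live in MyISAM's repair structures):

    #include <stdlib.h>

    typedef struct {
      unsigned char *bit_buff;    /* per-thread bit buffer */
      unsigned char *rec_buff;    /* per-thread record buffer */
    } thread_bufs;

    /* Each repair thread allocates its own pair instead of sharing. */
    static thread_bufs *alloc_thread_bufs(size_t bit_len, size_t rec_len)
    {
      thread_bufs *b = malloc(sizeof(*b));
      if (!b)
        return NULL;
      b->bit_buff = malloc(bit_len);
      b->rec_buff = malloc(rec_len);
      if (!b->bit_buff || !b->rec_buff) {      /* undo partial allocation */
        free(b->bit_buff);
        free(b->rec_buff);
        free(b);
        return NULL;
      }
      return b;
    }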
A crash could happen when selecting from a MERGE table whose underlying
tables have fewer indexes than the MERGE table itself.
If the number of keys in the MERGE table is not greater than the
requested key number, return an error.
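A sketch of the bounds check, assuming hypothetical parameter names:

    /* Reject a key number that is out of range for the MERGE table;
     * the caller maps the non-zero result to an error. */
    static int check_merge_key(unsigned requested_key, unsigned merge_keys)
    {
      if (requested_key >= merge_keys)
        return 1;                 /* key out of range: error */
      return 0;
    }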
The problem is that on some Mac OS X versions a file read or write
call with zero bytes to transfer returns an error.
So this change eliminates that kind of call.
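A sketch of the guard, as a wrapper around read() (the server's real
wrappers are my_read()/my_write() in mysys):

    #include <unistd.h>

    /* Skip the syscall entirely for zero-length transfers, since some
     * Mac OS X versions report an error for them. */
    static ssize_t read_guarded(int fd, void *buf, size_t count)
    {
      if (count == 0)
        return 0;                 /* nothing to transfer */
      return read(fd, buf, count);
    }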
event' from master"
Since there is no repeatable test case, and this is obviously wrong, this is
the most conservative change that might possibly work.
The syscall read() wasn't checked for a negative return value after an
interrupted read. The kernel sys_read() returns -EINTR, and the
"library" layer maps that to a return value of -1 and sets errno to
EINTR. It's impossible (on Linux) for read() to set errno to EINTR
without the return value being -1.
So, if we're checking for EINTR behavior, we should not require that
the return value be zero.
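A sketch of the corrected check:

    #include <errno.h>
    #include <unistd.h>

    /* An interrupted read() returns -1 with errno == EINTR, never 0,
     * so retry only on that combination. */
    static ssize_t read_retry(int fd, void *buf, size_t count)
    {
      ssize_t n;
      do
        n = read(fd, buf, count);
      while (n == -1 && errno == EINTR);
      return n;
    }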
Too many cursors (more than 1024) could lead to memory corruption.
This affects both stored routines and C API cursors, and the
threshold is per-server, not per-connection. Similarly, the
corruption could happen when the server was under heavy load
(executing more than 1024 simultaneous complex queries), and this is
the reason why this bug is fixed in 4.1, which doesn't support
cursors.
The corruption was caused by a bug in the temporary tables code, where
an attempt to create a table could lead to a write beyond allocated
space. Note that only internal tables were affected (the tables
created internally by the server to resolve the query), not tables
created with CREATE TEMPORARY TABLE. Another pre-condition for the
bug is a TRUE value of the --temp-pool startup option, which,
however, is the default.
The cause of the bug was that random memory was overwritten in
bitmap_set_next() due to an out-of-bounds memory access.
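An illustrative bounds-checked variant (not the real my_bitmap code),
showing the invariant the fix restores:

    #include <stdint.h>

    typedef struct {
      uint32_t *words;            /* (n_bits + 31) / 32 words allocated */
      uint32_t  n_bits;           /* number of valid bits */
    } bitmap_sketch;

    /* Find and set the first clear bit without ever touching memory
     * past the allocated map. */
    static int bitmap_set_next_checked(bitmap_sketch *map)
    {
      for (uint32_t i = 0; i < map->n_bits; i++) {
        uint32_t mask = 1u << (i % 32);
        if (!(map->words[i / 32] & mask)) {
          map->words[i / 32] |= mask;
          return (int) i;         /* index of the bit just set */
        }
      }
      return -1;                  /* map full: caller must grow it */
    }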
and BUG#19208 "Test 'rpl000017' hangs on Windows".
Both bugs are caused by attempting to delete an open file and to
immediately create a new one with the same name. On Windows this can
be supported only on NT platforms (by using FILE_SHARE_DELETE mode
and renaming the file before deletion). Because deleting files that
are still open is not supported on all platforms (e.g. Win 98/ME),
this is to be considered harmful and should be eliminated by a
"code redesign".
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
Before every operation that requires a write lock, we need to wait
for the release of the global read lock if one is set. But we must
not wait if we have locked tables with LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
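A sketch of the signalling rule, with illustrative locking (the
server guards these condition variables with its own mutexes):

    #include <pthread.h>

    static pthread_mutex_t LOCK_open             = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  COND_refresh          = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  COND_global_read_lock = PTHREAD_COND_INITIALIZER;

    /* Whenever COND_refresh is broadcast, COND_global_read_lock must
     * be broadcast too, so threads waiting for the global read lock
     * to be released are not missed. */
    static void broadcast_refresh(void)
    {
      pthread_mutex_lock(&LOCK_open);
      pthread_cond_broadcast(&COND_refresh);
      pthread_cond_broadcast(&COND_global_read_lock);
      pthread_mutex_unlock(&LOCK_open);
    }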
The problem was a call to convert_dirname() with a destination buffer
that did not have room for the trailing slash added by that function.
This could cause the instance manager to crash in some cases.
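A sketch of the sizing rule (PATH_BUF_LEN stands in for FN_REFLEN;
the helper is illustrative, not the real convert_dirname()):

    #include <string.h>

    enum { PATH_BUF_LEN = 512 };

    /* The destination must reserve one byte for the appended slash
     * and one for the terminating NUL. */
    static void copy_dir_with_slash(char *to, size_t to_len, const char *from)
    {
      size_t len;
      if (to_len < 2)
        return;                   /* no room for "/" + NUL */
      len = strlen(from);
      if (len + 2 > to_len)
        len = to_len - 2;         /* truncate, don't overflow */
      memcpy(to, from, len);
      if (len == 0 || to[len - 1] != '/')
        to[len++] = '/';
      to[len] = '\0';
    }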
Only check for FN_DEVCHAR in filenames if FN_DEVCHAR is defined.
This allows table names containing ":" on non-Windows platforms.
On Windows, you get an error if you use a table name that contains
FN_DEVCHAR.
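A sketch of the conditional check (check_table_name_sketch() is a
hypothetical helper, not the server's actual validation function):

    #include <string.h>

    static int check_table_name_sketch(const char *name)
    {
    #ifdef FN_DEVCHAR
      /* On Windows FN_DEVCHAR is ':', reserved for device names. */
      if (strchr(name, FN_DEVCHAR))
        return 1;                 /* caller reports an error */
    #else
      (void) name;                /* nothing to reject elsewhere */
    #endif
      return 0;
    }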