Backport of correction for a Mac OS X build problem: a global variable that is
not initialized is a "common" symbol and can't be used in shared libraries
unless special flags are used (bug#26218).
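As an illustration of the underlying C behaviour (not the actual patch, which
fixed the variable in the MySQL sources), a tentative definition becomes a
"common" symbol, while an explicitly initialized definition does not; on
Mac OS X the former needs special flags such as -fno-common to work in a
shared library:

    /* Tentative definition: emitted as a "common" symbol, which is
       problematic in Mac OS X shared libraries unless special flags
       (e.g. -fno-common) are used. */
    int global_counter;

    /* Explicit initialization gives the variable a normal definition
       and avoids the common-symbol problem. */
    int global_counter_fixed = 0;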
Dropping a user-defined function may cause a server crash in case this
function is still in use by another thread.
The problem was that our hash implementation didn't update the
hash link list properly when hash_update() was called.
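A minimal sketch of the kind of relinking involved (hypothetical names, not
the actual mysys hash code): when an entry's key changes, it has to be
unlinked from its old bucket chain and linked into the bucket computed from
the new key, otherwise later lookups and deletes walk a stale chain:

    struct entry { struct entry *next; unsigned key; };
    struct hash  { struct entry *buckets[64]; };

    /* Hypothetical update: move the entry into the bucket that matches
       its new key so the link list stays consistent. */
    static void hash_update_entry(struct hash *h, struct entry *e,
                                  unsigned new_key)
    {
      unsigned old_idx= e->key % 64;
      unsigned new_idx= new_key % 64;
      if (old_idx != new_idx)
      {
        struct entry **link= &h->buckets[old_idx];
        while (*link != e)                 /* unlink from the old chain */
          link= &(*link)->next;
        *link= e->next;
        e->next= h->buckets[new_idx];      /* relink into the new chain */
        h->buckets[new_idx]= e;
      }
      e->key= new_key;
    }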
In case the system doesn't have native pread/pwrite calls (e.g. Windows)
and CHECK TABLE runs concurrently with another statement that
reads from a table, the table may be reported as crashed.
This is fixed by locking the file descriptor when my_seek is executed on a
MyISAM index file and emulated pread/pwrite may be executed concurrently.
Affects MyISAM tables on platforms that do not have native
pread/pwrite calls (e.g. Windows).
No deterministic test case for this bug.
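A rough sketch of the idea behind the fix (simplified; the real code uses
my_seek/my_read in mysys and locks per file descriptor rather than globally):
when pread has to be emulated with a seek followed by a read, the pair must be
protected by a lock so a concurrent seek on the same descriptor cannot slip in
between:

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pthread_mutex_t fd_lock= PTHREAD_MUTEX_INITIALIZER;

    /* Emulated pread: without the lock, another thread seeking between
       lseek() and read() would make us read from the wrong offset. */
    static ssize_t emulated_pread(int fd, void *buf, size_t count, off_t offset)
    {
      ssize_t res;
      pthread_mutex_lock(&fd_lock);
      if (lseek(fd, offset, SEEK_SET) == (off_t) -1)
        res= -1;
      else
        res= read(fd, buf, count);
      pthread_mutex_unlock(&fd_lock);
      return res;
    }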
This bug occurs when the error message length exceeds the allowed limit: the
my_error() function outputs "%s" sequences instead of the long string arguments.
Formats like %-.64s are very common in errmsg.txt files; however, the my_error()
function simply ignores the precision of those formats.
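A hedged illustration of the intended behaviour (standard C, not the actual
my_error/my_vsnprintf code): a format such as %-.64s should truncate the
string argument to at most 64 characters instead of being ignored, e.g.:

    #include <stdio.h>

    int main(void)
    {
      char msg[96];
      const char *long_arg= "a very long identifier that would otherwise "
                            "overflow the error message buffer";

      /* The precision (.20 here, .64 in errmsg.txt) limits how many
         characters of the argument end up in the message. */
      snprintf(msg, sizeof(msg), "Can't open table '%-.20s'", long_arg);
      puts(msg);   /* prints at most 20 characters of long_arg */
      return 0;
    }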
- When cache memory can't be allocated, the size is recalculated using 3/4 of
the requested memory. This number is rounded up to the nearest
min_cache step.
However, with the previous implementation the new cache size might
become bigger than requested because of this rounding, and thus we got
an infinite loop.
- This patch fixes this problem by ensuring that the new cache size
will always be smaller on the second and subsequent iterations until
we reach min_cache (see the sketch after this list).
- The io cache flag seek_not_done was not set properly in the
reinit_io_cache function call, and this led to my_seek being called
despite an invalid file handle.
- Added a test in reinit_io_cache to ensure we have a valid file
handle before setting the seek_not_done flag.
- Because my_seek is actually capable of returning an error code, we should
exploit that in the best possible way.
- There might be kernel errors or other errors we can't predict, and capturing
the return value of all system calls gives us a better understanding of
possible errors.
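A small sketch of the corrected size-recalculation loop (hypothetical names;
the real code is the IO cache initialization in mysys): each failed allocation
shrinks the request to 3/4 and rounds up to a min_cache step, but the result
is also capped below the previous attempt so the loop must terminate at
min_cache:

    #include <stdlib.h>

    #define MIN_CACHE 8192

    /* Try to allocate a cache buffer, shrinking the request until it
       succeeds or we are down to MIN_CACHE.  Rounding up to a MIN_CACHE
       step alone could make the new size >= the old one and loop forever;
       capping it below the old size guarantees progress. */
    static char *alloc_cache(size_t requested, size_t *actual)
    {
      size_t size= requested;
      for (;;)
      {
        size_t smaller;
        char *buf= malloc(size);
        if (buf)
        {
          *actual= size;
          return buf;
        }
        if (size <= MIN_CACHE)
          return NULL;                               /* give up */
        smaller= size / 4 * 3;                       /* 3/4 of last attempt */
        smaller= (smaller + MIN_CACHE - 1) / MIN_CACHE * MIN_CACHE;
        if (smaller >= size)
          smaller= size - MIN_CACHE;                 /* force it to shrink */
        if (smaller < MIN_CACHE)
          smaller= MIN_CACHE;
        size= smaller;
      }
    }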
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair works so that there is one thread per
index. The first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put the
positions from the old data file into the index.
The new design is such that there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that for the parallel repair of compressed
tables a common bit_buff and rec_buff were used. I changed it so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
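The shared-cache idea can be pictured with a simplified, hypothetical model
(not the actual mi_check.c code; the real repair has one reader per index,
each consuming the full stream, while this sketch uses a single reader for
brevity): the data file writer publishes the new position of every record it
writes, and the index-building side reads those positions instead of reusing
the old offsets:

    #include <pthread.h>
    #include <stdio.h>

    #define N_RECORDS 8

    static long positions[N_RECORDS];       /* positions in the new data file */
    static int  produced= 0, consumed= 0;
    static pthread_mutex_t lock= PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  more= PTHREAD_COND_INITIALIZER;

    static void *data_file_writer(void *arg)
    {
      long new_pos= 0;
      int i;
      (void) arg;
      for (i= 0; i < N_RECORDS; i++)
      {
        pthread_mutex_lock(&lock);
        positions[produced++]= new_pos;     /* publish the new position */
        new_pos+= 100;                      /* pretend each record is 100 bytes */
        pthread_cond_broadcast(&more);
        pthread_mutex_unlock(&lock);
      }
      return NULL;
    }

    static void *index_builder(void *arg)
    {
      int i;
      (void) arg;
      for (i= 0; i < N_RECORDS; i++)
      {
        long pos;
        pthread_mutex_lock(&lock);
        while (consumed == produced)
          pthread_cond_wait(&more, &lock);
        pos= positions[consumed++];
        pthread_mutex_unlock(&lock);
        printf("index entry -> new position %ld\n", pos);
      }
      return NULL;
    }

    int main(void)
    {
      pthread_t writer, builder;
      pthread_create(&writer, NULL, data_file_writer, NULL);
      pthread_create(&builder, NULL, index_builder, NULL);
      pthread_join(writer, NULL);
      pthread_join(builder, NULL);
      return 0;
    }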
A crash may happen when selecting from a merge table that has underlying
tables with fewer indexes than the merge table itself.
If the number of keys in the merge table is not greater than the requested key
number, return an error.
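A hedged sketch of the kind of check this implies (illustrative names, not the
actual myisammrg code): before using a key number requested through the merge
table, verify that it does not exceed the number of keys actually available:

    /* Illustrative check: reject a key number the table does not have. */
    static int check_key_number(unsigned int keynr, unsigned int table_keys)
    {
      if (keynr >= table_keys)     /* requested index does not exist */
        return -1;                 /* caller maps this to an error such as
                                      HA_ERR_WRONG_INDEX */
      return 0;
    }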