OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair uses one thread per index. The first of
these threads also rebuilds the data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions and put the positions
from the old data file into their indexes.
The new design uses a shared io cache that is filled by the first
thread (the data file writer) with the new contiguous records and
read by the other threads, so they know the new record positions.
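A minimal sketch of the producer/consumer idea behind the shared
cache, using plain pthreads and hypothetical names rather than the
actual MyISAM io cache code:

    /* Sketch only: one writer thread publishes the new record
       positions, reader threads consume them.  All names here are
       hypothetical; the real code uses the mysys io cache. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_RECORDS 8

    static struct {
      long positions[MAX_RECORDS]; /* new positions in the data file */
      int  count;                  /* how many have been published   */
      int  done;                   /* writer has finished            */
      pthread_mutex_t lock;
      pthread_cond_t  more;
    } shared = { .lock = PTHREAD_MUTEX_INITIALIZER,
                 .more = PTHREAD_COND_INITIALIZER };

    /* The data file writer: skips deleted records and publishes the
       new, contiguous positions for the index-building threads. */
    static void *writer_thread(void *arg)
    {
      long new_pos = 0;
      (void) arg;
      for (int i = 0; i < MAX_RECORDS; i++) {
        long record_length = 100 + i;       /* pretend record sizes */
        pthread_mutex_lock(&shared.lock);
        shared.positions[shared.count++] = new_pos;
        pthread_cond_broadcast(&shared.more);
        pthread_mutex_unlock(&shared.lock);
        new_pos += record_length;
      }
      pthread_mutex_lock(&shared.lock);
      shared.done = 1;
      pthread_cond_broadcast(&shared.more);
      pthread_mutex_unlock(&shared.lock);
      return NULL;
    }

    /* An index-building thread: takes record positions from the
       shared cache instead of deriving them from the old data file. */
    static void *index_thread(void *arg)
    {
      int next = 0;
      pthread_mutex_lock(&shared.lock);
      for (;;) {
        while (next >= shared.count && !shared.done)
          pthread_cond_wait(&shared.more, &shared.lock);
        if (next >= shared.count && shared.done)
          break;
        printf("index %ld: key -> new position %ld\n",
               (long) (intptr_t) arg, shared.positions[next++]);
      }
      pthread_mutex_unlock(&shared.lock);
      return NULL;
    }

    int main(void)
    {
      pthread_t w, r1, r2;
      pthread_create(&w,  NULL, writer_thread, NULL);
      pthread_create(&r1, NULL, index_thread, (void *) (intptr_t) 1);
      pthread_create(&r2, NULL, index_thread, (void *) (intptr_t) 2);
      pthread_join(w, NULL);
      pthread_join(r1, NULL);
      pthread_join(r2, NULL);
      return 0;
    }

The point of the design is that an index thread never sees a record
position before the data file writer has published it.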
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
thread-safe too.
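A rough sketch of the per-thread buffer idea, with a hypothetical
struct and field names rather than the real MyISAM declarations:

    /* Sketch only: every repair thread gets its own decode buffers
       and its own checksum instead of sharing one global set. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct repair_thread_ctx {
      unsigned char *bit_buff;  /* per-thread bit buffer (compressed) */
      unsigned char *rec_buff;  /* per-thread record buffer           */
      unsigned long  checksum;  /* per-thread checksum, added up when
                                   the threads are joined             */
    } repair_thread_ctx;

    static int init_repair_ctx(repair_thread_ctx *ctx, size_t size)
    {
      memset(ctx, 0, sizeof(*ctx));
      ctx->bit_buff = malloc(size);
      ctx->rec_buff = malloc(size);
      return (ctx->bit_buff && ctx->rec_buff) ? 0 : 1;
    }

    static void free_repair_ctx(repair_thread_ctx *ctx)
    {
      free(ctx->bit_buff);
      free(ctx->rec_buff);
    }

    int main(void)
    {
      repair_thread_ctx ctx[4];     /* one context per repair thread */
      for (int i = 0; i < 4; i++)
        if (init_repair_ctx(&ctx[i], 1024))
          return 1;
      for (int i = 0; i < 4; i++)
        free_repair_ctx(&ctx[i]);
      return 0;
    }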
Certain updates of a table joined to itself result in unexpected
behavior.
The problem was that the record cache was mistakenly enabled for
self-joined table updates, although it must normally be disabled for
such updates.
Fixed the wrong condition in the code that determines whether to use
the record cache for self-joined table updates.
Only MyISAM tables were affected.
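An illustrative sketch of the corrected decision; the names and the
simplified rule below are hypothetical, not the server's actual
condition:

    /* Sketch only: the record cache may only be used when the table
       being updated is not also read through another reference in
       the same statement. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool use_record_cache_for_update(bool table_is_self_joined,
                                            bool using_quick_select)
    {
      if (table_is_self_joined)
        return false;     /* rows must be read directly, not cached */
      return !using_quick_select;  /* assumed "usual" rule otherwise */
    }

    int main(void)
    {
      printf("self-join update uses cache: %d\n",
             use_record_cache_for_update(true, false));  /* 0 */
      printf("plain update uses cache:     %d\n",
             use_record_cache_for_update(false, false)); /* 1 */
      return 0;
    }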
For "count(*) while index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs does not
allocate a new buffer, but reuses 'lastkey2', polluting the buffer.
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
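A simplified sketch of the flag handling, with made-up structures and
an illustrative flag value standing in for the real MyISAM ones:

    /* Sketch only: any code that reuses the 'lastkey2' buffer must
       clear HA_STATE_RNEXT_SAME so the next index-scan step copies
       the search key again. */
    #include <stdio.h>
    #include <string.h>

    #define HA_STATE_RNEXT_SAME 0x0010   /* illustrative value only */

    typedef struct {
      unsigned int update;      /* state flags                       */
      char lastkey2[32];        /* buffer holding the search key     */
    } scan_info;

    /* Key retrieval that reuses lastkey2 as scratch space (e.g. for
       blobs) pollutes the buffer, so it must drop the flag. */
    static void fetch_key_using_lastkey2(scan_info *info,
                                         const char *key)
    {
      strncpy(info->lastkey2, key, sizeof(info->lastkey2) - 1);
      info->update &= ~HA_STATE_RNEXT_SAME;  /* buffer no longer holds
                                                the search key        */
    }

    /* One step of the "read next same" scan: re-copy the key only
       when the buffer does not hold it any more. */
    static void rnext_same_step(scan_info *info, const char *key)
    {
      if (!(info->update & HA_STATE_RNEXT_SAME)) {
        strncpy(info->lastkey2, key, sizeof(info->lastkey2) - 1);
        info->update |= HA_STATE_RNEXT_SAME; /* stored once, reused */
      }
      /* ... compare the current index entry with info->lastkey2 ... */
    }

    int main(void)
    {
      scan_info info = { 0, "" };
      rnext_same_step(&info, "value");       /* copies key, sets flag */
      fetch_key_using_lastkey2(&info, "blob-key"); /* pollutes buffer */
      rnext_same_step(&info, "value");       /* copies the key again  */
      printf("lastkey2 = %s\n", info.lastkey2);    /* prints "value"  */
      return 0;
    }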
Initialized usable_keys from table->keys_in_use instead of ~0
in test_if_skip_sort_order(). It was possible that a disabled
index was used for sorting.
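A small illustration of the effect of the changed initialization,
using a plain bitmap instead of the server's key_map type:

    /* Sketch only: starting from keys_in_use instead of ~0 means a
       disabled index cannot survive the later intersections and be
       picked for sorting. */
    #include <stdio.h>

    int main(void)
    {
      unsigned long keys_in_use   = 0x05; /* keys 0 and 2 enabled    */
      unsigned long keys_for_sort = 0x06; /* keys 1 and 2 give order */

      unsigned long usable_old = ~0UL & keys_for_sort;        /* 0x06 */
      unsigned long usable_new = keys_in_use & keys_for_sort; /* 0x04 */

      printf("old mask %#lx may pick the disabled key 1\n", usable_old);
      printf("new mask %#lx only allows enabled keys\n", usable_new);
      return 0;
    }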
Since 4.1, keys are compared with their trailing spaces. Thus, an
"x " key can be inserted between a couple of "x" keys. The existing
code did not take this into account, though the comments in the code
claimed it did.
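A tiny illustration of the difference between the two comparison
behaviors, in plain C rather than the server's collation code:

    /* Sketch only: when end space takes part in the comparison, "x"
       and "x " are distinct keys; a comparison that strips trailing
       spaces treats them as equal. */
    #include <stdio.h>
    #include <string.h>

    /* End space is significant: "x" sorts before "x ". */
    static int cmp_with_endspace(const char *a, const char *b)
    {
      return strcmp(a, b);
    }

    /* For contrast: trailing spaces stripped before comparing. */
    static int cmp_strip_endspace(const char *a, const char *b)
    {
      size_t la = strlen(a), lb = strlen(b);
      while (la && a[la - 1] == ' ') la--;
      while (lb && b[lb - 1] == ' ') lb--;
      int c = memcmp(a, b, la < lb ? la : lb);
      return c ? c : (int) (la > lb) - (int) (la < lb);
    }

    int main(void)
    {
      printf("with end space:  %d\n", cmp_with_endspace("x", "x "));
      printf("spaces stripped: %d\n", cmp_strip_endspace("x", "x "));
      return 0;
    }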
Added more DBUG statements
Ensure that we are comparing end space with BINARY strings
Use 'any_db' instead of '' to mean any database. (For HANDLER command)
Only strip ' ' when comparing CHAR, not other space-like characters (like \t)
Add pack_bits to pack_reclength for dynamic rows. This fixes a
possible buffer overflow on update.
(This will probably solve bug #563)
Fix test for available file descriptors in mysqltest
Fixed core dump bug in replication tests when running without transactional table support
Portability fixes
Added new client function: mysql_get_server_version()
New server help code (From Victor Vagin)
Fixed wrong usage of binary()
Disabled RTREE usage for now.