When compiling with a default key block size greater than the
smallest key block size used in a table, checking that table
failed with bogus errors. The table was marked corrupt. This
affected myisamchk and the server.
The problem was that the default key block size was used in some
places where a size less than or equal to the block size of the
index being checked was required.
We now use the key block size of the particular index when
checking.
A test case is available for later versions only.
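For illustration, a minimal sketch of the idea; the helper name is
hypothetical, though MI_KEYDEF does carry a per-index block length:

  /* Use the block size of the index being checked, not the
     compiled-in default, which may be larger than what this
     index actually uses. */
  static uint checked_block_length(const MI_KEYDEF *keyinfo,
                                   uint default_block_length)
  {
    return keyinfo->block_length ? keyinfo->block_length
                                 : default_block_length;
  }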
Really damaged MyISAM tables:
When unpacking a blob column from a broken row, the server could
crash. This happened mostly when trying to repair a table using
either REPAIR TABLE or myisamchk, though it could also happen when
accessing a broken row with other SQL statements like SELECT, if
the table is not marked as crashed.
Fixed a ulong overflow when trying to extract a blob from a
broken row.
Affects MyISAM only.
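A hedged sketch of the kind of guard this needs; the function and
its parameters are illustrative, not the actual MyISAM code:

  /* Read a blob length from a possibly broken row and reject any
     value that cannot fit in the remaining row buffer, so a bogus
     length cannot overflow a ulong or drive a wild copy. */
  static int unpack_blob_length(const uchar *from, const uchar *end,
                                uint pack_length, ulong *blob_length)
  {
    ulong length= 0;
    uint i;
    if ((ulong) (end - from) < pack_length)
      return 1;                            /* row too short: broken */
    for (i= 0; i < pack_length; i++)
      length= (length << 8) | from[i];
    if (length > (ulong) (end - from) - pack_length)
      return 1;                            /* points past the buffer */
    *blob_length= length;
    return 0;
  }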
REPAIR TABLE could crash the server if there was not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but also all statements that create indexes by sort:
repair by sort, parallel repair, bulk insert.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
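A sketch of the added check, with illustrative names; the real
code operates on MyISAM's sort parameters:

  /* Reject a sort buffer that cannot hold at least one key per
     BUFFPEK (merge buffer) instead of crashing during the sort. */
  static int check_sort_buffer(ulonglong sort_buffer_size,
                               uint key_length, uint num_buffpeks)
  {
    if ((ulonglong) num_buffpeks * key_length > sort_buffer_size)
      return 1;   /* caller reports myisam_sort_buffer_size too small */
    return 0;
  }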
Fix for bug#23074: typo in myisam/sort.c
The typo has never had an effect in practice: whenever the flag is
!= 0, HA_VAR_LENGTH_KEY is also set. Therefore, a test case cannot
reveal the faulty behavior.
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair works with one thread per index. The
first of these threads also rebuilds the data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions and put the
positions from the old data file into the index.
In the new design, a shared io cache is filled by the first thread
(the data file writer) with the new contiguous records and read by
the other threads. Now they know the new record positions.
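The new design shares one io cache; as a rough illustration of
such a scheme, a plain pthreads producer/consumer sketch (all
names hypothetical, not the actual mysys IO_CACHE code):

  #include <pthread.h>
  #include <string.h>

  /* The writer (first thread) appends the new, contiguous records;
     the per-index threads wait until the bytes they need are
     available, so they see the new record positions. */
  typedef struct st_shared_cache
  {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    char   buffer[65536];
    size_t filled;                    /* bytes written so far */
    int    write_done;                /* writer finished */
  } SHARED_CACHE;

  static void cache_append(SHARED_CACHE *c, const char *rec, size_t len)
  {
    pthread_mutex_lock(&c->mutex);
    memcpy(c->buffer + c->filled, rec, len);  /* sketch: no flush/wrap */
    c->filled+= len;
    pthread_cond_broadcast(&c->cond);         /* wake waiting readers */
    pthread_mutex_unlock(&c->mutex);
  }

  /* Block until 'offset' bytes are available; returns how far a
     reader may actually read. */
  static size_t cache_wait_for(SHARED_CACHE *c, size_t offset)
  {
    pthread_mutex_lock(&c->mutex);
    while (c->filled < offset && !c->write_done)
      pthread_cond_wait(&c->cond, &c->mutex);
    if (offset > c->filled)
      offset= c->filled;
    pthread_mutex_unlock(&c->mutex);
    return offset;
  }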
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for the checksum calculation. I made
this multi-thread safe too.
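A sketch of the buffer change; the struct and field names are
illustrative:

  /* Before, one bit_buff/rec_buff in shared table state was
     trampled by concurrent threads.  Now each repair thread
     carries its own buffers in its per-thread parameters. */
  typedef struct st_repair_thread_param
  {
    MI_BIT_BUFF bit_buff;       /* per-thread unpack bit buffer */
    uchar *rec_buff;            /* per-thread record buffer */
    ha_checksum checksum;       /* per-thread checksum accumulator */
  } REPAIR_THREAD_PARAM;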
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer to work.
Sometimes the optimizer doesn't set them properly.
Here we decided just to add code to check that the parameters
are correct. We hope to fix the optimizer someday.
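A sketch of such a check, using the HA_READ_MBR_* search modes;
where the check is placed is illustrative:

  /* Refuse to proceed when the optimizer passed a search mode that
     RTree keys cannot handle, instead of working with bogus
     parameters. */
  static int rtree_check_search_mode(enum ha_rkey_function search_flag)
  {
    switch (search_flag) {
    case HA_READ_MBR_CONTAIN:
    case HA_READ_MBR_INTERSECT:
    case HA_READ_MBR_WITHIN:
    case HA_READ_MBR_DISJOINT:
    case HA_READ_MBR_EQUAL:
      return 0;                          /* valid for RTree */
    default:
      return HA_ERR_WRONG_COMMAND;       /* unsupported search mode */
    }
  }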
Deletes on a big index could corrupt the index when it needs to
shrink.
Put in a forgotten negation operator.
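The fix was a single negation; an illustrative before/after with
hypothetical names (the actual MyISAM condition differs):

  /* The underflow (shrink) path must run when the page is NOT
     sufficiently filled; the '!' below is the forgotten operator. */
  static int key_page_underflows(uint page_length, uint block_length)
  {
    return !(page_length >= block_length / 2);
  }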
No test case: it is too big for the test suite, and it does not
work with 4.0, only with later versions. It is attached to the
bug report.