A corrupted compressed table could crash the server and
myisamchk.
The data file of an uncompressed table contains just the records.
There is no header in the data file.
However, the data file of a compressed table has a header.
The header describes how the table was compressed. This
information is necessary to extract the records from the
compressed data file.
The [de]code tables are part of the compressed data file header.
They are numeric representations of the Huffman trees used for
coding and decoding. A Huffman tree is a binary tree. Every
node has two children. A child can be a leaf or a branch. Leaves
contain the decoded value. Branches point to another tree node.
Since the [de]code table is represented as an array of children,
the branches need to point at a child within the same array.
The corruption of the compressed data file from the bug report
was a couple of branches that pointed outside their array.
This condition had not been correctly checked.
I added some checks for the pointers in the decode tables.
This type of corruption will no longer crash the server or
myisamchk.
No test case. A corrupted compressed table is required.
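As an illustration of the kind of check that was added, here is a
minimal sketch with a hypothetical decode-table layout (the real
MyISAM structures and checks differ):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical decode-table entry: a branch stores the indexes of its
       two children within the same array; a leaf stores the decoded value. */
    typedef struct {
        int is_leaf;
        uint16_t left;    /* index of left child when is_leaf == 0 */
        uint16_t right;   /* index of right child when is_leaf == 0 */
        uint8_t value;    /* decoded byte when is_leaf == 1 */
    } decode_entry;

    /* Return 0 if every branch points inside the table, -1 otherwise.
       A corrupted header can make a branch point past the end of the
       array, which would otherwise lead to out-of-bounds reads. */
    static int check_decode_table(const decode_entry *table, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            if (table[i].is_leaf)
                continue;
            if (table[i].left >= count || table[i].right >= count)
                return -1;   /* branch points outside its array: corrupt */
        }
        return 0;
    }

    int main(void)
    {
        decode_entry table[3] = {
            {0, 1, 2, 0},    /* root branch */
            {1, 0, 0, 'a'},  /* leaf */
            {0, 5, 6, 0},    /* branch pointing outside the 3-entry table */
        };
        printf("table is %s\n", check_decode_table(table, 3) ? "corrupt" : "ok");
        return 0;
    }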
A deadlock could happen if delayed insert, FLUSH TABLES, and ALTER TABLE
were running concurrently.
This is fixed by removing a redundant mutex lock when killing a delayed-insert thread.
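For illustration only: a plain, non-recursive mutex that is already
held cannot be taken again by the same thread, which is why a
redundant second lock is dangerous. The sketch below uses plain
pthreads, not the server's lock code:

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

        pthread_mutex_lock(&m);              /* first lock succeeds */
        if (pthread_mutex_trylock(&m) != 0)  /* a blocking lock here would hang */
            puts("a second lock on the same mutex would block this thread");
        pthread_mutex_unlock(&m);
        return 0;
    }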
When compiling with a default key block size greater than the
smallest key block size used in a table, checking that table
failed with bogus errors. The table was marked corrupt. This
affected myisamchk and the server.
The problem was that the default key block size was used in some
places where a size less than or equal to the block size of the
index being checked was required.
We now use the key block size of the particular index
when checking.
A test case is available for later versions only.
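A minimal sketch of the idea with hypothetical structures (the real
check code is more involved): whenever a block size is needed during
check, it has to come from the index at hand rather than from the
compiled-in default.

    #include <stdio.h>
    #include <stddef.h>

    #define DEFAULT_KEY_BLOCK_LENGTH 4096      /* assumed compile-time default */

    /* Hypothetical per-index information. */
    typedef struct {
        size_t block_length;   /* block size this index was created with */
    } key_info;

    /* The check code needs the block length of the index it is examining.
       Using DEFAULT_KEY_BLOCK_LENGTH here gives a wrong expected size for
       any index created with a smaller block size and leads to bogus
       "corrupt" verdicts. */
    static size_t block_length_for_check(const key_info *key)
    {
        return key->block_length;              /* not the default */
    }

    int main(void)
    {
        key_info small_index = { 1024 };
        printf("check uses a block length of %zu bytes\n",
               block_length_for_check(&small_index));
        return 0;
    }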
When unpacking a blob column from a broken row, a server crash
could happen. This was most likely to occur when trying to repair
a table with either REPAIR TABLE or myisamchk, though it could
also happen when accessing the broken row with other SQL
statements such as SELECT if the table was not marked as
crashed.
Fixed a ulong overflow when trying to extract a blob from a
broken row.
Affects MyISAM only.
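For illustration, a sketch of the kind of defensive length check
needed when a length read from a broken row can be garbage
(hypothetical layout, not the actual MyISAM unpack code):

    #include <stdio.h>
    #include <stddef.h>

    /* Read a 4-byte little-endian blob length from a (possibly broken) row
       and validate it against the bytes actually left in the row buffer.
       Without the check, a garbage length can overflow size calculations
       and make the reader go far out of bounds. */
    static int get_blob_length(const unsigned char *row, size_t row_left,
                               unsigned long *blob_len)
    {
        if (row_left < 4)
            return -1;                          /* not even a length field */
        unsigned long len = (unsigned long) row[0]
                          | ((unsigned long) row[1] << 8)
                          | ((unsigned long) row[2] << 16)
                          | ((unsigned long) row[3] << 24);
        if (len > row_left - 4)
            return -1;                          /* length points past the row */
        *blob_len = len;
        return 0;
    }

    int main(void)
    {
        unsigned char broken_row[8] = { 0xff, 0xff, 0xff, 0xff, 'a', 'b', 'c', 'd' };
        unsigned long len;
        if (get_blob_length(broken_row, sizeof(broken_row), &len) != 0)
            puts("row is broken: blob length is out of range");
        return 0;
    }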
REPAIR TABLE could crash the server if there was not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but also all statements that create indexes by sorting:
repair by sort, parallel repair, bulk insert.
We now return an error if there is not sufficient memory to store
at least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
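A hedged sketch of the new check (names and arithmetic are
simplified from the real sort code): before starting the sort, make
sure the buffer can hold at least one key per merge chunk (BUFFPEK).

    #include <stdio.h>
    #include <stddef.h>

    /* Simplified: the sort buffer is split into `chunks` merge areas
       (BUFFPEKs in MyISAM terms). If it cannot hold even one key per
       chunk, the sort cannot proceed and must fail cleanly instead of
       running into unallocated memory. */
    static int check_sort_buffer(size_t sort_buffer_size, size_t chunks,
                                 size_t key_length)
    {
        if (chunks == 0 || key_length == 0)
            return -1;
        if (sort_buffer_size / chunks < key_length)
            return -1;   /* not enough memory for one key per chunk */
        return 0;
    }

    int main(void)
    {
        /* e.g. a 4 KB sort buffer, 64 merge chunks, 128-byte keys */
        if (check_sort_buffer(4096, 64, 128) != 0)
            puts("myisam_sort_buffer_size is too small for this operation");
        return 0;
    }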
When resolving unqualified name references, MySQL was not
checking the item type of the reference. Thus, for example, a
string literal item, whose name by convention equals its string
value, would also work as a reference to a SELECT list item or a
table field.
Fixed by allowing only Item_ref or Item_field items to be
referenced by (unqualified) name.
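A minimal sketch of the idea with hypothetical item kinds (the
server's Item hierarchy is far richer): when looking up an
unqualified name, only field and reference items are acceptable
matches.

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    /* Hypothetical item kinds, loosely modelled on the server's item types. */
    enum item_kind { ITEM_FIELD, ITEM_REF, ITEM_STRING_LITERAL };

    struct item {
        enum item_kind kind;
        const char *name;   /* a string literal's name equals its value */
    };

    /* Only fields and references may be found by unqualified name; a literal
       whose name happens to match must not be treated as a reference. */
    static const struct item *find_by_name(const struct item *items, size_t n,
                                           const char *name)
    {
        for (size_t i = 0; i < n; i++)
            if ((items[i].kind == ITEM_FIELD || items[i].kind == ITEM_REF) &&
                strcmp(items[i].name, name) == 0)
                return &items[i];
        return NULL;
    }

    int main(void)
    {
        struct item select_list[] = {
            { ITEM_STRING_LITERAL, "a" },   /* the literal 'a' */
            { ITEM_FIELD, "b" },            /* the column b */
        };
        printf("'a' resolves to %s\n",
               find_by_name(select_list, 2, "a") ? "an item" : "nothing");
        return 0;
    }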
If REPAIR TABLE ... USE_FRM was issued for a table located in a
database other than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list
(user-specified or the default database) instead of from thd (the
default database).
Affects 4.1 only.
Examined rows are counted for every join part. The per-join-part
counter was incremented over all iterations. The result variable
was replaced at the end of every iteration. The final result was
the number of rows examined by the join part that ended its
execution last. The numbers of the other join parts were
lost.
Now we reset the per-join-part counter before every iteration and
add it to the result variable at the end of the iteration. That
way we get the sum of all iterations of all join parts.
No test case. Testing this needs a look into the slow query log.
I don't know of a way to do this portably with the test suite.
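A small sketch of the counting change (plain C with made-up names;
the real code lives in the join executor): sum the per-part counters
instead of overwriting the total with the last one.

    #include <stdio.h>

    #define JOIN_PARTS  2
    #define ITERATIONS  3

    int main(void)
    {
        /* rows examined by each join part in each iteration (made-up data) */
        unsigned long examined[JOIN_PARTS][ITERATIONS] = {
            { 10, 10, 10 },
            {  5,  5,  5 },
        };
        unsigned long total = 0;

        for (int part = 0; part < JOIN_PARTS; part++) {
            for (int it = 0; it < ITERATIONS; it++) {
                unsigned long counter = 0;       /* reset before the iteration */
                counter += examined[part][it];
                total += counter;                /* add, do not overwrite */
            }
        }
        /* old behaviour: total = counter of the join part that finished last;
           new behaviour: total = sum over all iterations of all join parts */
        printf("examined rows: %lu\n", total);   /* prints 45 */
        return 0;
    }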
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair works with one thread per index. The
first of these threads also rebuilds the data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions; they put the
positions from the old data file into their indexes.
In the new design there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that the parallel repair of compressed
tables used a common bit_buff and rec_buff. I changed it so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
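To illustrate the shared-cache idea (plain pthreads with made-up
names; the real implementation uses the io cache layer and one
thread per index): one writer publishes the new record positions
and the index-building threads read them from the shared structure
instead of guessing from the old data file.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_RECORDS 5

    /* Shared "cache" of new record positions, filled by the data file
       writer and read by the index-building threads. */
    static unsigned long positions[NUM_RECORDS];
    static int available = 0;                 /* how many positions are valid */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void *writer(void *arg)
    {
        (void) arg;
        unsigned long pos = 0;
        for (int i = 0; i < NUM_RECORDS; i++) {
            pthread_mutex_lock(&lock);
            positions[i] = pos;               /* position in the new data file */
            pos += 100;                       /* pretend each record is 100 bytes */
            available = i + 1;
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *index_builder(void *arg)
    {
        (void) arg;
        for (int i = 0; i < NUM_RECORDS; i++) {
            pthread_mutex_lock(&lock);
            while (available <= i)            /* wait until the writer got here */
                pthread_cond_wait(&cond, &lock);
            unsigned long pos = positions[i];
            pthread_mutex_unlock(&lock);
            printf("index entry %d -> new position %lu\n", i, pos);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t w, r;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, index_builder, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }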
This is an addition to the fix for bug21617. Valgrind reports an
error when opening a merge table whose underlying tables have fewer
indexes than the merge table itself.
Copy at most min(file->keys, table->key_parts) elements from the rec_per_key array.
This fixes problems when the merge table and its subtables have a different number of keys.
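A sketch of the copy with hypothetical variable names close to the
ones above: never copy more rec_per_key entries than the smaller of
the two key counts provides.

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    /* Copy statistics for at most min(engine_keys, table_key_parts) entries;
       copying table_key_parts entries unconditionally reads past the end of
       the engine's rec_per_key array when the underlying table has fewer
       indexes than the merge table. */
    static void copy_rec_per_key(unsigned long *dst, const unsigned long *src,
                                 size_t engine_keys, size_t table_key_parts)
    {
        size_t n = engine_keys < table_key_parts ? engine_keys : table_key_parts;
        memcpy(dst, src, n * sizeof(*src));
    }

    int main(void)
    {
        unsigned long engine_stats[2] = { 10, 20 };    /* underlying table: 2 keys */
        unsigned long table_stats[4] = { 0, 0, 0, 0 }; /* merge table: 4 key parts */
        copy_rec_per_key(table_stats, engine_stats, 2, 4);
        printf("%lu %lu %lu %lu\n",
               table_stats[0], table_stats[1], table_stats[2], table_stats[3]);
        return 0;
    }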
Though this is not a storage-engine-specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the
reason to add a test case to ndb_update.test. Without the
notification, different bad things could happen:
BDB removed duplicate rows, which is not expected;
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
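A hedged sketch of the notification pattern with a hypothetical
handler interface (the server uses its own handler notification
mechanism with an ignore-duplicate-key flag; the names and the flag
value below are made up for illustration):

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical per-table handler with a generic notification hook. */
    struct handler {
        const char *table_name;
        void (*extra)(struct handler *self, int operation);
    };

    #define EXTRA_IGNORE_DUP_KEY 1   /* made-up flag value, illustration only */

    static void print_extra(struct handler *self, int operation)
    {
        printf("%s notified with operation %d\n", self->table_name, operation);
    }

    /* For a multi-table UPDATE IGNORE, every involved handler gets the same
       notification that single-table UPDATE already sends. */
    static void notify_ignore(struct handler **handlers, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            handlers[i]->extra(handlers[i], EXTRA_IGNORE_DUP_KEY);
    }

    int main(void)
    {
        struct handler t1 = { "t1", print_extra };
        struct handler t2 = { "t2", print_extra };
        struct handler *all[] = { &t1, &t2 };
        notify_ignore(all, 2);
        return 0;
    }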