The server could crash on any access to an ARCHIVE table created
in 5.0. The Archive engine in 5.1 (and later) uses a modified
version of zlib (azlib). These two versions are incompatible, so a
proper upgrade is needed before tables created in 5.0 can be used
reliably.
This upgrade can be performed using REPAIR, but due to the lack of
testing it is too risky to allow the upgrade for now. This patch
addresses only the crashing issue; any attempt to repair such a
table is blocked. Eventually REPAIR can be allowed to run through
(which will also upgrade tables from the older version to the
newer one), but only after thorough testing.
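One plausible reading of that block, as a minimal C++ sketch; none of
the names below (ArchiveShareSketch, data_version,
ARCHIVE_CURRENT_VERSION, the SKETCH_* result codes) are the real
ha_archive members or server constants, they only illustrate refusing
the repair/upgrade path for old-format tables:

    // Illustrative only: refuse REPAIR (and thus the implicit upgrade)
    // for tables written in the old zlib-based 5.0 format.
    static const int SKETCH_ADMIN_OK     = 0;   // stand-in result codes,
    static const int SKETCH_ADMIN_FAILED = -1;  // not the server's values

    struct ArchiveShareSketch {
      unsigned data_version;   // version stamp assumed to exist in the file
    };
    static const unsigned ARCHIVE_CURRENT_VERSION = 3;  // assumed value

    int repair_sketch(ArchiveShareSketch &share) {
      if (share.data_version < ARCHIVE_CURRENT_VERSION)
        return SKETCH_ADMIN_FAILED;   // block the untested upgrade path
      // ... normal repair of a current-format table ...
      return SKETCH_ADMIN_OK;
    }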
The Archive engine returned wrong values for the average record
length and the max data length.
With this fix they are calculated as follows:
- max data length is 2^63 where large files are supported
  and INT_MAX32 where they are not;
- average record length is data length / records in the data file.
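The corrected calculation, shown as a small self-contained C++
sketch; the struct and field names are simplified stand-ins for the
handler statistics, not the actual ha_archive code:

    #include <cstdint>

    // Simplified stand-in for the statistics the handler reports.
    struct ArchiveStatsSketch {
      uint64_t data_file_length;   // bytes currently in the data file
      uint64_t records;            // rows currently in the data file
      uint64_t max_data_length;    // reported maximum data length
      uint64_t mean_rec_length;    // reported average record length
    };

    void update_stats_sketch(ArchiveStatsSketch &stats,
                             bool large_files_supported) {
      // Max data length: 2^63 with large-file support, INT_MAX32 without.
      stats.max_data_length =
          large_files_supported ? (1ULL << 63) : 0x7FFFFFFFULL;
      // Average record length: data length / rows (empty table gives 0).
      stats.mean_rec_length =
          stats.records ? stats.data_file_length / stats.records : 0;
    }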
A SELECT with a join (not only a self-join) from an ARCHIVE table
could return an incomplete result set when the result set size
exceeded the join buffer size.
The problem was that the archive row counter was initialized too
early, when the ha_archive::info() method was called. Later, when
the optimizer exceeds the join buffer, it attempts to reuse the
handler without calling ha_archive::info() again (which is
correct).
Fixed by moving the row counter initialization from
ha_archive::info() to ha_archive::rnd_init().
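The shape of that fix, as a simplified C++ sketch with stand-in
types (not the real handler classes): the per-scan counter is set
whenever a scan starts, so every rescan, including the ones driven by
the join buffer, begins with a fresh counter even though info() is
not called again:

    #include <cstdint>

    class ArchiveScanSketch {
      uint64_t scan_rows;    // rows remaining in the current scan
      uint64_t table_rows;   // total rows kept in the shared table state

     public:
      explicit ArchiveScanSketch(uint64_t rows)
          : scan_rows(0), table_rows(rows) {}

      // Statistics only; the scan counter is no longer touched here.
      void info() {}

      // Called at the start of every table scan and rescan, so the
      // counter is always initialized, join buffer or not.
      void rnd_init() { scan_rows = table_rows; }

      // Returns false once the scan has delivered all rows.
      bool rnd_next() {
        if (scan_rows == 0) return false;
        --scan_rows;
        // ... decompress and return the next row here ...
        return true;
      }
    };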
Any statement reading a corrupt archive data file
(CHECK/REPAIR/SELECT/UPDATE/DELETE) could cause an assertion
failure in debug builds. This assertion has been removed and an
error is returned instead.
Also fixed the vague error message CHECK/REPAIR returned when
they met corruption in the archive data file; a proper error code
is now returned.
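A sketch of the changed behavior, using plain stdio and a stand-in
error code (the real server error constants are not reproduced
here): a short or failed read from the data file now reports
corruption to the caller instead of asserting:

    #include <cstddef>
    #include <cstdio>

    static const int SKETCH_OK      = 0;
    static const int SKETCH_CRASHED = 1;  // stand-in "table crashed" code

    // Read one fixed-size row; report corruption instead of asserting.
    int read_row_sketch(std::FILE *data_file, unsigned char *buf,
                        std::size_t row_len) {
      std::size_t got = std::fread(buf, 1, row_len, data_file);
      if (got != row_len) {
        // Previously a debug assertion fired here; now the statement
        // (CHECK/REPAIR/SELECT/UPDATE/DELETE) gets an error to report.
        return SKETCH_CRASHED;
      }
      return SKETCH_OK;
    }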
ha_partition::update_create_info() just called update_create_info()
of the first partition, so it only picked up the auto-increment
maximum of the first partition, and SHOW CREATE TABLE could show a
too small AUTO_INCREMENT value.
Fixed by implementing ha_partition::update_create_info() the way
other handlers do.
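The idea of the fix, reduced to a stand-alone C++ sketch (the real
method works on the server's HA_CREATE_INFO and the partition
handler array; both are replaced by minimal stand-ins here): consult
every partition and report the largest auto-increment value rather
than the first partition's:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct CreateInfoSketch { uint64_t auto_increment_value = 0; };
    struct PartitionSketch  { uint64_t next_auto_increment  = 0; };

    // Before the fix only partitions[0] was consulted, so SHOW CREATE
    // TABLE printed whatever that single partition happened to hold.
    void update_create_info_sketch(CreateInfoSketch &info,
                                   const std::vector<PartitionSketch> &parts) {
      uint64_t max_auto_inc = 0;
      for (const PartitionSketch &p : parts)
        max_auto_inc = std::max(max_auto_inc, p.next_auto_increment);
      info.auto_increment_value = max_auto_inc;
    }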
HA_ARCHIVE: stats.auto_increment handling made consistent with other engines
After reading the last record from a freshly opened archive table
(e.g. after FLUSH TABLES, or if the table cache is full and there
is no room for the table in it), the table was reported as crashed.
The problem was that azio wrongly invalidated the azio_stream when
it met EOF.
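A simplified illustration of the corrected behavior, using plain
stdio stand-ins rather than the actual azio code: end-of-file during
a read is a normal condition and must not flag the stream as broken;
only a real I/O error may do that:

    #include <cstddef>
    #include <cstdio>

    // Minimal stand-in for an azio-like stream.
    struct StreamSketch {
      std::FILE *file;
      bool broken;   // the flag the old code wrongly set on EOF
    };

    // Read up to len bytes and return how many were actually read.
    std::size_t stream_read_sketch(StreamSketch &s, unsigned char *buf,
                                   std::size_t len) {
      std::size_t got = std::fread(buf, 1, len, s.file);
      if (got < len && std::ferror(s.file))
        s.broken = true;   // genuine I/O error: mark the stream
      // got < len with only feof() set is plain EOF: leave s.broken
      // alone, so the still-open table is not reported as crashed.
      return got;
    }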
CHECK TABLE against an ARCHIVE table could falsely report table
corruption, or cause a server crash.
Fixed by using a proper buffer for CHECK TABLE.
Affects both 5.0 and 5.1.
An ARCHIVE table was truncated by a REPAIR TABLE ... USE_FRM
statement.
The table handler returned its file name extensions in the wrong
order, so REPAIR TABLE believed it had to use the meta file to
create a new table from it.
With the fixed order, REPAIR TABLE now uses the data file to create
the new table, so REPAIR TABLE ... USE_FRM works well with the
ARCHIVE engine.
This issue affects 5.0 only, since in 5.1 the ARCHIVE engine stores
meta information and data in the same file.
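The fix comes down to the order of the extension list the handler
returns; below is a sketch under the assumption that .ARZ is the
data file and .ARM the 5.0 meta file (the usual ARCHIVE naming),
using the conventional null-terminated list:

    // Data file extension first, so REPAIR TABLE ... USE_FRM rebuilds
    // the table from the data file rather than from the meta file.
    static const char *archive_extensions_sketch[] = {
      ".ARZ",    // data file
      ".ARM",    // meta file (5.0 only)
      nullptr    // list terminator
    };

    const char **bas_ext_sketch() { return archive_extensions_sketch; }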
1) Two small Windows cleanups for Archive.
2) Patch from Calvin for Falcon to be able to have its own I_S
loaded. One example added for this; it does a hello world.
Also added a flush table test. Found one possible bug in OPTIMIZE
TABLE which has never been reported, but I think it would be
possible on a file system that ran out of disk space.