After a locking error the open table(s) were not fully
cleaned up for reuse. But they were put into the open table
cache even before the lock was tried. The next statement
reused the table(s) with a wrong lock type set up. This
tricked MyISAM into believing that it doesn't need to update
the table statistics. Hence CHECK TABLE reported a mismatch
of record count and table size.
Fortunately nothing worse has been detected yet. The effect
of the test case was that an insert worked on a read-locked
table. (!)
I added a new function that clears the lock type from all
tables that were prepared for a lock. I call this function
when a lock fails.
No test case. One test would add 50 seconds to the
test suite. Another test requires file mode modifications.
I added a test script to the bug report. It contains three
cases for failing locks. All could reproduce a table
corruption. All are fixed by this patch.
This bug was not lock timeout specific.
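A minimal sketch of the idea behind the new function, with illustrative
type and field names (not the exact server code): walk every table that
was prepared for the lock and reset its lock type, so that a later
statement picking the table up from the open table cache starts clean.

  /* Stand-ins so the sketch compiles; the server's real types differ. */
  enum thr_lock_type { TL_UNLOCK, TL_READ, TL_WRITE };
  struct TABLE { thr_lock_type lock_type; };

  /* Reset the lock type of every table that was prepared for a lock.
     Called when acquiring the lock fails, so that a table reused from
     the open table cache does not carry a stale lock type. */
  static void reset_lock_data(TABLE **tables, unsigned count)
  {
    for (TABLE **t= tables, **end= tables + count; t < end; t++)
      (*t)->lock_type= TL_UNLOCK;
  }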
The bug caused reported index corruption in cases where
key_cache_block_size was not a multiple of myisam_block_size,
e.g. when key_cache_block_size=1536 while myisam_block_size=1024.
does not have "NOT NULL" attribute set. Also, calculate the padding
characters more safely, so that a negative number doesn't cause it to
print MAXINT-n spaces.
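A sketch of the safer padding computation described above (the helper
and its names are hypothetical, not the patched code): clamp the width
difference at zero before it is used as a count, so a negative result
cannot be reinterpreted as a huge number of spaces.

  #include <cstdio>
  #include <cstring>

  /* Hypothetical helper: pad 'value' to 'field_width' characters. */
  static void print_padded(const char *value, long field_width)
  {
    long pad= field_width - (long) std::strlen(value);
    if (pad < 0)
      pad= 0;            /* previously wrapped around to ~MAXINT spaces */
    std::printf("%s%*s", value, (int) pad, "");
  }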
Retrieving data from a compressed MyISAM table bigger than 4G on a 32-bit
box with mmap() support resulted in a server crash.
mmap() accepts the length of bytes to be mapped in its second parameter,
which is a 32-bit size_t, but we passed data_file_length, which is a 64-bit
my_off_t. As a result only the first data_file_length % 4G bytes were mapped.
This fix adds an additional condition for mmap() usage: use mmap() for a
compressed table only if its size is no more than 4G on a 32-bit platform.
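A sketch of the added condition (my_off_t follows MySQL's definition;
the helper itself is illustrative): mmap() takes a size_t length, so the
64-bit file length must fit into size_t before mmap() may be used.

  #include <cstddef>
  #include <cstdint>

  typedef uint64_t my_off_t;            /* as in MySQL's my_global.h */

  /* mmap()'s second parameter is size_t; on a 32-bit platform a 64-bit
     data_file_length would be silently truncated, so that only
     data_file_length % 4G bytes get mapped. Allow mmap() only when the
     whole length fits into size_t. */
  static bool mmap_length_ok(my_off_t data_file_length)
  {
    return data_file_length <= (my_off_t) (size_t) -1;
  }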
Conversion from int and real numbers to UCS2 did not work correctly:
CONVERT(100, CHAR(50) UNICODE)
CONVERT(103.9, CHAR(50) UNICODE)
The problem appeared because numbers have the binary charset, so a
simple charset recast binary->ucs2 was performed
instead of a real conversion.
Fixed to make numbers pretend to be non-binary.
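For illustration only (this is a standalone snippet, not server code):
a real conversion of the ASCII textual form of a number to UCS2 widens
every character to two bytes, whereas a plain recast would keep the
single-byte form unchanged.

  #include <cstdio>
  #include <string>

  /* Widen each ASCII character to a two-byte UCS2 code unit (0x00, ch). */
  static std::string ascii_to_ucs2(const std::string &ascii)
  {
    std::string out;
    for (unsigned char ch : ascii)
    {
      out.push_back('\0');
      out.push_back((char) ch);
    }
    return out;
  }

  int main()
  {
    for (unsigned char b : ascii_to_ucs2("103.9"))
      std::printf("%02X ", b);      /* 00 31 00 30 00 33 00 2E 00 39 */
    std::printf("\n");
    return 0;
  }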
In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 chars
and a temporary table is used during the select, the result is converted
to blob, due to the policy of not storing fields longer than 512 chars in
tmp tables as varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is
now always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
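A sketch of the decision now made in Item_func_group_concat::field_type()
(the function, constant, and enum names below are stand-ins, not the
server's): any result longer than 512 chars maps to blob, regardless of
whether a temporary table is actually involved.

  enum enum_field_types { TYPE_VARCHAR, TYPE_BLOB };   /* stand-ins */

  static enum_field_types group_concat_field_type(unsigned long max_chars)
  {
    const unsigned long VARCHAR_LIMIT= 512;  /* tmp-table varchar limit */
    return max_chars > VARCHAR_LIMIT ? TYPE_BLOB : TYPE_VARCHAR;
  }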
CONNECTION_ID() was implemented as a constant Item, i.e. an instance of
the Item_static_int_func class holding a value computed at creation time.
Since Items are created during parsing, and trigger statements are parsed
on table open, the first connection to open a particular table would
effectively set its own CONNECTION_ID() inside trigger statements for
that table.
Re-implement CONNECTION_ID() as a class derived from Item_int_func, and
compute connection_id on every call to fix_fields().
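A compilable sketch of the shape of the fix (the types below are minimal
mocks of the server classes; real signatures differ): the value is
recomputed in fix_fields(), which runs each time the statement is
prepared for execution, rather than once when the Item is created.

  typedef long long longlong;

  /* Minimal mocks so the sketch compiles. */
  struct THD { unsigned long thread_id; };
  struct Item { virtual ~Item() {} };
  struct Item_int_func : Item
  {
    virtual bool fix_fields(THD *, Item **) { return false; }
    virtual longlong val_int()= 0;
  };

  class Item_func_connection_id : public Item_int_func
  {
    longlong value;
  public:
    Item_func_connection_id() : value(0) {}
    const char *func_name() const { return "connection_id"; }
    bool fix_fields(THD *thd, Item **ref)
    {
      if (Item_int_func::fix_fields(thd, ref))
        return true;                     /* propagate base-class error */
      value= (longlong) thd->thread_id;  /* current connection, each time */
      return false;
    }
    longlong val_int() { return value; }
  };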
a race between updating and checking Max_used_connections. This is done in
a loop until either the disconnect has finished or the timeout has expired.
In the latter case the test will fail.
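An illustration of the wait-loop idea (the condition function is
hypothetical; the actual fix lives in the test script): poll until the
condition holds or the timeout expires, and report failure on timeout.

  #include <chrono>
  #include <thread>

  static bool wait_until(bool (*disconnect_finished)(),
                         std::chrono::seconds timeout)
  {
    auto deadline= std::chrono::steady_clock::now() + timeout;
    while (!disconnect_finished())
    {
      if (std::chrono::steady_clock::now() >= deadline)
        return false;                  /* timeout: the test will fail */
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return true;
  }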