The function mi_get_pointer_length() computed a pointer size
that was too small for very large tables.
Fixed by inserting the missing 'else' between the branches for
very large tables.
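A minimal sketch of a table definition that exercises the
large-table pointer size calculation; the table name and option
values are illustrative assumptions:

    CREATE TABLE big_t (id BIGINT NOT NULL, payload VARCHAR(255))
      ENGINE=MyISAM
      MAX_ROWS=4294967296      -- estimated size large enough to need a wide data pointer
      AVG_ROW_LENGTH=100;
    SHOW TABLE STATUS LIKE 'big_t';  -- Max_data_length reflects the chosen pointer size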
An update that used a join of a table to itself and modified the
table on one side of the join reported the table as crashed or
updated the wrong rows.
Fixed by creating a temporary table for self-joined multi-table
UPDATE statements.
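A sketch of the failing statement shape, with illustrative table
and column names:

    CREATE TABLE t1 (a INT, b INT) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1,1),(2,2),(3,3);
    UPDATE t1 AS x, t1 AS y SET x.b = y.b + 10 WHERE x.a = y.a;
    -- previously could report t1 as crashed or update the wrong rows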
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair uses one thread per index; the first of
these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built its index so that the
entries pointed to the correct record positions. The other threads,
however, did not know the new record positions and put the
positions from the old data file into their indexes.
In the new design, a shared io cache is filled by the first thread
(the data file writer) with the new contiguous records and read by
the other threads, so they now know the new record positions.
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed this so that
thread-specific buffers are used for parallel repair.
A similar problem existed for the checksum calculation. I made this
multi-thread safe too.
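A sketch of how a non-quick parallel repair can be triggered; the
thread count, table name, and DELETE condition are illustrative:

    SET GLOBAL myisam_repair_threads = 2;  -- enable parallel repair, one thread per index
    DELETE FROM t1 WHERE a < 100;          -- leave holes (deleted records) in the data file
    OPTIMIZE TABLE t1;                     -- non-quick repair rebuilds data file and indexes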
Currently SQL_BIG_RESULT is checked only at compile time.
However, additional optimizations may take place after
this check that change the sort method from 'filesort'
to sorting via index. As a result the actual plan
executed is not the one specified by the SQL_BIG_RESULT
hint. Similarly, there is no such test when executing
EXPLAIN, resulting in incorrect output.
The patch corrects the problem by testing for SQL_BIG_RESULT
during both the EXPLAIN and execution phases.
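An illustrative query shape for the hint, with assumed table and
column names; with the fix, both statements must reflect a
filesort:

    EXPLAIN SELECT SQL_BIG_RESULT a, COUNT(*) FROM t1 GROUP BY a;  -- should show 'Using filesort'
    SELECT SQL_BIG_RESULT a, COUNT(*) FROM t1 GROUP BY a;          -- must actually use a filesort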
"temporary table with data directory option fails"
MyISAM should not use the user-specified table name when creating
temporary tables; it should use the generated, connection-specific
real name instead.
Test included.
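A sketch of the previously failing statement; the directory path
is illustrative:

    CREATE TEMPORARY TABLE tmp_t (a INT) ENGINE=MyISAM
      DATA DIRECTORY='/tmp/mysql_tmp_data';  -- failed before the fix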
Using MAX()/MIN() functions in queries on a table with indexes
disabled (by ALTER TABLE ... DISABLE KEYS) resulted in error 124
(wrong index) from the storage engine.
The problem was that the optimizer used a disabled index to
optimize MAX()/MIN(). Normally it must skip disabled indexes and
perform a table scan.
This patch skips disabled indexes for the MIN()/MAX() optimization.
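A reproduction sketch with assumed names:

    CREATE TABLE t1 (a INT, KEY(a)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1),(2),(3);
    ALTER TABLE t1 DISABLE KEYS;
    SELECT MAX(a) FROM t1;  -- returned error 124 before the fix; must use a table scan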
Certain updates of a table joined to itself resulted in unexpected
behavior.
The problem was that the record cache was mistakenly enabled for
self-joined table updates. Normally the record cache must be
disabled for such updates.
Fixed the wrong condition in the code that determines whether to
use the record cache for self-joined table updates.
Only MyISAM tables were affected.
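A sketch of an affected statement shape, with illustrative names;
the record cache must stay disabled here:

    UPDATE t1 AS a, t1 AS b SET a.val = b.val WHERE a.id < b.id;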
For "count(*) while index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs does not
allocate a new buffer, but reuses 'lastkey2', thereby polluting
the stored key value.
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
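A reproduction sketch with assumed names, using an indexed blob so
that the key retrieval reuses 'lastkey2':

    CREATE TABLE t1 (b BLOB, KEY(b(100))) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (REPEAT('a', 200)), (REPEAT('a', 200));
    SELECT COUNT(*) FROM t1 WHERE b = REPEAT('a', 200);  -- index scan plus blob key retrieval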
There are (at least) two implementations of the checksum
computation. One is in MyISAM for the quick checksum. It
is executed on every row change. The other is in the
SQL layer for the extended checksum. It retrieves all rows
of a table via the respective storage engine.
In former MySQL versions, varchars were stored with their maximum
length, but they are now stored with their real length, similar to
blobs.
The extended checksum calculation had not been adjusted for this
change. Hence too much data was checksummed. In MyISAM this change
had already been taken care of: only the real data is included in
the checksum.
I changed mysql_checksum_table() so that it uses the actual length
information of true varchar fields instead of the full field
length, which was appropriate for the former varchar
implementations.
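A sketch showing the two checksum flavors on an assumed table:

    CREATE TABLE t1 (v VARCHAR(100)) ENGINE=MyISAM CHECKSUM=1;  -- maintain the quick checksum
    CHECKSUM TABLE t1 QUICK;     -- live MyISAM checksum, updated on every row change
    CHECKSUM TABLE t1 EXTENDED;  -- SQL-layer checksum, reads every row via the storage engine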
Initialized usable_keys from table->keys_in_use instead of ~0
in test_if_skip_sort_order(). Previously it was possible for a
disabled index to be used for sorting.
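An illustrative query shape for the corrected check (names
assumed):

    ALTER TABLE t1 DISABLE KEYS;
    SELECT * FROM t1 ORDER BY a;  -- must not skip the sort by relying on the disabled index on 'a'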