OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes
but also the data file.
Non-quick parallel repair works with one thread per index; the
first of these threads also rebuilds the new data file.
The problem was that all threads shared the read IO cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put the
positions from the old data file into their indexes.
The new design introduces a shared IO cache that is filled by the
first thread (the data file writer) with the new contiguous records
and read by the other threads, so now they know the new record
positions.
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation; I made this
multi-thread safe too.
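A minimal way to exercise this code path (table and column names
are made up for illustration; the data needed to actually reproduce
the original corruption may vary) is an OPTIMIZE TABLE on a MyISAM
table containing deleted rows while myisam_repair_threads is
greater than 1:

  SET SESSION myisam_repair_threads = 2;

  CREATE TABLE t1 (a INT, b VARCHAR(100), KEY(a), KEY(b))
    ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1,'one'), (2,'two'), (3,'three'), (4,'four');
  -- create holes in the data file
  DELETE FROM t1 WHERE a IN (2, 3);

  -- non-quick parallel repair: rebuilds the data file and both indexes
  OPTIMIZE TABLE t1;
  -- before the fix, index entries could point at stale record positions
  CHECK TABLE t1;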
Though this is not a storage engine specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the reason to
add a test case to ndb_update.test. As a result, different bad
things could happen:
BDB removed duplicate rows, which is not expected.
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE
IGNORE as is done for single-table UPDATE.
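For illustration, the affected statements are multi-table updates
of this shape (table and column names are made up):

  UPDATE IGNORE t1, t2
    SET t1.a = t2.a
    WHERE t1.id = t2.id;

With the fix, the engines behind t1 and t2 are notified of the
IGNORE flag, just as they are for a single-table UPDATE IGNORE.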
On an INSERT into an updatable but non-insertable view, an error
message was issued stating that the view is not updatable. This
could confuse the user.
A new error message is introduced. It is shown when a user tries to
insert into a non-insertable view.
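A sketch of such a view (names are made up; in MySQL a view with a
derived column can be updatable while not being insertable):

  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT a, a + b AS s FROM t1;

  UPDATE v1 SET a = 1;             -- allowed: the view is updatable
  INSERT INTO v1 (a) VALUES (2);   -- rejected: the view is not
                                   -- insertable, which the new error
                                   -- message now states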
A crash may happen when selecting from a merge table that has
underlying tables with fewer indexes than the merge table itself.
If the number of keys in the merge table is not bigger than the
requested key number, return an error.
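A reproducer sketch (made-up names; depending on the server version
the index mismatch may instead be rejected when the merge table is
opened):

  CREATE TABLE t1 (a INT, b INT, KEY(a)) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT, b INT, KEY(a), KEY(b))
    ENGINE=MERGE UNION=(t1, t2);

  -- the optimizer may choose the key on b, which t1 does not have;
  -- before the fix this could crash, now an error is returned
  SELECT * FROM m1 WHERE b = 1;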
The presence of a subquery in the ON expression of a join should
not block merging the view that contains this join. Before this
patch such views were converted into temporary table views.
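A sketch of such a view (made-up tables):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);

  CREATE VIEW v1 AS
    SELECT t1.a
    FROM t1 JOIN t2
      ON t1.a = t2.a AND t1.a IN (SELECT a FROM t3);

  -- with the fix, v1 can be processed with the MERGE algorithm
  -- instead of being materialized into a temporary table
  SELECT * FROM v1;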
Fixed a problem with queries containing an ALL/ANY quantified
subquery in HAVING.
The Item::split_sum_func2 method should not create an Item_ref for
objects of any class derived from Item_subselect.
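The kind of query involved looks roughly like this (made-up tables;
the exact query shape that triggered the original problem may
differ):

  SELECT a, MAX(b)
  FROM t1
  GROUP BY a
  HAVING MAX(b) > ALL (SELECT b FROM t2);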
Any default value for an ENUM field over a UCS2 charset was
corrupted when we put it into the frm file, because it had been
overwritten by its HEX representation.
To fix this, we now save a copy of the structure that represents
the ENUM type and use this copy when storing the default values.
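For illustration (made-up table name):

  CREATE TABLE t1 (
    c ENUM('yes','no') CHARACTER SET ucs2 DEFAULT 'no'
  );
  -- the reported default should still read 'no',
  -- not its hex-encoded form
  SHOW CREATE TABLE t1;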
A crash could occur for a query that returns the results of
aggregation by GROUP_CONCAT.
The crash was due to an overflow that happened for the field
sortorder->length.
The fix prevents this overflow by exploiting the fact that the
value of sortorder->length cannot be greater than the value of
thd->variables.max_sort_length.
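The affected statements are of this general shape (made-up tables;
the overflow depended on the length of the sorted expression
relative to max_sort_length):

  CREATE TABLE t1 (a INT, b TEXT);
  SELECT a, GROUP_CONCAT(b ORDER BY b)
  FROM t1
  GROUP BY a;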
Bug#20627 - INSERT DELAYED does not honour auto_increment_* variables
INSERT DELAYED ignored an explicitly set INSERT_ID and
session-specific auto_increment_* variables.
The problem was that the inserts are done by a system thread,
which does not have access to the session variables of the user
thread.
Following a proposal by Guilhem, I fixed it so that the variables
are copied to the data structure for every delayed row. The system
thread sets its session variables from these values.
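A sketch of the affected scenario (made-up table; INSERT DELAYED
applies only to engines such as MyISAM):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10))
    ENGINE=MyISAM;

  SET SESSION auto_increment_increment = 10;
  SET SESSION auto_increment_offset = 5;
  SET SESSION insert_id = 100;

  -- before the fix, the delayed-insert system thread generated id
  -- from its own defaults instead of the session values above
  INSERT DELAYED INTO t1 (v) VALUES ('x');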
Fixed a confusing error message from the storage engine when it
fails to open an underlying table. The error message is issued when
a table is _opened_ (not when it is created).
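Assuming this refers to a MERGE table over a missing underlying
table (the text does not name the engine), the situation looks
like:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  -- creating the merge table succeeds even though t_missing
  -- does not exist
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t_missing);
  -- the error about the unopenable underlying table is reported
  -- only when m1 is opened
  SELECT * FROM m1;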