Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
When compiling with a default key block size greater than the
smallest key block size used in a table, checking that table
failed with bogus errors. The table was marked corrupt. This
affected myisamchk and the server.
The problem was that the default key block size was used in
some places where a size less than or equal to the block size
of the index being checked was required.
We now use the key block size of the particular index
when checking.
A test case is available for later versions only.
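A minimal sketch of the idea (DEFAULT_KEY_BLOCK_SIZE, key_def and
read_key_page are illustrative names, not the actual MyISAM code):
where a length no larger than the block size of the index being
checked is required, that index's block size must be used instead
of the compiled-in default.

    #include <stdio.h>
    #include <stdint.h>

    #define DEFAULT_KEY_BLOCK_SIZE 4096      /* compiled-in default (assumption) */

    struct key_def {
      uint32_t block_length;                 /* block size of this particular index */
    };

    /* A routine that requires a length no larger than the block size of
       the index being checked. */
    static int read_key_page(const struct key_def *keyinfo, uint32_t length) {
      if (length > keyinfo->block_length) {
        fprintf(stderr, "bogus error: length %u > index block size %u\n",
                length, keyinfo->block_length);
        return -1;
      }
      return 0;                              /* page read with 'length' bytes */
    }

    int main(void) {
      struct key_def keyinfo = { 1024 };     /* smallest block size in the table */
      /* Old code: used the default key block size and failed. */
      read_key_page(&keyinfo, DEFAULT_KEY_BLOCK_SIZE);
      /* Fix: use the key block size of the particular index. */
      read_key_page(&keyinfo, keyinfo.block_length);
      return 0;
    }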
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair uses one thread per index.
The first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put
the positions from the old data file into their indexes.
In the new design there is a shared io cache, which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads, so that they know the new
record positions.
Another problem was that the parallel repair of compressed
tables used a common bit_buff and rec_buff for all threads.
I changed it so that thread-specific buffers are used for
parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
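A minimal pthreads sketch of the new design (illustrative only, not
the MyISAM implementation): the data file writer publishes the new
record positions through a shared, synchronized buffer, and each
index-builder thread reads the positions from there instead of
deriving them from the old data file.

    #include <pthread.h>
    #include <stdio.h>

    #define N_RECORDS 4

    static long new_pos[N_RECORDS];          /* positions in the new data file */
    static int n_written = 0;                /* how many positions are published */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    /* First thread: rebuilds the data file, skipping holes, and publishes
       the position of each contiguous, non-deleted record it writes. */
    static void *data_file_writer(void *arg) {
      (void)arg;
      long pos = 0;
      for (int i = 0; i < N_RECORDS; i++) {
        pthread_mutex_lock(&lock);
        new_pos[n_written++] = pos;          /* record lands at 'pos' in new file */
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        pos += 100;                          /* assume 100-byte records */
      }
      return NULL;
    }

    /* Other threads: build one index each, using the *new* positions. */
    static void *index_builder(void *arg) {
      int keynr = *(int *)arg;
      for (int i = 0; i < N_RECORDS; i++) {
        pthread_mutex_lock(&lock);
        while (n_written <= i)
          pthread_cond_wait(&cond, &lock);   /* wait for the writer */
        long pos = new_pos[i];
        pthread_mutex_unlock(&lock);
        printf("index %d: entry %d -> new position %ld\n", keynr, i, pos);
      }
      return NULL;
    }

    int main(void) {
      pthread_t writer, builders[2];
      int keys[2] = { 1, 2 };
      pthread_create(&writer, NULL, data_file_writer, NULL);
      for (int k = 0; k < 2; k++)
        pthread_create(&builders[k], NULL, index_builder, &keys[k]);
      pthread_join(writer, NULL);
      for (int k = 0; k < 2; k++)
        pthread_join(builders[k], NULL);
      return 0;
    }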
The problem is that on some Mac OS X versions a file read or
write call with zero bytes to read/write returns an error.
So here I try to eliminate such calls.
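A minimal sketch of the workaround, assuming wrapper functions of
this kind (safe_pread/safe_pwrite are hypothetical names, not an
existing MySQL API): skip the system call entirely when there is
nothing to read or write.

    #include <unistd.h>
    #include <sys/types.h>

    static ssize_t safe_pwrite(int fd, const void *buf, size_t count, off_t offset) {
      if (count == 0)
        return 0;                 /* nothing to do; avoid the problematic call */
      return pwrite(fd, buf, count, offset);
    }

    static ssize_t safe_pread(int fd, void *buf, size_t count, off_t offset) {
      if (count == 0)
        return 0;
      return pread(fd, buf, count, offset);
    }

    int main(void) {
      char buf[1];
      /* Both zero-byte calls are now no-ops instead of potential errors. */
      return (int)(safe_pwrite(1, "", 0, 0) | safe_pread(0, buf, 0, 0));
    }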
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
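An illustration of the faulty logic with hypothetical helpers (not
the actual MyISAM key comparison): searching the index for a
non-matching key "succeeds" whenever at least two distinct keys
exist, but finds nothing when only the matching key is present;
the fix is to search with an equality comparison.

    #include <stdio.h>
    #include <string.h>

    static int find_key(const char *index_keys[], int n_keys,
                        const char *wanted, int want_equal) {
      for (int i = 0; i < n_keys; i++) {
        int equal = (strcmp(index_keys[i], wanted) == 0);
        if (want_equal ? equal : !equal)
          return 1;                       /* "found" */
      }
      return 0;
    }

    int main(void) {
      const char *keys_one[] = { "A" };   /* only the matching key present */
      /* Old behaviour: looked for a non-matching key -> nothing found. */
      printf("old: %s\n", find_key(keys_one, 1, "A", 0) ? "ok" : "bogus corrupt");
      /* Fixed behaviour: look for the matching key -> found -> table is fine. */
      printf("new: %s\n", find_key(keys_one, 1, "A", 1) ? "ok" : "bogus corrupt");
      return 0;
    }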
CHECK TABLE temporarily cleared the auto_increment value.
It runs with a read lock, allowing other readers and
concurrent INSERTs. The latter could grab the wrong value
at that moment.
CHECK TABLE no longer modifies the auto_increment value,
not even for a short moment.
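A minimal sketch of the idea, with illustrative names (not the
actual MyISAM state handling): the recalculation works on a local
value and only ever raises the shared auto_increment, instead of
clearing it and restoring it later.

    #include <stdio.h>

    static unsigned long long shared_auto_increment = 42;   /* visible to INSERTs */

    /* Old approach (buggy): clear, recalculate, restore. */
    static void check_auto_increment_old(void) {
      shared_auto_increment = 0;              /* a concurrent INSERT may read 0 here */
      unsigned long long max_key = 41;        /* pretend: scan index for max value */
      if (max_key + 1 > shared_auto_increment)
        shared_auto_increment = max_key + 1;
    }

    /* Fixed approach: never touch the shared value until the final result. */
    static void check_auto_increment_new(void) {
      unsigned long long max_key = 41;        /* pretend: scan index for max value */
      unsigned long long calculated = max_key + 1;
      if (calculated > shared_auto_increment) /* only raise, never clear */
        shared_auto_increment = calculated;
    }

    int main(void) {
      check_auto_increment_new();
      printf("auto_increment stays at %llu\n", shared_auto_increment);
      (void)check_auto_increment_old;         /* kept only for comparison */
      return 0;
    }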
Whenever 'myisamchk' needed to recreate a table,
the auto_increment information was lost.
Now the previously forgotten element of the table creation
information is set correctly.
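A hypothetical sketch of the kind of fix described (the structures
and field names are illustrative): the auto_increment part of the
creation information is carried over from the old table when it is
recreated.

    struct create_info {
      unsigned long long auto_increment;
      int with_auto_increment;
    };

    struct old_table {
      unsigned long long auto_increment;
      int has_auto_increment_key;
    };

    static void fill_create_info(struct create_info *ci, const struct old_table *t) {
      ci->with_auto_increment = t->has_auto_increment_key;  /* previously forgotten */
      ci->auto_increment = t->auto_increment;
    }

    int main(void) {
      struct old_table t = { 1000, 1 };
      struct create_info ci = { 0, 0 };
      fill_create_info(&ci, &t);
      return ci.with_auto_increment ? 0 : 1;
    }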
Removed wrong fix for bug #14009 (use of abs() on null value causes problems with filesort)
Mark that add_time(), time_diff() and str_to_date() can return null values
Bug#12166 - Unable to index very large tables
Moved the clearing of errors to after the second repair attempt,
and errors are now cleared only if no error happened.
No test case. The error can only be reproduced with little free
space on tmpdir. I do not know of a way to create this situation
portably with our test suite.
However, I attached a shell script to the bug report which can
easily be adapted to the situation on the test machine.
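A minimal sketch of the control flow, with hypothetical helper
names (not the actual MyISAM repair functions): the errors recorded
during the first, failed attempt are cleared only after the second
attempt, and only if that attempt succeeded.

    #include <stdio.h>

    static int repair_by_sort(void)       { return -1; }  /* pretend: out of space */
    static int repair_with_keycache(void) { return 0;  }  /* fallback succeeds */
    static void clear_errors(void)        { puts("errors cleared"); }

    int main(void) {
      int error = repair_by_sort();
      if (error) {
        error = repair_with_keycache();   /* second repair attempt */
        if (!error)
          clear_errors();                 /* clear only if no error happened */
      }
      return error;
    }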
This patch allows MyISAM to be configured for 128 indexes per table.
The main problem is the key_map, which is implemented as a ulonglong.
To get rid of the limit and keep the efficient and flexible
implementation, the highest bit is now used for all upper keys.
This means that the lower keys can be disabled and enabled
individually as usual and the high keys can only be disabled and
enabled as a block. That way the existing test suite is still
applicable, while more keys work, though slightly less efficiently.
To really get more than 64 keys, some defines need to be changed.
Another patch will address this.
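A minimal sketch of the scheme (not the actual key_map code): a
64-bit map in which the low bits control the lower keys
individually, while the highest bit stands for all upper keys as
one block.

    #include <stdint.h>
    #include <stdio.h>

    #define KEY_MAP_HIGH_BIT (1ULL << 63)

    static uint64_t key_map_bit(unsigned keynr) {
      /* Keys 0..62 get their own bit; all higher keys share the highest bit. */
      return (keynr < 63) ? (1ULL << keynr) : KEY_MAP_HIGH_BIT;
    }

    static int key_is_enabled(uint64_t map, unsigned keynr) {
      return (map & key_map_bit(keynr)) != 0;
    }

    int main(void) {
      uint64_t map = ~0ULL;                 /* all keys enabled */
      map &= ~key_map_bit(5);               /* a low key can be disabled alone */
      map &= ~key_map_bit(100);             /* disabling one high key disables the block */
      printf("key 5: %d, key 70: %d, key 100: %d\n",
             key_is_enabled(map, 5),
             key_is_enabled(map, 70),       /* 0: high keys act as one block */
             key_is_enabled(map, 100));
      return 0;
    }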
After review fix
Copy from the internal state to the shared state only when in
lock write mode (this happens only when LOCK TABLE x WRITE has
been performed, since update_state_info is normally called only
while holding a TL_READ_NO_INSERT lock). The previous patch would
have failed in combination with delayed writes.
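A hypothetical sketch of the guard (illustrative names): the
internal state is copied to the shared state only when the table is
write locked, so a caller holding only TL_READ_NO_INSERT cannot
publish state that conflicts with delayed writes.

    enum lock_mode { LOCK_READ_NO_INSERT, LOCK_WRITE };

    struct state { long records; long data_file_length; };

    static void update_state_info(struct state *shared, const struct state *internal,
                                  enum lock_mode mode) {
      if (mode == LOCK_WRITE)       /* only when LOCK TABLE ... WRITE is in effect */
        *shared = *internal;        /* safe: no concurrent (delayed) writers */
    }

    int main(void) {
      struct state internal = { 10, 1000 }, shared = { 0, 0 };
      update_state_info(&shared, &internal, LOCK_WRITE);
      return (int)(shared.records - internal.records);   /* 0 when copied */
    }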
ANALYZE TABLE corrupts the state of data_file_length, records,
index_file_length and so on by writing the shared state while the
internal state has been updated by inserts or deletes.
Fixed by syncing the shared state with the internal state before
writing it to disk.
Added test cases for two error cases and one normal case to the
new analyze test case.
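A minimal sketch of the fix, assuming this overall shape (not the
actual MyISAM code): before the shared state is written to disk, it
is first synced with the newer internal state, so records,
data_file_length and index_file_length are not overwritten with
stale values.

    #include <string.h>

    struct mi_state {
      unsigned long long records;
      unsigned long long data_file_length;
      unsigned long long index_file_length;
    };

    /* Hypothetical helpers. */
    static void sync_shared_with_internal(struct mi_state *shared,
                                          const struct mi_state *internal) {
      memcpy(shared, internal, sizeof(*shared));   /* take the up-to-date values */
    }

    static void write_state_to_disk(const struct mi_state *shared) {
      (void)shared;                 /* pretend: write the state header to the .MYI file */
    }

    static void analyze_finish(struct mi_state *shared, const struct mi_state *internal) {
      sync_shared_with_internal(shared, internal); /* the fix: sync first */
      write_state_to_disk(shared);                 /* then persist */
    }

    int main(void) {
      struct mi_state internal = { 100, 4096, 2048 }, shared = { 90, 4000, 2000 };
      analyze_finish(&shared, &internal);
      return 0;
    }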