OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair uses one thread per index. The first of
these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions and put the positions
from the old data file into the index.
The new design uses a shared io cache which is filled by the first
thread (the data file writer) with the new contiguous records and
read by the other threads. Now they know the new record positions.
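The gist of the new design can be sketched as a producer/consumer
cache (illustrative C with pthreads; all names are invented and this
is not the actual MyISAM io-cache code):

    #include <pthread.h>
    #include <string.h>

    typedef struct {
      pthread_mutex_t mutex;
      pthread_cond_t  more_data;
      char   *buffer;          /* image of the new, hole-free data file */
      size_t  end;             /* bytes written so far by the writer    */
      int     write_finished;  /* writer done; readers must not block   */
    } shared_cache;

    /* Writer (first repair thread): append a record, wake readers. */
    void cache_write(shared_cache *c, const char *rec, size_t len)
    {
      pthread_mutex_lock(&c->mutex);
      memcpy(c->buffer + c->end, rec, len);  /* buffer preallocated */
      c->end += len;
      pthread_cond_broadcast(&c->more_data);
      pthread_mutex_unlock(&c->mutex);
    }

    /* Reader (per-index thread): block until the wanted range of the
       new data file exists, then copy it out. Returns 0 on success. */
    int cache_read(shared_cache *c, char *dst, size_t pos, size_t len)
    {
      pthread_mutex_lock(&c->mutex);
      while (c->end < pos + len && !c->write_finished)
        pthread_cond_wait(&c->more_data, &c->mutex);
      if (c->end < pos + len)               /* writer ended early */
      {
        pthread_mutex_unlock(&c->mutex);
        return -1;
      }
      memcpy(dst, c->buffer + pos, len);
      pthread_mutex_unlock(&c->mutex);
      return 0;
    }

Because the readers consume the writer's output, the record
positions they see are the new ones.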
Another problem was that for the parallel repair of compressed
tables a common bit_buff and rec_buff were used. I changed this so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer to work.
Sometimes the optimizer doesn't set those properly.
Here we decided just to add code to check that the parameters
are correct. We hope to fix the optimizer someday.
Deletes on a big index could crash the index when it needs to
shrink.
Put a forgotten negation operator in.
No test case: the test is too big for the test suite, and it does
not work with 4.0, only with higher versions. It is attached to the
bug report.
This problem affects the myisam_ftdump tool only.
In a fulltext index, positive subkeys mean word weights and negative
subkeys mean the number of documents in a level 2 fulltext index.
Fixed the document counter not being properly updated for keys
having a level 2 fulltext index.
No test case for this bug.
- When a MyISAM table which belongs to a merge table union is
renamed, the associated file descriptors are closed on Windows.
This causes a server crash the next time an insert or update is
performed on the merge table.
- This patch prevents the server from crashing on Windows by
checking for bad file descriptors every time the MyISAM table
is locked by the associated merge table.
- When an ALTER TABLE RENAME is performed on Windows, the files are
closed and their cached file descriptors are marked invalid.
Performing INSERT, UPDATE or SELECT on the associated merge table
then causes a server crash on Windows. This patch adds a test for
bad file descriptors when a table attempts a lock. If a bad
descriptor is found, an error is thrown. An additional FLUSH TABLES
is necessary before further operations on the associated merge
table.
The problem is that on some Mac OS X versions a file write/read
call with zero bytes to read/write returns an error.
So here I try to eliminate that kind of call.
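A minimal sketch of the guard (hypothetical wrapper, not the actual
function name used in the code):

    #include <sys/types.h>
    #include <unistd.h>

    /* Skip the syscall entirely for zero-length requests, since some
       Mac OS X versions return an error for them instead of 0. */
    ssize_t safe_write(int fd, const void *buf, size_t count)
    {
      if (count == 0)
        return 0;              /* nothing to do; avoid the buggy call */
      return write(fd, buf, count);
    }

The read side gets the same treatment.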
table results in table corruption
A fulltext key always has two keysegs, thus we need to update the
FT_SEGS (last) element of the seg array in the case of a compressed
table. We must also update ft2_keyinfo.
The problem described in this bug report affects MyISAM tables only.
Running mysqld --flush instructs mysqld to sync all changes to disk
after each SQL statement. It worked well for INSERT and DELETE
statements, but for UPDATE it synced only if there was an index
change (a change to a column that has an index). If no updated
column had an index, data wasn't synced to disk.
This fix makes UPDATE statements sync data to disk even if there is
no index change (that is, only a data change) when mysqld is run
with the --flush option.
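The decision can be sketched like this (illustrative C; the function
and parameter names are invented):

    #include <unistd.h>

    /* End-of-statement hook: with --flush, sync the data file after
       any change, not only when an indexed column changed. */
    int sync_after_update(int data_fd, int data_changed,
                          int index_changed, int flush_enabled)
    {
      if (!flush_enabled)
        return 0;
      if (data_changed || index_changed) /* before: index_changed only */
        return fsync(data_fd);
      return 0;
    }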
Fixed a possible problem with reading of dynamic records
when a write cache is active. The cache must be flushed
whenever a part of the file in the write cache is to be
read.
Added a read optimization to _mi_read_dynamic_record().
No test case. This was a hypothetical but existing problem.
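The flushing rule can be illustrated like this (invented names; not
the real io-cache interface):

    typedef struct {
      unsigned long long start; /* file offset of first cached byte      */
      unsigned long long end;   /* offset just past the last cached byte */
    } write_cache;

    /* Stub for illustration; the real code writes the buffer to disk. */
    static int flush_write_cache(write_cache *wc)
    {
      wc->start = wc->end = 0;
      return 0;
    }

    /* Flush before a read if [pos, pos+len) overlaps the cached range. */
    static int prepare_read(write_cache *wc, unsigned long long pos,
                            unsigned long long len)
    {
      if (pos < wc->end && pos + len > wc->start)
        return flush_write_cache(wc);
      return 0;
    }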
"temporary table with data directory option fails"
myisam should not use user-specified table name when creating
temporary tables and use generated connection specific real name.
Test included.
It was possible that fetching a record by an exact key value
(including the record pointer) could return a record with a
different key value. This happened only if a concurrent insert
added a record with the searched key value after the fetching
statement locked the table for read.
The search succeeded on the key value, but the record was
rejected as it was past the file length that was remembered
at the start of the fetching statement. In other words, it was
rejected as being a concurrently inserted record.
The action to recover from this problem was to fetch the
record that is pointed at by the next key of the index.
This was repeated until a record below the file length was
found.
I now avoid this loop if an exact match was searched for.
If this match is beyond the file length, it is now treated
as "key not found". There cannot be another key with the
same record pointer.
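The decision can be sketched like this (illustrative names):

    enum search_result { REC_OK, REC_NOT_FOUND, REC_TRY_NEXT_KEY };

    /* record_pos: position the found key points at.
       saved_file_length: data file length remembered at statement start.
       exact_search: key value including the record pointer was given. */
    enum search_result check_found_record(unsigned long long record_pos,
                                          unsigned long long saved_file_length,
                                          int exact_search)
    {
      if (record_pos < saved_file_length)
        return REC_OK;          /* record is visible to this statement */
      if (exact_search)
        return REC_NOT_FOUND;   /* fix: no other key has this pointer  */
      return REC_TRY_NEXT_KEY;  /* old recovery: step to the next key  */
    }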
A corrupt table with dynamic record format can crash the
server when trying to select from it.
I fixed the crash that resulted from the particular type
of corruption that has been reported for this bug.
No test case. To test it, one needs a table with a very special
corruption. The bug report contains a file with such a table.
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
Very complex select statements can create temporary tables
that are too big to be represented as a MyISAM table.
This was not checked at table creation time, but only at
open time. The result was an attempt to delete the
"impossible" table.
But if the server is built --with-raid, MyISAM tries to
open the table before deleting the files. It needs to find
out if the table uses the raid support and how many raid
chunks there are. This is done with an open "for repair",
which will almost always succeed.
But in this case we have an "impossible" table. The open
failed. Hence the files were not deleted. Also the error
message was a bit unspecific.
I turned an open error in this situation into the assumption
of having no raid support on the table. Thus an attempt is made
to delete the normal data file. This may however leave existing
raid chunks behind.
I also added a check in mi_create() to prevent the creation
of an "impossible" table. A more descriptive error message is
given in this case.
No test case. The required select statement is way too
large for the test suite. I added a test script to the
bug report.
CHECK TABLE temporarily cleared the auto_increment value.
It runs with a read lock, allowing other readers and
concurrent INSERTs. The latter could grab the wrong value
in this moment.
CHECK TABLE no longer modifies the auto_increment value,
not even for a short moment.
Write operations on tables created in 4.x with an index on a
variable-length column crash the index. Even REPAIR TABLE wasn't
able to fix the broken index.
The problem was that the packed key length size wasn't restored
correctly. In 5.0 the packed key length size is either 1 or 2. In
4.x this length is always 2, but it is saved as 0.
This fix ensures that key length size is restored correctly for 4.x
tables.
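The restore logic amounts to this (sketch, invented names):

    /* A 4.x table stores the packed key length size as 0 although it
       is really 2; restore it explicitly when opening an old table. */
    unsigned restore_pack_length_size(unsigned stored_size,
                                      unsigned major_version)
    {
      if (major_version < 5 && stored_size == 0)
        return 2;          /* 4.x always used 2 bytes, saved as 0 */
      return stored_size;  /* 5.0 stores the real value (1 or 2)  */
    }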
Updating data in a HEAP table with a BTREE index results in a wrong
index_length counter value, which keeps growing after each update.
When inserting a new record into the tree, the counter is
incremented by:
sizeof(TREE_ELEMENT) + key_size + tree->size_of_element
But when deleting an element from the tree, the counter is
decremented only by:
sizeof(TREE_ELEMENT) + tree->size_of_element
This fix makes the allocated-memory counter for the tree accurate,
that is, it also decreases the counter by key_size when deleting a
tree element.
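In sketch form (illustrative function names), the two adjustments
must stay symmetric:

    /* allocated: running total of memory attributed to the tree. */
    void tree_count_insert(unsigned long *allocated, unsigned key_size,
                           unsigned elem_overhead, unsigned size_of_element)
    {
      *allocated += elem_overhead + key_size + size_of_element;
    }

    void tree_count_delete(unsigned long *allocated, unsigned key_size,
                           unsigned elem_overhead, unsigned size_of_element)
    {
      /* fix: also subtract key_size, which the old code left out */
      *allocated -= elem_overhead + key_size + size_of_element;
    }

Here elem_overhead stands for sizeof(TREE_ELEMENT).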
Retrieving data from a compressed MyISAM table which is bigger than
4G on a 32-bit box with mmap() support results in a server crash.
mmap() accepts the length of bytes to be mapped in its second
parameter, which is a 32-bit size_t. But we pass data_file_length,
which is a 64-bit my_off_t. As a result, only the first
data_file_length % 4G bytes were mapped.
This fix adds an additional condition for mmap() usage: on 32-bit
platforms, mmap() is used only for compressed tables no bigger
than 4G.
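The added condition amounts to this (sketch, invented name):

    #include <stddef.h>

    /* On a 32-bit platform size_t cannot express lengths >= 4G, so
       fall back to regular file I/O for larger compressed tables. */
    int can_use_mmap(unsigned long long data_file_length)
    {
      /* (size_t)-1 is the largest length mmap() can take here */
      return data_file_length <= (size_t)-1;
    }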
Whenever 'myisamchk' needed to recreate a table,
the auto increment information was lost.
Now the forgotten element of the table creation
information is set correctly.
For "count(*) where index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs
does not allocate a new buffer, but uses 'lastkey2'...
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
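In sketch form (the flag name is from the description above; the
constant value and struct are illustrative):

    #define HA_STATE_RNEXT_SAME 0x1000   /* illustrative bit value */

    struct scan_state { unsigned update; };

    /* Called whenever blob retrieval reuses 'lastkey2': drop the
       "key already saved" flag so the next scan step copies the key
       value into the buffer again. */
    void mark_key_buffer_polluted(struct scan_state *info)
    {
      info->update &= ~HA_STATE_RNEXT_SAME;
    }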
Remove wrong fix for Bug#14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
Safety fix for bug #13855 "select distinct with group by caused server crash"
For the uca collation, isalnum and strnncollsp don't agree on
whether 0xC2A0 is a space (strnncollsp is right, isalnum is wrong).
They still don't; the bug was fixed by avoiding strnncollsp.
Removed wrong fix for bug #14009 (use of abs() on null value causes problems with filesort)
Mark that add_time(), time_diff() and str_to_date() can return null values
Bug#12166 - Unable to index very large tables
Moved clearing of errors behind the second repair attempt,
but clear only if no error happened.
No test case. The error can be repeated only with little free
space on tmpdir. I do not know of a way to create this
situation in a portable way with our test suite.
I did however attach a shell script to the bug report which
can easily be adapted to the situation on the test machine.
1. It's wrong to use memcpy() for overlapped areas (see the sketch
below);
2. we use it only once.
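A minimal illustration of why memmove() is the right call for
overlapping areas:

    #include <string.h>

    /* memcpy() has undefined behaviour when source and destination
       overlap; memmove() handles overlap correctly. */
    void shift_left(char *buf, size_t len, size_t by)
    {
      /* source buf+by overlaps destination buf: memmove is required */
      memmove(buf, buf + by, len - by);
    }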
During the merge to 4.1, a memcpy_overlap() call will be removed
from strings/ctype-tis620.c as well, in order to fix
bug #10836: ctype_tis620 test failure with ICC-compiled binaries on IA64.
The problem was an abuse of last_rkey_length.
Formerly we saved the packed key length (of the search key)
in this element. But in certain cases it got replaced by
the (packed) result key length.
Now we use a new element of MI_INFO to save the packed key
length of the search key.
This patch allows to configure MyISAM for 128 indexes per table.
The main problem is the key_map, which is implemented as a ulonglong.
To get rid of the limit and keep the efficient and flexible
implementation, the highest bit is now used for all upper keys.
This means that the lower keys can be disabled and enabled
individually as usual and the high keys can only be disabled and
enabled as a block. That way the existing test suite is still
applicable, while more keys work, though slightly less efficiently.
To really get more than 64 keys, some defines need to be changed.
Another patch will address this.
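The scheme can be sketched like this (illustrative code, not the
real key_map implementation):

    /* Bits 0..62 enable single indexes; bit 63 stands for all
       indexes from number 63 upward as one block. */
    typedef unsigned long long key_map;
    #define HIGH_KEYS_BIT (1ULL << 63)

    int key_is_active(key_map map, unsigned keynr)
    {
      if (keynr < 63)
        return (int)((map >> keynr) & 1ULL); /* individually switchable */
      return (map & HIGH_KEYS_BIT) != 0;     /* upper keys share one bit */
    }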
Without this patch, all file elements in info have symlink-resolved
pathnames. This means append_create_info does not have any way
of showing the correct database name when a symlinked database is used.
produces invalid query results
mi_key.c:
well_formed_length should be executed before space trimming, not after.
ctype_utf8.test:
ctype_utf8.result:
adding test.
UPPER/LOWER can now return a string with a different length.
mi_test1.c:
Adding new arguments.
Many files:
Changing caseup/casedn to return a result with a different
length than the argument.
sql_string.h:
Removing unused method.
mysql_priv.h:
Removing unused method.
After-review fix.
Copy from the internal state to the share state only when in lock
write mode (this happens only when LOCK TABLE x WRITE has been
performed, since update_state_info is normally only called when
holding a TL_READ_NO_INSERT lock). The previous patch would have
failed in combination with delayed writes.
ANALYZE TABLE corrupts the state of data_file_length, records,
index_file_length, etc. by writing the shared state when there is
an updated internal state due to inserts or deletes.
Fixed by syncing the shared state with the internal state before
writing it to disk.
Added test cases for two error cases and a normal case in the new
analyze test case.
myisam_max_extra_sort_file_size is deprecated
Ensure that myisam_data_pointer_size is honoured when creating new MyISAM files
Changed default value of myisam_data_pointer_size from 4 to 6 to get rid of 'table-is-full' errors
This is the second of three changesets. It contains the pure bug
fix as well as the second round of after-review fixes.
The problem was that with gcc on x86, shifts are done modulo word size.
'value' is 32 bits wide and shifting it by 32 bits is a no-op.
This was triggered by an evil distribution of character incidences.
A distribution of 2917027827 characters made of 202 distinct values led to
34 occurrences of 32-bit Huffman codes.
This might have been the first time ever that write_bits() had to
write 32-bit values. Since it can be expected that one day even 32
bits might be insufficient, the third changeset suggests enlarging
some variables to 64 bits.
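The pitfall and the fix in sketch form:

    #include <stdint.h>

    /* Shifting a 32-bit value by 32 is undefined in C; gcc on x86
       takes the count modulo the word size, so "value >> 32" is a
       no-op. Widening to 64 bits first makes the shift well defined
       for any count up to 32. */
    static uint32_t shift_right(uint32_t value, unsigned bits)
    {
      return (uint32_t)((uint64_t)value >> bits);  /* bits <= 32 */
    }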
Since 4.1 keys are compared with trailing spaces.
Thus, a "x " key can be inserted between a couple of "x" keys.
The existing code did not take this into account, though the
comments in the code claimed it did.
Fixed Windows to call CreateFileMapping() with correct arguments,
and propagated the introduction of query_id_t to everywhere query
ids are passed around. (Bug #8826)