RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer in order to work.
Sometimes the optimizer doesn't set them properly.
Here we decided simply to add code that checks that the parameters
are correct. We hope to fix the optimizer at some point.
The problem is that on some Mac OS X versions a file write/read
call with zero bytes to read/write returns an error.
So here I try to eliminate those kinds of calls.
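A minimal sketch of the kind of guard this implies, with hypothetical
wrapper names (these are not the actual MyISAM I/O functions): skip the
system call entirely when the requested length is zero.

  #include <unistd.h>
  #include <sys/types.h>

  /* Hypothetical wrappers: some Mac OS X versions return an error for a
     zero-length read/write, so never issue the call in that case. */
  static ssize_t safe_pwrite(int fd, const void *buf, size_t count, off_t offset)
  {
    if (count == 0)
      return 0;                     /* nothing to write, report success */
    return pwrite(fd, buf, count, offset);
  }

  static ssize_t safe_pread(int fd, void *buf, size_t count, off_t offset)
  {
    if (count == 0)
      return 0;                     /* nothing to read, report success */
    return pread(fd, buf, count, offset);
  }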
table results in table corrupt
A fulltext key always has two keysegs, thus we need to update the
FT_SEGS (last) element of the seg array in the case of a compressed table.
We must also update ft2_keyinfo.
The problem described in this bug report affects MyISAM tables only.
Running mysqld --flush instructs mysqld to sync all changes to disk
after each SQL statement. This worked well for INSERT and DELETE
statements, but for UPDATE it synced only if there was an index
change (a change to a column that has an index). If no updated
column had an index, the data wasn't synced to disk.
This fix makes the UPDATE statement sync data to disk even if there
is no index change (that is, only a data change) and mysqld is run
with the --flush option.
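A minimal sketch of the intended behaviour, under assumed names
(finish_update, opt_flush, data_fd and key_fd are placeholders, not the
actual MyISAM symbols): with --flush, the data file is synced after an
UPDATE regardless of whether an indexed column changed.

  #include <unistd.h>

  /* Hypothetical illustration of the fix: previously the sync happened only
     on the key_changed path; now a pure data change is synced too whenever
     the server runs with --flush. */
  static int finish_update(int data_fd, int key_fd,
                           int key_changed, int opt_flush)
  {
    if (key_changed && fsync(key_fd))   /* index file changed: sync it */
      return -1;
    if (opt_flush && fsync(data_fd))    /* --flush: always sync the data file */
      return -1;
    return 0;
  }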
"temporary table with data directory option fails"
myisam should not use user-specified table name when creating
temporary tables and use generated connection specific real name.
Test included.
It was possible that fetching a record by an exact key value
(including the record pointer) could return a record with a
different key value. This happened only if a concurrent insert
added a record with the searched key value after the fetching
statement locked the table for read.
The search succeeded on the key value, but the record was
rejected because it was past the file length that was remembered
at the start of the fetching statement. In other words, it was
rejected as being a concurrently inserted record.
The action to recover from this problem was to fetch the
record that is pointed at by the next key of the index.
This was repeated until a record below the file length was
found.
I now avoid this loop if an exact match was searched.
If this match is beyond the file length, it is now treated
as "key not found". There cannot be another key with the
same record pointer.
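A minimal sketch of the changed decision, with made-up names (check_hit
and its parameters stand in for the real search code): an exact-match hit
beyond the remembered file length is reported as "key not found" instead
of walking on to the next key.

  #include <stdint.h>

  enum fetch_result { FOUND, NOT_FOUND, TRY_NEXT_KEY };

  /* Hypothetical illustration: for an exact search, a hit beyond the file
     length saved at statement start can only be a concurrently inserted
     record, so there is nothing else worth trying. */
  static enum fetch_result check_hit(uint64_t record_pos,
                                     uint64_t saved_data_file_length,
                                     int exact_search)
  {
    if (record_pos < saved_data_file_length)
      return FOUND;                     /* visible record */
    if (exact_search)
      return NOT_FOUND;                 /* was: loop to the next key */
    return TRY_NEXT_KEY;                /* non-exact scan: keep looking */
  }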
A corrupt table with dynamic record format can crash the
server when trying to select from it.
I fixed the crash that resulted from the particular type
of corruption that has been reported for this bug.
No test case. To test it, one needs a table with a very special
corruption. The bug report contains a file with such a table.
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
Very complex select statements can create temporary tables
that are too big to be represented as a MyISAM table.
This was not checked at table creation time, but only at
open time. The result was an attempt to delete the
"impossible" table.
But if the server is built --with-raid, MyISAM tries to
open the table before deleting the files. It needs to find
out whether the table uses raid support and how many raid
chunks there are. This is done with an open "for repair",
which will almost always succeed.
But in this case we have an "impossible" table. The open
failed. Hence the files were not deleted. Also the error
message was a bit unspecific.
I turned an open error in this situation into the assumption
that the table has no raid support. Thus an attempt is made to
delete the normal data file. This may, however, leave existing
raid chunks behind.
I also added a check in mi_create() to prevent the creation
of an "impossible" table. A more descriptive error message is
given in this case.
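A minimal sketch of the kind of creation-time check this implies (the
function name, parameters and error handling are assumptions, not the
actual mi_create() code): refuse to create a table whose estimated data
file length cannot be addressed by its record pointer size.

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical illustration: an n-byte record pointer can address at most
     2^(8*n) - 1 bytes of data file; reject larger estimates at creation time
     instead of failing later at open time. */
  static int check_table_size(uint64_t estimated_data_file_length,
                              unsigned rec_pointer_size)
  {
    uint64_t max_addressable =
        (rec_pointer_size >= 8) ? UINT64_MAX
                                : (((uint64_t) 1 << (8 * rec_pointer_size)) - 1);
    if (estimated_data_file_length > max_addressable)
    {
      fprintf(stderr, "table is too big: %llu bytes exceed the %u-byte "
              "record pointer limit\n",
              (unsigned long long) estimated_data_file_length,
              rec_pointer_size);
      return -1;                        /* caller aborts table creation */
    }
    return 0;
  }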
No test case. The required select statement is way too
large for the test suite. I added a test script to the
bug report.
Retrieving data from a compressed MyISAM table bigger than 4G on a 32-bit
box with mmap() support results in a server crash.
mmap() accepts the length of bytes to be mapped in its second parameter,
which is a 32-bit size_t. But we pass data_file_length, which is a 64-bit
my_off_t. As a result only the first data_file_length % 4G bytes were
mapped.
This fix adds an additional condition for mmap() usage: use mmap() for a
compressed table only if its size is no more than 4G on a 32-bit platform.
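A minimal sketch of the guard, with hypothetical names (map_compressed_data
and its parameters are not the real MyISAM code): only map the file when
its 64-bit length fits into size_t, i.e. skip mmap() for files over 4G on
a 32-bit platform.

  #include <stdint.h>
  #include <stddef.h>
  #include <sys/mman.h>

  /* Hypothetical illustration of the fix: passing a 64-bit length to mmap()
     truncates it to the 32-bit size_t on 32-bit platforms, so fall back to
     plain file I/O when the length does not fit. */
  static void *map_compressed_data(int fd, uint64_t data_file_length)
  {
    if (data_file_length > (uint64_t) SIZE_MAX)
      return NULL;                      /* too big to map: caller uses pread() */
    return mmap(NULL, (size_t) data_file_length, PROT_READ, MAP_SHARED, fd, 0);
  }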
Whenever 'myisamchk' needed to recreate a table,
the auto increment information was lost.
Now the forgotten element of the table creation
information is set correctly.
For "count(*) while index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs
does not allocate a new buffer, but uses 'lastkey2'...
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
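A minimal sketch of the idea, with made-up names (the struct, flag bit and
functions stand in for info->lastkey2 and HA_STATE_RNEXT_SAME): whenever
the shared key buffer is reused for something else, drop the "key already
saved" flag so the next scan step copies the key value again.

  #include <string.h>

  #define STATE_KEY_SAVED 0x01          /* stands in for HA_STATE_RNEXT_SAME */

  struct scan_state {
    unsigned flags;
    char saved_key[64];                 /* stands in for info->lastkey2 */
  };

  /* Blob retrieval reuses the key buffer; clearing the flag forces the
     index scan to re-copy the key on its next iteration. */
  static void reuse_key_buffer(struct scan_state *s, const void *data, size_t len)
  {
    if (len > sizeof(s->saved_key))
      len = sizeof(s->saved_key);
    memcpy(s->saved_key, data, len);
    s->flags &= ~STATE_KEY_SAVED;       /* buffer polluted */
  }

  static void scan_step(struct scan_state *s, const char *key, size_t len)
  {
    if (!(s->flags & STATE_KEY_SAVED))
    {
      if (len > sizeof(s->saved_key))
        len = sizeof(s->saved_key);
      memcpy(s->saved_key, key, len);
      s->flags |= STATE_KEY_SAVED;      /* copied once, until polluted again */
    }
  }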
Remove wrong fix for Bug#14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
Safety fix for bug #13855 "select distinct with group by caused server crash"
For UCA collations, isalnum and strnncollsp don't agree on whether
0xC2A0 is a space (strnncollsp is right, isalnum is wrong).
They still don't; the bug was fixed by avoiding strnncollsp.
Removed wrong fix for bug #14009 (use of abs() on null value causes problems with filesort)
Mark that add_time(), time_diff() and str_to_date() can return null values