Storage engine returns errno 12 (out of memory).
If there is not enough memory to store or update a blob record
(while allocating the record buffer), MyISAM marks the table as crashed.
With this fix MyISAM attempts to roll the index back and returns
an error instead of marking the table as crashed.
Affects MyISAM tables with blobs only. No test case for this fix.
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added a missing argument to store(longlong), which caused the wrong store method to be called.
ALTER TABLE DISABLE KEYS doesn't work when modifying the table
ENABLE|DISABLE KEYS combined with another ALTER TABLE option other
than RENAME TO did nothing. Also, if the table had disabled keys
and was ALTER-ed, the resulting table had its keys enabled.
Fixed by checking whether the table had disabled keys and keeping
them disabled in the copied table.
A corrupted compressed table could crash the server and
myisamchk.
The data file of an uncompressed table contains just the records.
There is no header in the data file.
However the data file of a compressed table has a header.
The header describes how the table was compressed. This
information is necessary to extract the records from the
compressed data file.
The [de]code tables are part of the compressed data file header.
They are numeric representations of the Huffman trees used for
coding and decoding. A Huffman tree is a binary tree. Every
node has two children. A child can be a leaf or a branch. Leaves
contain the decoded value. Branches point to another tree node.
Since the [de]code table is represented as an array of children,
the branches need to point at a child within the same array.
The corruption of the compressed data file from the bug report
was a couple of branches that pointed outside their array.
This condition had not been correctly checked.
I added some checks for the pointers in the decode tables.
This type of corruption will no longer crash the server or
myisamchk.
No test case. A corrupted compressed table is required.
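A minimal sketch of the kind of validation added (macro layout and names are hypothetical stand-ins, not myisampack's actual on-disk format):

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical decode-table entry: the high bit marks a leaf that
     holds a decoded value; otherwise the low bits are an offset to
     another node within the same array. */
  #define IS_LEAF(e)     ((e) & 0x8000u)
  #define BRANCH_OFF(e)  ((size_t)((e) & 0x7FFFu))

  /* Return 0 if every branch points inside the table, -1 on corruption. */
  static int check_decode_table(const uint16_t *table, size_t nelems)
  {
    for (size_t i = 0; i < nelems; i++)
    {
      if (IS_LEAF(table[i]))
        continue;
      /* A branch outside the array makes the decoder follow arbitrary
         memory: exactly the kind of corruption that crashed the server. */
      if (BRANCH_OFF(table[i]) >= nelems)
        return -1;
    }
    return 0;
  }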
Fixed compiler warnings (mostly in DBUG_PRINT() and unused arguments)
Fixed a bug in the query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
When compiling with a default key block size greater than the
smallest key block size used in a table, checking that table
failed with bogus errors. The table was marked corrupt. This
affected myisamchk and the server.
The problem was that the default key block size was used in
some places where a size less than or equal to the block size
of the index being checked was required.
We now use the key block size of the particular index
when checking.
A test case is available for later versions only.
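The idea, as a sketch (structure and field names are assumptions; the real code uses MI_KEYDEF from myisamdef.h):

  /* Compare key page lengths against the block size of the index
     being checked, not against the compiled-in default. */
  struct keydef_sketch { unsigned block_length; };

  static int page_length_ok(const struct keydef_sketch *keyinfo,
                            unsigned page_length)
  {
    /* was (buggy): page_length <= DEFAULT_KEY_BLOCK_SIZE */
    return page_length <= keyinfo->block_length;
  }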
Really damaged MyISAM tables could crash the server.
When unpacking a blob column from a broken row, a server crash
could happen. This was most likely when trying to repair a table
using either REPAIR TABLE or myisamchk, though it could also
happen when accessing a broken row with other SQL statements
like SELECT, if the table was not marked as crashed.
Fixed a ulong overflow when trying to extract a blob from a
broken row.
Affects MyISAM only.
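A sketch of the kind of guard involved (not the actual mi_dynrec.c code): before trusting a blob length read from a possibly broken row, validate it in a type wide enough that the comparison itself cannot wrap:

  #include <stdint.h>

  /* Assumes pos <= end; end - pos is what remains of the record
     buffer. A corrupt row can claim a length far beyond that, or one
     that wraps a 32-bit ulong during arithmetic. */
  static int blob_length_ok(uint64_t blob_len, const unsigned char *pos,
                            const unsigned char *end)
  {
    return blob_len <= (uint64_t)(end - pos);
  }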
Repair table could crash the server if there was not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but all statements that create an index by sorting:
repair by sort, parallel repair, bulk insert.
Now an error is returned if there is not sufficient memory to
store at least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
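A sketch of the added guard (names hypothetical): with one merge chunk (BUFFPEK) per batch to read back, the sort buffer must hold at least one key per chunk, or the merge phase would have nothing valid to read into:

  #include <stddef.h>

  static int sort_buffer_big_enough(size_t sort_buffer_size,
                                    size_t buffpeks, size_t key_length)
  {
    if (buffpeks == 0 || key_length == 0)
      return 1;                      /* nothing to merge */
    return sort_buffer_size / buffpeks >= key_length;
  }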
Fix for bug#23074: typo in myisam/sort.c
Despite the typo, this has always worked, because whenever the flag
is != 0, HA_VAR_LENGTH_KEY is always set as well. Therefore, a test
case cannot reveal the faulty behavior.
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair works with one thread per index.
The first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions and put the positions
from the old data file into the index.
The new design is that a shared io cache is filled by the first
thread (the data file writer) with the new contiguous records and
read by the other threads. Now they know the new record
positions.
Another problem was that the parallel repair of compressed
tables used a common bit_buff and rec_buff. I changed it so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
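A conceptual sketch of the buffer change (types hypothetical): each repair thread gets its own record and bit buffers instead of sharing one pair, so concurrent decompression cannot trample another thread's state:

  #include <stdlib.h>

  struct repair_thread_sketch
  {
    unsigned char *rec_buff;   /* was: one buffer shared by all threads */
    unsigned char *bit_buff;
  };

  static int init_repair_thread(struct repair_thread_sketch *t,
                                size_t buf_size)
  {
    t->rec_buff = malloc(buf_size);
    t->bit_buff = malloc(buf_size);
    if (!t->rec_buff || !t->bit_buff)
    {
      free(t->rec_buff);
      free(t->bit_buff);
      return -1;                /* out of memory */
    }
    return 0;
  }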
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer to work.
Sometimes the optimizer doesn't set those properly.
Here we decided just to add code to check that the parameters
are correct. We hope to fix the optimizer sometime.
Deletes on a big index could crash the index when it needs to
shrink.
Added a forgotten negation operator.
No test case. It is too big for the test suite. And it does not
work with 4.0, only with higher versions. It is attached to the
bug report.
This problem affects the myisam_ftdump tool only.
For a fulltext index, positive subkeys mean word weight; negative
subkeys mean the number of documents in a level-2 fulltext index.
Fixed the document counter not being properly updated for keys
having a level-2 fulltext index.
No test case for this bug.
- When a MyISAM table which belongs to a merge table union is
renamed, the associated file descriptors are closed on Windows.
This causes a server crash the next time an insert or update is
performed on the merge table.
- This patch prevents the system from crashing on Windows by
checking for bad file descriptors every time the MyISAM table
is locked by the associated merge table.
- When an ALTER TABLE RENAME is performed on Windows, the files are closed and their cached file
descriptors are marked invalid. Performing INSERT, UPDATE or SELECT on the associated merge
table causes a server crash on Windows. This patch adds a test for bad file descriptors when a
table attempts a lock. If a bad descriptor is found, an error is thrown. An additional FLUSH TABLES
is necessary before further operating on the associated merge table.
The problem is that on some Mac OS X versions, file write/read
calls with zero bytes to read/write return an error.
So here I try to eliminate those kinds of calls.
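The workaround, sketched (wrapper name is mine, not the my_* API): treat a zero-length request as trivially successful so the buggy syscall is never reached:

  #include <sys/types.h>
  #include <unistd.h>

  static ssize_t safe_write(int fd, const void *buf, size_t count)
  {
    if (count == 0)
      return 0;                 /* nothing to do; skip the syscall */
    return write(fd, buf, count);
  }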
Compressing a table with a fulltext index resulted in a corrupt
table.
A fulltext key always has two keysegs, thus we need to update the
FT_SEGS (last) element of the seg array in the case of a compressed
table. We must also update ft2_keyinfo.
The problem described in this bug report affects MyISAM tables only.
Running mysqld --flush instructs mysqld to sync all changes to disk
after each SQL statement. It worked well for INSERT and DELETE
statements, but for UPDATE it synced only if there was an index
change (a change to a column that has an index). If no updated
column had an index, data wasn't synced to disk.
This fix makes the UPDATE statement sync data to disk even if there
is no index change (that is, only a data change) when mysqld is run
with the --flush option.
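A sketch of the behavioral change (function shape hypothetical): with --flush, sync after UPDATE even when only data changed, instead of gating the sync on an index change:

  #include <unistd.h>

  static int finish_update(int data_fd, int index_changed, int opt_flush)
  {
    (void)index_changed;        /* no longer gates the sync */
    if (opt_flush)
      return fsync(data_fd);    /* sync data changes to disk */
    return 0;
  }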
Fixed a possible problem with reading of dynamic records
when a write cache is active. The cache must be flushed
whenever a part of the file in the write cache is to be
read.
Added a read optimization to _mi_read_dynamic_record().
No test case. The problem existed in the code but was never
observed in practice.
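The rule, sketched (structure hypothetical): if the byte range about to be read overlaps data still sitting in the write cache, the cache must be flushed first, or stale file contents would be read:

  struct write_cache_sketch
  {
    unsigned long long start, end;   /* dirty byte range [start, end) */
  };

  static int need_flush_before_read(const struct write_cache_sketch *c,
                                    unsigned long long pos,
                                    unsigned long long len)
  {
    return pos < c->end && pos + len > c->start;   /* ranges overlap */
  }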
"temporary table with data directory option fails"
myisam should not use user-specified table name when creating
temporary tables and use generated connection specific real name.
Test included.
It was possible that fetching a record by an exact key value
(including the record pointer) could return a record with a
different key value. This happened only if a concurrent insert
added a record with the searched key value after the fetching
statement locked the table for read.
The search succeeded on the key value, but the record was
rejected as it was past the file length that was remembered
at the start of the fetching statement. In other words, it was
rejected as being a concurrently inserted record.
The action to recover from this problem was to fetch the
record that is pointed at by the next key of the index.
This was repeated until a record below the file length was
found.
I now avoid this loop if an exact match was searched.
If this match is beyond the file length, it is now treated
as "key not found". There cannot be another key with the
same record pointer.
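The changed logic, sketched and simplified: on an exact search that includes the record pointer, a hit beyond the remembered file length means "not found" rather than a cue to walk the index further:

  enum find_result { FOUND, NOT_FOUND, TRY_NEXT_KEY };

  static enum find_result classify_hit(int exact_match,
                                       unsigned long long rec_pos,
                                       unsigned long long file_length)
  {
    if (rec_pos < file_length)
      return FOUND;             /* record existed at statement start */
    /* Record was inserted concurrently after our read lock: */
    return exact_match ? NOT_FOUND      /* no second key can match */
                       : TRY_NEXT_KEY;  /* keep scanning the index */
  }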
A corrupt table with dynamic record format can crash the
server when trying to select from it.
I fixed the crash that resulted from the particular type
of corruption that has been reported for this bug.
No test case. To test it, one needs a table with a very special
corruption. The bug report contains a file with such a table.
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
Very complex select statements can create temporary tables
that are too big to be represented as a MyISAM table.
This was not checked at table creation time, but only at
open time. The result was an attempt to delete the
"impossible" table.
But if the server is built --with-raid, MyISAM tries to
open the table before deleting the files. It needs to find
out if the table uses the raid support and how many raid
chunks there are. This is done with an open "for repair",
which will almost always succeed.
But in this case we have an "impossible" table. The open
failed. Hence the files were not deleted. Also the error
message was a bit unspecific.
I turned an open error in this situation into the assumption
of having no raid support on the table. Thus we try to delete
the normal data file. This may however leave existing raid
chunks behind.
I also added a check in mi_create() to prevent the creation
of an "impossible" table. A more decriptive error message is
given in this case.
No test case. The required select statement is way too
large for the test suite. I added a test script to the
bug report.
CHECK TABLE temporarily cleared the auto_increment value.
It runs with a read lock, allowing other readers and
concurrent INSERTs. The latter could grab the wrong value
in that moment.
CHECK TABLE no longer modifies the auto_increment value,
not even for a short moment.
Write operations on tables created in 4.x with an index on a
variable-length column resulted in index corruption. Even REPAIR
TABLE wasn't able to fix the broken index.
The problem was that the packed key length size wasn't restored
correctly. In 5.0 the packed key length size is either 1 or 2.
In 4.x this length is always 2, but it is saved as 0.
This fix ensures that the key length size is restored correctly
for 4.x tables.
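The compatibility mapping, as a sketch:

  /* 5.0 stores a packed key length size of 1 or 2; 4.x tables always
     used 2 but saved the field as 0. Map the legacy 0 back to 2. */
  static unsigned restore_pack_length(unsigned saved)
  {
    return saved == 0 ? 2 : saved;
  }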
Updating data in a HEAP table with a BTREE index results in a wrong
index_length counter value, which keeps growing after each update.
When inserting a new record into the tree, the counter is
incremented by:
sizeof(TREE_ELEMENT) + key_size + tree->size_of_element
But when deleting an element from the tree, it decrements the
counter only by:
sizeof(TREE_ELEMENT) + tree->size_of_element
This fix makes the allocated memory counter for the tree accurate,
that is, it also decreases the counter by key_size when deleting a
tree element.
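The accounting fix, sketched with simplified structures (the real ones are TREE and TREE_ELEMENT): insert and delete must adjust the counter by the same amount, including key_size:

  struct tree_element_sketch { void *left, *right; };
  struct tree_sketch { unsigned long allocated; unsigned size_of_element; };

  static void on_insert(struct tree_sketch *t, unsigned long key_size)
  {
    t->allocated += sizeof(struct tree_element_sketch)
                    + key_size + t->size_of_element;
  }

  static void on_delete(struct tree_sketch *t, unsigned long key_size)
  {
    /* Before the fix, key_size was missing here, so the counter grew
       with every update (a delete followed by a re-insert). */
    t->allocated -= sizeof(struct tree_element_sketch)
                    + key_size + t->size_of_element;
  }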
Retrieving data from a compressed MyISAM table bigger than 4G on a
32-bit box with mmap() support results in a server crash.
mmap() accepts the number of bytes to be mapped in its second
parameter, which is a 32-bit size_t. But we pass data_file_length,
which is a 64-bit my_off_t. As a result only the first
data_file_length % 4G bytes were mapped.
This fix adds an additional condition for mmap() usage: use mmap()
for a compressed table only if its size is no more than 4G on a
32-bit platform.
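The added condition, sketched: the mapping is attempted only if the whole data file length fits in mmap()'s size_t length parameter:

  #include <stddef.h>
  #include <stdint.h>

  /* On a 32-bit platform size_t is 32-bit, so a 64-bit length would
     silently truncate modulo 4G when passed to mmap(). */
  static int can_mmap_whole_file(uint64_t data_file_length)
  {
    return data_file_length <= (uint64_t)(size_t)-1;
  }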
Whenever 'myisamchk' needed to recreate a table,
the auto increment information was lost.
Now the forgotten element of the table creation
information is set correctly.
For "count(*) while index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs
does not allocate a new buffer, but uses 'lastkey2'...
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
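A sketch of the new invalidation (the flag is real, its value here and the struct are illustrative):

  #define HA_STATE_RNEXT_SAME 0x0010   /* illustrative value */

  struct scan_info_sketch { unsigned update; };

  /* Once anything else writes into lastkey2, drop the "key already
     saved" note so the next scan step re-copies the key. */
  static void on_lastkey2_polluted(struct scan_info_sketch *info)
  {
    info->update &= ~HA_STATE_RNEXT_SAME;
  }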
Remove wrong fix for Bug#14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
Safety fix for bug #13855 "select distinct with group by caused server crash"
For UCA collations, isalnum() and strnncollsp() don't agree on
whether 0xC2A0 is a space (strnncollsp is right, isalnum is wrong).
They still don't; the bug was fixed by avoiding strnncollsp.
Removed wrong fix for bug #14009 (use of abs() on null value causes problems with filesort)
Mark that add_time(), time_diff() and str_to_date() can return null values
Bug#12166 - Unable to index very large tables
Moved the clearing of errors behind the second repair attempt,
but errors are cleared only if no error happened.
No test case. The error can be repeated only with little free
space on tmpdir. I do not know of a way to create this
in a portable way with our test suite.
I did however attach a shell script to the bug report which
can easily be adapted to the situation on the test machine.
1. It's wrong to use memcpy() for overlapping areas;
2. we use it only once.
During the merge to 4.1 we will also remove a memcpy_overlap()
call from strings/ctype-tis620.c in order to fix
bug #10836: ctype_tis620 test failure with ICC-compiled binaries on IA64.
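For point 1, a minimal illustration of why a memcpy()-based wrapper is unsafe: memcpy() has undefined behavior when source and destination overlap; memmove() is the correct call:

  #include <string.h>

  /* Shift len bytes left by `by` within the same buffer. The regions
     [by, by+len) and [0, len) overlap whenever by < len, so memcpy()
     here would be undefined behavior. */
  static void shift_left(char *buf, size_t len, size_t by)
  {
    memmove(buf, buf + by, len);
  }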