The problem:
The CSV storage engine's open function returns success even
though it fails to open the data file.
The fix:
return an error when the data file cannot be opened.
Additional fixes:
added MY_WME to the my_open() call to avoid a mysterious error message
free the share struct if opening the file was unsuccessful
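A minimal sketch of the pattern, assuming the engine uses the mysys
my_open() wrapper and a free_share() helper; the member and helper names
(share->data_file_name, free_share) are illustrative and may differ from
the actual code:

    /* Open the data file; MY_WME makes mysys report the OS error instead of
       failing silently. Clean up the share struct instead of leaking it. */
    if ((data_file= my_open(share->data_file_name, O_RDONLY, MYF(MY_WME))) == -1)
    {
      free_share(share);
      return my_errno ? my_errno : HA_ERR_CRASHED_ON_USAGE;
    }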
"Update of CSV row incorrect for some BLOBs"
When reading in rows, move blob columns into temporary storage not
allocated by the Field_blob class; otherwise a row update operation will
alter the original row and make MySQL think that nothing has changed.
Also fix incrementing of the wrong statistics values.
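A hedged sketch of the blob copy: after unpacking a row, copy each blob's
payload into handler-owned memory (here a MEM_ROOT named blobroot, an assumed
member) and repoint the Field_blob at that copy. Field_blob accessor
signatures vary between server versions.

    /* Give every blob column its own copy of its data so a later update
       cannot clobber the bytes the row was read from. */
    for (Field **field= table->field; *field; field++)
    {
      if ((*field)->flags & BLOB_FLAG)
      {
        Field_blob *blob= (Field_blob*) *field;
        uchar *src;
        uint32 length= blob->get_length();
        blob->get_ptr(&src);                    /* points into the read buffer */
        if (src && length)
        {
          uchar *copy= (uchar*) memdup_root(&blobroot, (char*) src, length);
          blob->set_ptr(length, copy);          /* repoint at the private copy */
        }
      }
    }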
"CSV does not work with NULL value in datetime fields"
Attempting to insert a row with a NULL value for a DATETIME field
results in a CSV file which the storage engine cannot read.
Don't blindly assume that "0" is acceptable for all field types;
since CSV does not support NULL values, ask the field for its
default non-NULL value.
Do not permit the creation of a table with nullable columns.
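A sketch of what the create-time check might look like, assuming it lives in
ha_tina::create(); the error reported here is illustrative:

    /* CSV cannot represent NULL, so refuse nullable columns at CREATE TABLE time. */
    for (Field **field= table_arg->s->field; *field; field++)
    {
      if ((*field)->real_maybe_null())
      {
        my_error(ER_CHECK_NOT_IMPLEMENTED, MYF(0), "nullable columns");
        return HA_ERR_UNSUPPORTED;
      }
    }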
Problem: we don't take into account the length of the data written
to the temporary data file during an UPDATE on a CSV table.
Fix: properly calculate the data file length during update.
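A hedged sketch of the bookkeeping; encode_quote(), update_temp_file and
temp_file_length are illustrative names loosely modeled on the engine, and
local_saved_data_file_length is the member mentioned in the next entry:

    /* Every row written to the temporary file during UPDATE adds to a running
       length counter. */
    size= encode_quote(new_data);                 /* encoded length of the row */
    if (my_write(update_temp_file, (uchar*) buffer.ptr(), size,
                 MYF(MY_WME | MY_NABP)))
      return -1;
    temp_file_length+= size;

    /* That counter becomes the cached data file length once the temporary
       file replaces the data file. */
    local_saved_data_file_length= temp_file_length;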
Problem: we don't adjust share->rows_recorded and local_saved_data_file_length
when deleting rows from a CSV table, so a subsequent table check may fail and
the table is reported as corrupted.
Fix: properly adjust those values.
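A sketch of the adjustment; the counters are the ones named above, while
current_position/next_position are assumed to bound the deleted row within
the data file:

    /* Keep the shared statistics in sync when a row is deleted. */
    share->rows_recorded--;
    local_saved_data_file_length-= (next_position - current_position);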
New Field::store() method implemented to explicitly set thd->count_cuted_fields
before storing the value, instead of (incorrectly) setting it in the CSV storage engine.
Thread row counter now properly incremented during check and repair in the CSV engine.
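The new method presumably wraps the existing store() call and saves/restores
thd->count_cuted_fields around it; a sketch of that pattern (the exact
signature is assumed):

    /* Store a value with an explicit cut-field counting mode, instead of having
       the storage engine poke thd->count_cuted_fields directly. */
    int Field::store(const char *to, uint length, CHARSET_INFO *cs,
                     enum_check_fields check_level)
    {
      int res;
      enum_check_fields old_check_level= table->in_use->count_cuted_fields;
      table->in_use->count_cuted_fields= check_level;
      res= store(to, length, cs);
      table->in_use->count_cuted_fields= old_check_level;
      return res;
    }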
Problem: the temporary buffer used for quoting and escaping was
initialized to the utf8 character set, and thus could not store
data in other character sets.
Fix: change the buffer's character set so that it can store any
arbitrary sequence of bytes.
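If the buffer is a String object, the fix likely amounts to giving it the
binary "character set", which accepts any byte sequence; a one-line sketch
(buffer being the quoting/escaping member):

    /* A binary charset lets the buffer hold arbitrary bytes unchanged. */
    buffer.set_charset(&my_charset_bin);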
UPDATE against a CSV table may cause a server crash or update the table
with wrong values.
CSV can write only a whole row at once, which means it must read all the
columns it is not going to update and write them along with the updated
columns. But only a limited set of columns was read: those needed by the
UPDATE query.
With this fix, all columns are read when performing an UPDATE.
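One way to express "read every column during an UPDATE" is to widen the read
set before the scan; a hedged sketch (the actual fix may hook this elsewhere):

    /* CSV rewrites whole rows, so fetch every column when the statement is
       going to update rows. */
    if (thd_sql_command(ha_thd()) == SQLCOM_UPDATE)
      bitmap_set_all(table->read_set);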
Open the data file and seek to its end after rename.
Fix the comment for the case when the file does not need repair.
Always set share->mapped_file to NULL when it has been unmapped.
Add a test to verify that the file can be used after repair.
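A sketch of the file handling described above; the mysys calls are real,
while the share members and the mapped length are assumed:

    /* After the repaired copy is renamed over the data file, reopen it and
       position at its end so subsequent writes append correctly. */
    if ((share->data_file= my_open(share->data_file_name, O_RDWR,
                                   MYF(MY_WME))) == -1)
      return HA_ERR_CRASHED_ON_REPAIR;
    my_seek(share->data_file, 0, MY_SEEK_END, MYF(0));

    /* Whenever the mapping is released, clear the stale pointer as well. */
    my_munmap((void*) share->mapped_file, mapped_length);  /* length assumed */
    share->mapped_file= NULL;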