ALTER TABLE DISABLE KEYS doesn't work when modifying the table
ENABLE|DISABLE KEYS combined with another ALTER TABLE option other
than RENAME TO did nothing. Also, if the table had disabled keys and
was ALTER-ed, the resulting table ended up with enabled keys.
Fixed by checking whether the table had disabled keys and preserving
that state (keeping them disabled) in the copied table.
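For illustration, a minimal sketch of the intended behaviour with purely
hypothetical types and names (the real change is in sql/sql_table.cc and
uses mi_indexes_are_disabled() through the handler layer):

  /* Hypothetical illustration only -- not the actual sql_table.cc code. */
  enum key_state { KEYS_ENABLED, KEYS_DISABLED };
  enum keys_onoff { KEYS_LEAVE_AS_IS, KEYS_ENABLE, KEYS_DISABLE };

  struct table_copy { enum key_state keys; };

  /* Copy-style ALTER TABLE: after building the new table, apply an explicit
   * ENABLE|DISABLE KEYS request, otherwise preserve the old table's state
   * (previously the new table always ended up with enabled keys).          */
  static void apply_key_state(const struct table_copy *old_tab,
                              struct table_copy *new_tab,
                              enum keys_onoff requested)
  {
    switch (requested)
    {
    case KEYS_ENABLE:      new_tab->keys= KEYS_ENABLED;  break;
    case KEYS_DISABLE:     new_tab->keys= KEYS_DISABLED; break;
    case KEYS_LEAVE_AS_IS: new_tab->keys= old_tab->keys; break; /* preserve */
    }
  }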
myisam/mi_open.c:
Extend mi_indexes_are_disabled() with a new return value:
2 - non-unique indexes are disabled
mysql-test/r/alter_table.result:
update result
mysql-test/t/alter_table.test:
update test
sql/sql_table.cc:
When ENABLE|DISABLE KEYS is combined with another option other than
RENAME TO, we should ENABLE|DISABLE the keys of the modified table.
Also, when modifying, we should preserve the previous state of the
indexes.
(This problem exists in 5.0 and 5.1, but since the codebase has
diverged this fix won't automerge; the fix there will be quite
similar.)
into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-4.1-maint
configure.in:
Auto merged
mysql-test/t/ps.test:
Auto merged
sql/handler.cc:
Auto merged
sql/sql_delete.cc:
Auto merged
sql/sql_select.cc:
Auto merged
sql/table.cc:
Auto merged
tests/mysql_client_test.c:
Auto merged
myisam/sort.c:
Manual merge.
mysql-test/r/innodb_mysql.result:
Manual merge.
mysql-test/t/innodb_mysql.test:
Manual merge.
mysys/mf_iocache.c:
Manual merge.
Repair table could crash the server if there is not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
REPAIR, but all statements that create indexes by sort:
repair by sort, parallel repair, bulk insert.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys() returns an error.
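A rough sketch of the added check, with simplified and partly invented
names (BUFFPEK, keys and maxbuffer follow the myisam/sort.c note below;
the real code computes these values somewhat differently):

  #include <stddef.h>

  typedef struct st_buffpek            /* descriptor of one pre-sorted chunk */
  {
    unsigned long long file_pos;
    unsigned long long count;
  } BUFFPEK;

  /* Return 0 if the sort buffer is big enough, -1 to report an error
   * instead of crashing later.  keys = how many keys fit in memory,
   * maxbuffer = how many BUFFPEKs (chunks) the repair will need.      */
  static int check_sort_buffer(size_t sort_buffer_size, size_t key_length,
                               unsigned long long records)
  {
    size_t keys= sort_buffer_size / (key_length + sizeof(char *));
    unsigned long long maxbuffer;

    if (keys == 0)                               /* not even one key fits */
      return -1;
    maxbuffer= records / keys + 1;

    /* New requirement: besides the BUFFPEK array itself, there must be
     * room for at least one key per BUFFPEK during the merge phase.    */
    if (maxbuffer * (sizeof(BUFFPEK) + key_length) > sort_buffer_size)
      return -1;                                 /* sort buffer too small */
    return 0;
  }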
myisam/sort.c:
maxbuffer is the number of BUFFPEKs for repair. It is calculated
as records / keys, where keys is the number of keys that can be
stored in memory (myisam_sort_buffer_size). There must be
sufficient memory to store both the BUFFPEKs and the keys; this
was checked correctly before this patch. However, another
requirement wasn't checked: there must be sufficient memory for
at least one key per BUFFPEK, otherwise repair by sort / parallel
repair cannot operate.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys() returns an error.
mysql-test/r/repair.result:
A test case for BUG#23175.
mysql-test/t/repair.test:
A test case for BUG#23175.
set. This has always worked because when flag is !=0 then
HA_VAR_LENGTH_KEY is always set. Therefore, a test case cannot
reveal a faulty behavior.
Fix for bug#23074: typo in myisam/sort.c
myisam/sort.c:
Fix typo. Nevertheless, it has worked as expected, because
whenever a bit in flag is set, HA_VAR_LENGTH_KEY is always set
too. Hence no problem was exposed through DDL.
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
myisam/mi_check.c:
Auto merged
myisam/mi_packrec.c:
Auto merged
myisam/sort.c:
Auto merged
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes,
but also the data file.
Non-quick parallel repair works with one thread per index. The
first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads didn't know the new record positions and put the positions
from the old data file into their index.
The new design is that there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads, so that they now know the
new record positions.
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
include/my_sys.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
include/myisam.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of checksum calculation in mi_check.c.
'calc_checksum' is now in myisamdef.h:st_mi_sort_param.
myisam/mi_check.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Implemented a new parallel repair design.
Using a synchronized shared read/write cache.
Allowed for thread specific bit_buff, rec_buff, and calc_checksum.
myisam/mi_open.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added DBUG output.
myisam/mi_packrec.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Allowed for thread specific bit_buff and rec_buff.
myisam/myisamdef.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Commented on checksum calculation variables.
Allowed for thread specific bit_buff.
Added DBUG output for better table crash detection.
myisam/sort.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added implications of the new parallel repair design.
Renamed 'info' -> 'sort_param'.
Added DBUG output.
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test results.
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test cases.
mysys/mf_iocache.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
We now allow a writer to synchronize itself with the readers of
a shared cache. When all threads have joined in the lock, the
writer copies the data from its write buffer to the shared read
buffer.
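A conceptual sketch of that rendezvous in plain pthreads; this is not the
mysys IO_CACHE/io_cache_share code, and the structure, names and fixed
buffer size are invented:

  #include <pthread.h>
  #include <string.h>

  typedef struct
  {
    pthread_barrier_t arrive;   /* all threads done with the previous block */
    pthread_barrier_t depart;   /* writer has published the new block */
    char   shared_buf[8192];    /* shared read buffer */
    size_t shared_len;
  } shared_cache;

  /* init: pthread_barrier_init(&c->arrive, NULL, n_threads); same for depart.
   * len is assumed to be <= sizeof(c->shared_buf). */

  /* The single writer (the data file writer thread). */
  void writer_publish(shared_cache *c, const char *write_buf, size_t len)
  {
    pthread_barrier_wait(&c->arrive);      /* wait until readers released the old block */
    memcpy(c->shared_buf, write_buf, len); /* copy write buffer -> shared read buffer */
    c->shared_len= len;
    pthread_barrier_wait(&c->depart);      /* readers may now use the new positions */
  }

  /* Each reader (one repair thread per index). */
  size_t reader_fetch(shared_cache *c, char *out)
  {
    pthread_barrier_wait(&c->arrive);      /* signal: done with the previous block */
    pthread_barrier_wait(&c->depart);      /* wait for the writer's copy */
    memcpy(out, c->shared_buf, c->shared_len);
    return c->shared_len;
  }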
Fix: use the charset-aware functions cs->cset->lengthsp() and
cs->cset->fill() instead of single-byte code, which is not UCS2
compatible.
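The call pattern looks roughly like this (hedged: the exact prototypes of
lengthsp()/fill() in m_ctype.h differ slightly between versions):

  #include "m_ctype.h"                 /* CHARSET_INFO, cs->cset->... */

  /* Charset-aware "strip trailing spaces": correct for UCS2, where a
   * byte-wise loop over 0x20 would misread the two-byte characters. */
  static size_t trimmed_length(CHARSET_INFO *cs, const char *ptr, size_t length)
  {
    return cs->cset->lengthsp(cs, ptr, length);
  }

  /* Charset-aware space padding of a key segment up to its full length. */
  static void pad_spaces(CHARSET_INFO *cs, char *to, size_t used, size_t full)
  {
    if (used < full)
      cs->cset->fill(cs, to + used, full - used, ' ');
  }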
myisam/mi_key.c:
Using character-set aware functions to trim and append spaces.
myisam/mi_open.c:
Initialize charset for BINARY/VARBINARY to my_charset_bin,
instead of NULL
mysql-test/r/ctype_ucs.result:
Adding test case
mysql-test/t/ctype_ucs.test:
Adding test case
into mysql.com:/home/svoj/devel/mysql/merge/mysql-4.1-engines
mysql-test/r/myisam.result:
Auto merged
mysql-test/t/myisam.test:
Auto merged
sql/table.cc:
Auto merged
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer to work.
Sometimes the optimizer doesn't set them properly.
Here we decided just to add code to check that the parameters
are correct. We hope to fix the optimizer at some point.
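Purely illustrative version of the added guard; the function and type
names below are invented stand-ins, the real check sits inside
myisam/mi_range.c:

  typedef unsigned long long ha_rows_t;           /* stand-in for ha_rows */
  #define HA_POS_ERROR_SKETCH (~(ha_rows_t) 0)    /* stand-in for HA_POS_ERROR */

  /* If the optimizer did not set up the search key for an R-tree index,
   * report an error instead of dereferencing a NULL min_key. */
  ha_rows_t rtree_records_in_range_sketch(const void *min_key)
  {
    if (min_key == NULL)
      return HA_POS_ERROR_SKETCH;
    /* ... perform the real R-tree range estimation here ... */
    return 0;
  }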
myisam/mi_range.c:
return the error if min_key is NULL
mysql-test/r/gis-rtree.result:
test result
mysql-test/t/gis-rtree.test:
test case
Deletes on a big index could corrupt the index when it needs to
shrink.
Put a forgotten negation operator back in.
No test case: it is too big for the test suite, and it does not
work with 4.0, only with higher versions. It is attached to the
bug report.
myisam/mi_delete.c:
Bug#22384 - DELETE FROM table causes "Incorrect key file for table"
Put a negation operator ('!') before _mi_get_last_key() in del().
It returns NULL on error, non-NULL on success.
into chilla.local:/home/mydev/mysql-4.0-bug14400
mysql-test/r/myisam.result:
Auto merged
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge
mysql-test/t/myisam.test:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge
into chilla.local:/home/mydev/mysql-4.0-bug14400
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge
into chilla.local:/home/mydev/mysql-4.1-bug14400-monty
BitKeeper/etc/ignore:
auto-union
include/my_global.h:
Auto merged
myisam/mi_rkey.c:
Manual null merge as a better fix is already present.
mysql-test/r/myisam.result:
Manual null merge as a better fix is already present.
mysql-test/t/myisam.test:
Manual null merge as a better fix is already present.
sql/sql_select.cc:
Manual merge of purify improvements.
into chilla.local:/home/mydev/mysql-4.1-bug14400
mysql-test/r/myisam.result:
Auto merged
mysql-test/t/myisam.test:
Auto merged
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge
into chilla.local:/home/mydev/mysql-4.1-bug14400
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge from 4.0
mysql-test/r/myisam.result:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge from 4.0
mysql-test/t/myisam.test:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Manual merge from 4.0
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Additional fix for full keys and test case.
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Additional fix for full keys.
mysql-test/r/myisam.result:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Additional results.
mysql-test/t/myisam.test:
Bug#14400 - Query joins wrong rows from table which is subject of
"concurrent insert"
Additional test case.
The problem is that on some Mac OS X versions a file write/read
call with zero bytes to write/read returns an error.
So here I try to eliminate that kind of call.
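The workaround pattern, sketched here with plain POSIX calls rather than
the mysys wrappers actually touched (myisam/mi_check.c, mysys/my_chsize.c):

  #include <unistd.h>
  #include <sys/types.h>

  /* Skip the system call entirely when there is nothing to transfer:
   * some Mac OS X versions return an error for zero-byte read()/write(). */
  static int write_all_or_nothing(int fd, const char *buf, size_t length)
  {
    if (length == 0)
      return 0;                     /* nothing to do -- and nothing to fail */
    return write(fd, buf, length) == (ssize_t) length ? 0 : -1;
  }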
myisam/mi_check.c:
zero length copying avoided
mysys/my_chsize.c:
no file operations if it's not necessary
into chilla.local:/home/mydev/mysql-4.1-bug14400
myisam/mi_rkey.c:
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Manual merge from 4.0.
mysql-test/r/myisam.result:
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Manual merge from 4.0.
mysql-test/t/myisam.test:
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Manual merge from 4.0.
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Better fix by Monty: "The previous bug fix didn't work
when using partial keys."
mysql-test/r/myisam.result:
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Added test result
mysql-test/t/myisam.test:
Bug#14400 - Query joins wrong rows from table which is
subject of "concurrent insert"
Added test case
table results in table corrupt
A fulltext key always has two keysegs, thus we need to update the
last FT_SEGS elements of the seg array in case of a compressed
table. Also we must update ft2_keyinfo.
myisam/mi_packrec.c:
A fulltext key always has two keysegs, thus we need to update the
last FT_SEGS elements of the seg array in case of a compressed
table. Also we must update ft2_keyinfo.
Fixed by moving update_key_parts() down to be after write_index().
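An order-of-operations sketch with invented types, only to show why the
call order matters (write_index() produces the statistics that
update_key_parts() consumes):

  typedef struct { unsigned long long distinct[16]; } key_stats; /* hypothetical */

  static key_stats write_index(void)
  {
    key_stats s= {{0}};
    /* ... write the sorted keys to the index file, counting distinct
     *     values per key part as a side effect ...                   */
    return s;
  }

  static void update_key_parts(const key_stats *s)
  {
    /* ... turn the collected counts into rec_per_key-style statistics ... */
    (void) s;
  }

  void finish_repair_by_sort(void)
  {
    key_stats s= write_index();  /* must run first: it gathers the statistics */
    update_key_parts(&s);        /* previously this was called before write_index() */
  }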
myisam/sort.c:
write_index() collects index statistics which are later used in
update_key_parts(). Thus update_key_parts() must be called after
write_index().
mysql-test/r/repair.result:
Test case for bug#18874.
mysql-test/t/repair.test:
Test case for bug#18874.
The previous bug fix didn't work when using partial keys.
Don't use GNUC min/max operations, as they are deprecated.
Fixed a valgrind warning.
BitKeeper/etc/ignore:
Added */.libs/*
include/my_global.h:
Don't use GNUC min/max operations, as they are deprecated.
myisam/mi_rkey.c:
Better bug fix for #14400 "Query joins wrong rows from table which is subject of "concurrent insert""
The previous bug fix didn't work when using partial keys.
myisam/mi_test_all.res:
Updated results to match mi_test_all.sh
myisam/mi_test_all.sh:
Removed confusing warning
mysql-test/r/myisam.result:
Added test case for #14400
mysql-test/t/myisam.test:
Added test case for #14400
sql/sql_select.cc:
Fixed valgrind warning (in field_string::val_int())
myisam/mi_unique.c: mi_check_unique() should skip trailing spaces when
comparing TEXT and VARTEXT key segments.
myisam/mi_unique.c:
Fix for bug #20709: Collation not used in group by on 4.1.
myisam/mi_unique.c: mi_check_unique() should skip trailing spaces when
comparing TEXT and VARTEXT key segments.
Example: assume we have a 'char(200) collate utf8_unicode_ci' field
and two records containing the characters _utf8 0x65 and _utf8 0xC3A9.
These values are equal according to the utf8_unicode_ci collation,
but the two corresponding 600-byte keys,
"0x65<0x20 repeated 599 times>" and "0xC3A9<0x20 repeated 598 times>",
are not equal if we count trailing spaces, which may cause
inconsistent behavior.
So, let's pass 1 as the skip_end_space parameter to the
mi_compare_text() function for proper comparison of TEXT and
VARTEXT key segments.
mysql-test/r/ctype_utf8.result:
Fix for bug #20709: Collation not used in group by on 4.1.
- test results.
mysql-test/t/ctype_utf8.test:
Fix for bug #20709: Collation not used in group by on 4.1.
- test case.
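A standalone demonstration of the padding problem from the mi_unique.c
note above ('e' = 0x65 vs 'é' = 0xC3 0xA9 in utf8); this is not MyISAM
code, it only shows why trailing pad spaces must be ignored before the
collation can see the two values as equal:

  #include <stdio.h>
  #include <string.h>

  static size_t len_without_trailing_spaces(const unsigned char *s, size_t len)
  {
    while (len > 0 && s[len - 1] == 0x20)
      len--;
    return len;
  }

  int main(void)
  {
    unsigned char a[600], b[600];

    memset(a, 0x20, sizeof(a)); a[0]= 0x65;             /* "e" + 599 spaces */
    memset(b, 0x20, sizeof(b)); b[0]= 0xC3; b[1]= 0xA9; /* "é" + 598 spaces */

    /* Byte-wise the padded keys differ ... */
    printf("padded keys equal: %s\n", memcmp(a, b, sizeof(a)) == 0 ? "yes" : "no");

    /* ... but once the pad is stripped, only the real characters are left,
     * and those are what the collation-aware comparison should receive.   */
    printf("trimmed lengths: %zu and %zu\n",
           len_without_trailing_spaces(a, sizeof(a)),
           len_without_trailing_spaces(b, sizeof(b)));
    return 0;
  }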
The problem described in this bug report affects MyISAM tables only.
Running mysqld --flush instructs mysqld to sync all changes to disk
after each SQL statement. It worked well for INSERT and DELETE
statements, but for UPDATE it synced only if there was an index
change (a change to a column that has an index). If no updated
column had an index, data wasn't synced to disk.
This fix makes the UPDATE statement sync data to disk even if there
is no index change (that is, only a data change) and mysqld is run
with the --flush option.
myisam/mi_update.c:
Every MyISAM function that updates a MyISAM table must end with a
call to _mi_writeinfo(). If operation (the second parameter of
_mi_writeinfo()) is not 0, it sets share->changed to 1, that is,
it flags that data has changed. If operation is 0, the function
is effectively a no-op in this respect.
mi_update() must always pass a non-zero operation value, since even
if there is no index change there could be a data change.
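A simplified sketch of that rule with invented stand-in types (the real
code is mi_update() ending in a call to _mi_writeinfo()):

  struct share_flags  { int changed; };            /* stand-in for MYISAM_SHARE */
  struct table_handle { struct share_flags *s; };  /* stand-in for MI_INFO */

  /* operation != 0 marks the table as changed, so that --flush syncs it. */
  static int writeinfo(struct table_handle *info, int operation)
  {
    if (operation)
      info->s->changed= 1;
    /* ... with --flush, a changed table is flushed to disk here ... */
    return 0;
  }

  int update_row(struct table_handle *info, int key_changed)
  {
    /* ... update the data file, and the index only if key_changed ... */
    (void) key_changed;

    /* Before the fix the call amounted to writeinfo(info, key_changed),
     * so a data-only UPDATE never marked the table as changed.
     * Always pass a non-zero operation instead:                        */
    return writeinfo(info, 1);
  }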