Showstopper and regression against 5.0.24.
Previously, we ignored seek() errors (see Bug#22828) and let seek()s
against pipes fail silently. Now that we check whether a seek failed,
and return without reading when it did, this bug popped up.
This restores the old behavior for file-ish objects that could never
be seek()ed.
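A hedged POSIX sketch of the restored behaviour (illustrative only;
seek_or_ignore is a hypothetical helper, not the actual function):

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical helper: a failed seek is fatal only for objects
       that could ever be seek()ed; pipes/FIFOs fail with ESPIPE and
       the failure is ignored, as before 5.0.24. */
    static int seek_or_ignore(int fd, off_t pos)
    {
      if (lseek(fd, pos, SEEK_SET) == (off_t) -1)
      {
        if (errno == ESPIPE)
          return 0;      /* non-seekable object: ignore, keep reading */
        return -1;       /* real error: return without reading */
      }
      return 0;
    }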
- When cache memory can't be allocated, the size is recalculated using
  3/4 of the requested memory. This number is rounded up to the nearest
  min_cache step.
- With the previous implementation the new cache size could become
  bigger than requested because of this rounding, and thus we got an
  infinite loop.
- This patch fixes the problem by ensuring that the new cache size is
  always smaller on the second and subsequent iterations until we reach
  min_cache (see the sketch after this list).
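A minimal self-contained sketch of the fixed retry loop; the names
round_up and alloc_cache are hypothetical and the real mysys code
differs in detail:

    #include <stdlib.h>

    static size_t round_up(size_t n, size_t step)
    { return ((n + step - 1) / step) * step; }

    /* Shrinks by 3/4 steps on allocation failure; the clamp at the
       bottom guarantees strict shrinking, which is the property whose
       absence caused the infinite loop. */
    static void *alloc_cache(size_t request, size_t min_cache,
                             size_t *out_size)
    {
      size_t size= round_up(request, min_cache);
      for (;;)
      {
        void *buf= malloc(size);
        if (buf)
        {
          *out_size= size;
          return buf;
        }
        if (size <= min_cache)
          return NULL;                    /* nothing smaller to try */
        {
          size_t next= round_up(size / 4 * 3, min_cache);
          /* rounding up may fail to shrink; force a smaller size */
          size= (next < size) ? next : size - min_cache;
        }
      }
    }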
Fixed compiler warnings (mostly in DBUG_PRINT() and unused arguments).
Fixed a bug in the query cache when used with tracing (--with-debug).
Fixed a memory leak in mysqldump.
Removed warnings from mysqltest scripts (replaced -- with #).
- The io cache flag seek_not_done was not set properly in the
  reinit_io_cache() function call, and this led to my_seek() being
  called despite an invalid file handle.
- Added a test in reinit_io_cache() to ensure we have a valid file
  handle before setting the seek_not_done flag (see the sketch after
  this list).
- Because my_seek() is actually capable of returning an error code,
  we should exploit that in the best possible way.
- There might be kernel errors or other errors we can't predict, and
  capturing the return value of all system calls gives us a better
  understanding of possible errors.
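A minimal sketch of the added check, assuming the usual mysys types
(my_tell() and the seek_not_done flag exist in mysys; the exact logic
in mf_iocache.c differs):

    /* Inside reinit_io_cache(): file == -1 means the cache is not
       backed by a real file, so there is nothing my_seek() could
       act on. */
    if (info->file != -1)
    {
      my_off_t pos= my_tell(info->file, MYF(0));
      info->seek_not_done= (seek_offset != pos); /* seek lazily later */
    }
    else
      info->seek_not_done= 0;     /* no file handle: never seek */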
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes
but also the data file.
Non-quick parallel repair works with one thread per index. The first
of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the old
data file. If there were holes (deleted records) in the table, the
first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put the
positions from the old data file into the index.
The new design uses a shared io cache which is filled by the first
thread (the data file writer) with the new contiguous records and
read by the other threads, so that they now know the new record
positions (a generic sketch of this pattern follows below).
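A hedged, generic sketch of that one-writer/many-readers pattern in
plain pthreads (an illustration of the idea only, not the actual
IO_CACHE share code):

    #include <pthread.h>
    #include <stdio.h>

    #define N_READERS 2

    /* One shared block: the writer publishes it, every reader must
       consume it before the writer may overwrite it. */
    typedef struct {
      pthread_mutex_t lock;
      pthread_cond_t  filled;       /* writer -> readers: block ready */
      pthread_cond_t  drained;      /* readers -> writer: all done */
      int  seq;                     /* id of the published block */
      int  readers_left;            /* readers yet to consume it */
      int  eof;
      char block[64];               /* the new contiguous records */
    } share_t;

    static share_t sh= { PTHREAD_MUTEX_INITIALIZER,
                         PTHREAD_COND_INITIALIZER,
                         PTHREAD_COND_INITIALIZER, 0, 0, 0, "" };

    static void *reader(void *arg)
    {
      long id= (long) arg;
      int seen= 0;
      pthread_mutex_lock(&sh.lock);
      for (;;)
      {
        while (sh.seq == seen && !sh.eof)
          pthread_cond_wait(&sh.filled, &sh.lock);
        if (sh.eof && sh.seq == seen)
          break;                            /* no more blocks */
        seen= sh.seq;
        printf("reader %ld got block %d: %s\n", id, seen, sh.block);
        if (--sh.readers_left == 0)
          pthread_cond_signal(&sh.drained); /* writer may refill */
      }
      pthread_mutex_unlock(&sh.lock);
      return 0;
    }

    int main(void)
    {
      pthread_t t[N_READERS];
      long i;
      int b;
      for (i= 0; i < N_READERS; i++)
        pthread_create(&t[i], 0, reader, (void*) i);
      for (b= 1; b <= 3; b++)               /* the data file writer */
      {
        pthread_mutex_lock(&sh.lock);
        while (sh.readers_left)             /* old block still in use */
          pthread_cond_wait(&sh.drained, &sh.lock);
        snprintf(sh.block, sizeof sh.block, "records %d", b);
        sh.seq= b;
        sh.readers_left= N_READERS;
        pthread_cond_broadcast(&sh.filled);
        pthread_mutex_unlock(&sh.lock);
      }
      pthread_mutex_lock(&sh.lock);
      while (sh.readers_left)
        pthread_cond_wait(&sh.drained, &sh.lock);
      sh.eof= 1;
      pthread_cond_broadcast(&sh.filled);
      pthread_mutex_unlock(&sh.lock);
      for (i= 0; i < N_READERS; i++)
        pthread_join(t[i], 0);
      return 0;
    }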
Another problem was that a common bit_buff and rec_buff were used for
the parallel repair of compressed tables. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for the checksum calculation. I made this
multi-thread safe too.
A wrong cast led to numeric overflow for data files greater than 4GB:
the parallel repair assumed end of file after reading only the amount
by which the file exceeded 4GB. It then truncated the data file and
noted the number of records found so far in the index file header as
the number of rows in the table.
Removing the cast fixed the problem (a small illustration follows
below). I added some cosmetic changes too.
The normal repair worked because it uses a different function to read
from the data file.
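A self-contained illustration of that class of overflow (the variable
names are hypothetical; the real code squeezed a 64-bit my_off_t
through a 32-bit cast):

    #include <stdio.h>

    int main(void)
    {
      unsigned long long file_size= 5ULL << 30;     /* a 5GB file */
      unsigned int bad_pos= (unsigned int) file_size; /* wrong cast */
      /* Prints 1073741824 (1GB): 4GB vanished, so the repair saw
         "end of file" 4GB too early. */
      printf("real: %llu, after 32-bit cast: %u\n", file_size, bad_pos);
      return 0;
    }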
Added two status variables:
binlog_cache_use - counts the number of transactions that used the
transaction's temporary binary log cache in some way.
binlog_cache_disk_use - counts the number of transactions that required
disk I/O to store information in this binary log cache.
(If binlog_cache_disk_use grows relative to binlog_cache_use, the
binlog cache is likely too small for typical transactions.)
Fixed a crash when SELECT ... INTO OUTFILE/DUMPFILE failed to
open a file that already existed. The problem was that end_io_cache()
was called even if init_io_cache() was not. This affected both
OUTFILE and DUMPFILE (both fixed). Sometimes a wrongly aligned pointer
was freed, sometimes mysqld core dumped.
Another problem was that select_dump::send_error removed the dumpfile
even if it had been created by an earlier run, or by some other
program, whenever the file permissions permitted it. Fixed it so that
the file is only deleted if an error occurred and the file was created
by mysqld just a moment ago, in that thread.
select_export, on the other hand, did not handle the corresponding
garbage file at all. Both fixed (see the sketch below).
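A hedged sketch of the resulting cleanup rules (the struct and flag
names here are hypothetical, not the actual sql-layer members; the
mysys calls are real):

    #include <my_global.h>
    #include <my_sys.h>

    typedef struct {
      IO_CACHE cache;
      File     file;
      my_bool  cache_inited;   /* init_io_cache() actually ran */
      my_bool  file_created;   /* created by mysqld in this thread */
      char     path[FN_REFLEN];
    } dump_ctx;

    static void cleanup_on_error(dump_ctx *ctx)
    {
      if (ctx->cache_inited)          /* never end what wasn't begun */
        end_io_cache(&ctx->cache);
      if (ctx->file >= 0)
        my_close(ctx->file, MYF(0));
      if (ctx->file_created)          /* delete only our own garbage,
                                         never a pre-existing file */
        my_delete(ctx->path, MYF(0));
    }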
After these fixes, a big part of the select_export::prepare and
select_dump::prepare code became identical. Merged the code into a new
function, create_file(), which is now called by both.
Regards,
Jani
Use my_b_append() when writing the first 4 bytes of the relay log.
Indeed, comments in mysys/mf_iocache.c say we must always use
my_b_append for such a cache.
This *could* avoid a very rare assertion failure which is:
030524 19:32:38 Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log '/
users/gbichot/4.1.1/mysql-test/var/log/slave-relay-bin.000001' position: 4
030524 19:32:38 next log '/users/gbichot/4.1.1/mysql-test/var/log/slave-relay-bin.000002' is currently active
mysqld: mf_iocache.c:701: _my_b_seq_read: Assertion `pos_in_file == info->end_of_file' failed.
and which always seemed to happen when the SQL thread and/or the I/O
thread were at position 4 in a relay log.
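A minimal sketch of the append path, assuming mysys identifiers
(SEQ_READ_APPEND, my_b_append(), IO_SIZE); the 4-byte magic literal
and the error handling here are illustrative:

    #include <my_global.h>
    #include <my_sys.h>

    /* "\xfe bin": the 4 bytes at the start of a (relay) log file */
    static const uchar magic[4]= { 0xfe, 0x62, 0x69, 0x6e };

    static int write_relay_log_header(File fd)
    {
      IO_CACHE log;
      int error;
      /* A SEQ_READ_APPEND cache: the SQL thread reads while the
         I/O thread appends. */
      if (init_io_cache(&log, fd, IO_SIZE, SEQ_READ_APPEND, 0, 0,
                        MYF(MY_WME)))
        return 1;
      /* Appends must go through my_b_append(), never a plain write,
         or the reader's end_of_file bookkeeping goes out of sync
         (the _my_b_seq_read assertion quoted above). */
      error= my_b_append(&log, magic, sizeof(magic));
      end_io_cache(&log);
      return error;
    }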