After a table is compressed by the myisampack utility,
opening the table in the server produces valgrind warnings.
This happens because when we read a record into the buffer,
we always assume that the remaining buffer to read is at least
one word (2, 4 or 8 bytes) long. Sometimes the remaining buffer
is smaller than a word, and reading an entire word past its end
results in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining
buffer is smaller than the word size.
expired timeout on debx86-b in PB
Moved the resource-intensive test case for bug #41486 into
a separate test file to reduce execution time for mysql.test.
produce incorrect results for ROUND()
Added a workaround and a configure check to test whether
isinf() is affected by GCC bug #39228.
Since no code in MySQL server is currently affected by that
bug, the patch is actually a safeguard for possible future
code modifications. No test cases or changelog entries are
needed.
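A minimal sketch of the kind of workaround involved, assuming a
hypothetical configure-time macro (HAVE_BROKEN_ISINF) is defined when
the check detects the miscompiled isinf():

  #include <math.h>

  /* Rebuild the infinity test from finiteness and NaN checks when the
     compiler's own isinf() cannot be trusted. */
  #ifdef HAVE_BROKEN_ISINF
  #define my_isinf(x) (!isfinite(x) && !isnan(x))
  #else
  #define my_isinf(x) isinf(x)
  #endif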
During an 'INSERT DELAYED' operation, the time_zone info was not kept
in the row info. When the row was actually inserted later, the time
zone was not written into the binlog, which caused wrong results for
TIMESTAMP columns on the slave.
The fix is to store the time_zone info with the delayed row and to
restore it from the row info when another thread executes that row
later, so we write the correct time_zone info into the binlog and get
correct results on the slave.
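A minimal sketch of the idea in C, with hypothetical struct and
function names rather than the server's actual ones:

  struct time_zone;                            /* opaque */
  struct session { struct time_zone *time_zone; };

  struct delayed_row {
    struct time_zone *time_zone;               /* captured at queue time */
    /* ... packed record data ... */
  };

  /* Producer: remember the session time zone together with the row. */
  void queue_delayed_row(struct delayed_row *row, struct session *thd)
  {
    row->time_zone = thd->time_zone;
  }

  /* Consumer (delayed-insert thread): restore the time zone before
     executing the row, so the binlog event carries the right one. */
  void execute_delayed_row(struct delayed_row *row, struct session *thd)
  {
    thd->time_zone = row->time_zone;
    /* ... insert the row and write the binlog event ... */
  }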
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements to this test
  in MySQL 5.1 that could be ported back to 5.0.
- Removed a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- moved the subtests for bugs 38499 and 38691 into separate scripts
- runtimes under excessive parallel I/O load after applying the fix:
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
When asking which database is selected, the client expected
to *always* get an answer from the server.
We now handle failure more gracefully.
See comments in ticket for a discussion of what happens,
and how things interlock.
are written.
When a MyISAM table uses the DELAY_KEY_WRITE option, index updates
are not applied until a FLUSH TABLES command is issued or until the
server is shut down. If the server gets killed before the index
updates are written to disk, the index file is corrupted as expected,
but the table is not marked as crashed. So when the server is started
with myisam-recover, the table is not repaired, leaving it unusable.
The problem is that when we write the index updates to the index file,
we decrement the open_count even before flushing the keys to the file.
Fixed by moving the decrement operation after the keys are flushed to
the index file. This way we always have a non-zero open count if the
flush table operation is killed, and when the server is started with
the myisam-recover option it marks the table as crashed and repairs it.
Note: No test case added, as it would require killing the server,
restarting it with a different set of options, and other non-trivial
operations.
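A minimal sketch of the reordering in C, with hypothetical names (the
real MyISAM state handling is more involved):

  struct table_share { int open_count; };
  int flush_key_blocks(struct table_share *share);  /* keys -> .MYI file */
  int write_state(struct table_share *share);       /* persist the header */

  /* Decrement open_count only after the keys have reached the index
     file. If the server is killed mid-flush, open_count stays non-zero
     on disk and myisam-recover sees the table as crashed. */
  int close_index(struct table_share *share)
  {
    if (flush_key_blocks(share))
      return 1;
    share->open_count--;
    return write_state(share);
  }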
- Header <sys/ttydefaults.h> missing or not usable on QNX and OpenServer 6
include/my_global.h
- Moved down definition of function rint(), as for some platforms (in
this case Netware) 'longlong' is not defined until later in
"my_global.h"
functions
Unknown timezone specifications are properly rejected
by the server, but are copied into tz_storage before
rejection, and hence are retained until the end of the
server's life. With sufficiently large bogus timezone
specs, it is easy to exhaust system memory.
Allocation of memory for a copy of the timezone
name is now delayed until after verification of validity,
at the cost of a memcpy of the timezone info. This
only happens once; future lookups will hit the cached
structure.
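A minimal sketch of the validate-before-allocate pattern in C, with
hypothetical names:

  #include <string.h>
  #include <stdlib.h>

  struct tz_info;                                        /* parsed zone data */
  int tz_parse(const char *name, struct tz_info **out);  /* 0 on success */
  void tz_cache_insert(char *name, struct tz_info *info);

  /* A bogus spec is rejected before any long-lived copy is made, so
     repeated bad lookups cannot pin memory until server shutdown. */
  int tz_lookup(const char *name)
  {
    struct tz_info *info;
    if (tz_parse(name, &info))
      return 1;                    /* invalid: nothing was allocated */
    char *copy = strdup(name);     /* copy only a verified name */
    if (!copy)
      return 1;
    tz_cache_insert(copy, info);
    return 0;
  }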
Don't throw an error after checking the first and second arguments.
Continue checking the third and higher arguments, and if one of
them is stronger according to the coercibility rules,
that argument's collation is used as the result collation.
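A minimal sketch of the aggregation loop in C, under the assumption
that a lower derivation value means a stronger collation (names are
hypothetical):

  #include <string.h>

  struct collation {
    const char *name;
    int derivation;                /* lower value = stronger */
  };

  /* Scan all arguments instead of failing after the first two: a
     later, stronger argument can still decide the result collation.
     Returns 0 on success, 1 if the collations remain ambiguous. */
  int aggregate_collations(struct collation *result,
                           const struct collation *args, int count)
  {
    int ambiguous = 0;
    *result = args[0];
    for (int i = 1; i < count; i++) {
      if (args[i].derivation < result->derivation) {
        *result = args[i];         /* stronger argument wins */
        ambiguous = 0;
      } else if (args[i].derivation == result->derivation &&
                 strcmp(args[i].name, result->name)) {
        ambiguous = 1;             /* tie for now; keep scanning */
      }
    }
    return ambiguous;
  }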
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
an option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, a FEDERATED table can be considered
similar to the MERGE engine in its "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
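A minimal sketch of the check in C; the engine list and function name
are hypothetical (the MERGE entry is inferred from the comparison
above):

  #include <string.h>
  #include <strings.h>

  /* Engines whose row data lives elsewhere and must not be re-dumped. */
  static const char *ignore_data_engines[] = { "MRG_MyISAM", "FEDERATED", 0 };

  int should_dump_data(const char *engine)
  {
    const char **e;
    for (e = ignore_data_engines; *e; e++)
      if (!strcasecmp(engine, *e))
        return 0;                  /* emit CREATE TABLE only, skip rows */
    return 1;
  }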
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. That does
not actually make any sense, as those variables are not
related. The input buffer size limit is now always set to 1
MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
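A minimal sketch of the new interface and its use in C, with
hypothetical signatures:

  #include <stdbool.h>
  #include <stddef.h>

  struct line_buffer;              /* opaque input buffer, capped at 1 MB */
  char *batch_readline(struct line_buffer *buf, bool *truncated);
  void process_chunk(const char *chunk);

  /* Append a newline only to complete lines; a truncated chunk is
     passed through as-is and the rest of the same logical line is
     returned by the next call. */
  void read_all(struct line_buffer *buf)
  {
    bool truncated;
    char *line;
    while ((line = batch_readline(buf, &truncated)) != NULL) {
      process_chunk(line);
      if (!truncated)
        process_chunk("\n");
    }
  }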
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished the same actions via SQL / use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.
seems to become negative
THD::start_time has a dual meaning: it is either the time since the
process entered a given state or the transaction time returned by
e.g. NOW().
This causes problems, as sometimes THD::start_time may be set to a
value that is correct and needed when used as a base for NOW(), but
these times may be arbitrary (SET @@timestamp) or non-local (coming
from the master through the replication feed).
If one such non-local time is set, there is no way to return a correct
value for e.g. SHOW PROCESSLIST or SELECT ... FROM
INFORMATION_SCHEMA.PROCESSLIST.
Fixed by making the Time column in SHOW PROCESSLIST SIGNED LONG instead
of UNSIGNED LONG and doing the correct conversions.
Note that no reliable test case can be constructed, since it would
require knowing the local time, which cannot be achieved by means of
the current test suite.
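A minimal sketch of the signed computation in C (names are
hypothetical):

  #include <time.h>

  /* start_time may lie in the future (SET @@timestamp, replicated
     events), so compute the elapsed time as a signed value instead of
     letting the subtraction wrap to a huge unsigned number. */
  long long processlist_time(time_t now, time_t start_time)
  {
    return (long long) now - (long long) start_time;   /* may be < 0 */
  }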
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there's an error processing the statement)
the print() can be called with no corresponding fix_fields() call.
Fixed by adding a check if the Item is fixed before using the arguments
copy.
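A minimal sketch of the guard in C, with hypothetical names (in the
server the flag corresponds to Item::fixed and the copy to the
aggregate's saved argument list):

  struct item;
  struct string_buf;
  void print_args(struct item *const *args, int n, struct string_buf *out);

  struct item_sum {
    int fixed;                     /* set once fix_fields() has run */
    int arg_count;
    struct item **args;            /* raw parse-time arguments */
    struct item **orig_args;       /* copy made during fix_fields() */
  };

  /* print() may run for a statement that failed before fix_fields();
     in that case orig_args is still uninitialized, so fall back to
     the raw argument list. */
  void aggregate_print(const struct item_sum *item, struct string_buf *out)
  {
    struct item *const *args = item->fixed ? item->orig_args : item->args;
    print_args(args, item->arg_count, out);
  }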
The reference manual has instructions for adding new character
sets, and refers to the string/CHARSET_INFO.txt file. This file
is currently not present in the distribution.
Modify the build to include this file in the distribution.
--ignore-table option
mysqldump would correctly omit temporary tables for views, but would
incorrectly still emit all CREATE VIEW statements.
Backported a fix from 5.1: in one pass we capture the names of the
views we want to emit (via their placeholder tables), and in the pass
that actually emits the views we skip any view that is not in that
list.
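A minimal sketch of the two-pass filtering in C, with hypothetical
names and a fixed-size list for brevity:

  #include <string.h>

  #define MAX_VIEWS 1024
  static const char *views_to_emit[MAX_VIEWS];
  static int n_views;

  /* Pass 1: while dumping the placeholder tables, record each view
     that survived --ignore-table filtering. */
  void remember_view(const char *name)
  {
    if (n_views < MAX_VIEWS)
      views_to_emit[n_views++] = name;
  }

  /* Pass 2: emit CREATE VIEW only for recorded names. */
  int should_emit_view(const char *name)
  {
    int i;
    for (i = 0; i < n_views; i++)
      if (!strcmp(views_to_emit[i], name))
        return 1;
    return 0;
  }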
Took the Xfree implementation (based on the same rewrite as the NDB one)
and added it instead of the current implementation.
Added a macro to make the calls to MD5 more streamlined.
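A minimal sketch of such a convenience macro in C; the context type
and the init/update/final names are hypothetical:

  typedef struct { unsigned char opaque[128]; } my_MD5_CTX;  /* placeholder */
  void my_MD5Init(my_MD5_CTX *ctx);
  void my_MD5Update(my_MD5_CTX *ctx, const unsigned char *buf, unsigned len);
  void my_MD5Final(unsigned char digest[16], my_MD5_CTX *ctx);

  /* One-shot hash of a single buffer: hides the usual
     init / update / final sequence behind one call. */
  #define MY_MD5_HASH(digest, buf, len)   \
    do {                                  \
      my_MD5_CTX ctx;                     \
      my_MD5Init(&ctx);                   \
      my_MD5Update(&ctx, (buf), (len));   \
      my_MD5Final((digest), &ctx);        \
    } while (0)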