~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
that batch_readline() always returned the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
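A minimal sketch of the idea, using hypothetical names rather than the actual client code: the reader reports truncation through an out-flag, and the caller appends the newline only when the line was read completely.

  #include <cstring>
  #include <string>

  // Stand-in for the input buffer limit described above.
  static const std::size_t MAX_LINE_BUFFER = 1024 * 1024;   // 1 MB

  // Hypothetical reader: copies at most MAX_LINE_BUFFER bytes of the current
  // line into 'out' and sets *truncated when the line did not fit; returns a
  // pointer to where the next call should continue.
  const char *read_line(const char *input, std::string &out, bool *truncated) {
    const char *nl = std::strchr(input, '\n');
    std::size_t len = nl ? (std::size_t)(nl - input) : std::strlen(input);
    *truncated = len > MAX_LINE_BUFFER;
    if (*truncated) {
      out.assign(input, MAX_LINE_BUFFER);
      return input + MAX_LINE_BUFFER;   // rest of the line comes on the next call
    }
    out.assign(input, len);
    return nl ? nl + 1 : input + len;
  }

  // Caller side, mirroring the fix: append '\n' only when the whole line was
  // read, so an over-long line is reassembled without spurious newlines.
  void append_line(const char *input, std::string &statement) {
    bool truncated = false;
    std::string line;
    read_line(input, line, &truncated);
    statement += line;
    if (!truncated)
      statement += '\n';
  }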
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. This does
not actually make any sense, as those variables are not related.
The input buffer size limit is now always set to 1 MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
Any statement reading a corrupt archive data file
(CHECK/REPAIR/SELECT/UPDATE/DELETE) could cause an assertion
failure in debug builds. This assertion has been removed
and an error is returned instead.
Also fixed CHECK/REPAIR returning a vague error message
when it meets corruption in the archive data file; this is
fixed by returning a proper error code.
become negative
- merged the fix to 5.1
- extended to cover I_S.PROCESSLIST.TIME
- Changed the column type of I_S.PROCESSLIST.TIME from LONGLONG UNSIGNED
to LONG (to match the SHOW PROCESSLIST type)
- Added a test case
"Release_lock("hello")" is now also successful when delivering NULL, replaced two sleeps by wait_condition. The last two "sleep 1" have not been replaced as all tried wait conditions leaded to nondeterministic results, especially to succeeding concurrent updates. To replace the sleeps there should be some time planned (or internal knowledge of the server may help).
If a sys-var had a base and a block-size > 1, and a
user-supplied value >= the minimum ended up below the minimum
due to block-size alignment, we threw a warning.
This meant, for instance, that getting and then setting back
the minimum would produce a warning. This was needlessly
confusing. (updated patch)
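A small illustration of the confusing case, with made-up numbers and a made-up rounding helper rather than the server's actual sys-var code: rounding down to a multiple of the block size can push a value that was at or above the minimum back below it.

  #include <cstdio>

  // Hypothetical bounds, for the example only.
  static const unsigned long long MIN_VALUE  = 400;
  static const unsigned long long BLOCK_SIZE = 1024;

  // Round down to a multiple of the block size, as block-size alignment does.
  unsigned long long align_down(unsigned long long value) {
    return (value / BLOCK_SIZE) * BLOCK_SIZE;
  }

  int main() {
    // The user reads the documented minimum and sets it back unchanged.
    unsigned long long requested = MIN_VALUE;
    unsigned long long aligned   = align_down(requested);   // 0, below the minimum
    if (aligned < MIN_VALUE)
      std::printf("value %llu aligned to %llu, below minimum %llu\n",
                  requested, aligned, MIN_VALUE);
    // Warning about this is needlessly confusing: the user supplied a legal value.
    return 0;
  }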
The problem occurs because we write wrong start and stop positions of the query string into the binlog.
These two values are stored as part of the header info of the query string.
When we parse the binlog, we first read the position values and then extract the query string according to them.
But the two values are not calculated correctly after Yacc parsing.
We don't want to touch too much of the Yacc grammar because it may influence other code.
So we just add one space after the 'INTO' keyword when parsing.
This easily resolves the problem.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the singular case
where all the predicates at the AND level of the WHERE clause are eliminated.
That could lead to wrong results.
Fix: replace (a=a AND a=a...) with TRUE if all the predicates were
eliminated.
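A minimal sketch of the rule with a toy condition folder, not the optimizer's actual Item tree: when every conjunct at an AND level is eliminated as trivially true, the level must fold to TRUE rather than to an empty condition that later reads as false.

  #include <vector>

  // Toy classification of predicates: trivially true (e.g. col=col on a
  // NOT NULL column), known false, or kept for evaluation.
  enum Pred { TRIVIALLY_TRUE, KNOWN_FALSE, KEEP };

  // Fold one AND level. The important case is the final return: when all
  // predicates were eliminated, the level folds to TRUE, not to "empty".
  Pred fold_and_level(const std::vector<Pred> &preds) {
    bool any_kept = false;
    for (Pred p : preds) {
      if (p == KNOWN_FALSE) return KNOWN_FALSE;   // the whole AND level is false
      if (p == KEEP) any_kept = true;
    }
    return any_kept ? KEEP : TRIVIALLY_TRUE;      // empty level => TRUE
  }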
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished same actions via SQL / use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.
The method to purge binary log files produces different results on some platforms.
The reason is that the purge time is calculated based on the table's modification time,
which cannot guarantee that master-bin.000002 is purged on all platforms (e.g. Windows).
Use a new approach that sets the purge time to 1 second after the last
modification time of master-bin.000002.
This ensures the file is always deleted on any platform.
Bug #43203 Overflow from auto incrementing causes server segv
Detailed revision comments:
r4325 | sunny | 2009-03-02 02:28:52 +0200 (Mon, 02 Mar 2009) | 10 lines
branches/5.1: Bug#43203: Overflow from auto incrementing causes server segv
It was not a SIGSEGV but an assertion failure. The assertion was checking
the invariant that *first_value passed in by MySQL doesn't contain a value
that is greater than the max value for that type. The assertion has been
changed to a check and if the value is greater than the max we report a
generic AUTOINC failure.
rb://93
Approved by Heikki
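A minimal sketch of the change as described, using stand-in names rather than InnoDB's actual handler code: the debug-build assertion on the value passed in by MySQL becomes a runtime check that reports a generic auto-increment failure.

  // Stand-in result codes, for the sketch only.
  enum autoinc_result { AUTOINC_OK, AUTOINC_FAILURE };

  // col_max_value is the maximum value representable by the column type.
  autoinc_result check_first_value(unsigned long long first_value,
                                   unsigned long long col_max_value) {
    // Before the fix (debug builds): assert(first_value <= col_max_value);
    // After the fix: report a generic AUTOINC failure instead of asserting.
    if (first_value > col_max_value)
      return AUTOINC_FAILURE;
    return AUTOINC_OK;
  }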
Bug #42714 AUTO_INCREMENT errors in 5.1.31
Detailed revision comments:
r4287 | sunny | 2009-02-25 05:32:01 +0200 (Wed, 25 Feb 2009) | 10 lines
branches/5.1: Fix Bug#42714 AUTO_INCREMENT errors in 5.1.31. There are two
changes to the autoinc handling.
1. To fix the immediate problem from the bug report, we must ensure that the
value written to the table is always less than the max value stored in
dict_table_t.
2. The second related change is that according to MySQL documentation when
the offset is greater than the increment, we should ignore the offset.
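A sketch of the documented rule with an illustrative helper, not InnoDB's actual implementation: when the offset is greater than the increment it is ignored, and the next value is the smallest value of the form offset + k * increment above the current maximum.

  // Next auto-increment value above 'current' for the given increment and
  // offset (increment is assumed to be >= 1); purely illustrative.
  unsigned long long next_autoinc(unsigned long long current,
                                  unsigned long long increment,
                                  unsigned long long offset) {
    if (offset > increment)
      offset = 1;                       // documented: an over-large offset is ignored
    if (current < offset)
      return offset;
    // Smallest value greater than 'current' of the form offset + k * increment.
    unsigned long long k = (current - offset) / increment + 1;
    return offset + k * increment;
  }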
Bug #42400 InnoDB autoinc code can't handle floating-point columns
Detailed revision comments:
r4065 | sunny | 2009-01-29 16:01:36 +0200 (Thu, 29 Jan 2009) | 8 lines
branches/5.1: In the last round of AUTOINC cleanup we assumed that AUTOINC
is only defined for integer columns. This caused an assertion failure when
we checked for the maximum value of a column type. We now calculate the
max value for floating-point autoinc columns too.
Fix Bug#42400 - InnoDB autoinc code can't handle floating-point columns
rb://84 and Mantis issue://162
r4111 | sunny | 2009-02-03 22:06:52 +0200 (Tue, 03 Feb 2009) | 2 lines
branches/5.1: Add the ULL suffix otherwise there is an overflow.
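The ULL point is a generic C/C++ pitfall; this small example, which is not the actual InnoDB constant, shows the overflow the suffix avoids: without it the arithmetic is done in a 32-bit type and wraps before being widened to 64 bits.

  #include <cstdio>

  int main() {
    // Evaluated as 'unsigned int': 0xFFFFFFFF + 1 wraps to 0 before the
    // assignment widens the result to 64 bits.
    unsigned long long wrong = 0xFFFFFFFF + 1;
    // With the ULL suffix the whole expression is evaluated in 64 bits.
    unsigned long long right = 0xFFFFFFFFULL + 1;
    std::printf("%llu %llu\n", wrong, right);   // prints: 0 4294967296
    return 0;
  }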
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
The problem is that creating an event could fail if the value of
the variable server_id didn't fit in the originator column of
the event system table. The cause is two-fold: it was possible
to set server_id to a value outside the documented range (from
0 to 2^32-1) and the originator column of the event table didn't
have enough room for values in this range.
The log tables (general_log and slow_log) also don't have a proper
column type to store the server_id and having a large server_id
value could prevent queries from being logged.
The solution is to ensure that all system tables that store the
server_id value have a proper column type (int unsigned) and that
the variable can't be set to a value that is not within the range.
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there's an error processing the statement)
print() can be called with no corresponding fix_fields() call.
Fixed by adding a check of whether the Item is fixed before using the
arguments copy.
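A minimal sketch of the pattern described above, with illustrative types rather than MySQL's actual Item classes: print() uses the copy of the arguments only when fix_fields() has run and marked the item as fixed, and falls back to the original arguments otherwise.

  #include <cstdio>
  #include <string>
  #include <vector>

  // Illustrative stand-in for an aggregate item.
  struct AggregateItem {
    std::vector<std::string> args;        // original arguments
    std::vector<std::string> orig_args;   // copy, valid only after fix_fields()
    bool fixed = false;

    void fix_fields() {                   // initializes the copy, marks the item
      orig_args = args;
      fixed = true;
    }

    void print() const {
      // The fix: only use the copy when the item is fixed; otherwise use the
      // live arguments instead of reading an uninitialized copy.
      const std::vector<std::string> &use = fixed ? orig_args : args;
      for (const std::string &a : use)
        std::printf("%s ", a.c_str());
      std::printf("\n");
    }
  };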