Don't throw an error after checking the first and second arguments.
Continue checking the third and subsequent arguments, and if any of
them is stronger according to the coercibility rules, set that
argument's collation as the result collation.
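A minimal sketch of the intended behavior (table and column names are
invented for illustration):

  CREATE TABLE t1 (a VARCHAR(10) COLLATE latin1_swedish_ci,
                   b VARCHAR(10) COLLATE latin1_bin);
  -- a and b conflict at the same coercibility level; previously the
  -- aggregation gave up after comparing them. Now the third argument,
  -- with an explicit (stronger) collation, determines the result:
  SELECT CONCAT(a, b, _latin1 'x' COLLATE latin1_general_ci) FROM t1;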
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
this option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, this can be considered similar
to the MERGE engine with its "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
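For illustration, a hedged sketch of such a table (connection details
invented); after this fix mysqldump emits only the CREATE TABLE for
it, with no INSERT statements:

  CREATE TABLE fed_t (id INT, msg VARCHAR(32))
    ENGINE=FEDERATED
    CONNECTION='mysql://user:pass@remote_host:3306/remote_db/real_t';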
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
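A hedged repro sketch of the original symptom (sizes chosen to exceed
the 1 MB input buffer; assumes max_allowed_packet permits the value):

  CREATE TABLE t1 (a LONGTEXT);
  INSERT INTO t1 VALUES (REPEAT('x', 2 * 1024 * 1024));
  -- Dumping this table and feeding the dump back through the mysql
  -- client used to inject stray newline characters at each buffer
  -- boundary; after a correct round trip the length is unchanged:
  SELECT LENGTH(a) FROM t1;  -- expect 2097152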
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
depending on the client's max_allowed_packet value. That does not
actually make any sense, as those variables are unrelated. The
input buffer size limit is now always set to 1 MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
The mysql.procs_priv table itself does not get replicated.
Inserting a routine privilege record into the mysql.procs_priv
table is triggered by CREATE FUNCTION/PROCEDURE statements,
according to the current user's privileges.
Because the SQL thread's current user has GLOBAL_ACL, no
mysql.procs_priv privilege check is needed when creating, altering
or executing routines. Consequently, a user with the GLOBAL_ACL
privilege doesn't insert a routine privilege record into
mysql.procs_priv when creating a routine.
Fixed by switching the current user of the SQL thread to the
definer user if the definer user exists on the slave; that
populates procs_priv. Otherwise the SQL thread user is kept and
procs_priv remains unchanged.
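A hedged illustration (user and routine names invented): when the
slave's SQL thread applies this statement as the definer, the grant
row shows up in mysql.procs_priv on the slave too:

  CREATE DEFINER = 'app_user'@'localhost' PROCEDURE p1()
    SELECT 1;
  SELECT User, Db, Routine_name, Proc_priv
    FROM mysql.procs_priv
   WHERE Routine_name = 'p1';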
Any statement reading a corrupt archive data file
(CHECK/REPAIR/SELECT/UPDATE/DELETE) may cause an assertion
failure in debug builds. This assertion has been removed
and an error is returned instead.
Also fixed the vague error message that CHECK/REPAIR returned
when they met corruption in the archive data file, by
returning a proper error code.
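A sketch of the affected statements, assuming the ARCHIVE data file
(t1.ARZ) has been corrupted externally:

  CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
  -- ... t1.ARZ gets corrupted on disk ...
  CHECK TABLE t1;    -- now reports corruption with a proper error code
  REPAIR TABLE t1;   -- likewise, instead of a debug assertion
  SELECT * FROM t1;  -- returns an error rather than asserting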
become negative
- merged the fix to 5.1
- extended to cover I_S.PROCESSLIST.TIME
- Changed the column type of I_S.PROCESSLIST.TIME from LONGLONG
UNSIGNED to LONG (to match the SHOW PROCESSLIST type)
- Added a test case
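A quick way to compare the two, as a hedged sketch:

  -- Both should now report TIME with the same signed semantics, so a
  -- negative TIME (e.g. on a replication SQL thread with clock skew)
  -- no longer shows up as a huge unsigned number in I_S:
  SELECT ID, TIME FROM INFORMATION_SCHEMA.PROCESSLIST;
  SHOW PROCESSLIST;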
"Release_lock("hello")" is now also successful when delivering NULL, replaced two sleeps by wait_condition. The last two "sleep 1" have not been replaced as all tried wait conditions leaded to nondeterministic results, especially to succeeding concurrent updates. To replace the sleeps there should be some time planned (or internal knowledge of the server may help).
If a sys-var has a base and a block-size > 1, and a user-supplied
value >= minimum ended up below the minimum due to block-size
alignment, we threw a warning. This meant, for instance, that
getting and then setting the minimum would produce a warning.
This was needlessly confusing. (updated patch)
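A hedged sketch with a hypothetical variable (say minimum 400,
block-size 1024):

  SELECT @@GLOBAL.some_sysvar;   -- hypothetical; returns the minimum, 400
  SET GLOBAL some_sysvar = 400;  -- alignment rounds 400 below the minimum,
                                 -- which is then clamped back to 400;
                                 -- this no longer raises a warning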
The problem occurs because we store the wrong start and stop
positions of the query string in the binlog. Those two values are
stored as part of the query string's header info. When we parse the
binlog, we first read the position values, then extract the query
string according to them. But the two values are not calculated
correctly after Yacc parsing. We don't want to touch too much of
the Yacc grammar because that may affect other code, so we just add
one space after the 'INTO' keyword when parsing. This easily
resolves the problem.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the singular case
when all the predicates at the AND level of WHERE are eliminated.
That may lead to wrong results.
Fix: replace (a=a AND a=a ...) with TRUE if all the predicates have
been eliminated.
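A hedged repro in the spirit of the bug title (table contents
invented):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  -- All predicates at the AND level are tautologies and get
  -- eliminated; with the fix the branch is folded to TRUE, so both
  -- rows come back:
  SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 0;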
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished the same actions via SQL /
use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.
The method to purge binary log files produces different results on
some platforms. The reason is that the purge time is calculated from
the table's modified time, which can't guarantee that
master-bin.000002 is purged on all platforms (e.g. Windows).
Use a new approach that sets the time to purge the binlog file to
1 second after the last modified time of master-bin.000002.
That ensures the file is always deleted on any platform.
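In the test this boils down to something like the following, where
the timestamp is computed as the file's last-modified time plus one
second (the value shown is a placeholder):

  PURGE BINARY LOGS BEFORE '2009-01-01 00:00:01';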
Bug #43203 Overflow from auto incrementing causes server segv
Detailed revision comments:
r4325 | sunny | 2009-03-02 02:28:52 +0200 (Mon, 02 Mar 2009) | 10 lines
branches/5.1: Bug#43203: Overflow from auto incrementing causes server segv
It was not a SIGSEGV but an assertion failure. The assertion was checking
the invariant that *first_value passed in by MySQL doesn't contain a value
that is greater than the max value for that type. The assertion has been
changed to a check and if the value is greater than the max we report a
generic AUTOINC failure.
rb://93
Approved by Heikki
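A hedged repro sketch of the overflow path (column type chosen for a
small maximum):

  CREATE TABLE t1 (c TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY)
    ENGINE=InnoDB;
  INSERT INTO t1 VALUES (127);   -- TINYINT maximum
  INSERT INTO t1 VALUES (NULL);  -- next value exceeds the max: now a
                                 -- generic AUTOINC error, not an
                                 -- assertion failure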
Bug #42714 AUTO_INCREMENT errors in 5.1.31
Detailed revision comments:
r4287 | sunny | 2009-02-25 05:32:01 +0200 (Wed, 25 Feb 2009) | 10 lines
branches/5.1: Fix Bug#42714 AUTO_INCREMENT errors in 5.1.31. There are two
changes to the autoinc handling.
1. To fix the immediate problem from the bug report, we must ensure that the
value written to the table is always less than the max value stored in
dict_table_t.
2. The second related change is that, according to the MySQL
documentation, when the offset is greater than the increment, we
should ignore the offset.
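A hedged illustration of change 2:

  -- Per the MySQL documentation, when auto_increment_offset exceeds
  -- auto_increment_increment, the offset is ignored:
  SET SESSION auto_increment_increment = 5, auto_increment_offset = 7;
  CREATE TABLE t2 (c INT NOT NULL AUTO_INCREMENT PRIMARY KEY)
    ENGINE=InnoDB;
  INSERT INTO t2 VALUES (NULL), (NULL), (NULL);
  SELECT c FROM t2;  -- values advance by the increment alone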