functions
Unknown timezone specifications are properly rejected
by the server, but are copied into tz_storage before
rejection, and hence are retained until the end of the
server's life. With sufficiently large bogus timezone
specs, it is easy to exhaust system memory.
Allocation of memory for a copy of the timezone
name is now delayed until after its validity is verified,
at the cost of a memcpy of the timezone info. This
only happens once; future lookups will hit the cached
structure.
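A minimal sketch of the reordered allocation, with hypothetical
names standing in for the 5.0 timezone code (tz_cache plays the
role of tz_storage, validate_tz of the spec parser):

    #include <cstring>
    #include <map>
    #include <string>

    struct Tz_info { char data[64]; };               // parsed tz, illustrative
    static std::map<std::string, Tz_info> tz_cache;  // stands in for tz_storage

    // Hypothetical validator: parses the spec without allocating.
    static bool validate_tz(const char *name, Tz_info *out)
    {
      if (!name || !*name) return false;             // real checks elided
      std::memset(out, 0, sizeof *out);
      return true;
    }

    const Tz_info *find_or_load_tz(const char *name)
    {
      std::map<std::string, Tz_info>::iterator it = tz_cache.find(name);
      if (it != tz_cache.end())
        return &it->second;            // later lookups hit the cache

      Tz_info tmp;                     // stack scratch, gone on rejection
      if (!validate_tz(name, &tmp))
        return NULL;                   // bogus spec: nothing was retained

      // Only a verified name reaches long-lived storage; the extra
      // memcpy of the timezone info happens once per valid name.
      Tz_info &slot = tz_cache[std::string(name)];
      std::memcpy(&slot, &tmp, sizeof tmp);
      return &slot;
    }
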
Don't throw an error after checking the first and second
arguments. Continue checking the third and higher arguments,
and if one of them is stronger according to the coercibility
rules, use that argument's collation as the result collation.
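A hedged sketch of the resulting aggregation loop; DTCollation
and the derivation ordering only loosely mirror the server's
coercibility machinery:

    enum Derivation {
      DERIVATION_EXPLICIT = 0,   // strongest
      DERIVATION_NONE,
      DERIVATION_IMPLICIT,
      DERIVATION_COERCIBLE       // weakest
    };

    struct DTCollation {
      const char *collation;
      Derivation  derivation;    // lower value == stronger
    };

    // Keeps the stronger side in `acc`; returns true only on a genuine
    // conflict (elided here), not merely because two arguments differ.
    static bool aggregate(DTCollation &acc, const DTCollation &other)
    {
      if (other.derivation < acc.derivation)
        acc = other;
      return false;
    }

    static bool agg_arg_collations(DTCollation &result,
                                   const DTCollation *args, int nargs)
    {
      result = args[0];
      for (int i = 1; i < nargs; i++)  // keep going past the second argument
        if (aggregate(result, args[i]))
          return true;                 // real error: report it
      return false;
    }
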
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
an option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated on the bug page, this can be considered similar
to the MERGE engine with its "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
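A sketch of the exception-list check; the helper name is
hypothetical and mysqldump's real test is more involved:

    #include <strings.h>   // strcasecmp

    // Hypothetical helper: engines whose row data lives elsewhere get
    // only their DDL dumped. MERGE variants were already skipped; the
    // fix adds FEDERATED to the list.
    static bool ignore_table_data(const char *engine)
    {
      return strcasecmp(engine, "MRG_MyISAM") == 0 ||
             strcasecmp(engine, "MRG_ISAM")   == 0 ||
             strcasecmp(engine, "FEDERATED")  == 0;
    }
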
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
that batch_readline() always returned the whole input string
and appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller (see the sketch after
the list below).
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. This does
not actually make any sense, as those variables are not related.
The input buffer size limit is now always set to 1 MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
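A sketch of the extended batch_readline() interface under these
constraints; buffer handling is simplified and names are
illustrative:

    #include <cstdio>
    #include <cstring>

    static const size_t MAX_LINE = 1024 * 1024;  // fixed 1 MB limit, no longer
                                                 // derived from max_allowed_packet

    // Returns the next chunk of input in buf (sized MAX_LINE) and tells
    // the caller via *truncated whether it ended at the buffer limit
    // instead of a real newline, so that '\n' is only re-appended when
    // one was actually consumed.
    static char *batch_readline(FILE *in, char *buf, bool *truncated)
    {
      if (!fgets(buf, (int) MAX_LINE, in))
        return NULL;                            // EOF or read error
      size_t len = strlen(buf);
      if (len && buf[len - 1] == '\n')
      {
        buf[len - 1] = '\0';                    // strip the real newline
        *truncated = false;
      }
      else
        *truncated = !feof(in);                 // stopped at the limit mid-line
      return buf;
    }
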
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished the same actions via SQL / use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.
seems to become negative
THD::start_time has a dual meaning: it's either the time since
the process entered a given state or the transaction time
returned by e.g. NOW().
This causes problems, as sometimes THD::start_time may be set to a value
that is correct and needed when used as a base for NOW(), but these times
may be arbitrary (SET @@timestamp) or non-local (coming from the master
through the replication feed).
If one such non-local time is set there's no way to return a correct value
for e.g. SHOW PROCESSLIST or SELECT ... FROM INFORMATION_SCHEMA.PROCESSLIST.
Fixed by making the Time column in SHOW PROCESSLIST SIGNED LONG instead of
UNSIGNED LONG and doing the correct conversions.
Note that no reliable test can be constructed, since it would
require knowing the local time, which cannot be achieved by
means of the current test suite.
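A minimal sketch of why the signed conversion matters: with an
arbitrary @@timestamp or a master's clock, start_time can lie in
the future:

    #include <ctime>

    // Elapsed time for the PROCESSLIST Time column. Subtracting as
    // unsigned would wrap to a huge bogus value when start_time > now;
    // a signed result simply goes negative instead.
    long long processlist_time(time_t now, time_t start_time)
    {
      return (long long) now - (long long) start_time;
    }
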
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
--ignore-table option
mysqldump would correctly omit temporary tables for views, but would
incorrectly still emit all CREATE VIEW statements.
Backport a fix from 5.1: capture the names we want to emit
views for in one pass (via their placeholder tables), and in
the pass where we actually emit the views, skip any view that
wasn't in that list.
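A sketch of the two-pass shape, with illustrative names rather
than mysqldump's own:

    #include <set>
    #include <string>

    static std::set<std::string> views_to_emit;

    // Pass 1: while dumping tables, remember each view we want
    // (we saw its placeholder table and it passed --ignore-table).
    void record_view(const std::string &name)
    {
      views_to_emit.insert(name);
    }

    // Pass 2: emit CREATE VIEW only for recorded names, so ignored
    // views no longer leak back in.
    bool should_emit_view(const std::string &name)
    {
      return views_to_emit.count(name) != 0;
    }
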
Took the Xfree implementation (based on the same rewrite as the NDB one)
and used it to replace the current implementation.
Added a macro to make the calls to MD5 more streamlined.
The copy of the original arguments of an aggregate function was
not initialized until after fix_fields().
Sometimes (e.g. when there's an error processing the statement)
print() can be called with no corresponding fix_fields() call.
Fixed by adding a check whether the Item is fixed before using
the copy of the arguments.
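A sketch of the guard; the field names only loosely mirror
Item_sum's layout:

    struct Item;            // opaque here

    struct Item_sum_sketch {
      bool     fixed;       // set by fix_fields()
      unsigned arg_count;
      Item   **args;        // live arguments, always valid
      Item   **orig_args;   // copy made during fix_fields()

      // print() should read the copy only once it exists; before
      // fix_fields() (e.g. while printing a failing statement) fall
      // back to the live argument list.
      Item **args_for_print()
      {
        return fixed ? orig_args : args;
      }
    };
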
When the disk is full, the server may wait for free space while
writing the binlog, relay log, or MyISAM tables. The server
continues once the user has freed some space. But the error
message printed was not clear about how often it would be
repeated, and there is a delay between the user freeing space
and the server continuing, which caused users to think that the
server was hanging forever.
This patch fixes the problem by making the error messages
clearer. The error message is split into two parts: the first
part is printed only once, and the second part is printed every
10 retries.
Message first part:
Disk is full writing '<filename>' (Errcode: <errorno>). Waiting
for someone to free space... (Expect up to 60 secs delay for
server to continue after freeing disk space)
Message second part:
Retry in 60 secs, Message reprinted in 600 secs
This is a backport from 5.1 to 5.0.
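A sketch of the throttling shape this describes (one retry per
60 seconds, the second part on every 10th retry, hence the 600
seconds in the message); the function name and error handling
are illustrative:

    #include <cstdio>

    // Called once per failed write attempt; `errors` counts retries.
    void report_disk_full(const char *filename, int my_errno, int errors)
    {
      if (errors == 0)
        fprintf(stderr,
                "Disk is full writing '%s' (Errcode: %d). Waiting for someone"
                " to free space... (Expect up to 60 secs delay for server to"
                " continue after freeing disk space)\n",
                filename, my_errno);
      if (errors % 10 == 0)
        fprintf(stderr, "Retry in 60 secs, Message reprinted in 600 secs\n");
      // caller sleeps 60 s, retries the write, and increments `errors`
    }
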
Fix for BUG 20023: mysql_change_user() resets the value
of SQL_BIG_SELECTS.
The bug was that SQL_BIG_SELECTS was not properly set
in COM_CHANGE_USER.
The fix is to update SQL_BIG_SELECTS properly.
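A hedged sketch of the idea: on COM_CHANGE_USER the session
should re-derive SQL_BIG_SELECTS the same way it does on
connect; names follow the server only loosely:

    typedef unsigned long long ha_rows_t;                  // illustrative
    static const ha_rows_t HA_POS_ERROR_ = ~(ha_rows_t) 0; // "unlimited"

    struct Session_sketch {
      ha_rows_t max_join_size;
      bool      sql_big_selects;

      // Re-run on COM_CHANGE_USER as well as on connect: an unlimited
      // max_join_size implies SQL_BIG_SELECTS=1. Skipping this during
      // change_user was the bug.
      void derive_big_selects()
      {
        sql_big_selects = (max_join_size == HA_POS_ERROR_);
      }
    };
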
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a lot of other weaknesses found
+ modifications according to review
Bug #41571: MySQL segfaults after innodb recovery
This 5.0 fix will not be pushed into 5.1; a separate fix (from
innodb-5.1-ss4007) will be pushed into 5.1+.
Detailed revision comments:
r4003 | marko | 2009-01-20 16:12:50 +0200 (Tue, 20 Jan 2009) | 10 lines
branches/5.0: rec_set_nth_field(): When the field already is SQL null,
do nothing when it is being changed to SQL null. (Bug #41571)
Normally, MySQL does not pass "do-nothing" updates to the storage engine.
When it does and a column of an InnoDB table that is in ROW_FORMAT=COMPACT
is being updated from NULL to NULL, the InnoDB buffer pool will be corrupted
without this fix.
rb://81 approved by Heikki Tuuri
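A sketch of the guard's shape with simplified signatures; the
null-bit helpers are hypothetical stand-ins for the real record
accessors:

    static const unsigned UNIV_SQL_NULL_ = 0xFFFFFFFFu;  // "length is SQL NULL"

    // Hypothetical stand-ins for InnoDB's record null-bit accessors.
    bool rec_nth_field_is_null(const unsigned char *rec, unsigned n);
    void rec_mark_nth_field_null(unsigned char *rec, unsigned n);

    void rec_set_nth_field(unsigned char *rec, unsigned n,
                           const void *data, unsigned len)
    {
      if (len == UNIV_SQL_NULL_)
      {
        if (rec_nth_field_is_null(rec, n))
          return;                        // NULL -> NULL: do nothing (the fix)
        rec_mark_nth_field_null(rec, n); // shrink the record around field n
        return;
      }
      /* ... normal in-place update of a non-NULL field ... */
      (void) data;
    }
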
Bug #18828: If InnoDB runs out of undo slots, it returns misleading 'table is full'
This is a backport of code already in 5.1+. The error message change referred
to in the detailed revision comments is still pending.
Detailed revision comments:
r3937 | calvin | 2009-01-15 03:11:56 +0200 (Thu, 15 Jan 2009) | 17 lines
branches/5.0:
Backport the fix for Bug#18828. Return DB_TOO_MANY_CONCURRENT_TRXS
when we run out of UNDO slots in the rollback segment. The backport
is requested by MySQL under bug#41529 - Safe handling of InnoDB running
out of undo log slots.
This is a partial fix since the MySQL error code requested to properly
report the error condition back to the client has not yet materialized.
Currently we have #ifdef'd the error code translation in ha_innodb.cc.
This will have to be changed as and when MySQL adds the new requested
code or an equivalent code that we can then use.
Given the above, currently we will get the old behavior, not the
"fixed" and intended behavior.
Approved by: Heikki (on IM)
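A sketch of the #ifdef'd translation described above; the
numeric values are illustrative, and HA_ERR_TOO_MANY_CONCURRENT_TRXS
is the not-yet-existing MySQL code the comments refer to:

    enum {
      DB_TOO_MANY_CONCURRENT_TRXS_ = 47,   // illustrative InnoDB code
      HA_ERR_RECORD_FILE_FULL_     = 135   // the misleading "table is full"
    };

    int convert_error_code_to_mysql_sketch(int error)
    {
      if (error == DB_TOO_MANY_CONCURRENT_TRXS_)
      {
    #ifdef HA_ERR_TOO_MANY_CONCURRENT_TRXS   // pending on the MySQL side
        return HA_ERR_TOO_MANY_CONCURRENT_TRXS;
    #else
        return HA_ERR_RECORD_FILE_FULL_;     // old behavior until then
    #endif
      }
      /* ... other translations ... */
      return error;
    }
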
Bug #39939: DROP TABLE/DISCARD TABLESPACE takes long time in buf_LRU_invalidate_tablespace()
This was already fixed in 5.1+; this is a backport to 5.0.
Detailed revision comments:
r2743 | inaam | 2008-10-08 22:18:12 +0300 (Wed, 08 Oct 2008) | 13 lines
branches/5.0:
Backport of r2742 from branches/5.1:
Fix Bug#39939 DROP TABLE/DISCARD TABLESPACE takes long time in
buf_LRU_invalidate_tablespace()
Improve implementation of buf_LRU_invalidate_tablespace by attempting
hash index drop in batches instead of doing it one by one.
Reviewed by: Heikki, Sunny, Marko
Approved by: Heikki
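A sketch of the batching idea (types, names, and the batch size
are illustrative; the real code works on buffer pool blocks
under the LRU mutex):

    #include <cstddef>
    #include <vector>

    struct Page { unsigned space_id, page_no; };   // illustrative

    // Walk the LRU once, collecting pages of the dropped tablespace and
    // handing them to drop_batch() in groups, instead of taking the
    // one-page-at-a-time path for every match.
    void invalidate_tablespace_sketch(
        const std::vector<Page> &lru, unsigned space_id,
        void (*drop_batch)(const Page *const *pages, std::size_t n))
    {
      const std::size_t BATCH = 1024;              // batch size, illustrative
      std::vector<const Page *> batch;
      batch.reserve(BATCH);

      for (std::size_t i = 0; i < lru.size(); i++)
      {
        if (lru[i].space_id != space_id)
          continue;
        batch.push_back(&lru[i]);
        if (batch.size() == BATCH)
        {
          drop_batch(&batch[0], batch.size());     // one drop per batch
          batch.clear();
        }
      }
      if (!batch.empty())
        drop_batch(&batch[0], batch.size());       // final partial batch
    }
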
return no rows
The algorithm that determines the best key for loose index scan
loops over the available indexes and selects the one that has
the best cost.
It retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best
cost so far, it copies these variables into another set of
variables that holds the information for the best index so far.
After having checked all the indexes it uses these variables (outside of the
index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len
variables were used outside of the loop without being preserved
in the loop for the best index so far.
This causes these variables to get overwritten by the next index(es) checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them and copying
the new variables into the existing ones when selecting a new current best
index.
To avoid further such problems, the declarations of the
variables used to keep information about the current index were
moved inside the loop's compound statement.
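A sketch of the fix's shape with illustrative types; the point
is the loop-local scoping and the copy-on-new-best of the infix
data:

    #include <cstring>

    struct KeyScanInfo {                 // illustrative per-index data
      double        cost;
      unsigned char infix[16];           // key_infix
      unsigned      infix_len;           // key_infix_len, <= sizeof infix
    };

    // Everything probed for the *current* index is scoped to the loop
    // body and copied into the best_* set only when that index wins on
    // cost, so key_infix can no longer leak from a later, rejected index.
    int pick_best_index(const KeyScanInfo *keys, int n_keys,
                        unsigned char *best_infix, unsigned *best_infix_len)
    {
      int    best      = -1;
      double best_cost = 1e300;

      for (int i = 0; i < n_keys; i++)
      {
        unsigned char cur_infix[16];               // current-index variables
        unsigned      cur_infix_len = keys[i].infix_len;
        std::memcpy(cur_infix, keys[i].infix, cur_infix_len);

        if (keys[i].cost < best_cost)
        {
          best_cost = keys[i].cost;
          best      = i;
          std::memcpy(best_infix, cur_infix, cur_infix_len);  // keep the infix
          *best_infix_len = cur_infix_len;
        }
      }
      return best;
    }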