Bug #18828: If InnoDB runs out of undo slots, it returns misleading 'table is full'
This is a backport of code already in 5.1+. The error message change referred
to in the detailed revision comments is still pending.
Detailed revision comments:
r3937 | calvin | 2009-01-15 03:11:56 +0200 (Thu, 15 Jan 2009) | 17 lines
branches/5.0:
Backport the fix for Bug#18828. Return DB_TOO_MANY_CONCURRENT_TRXS
when we run out of UNDO slots in the rollback segment. The backport
is requested by MySQL under bug#41529 - Safe handling of InnoDB running
out of undo log slots.
This is a partial fix since the MySQL error code requested to properly
report the error condition back to the client has not yet materialized.
Currently we have #ifdef'd the error code translation in ha_innodb.cc.
This will have to be changed as and when MySQL adds the new requested
code or an equivalent code that we can then use.
Given the above, currently we will get the old behavior, not the
"fixed" and intended behavior.
Approved by: Heikki (on IM)
return no rows
The algorithm that determines the best key for a loose index scan loops
over the available indexes and selects the one with the best cost.
It retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best cost so far, it
copies these variables into another set of variables that contain the
information for the best index so far.
After having checked all the indexes it uses these variables (outside of the
index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were used
outside of the loop without being preserved in the loop for the best index
so far.
This caused these variables to be overwritten by the next index(es) checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them and copying
the new variables into the existing ones when selecting a new current best
index.
To avoid further such problems, the declarations of the variables that hold
information about the current index were moved inside the loop's compound
statement.
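For illustration, the corrected pattern looks roughly like the sketch below (hypothetical names and types, not the actual server code): every per-index value, the key infix included, lives inside the loop and is copied into the best-so-far set only when the candidate index wins on cost.

```cpp
#include <cstring>

struct BestIndex {
  unsigned key_nr;
  double cost;
  unsigned char key_infix[128];  // preserved infix bytes
  unsigned key_infix_len;
};

// Stub standing in for the real per-index cost/parameter computation.
double compute_cost(unsigned idx, unsigned char *infix, unsigned *infix_len) {
  (void)infix;
  *infix_len = 0;
  return (double)idx + 1.0;
}

void pick_best_index(unsigned n_indexes, BestIndex *best) {
  best->cost = 1e300;  // "no index chosen yet"
  for (unsigned idx = 0; idx < n_indexes; idx++) {
    // Current-index variables are declared inside the loop body, so
    // stale data cannot leak from one iteration into the next.
    unsigned char cur_key_infix[128];
    unsigned cur_key_infix_len;
    double cur_cost = compute_cost(idx, cur_key_infix, &cur_key_infix_len);

    if (cur_cost < best->cost) {
      // Copy *all* current-index state; leaving the infix out of this
      // copy is exactly the omission the fix addresses.
      best->key_nr = idx;
      best->cost = cur_cost;
      best->key_infix_len = cur_key_infix_len;
      memcpy(best->key_infix, cur_key_infix, cur_key_infix_len);
    }
  }
}
```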
*with --with-charset=utf8*
Problem: a wrong LONG TEXT field length is sent to the client
when a multibyte server character set is used.
Fix: always limit the field length sent to the client to 2^32,
as we store it in a 4-byte slot.
Note: mysql_client_test changed accordingly.
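A minimal sketch of the clamp (the function name is hypothetical; the 4-byte slot is the field-length slot mentioned above):

```cpp
#include <cstdint>

// With utf8 (mbmaxlen = 3), LONGTEXT's byte length scaled to a
// character-based display length overflows 32 bits, so the value must
// be capped to what the 4-byte slot can carry.
uint32_t client_field_length(uint64_t char_length, unsigned mbmaxlen) {
  uint64_t bytes = char_length * (uint64_t)mbmaxlen;
  if (bytes > 0xFFFFFFFFULL)
    bytes = 0xFFFFFFFFULL;  // largest value the 4-byte slot can hold
  return (uint32_t)bytes;
}
```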
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used
the val_xxx() methods, which return results for the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use val_xxx_result() counterparts to get proper results.
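A conceptual sketch of the difference, with a heavily simplified stand-in for Item (hypothetical members):

```cpp
// Under GROUP BY, the "current row" may already belong to the next
// group when the clause is evaluated, while the result buffer still
// holds the finished group's value.
struct ItemSketch {
  long long current_row_value;   // refreshed as rows stream in
  long long group_result_value;  // value for the completed group

  long long val_int() const { return current_row_value; }          // wrong for @var
  long long val_int_result() const { return group_result_value; }  // correct
};
```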
for bug #15936.
On some platforms fenv.h may #undef the min/max macros
defined in my_global.h.
Fixed by moving the #include directive for fenv.h from
mysqld.cc to my_global.h, before the definitions of min/max.
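A simplified stand-in for the relevant part of the header (the HAVE_FENV_H guard is an assumption):

```cpp
// Include fenv.h *before* defining min/max, so any #undef min/max
// inside fenv.h on affected platforms cannot strip the macros after
// we define them.
#ifdef HAVE_FENV_H
#include <fenv.h>  /* may #undef min/max on some platforms */
#endif

#ifndef min
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
#endif
```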
Bug#41112: crash in mysql_ha_close_table/get_lock_data with alter table
The problem is that the server wasn't robustly handling failures
to re-open a table during a HANDLER .. READ statement. If the
table needed to be re-opened due to its storage engine being
altered to one that doesn't support HANDLER, a reference (dangling
pointer) to a closed table could be left in place and accessed in
later attempts to fetch from the table using the handler. Also,
the server failed to set an error message when the re-open
failed. These problems could lead to server crashes or hangs.
The solution is to remove any references to a closed table and
to set an error if reopening a table during a HANDLER .. READ
statement fails.
There is no test case in this change set as the test depends on
a testing feature only available on 5.1 and later.
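A simplified sketch of the two-part fix (hypothetical types and names, not the server's real handler code):

```cpp
#include <cstdio>

struct HandlerState { void *table = nullptr; };

static bool try_reopen(HandlerState *) { return false; }  // stub: re-open fails

bool handler_read_fetch(HandlerState *h) {
  if (h->table == nullptr && !try_reopen(h)) {
    h->table = nullptr;  // leave no dangling reference behind
    std::fprintf(stderr, "ERROR: table no longer usable with HANDLER\n");
    return true;         // error set; the statement fails cleanly
  }
  /* ... fetch from h->table ... */
  return false;
}
```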
Both of our own implementations of rint(3) were inconsistent with the
most common behavior of rint() on those platforms that have it: round
to nearest, break ties by rounding to nearest even.
Fixed by leaving just one implementation of rint() in our source tree,
and changing its behavior to match the most common native
implementations on other platforms.
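A sketch of a fallback with the intended round-half-to-even semantics (unlike native rint(), this ignores the current FP rounding mode, and it is not the exact implementation that was kept in the tree):

```cpp
#include <cmath>

double my_rint(double x) {
  double f = std::floor(x);
  double diff = x - f;
  if (diff > 0.5) return f + 1.0;
  if (diff < 0.5) return f;
  // Exactly halfway: choose the even neighbor, e.g. 2.5 -> 2, 3.5 -> 4.
  return (std::fmod(f, 2.0) == 0.0) ? f : f + 1.0;
}
```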
A signed integer format specifier caused the binlog header to print server_id
as negative when the unsigned value has the sign bit set.
Fixed by correcting the specifier to correspond to typeof(server_id) == ulong.
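A minimal illustration of the mismatch:

```cpp
#include <cstdio>

int main() {
  unsigned long server_id = 4294967295UL;  // 32-bit sign bit is ON
  // Signed specifier misinterprets the value:
  std::printf("server id %d\n", (int)server_id);  // prints -1
  // Specifier matching typeof(server_id) == ulong:
  std::printf("server id %lu\n", server_id);      // prints 4294967295
  return 0;
}
```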
connections
The problem is that tables can enter the open table cache for a thread without
being properly cleaned up. This can happen if make_join_statistics() fails
to read a const table because of e.g. a deadlock. It does set a member of the
TABLE structure to a value it allocates, but doesn't clean up this setting
on error, nor does it set the rest of the members in JOIN to allow for
automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open table cache, it gets the table with
TABLE::reginfo.join_tab pointing to a memory area that has been freed.
Fixed by making sure make_join_statistics() cleans up TABLE::reginfo.join_tab
on error.
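A simplified sketch of the error-path cleanup (hypothetical types): the TABLE object outlives the statement in the open table cache, so the pointer published into it must be cleared before its target is freed.

```cpp
struct JoinTab { int dummy; };
struct Table { JoinTab *join_tab = nullptr; };

static bool read_const_table(Table *) { return true; }  // stub: false = e.g. deadlock

bool make_join_statistics_sketch(Table *t, JoinTab *jt) {
  t->join_tab = jt;          // published into the cached TABLE
  if (!read_const_table(t)) {
    t->join_tab = nullptr;   // the fix: no dangling pointer survives
    return true;             // error
  }
  return false;              // success
}
```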
In the case of a ROW item, each compared pair did not
check whether argument collations could be aggregated, and
thus the appropriate item conversion did not happen.
The fix is to add the check and conversion for ROW
pairs.
returns short string value.
Multibyte character sets were not taken into account when
calculating max_length in Item_param::convert_str_value(). As a
result, string parameters of a prepared statement could be
truncated later when calculating string length in characters by
dividing length in bytes by the charset's mbmaxlen value (e.g. in
Field_varstring::store()).
Fixed by taking charset's mbmaxlen into account when calculating
max_length in Item_param::convert_str_value().
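A sketch of the corrected computation (names simplified): a parameter holding N characters needs up to N * mbmaxlen bytes, so if max_length stays byte-blind, the later bytes / mbmaxlen division under-counts the characters and the value is truncated.

```cpp
unsigned param_max_length(unsigned char_count, unsigned mbmaxlen) {
  return char_count * mbmaxlen;  // maximum size in bytes, not characters
}
```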
Additional fix:
1. Reverted the unification of DROP FUNCTION
and DROP PROCEDURE, because DROP FUNCTION can be used to
drop UDFs (which have a non-qualified name and don't require
the database name to be present and valid).
2. Fixed the case sensitivity problem by adding a call to
check_db_name() (similar to the sp_name production).
The MATCH() function accepts a column list as an argument. It was possible to
bypass this requirement with an aliased non-column select expression, which
resulted in a server crash.
With this fix, aliased non-column select expressions are not accepted by the
MATCH() function; an error is returned instead.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
date_format functions
String::realloc() did not check whether the existing string data fits in
the newly allocated buffer for cases when reallocating a String object
with external buffer (i.e. alloced == FALSE). This could lead to memory
overruns in some cases.
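A simplified sketch of the fix (not the real String class): when the buffer is external (alloced == false), the copy of the old contents must be bounded by the new buffer's size as well as by str_length.

```cpp
#include <cstdlib>
#include <cstring>

struct SimpleString {
  char *ptr;
  size_t str_length;      // bytes currently in use
  size_t alloced_length;  // capacity of ptr
  bool alloced;           // false: ptr is a borrowed, external buffer

  // Returns true on failure.
  bool realloc_buf(size_t new_size) {
    if (new_size <= alloced_length)
      return false;                          // existing buffer suffices
    char *new_ptr = (char *)malloc(new_size + 1);
    if (new_ptr == nullptr)
      return true;                           // out of memory
    size_t copy_len = str_length < new_size ? str_length : new_size;
    memcpy(new_ptr, ptr, copy_len);          // bounded copy: no overrun
    if (alloced)
      free(ptr);                             // only free what we own
    ptr = new_ptr;
    alloced = true;
    alloced_length = new_size;
    return false;
  }
};
```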
The parser was not using the correct fully-qualified-name
production for DROP FUNCTION.
Fixed by copying the production from DROP PROCEDURE.
Tested in the Windows-specific suite to make sure it's
tested on a case-insensitive file system.
In the fix for bug 37553 we declared longlong results for
class Item_str_timefunc as per comments/docs,
but didn't add a method for that, and the
default just wasn't good enough for some
cases.
This changeset adds a dedicated val_int() to the class.
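An illustration (hypothetical helper, not the actual Item_str_timefunc::val_int() body) of why the default wasn't good enough:

```cpp
// Generic string-to-integer conversion of a TIME-valued string like
// "12:34:56" stops at the first ':' and yields 12; a dedicated
// val_int() should yield 123456.
long long time_string_to_longlong(const char *s) {
  long long result = 0;
  for (; *s != '\0'; s++)
    if (*s >= '0' && *s <= '9')
      result = result * 10 + (*s - '0');
  return result;  // "12:34:56" -> 123456
}
```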
ORDER BY could cause a server crash
Dependent subqueries like
SELECT COUNT(*) FROM t1, t2 WHERE t2.b
IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the
number of outer rows.
The make_simple_join() function has been converted into a
JOIN class method that stores the join_tab_reexec and
table_reexec values in the parent join only
(make_simple_join() of tmp_join may access these values
via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is
an "out of memory" bug). See the bug #42037 page for test cases.
Problem: some queries using NAME_CONST(.. COLLATE ...)
led to a server crash due to a failed type cast.
Fix: return the underlying item's type in case of
NAME_CONST(.. COLLATE ...) to avoid wrong casting.
When storing a NULL to a TIMESTAMP NOT NULL DEFAULT ...,
NULL returned from some functions threw a 'cannot be NULL' error.
NULL-returns now correctly result in the timestamp-field being
assigned its default value.
If the system time is adjusted back during a query execution
(resulting in the end time being earlier than the start time)
the code that prints to the slow query log gets confused and
prints negative execution times as huge unsigned numbers.
Fixed by not logging the statements that would have negative
execution time due to time shifts.
No test case since this would involve changing the system time.
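A sketch of the guard (hypothetical names): the timestamps are unsigned, so end < start would wrap the subtraction to a huge positive value.

```cpp
#include <cstdint>

// Stub standing in for the real slow-log writer.
static void log_slow(uint64_t duration_usec) { (void)duration_usec; }

void maybe_log_slow_query(uint64_t start_usec, uint64_t end_usec) {
  if (end_usec < start_usec)
    return;  // system clock stepped back: skip rather than log garbage
  log_slow(end_usec - start_usec);  // subtraction can no longer wrap
}
```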
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery from the DUAL pseudo-table, the resulting HAVING
condition is an expression on constant values, so a further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING,
and that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
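A conceptual sketch of the corrected check (hypothetical simplified types, not the real JOIN::exec code):

```cpp
enum cond_value { COND_UNDEF, COND_TRUE, COND_FALSE };

struct Item { bool value; };                                 // stand-in
static bool eval_having(const Item *h) { return h->value; }  // stub

bool row_passes_having(const Item *having, cond_value having_value) {
  if (having == nullptr)
    // The clause was folded away by optimize_cond(); consult the
    // recorded value instead of treating NULL as "always true".
    return having_value != COND_FALSE;
  return eval_having(having);
}
```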
Various parts of code used different 'precision' arguments for sprintf("%g") when converting
floating point numbers to a string. This led to differences in results in some cases,
depending on whether the text-based or the prepared-statement protocol was used for a query.
Fixed by changing arguments to sprintf("%g") to always be 15 (DBL_DIG) so that results are
consistent regardless of the protocol.
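A minimal illustration of the inconsistency:

```cpp
#include <cfloat>
#include <cstdio>

int main() {
  double v = 1.0 / 3.0;
  std::printf("%.6g\n", v);           // "0.333333"  (one caller's choice)
  std::printf("%.12g\n", v);          // "0.333333333333" (another's)
  std::printf("%.*g\n", DBL_DIG, v);  // "0.333333333333333" (DBL_DIG == 15)
  return 0;
}
```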
This patch will be null-merged to 6.0, as the problem does not exist there (it was fixed by the
patch for WL#2934).