Apply the following InnoDB snapshots:
innodb-5.0-ss1319
innodb-5.0-ss1331
innodb-5.0-ss1333
innodb-5.0-ss1341
Fixes:
- Bug #21409: Incorrect result returned when in READ-COMMITTED with query_cache ON
At low transaction isolation levels we let each consistent read set
its own snapshot.
- Bug #23666: strange Innodb_row_lock_time_% values in show status; also millisecs wrong
On Windows, ut_usectime returns seconds and microseconds relative to
the UNIX epoch (January 1, 1970).
- Bug #25494: LATEST DEADLOCK INFORMATION is not always cleared
lock_deadlock_recursive(): When the search depth or length is exceeded,
rewind lock_latest_err_file and display the two transactions at the
point of aborting the search.
- Bug #25927: Foreign key with ON DELETE SET NULL on NOT NULL can crash server
Prevent ALTER TABLE ... MODIFY ... NOT NULL on columns for which
there is a foreign key constraint ON ... SET NULL (an example
follows this list).
- Bug #26835: Repeatable corruption of utf8-enabled tables inside InnoDB
The bug could be reproduced as follows:
Define a table so that the first column of the clustered index is
a VARCHAR or a UTF-8 CHAR in a collation where sequences of bytes
of differing length are considered equivalent.
Insert and delete a record. Before the delete-marked record is
purged, insert another record whose first column is of different
length but equivalent to the first record. Under certain conditions,
the insertion can be incorrectly performed as update-in-place.
Likewise, an operation that could be done as update-in-place could
unnecessarily be performed as delete-and-insert; that would not
cause corruption, merely degraded performance. (A reproduction
sketch follows this list.)
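For Bug #25927, a minimal illustration of the now-rejected statement
(the table and column names here are hypothetical):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    id INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE SET NULL
  ) ENGINE=InnoDB;
  -- Making parent_id NOT NULL would leave ON DELETE SET NULL nothing
  -- valid to write; this ALTER is now rejected:
  ALTER TABLE child MODIFY parent_id INT NOT NULL;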
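For Bug #26835, a sketch of the reproduction steps described above,
assuming utf8_general_ci as an example of a collation in which byte
sequences of differing length ('a', one byte, and 'ä', two bytes)
compare as equal:

  CREATE TABLE t (c VARCHAR(3) CHARACTER SET utf8
                    COLLATE utf8_general_ci PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t VALUES ('a');   -- one-byte key
  DELETE FROM t;                -- record is only delete-marked for now
  -- Before purge removes the old record, insert an equivalent key of
  -- a different byte length; under the bug this could be applied as an
  -- incorrect update-in-place:
  INSERT INTO t VALUES ('ä');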
ALTER VIEW failed on a view defined by another user.
When the DEFINER clause isn't specified in the ALTER statement, it is
loaded from the view definition. If the definer differs from the
current user, an error is thrown, because only a super-user can set
other users as definers. Now, if the DEFINER clause is omitted in the
ALTER VIEW statement, the definer from the original view is used
without this check.
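A sketch of the case this enables (user and view names are
hypothetical), assuming the current non-SUPER user has the required
privileges on the view itself:

  -- created earlier by a privileged user:
  CREATE DEFINER = 'other_user'@'localhost' VIEW v1 AS SELECT 1;
  -- DEFINER omitted: the original definer is now kept without the
  -- super-user check, so this no longer fails for a plain user:
  ALTER VIEW v1 AS SELECT 2;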
The server starts any binlog dump with a Format_description_log_event;
this shifted all offset calculations in mysqlbinlog and made it stop
the dump earlier than --stop-position. Now mysqlbinlog takes the
Format_description_log_event into account.
Possible problems: the function call could be eliminated from the
WHERE clause and only be evaluated once; the function could be
evaluated during the table and item setup phase, which could cause its
side effects not to be registered in the binlog.
Fixed by introducing Item_func_sp::used_tables(), returning the
correct table_map constant.
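A sketch of the kind of statement affected, using a hypothetical
stored function with a side effect:

  CREATE TABLE t (a INT);
  CREATE TABLE log (msg VARCHAR(16));
  INSERT INTO t VALUES (1), (2);
  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT MODIFIES SQL DATA
  BEGIN
    INSERT INTO log VALUES ('side effect');  -- must reach the binlog
    RETURN 1;
  END//
  DELIMITER ;
  -- If f1() were treated as a constant, it could be evaluated once
  -- during setup instead of per row:
  SELECT a FROM t WHERE f1() = 1;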
In index search, MySQL was not explicitly suppressing warnings, so if
the context happened to enable warnings (e.g. INSERT .. SELECT), the
warnings resulting from converting the data the key is compared to
were reported to the client.
Fixed by suppressing warnings when converting the data to the same
type as the key parts.
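A sketch of a context that could surface such conversion warnings
(the tables and the particular comparison are illustrative):

  CREATE TABLE t1 (d DATETIME, KEY (d));
  CREATE TABLE t2 (d DATETIME);
  -- The constant must be converted to the key type for the index
  -- search; '2007-02-30' is not a valid date, and the resulting
  -- conversion warning used to leak out of the warning-enabled
  -- INSERT ... SELECT context:
  INSERT INTO t2 SELECT d FROM t1 WHERE d > '2007-02-30';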
- renaming to what it actually means (Monty approved the renaming)
- correcting description of transaction_alloc command-line options
(our manual is correct)
- fix for a failure of the rpl_trigger test.
The problem in this bug arises when we create temporary tables. When
temporary tables are created for unions, some inference is carried
out regarding the type of the column. Whenever this column type is
inferred to be REAL (i.e. FLOAT or DOUBLE), MySQL will always try to
maintain exact precision, and if that is not possible (there are
hardware limits, since FLOAT and DOUBLE are stored as approximate
values) it will switch to using approximate values. The problem here
is that at this point the information about the number of significant
digits is not available. Furthermore, the number of significant
digits should be increased for the AVG function, but this was not
properly handled. There are 4 parts to the problem:
#1: DOUBLE and FLOAT fields don't report their proper display
lengths in max_display_length(). This is hard-coded as 53 for
DOUBLE and 24 for FLOAT. Now changed to instead return the
field_length.
#2: Type holders for temporary tables do not preserve the
max_length of the Items from which they are created; it is
instead reverted to the 53 and 24 from above. This causes
*all* fields to get non-fixed significant digits.
#3: AVG function does not update max_length (display length)
when updating number of decimals.
#4: The function that switches to a non-fixed number of
significant digits should use DBL_DIG + 2 or FLT_DIG + 2 as
cut-off values (since fixed precision does not use the 'e'
notation).
Of these points, #1 is the controversial one, but this
change is preferred and has been cleared with Monty. The
function causes quite a few unit tests to blow up and they had
to be changed, but each one is annotated and motivated. We
frequently see the magical 53 and 24 give way to more relevant
numbers.
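A sketch of the behaviour these changes address (the table and column
names are hypothetical):

  CREATE TABLE t1 (f FLOAT(10,4));
  INSERT INTO t1 VALUES (1.5);
  -- The temporary table behind the UNION should keep the fixed
  -- display length (10,4) instead of reverting to the hard-coded 24:
  CREATE TABLE t2 SELECT f FROM t1 UNION SELECT f FROM t1;
  SHOW COLUMNS FROM t2;
  -- AVG should widen the display length along with the decimals:
  SELECT AVG(f) FROM t1;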
Fix for the CAST(... AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method as it is
usually done in other classes.
Should be fixed more radically in 5.0.
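For instance (the literal here is arbitrary), the expression should
yield the numeric form of the datetime:

  SELECT CAST('2007-01-02 03:04:05' AS DATETIME) + 0;
  -- expected result: 20070102030405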
The server could crash when one of the arguments of an IN predicate
happened to be a decimal expression returning the NULL value.
The crash was due to the fact that the function in_decimal::set did
not take into account that val_decimal() could return 0 if the
decimal expression had been evaluated to NULL.
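A minimal statement of the affected shape (the particular expression
is illustrative; any decimal expression evaluating to NULL inside the
IN list would do):

  SELECT 1 IN (0.5, NULL + 0.5, 1.5);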
Creating a view could fail for a user with sufficient privileges on
a database.
The problem was that we required at least as many privileges on the
base tables as on the view itself.
The fix is to be more flexible and allow creating such a view (the
necessary privileges will be checked at run time).
An INTO clause can be specified only for the last select of a UNION,
and it receives the result of the whole query. But it was wrongly
allowed in non-last selects of a UNION, which led to a confusing
query result.
Now INTO is allowed only in the last select of a UNION.
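For example (hypothetical tables; the user variable assignment
assumes a single-row result at run time):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- now rejected: INTO in a non-last select of the UNION
  SELECT a INTO @v FROM t1 UNION SELECT a FROM t2;
  -- still allowed: the INTO in the last select receives the result
  -- of the whole UNION
  SELECT a FROM t1 UNION SELECT a INTO @v FROM t2;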
A subquery with a set function aggregated in an outer context
returned wrong results.
This happened only if the subquery did not contain any references to
outer fields.
As there were no references to outer fields, the subquery was
erroneously taken for a non-correlated one.
Now any set function aggregated in an outer context makes the
subquery correlated.
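A sketch of the affected query shape (the tables are hypothetical;
MAX(t1.a) is aggregated in the outer select, and the subquery has no
outer field references of its own):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  CREATE TABLE t2 (b INT);
  INSERT INTO t2 VALUES (10);
  -- the subquery must be treated as correlated because MAX(t1.a)
  -- belongs to the outer context:
  SELECT a, (SELECT MAX(t1.a) FROM t2) FROM t1 GROUP BY a;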
Shift the ID values up into a range where they will not collide with
those we use for real data when we fill the system tables.
Will be merged up to 5.0, where it is needed for 5.0.38.
To correctly decide which predicates can be evaluated with a given table
the optimizer must know the exact set of tables that a predicate depends
on. If that mask is too wide (refer to non-existing tables) the optimizer
can erroneously skip a predicate.
One such case of a wrong table usage mask was the aggregate
functions. They had an all-1 mask (meaning they depend on all tables,
including non-existent ones).
Fixed by making a real used_tables mask for the aggregates. The mask
is constructed in the following way:
1. OR the table dependency masks of all the arguments of the
aggregate.
2. If all the arguments of the function are from the local name
resolution context and it is evaluated in the same name resolution
context where it is referenced, all the tables from that name
resolution context are OR-ed into the dependency mask. This denotes
that an aggregate function depends on the number of rows it
processes.
3. Handle correctly the case of an aggregate function optimization
(where the aggregate function can be pre-calculated and made a
constant).
Made sure that an aggregate function is never a constant (unless it
is the subject of a specific optimization and pre-calculation).
One other flaw was revealed and fixed in the process: references
were not calling the recalculation method for used_tables of their
targets.
Removed the wrong fix for bug #27006.
The bug was introduced by the fix for bug #19978 and was fixed by
Monty on 2007/02/21.
trigger.test, trigger.result:
Corrected the test case for bug #27006.