When processing aggregate functions, all table values are reset
to NULL at the end of each group.
When doing that, if no rows were found for a group,
the const tables must not be reset, as they are not recalculated
by do_select()/sub_select() for each group.
Too many cursors (more than 1024) could lead to memory corruption.
This affects both stored routine and C API cursors, and the
threshold is per-server, not per-connection. Similarly, the
corruption could happen when the server was under heavy load
(executing more than 1024 simultaneous complex queries), which is
why this bug is also fixed in 4.1, even though 4.1 doesn't support
cursors.
The corruption was caused by a bug in the temporary tables code:
an attempt to create a table could lead to a write beyond the
allocated space. Note that only internal tables were affected (the
tables created internally by the server to resolve the query), not
tables created with CREATE TEMPORARY TABLE. Another precondition
for the bug is a TRUE value of the --temp-pool startup option,
which, however, is the default.
The cause of the bug was that random memory was overwritten in
bitmap_set_next() due to an out-of-bounds memory access.
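Purely illustrative sketch (not the server's code; the struct, names and
sizes are made up) of how setting a bit past the end of an allocated
bitmap silently overwrites adjacent memory:
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Hypothetical layout: a fixed-size bitmap followed by unrelated data.
    struct Demo {
      std::uint8_t bitmap[4];   // room for 32 bits only
      std::uint8_t neighbour;   // unrelated byte that gets corrupted
    };

    static void set_bit(std::uint8_t *map, unsigned bit) {
      // No bounds check, mirroring the kind of out-of-bounds write described above.
      map[bit / 8] |= static_cast<std::uint8_t>(1u << (bit % 8));
    }

    int main() {
      Demo d;
      std::memset(&d, 0, sizeof(d));
      set_bit(d.bitmap, 33);   // 33 >= 32: writes one byte past 'bitmap'
      std::printf("neighbour = 0x%02x\n",
                  static_cast<unsigned>(d.neighbour));  // likely 0x02, not 0x00
      return 0;
    }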
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that they
are united into a single condition on the key and checked together, the server
must determine which value is the NULL value in a correct way: not only by
using ->is_null, but also by checking that the expression does not depend on
any tables referenced in the current statement.
This additional check must be performed because the optimization takes place
before the actual execution of the statement, so if the field was left set to
NULL by a previous statement, the optimization would be applied incorrectly.
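An illustrative sketch of the two-part check, using a stripped-down stand-in
for the expression interface (the class and method names here are only
assumptions for the example, not the server's actual code):
    #include <cstdio>

    // Stand-in for the server's expression class, exposing the two
    // predicates referred to above.
    struct Item {
      virtual bool is_null() const = 0;               // current value is SQL NULL
      virtual unsigned long used_tables() const = 0;  // bitmap of tables the value depends on
      virtual ~Item() {}
    };

    // A value may be treated as a genuine NULL constant only if it is NULL
    // *and* does not depend on any table of the current statement; is_null()
    // alone is not enough, since a field can still hold NULL left over from a
    // previous statement.
    static bool is_null_constant(const Item &val) {
      return val.is_null() && val.used_tables() == 0;
    }

    // Hypothetical field that currently holds NULL but belongs to table #1.
    struct NullField : Item {
      bool is_null() const override { return true; }
      unsigned long used_tables() const override { return 1UL; }
    };

    int main() {
      NullField f;
      std::printf("treated as NULL constant: %s\n",
                  is_null_constant(f) ? "yes" : "no");   // prints "no"
      return 0;
    }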
The problem was that opt_sum_query() replaced MIN/MAX functions
with the corresponding constant found in a key, but due to the imprecise
representation of floating-point numbers, this comparison failed when the
WHERE clause was evaluated.
Now, when the MIN/MAX optimization detects that all tables can be removed,
it also removes all conjuncts in the WHERE clause that refer to these
tables. As a result of this fix, these conditions are not evaluated
twice, and in the case of floating-point comparisons we do not discard
result rows due to the imprecise float representation.
As a side effect, this fix also corrects an unnoticed problem in
bug 12882.
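A minimal standalone illustration of the imprecision involved (not server
code): a value stored as float does not compare equal to the same decimal
constant evaluated in double precision, which is why re-evaluating the
substituted condition could reject valid rows:
    #include <cstdio>

    int main() {
      // 1.1 has no exact binary representation; float and double round it differently.
      float  stored  = 1.1f;   // what a FLOAT column holds
      double literal = 1.1;    // what the WHERE-clause constant evaluates to

      // The comparison promotes 'stored' to double; the two values differ in
      // the low bits, so the equality test fails.
      std::printf("equal: %s\n", stored == literal ? "yes" : "no");  // prints "no"
      return 0;
    }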
The old problem: mysqltest can't handle multiple connections properly in the
embedded server. So I disabled the test for embedded mode
until mysqltest is fixed.
LIKE crashed with a pattern containing letters in the range 128..255
(e.g. A WITH ACUTE or C WITH CARON) because of a wrong cast from
signed char to unsigned int.
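A small standalone illustration of the bad cast (not the actual LIKE code):
casting a signed char straight to unsigned int sign-extends bytes in the
128..255 range into huge values:
    #include <cstdio>

    int main() {
      char c = static_cast<char>(0xE1);  // e.g. LATIN SMALL LETTER A WITH ACUTE in latin1

      // Wrong: if plain char is signed, the value is sign-extended first,
      // giving 0xFFFFFFE1 (4294967265) instead of 225.
      unsigned int bad  = static_cast<unsigned int>(c);

      // Right: cast through unsigned char so the byte keeps its 0..255 value.
      unsigned int good = static_cast<unsigned char>(c);

      std::printf("bad = %u, good = %u\n", bad, good);
      return 0;
    }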
Added a print method for the class Item_func_trim.
For 4.1 the missing method caused wrong output from EXPLAIN EXTENDED commands
when expressions used the two-argument form of the TRIM function.
For 5.0 it caused an error message when trying to select
from a view that used the two-argument form of the TRIM function.
This unexpected error message was due to the fact that the
print method for the class Item_func_trim was inherited from
the class Item_func. Yet the TRIM function does not take a plain list
of arguments. Rather it takes the arguments in the form:
  [{BOTH | LEADING | TRAILING} [remstr] FROM] str |
  [remstr FROM] str
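A much-simplified sketch of the idea behind the fix, using hypothetical class
names rather than the real Item hierarchy: the TRIM item overrides the generic
argument-list printer so the two-argument form is printed back in valid TRIM
syntax:
    #include <cstdio>
    #include <string>

    // Hypothetical stand-ins for the server classes; names are illustrative only.
    struct Expr {
      std::string text;
      std::string print() const { return text; }
    };

    // Generic function printer: "name(arg1,arg2)" -- wrong for TRIM, since
    // "trim(remstr,col)" is not valid TRIM syntax.
    struct FuncExpr {
      virtual std::string print(const std::string &name,
                                const Expr &remstr, const Expr &str) const {
        return name + "(" + remstr.print() + "," + str.print() + ")";
      }
      virtual ~FuncExpr() {}
    };

    // TRIM-specific printer: reproduces the "[remstr FROM] str" form.
    struct TrimExpr : FuncExpr {
      std::string print(const std::string &name,
                        const Expr &remstr, const Expr &str) const override {
        return name + "(" + remstr.print() + " from " + str.print() + ")";
      }
    };

    int main() {
      Expr remstr{"'x'"}, col{"col"};
      TrimExpr trim;
      std::printf("%s\n", trim.print("trim", remstr, col).c_str());  // trim('x' from col)
      return 0;
    }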
1) When initializing a boolean variable, do not use the string representations
"false" and "true", but rather the boolean values false and true (see the
sketch after this list).
2) Add the module to the various Windows description files.
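A tiny illustration of why 1) matters (the variable names are hypothetical):
a string literal converts to bool through its pointer value, so "false"
silently becomes true:
    #include <cstdio>

    int main() {
      // Wrong: the string literal decays to a non-null pointer, so the result
      // is always true, regardless of whether the text says "false" or "true".
      bool from_string = "false";

      // Right: use the boolean literal itself.
      bool from_literal = false;

      std::printf("from_string = %d, from_literal = %d\n", from_string, from_literal);
      return 0;
    }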
The old option ordering in the help output was confusing to some users. Changed
the ordering of deprecated options to be consistent, and for options with a
"--no-option" variant, added a mention of the "--disable-option" variant to
their help entries.