used
In simple queries, the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result was greater than 512 chars and a
temporary table was used during the select, the result was converted to blob,
due to the policy of not storing fields longer than 512 chars in a tmp table as
varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is now
always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
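A minimal illustration of the now-consistent behaviour (table and data are
invented, not from the original report):

  SET SESSION group_concat_max_len = 1024;
  CREATE TABLE t1 (a VARCHAR(255));
  INSERT INTO t1 VALUES (REPEAT('x', 255)), (REPEAT('y', 255)), (REPEAT('z', 255));
  -- Materialize the result so its type becomes visible in the new table:
  CREATE TABLE t2 SELECT GROUP_CONCAT(a) AS gc FROM t1;
  SHOW COLUMNS FROM t2;  -- gc is a blob/text column: the result exceeds 512 chars
  DROP TABLE t1, t2;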
CONNECTION_ID() was implemented as a constant Item, i.e. an instance of
the Item_static_int_func class holding a value computed at creation time.
Since Items are created on parsing, and trigger statements are parsed
on table open, the first connection to open a particular table would
effectively set its own CONNECTION_ID() inside trigger statements for
that table.
Re-implement CONNECTION_ID() as a class derived from Item_int_func, and
compute connection_id on every call to fix_fields().
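A hedged sketch of the problematic pattern (table and trigger names are
illustrative):

  CREATE TABLE t1 (id INT, conn BIGINT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
  FOR EACH ROW SET NEW.conn = CONNECTION_ID();
  -- Before the fix, the connection that first opened t1 (and thereby
  -- parsed the trigger body) froze its own id into CONNECTION_ID();
  -- inserts from other connections recorded that stale id. After the
  -- fix, each inserting connection records its own id:
  INSERT INTO t1 (id) VALUES (1);
  SELECT conn = CONNECTION_ID() FROM t1;  -- 1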
a race between updating and checking Max_used_connections. This is done in
a loop until either the disconnect has finished or the timeout has expired. In
the latter case the test will fail.
If the second or the third argument of a BETWEEN predicate was
a constant expression, like '2005.09.01' - INTERVAL 6 MONTH,
while the other two arguments were fields, then the predicate
was evaluated incorrectly and the query returned a wrong
result set.
The bug was introduced in 5.0.17 by the fix for bug 12612.
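A hypothetical repro of the affected pattern (table and data invented for
illustration):

  CREATE TABLE t1 (d1 DATE, d2 DATE);
  INSERT INTO t1 VALUES
    ('2005-01-01', '2005-02-01'),
    ('2004-01-01', '2005-09-01');
  -- The second argument is a field, the third a constant expression:
  SELECT * FROM t1
  WHERE d2 BETWEEN d1 AND '2005.09.01' - INTERVAL 6 MONTH;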
a misnamed function
... in the presence of a continue handler. The problem was that with a
handler, execution continued as if the function existed and had set a
useful return value (which it had not).
The fix is to set a NULL return value and return an error when a function
is not found.
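A hedged sketch of the scenario (the procedure name is invented; the handler
assumes error 1305, ER_SP_DOES_NOT_EXIST, is raised for the missing function):

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v INT DEFAULT 7;
    DECLARE CONTINUE HANDLER FOR 1305 SET @handled = 1;
    SET v = no_such_function(42);  -- misnamed/nonexistent function
    SELECT v, @handled;            -- v is NULL now, not a leftover value
  END//
  DELIMITER ;
  CALL p1();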
The function agg_cmp_type in item_cmpfunc.cc neglected the fact that
the first argument in a BETWEEN/IN predicate could be a field of a view.
As a result, in the case when the retrieved table was hidden by a view
over it and the arguments of the BETWEEN/IN predicate were of
the date/time type, the function did not perform conversion of
the constant arguments to the same format as the first field argument.
If the formats of the arguments differed, it caused a wrong evaluation of
the predicates.
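A hypothetical repro through a view (names and data invented):

  CREATE TABLE t1 (d DATE);
  INSERT INTO t1 VALUES ('2005-01-15'), ('2005-06-15');
  CREATE VIEW v1 AS SELECT d FROM t1;
  -- The string constants must be coerced to the date type of v1.d even
  -- though the field is reached through a view:
  SELECT * FROM v1 WHERE d BETWEEN '2005.01.01' AND '2005.03.31';
  -- Expected: only the '2005-01-15' row.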
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382: 16382 small statements
(a select against 3 tables with GROUP, ORDER and LIMIT) consume
100MB of RAM. Once this limit has been reached, the server will
refuse to prepare a new statement and return ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0).
The limit is changeable after startup and can accept any value
from 0 to 1 million. In case the new value of the limit is less
than the current statement count, no new statements can be added,
while the old ones can still be used. Additionally, the current
count of prepared statements is now available through a global
read-only variable @@prepared_stmt_count.
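Illustrative usage of the new variables (the limit value is invented):

  SET GLOBAL max_prepared_stmt_count = 100;  -- changeable at runtime
  SELECT @@prepared_stmt_count;              -- read-only current count
  PREPARE stmt FROM 'SELECT 1';              -- refused once the limit is hit
  DEALLOCATE PREPARE stmt;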
gives wrong results". Implement the previously missing
Item_row::cleanup. The bug is not repeatable in 5.0, probably
by coincidence: the underlying problem is present in 5.0 as well.
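A hedged sketch of where Item_row cleanup matters: re-execution of a
prepared statement containing a row expression (names and data invented):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 2), (3, 4);
  PREPARE s FROM 'SELECT * FROM t1 WHERE (a, b) = (?, ?)';
  SET @x = 1, @y = 2; EXECUTE s USING @x, @y;
  SET @x = 3, @y = 4; EXECUTE s USING @x, @y;
  -- Without Item_row::cleanup, the second execution could reuse stale
  -- state left over from the first.
  DEALLOCATE PREPARE s;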
The idea of the fix is for the master to send an FD event with `created' set
to 0 to a reconnecting slave (reconnect upon slave_net_timeout, no master
crash) to avoid destroying temp tables.
In the case of a connect by the slave to the master after a master crash, temp
tables have already been cleaned up, so the slave cannot keep `orphan' temp
tables.
After FLUSH STATUS, max_used_connections was reset to 0 and was not
updated while cached threads were reused, until the moment a new
thread was created.
The first suggested fix from the original bug report was implemented:
a) On flushing the status, set max_used_connections to
threads_connected, not to 0.
b) Check whether it is necessary to increment max_used_connections when
taking a thread from the cache as well as when creating new threads.
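The observable effect (illustrative):

  FLUSH STATUS;
  SHOW STATUS LIKE 'Max_used_connections';
  -- Now reports the current number of connected threads rather than 0,
  -- and is bumped when a cached thread is reused, not only when a new
  -- thread is created.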
The problem was due to the fact that with --lower-case-table-names set to 1
the function find_field_in_group did not convert the prefix 'HU' in
HU.PROJ.CITY into lower case when looking it up in the group list, yet the
names in the group list had been qualified with the database name in lower
case.
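A hypothetical repro under lower_case_table_names=1 (schema invented from
the identifiers mentioned above):

  CREATE DATABASE HU;
  CREATE TABLE HU.PROJ (CITY VARCHAR(30));
  SELECT HU.PROJ.CITY FROM HU.PROJ GROUP BY HU.PROJ.CITY;
  -- Before the fix, the mixed-case prefix HU. failed to match the
  -- lower-cased, database-qualified names stored in the group list.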
counter".
When TRUNCATE TABLE was called within a stored procedure, the
auto_increment counter was not reset to 0, even though a straight
TRUNCATE on this table would do so.
This fix makes TRUNCATE in stored procedures be handled exactly
the same way as straight TRUNCATE. We achieve this by rolling
back the fix for bug 8850, which is no longer needed since stored
procedures don't require prelocked mode anymore (and TRUNCATE is
not allowed in stored functions or triggers).
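Illustrative behaviour after the fix (names invented):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY);
  CREATE PROCEDURE p1() TRUNCATE TABLE t1;
  INSERT INTO t1 VALUES (NULL), (NULL);
  CALL p1();
  INSERT INTO t1 VALUES (NULL);
  SELECT id FROM t1;  -- 1: the counter was reset, as with straight TRUNCATE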