Updating data in a HEAP table with a BTREE index results in a wrong
index_length counter value, which keeps growing after each update.
When inserting a new record into the tree, the counter is incremented by:
sizeof(TREE_ELEMENT) + key_size + tree->size_of_element
but when deleting an element from the tree, the counter is decremented only by:
sizeof(TREE_ELEMENT) + tree->size_of_element
i.e. key_size is never given back. This fix makes the allocated-memory counter
for the tree accurate by also decreasing the counter by key_size when deleting
a tree element.
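Not the actual mysys code, but a self-contained sketch of the corrected
accounting, with TREE_ELEMENT, key_size and size_of_element mirroring the
names above and "allocated" standing in for the tree's memory counter:

  #include <cstddef>
  #include <cstdio>

  struct TREE_ELEMENT { TREE_ELEMENT *left, *right; unsigned colour; };

  struct TreeCounter {
    std::size_t allocated = 0;        /* stands in for the tree's memory counter */
    std::size_t size_of_element = 0;

    void on_insert(std::size_t key_size)
    { allocated += sizeof(TREE_ELEMENT) + key_size + size_of_element; }

    /* The bug: delete gave back only sizeof(TREE_ELEMENT) + size_of_element.
       The fix: give back key_size as well, keeping insert and delete symmetric. */
    void on_delete(std::size_t key_size)
    { allocated -= sizeof(TREE_ELEMENT) + key_size + size_of_element; }
  };

  int main()
  {
    TreeCounter t;
    t.size_of_element = 16;
    t.on_insert(8);
    t.on_delete(8);
    std::printf("allocated after insert+delete: %zu\n", t.allocated);  /* 0, not 8 */
    return 0;
  }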
The bug caused the reported index corruption in cases where
key_cache_block_size was not a multiple of myisam_block_size,
e.g. when key_cache_block_size=1536 while myisam_block_size=1024.
Removed sp-goto.test, sp-goto.result and all (disabled) GOTO code.
Also removed some related code that's not needed any more (no possible
unresolved label references any more, so no need to check for them).
NB: Keeping ER_SP_GOTO_IN_HNDLR in errmsg.txt; it might become useful in the
future, and removing it (and thus re-enumerating error codes) might upset
anything that refers to explicit error codes.
does not have "NOT NULL" attribute set. Also, calculate the padding
characters more safely, so that a negative number doesn't cause it to
print MAXINT-n spaces.
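A minimal, self-contained sketch of the safer padding calculation (the
function and names are illustrative, not the actual server code): the count
of padding spaces is clamped at zero instead of being allowed to go negative
and wrap around.

  #include <algorithm>
  #include <cstdio>
  #include <cstring>

  /* Print a value left-padded to width columns.  A naive width - len can go
     negative when the value is already wider than the field; if that negative
     count is then treated as unsigned it turns into a huge number of spaces.
     Clamping to zero avoids that. */
  static void print_padded(const char *value, int width)
  {
    int pad = std::max(0, width - static_cast<int>(std::strlen(value)));
    std::printf("%*s%s\n", pad, "", value);
  }

  int main()
  {
    print_padded("42", 10);                 /* padded with 8 spaces */
    print_padded("longer-than-width", 10);  /* pad clamps to 0, no wrap-around */
    return 0;
  }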
MySQL 4.1
and Bug#16920 rpl_deadlock_innodb fails in show slave status (reported for MySQL 5.1)
- backport of several fixes done in MySQL 5.0 to 4.1
- fix for a newly discovered instability (see comment on Bug#12429 + Bug#16920)
- re-enabling of test cases
Conversion from int and real numbers to UCS2 didn't work correctly:
CONVERT(100, CHAR(50) UNICODE)
CONVERT(103.9, CHAR(50) UNICODE)
The problem appeared because numbers have the binary character set, so a
simple character set recast binary->ucs2 was performed instead of a real
conversion. Fixed by making numbers pretend to be non-binary.
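Not the server's conversion code, but a self-contained sketch of the
difference: a real conversion maps each digit of the single-byte string "100"
to its UCS2 code unit (0x0031 0x0030 0x0030), whereas a plain recast would
reinterpret the raw bytes as if they were already UCS2 and produce garbage
code units.

  #include <cstdint>
  #include <cstdio>
  #include <string>
  #include <vector>

  /* Real conversion: zero-extend each single-byte character to a 16-bit
     UCS2 code unit, e.g. '1' (0x31) -> 0x0031. */
  static std::vector<std::uint16_t> ascii_to_ucs2(const std::string &s)
  {
    std::vector<std::uint16_t> out;
    for (unsigned char c : s)
      out.push_back(static_cast<std::uint16_t>(c));
    return out;
  }

  int main()
  {
    for (std::uint16_t u : ascii_to_ucs2("100"))
      std::printf("U+%04X ", u);      /* U+0031 U+0030 U+0030 */
    std::printf("\n");
    return 0;
  }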
used
In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 characters
and a temporary table is used during the select, then the result is converted
to a blob, due to the policy of not storing fields longer than 512 characters
in a tmp table as varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is now
always converted to a blob if it is longer than 512 characters.
Item_func_group_concat::field_type() is modified accordingly.
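A hedged sketch of what the modified check amounts to
(Item_func_group_concat::field_type() and the CONVERT_IF_BIGGER_TO_BLOB
threshold exist in the server; the exact body below is illustrative and the
enum constant names vary between versions):

  enum_field_types Item_func_group_concat::field_type() const
  {
    /* Illustrative: report BLOB whenever the declared maximum length, in
       characters, exceeds the 512-character tmp-table threshold, so the
       result type no longer depends on whether a temporary table happened
       to be used during the select. */
    if (max_length / collation.collation->mbmaxlen > CONVERT_IF_BIGGER_TO_BLOB)
      return MYSQL_TYPE_BLOB;
    return MYSQL_TYPE_VARCHAR;
  }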
CONNECTION_ID() was implemented as a constant Item, i.e. an instance of the
Item_static_int_func class holding a value computed at creation time.
Since Items are created on parsing, and trigger statements are parsed
on table open, the first connection to open a particular table would
effectively set its own CONNECTION_ID() inside trigger statements for
that table.
Re-implement CONNECTION_ID() as a class derived from Item_int_func, and
compute connection_id on every call to fix_fields().
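A hedged sketch of the shape of the new item (Item_int_func, fix_fields() and
val_int() are real server names; the class body below is illustrative and the
exact signatures differ between versions):

  /* Illustrative: CONNECTION_ID() as a regular integer function item.  The
     connection id is picked up in fix_fields(), i.e. every time the statement
     is prepared for execution in a given connection, instead of once at parse
     time as the old Item_static_int_func-based implementation did. */
  class Item_func_connection_id : public Item_int_func
  {
    longlong value= 0;
  public:
    const char *func_name() const { return "connection_id"; }
    bool fix_fields(THD *thd, Item **ref)
    {
      if (Item_int_func::fix_fields(thd, ref))
        return true;
      value= static_cast<longlong>(thd->variables.pseudo_thread_id);
      return false;
    }
    longlong val_int() { return value; }
  };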
a race between updating and checking Max_used_connections. This is done in
a loop until either the disconnect has finished or the timeout has expired.
In the latter case the test will fail.
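Not the actual test script, but a self-contained analogue of that wait loop
(names are illustrative): poll a predicate until it holds or a deadline
passes, and report a timeout so the caller can fail explicitly.

  #include <chrono>
  #include <thread>

  /* Poll disconnect_finished() until it returns true or the timeout expires;
     return false on timeout so the caller (here, the test) can fail. */
  template <class Pred>
  static bool wait_until(Pred disconnect_finished, std::chrono::seconds timeout)
  {
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (!disconnect_finished()) {
      if (std::chrono::steady_clock::now() >= deadline)
        return false;
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return true;
  }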
If the second or the third argument of a BETWEEN predicate was
a constant expression, like '2005.09.01' - INTERVAL 6 MONTH,
while the other two arguments were fields, then the predicate
was evaluated incorrectly and the query returned a wrong
result set.
The bug was introduced in 5.0.17 by the fix for Bug#12612.