causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, this made unique indexes
of this type unusable, because they rejected many different
values as duplicates.
The problem was that multibyte character detection was attempted
on the internal numeric column value. Many values were not
identified as characters, and their key values became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
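A minimal stand-alone C++ model of the effect (not the server code; the
two-byte internal value, the crude character test, and the blank fill are
assumptions made only for illustration):

    // Model of why treating the internal ENUM/SET number as utf8 text
    // collapses distinct keys, while copying it as binary keeps them distinct.
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <string>

    std::string make_key(uint16_t enum_value, bool treat_as_utf8) {
        unsigned char raw[2];
        std::memcpy(raw, &enum_value, sizeof(raw));

        std::string key;
        if (treat_as_utf8) {
            // Buggy path: bytes that do not look like characters are dropped
            // and the segment is blank-filled, so most values map to "  ".
            for (unsigned char b : raw)
                if (b >= 0x20 && b < 0x7f)      // crude "is a character" test
                    key.push_back(static_cast<char>(b));
            key.resize(sizeof(raw), ' ');       // blank fill to segment length
        } else {
            // Fixed path: binary pseudo character set, copy bytes verbatim.
            key.assign(reinterpret_cast<char*>(raw), sizeof(raw));
        }
        return key;
    }

    int main() {
        std::string k1 = make_key(1, true),  k2 = make_key(2, true);
        std::string b1 = make_key(1, false), b2 = make_key(2, false);
        std::cout << "utf8-style keys equal: " << (k1 == k2) << "\n";  // 1: duplicates
        std::cout << "binary keys equal:     " << (b1 == b2) << "\n";  // 0: distinct
    }

With the binary pseudo character set the internal number is copied into the
key verbatim, so distinct ENUM/SET values can no longer collapse into the same
blank-filled key.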
TABLE ... WRITE".
Memory and CPU hogging occurred when a connection that had to wait for a table
lock was serviced by a thread that had previously serviced a connection that was
killed (note that connections can reuse threads if the thread cache is enabled).
One scenario that exposed this problem was when the thread that provided a
binlog dump to a replication slave was implicitly/automatically killed when the
same slave reconnected and started pulling data through a different
thread/connection.
The problem also occurred when one killed a particular query in a connection
(using KILL QUERY) and this connection later had to wait for some table lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset back. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch fixes the problem by properly resetting this member.
There is no test case associated with this patch, since it is hard to test
for memory/CPU hogging conditions in our test suite.
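A minimal stand-alone C++ model of the flag mismatch (not the server code; the
MysysVar/THD structs and wait_for_table_lock below are simplified stand-ins
used only for illustration):

    // The per-thread abort flag survives into the next statement/connection,
    // so every wait is aborted immediately while the "killed" flag says the
    // statement is alive, and the caller retries in a tight loop.
    #include <iostream>

    struct MysysVar { bool abort = false; };   // models mysys_var (per OS thread)
    struct THD      { bool killed = false;     // models the per-connection kill flag
                      MysysVar* mysys_var = nullptr; };

    // Models waiting for a table lock on the mysys layer: gives up if abort is set.
    bool wait_for_table_lock(const MysysVar& v) { return !v.abort; }

    int main() {
        MysysVar thread_var;                   // one cached worker thread

        // Connection #1 is killed: both flags are set ...
        THD victim; victim.mysys_var = &thread_var;
        victim.killed = true;
        thread_var.abort = true;
        // ... but before the fix the abort flag was never reset here.

        // Connection #2 reuses the same thread.
        THD next; next.mysys_var = &thread_var;
        int retries = 0;
        while (!wait_for_table_lock(*next.mysys_var) && !next.killed && retries < 5)
            ++retries;                         // SQL layer keeps retrying: hogging
        std::cout << "retries before giving up: " << retries << "\n";  // 5, not 0

        thread_var.abort = false;              // the fix: reset the member
    }

Resetting the per-thread abort flag removes the mismatch with THD::killed, so
waits block normally again instead of being retried in a tight loop.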
TABLE ... WRITE".
CPU hogging occurred when a connection that had to wait for a table lock was
serviced by a thread that had previously serviced a connection that was killed
(note that connections can reuse threads if the thread cache is enabled).
One scenario that exposed this problem was when the thread that provided a
binlog dump to a replication slave was implicitly/automatically killed when the
same slave reconnected and started pulling data through a different
thread/connection.
In 5.* versions, memory hogging was added to the CPU hogging. Moreover, in
those versions the problem also occurred when one killed a particular query
in a connection (using KILL QUERY) and this connection later had to wait for
some table lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset back. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch fixes the problem by properly resetting this member.
There is no test case associated with this patch, since it is hard to test
for memory/CPU hogging conditions in our test suite.
my_seek: Assertion `fd != -1' failed"
In difficult optimize/repair situations the server could crash.
Under some circumstances the server retries an optimize/repair
with more elaborate options, but it did not check whether the first
attempt had failed so badly that a second one must not be tried.
This could happen when a new data file had been created
but it was not possible to open it. In this case the
repair leaves behind a table with a closed data file,
which must not be used for another repair attempt.
We now detect the closed data file and do not try
another repair attempt in this situation.
No test case. The required table corruption cannot be
reproduced easily. There is a test program attached to
bug 25433.
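A simplified stand-alone sketch of the added guard (not the MyISAM code; the
TableShare struct and function names are made up for illustration):

    // If the first repair attempt left the table with a closed data file,
    // do not start a second attempt that would seek on fd == -1.
    #include <iostream>

    struct TableShare { int data_fd = -1; };   // -1 models a closed data file

    bool repair(TableShare& t, bool elaborate) {
        if (t.data_fd == -1) return false;     // nothing to read from
        (void)elaborate;                       // real repair work would go here
        return true;
    }

    bool optimize_or_repair(TableShare& t) {
        if (repair(t, /*elaborate=*/false)) return true;
        // The fix: detect the closed data file and skip the retry instead of
        // running into "my_seek: Assertion `fd != -1' failed".
        if (t.data_fd == -1) {
            std::cout << "data file closed after failed repair, not retrying\n";
            return false;
        }
        return repair(t, /*elaborate=*/true);
    }

    int main() {
        TableShare broken;                     // first attempt already failed badly
        optimize_or_repair(broken);
    }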
differences in tables
Certain MERGE tables were wrongly reported as having an incorrect definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might in certain
cases be internally cast to a different type on the storage engine
layer. (affects 4.1 and up)
- If the tables in a merge (and the MERGE table itself) had a short VARCHAR column
(less than 4 bytes) and at least one (but not all) of the tables was ALTERed (even
to an identical definition: ALTER TABLE xxx ENGINE=yyy), the table definitions went
out of sync. (affects 4.1 only)
This is fixed by relaxing the check for underlying-table conformance and by
setting the field type to FIELD_TYPE_STRING when a VARCHAR shorter than 4 bytes
is created.
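A tiny stand-alone sketch of the create-time normalization (the enum values
mirror the server's type names, but the helper itself is hypothetical):

    // A VARCHAR shorter than 4 bytes is recorded as a fixed string type, so
    // MERGE children and the MERGE table stay comparable after ALTER TABLE.
    #include <iostream>

    enum FieldType { FIELD_TYPE_STRING, FIELD_TYPE_VAR_STRING };

    FieldType type_for_varchar(unsigned length_bytes) {
        // Assumption for this sketch: the 4-byte threshold from the fix text.
        return length_bytes < 4 ? FIELD_TYPE_STRING : FIELD_TYPE_VAR_STRING;
    }

    int main() {
        std::cout << "VARCHAR(2) stored as: "
                  << (type_for_varchar(2) == FIELD_TYPE_STRING ? "STRING"
                                                               : "VAR_STRING")
                  << "\n";
    }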
When the SUBSTRING() function was used on a LONGTEXT field, the max_length of
the SUBSTRING() result was wrongly calculated and set to 0. As the max_length
parameter is used during tmp field creation, it limited the length of the result
field and led to an empty string being printed instead of the correct result.
Now the Item_func_substr::fix_length_and_dec() function correctly calculates
the max_length parameter.
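A stand-alone model of the corrected calculation (substr_max_length is a
hypothetical helper, not the actual Item_func_substr::fix_length_and_dec code):

    // Start from the string argument's max_length and only reduce it by a
    // known constant start position / length; never collapse it to 0.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    uint64_t substr_max_length(uint64_t arg_max_length,
                               int64_t start,        // 1-based, constant
                               int64_t length,       // -1 if no length argument
                               bool args_are_constant) {
        uint64_t result = arg_max_length;
        if (args_are_constant) {
            if (start > 0)
                result -= std::min<uint64_t>(result,
                                             static_cast<uint64_t>(start - 1));
            if (length >= 0)
                result = std::min<uint64_t>(result,
                                            static_cast<uint64_t>(length));
        }
        return result;   // stays non-zero for a LONGTEXT argument
    }

    int main() {
        const uint64_t longtext_max = 4294967295ULL;  // LONGTEXT max byte length
        std::cout << substr_max_length(longtext_max, 1, -1, true) << "\n";  // full
        std::cout << substr_max_length(longtext_max, 10, 20, true) << "\n"; // 20
    }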
construct references invalid name.
Derived tables currently cannot use outer references,
and thus there is no outer context for them.
The 4.1 code takes this fact into account, while the
Item_field::fix_outer_field code of 5.0 lost the check that blocks
any attempt to resolve names in an outer context for derived tables.
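A simplified stand-alone model of the restored check (the NameContext struct
and resolve function are illustrative stand-ins, not the 5.0 resolver):

    // When the name belongs to a derived table's SELECT, there is no outer
    // context to search, so resolution must fail instead of walking outward.
    #include <iostream>
    #include <string>

    struct NameContext {
        bool in_derived_table = false;
        const NameContext* outer = nullptr;
        std::string known_column;              // single column, for the model
    };

    // True if `name` resolves in this context or an allowed outer one.
    bool resolve(const NameContext& ctx, const std::string& name) {
        if (name == ctx.known_column) return true;
        if (ctx.in_derived_table) return false;   // the restored 4.1-style check
        return ctx.outer && resolve(*ctx.outer, name);
    }

    int main() {
        NameContext outer{false, nullptr, "t1_col"};
        NameContext derived{true, &outer, "d_col"};
        std::cout << resolve(derived, "d_col")  << "\n";  // 1: resolves locally
        std::cout << resolve(derived, "t1_col") << "\n";  // 0: outer ref blocked
    }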
incorrect key file for table
In certain cases, deleting a row could corrupt an RTREE index.
According to Guttman's algorithm, page underflow is handled
by storing the page in a list for later re-insertion. The
keys from the stored pages have to be inserted into the
remaining pages of the same level of the tree. Hence the
level number is stored in the re-insertion list together
with the page.
In the MySQL RTree implementation the level counts from zero
at the root page, with increasing numbers for levels further down the tree.
If the tree height grows during re-insertion of the keys, all
stored level numbers become invalid and the remaining keys are
inserted at the wrong level.
The fix is to increment the level numbers stored in the
reinsert list after a split of the root block during reinsertion.
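A stand-alone sketch of the fix (the ReinsertEntry struct and on_root_split
function are illustrative, not the actual RTree code):

    // Levels count down from the root (root = 0), so when the root splits
    // during re-insertion the whole tree moves one level deeper and every
    // pending entry's level number must be incremented to stay valid.
    #include <iostream>
    #include <vector>

    struct ReinsertEntry {
        int page_id;
        int level;      // 0 = root, increasing towards the leaves
    };

    void on_root_split(std::vector<ReinsertEntry>& reinsert_list) {
        for (ReinsertEntry& e : reinsert_list)
            ++e.level;  // keep the entry pointing at the same tree level
    }

    int main() {
        std::vector<ReinsertEntry> pending{{7, 1}, {9, 2}};  // pages from underflow
        on_root_split(pending);                              // tree height grew by 1
        for (const ReinsertEntry& e : pending)
            std::cout << "page " << e.page_id
                      << " reinsert at level " << e.level << "\n";
    }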