ISSUE:
------
There can be up to MERGEBUFF2 sorted merge chunks, and we need
enough buffer space for at least one record from each merge
chunk. If the estimate is wrong (very low) and we allocate
buffer space for fewer than MERGEBUFF2 records, merge_buffers
will run into trouble when the actual number of rows to be
sorted is bigger than the estimate and an external filesort is
chosen.
SOLUTION:
---------
Set the number of rows to sort to be at least MERGEBUFF2.
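A minimal sketch of that lower bound, with illustrative names (the clamp
shown here is not the exact code in sql/filesort.cc; MERGEBUFF2 is the
per-pass merge fan-out, 15 in the server sources):
  #include <algorithm>
  #include <cstdint>

  // Illustrative constant: the maximum number of chunks that
  // merge_buffers() may have to read from in a single pass.
  static const uint64_t MERGEBUFF2 = 15;

  // Clamp the optimizer's row estimate so the sort buffer is always
  // sized for at least one record per possible merge chunk.
  static uint64_t rows_to_allocate_for(uint64_t estimated_rows)
  {
    return std::max<uint64_t>(estimated_rows, MERGEBUFF2);
  }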
- Filesort has an optimization where it reads only the columns that are
needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
of the WHERE clause that are guaranteed to be true.
- However, if we use a quick select, we must put all of the range columns into
the read set. Not doing so may cause us to fail to detect the end of the range.
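A minimal standalone sketch of the read-set fix described in the last
point, using hypothetical stand-ins for the server's column read-set
bitmap and for the quick select's column list:
  #include <vector>

  // Hypothetical stand-in for the server's column read-set bitmap.
  struct ReadSet { std::vector<bool> bits; };

  // Keep every column the quick (range) select compares against in the
  // read set, even if filesort would otherwise strip it, so the end of
  // the range can still be detected while rows are fetched.
  static void keep_range_columns(ReadSet &read_set,
                                 const std::vector<int> &range_column_indexes)
  {
    for (int idx : range_column_indexes)
      read_set.bits[idx] = true;
  }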
(An assert added by MySQL verified that strnxfrm can only *increase*
the string length if from == to, and the latter is a random
decision made by individual items and String::realloc.)
Fix it by avoiding the memcpy in the first place.
mysql-test/suite/innodb/r/row_lock.result:
Test case for MDEV-5629
mysql-test/suite/innodb/t/row_lock.test:
Test case for MDEV-5629
sql/filesort.cc:
Don't call unlock_row() in case of errors
FROM SUBSELECT
ISSUE : In function find_all_keys, if the selected row does
not satisfy the condition, we call unlock_row to release the
locked row. Suppose the condition contains a subquery and an
InnoDB error occurs during its execution. Then we should not
call unlock_row: if the error is caused by a deadlock, InnoDB
will roll back the transaction, and calling unlock_row without
a transaction is invalid, hence an assertion failure.
SOLUTION : We call unlock_row only if no error occurred
previously.
The solution is backported from 5.6,
defect number 14226481
sql/filesort.cc:
Now we call unlock_row only if there is no
previous error.
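A minimal sketch of that guard, with hypothetical stand-ins for the
handler and session objects (the real change is in find_all_keys() in
sql/filesort.cc):
  // Hypothetical stand-ins for handler::unlock_row() and THD error state.
  struct Handler { void unlock_row() {} };
  struct Session
  {
    bool error_reported= false;
    bool is_error() const { return error_reported; }
  };

  // Release the row lock only when evaluating the WHERE condition (which
  // may contain a subquery) did not report an error; after a deadlock
  // rollback there is no transaction left to unlock against.
  static void skip_unmatched_row(Session &session, Handler &handler)
  {
    if (!session.is_error())
      handler.unlock_row();
  }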
WITH SORT ABORTED LEAKS FILE DESCRIPTORS
ISSUE : The IO_CACHE used for an index_merge quick select
is freed only on successful retrieval of all rows
from the index merge.
If this row retrieval is interrupted or fails (for
example because of a KILL_QUERY signal), the IO_CACHE
resources allocated by the index_merge quick select are
not freed, and hence the temp file associated with it is
also not closed. This leads to a file descriptor leak.
SOLUTION : As part of the filesort operation we now always
free the IO_CACHE allocated by the index_merge quick select.
sql/filesort.cc:
In the filesort function we free any IO_CACHE
allocated by the index_merge quick select
if it has not already been freed.
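A minimal sketch of that cleanup, with a hypothetical stand-in for the
IO_CACHE and its temp file (the real code works on a mysys IO_CACHE,
not a plain FILE*):
  #include <cstdio>

  // Hypothetical stand-in for the IO_CACHE owned by an index_merge quick
  // select, holding a temp file (and its file descriptor).
  struct RowIdCache
  {
    FILE *tmp_file= nullptr;
    void close()
    {
      if (tmp_file)
      {
        fclose(tmp_file);      // releases the file descriptor
        tmp_file= nullptr;
      }
    }
  };

  // Sketch of the fix: filesort's cleanup path closes the cache even when
  // row retrieval was aborted (KILL_QUERY, engine error, ...), not only
  // after all rows were read successfully.
  static void filesort_cleanup(RowIdCache *quick_select_cache)
  {
    if (quick_select_cache)
      quick_select_cache->close();
  }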
Problem:-
We have created a table with the UTF8_BIN collation.
When our query has an ORDER BY clause over a function
call, we get the result in incorrect order.
Note: the bug is not present in 5.5.
Analysis:
In 5.5, for UTF16_BIN, the min and max multi-byte lengths are 2 and 4
respectively. In make_sortkey(), for a 2-byte character we are
assuming that the result will be 2 bytes per character. But when we
use my_strnxfrm_unicode_full_bin(), we store sorting weights using 3 bytes
per character. This results in a truncated sort key.
The same thing happens for UTF8MB4, where the min multi-byte length is
1 byte and the max is 4 bytes. We assume the result is 1 byte per
character, which again truncates the sort key.
Solution:-
Use strnxfrm (i.e. the MY_CS_STRNXFRM macro) for the sort, so that
the resultant length does not depend on the source length.
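A simplified, hypothetical sketch of the sizing difference: the buggy
path sized the sort key from the source encoding's bytes per character,
while the fix asks the collation for the transformed (weight string)
length:
  #include <cstddef>

  // Hypothetical, simplified view of a collation. For utf16_bin's full
  // binary transform the weight string uses 3 bytes per character even
  // though source characters are 2 bytes, so sizing the sort key from
  // the source width truncates it.
  struct Collation
  {
    std::size_t source_bytes_per_char;   // e.g. 2 for UTF-16 BMP characters
    std::size_t weight_bytes_per_char;   // e.g. 3 for the full binary transform
    std::size_t strnxfrmlen(std::size_t char_count) const
    { return char_count * weight_bytes_per_char; }
  };

  // Fixed sizing: derive the sort key length from the collation's
  // transform length, not from the source length.
  static std::size_t sort_key_length(const Collation &cs, std::size_t char_count)
  {
    return cs.strnxfrmlen(char_count);
  }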
Description:
------------
There are 2 issues reported in the bug report:
1. One session is running a "long" select; then, from another
session, you kill the first one while the select is
running, and it receives the message "Server shutdown in
progress".
Reported Date: 02-Apr-2006
=> Looks like this issue was already fixed in 2009 by the patch
pushed for bug28141.
2. Killing a query which goes to filesort logs error entries like:
120416 9:17:28 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
120416 9:18:48 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
120416 9:19:39 [ERROR] mysqld: Sort aborted: Server shutdown in
progress
Reported Date: 16-Apr-2012
=> This issue was introduced in the 5.5+ versions. Fixing it
in this patch.
Analysis:
---------
In function "filesort()", on error we are logging error message.
To the error message, the message related THD::killed_errno is
also appeneded, if it is set.(THD::kill_errno value is obtained
by calling member function THD::killed_errno)
In the scenario mentioned in this bug report, when we kill the
connection, THD::kill_errno is set to the THD::KILL_CONNECTION.
Enum type THD::KILL_CONNECTION corressponds to value
ER_SERVER_SHUTDOWN. Because of this, "Server shutdown in ...." is
appended to the message logged.
Fix:
----
Modified the code of the "filesort()" function to append the "KILL_QUERY"
status to the error message when the thread is killed and server
shutdown is not in progress.
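A minimal sketch of the message selection after the fix; the enum and
the helper are hypothetical stand-ins for THD::killed and the filesort()
logging code, and the strings are the ER_QUERY_INTERRUPTED /
ER_SERVER_SHUTDOWN texts:
  #include <string>

  // Hypothetical mirror of THD::killed states.
  enum KillState { NOT_KILLED, KILL_QUERY, KILL_CONNECTION };

  // After the fix, a killed query/connection no longer reports
  // "Server shutdown in progress" unless the server really is shutting down.
  static std::string sort_abort_reason(KillState killed, bool shutdown_in_progress)
  {
    if (shutdown_in_progress)
      return "Server shutdown in progress";        // ER_SERVER_SHUTDOWN
    if (killed != NOT_KILLED)
      return "Query execution was interrupted";    // ER_QUERY_INTERRUPTED
    return "";                                     // not killed: regular error text
  }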
Cleanup: remove TIME_FUZZY_DATE.
Introduce TIME_FUZZY_DATES, which means "very fuzzy, the resulting
value is only used for comparison. It can be an invalid date, that's fine,
as long as it can be compared".
Updated many test results (they're better now).
Fixed some cases that didn't work with > 4G buffers.
Fixed compiler warnings
include/mysql_com.h:
Avoid compiler warning with strncmp()
sql-common/client.c:
Fixed long comment; Added ()
sql/filesort.cc:
Fix code to get filesort to work with big buffers
sql/sys_vars.cc:
Fixed some cache variables that could be set to a value higher than size_t allows
Limit query cache to ULONG_MAX as the query cache buffer variables are ulong
storage/federatedx/ha_federatedx.cc:
Removed an unused variable
storage/maria/ha_maria.cc:
Fix that bulk_insert() works with big buffers
storage/maria/ma_write.c:
Fix that bulk_insert() works with big buffers
storage/myisam/ha_myisam.cc:
Fix that bulk_insert() works with big buffers
storage/myisam/mi_write.c:
Fix that bulk_insert() works with big buffers
storage/sphinx/snippets_udf.cc:
Fixed compiler warnings
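A small sketch of the size arithmetic behind the big-buffer fixes above:
with sort buffers larger than 4GB, sizes must be kept in 64-bit-safe
types (size_t/ulonglong) rather than a 32-bit ulong (as on Windows),
otherwise the byte count silently truncates. Names here are
illustrative, not the exact identifiers in sql/filesort.cc:
  #include <cstddef>

  // Compute how many sort records fit into the buffer. Doing this in
  // size_t keeps buffers > 4GB intact on 64-bit builds; storing the byte
  // count in a 32-bit ulong first would truncate it to the low 32 bits.
  static std::size_t records_that_fit(std::size_t buffer_bytes,
                                      std::size_t record_length)
  {
    return record_length ? buffer_bytes / record_length : 0;
  }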
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails.
516 case Diagnostics_area::DA_EMPTY:
517 default:
518 DBUG_ASSERT(0);
519 error= send_ok(thd->server_status, 0, 0, 0, NULL);
520 break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
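A minimal sketch of the backported pattern: the return code of the
index-scan init call is checked and reported instead of being ignored,
so the diagnostics area never stays DA_EMPTY. The Handler type and its
methods here are hypothetical stand-ins for the server's handler API:
  // Hypothetical stand-ins for handler::ha_index_init() and
  // handler::print_error().
  struct Handler
  {
    int ha_index_init(unsigned /*key*/, bool /*sorted*/) { return 0; }
    void print_error(int /*error*/, int /*flags*/) {}
  };

  // Check and report the init error (e.g. HA_ERR_TABLE_DEF_CHANGED,
  // "insufficient history for index") so the statement's diagnostics
  // area holds an error status before Protocol::end_statement() runs.
  static int init_index_scan(Handler &handler, unsigned key)
  {
    int error= handler.ha_index_init(key, /*sorted=*/true);
    if (error)
      handler.print_error(error, 0);
    return error;
  }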
Bug#14530242 CRASH / MEMORY CORRUPTION IN FILESORT_BUFFER::GET_RECORD_BUFFER WITH MYISAM
This is a backport of
Bug#12694872 - VALGRIND: 18,816 BYTES IN 196 BLOCKS ARE DEFINITELY LOST
Bug#13340270: assertion table->sort.record_pointers == __null
Bug#14536113 CRASH IN CLOSEFRM (TABLE.CC) OR UNPACK (FIELD.H) ON SUBQUERY WITH MYISAM TABLES
Also:
removed and re-added test files with file-ids from trunk.
The bug could cause a crash when the server executed a query with
ORDER BY and sort_buffer_size was set to a small enough value.
It happened because the small sort buffer did not allow all merge
buffers to be allocated in it.
Made sure that the allocated sort buffer is big enough
to contain all possible merge buffers.
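A minimal sketch of that sizing rule, with illustrative names
(MERGEBUFF2 is the per-pass merge fan-out, 15 in the server sources):
  #include <cstddef>

  // The sort buffer must hold at least one record per chunk that a
  // single merge pass can read from, no matter how small
  // sort_buffer_size was set.
  static std::size_t min_sort_buffer_bytes(std::size_t record_length)
  {
    const std::size_t MERGEBUFF2 = 15;
    return MERGEBUFF2 * record_length;
  }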