Description:
The original fix for Bug#11765744 changed the LOCK_status mutex to a
read-write lock to avoid recursive lock acquisition on LOCK_status.
On Windows, locking a read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses when the Windows version
supports them, do not support recursion according to their
documentation. For our own read-write lock implementation, which is
used when the Windows version doesn't support SRW locks, recursive
locking can easily lead to deadlock if there are concurrent lock
requests.
Fix:
This patch reverts the previous fix for Bug#11765744 that used
read-write locks. Instead, the recursive locking problem for the
LOCK_status mutex is solved by tracking the recursion level in a
counter in the THD object and acquiring the lock only once, when
fill_status() is entered for the first time.
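For illustration only, a minimal self-contained sketch of the
counter-based scheme (std::mutex and the member names here are
stand-ins, not the actual server code):

    #include <mutex>

    static std::mutex LOCK_status_model;     // stand-in for LOCK_status

    struct THD_model {                       // stand-in for THD
      int fill_status_recursion_level = 0;   // recursion counter kept per-THD
    };

    // The lock is taken only on the outermost entry into fill_status();
    // nested calls just bump the counter and never re-acquire the mutex.
    static void fill_status_enter(THD_model *thd) {
      if (thd->fill_status_recursion_level++ == 0)
        LOCK_status_model.lock();
    }

    static void fill_status_leave(THD_model *thd) {
      if (--thd->fill_status_recursion_level == 0)
        LOCK_status_model.unlock();
    }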
SHUTDOWN IS IN PROGRESS
PROBLEM
-------
In the background thread srv_master_thread() we have a one-second
delay loop which continuously monitors server activity. If the server
is inactive (without any user activity) or in a shutdown state, we do
some background activity like flushing the changes. In the current
code we do not check whether the server is in a shutdown state before
sleeping for one second.
FIX
---
If the server is in a shutdown state, do not go into the one-second
sleep.
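A hedged sketch of the loop-body change (the shutdown flag and sleep
call are modeled with standard C++ here, not the actual InnoDB
primitives):

    #include <atomic>
    #include <chrono>
    #include <thread>

    enum shutdown_state_t { SRV_SHUTDOWN_NONE, SRV_SHUTDOWN_STARTED };
    static std::atomic<shutdown_state_t> srv_shutdown_state{SRV_SHUTDOWN_NONE};

    static void master_thread_iteration() {
      // Fix: go to the one-second sleep only while the server is NOT
      // shutting down; during shutdown, proceed straight to the
      // background activity.
      if (srv_shutdown_state.load() == SRV_SHUTDOWN_NONE)
        std::this_thread::sleep_for(std::chrono::seconds(1));

      /* ... monitor activity and flush changes as before ... */
    }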
Problem:
When the user-specified foreign key name contains "_ibfk_", InnoDB
wrongly tries to rename it.
Solution:
When a table is renamed, its associated foreign keys are also renamed,
but only if the foreign key names were automatically generated.
Foreign key names given by the user must not be renamed, even if they
contain _ibfk_.
rb#2935 approved by Jimmy, Krunal and Satya
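A minimal sketch, assuming the generated-name convention
"<table>_ibfk_<number>" (the helper below is hypothetical and only
illustrates the rename rule above):

    #include <cctype>
    #include <string>

    // Hypothetical helper: a foreign key name counts as auto-generated only
    // if it is exactly the old table name followed by "_ibfk_" and digits.
    static bool fk_name_is_auto_generated(const std::string &fk_name,
                                          const std::string &old_table_name) {
      const std::string prefix = old_table_name + "_ibfk_";
      if (fk_name.size() <= prefix.size() ||
          fk_name.compare(0, prefix.size(), prefix) != 0)
        return false;                    // user-given name: never rename it
      for (std::string::size_type i = prefix.size(); i < fk_name.size(); i++)
        if (!std::isdigit(static_cast<unsigned char>(fk_name[i])))
          return false;                  // "_ibfk_" inside a user name: keep it
      return true;                       // generated name: rename with the table
    }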
BUG#12535301- SYS_VARS.RPL_INIT_SLAVE_FUNC MISMATCHES IN DAILY-5.5
Problem:
The sys_vars.rpl_init_slave_func test was not re-recorded after
the last edit. It was disabled on 5.1 after failures were seen for
this reason.
There are no old failures, as this suite never ran with PB2 on 5.1.
Fix:
Added an assert condition after the wait-for checks.
Recorded the test and enabled it.
TO DUMP DATA FROM MYSQL-5.6
Analysis
--------
Dumping mysql-5.6 data using the mysql-5.1/mysql-5.5 'mysqldump'
utility fails with a syntax error.
The server system variable 'sql_quote_show_create', which quotes
identifiers, is set by the mysqldump utility. The mysqldump utility
of mysql-5.1/mysql-5.5 uses the deprecated syntax 'SET OPTION' to set
the 'sql_quote_show_create' option. Support for that syntax has been
removed in mysql-5.6, hence a syntax error is reported while taking
the dump.
Fix:
---
Changed the 'mysqldump' code to use the syntax
'SET SQL_QUOTE_SHOW_CREATE' to set the 'sql_quote_show_create'
option. That syntax is supported on mysql-5.1, mysql-5.5 and
mysql-5.6.
NOTE: I have not added an mtr test case since it is difficult
to simulate the condition. Also the syntax may not be further
simplified in the future.
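A hedged sketch of the change, using the MySQL C API; the helper name
and the '=1' value are illustrative rather than the exact mysqldump
code:

    #include <mysql.h>

    /* Use the syntax supported on mysql-5.1, mysql-5.5 and mysql-5.6; the
       deprecated "SET OPTION SQL_QUOTE_SHOW_CREATE=1" fails on 5.6. */
    static int set_sql_quote_show_create(MYSQL *mysql) {
      return mysql_query(mysql, "SET SQL_QUOTE_SHOW_CREATE=1");
    }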
Analysis
--------
The pthread_mutex commit_threads_m was initialized but never
used.
Fix
---
Removing the commit_threads_m mutex from the code base.
[ Approved by Marko rb#2475]
TO INCONSISTENCY
PROBLEM
--------
When we drop a partitioned table, we first gather the information
about the partitions in the table from the table_name.par file and
store it in an internal data structure. Then we delete this file and
the data in the table. If the server crashes after deleting the file,
then after recovery we cannot access the table. We cannot even drop
the table, because the drop algorithm requires the .par file to read
the partition information.
FIX
---
1. We move the deletion of the .par file to after deleting all the
table data from the storage engine.
2. During the drop operation, if we detect that the .par file is
missing, then we delete the .frm file, since there is no way of
recovering without the .par file.
[Approved by Mattias rb#2576 ]
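A hedged, self-contained sketch of the reordered drop flow; all helper
names are hypothetical stand-ins for the real handler and file
operations:

    // Hypothetical stand-ins for the real storage-engine / file operations.
    static bool par_file_exists(const char *)           { return true; }
    static bool drop_table_data_in_engine(const char *) { return true; }
    static void delete_par_file(const char *)           {}
    static void delete_frm_file(const char *)           {}

    static bool drop_partitioned_table(const char *table_name) {
      if (!par_file_exists(table_name)) {
        // 2. Without the .par file the partition info cannot be read back,
        //    so remove the orphaned .frm instead of failing the DROP.
        delete_frm_file(table_name);
        return true;
      }
      // 1. Delete all table data from the storage engine first ...
      if (!drop_table_data_in_engine(table_name))
        return false;
      // ... and delete the .par file only afterwards, so a crash in between
      // still leaves the metadata needed to retry the drop.
      delete_par_file(table_name);
      return true;
    }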
BY BINLOG_KILLED_SIMULATE.TEST
The 'mysqlbinlog' tool creates a temporary file while preparing a
LOAD DATA query. These files need to be deleted at the end of the
test script; otherwise they are left behind on the daily-run
machines, causing "no space on device" issues.
Fix:
Delete them at the end of these test scripts:
1) execute mysqlbinlog with the --local-load option to
create these files in a specified tmpdir
2) delete the tmpdir at the end of the test script
STRING CONVERSION FUNCTIONS
Problem:
While executing the prepared statement, user variable is
set to memory which would be freed at the end of
execution.
If the statement is executed again, valgrind throws
error when accessing this pointer.
Analysis:
1. First time when Item_func_set_user_var::check is called,
memory is allocated for "value" to store the result.
(In the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check
is called again. But this time it is called with
"use_result_field" set to true.
As a result, we call result_field->val_str(&value).
3. Here memory allocated for "value" gets freed. And "value"
gets set to "result_field", with "str_length" being that of
result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets
freed as this is allocated in a chunk as part of the
temporary table which is needed to execute the query.
5. Next time, when execute of the same statement is called,
"value" will be set to memory which is already freed.
A Valgrind error occurs because "str_length" is positive
(set at Step 3).
Note that the user variables list is stored as part of the Lex
object in set_var_list. Hence the persistence across executions.
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
as well, so we are backporting the same.
In the solution for Bug#11764371, we create another user_var object
and repoint it to temp_table's field. As a result, when the allocated
buffer is deleted in Step 3, the deletion does not happen because the
cloned object does not own the buffer.
So at Step 5, when we execute the statement a second time, the
original object is used, and since the deletion did not happen,
valgrind does not complain about a dangling pointer.
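A toy, self-contained model of the ownership point above (none of
these types are the real Item classes; it only shows why the
re-executed original no longer touches freed memory):

    #include <cassert>
    #include <string>

    struct UserVarModel {
      std::string owned_value;                    // buffer owned by the item
      const std::string *result_field = nullptr;  // borrowed temp-table storage
    };

    int main() {
      UserVarModel original;
      original.owned_value = "result of first execution";
      {
        std::string temp_table_field = "value taken from the temp table";
        UserVarModel clone = original;           // the extra object from the fix
        clone.result_field = &temp_table_field;  // only the clone is repointed
        /* ... the result is sent using `clone` ... */
      }   // temp table freed here: only the clone's borrowed pointer dies
      // The second execution reuses the original, whose buffer was never freed.
      assert(!original.owned_value.empty());
      return 0;
    }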
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS
Backporting these two fixes to 5.1.
Added a unit test for the my_decimal constructor and assignment
operators.
INSERT BUFFER MERGE
Problem:
When the record is merged from the change buffer to the actual page,
in a particular condition, it is assumed that the deleted rec will
be re-used by the inserted rec. With this assumption the lock is
restored on the pointer to the deleted rec itself, thinking that
it is pointing to the newly inserted rec.
Solution:
Just before restoring the lock, update the rec pointer to point
to the newly inserted record. An assert has been added to verify
this. This assert will fail without the fix and will pass with
the fix.
rb#2449 in review by Marko and Jimmy
In order to keep error message numbers stable between GA releases, we
cannot now add a new error message to 5.1/5.5, as this message would
get a number already used in 5.6.
This patch enforces this by adding a 5.1/5.5 specific check when processing
the error message file. If a new error message is added, building will
abort and report an error.
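A hedged sketch of what such a guard in the error-message processing
could look like; the constant name and frozen count are placeholders,
not the actual comp_err code:

    #include <cstdio>
    #include <cstdlib>

    // Placeholder: the number of error messages frozen for this GA series.
    static const unsigned ERRORS_FROZEN_FOR_GA = 0; /* hypothetical value */

    static void check_no_new_error_messages(unsigned messages_parsed) {
      if (messages_parsed > ERRORS_FROZEN_FOR_GA) {
        fprintf(stderr,
                "New error messages cannot be added to this GA release; "
                "they would take numbers already used by 5.6.\n");
        exit(1);   // abort the build as described above
      }
    }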
When a record contains no user data bytes (such as when the PRIMARY
KEY is an empty string and all secondary index fields are NULL or the
empty string), page_zip_decompress() could fail to set the record
heap_no correctly.
page_zip_decompress_node_ptrs(), page_zip_decompress_sec(),
page_zip_decompress_clust(): Set heap_no also at the end of the
compressed data stream.
rb#2448 approved by Jimmy Yang and Inaam Rana
innobase_convert_to_filename_charset() was by mistake kept within
the conditional compilation of UNIV_COMPILE_TEST_FUNCS. Now the
function is placed outside of UNIV_COMPILE_TEST_FUNCS. Also removed
the unnecessary log message (as in 5.6+).
Description:
Fixing a build issue. The function innobase_convert_to_system_charset()
is included only in the builtin InnoDB and is missing from the InnoDB
plugin. Adding this function to the InnoDB plugin as well.
The problem happened due to a broken left expression in the
Item_in_optimizer object. In the case of this bug, the left expression
is a runtime-created Item_outer_ref item which is deleted at the end
of the statement, so one of the Item_in_optimizer arguments becomes
invalid when re-executed. The fix is to use real_item() instead of the
original left expression. Note: it feels a bit weird that after
preparation the field is directly part of the generated Item_func_eq,
whereas during execution it is replaced with an Item_outer_ref wrapper
object.
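A toy model (hypothetical types, not the server's Item hierarchy) of
why caching real_item() is safer than caching the wrapper that is
deleted after each execution:

    struct ItemModel {
      virtual ItemModel *real_item() { return this; }
      virtual ~ItemModel() {}
    };

    // Stand-in for Item_outer_ref: a wrapper created per execution that
    // merely refers to a longer-lived underlying item.
    struct OuterRefModel : ItemModel {
      ItemModel *wrapped;
      explicit OuterRefModel(ItemModel *w) : wrapped(w) {}
      ItemModel *real_item() override { return wrapped->real_item(); }
    };

    // Caching wrapper->real_item() keeps a pointer that stays valid even
    // after the per-execution OuterRefModel wrapper is destroyed.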
Includes bug fixes for:
Bug #16722314 FOREIGN KEY ID MODIFIED DURING EXPORT
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Bug #16722314 FOREIGN KEY ID MODIFIED DURING EXPORT
Problem:
There are two situations here: either the constraint name is
explicitly given by the user, or it is automatically generated by
InnoDB. A generated constraint name is formed by adding the table
name as a prefix, and table names are stored internally in
my_charset_filename. A constraint name explicitly given by the user
is stored in UTF-8 itself. So in some situations the constraint name
is in UTF-8 and in others it is in my_charset_filename format. Hence
this problem.
Solution:
Always store the foreign key constraint name in UTF-8 even when
automatically generated.
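A minimal sketch of the invariant, with a stubbed-out conversion
helper (the real code would convert from my_charset_filename to
UTF-8; the function below is hypothetical):

    #include <string>

    // Hypothetical stand-in for the my_charset_filename -> UTF-8 conversion.
    static std::string filename_charset_to_utf8(const std::string &name) {
      /* real conversion omitted in this sketch */
      return name;
    }

    // Store every constraint name in UTF-8: user-given names already are,
    // generated names (table-name prefix + suffix) get converted first.
    static std::string constraint_name_to_store(const std::string &name,
                                                bool auto_generated) {
      return auto_generated ? filename_charset_to_utf8(name) : name;
    }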
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There was a memory leak in the function dict_create_add_foreign_to_dictionary().
The allocated pars_info_t object was not freed in the error code path.
Solution:
Allocate the pars_info_t object after the error checking.
rb#2368 in review
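A hedged sketch of the reordering; pars_info_t, the checks, and the
dictionary call are all stubbed here just to show the
allocate-after-checks pattern:

    struct pars_info_model { /* stand-in for pars_info_t */ };
    static pars_info_model *pars_info_create_model() { return new pars_info_model(); }
    static void pars_info_free_model(pars_info_model *p) { delete p; }
    static bool foreign_key_checks_fail() { return false; }  // stubbed check

    static int add_foreign_to_dictionary_model() {
      if (foreign_key_checks_fail())
        return -1;           // early error return: nothing allocated, no leak

      pars_info_model *info = pars_info_create_model();  // only on success path
      /* ... bind parameters and run the dictionary SQL ... */
      pars_info_free_model(info);
      return 0;
    }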
Eliminate a race condition on recv_sys->n_addrs which might result in
database corruption during recovery, without a recovery error being
reported.
recv_recover_page_func(): move the code segment that decrements
recv_sys->n_addrs to the end of the function, after the call to
mtr_commit().
rb://2282 approved by Inaam
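A toy model of the ordering fix using a standard atomic counter (the
real code uses recv_sys->n_addrs under the recv_sys mutex):

    #include <atomic>

    static std::atomic<unsigned> pages_left_to_recover{0};  // models n_addrs

    static void recover_page_model() {
      /* ... apply the redo records to the page ... */
      /* ... commit the mini-transaction (mtr_commit in the real code) ... */
      // Only now decrement the counter, so a thread that sees it reach zero
      // cannot conclude that recovery finished before the page was committed.
      pages_left_to_recover.fetch_sub(1, std::memory_order_release);
    }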
GROUP BY, MYISAM
Problem:
A query that uses the loose index scan optimization and contains MIN()
causes a segmentation fault when the table row length is less than the
key_length.
Analysis:
While using loose index scan for MIN(), we call key_copy() to copy
the key data from the record. This function uses a temporary record
buffer to store the key data from the record buffer. But when the key
length is greater than the buffer length, this causes a segmentation
fault.
Solution:
Provide a properly sized buffer to store the key record.
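A toy sketch of the fix (simplified types, not the server's
KEY/key_copy): the destination buffer must be sized by the key
length, which can exceed the row length:

    #include <cstring>
    #include <vector>

    struct KeyModel { unsigned key_length; };   // stand-in for KEY

    // Stand-in for key_copy(): writes key_length bytes into to_key.
    static void key_copy_model(unsigned char *to_key,
                               const unsigned char *record,
                               const KeyModel &key) {
      (void)record;
      std::memset(to_key, 0, key.key_length);
    }

    static void read_min_key(const unsigned char *record, const KeyModel &key) {
      // Fix: size the buffer by the key length, not by the (shorter) row length.
      std::vector<unsigned char> key_buffer(key.key_length);
      key_copy_model(key_buffer.data(), record, key);
    }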
OPENING MISSING PARTITION
In the ha_innobase::open() call, for normal tables, there is no retry
logic. But for partitioned tables, retry logic was introduced as a fix
for:
http://bugs.mysql.com/bug.php?id=33349
https://support.mysql.com/view.php?id=21080
Bug#33349 does not provide sufficient information to analyze the
original problem. The original problem reported by Bug#33349 is also
minor (just an annoyance and no loss of functionality). Most
importantly, the retry logic was introduced without any associated
test case.
So we are removing the retry logic for partitioned tables. When the original
problem occurs, a different solution will be explored.
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.
Analysis:
In union-type queries like
(select * from order by a) order by b;
or
(select * from order by a) union (select * from order by b);
we skip resolving 'order by a' for the 1st query, and the 'order by'
of a and b in the 2nd query.
This means that when our order by contains an Item_func_match item,
we skip resolving it.
But we maintain an ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis
of that list, we call Item_func_match::init_search(), which causes a
debug assert because the item is not resolved.
Solution:
We skip execution if the item is not fixed, and we do not fix the
index (Item_func_match::fix_index()) for items for which
Item_func_match::fix_field() has not been called, so that later
changes can check the dependency on fix_field.
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of the select query gets corrupted when a query with
group_concat having order by is present in an order by clause of the
select query. As a result, the server crashes with an assert.
Analysis:
While parsing the order by for group_concat, it is presumed that it
always appears before the actual order by of the select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
into which select->order_list is copied. Once this is done, it
empties select->order_list.
In the case presented on the bug page, as the order by has already
been parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query and
creates a fake_lex_unit, which results in the change of context
info.
Solution:
Make group_concat's order by parsing independent of the select
Problem:
A select query inside a group_concat function having an
outer reference results in a crash.
Analysis:
In the function Item_func_group_concat::add, we do not check whether
the return value of get_tmp_table_field can be NULL for a non-const
item. This can happen for a query with an outer reference.
While resolving the outer reference in the query present inside the
group_concat function, we set "const_item_cache" to false. As a
result, the call to const_item() from Item_func_group_concat::add
returns false, and we go on to check whether the field can be NULL,
resulting in the crash.
get_tmp_table_field does not return NULL for Items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check the return value of get_tmp_table_field before we access the
field contents.
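A toy sketch of the added guard (stand-in types; get_tmp_table_field
here only mimics the fact that NULL is a legitimate return value):

    struct FieldModel {
      bool is_null() const { return false; }
    };

    struct ItemModel {
      // May return nullptr for items other than Item_field,
      // Item_result_field and Item_ref.
      FieldModel *get_tmp_table_field() const { return nullptr; }
    };

    static bool field_value_is_null(const ItemModel &item) {
      FieldModel *field = item.get_tmp_table_field();
      if (field == nullptr)     // the missing check that caused the crash
        return false;
      return field->is_null();
    }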
TABLE/KEY RELATIONS
DICT_FK_MAX_RECURSIVE_LOAD was reduced from 250 to 33 in rb#2058.
But in an optimized build this recursion depth is still too deep and
resulted in a stack overflow, so we are now reducing the depth to 20.