In order to keep error message numbers stable between GA releases, we
cannot add a new error message to 5.1/5.5, as the message would get a
number that is already used in 5.6.
This patch enforces this by adding a 5.1/5.5-specific check when
processing the error message file: if a new error message is added, the
build aborts and reports an error.
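A minimal sketch of such a guard (the constant, variable, and message
below are hypothetical; the actual check lives in the error-message
processing utility):

  /* Hypothetical sketch: abort the build when the parsed error count
     exceeds the highest number reserved for the 5.1/5.5 series. */
  #define MAX_ERROR_MESSAGES_5_5 728   /* assumed cap, not the real value */

  if (error_count > MAX_ERROR_MESSAGES_5_5)
  {
    fprintf(stderr,
            "Error: new error messages may not be added to 5.1/5.5; "
            "the next error number is already used by 5.6.\n");
    exit(1);
  }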
INSERT BUFFER MERGE
Problem:
When a record is merged from the change buffer to the actual page,
under a particular condition it is assumed that the deleted record
will be re-used for the inserted record. Under this assumption, the
lock is restored on the pointer to the deleted record itself, on the
premise that it now points to the newly inserted record.
Solution:
Just before restoring the lock, update the rec pointer to point to the
newly inserted record. An assert has been added to verify this; it
fails without the fix and passes with it.
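A sketch of the idea (the function names below exist in InnoDB, but the
surrounding context is assumed, not taken from the actual patch):

  /* After performing the insert during the ibuf merge, repoint rec
     to the newly inserted record before restoring the lock;
     previously the stale pointer to the deleted record was used. */
  rec = page_cur_get_rec(&page_cur);      /* newly inserted record */
  ut_ad(page_rec_is_user_rec(rec));       /* assert added by the fix */
  lock_rec_restore_from_page_infimum(block, rec, block);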
rb#2449 in review by Marko and Jimmy
INNODB_FAST_SHUTDOWN IS 2
Problem:
When innodb_fast_shutdown is set to 2 and the master thread enters the
flush loop, under some circumstances it is unable to exit it, which can
cause the shutdown to hang.
This happens because of the following:
1. In the flush_loop block of code, if srv_fast_shutdown is equal
   to 2 (very fast shutdown), we do not flush dirty pages in the
   buffer pool to disk.
2. In the same flush_loop block of code, if the number of dirty
   pages exceeds the user-specified limit, we go back to step 1.
This results in an infinite loop.
Solution:
When we are in the process of doing a very fast shutdown, do not
perform step 2 above.
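A sketch of the corrected loop shape (variable and function names are
assumed, based on the description above):

  flush_loop:
      if (srv_fast_shutdown < 2) {
          /* step 1: flush dirty pages; skipped entirely when
             innodb_fast_shutdown == 2 (very fast shutdown) */
          n_pages_flushed = buf_flush_list(PCT_IO(100),
                                           IB_ULONGLONG_MAX);
      }

      /* step 2: previously this re-entered the loop whenever too many
         dirty pages remained, looping forever during a very fast
         shutdown; it is now bypassed in that case */
      if (srv_fast_shutdown != 2
          && buf_get_modified_ratio_pct() > srv_max_buf_pool_modified_pct) {
          goto flush_loop;
      }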
rb#2328 approved by Inaam.
When a record contains no user data bytes (such as when the PRIMARY
KEY is an empty string and all secondary index fields are NULL or the
empty string), page_zip_decompress() could fail to set the record
heap_no correctly.
page_zip_decompress_node_ptrs(), page_zip_decompress_sec(),
page_zip_decompress_clust(): Set heap_no also at the end of the
compressed data stream.
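In outline (a sketch; the surrounding decompression loop is assumed,
while rec_set_heap_no_new() is the real helper that stores a heap
number in a new-style record header):

  /* Each record decoded inside the decompression loop already gets
     its heap number assigned: */
  rec_set_heap_no_new(rec, heap_no++);

  /* The fix performs the same assignment once more when the end of
     the compressed data stream is reached, so a record with no user
     data bytes no longer ends up with an unset heap_no. */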
rb#2448 approved by Jimmy Yang and Inaam Rana
AFTER A ROW IS READ
Approved by: Sunny Bains rb://2425
Don't release concurrency tickets when asked to release
btr_search_latch. This is a 5.5-only bug; it is already fixed in 5.6
and later.
In log_event.h
DESCRIPTION:
Due to the inclusion of an implementation file, namely 'rpl_tblmap.cc',
in a header file, namely 'log_event.h', linker errors occur if
log_event.h is included in an application consisting of multiple source
files, as in the case of the Binlog API.
The Binlog API requires including log_event.h in its source files,
which leads to multiple-definition errors for the functions of class
'table_mapping' defined in rpl_tblmap.cc.
FIX:
Move the inclusion from the header file (log_event.h) to the source
files that use this header and have the flag MYSQL_CLIENT set. The only
such file in the current server repository is mysqlbinlog.cc.
client/mysqlbinlog.cc:
Pulled in code of rpl_tblmap.cc
sql/log_event.h:
Removed inclusion of the implementation file from this header file
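In outline, the change looks like this (a sketch, not the exact diff):

  /* sql/log_event.h, before the fix: */
  #ifdef MYSQL_CLIENT
  #include "rpl_tblmap.cc"  /* implementation pulled into every includer */
  #endif

  /* client/mysqlbinlog.cc, after the fix: */
  #include "log_event.h"
  #include "rpl_tblmap.cc"  /* now included in exactly one source file */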
== Analysis ==
Both change buffer pages and on-disk index pages are marked as
FIL_PAGE_INDEX, so all ibuf index pages are classified as INDEX with a
NULL table_name and index_name.
== Solution ==
A new page type for ibuf data pages, named I_S_PAGE_TYPE_IBUF, is
defined. All pages whose index_id equals (DICT_IBUF_ID_MIN +
IBUF_SPACE_ID) are classified as IBUF_DATA instead of INDEX in
INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU.
This fix only affects I_S reporting; both the on-disk and buffer pool
structures remain unchanged.
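A sketch of the classification logic (the identifiers are those named
above; the surrounding I_S code is assumed):

  /* When filling INNODB_BUFFER_PAGE rows, report change buffer pages
     under their own type instead of the generic INDEX type. */
  if (fil_page_get_type(page) == FIL_PAGE_INDEX) {
      index_id_t  id = btr_page_get_index_id(page);

      page_info->page_type =
          (id == DICT_IBUF_ID_MIN + IBUF_SPACE_ID)
          ? I_S_PAGE_TYPE_IBUF    /* reported as IBUF_DATA */
          : I_S_PAGE_TYPE_INDEX;  /* regular index page */
  }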
Approved by both Marko and Jimmy. rb#2334
WITH COMPOSITE KEY COLUMNS
Problem:-
A SELECT query with several AGGR(DISTINCT) functions that refer to
different fields of the same composite key returns an incorrect value.
Analysis:-
In a table with a composite key such as (a,b,c), given a query like
select COUNT(DISTINCT b), SUM(DISTINCT a) from ....
we first make a list of the items inside the AGGR(DISTINCT) functions
(here a and b), where the order of the items doesn't matter, and then
check whether we have a composite key whose prefix of index columns
matches the items of the aggregation functions (here we have a,b,c).
If yes, we can use a loose index scan and need not perform duplicate
removal for the DISTINCT in our aggregate functions.
In our table, we traverse the rows marked with <-- and get:

(a,b,c)      count(distinct b)     sum(distinct a)
             treated as count(b)   treated as sum(a)
(1,1,2) <--  1                     1
(1,2,2) <--  1++ = 2               1+1 = 2
(1,2,3)
(2,1,2) <--  2++ = 3               1+1+2 = 4
(2,2,2) <--  3++ = 4               1+1+2+2 = 6
(2,2,3)

The result will be (4,6), but it should be (2,3).
In this case our assumption is incorrect. Only for a query like
select count(distinct a,b), sum(distinct a,b) from ..
can we use a loose index scan.
Solution:-
When a query has more than one AGGR(DISTINCT) function, they must refer
to the same fields, as in
select count(distinct a,b), sum(distinct a,b) from ..
--> the loose index scan can be used, as both AGGR(DISTINCT) refer to the same fields a,b.
If they refer to different fields, as in
select count(distinct a), sum(distinct b) from ..
--> the loose index scan will not be used, as the AGGR(DISTINCT) functions refer to different fields.
innobase_convert_to_filename_charset() was by mistake kept within the
conditional compilation of UNIV_COMPILE_TEST_FUNCS. The function is now
placed outside UNIV_COMPILE_TEST_FUNCS. Also removed an unnecessary log
message (as in 5.6+).
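In outline (a sketch of the placement, with the argument list omitted):

  /* Before: the function was only compiled for the test build. */
  #ifdef UNIV_COMPILE_TEST_FUNCS
  void innobase_convert_to_filename_charset(/* ... */);
  #endif

  /* After: compiled unconditionally, since regular server code paths
     call it. */
  void innobase_convert_to_filename_charset(/* ... */);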
NUMBER ALREADY USED BY 5.6
The problem was that the patch for Bug#13004581 added a new error
message to 5.5, causing it to use an error number already used in 5.6
by ER_CANNOT_LOAD_FROM_TABLE_V2. This breaks error message number
stability between GA releases.
This patch fixes the problem by removing the error message and using
ER_UNKNOWN_ERROR instead.
STRING CONVERSION FUNCTIONS
Problem:
While executing a prepared statement, a user variable is set to point
to memory that is freed at the end of execution.
If the statement is executed again, Valgrind reports an error when this
pointer is accessed.
Analysis:
1. The first time Item_func_set_user_var::check is called, memory is
   allocated for "value" to store the result (in the call to
   copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check is called
   again, but this time with "use_result_field" set to true. As a
   result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" is freed, and "value" is set
   to point into "result_field", with "str_length" being that of
   result_field's.
4. In the call to JOIN::cleanup, result_field's memory is freed, as it
   is allocated in a chunk as part of the temporary table needed to
   execute the query.
5. The next time the same statement is executed, "value" is set to
   memory which has already been freed. A Valgrind error occurs
   because "str_length" is positive (set at Step 3).
Note that the user variables list is stored as part of the Lex object
in set_var_list, hence the persistence across executions.
Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem as
well, so the same is backported.
In the solution for Bug#11764371, we create another user_var object and
repoint it to temp_table's field. As a result, when the allocated
buffer is deleted in Step 3, no deletion happens, because the cloned
object does not own the buffer.
So at Step 5, when we execute the statement a second time, the original
object is used and, since no deletion happened, Valgrind does not
complain about a dangling pointer.
sql/item_func.h:
Add constructors.
sql/sql_select.cc:
Change user variable assignment functions to read from fields after
tables have been unlocked.
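A sketch of the backported idea (the constructor shape and field access
are assumed from the notes above, not copied from the patch):

  /* sql/sql_select.cc (sketch): when reading user-variable results
     back from the temporary table, work on a copy of the
     Item_func_set_user_var that points at the temp table's field.
     The copy never owns the "value" buffer, so Step 3 above no
     longer frees memory that the original item still references on
     the next execution. */
  Item_func_set_user_var* suv=
      new Item_func_set_user_var(thd, item_func);  /* added constructor */
  suv->result_field= tmp_table->field[field_no];   /* repoint the copy */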
The problem happened due to a broken left expression in the
Item_in_optimizer object. In the case of the bug, the left expression
is a runtime-created Item_outer_ref item which is deleted at the end of
the statement, so one of the Item_in_optimizer arguments becomes
invalid when re-executed. The fix is to use real_item() instead of the
original left expression. Note: It feels a bit weird that after
preparing, the field is directly part of the generated Item_func_eq,
whereas in execution it is replaced with an Item_outer_ref wrapper
object.
sql/item_subselect.cc:
Use left_expr->real_item() instead of the original left expression,
because left_expr can be a runtime-created Ref item which is deleted at
the end of the statement. Thus one of the 'substitution' arguments can
be broken in the case of PS.
TRANSACTION ROLLBACK
Problem:
=======
"prepare_commit_mutex" is acquired during "innobase_xa_prepare"
and it is freed only in "innobase_commit". After prepare,
if the commit operation fails the transaction is rolled back
but the mutex is not released.
Analysis:
========
During the transaction commit process, the transaction is prepared and
the "prepare_commit_mutex" is acquired to preserve the order of
commits. After prepare, the write to the binlog is initiated.
File: sql/handler.cc
  if (error || (is_real_trans && xid &&
----->          (error= !(cookie= tc_log->log_xid(thd, xid)))))
  {
    ha_rollback_trans(thd, all);
In the above code, the "tc_log->log_xid" operation fails. When the
write to the binlog fails, the transaction is rolled back without the
mutex being freed. A subsequent "INSERT" operation then tries to
acquire the same mutex during its commit process, and the server
aborts.
Fix:
===
"prepare_commit_mutex" is freed during "innobase_rollback".
storage/innobase/handler/ha_innodb.cc:
Added code to free "prepare_commit_mutex"
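A sketch of the fix (the guard helper is assumed; only the mutex name
is taken from the message):

  static int
  innobase_rollback(handlerton* hton, THD* thd, bool all)
  {
      /* ... existing rollback work ... */

      /* Fix: if this transaction acquired prepare_commit_mutex in
         innobase_xa_prepare(), release it here as well, not only in
         innobase_commit(), so a failed binlog write no longer leaves
         the mutex locked. */
      if (trx_owns_prepare_commit_mutex(trx)) {  /* assumed helper */
          mysql_mutex_unlock(&prepare_commit_mutex);
      }

      /* ... */
  }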
The problem was that if an UPDATE with a subselect caused a deadlock
inside InnoDB, the deadlock was not properly handled by the SQL layer.
This meant that the SQL layer would try to unlock the row after InnoDB
had rolled back the transaction, causing an assertion inside InnoDB.
This patch fixes the problem by checking for errors
reported by SQL_SELECT::skip_record() and not calling
unlock_row() if any errors have been reported.
This bug is similar to Bug#13586591, but for UPDATE rather than
DELETE. Similar issues in filesort/opt_range/sql_select will be
investigated and handled in the scope of Bug#16767929.
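A sketch of the pattern (the calling convention is assumed:
skip_record() returns true when condition evaluation raised an error,
and sets its out-parameter when the row is filtered out):

  /* sql/sql_update.cc (sketch): inspect skip_record()'s result
     before unlocking the row. */
  bool skip;
  if (select && select->skip_record(thd, &skip)) {
      error= 1;   /* e.g. deadlock: InnoDB has rolled back the
                     transaction, so do NOT call unlock_row() */
      break;
  }
  if (skip) {
      table->file->unlock_row();  /* safe: row merely filtered out */
      continue;
  }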
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There are two situations here: the constraint name is either explicitly
given by the user or automatically generated by InnoDB. A generated
constraint name is formed by adding the table name as a prefix, and
table names are stored internally in my_charset_filename. A constraint
name explicitly given by the user is stored in UTF-8 as given. So in
some situations the constraint name is in UTF-8 and in others it is in
my_charset_filename format. Hence this problem.
Solution:
Always store the foreign key constraint name in UTF-8 even when
automatically generated.
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Problem:
There was a memory leak in the function
dict_create_add_foreign_to_dictionary(): the allocated pars_info_t
object was not freed in the error code path.
Solution:
Allocate the pars_info_t object after the error checking.
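In outline, the reordering looks like this (a sketch; the check shown
is a stand-in for the real one):

  dberr_t
  dict_create_add_foreign_to_dictionary(/* ... */)
  {
      pars_info_t*  info;

      /* Fix: perform the checks that can fail BEFORE allocating, so
         the error path has nothing to leak. */
      if (foreign->id == NULL) {      /* stand-in error check */
          return(DB_ERROR);
      }

      info = pars_info_create();      /* allocated only on the success
                                         path now */
      /* ... build and execute the SQL; the parser consumes and frees
         the pars_info_t object ... */
  }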
rb#2368 in review
Eliminate a race condition on recv_sys->n_addrs which might result in
database corruption during recovery, without a recovery error being
reported.
recv_recover_page_func(): move the code segment that decrements
recv_sys->n_addrs to the end of the function, after the call to
mtr_commit().
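A sketch of the reordering (the names are those in the message; the
locking details are assumed):

  /* recv_recover_page_func(), at the end (sketch): */
  mtr_commit(&mtr);

  /* Fix: only now count this page as fully recovered. Decrementing
     before mtr_commit() could make recovery appear complete while the
     page was still being applied, a race that could corrupt the
     database without any recovery error being reported. */
  mutex_enter(&(recv_sys->mutex));
  ut_a(recv_sys->n_addrs > 0);
  recv_sys->n_addrs--;
  mutex_exit(&(recv_sys->mutex));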
rb://2282 approved by Inaam
GROUP BY, MYISAM
Problem:-
In a query using the loose index scan optimization, MIN() causes a
segmentation fault when the table row length is less than key_length.
Analysis:
While using a loose index scan for MIN(), we call key_copy() to copy
the key data from the record. This function uses a temporary record
buffer to store the key data from the record buffer. But when the key
length is greater than the buffer length, this causes a segmentation
fault.
Solution:
Give a proper buffer to store the key record.
sql/opt_range.cc:
We can't use the record buffer to store key data, so give a proper
buffer to store the key record.
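A sketch of the idea (the allocation shown is assumed; key_copy() and
index_info are the real names):

  /* sql/opt_range.cc (sketch): copy the key into a buffer sized for
     the key itself, instead of reusing a record buffer that is only
     guaranteed to hold a table row (which may be shorter than the
     key). */
  uchar* key_buf= (uchar*) alloc_root(thd->mem_root,
                                      index_info->key_length);
  key_copy(key_buf, record, index_info, index_info->key_length);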
After a clean shutdown, InnoDB will not check the *.ibd file headers,
for maximum performance. This is unchanged before and after this
patch.
What this fix addresses is the case when crash recovery is
needed. Previously, InnoDB could load a corrupted tablespace file.
buf_page_is_corrupted(): Add the parameter check_lsn.
fil_check_first_page(): New function to perform a consistency check on
the first page of a file (see the sketch after this list). The check
can be overridden by setting innodb_force_recovery.
fil_read_first_page(), fil_open_single_table_tablespace(),
fil_load_single_table_tablespace(): Invoke fil_check_first_page().
open_or_create_data_files(): Check the status of
fil_open_single_table_tablespace().
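A sketch of the new check (the return convention is assumed: NULL on
success, an error message otherwise):

  static const char*
  fil_check_first_page(const page_t* page)
  {
      if (srv_force_recovery >= SRV_FORCE_IGNORE_CORRUPT) {
          return(NULL);   /* check overridden by the user via
                             innodb_force_recovery */
      }

      if (buf_page_is_corrupted(false, page, 0)) {
          return("checksum mismatch");
      }

      /* ... further consistency checks on the space id and
         tablespace flags ... */
      return(NULL);
  }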
rb#2352 approved by Jimmy Yang
When logging to the binary log in row format, updates and deletes to a
BLACKHOLE engine table are skipped.
It is impossible to log updates and deletes to a BLACKHOLE engine table
in row format, as no row events can be generated in these cases.
After the fix, a warning is generated for UPDATE/DELETE statements that
modify a BLACKHOLE table, as row events are not logged in row format.
SINGLE DATABASE; MYSQLBINLOG
Problem: If the last subevent inside an RBR event is skipped while
replaying a binary log using mysqlbinlog with the --database=...
option, the generated output is missing the end marker ('/*!*/;') for
that RBR event, thus causing a syntax error when the generated output
is replayed.
Fix: Append the end marker ('/*!*/;') if the last subevent is getting
skipped.
client/mysqlbinlog.cc:
Append end marker if the last
subevent is getting skipped.
MYSQL_READ_DEFAULT_FILE
Parsing of the plugin-dir config file option was not working due to a
typo. Fixed the typo. No test case can be added due to lack of support
for defaults-extra-file testing in mysql-test-run.pl.
Thanks to Sinisa for contributing the fix.
OPENING MISSING PARTITION
In the ha_innobase::open() call there is no retry logic for normal
tables, but for partitioned tables retry logic was introduced as a fix
for:
http://bugs.mysql.com/bug.php?id=33349
https://support.mysql.com/view.php?id=21080
Bug#33349 does not provide sufficient information to analyze the
original problem, and the problem it reports is minor (just an
annoyance, with no loss of functionality). Most importantly, the retry
logic was introduced without any associated test case.
So we are removing the retry logic for partitioned tables. If the
original problem occurs again, a different solution will be explored.