Description:
Fixing a build issue. The function innobase_convert_to_system_charset()
is included only in the built-in InnoDB and is missing from the InnoDB
plugin. Adding this function to the InnoDB plugin as well.
Includes bug fixes for:
Bug #16722314 FOREIGN KEY ID MODIFIED DURING EXPORT
Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
Eliminate a race condition over recv_sys->n_addrs which might result in database corruption
during recovery, without a recovery error being reported.
recv_recover_page_func(): move the code segment that decrements recv_sys->n_addrs
to the end of the function, after the call to mtr_commit()
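A minimal sketch of the reordering, for illustration only (recv_sys_t, the mutex, and mtr_commit_stub below are simplified stand-ins for the InnoDB originals, not the actual server code):
#include <mutex>
struct recv_sys_t {
  std::mutex mutex;
  unsigned long n_addrs = 0;  /* pages still awaiting redo application */
};
static recv_sys_t recv_sys_obj;
static void mtr_commit_stub() { /* releases page latches in real InnoDB */ }
static void recv_recover_page_sketch() {
  /* ... apply the logged records to the page ... */
  mtr_commit_stub();  /* first make the page changes fully visible */
  /* Only now announce completion: a thread that observes n_addrs == 0
     must be able to assume every page was applied AND committed. */
  std::lock_guard<std::mutex> guard(recv_sys_obj.mutex);
  --recv_sys_obj.n_addrs;
}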
rb://2282 approved by Inaam
GROUP BY, MYISAM
Problem:
A query that uses the loose index scan optimization and has MIN()
causes a segmentation fault when the table row length is less than
the key_length.
Analysis:
While using loose index scan for MIN(), we call key_copy() to copy
the key data from the record.
This function uses a temporary record buffer to store the key data
from the record buffer. But when the key length is greater than the
buffer length, this causes a segmentation fault.
Solution:
Give a properly sized buffer to store the key record.
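A hedged sketch of the idea with simplified stand-ins (uchar and key_copy_stub are illustrative, not the real server signatures):
#include <cstring>
#include <vector>
typedef unsigned char uchar;
/* stand-in for the server's key_copy(): packs key bytes from a row */
static void key_copy_stub(uchar *to_key, const uchar *record,
                          size_t key_length) {
  std::memcpy(to_key, record, key_length);
}
static void read_min_key(const uchar *record, size_t key_length,
                         size_t reclength) {
  /* Before the fix the destination was a buffer of size reclength,
     which overflows whenever key_length > reclength.  The fix sizes
     the scratch buffer by the key, not by the row. */
  std::vector<uchar> key_buf(key_length);
  key_copy_stub(key_buf.data(), record, key_length);
  /* ... use key_buf to position the index cursor for MIN() ... */
  (void)reclength;
}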
sql/opt_range.cc:
We can't use the record buffer to store key data, so give a properly sized buffer to store the key record.
OPENING MISSING PARTITION
In the ha_innobase::open() call, for normal tables, there is no retry logic.
But for partitioned tables, there is retry logic, introduced as a fix for:
http://bugs.mysql.com/bug.php?id=33349
https://support.mysql.com/view.php?id=21080
Bug#33349 does not provide sufficient information to analyze the original
problem. The original problem reported by Bug#33349 is also minor (just an
annoyance and no loss of functionality). Most importantly, the retry logic
has been introduced without any associated test case.
So we are removing the retry logic for partitioned tables. When the original
problem occurs, a different solution will be explored.
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.
Analysis:
In union-type queries like
(select * from .. order by a) order by b;
or
(select * from .. order by a) union (select * from .. order by b);
we skip resolving ORDER BY a for the first query, and ORDER BY a and b
in the second query.
This means that when our ORDER BY contains an Item_func_match item,
we skip resolving it.
But we maintain an ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis of
that list, we call Item_func_match::init_search(), which causes the
debug assert as the item is not resolved.
Solution:
We will skip execution if the item is not fixed, and we will not
fix the index (Item_func_match::fix_index()) for items on which
Item_func_match::fix_fields() has not been called, so that on later
changes we can check the dependency on fix_fields().
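A minimal sketch of the added guard (the class below is a simplified stand-in for Item_func_match; 'fixed' is the flag set by fix_fields()):
struct Item_func_match_stub {
  bool fixed;  /* set by fix_fields() on resolution */
  void init_search() { /* starts the FULLTEXT search */ }
};
static void init_ftfuncs_sketch(Item_func_match_stub **list, int count) {
  for (int i = 0; i < count; i++) {
    /* Items whose ORDER BY was never resolved (e.g. a redundant ORDER BY
       in a UNION branch) remain unfixed; calling init_search() on them
       fired the debug assert, so they are now skipped. */
    if (!list[i]->fixed)
      continue;
    list[i]->init_search();
  }
}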
sql/item_func.cc:
Skipping execution if the item is not resolved.
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of a select query gets corrupted when a query
with group_concat having order by is present in the order by
clause of the select query. As a result, the server crashes with
an assert.
Analysis:
While parsing the order by for group_concat, it is presumed that
it always appears before the actual order by of the
select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
onto which select->order_list is copied. Once this is done,
it empties the select->order_list.
In the case presented in the bug page, as the order by is already
parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query
and creates a fake_lex_unit, which results in the change of
context info.
Solution:
Make group_concat's order by parsing independent of the select
query's order by.
sql/item_sum.cc:
Change the argument, as select->gorder_list is not a pointer anymore.
sql/item_sum.h:
Change the argument, as select->gorder_list is not a pointer anymore.
sql/mysql_priv.h:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_lex.cc:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_lex.h:
Parsing for group_concat's order by is made independent.
As a result, add_order_to_list cannot be used anymore.
sql/sql_yacc.yy:
Make group_concat's order by parsing independent of the select
query's order by.
Problem:
A select query inside a group_concat function having an
outer reference results in a crash.
Analysis:
In the function Item_func_group_concat::add, we do not check whether
the return value of get_tmp_table_field can be NULL for
a non-const item. This can happen for a query with an
outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, in the call to const_item() from
Item_func_group_concat::add, it returns false, and we go on
to check whether the field value is NULL, which dereferences the
NULL pointer and results in the crash.
get_tmp_table_field does not return NULL for Items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check for the return value of get_tmp_table_field before we
access field contents.
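A minimal sketch of the added check, with simplified stand-ins for the server classes (Field_stub and Item_stub are illustrative):
struct Field_stub {
  bool is_null() const { return false; }
};
struct Item_stub {
  bool const_item() const { return false; }
  /* Returns a field only for Item_field, Item_result_field and
     Item_ref; NULL for everything else, e.g. an outer reference. */
  Field_stub *get_tmp_table_field() { return 0; }
};
static bool row_has_null_sketch(Item_stub **args, int arg_count) {
  for (int i = 0; i < arg_count; i++) {
    if (args[i]->const_item())
      continue;
    Field_stub *field = args[i]->get_tmp_table_field();
    if (field && field->is_null())  /* the fix: test 'field' first */
      return true;
  }
  return false;
}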
sql/item_sum.cc:
Check for the return value of get_tmp_table_field before accessing
field contents.
TABLE/KEY RELATIONS
The DICT_FK_MAX_RECURSIVE_LOAD was reduced from 250 to 33 in rb#2058.
But in an optimized build, this recursion depth is still too deep and
resulted in a stack overflow. So we are reducing this depth to 20 now.
Fixed the get_data_size() methods for multi-point features to check properly for the end
of their respective data arrays.
Extended the point checking function to take an optional third argument, so that cases where
there's additional data in each array element (besides the point data itself) can be
covered by the helper function.
Fixed the 3 cases where such an offset was present to use the proper checking helper
function.
Test cases added.
Fixed review comments.
REGULAR SQL VS PREPARED STATEMENT
Analysis:
---------
When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.
Consider the example:
SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1 | sqlif |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+
Executing a prepared statement where the parameters are
supplied:
PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ? | ps_if_fail |
+----------+------------+
| 0.038687 | 1 |
+----------+------------+
1 row in set (0.00 sec)
In a regular statement, or when executing prepared
statements without passing parameters, the decimal
precision is set for the user variable of type string.
The comparison function used for evaluation takes
the precision into account while comparing the values.
But when executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one that looked at the absolute values for comparison.
Fix:
----
The fix is to set the 'decimals' field of Item_param to the
default value, which is the maximum number of
decimals (NOT_FIXED_DEC). This is what is set for cases where
strings are converted to the numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
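A sketch of the change in stand-in form (NOT_FIXED_DEC is the server's "no fixed decimal count" sentinel; the reduced Item_param below is illustrative):
enum { NOT_FIXED_DEC_SKETCH = 31 };  /* mirrors the server's NOT_FIXED_DEC */
struct Item_param_sketch {
  unsigned decimals;
  void set_string_param() {
    /* A string parameter may later be compared as a number.  Leaving
       'decimals' at 0 made the comparator treat "0.038687" as 0;
       setting the maximum keeps full precision in the comparison. */
    decimals = NOT_FIXED_DEC_SKETCH;
  }
};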
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: it can also indicate an error,
and the error is ignored. Other callers of my_b_fill don't
check for errors either: my_b_copy_to_file, and maybe my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during the "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During the my_b_fill() function call, when the cache read fails,
info->error= -1 is set. Hence a check for "info->error"
is added for the above two callers upon their return.
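A sketch of the caller-side pattern (IO_CACHE_stub and my_b_fill_stub are simplified stand-ins for the mysys originals):
#include <cstddef>
struct IO_CACHE_stub {
  int error;  /* set to -1 by my_b_fill() when the underlying read fails */
};
static size_t my_b_fill_stub(IO_CACHE_stub *info) {
  /* returns 0 both at end-of-cache and on error;
     info->error disambiguates the two cases */
  info->error = 0;
  return 0;
}
static int copy_cache_sketch(IO_CACHE_stub *cache) {
  size_t bytes;
  while ((bytes = my_b_fill_stub(cache)) != 0) {
    /* ... consume 'bytes' bytes from the cache buffer ... */
  }
  /* The fix: 0 alone is not a clean EOF; inspect cache->error too. */
  if (cache->error == -1)
    return 1;  /* propagate the read failure instead of ignoring it */
  return 0;
}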
mysys/mf_iocache2.c:
Added a check for "cache->error" and simulation of cache read failure
mysys/my_read.c:
Simulation of read failure
sql/log_event.cc:
Added debug simulation
sql/sql_repl.cc:
Added a check for cache error
TABLE/KEY RELATIONS
Problem:
When there are many tables linked together through foreign key
constraints, loading one table will recursively open the other tables. This
can sometimes lead to a thread stack overflow. In such situations the server
will exit.
I see the stack overflow problem when the thread_stack is 196608 (the default
value for 32-bit systems). I don't see the problem when the thread_stack is
set to 262144 (the default value for 64-bit systems).
Solution:
Currently, in InnoDB, there is a macro DICT_FK_MAX_RECURSIVE_LOAD which defines
the maximum number of tables that will be loaded recursively because of foreign
key relations. This is currently set to 250. We can reduce this number to 33
(anything more than 33 does not solve the problem for the default value). We
can keep it small enough so that thread stack overflow does not happen for the
default values. Reducing the DICT_FK_MAX_RECURSIVE_LOAD will not affect the
functionality of InnoDB. The tables will eventually be loaded.
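A minimal sketch of the bounded recursion (the constant name matches the source; dict_table_stub and the recursion shape are simplified stand-ins):
enum { DICT_FK_MAX_RECURSIVE_LOAD_SKETCH = 33 };
struct dict_table_stub {
  int n_foreign_refs;  /* number of tables referenced via foreign keys */
};
static void dict_load_sketch(dict_table_stub *table, unsigned depth) {
  if (depth > DICT_FK_MAX_RECURSIVE_LOAD_SKETCH) {
    /* Stop recursing: the remaining referenced tables are simply
       loaded later on first use, so no functionality is lost. */
    return;
  }
  for (int i = 0; i < table->n_foreign_refs; i++) {
    dict_table_stub child = { 0 };  /* stand-in for the referenced table */
    dict_load_sketch(&child, depth + 1);
  }
}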
rb#2058 approved by Marko
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (where
it could overflow) and only then comparing it to the
number of bytes available.
The multiplication can overflow and effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
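A sketch of the two forms of the bounds check (names are illustrative, not the actual GIS reader functions):
#include <cstdint>
enum { POINT_DATA_SIZE = 16 };  /* stand-in: two 8-byte doubles per point */
/* Broken form: n_points * POINT_DATA_SIZE can wrap around 2^32,
   so a huge attacker-supplied n_points passes the check. */
static bool enough_data_mul(uint32_t n_points, uint32_t bytes_available) {
  return n_points * POINT_DATA_SIZE <= bytes_available;
}
/* Fixed form: divide instead of multiply; nothing can overflow. */
static bool enough_data_div(uint32_t n_points, uint32_t bytes_available) {
  return n_points <= bytes_available / POINT_DATA_SIZE;
}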
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the server
is started with "--binlog-ignore-db some database" and fully
qualified table names are used in the ALTER TABLE statement,
altering a table in a database different from the current database context.
Analysis:
========
The above mentioned problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the
current default database becomes "NULL", none of the
statements will be binlogged.
The current behaviour is such that if the user has specified
restrictions on which databases need to be replicated and the
default db is not specified, then we do not replicate.
This means that "NULL" is considered to be equivalent to
everything (default db = NULL implies: ignore, don't log the
statement).
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
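A sketch of the changed decision in stand-in form (the real change is the filter check in sql/rpl_filter.cc; this body is simplified):
static bool db_ok_sketch(const char *db) {
  if (db == 0) {
    /* Before the fix this returned 0: a NULL default database was
       treated as matching every ignore rule, so nothing was logged.
       After the fix it returns 1: NULL matches no filter, and the
       statement is written to the binlog. */
    return true;
  }
  /* ... otherwise compare db against the --binlog-do-db /
     --binlog-ignore-db lists ... */
  return true;
}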
mysql-test/suite/rpl/r/rpl_loaddata_m.result:
Earlier, when the default database was "NULL", DROP TABLE
was not getting logged. Post this fix it will be logged,
and the DROP will fail on the slave as the table creation
was skipped by the master because of --binlog-ignore-db=test.
mysql-test/suite/rpl/t/rpl_loaddata_m.test:
Earlier, when the default database was "NULL", DROP TABLE
was not getting logged. Post this fix it will be logged,
and the DROP will fail on the slave as the table creation
was skipped by the master because of --binlog-ignore-db=test.
sql/rpl_filter.cc:
Replaced DBUG_RETURN(0) with DBUG_RETURN(1).
When logging the first Query event referring to a user variable, the slave missed logging the user variable.
It appears that at execution of a User_var event the slave applier
considered the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one that the thread
holds at User_var execution and the one that the thread sees when applying the Query event.
While the two are naturally different in the regular execution branch (as the two computational
events are separated as individual events), in the deferred applying case the User_var execution
effectively belongs to its Query processing.
Fixed by storing the query id from User_var parsing time (when the decision to defer is taken)
and temporarily substituting it for the actual query id at User_var execution time
(along with its query).
Such manipulation mimics the behaviour of the regular applying branch.
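A sketch of the substitution (query_id_t, THD_stub and the event class below are simplified stand-ins for sql/log_event.h):
typedef unsigned long long query_id_t;
struct THD_stub {
  query_id_t query_id;
};
struct User_var_event_sketch {
  query_id_t query_id;  /* new member: id saved when deferring is decided */
  void apply_deferred(THD_stub *thd) {
    /* Temporarily pretend we still run under the original query id so
       the "was this variable already logged?" test behaves exactly as
       in the regular, non-deferred branch. */
    query_id_t sav_query_id = thd->query_id;
    thd->query_id = query_id;
    /* ... execute the SET of the user variable ... */
    thd->query_id = sav_query_id;
  }
};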
sql/log_event.cc:
Storing the User_var parsing time query id into a new member of the event,
to temporarily substitute
with it the actual query id at the User_var execution time.
sql/log_event.h:
Storage for keeping the query id in the User_var event instance is added.
Bug#13243248 CHECK FOR "STACK OVERRUN" DOESN'T WORK WITH GCC-4.6, SERVER CRASHES
The existing check for stack direction may give wrong results
for new versions of gcc at high optimization levels.
Solution: Backport the stack-direction check from 5.5
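For illustration, one robust shape of such a runtime probe (this is the general idiom, not necessarily the exact 5.5 code): the naive version compares addresses of locals within one function, which a modern gcc may inline or reorder away; routing the nested call through a volatile function pointer defeats that.
static int stack_grows_down(char *parent_local) {
  char child_local;
  /* In a deeper frame, a local sits at a lower address iff the
     stack grows downwards. */
  return &child_local < parent_local;
}
/* volatile pointer: the compiler cannot inline or fold the call */
static int (*volatile stack_probe)(char *) = stack_grows_down;
int detect_stack_direction(void) {
  char parent_local;
  return stack_probe(&parent_local);  /* 1 = grows down, 0 = grows up */
}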
The current size limit of the 'url' field of the help_topic
table is no longer sufficient for the contents of
fill_help_tables-5.1.sql, so loading the contents
into the table might result in a warning (or an error with
stricter modes).
Updated the type of the 'url' field of the help_topic as well
as the help_category table from char(128) to text.
Problem:
=======
Found using AddressSanitizer testing.
The mysqlbinlog utility may result in out-of-bound heap
buffer reads and thus, undefined behaviour, when processing
RBR events in the old (pre-5.1 GA) format.
The following code in process_event() would only be correct
if Rows_log_event were the base class of the
Write/Update/Delete_rows_log_event_old classes:
case PRE_GA_WRITE_ROWS_EVENT:
case PRE_GA_DELETE_ROWS_EVENT:
case PRE_GA_UPDATE_ROWS_EVENT:
  ...
  Rows_log_event *e= (Rows_log_event*) ev;
  Table_map_log_event *ignored_map=
    print_event_info->m_table_map_ignored.get_table(e->get_table_id());
  ...
  if (e->get_flags(Rows_log_event::STMT_END_F))
  {
    ...
  }
However, Rows_log_event is only the base class for the
Write/Update/Delete_rows_log_event family of classes, but not
for their *_old counterparts. So the above typecasts are
incorrect for the old-format RBR events and may result (and
do result, according to AddressSanitizer reports) in reading
memory outside of the previously allocated heap buffer.
Fix:
===
The above mentioned invalid type cast has been replaced with
the appropriate old-format counterpart.
Note: The above mentioned issue is present only in mysql-5.1 and
5.5. It is fixed in mysql-5.6 and above as part of
Bug#55790. Hence a few of the relevant changes of Bug#55790 are
being back-ported to fix the current issue.
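A sketch of the corrected shape (the *_old class here is a reduced stand-in; in the server the old-format events form their own hierarchy):
/* Reduced stand-in for the old-format rows event family. */
struct Old_rows_log_event_stub {
  unsigned long long get_table_id() const { return 0; }
  bool get_flags(unsigned flag) const { (void)flag; return false; }
};
static void process_old_rows_event_sketch(void *ev) {
  /* Before: 'ev' was cast to Rows_log_event, which is NOT a base class
     of the *_old events, so member offsets were wrong (heap over-read).
     After: cast to the old-format family the object really belongs to. */
  Old_rows_log_event_stub *e = static_cast<Old_rows_log_event_stub *>(ev);
  unsigned long long table_id = e->get_table_id();
  /* ... look up table_id in the ignored-table map, test STMT_END_F ... */
  (void)table_id;
}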
client/mysqlbinlog.cc:
The above mentioned invalid type cast, using a new event
object to read old events, has been replaced with
the appropriate old-format counterpart.
Note: The above mentioned issue is present only in mysql-5.1 and
5.5. It is fixed in mysql-5.6 and above as part of
Bug#55790. Hence a few of the relevant changes of Bug#55790 are
being back-ported to fix the current issue.
INTERACTIVE MODE
In interactive mode, libedit/readline allocates memory
for every new line entered, and the allocated memory is
never freed.
Fixed by freeing the allocated memory blocks appropriately.
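For reference, the standard readline ownership contract that the fix follows (generic GNU readline usage, not the client's exact code): the caller owns the returned buffer and must free() it.
#include <cstdlib>
#include <readline/readline.h>
#include <readline/history.h>
int main() {
  char *line;
  while ((line = readline("mysql> ")) != NULL) {
    if (*line)
      add_history(line);  /* history stores its own copy of the text */
    /* ... hand the line to the command processor ... */
    free(line);           /* without this, every input line leaks */
  }
  return 0;
}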
Item_func_group_concat::copy_or_same() creates a copy of the original object.
It also creates a copy of the ORDER structure, because ORDER struct elements may
be modified in find_order_in_list() called from Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next elements point to the
original ORDER structs. Thus find_order_in_list() called from an EXECUTE stmt
modifies the original ORDER item pointers so they point to runtime items; these
items are freed after execution, so the original ORDER structure becomes invalid.
The fix is to properly update the ORDER::next fields so that they point to the
new ORDER elements.
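A minimal sketch of the fix-up after the shallow copy (ORDER_stub is a reduced stand-in for the server's ORDER struct):
#include <cstring>
struct ORDER_stub {
  ORDER_stub *next;
  /* ... item pointer and the other ORDER members ... */
};
/* memcpy() copies the next pointers verbatim, so they still reference
   the ORIGINAL list.  Relink them to make the copy self-contained. */
static void copy_order_list(ORDER_stub *dst, const ORDER_stub *src,
                            int count) {
  std::memcpy(dst, src, sizeof(ORDER_stub) * count);
  for (int i = 0; i < count; i++)
    dst[i].next = (i + 1 < count) ? &dst[i + 1] : 0;
}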
sql/item_sum.cc:
Update the ORDER::next fields so that they point to the new ORDER elements.
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method inside the
'String' class is not copying the str_charset member from the
R.H.S. object to the L.H.S. object. Hence the charset is wrongly
set when using string assignments.
Analysis: The above mentioned problem was
identified while doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e. "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Handle copying the str_charset member in the operator '=' overload
method.
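A reduced sketch of the fix (the real class is String in sql/sql_string.h; this version keeps only the members needed to show the change):
struct CHARSET_INFO_stub {};
class String_sketch {
  char *Ptr;
  unsigned str_length;
  CHARSET_INFO_stub *str_charset;
public:
  String_sketch &operator=(const String_sketch &s) {
    if (&s != this) {
      Ptr = s.Ptr;
      str_length = s.str_length;
      str_charset = s.str_charset;  /* the fix: carry the charset over */
    }
    return *this;
  }
};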
sql/sql_string.h:
Handled copying the str_charset member in the operator '=' overload
method.
For a fresh insert, page_zip_available() was counting some fields twice.
In the worst case, the compressed page size grows by PAGE_ZIP_DIR_SLOT_SIZE
plus the size of the record that is being inserted. The size of the record
already includes the fields that will be stored in the uncompressed portion
of the compressed page.
page_zip_get_trailer_len(): Remove the output parameter entry_size,
because no caller is interested in it.
page_zip_max_ins_size(), page_zip_available(): Assume that the page grows
by PAGE_ZIP_DIR_SLOT_SIZE and the record size (which includes the fields
that would be stored in the uncompressed portion of the page).
rb#2169 approved by Sunny Bains
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
The coincidences caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of LEFT JOIN ... ON
FALSE of const table row with a row of nulls);
* we evaluate Item_is_not_null_test::val_int() as a part
of fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return empty set);
* however, during the evaluation of the
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag -- that
forced the subquery to return UNKNOWN instead of empty
set.
That caused a wrong result.
The problem is a regression from the small cleanup in
the fix for bug11827369 (the Item_is_not_null_test part),
which conflicts with optimizations in the fix for bug11752543.
Before that regression, Item_is_not_null_test items
were never constants.
The fix is to roll back the Item_is_not_null_test parts
of the bug11827369 fix.