When logging to the binary log in row format, updates and deletes to a BLACKHOLE
engine table are skipped.
It is impossible to log updates and deletes to a BLACKHOLE engine table in row
format, as no row events can be generated in these cases.
After the fix, a warning is generated for UPDATE/DELETE statements that modify a
BLACKHOLE table, since no row events are logged for them in row format.
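A minimal sketch of the scenario, assuming a hypothetical table t1; after this fix the
UPDATE and DELETE below are expected to raise a warning instead of silently logging nothing:
SET SESSION binlog_format = 'ROW';
CREATE TABLE t1 (a INT) ENGINE = BLACKHOLE;
INSERT INTO t1 VALUES (1);
UPDATE t1 SET a = 2 WHERE a = 1;  -- no rows exist in a BLACKHOLE table, so no row event can be generated
DELETE FROM t1 WHERE a = 2;       -- likewise, expected to produce a warning
SHOW WARNINGS;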
Problem:
A query like
select 1 from .. order by match .. against ...;
causes a debug assert failure.
Analysis:
In union-type queries like
(select * from .. order by a) order by b;
or
(select * from .. order by a) union (select * from .. order by b);
we skip resolving "order by a" for the 1st query, and both "order by a" and
"order by b" in the 2nd query.
This means that, when our order by contains an Item_func_match item,
we skip resolving it.
But we maintain a ft_func_list, and at optimization time, when we
perform the FULLTEXT search before all regular searches on the basis of this
list, we call Item_func_match::init_search(), which causes a debug assert
because the item has not been resolved.
Solution:
We will skip execution if the item is not fixed, and we will not
fix the index (Item_func_match::fix_index()) for items for which
Item_func_match::fix_fields() has not been called, so that later changes
can check the dependency on fix_fields.
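A hedged repro sketch, assuming a hypothetical table t1 with a FULLTEXT index on column b;
the shapes below mirror the query forms discussed above:
CREATE TABLE t1 (a INT, b TEXT, FULLTEXT KEY ftx (b)) ENGINE = MyISAM;
-- MATCH ... AGAINST inside an ORDER BY that is skipped during resolution
(SELECT a FROM t1 ORDER BY MATCH (b) AGAINST ('word')) ORDER BY a;
(SELECT a FROM t1 ORDER BY a) UNION (SELECT a FROM t1 ORDER BY MATCH (b) AGAINST ('word'));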
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of select query gets corrupted when a query
with group_concat having order by is present in an order by
clause of the select query. As a result, server crashes with
an assert.
Analysis:
While parsing order by for group_concat, it is presumed that
it is always present before the actual order by for the
select query.
As a result, the parser uses select->order_list to populate the
order by items of group_concat and creates a select->gorder_list
into which select->order_list is copied. Once this is done,
it empties select->order_list.
In the case presented in the bug page, as the order by has already been
parsed when group_concat's order by is encountered, the parser
presumes that it is the second order by in the select query
and creates a fake_lex_unit, which results in the change of
context info.
Solution:
Make group_concat's order by parsing independent of the select
query's order by.
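A hedged illustration of the query shape, using hypothetical tables t1/t2; the
GROUP_CONCAT with its own ORDER BY sits inside the ORDER BY clause of the outer
SELECT, i.e. it is encountered after the outer order by has already been parsed:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
SELECT a FROM t1
ORDER BY (SELECT GROUP_CONCAT(b ORDER BY b) FROM t2);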
PARTIAL INDEX
Consider the following table definition:
CREATE TABLE t (
my_col CHAR(10),
...
INDEX my_idx (my_col(1))
)
The my_idx index is not able to distinguish between rows with
equal first-character my_col-values (e.g. "f", "foo", "fee").
Prior to this CS, the range optimizer would translate
"WHERE my_col NOT IN ('f', 'h')" into (optimizer trace syntax)
"ranges": [
"NULL < my_col < f",
"f < my_col"
]
But this was not correct because the rows with values "foo"
and "fee" would not belong to any of those ranges. However, the
predicate "my_col != 'f' AND my_col != 'h'" would translate
to
"ranges": [
"NULL < my_col"
]
because get_mm_leaf() changes from "<" to "<=" for partial
keyparts. This CS changes the range optimizer implementation
for NOT IN to behave like a conjunction of NOT EQUAL: it
replaces "<" with "<=" for all but the first range when the
keypart is partial.
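A hedged example of a query affected by the old behaviour, reusing the table
definition above (and assuming the elided columns accept defaults):
INSERT INTO t (my_col) VALUES ('f'), ('foo'), ('fee'), ('h');
-- With only the first character compared, 'foo' and 'fee' fall outside both
-- "NULL < my_col < f" and "f < my_col", so a range scan on my_idx could skip them.
SELECT my_col FROM t WHERE my_col NOT IN ('f', 'h');
-- Expected result: 'foo' and 'fee'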
BACKGROUND:
The test case i_innodb.innodb_bug14036214 leaks memory when run
under Valgrind.
ANALYSIS:
In the code path of mysql_update, a temporary file is opened
using open_cached_file().
When an error occurred in that code path, this temporary
file was not closed, since the call to close_cached_file() was
missing.
This problem exists in 5.5 but does not exist in 5.6 and
trunk.
This is because in 5.6 and trunk, when we issue the update
statement in the test case, it does not take the same code path
as in 5.5. The code path is different because a different plan
is chosen by optimizer.
See Bug#14036214 for details.
However, the problem can still be examined in 5.6 and trunk
by code inspection.
FIX:
The file opened by open_cached_file() is now closed by calling
close_cached_file() when an error occurs, so that it does not
result in a memory leak.
Problem:
A select query inside a group_concat function having an
outer reference results in a crash.
Analysis:
In the function Item_func_group_concat::add, we do not check whether
the return value of get_tmp_table_field() can be NULL for
a non-const item. This can happen for a query with an
outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, the call to const_item() from
Item_func_group_concat::add returns false, and we go on to
access the field returned by get_tmp_table_field(), which is
NULL, resulting in the crash.
get_tmp_table_field does not return NULL for Items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check for the return value of get_tmp_table_field before we
access field contents.
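A hedged repro sketch with hypothetical tables t1/t2; the select query inside
GROUP_CONCAT contains an outer reference (t1.a), which makes the item non-const
while get_tmp_table_field() still returns NULL:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (2);
SELECT GROUP_CONCAT((SELECT t2.b FROM t2 WHERE t2.b > t1.a)) FROM t1;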
Problem:
Insert with 'on duplicate key update' on a view,
crashes the server.
Analysis:
During an insert into a view, we do the following:
For insert fields and values -
1. Resolve insert values.
2. Resolve insert fields.
3. Check if the fields and values are all from a
single table of a view in case of INSERT VALUES.
Do not perform this check in the case of INSERT ... SELECT,
as the values can be read from a different table than
that of the view.
For the update fields (if DUP UPDATE is used)
1. Create a name resolution context with 'table_list' only.
2. Resolve update fields in this context.
3. Check if update fields and values are from the same
table as the insert fields.
4. Get the next name resolution context. Concatenate this
with the previous one.
5. Resolve update values in this context as we can refer
to other tables in the values clause.
Note that at step 3 (of the update fields), we check the
'used_tables map' of the update values without resolving them
first. Hence the crash.
Fix:
At step 3, do not pass the update values when checking whether it is a
single-table view update, as the update values can refer to other tables.
Code has been re-organized to function like check_insert_fields.
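A hedged sketch of the statement shape, with hypothetical tables t1/t2 and view v1;
the ON DUPLICATE KEY UPDATE values refer to a table other than the one underlying the view:
CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
CREATE TABLE t2 (c INT);
INSERT INTO t2 VALUES (5);
CREATE VIEW v1 AS SELECT a, b FROM t1;
-- the update value refers to t2, so it must not be fed unresolved
-- into the single-table-view check
INSERT INTO v1 (a, b) VALUES (1, 1)
  ON DUPLICATE KEY UPDATE b = (SELECT MAX(c) FROM t2);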
SCHEDULER DROPS EVENTS
Problem: On a semi-sync enabled server (Master/Slave),
if the event scheduler drops an event after completion,
the server crashes.
Analysis: If an event is created with the "ON COMPLETION
NOT PRESERVE" clause, the event scheduler deletes the event
upon event completion (expiration) and the thread object
is destroyed. In the destructor of the thread object,
the mysys_var member is set to zero explicitly. Later, from
the same destructor call (same execution path),
in the case of a semi-sync enabled server, when cleanup is called,
the THD::mysys_var member is accessed by the THD::enter_cond()
function, which causes the server to crash.
Fix: mysys_var should not be explicitly set to zero; doing so is
also not required.
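A hedged sketch of the scenario (all names hypothetical); the event is created with
ON COMPLETION NOT PRESERVE, so the scheduler drops it once it has expired. It assumes
the semi-sync master plugin is installed and the event scheduler is enabled:
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL event_scheduler = ON;
CREATE TABLE t1 (a INT);
CREATE EVENT e1
  ON SCHEDULE AT CURRENT_TIMESTAMP
  ON COMPLETION NOT PRESERVE
  DO INSERT INTO t1 VALUES (1);
-- once e1 has run, the scheduler drops it; the crash occurred during that cleanup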
Fixed the get_data_size() methods for multi-point features to check properly for the end
of their respective data arrays.
Extended the point checking function to take an optional third argument so that cases where
there is additional data in each array element (besides the point data itself) can be
covered by the helper function.
Fixed the 3 cases where such an offset was present to use the proper checking helper
function.
Test cases added.
Fixed review comments.
REGULAR SQL VS PREPARED STATEMENT
Analysis:
---------
When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.
Consider the example:
SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1 | sqlif |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+
Executing a prepared statement where the parameters are
supplied:
PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ? | ps_if_fail |
+----------+------------+
| 0.038687 | 1 |
+----------+------------+
1 row in set (0.00 sec)
In the regular statement or while executing the prepared
statements without passing parameters, the decimal
precision is set for the user variable of type string.
The comparison function used for evaluation considered
the precision while comparing the values.
But while executing the prepared statement with the
parameters supplied, the decimal precision was not
set. Thus a different comparison function was chosen,
one which looked at the absolute values for comparison.
Fix:
----
The fix is to set the 'decimals' field of Item_param to the
default value, which is the maximum number of
decimals (NOT_FIXED_DEC). This is set for cases where
strings are converted to numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: 0 can also result from an
error, and that error is ignored. Other callers of my_b_fill do not
check for errors either: my_b_copy_to_file, and possibly my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During my_b_fill() function call, when the cache read fails
info->error= -1 is set. Hence a check for "info->error"
is added for the above to callers upon their return.
The GIS WKB reader was checking for the presence of
enough data by first multiplying the number read (which
could overflow) and only then comparing it to the
number of bytes available.
The multiplication can overflow and effectively turn off the check.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the server
was started with "--binlog-ignore-db=some_database" and 'fully
qualified' table names are used in the ALTER TABLE statement,
altering a table different from the current database context.
Analysis:
========
The above mentioned problem not only affects "ALTER TABLE"
statements but also to all kind of statements. Once the
current default database becomes "NULL" none of the
statements will be binlogged.
The current behaviour is such that if the user has specified
restrictions on which database needs to be replicated and the
default db is not specified, then do not replicate.
This means that "NULL" is considered to be equivalent to
everything (default db = null implied ignore don't log the
statement).
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
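A hedged illustration (database and table names hypothetical); the server is started
with --binlog-ignore-db=db1 and the client connects without selecting a default database:
-- server started with: --binlog-ignore-db=db1
-- client connected with no default database (default db is NULL)
CREATE DATABASE db2;
CREATE TABLE db2.t1 (a INT);
ALTER TABLE db2.t1 ADD COLUMN b INT;
-- with the fix, these fully qualified statements are written to the
-- binary log even though no default database is set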
When logging the first Query event referring to a user var, the slave missed logging the user var.
It appears that at execution of a User_var event the slave applier
thought of the variable as already logged.
The reason for the misjudgement is a coincidence of query ids: the one that the thread
holds at User_var execution and the one that the thread sees when applying the Query.
While the two are naturally different in the regular execution branch (as the two computational
events are separated into individual events), in the deferred-applying case the User_var execution
effectively belongs to its Query's processing.
Fixed by storing the query id from User_var parsing time (where the decision to defer is taken)
and temporarily substituting it for the actual query id at User_var execution time
(along with its query).
Such manipulation mimics the behaviour of the regular applying branch.
Problem - When the slave was disconnected from the master, under certain
conditions, upon reconnect, it reported that it received a
packet larger than slave_max_allowed_packet, which causes
replication to stop.
Analysis - The reason for this failure is that on reconnect
the slave sets max_allowed_packet from the master's mi->mysql
object, which keeps max_allowed_packet at 1MB. This causes the
slave to report such an error on receiving a packet bigger than 1MB.
START SLAVE on the slave fixes the problem since it restarts
the slave threads, which initializes max_allowed_packet to
slave_max_allowed_packet.
Fix - The problem is fixed by some code refactoring and the introduction of a new
function which updates max_allowed_packet for the THD object of the
slave thread and the mysql->options max_allowed_packet.
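A hedged note on the workaround mentioned in the analysis (run on the slave);
restarting the slave threads re-initializes max_allowed_packet from slave_max_allowed_packet:
-- on the slave, as a workaround before the fix: restart the slave threads
STOP SLAVE;
START SLAVE;
-- slave_max_allowed_packet (not the 1MB value inherited from the master
-- connection) is then used again for packets received from the master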
(Based on Sinisa's patch)
Added a version checking facility to mysql_upgrade.
The versions used for checking are the version of the
server that mysql_upgrade is going to upgrade and the
server version that mysql_upgrade was built/distributed
with.
Also added an option '--version-check' to enable/disable
the version checking.
SHOW ENGINE INNOD
Problem:
The purpose of explain_filename() is to provide useful additional
information regarding the partitions, given the filename. This function
was returning an error when it was not able to parse the given filename.
For example, within InnoDB, temporary files are created with a #sql-
prefix. But this function was not able to parse such names correctly.
Solution:
It is not an error if explain_filename() cannot parse the given
filename. If there is no partition information to explain, then silently
return from the function.
rb#1940 approved by mattiasj
The range optimizer uses 'save_in_field_no_warnings()' to verify properties of
'value <cmp> field' expressions.
If this execution yields an error, it should abort.
RETURNS RANDOM DATA
MySQL 5.5 specific version of bugfix.
When Loose Index Scan Range access is used, MySQL execution needs
to copy non-aggregated fields. end_send() checked if this was
necessary by checking if join_tab->select->quick had type
QS_TYPE_GROUP_MIN_MAX.
In this bug, however, MySQL created a sort index to sort the rows
read from this range access method. create_sort_index() deletes
join_tab->select->quick, which makes it impossible to ask
the join_tab whether LIS has been used.
The fix for MySQL 5.5 is to introduce a variable in JOIN_TAB
that stores whether or not LIS has been used. There is no need
for this variable in later MySQL versions because the relevant
code has been refactored.
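A hedged sketch of a query shape where Loose Index Scan can be combined with a
filesort (table and index names hypothetical):
CREATE TABLE t1 (c1 INT, c2 INT, c3 INT, KEY k1 (c1, c2));
-- GROUP BY on an index prefix may use Loose Index Scan (GROUP_MIN_MAX);
-- the ORDER BY on the aggregate can force a sort index on top of it,
-- which used to delete join_tab->select->quick before end_send() checked it
SELECT c1, MIN(c2) AS m FROM t1 GROUP BY c1 ORDER BY m;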
Post push fix:
setup_ref_array() now uses n_sum_items to determine size of ref_pointer_array.
The problem was that n_sum_items kept growing; it wasn't reset for each query.
A similar memory leak was fixed with the patch for:
Bug 14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
Item_func_group_concat::copy_or_same() creates a copy of the original object.
It also creates a copy of the ORDER structure, because ORDER struct elements may
be modified in find_order_in_list() called from Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next elements point to the original
ORDER structs. Thus find_order_in_list(), called from an EXECUTE stmt, modifies the
original ORDER item pointers so that they point to runtime items; these items are
freed after execution, so the original ORDER structure becomes invalid.
The fix is to properly update the ORDER::next fields so that they point to the
new ORDER elements.
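A hedged repro sketch (hypothetical table t1); executing the prepared statement
more than once exercises the copied ORDER structure:
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 2), (1, 1);
PREPARE ps FROM 'SELECT a, GROUP_CONCAT(b ORDER BY b) FROM t1 GROUP BY a';
EXECUTE ps;   -- first execution rewires the memcpy'd ORDER pointers to runtime items
EXECUTE ps;   -- second execution sees the invalidated ORDER structure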
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: When 'SET' type columns are used in a DML
inside a stored procedure and a NULL value is passed
to that column, replication breaks.
Analysis: All stored procedure variables used inside
a DML statement are substituted with NAME_CONST functions.
When NAME_CONST is used in this particular scenario,
i.e., when a NULL value is passed, the charset is copied
from the 'empty_set_string' member of the Field_set class.
The operator '=' overload method inside the 'String' class
does not copy str_charset from the R.H.S object to the L.H.S object.
Hence the charset is wrongly set in the string assignment.
Fix: Copy the str_charset member in the operator '=' overload
method.
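A hedged sketch of the scenario (names hypothetical); a stored procedure parameter
holding NULL is inserted into a SET column, so the value reaches the binary log via
NAME_CONST:
CREATE TABLE t1 (s SET('a','b','c'));
DELIMITER //
CREATE PROCEDURE p1(IN v VARCHAR(10))
BEGIN
  -- v is written to the binary log as NAME_CONST('v', NULL) when NULL is passed
  INSERT INTO t1 VALUES (v);
END //
DELIMITER ;
CALL p1(NULL);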
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method inside
the 'String' class does not copy the str_charset member from
the R.H.S object to the L.H.S object. Hence the charset is wrongly
set when using string assignments.
Analysis: The above-mentioned problem was
identified while doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e., "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Copy the str_charset member in the operator '=' overload
method.
PROBLEM: If multiple statements are sent in a single
         request, only the last statement was
         getting logged. An attacker could bypass the
         audit log just by sending two consecutive
         statements in one request.
SOLUTION: Each statement from a single request is
          now logged.
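A hedged illustration; two statements sent as one multi-statement request (on a
connection with CLIENT_MULTI_STATEMENTS enabled) should now both appear in the audit log:
-- sent as a single request over a multi-statement enabled connection
SELECT 'first statement'; SELECT 'second statement';
-- previously only the second statement was logged by the audit plugin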
PROBLEM AFTER MYSQL_HA_FIND
This problem occurred if a prepared statement tried to create a table
for which there already existed a view with the same name while a
SQL handler was open.
Before DDL statements are executed, mysql_ha_rm_tables() is called
to remove any matching tables from the internal list of opened SQL
handler tables. This match was done on TABLE_LIST::db and
TABLE_LIST::table_name. This is problematic for views (which use
TABLE_LIST::view_db and TABLE_LIST::view_name) and anonymous
derived tables.
This patch fixes the problem by skipping TABLE_LISTs representing
anonymous derived tables and using get_db_name()/get_table_name()
which handles views when looking for SQL handler tables to remove.
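A hedged repro sketch with hypothetical names; a SQL handler is open while a prepared
statement tries to create a table with the same name as an existing view:
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1;
HANDLER t1 OPEN;
PREPARE ps FROM 'CREATE TABLE v1 (b INT)';
EXECUTE ps;   -- the CREATE fails because v1 already exists as a view; with the old
              -- name matching in mysql_ha_rm_tables() this triggered the problem
HANDLER t1 CLOSE;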
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
The following coincidences caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of LEFT JOIN ... ON
FALSE of const table row with a row of nulls);
* we evaluate Item_is_not_null_test::val_int() as a part
of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return an empty set);
* however, during the evaluation of
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag -- that
forced the subquery to return UNKNOWN instead of an empty
set.
That caused a wrong result.
The problem is a regression of the small cleanup in
the fix for the bug11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for the bug11752543.
Before that regression the Item_is_not_null_test items
were never constants.
The fix is to roll back the Item_is_not_null_test parts
of the bug11827369 fix.
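A hedged structural sketch of the query shape described above (tables and data
hypothetical); the innermost WHERE refers to the outer field, and the LEFT JOIN ... ON FALSE
joins a single row with a row of NULLs whose column forms the subquery's select list:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE TABLE t3 (c INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (2);
INSERT INTO t3 VALUES (3);
SELECT * FROM t1
WHERE t1.a IN (SELECT b FROM t2
               WHERE b IN (SELECT t3a.c FROM t3
                           LEFT JOIN t3 AS t3a ON FALSE
                           WHERE t3.c = t1.a));
-- expected result: empty set (the inner WHERE is FALSE for the given data)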
GRANT STATEMENT
Description: A missing length check causes a problem while
             copying source to destination when
             lower_case_table_names is set to a value
             other than 0. This patch fixes the issue
             by ensuring that the required bounds check is
             performed.