Problem:
A select query inside a group_concat function that has an
outer reference results in a crash.
Analysis:
In function Item_func_group_concat::add, we do not check
whether the return value of get_tmp_table_field() can be
NULL for a non-const item. This can happen for a query with
an outer reference.
While resolving the outer reference in the query present
inside the group_concat function, we set "const_item_cache"
to false. As a result, the call to const_item() from
Item_func_group_concat::add returns false, and the code goes
on to use the returned field to check whether the value is
NULL, crashing because the field pointer itself is NULL.
get_tmp_table_field() does not return NULL for items of type
Item_field, Item_result_field and Item_ref.
For all other items, it returns NULL.
Solution:
Check the return value of get_tmp_table_field() before
accessing the field contents.
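A minimal sketch of the added guard (simplified; the actual
code in Item_func_group_concat::add differs in detail):

  Field *field= item->get_tmp_table_field();
  /* NULL for anything but Item_field, Item_result_field and
     Item_ref, e.g. a non-const item due to an outer reference. */
  if (field && field->is_null())
    return 0;    /* skip NULL values, as before */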
TABLE/KEY RELATIONS
The DICT_FK_MAX_RECURSIVE_LOAD was reduced from 250 to 33 in
rb#2058. But in an optimized build, this recursion depth is
still too deep and resulted in a stack overflow. So the depth
is now reduced to 20.
Fixed the get_data_size() methods for multi-point features to
check properly for the end of their respective data arrays.
Extended the point-checking function to take an optional third
argument so that cases where there is additional data in each
array element (besides the point data itself) can be covered
by the helper function, as sketched below.
Fixed the 3 cases where such an offset was present to use the
proper checking helper function.
Test cases added.
Fixed review comments.
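A hedged sketch of the extended helper; the function name,
signature and arithmetic are illustrative, not the actual
spatial code:

  /* Check that 'n_points' elements fit between 'cur' and 'end',
     where each element holds one point plus 'extra_size' bytes
     of additional data. Dividing (instead of multiplying
     n_points by the element size) avoids integer overflow. */
  static bool no_data_points(const char *cur, const char *end,
                             size_t n_points, size_t extra_size)
  {
    size_t element_size= POINT_DATA_SIZE + extra_size;
    return cur > end ||
           n_points > (size_t) (end - cur) / element_size;
  }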
REGULAR SQL VS PREPARED STATEMENT
Analysis:
---------
When passing user variables as parameters to the
prepared statements, the IF() function evaluation
turns out to be incorrect.
Consider the example:
SET @var1='0.038687';
SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
+----------+----------+
| @var1 | sqlif |
+----------+----------+
| 0.038687 | 0.038687 |
+----------+----------+
Executing a prepared statement where the parameters are
supplied:
PREPARE fail_stmt FROM "SELECT ? ,
IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
+----------+------------+
| ? | ps_if_fail |
+----------+------------+
| 0.038687 | 1 |
+----------+------------+
1 row in set (0.00 sec)
In a regular statement, or when executing a prepared
statement without passing parameters, the decimal precision
is set for the user variable of type string. The comparison
function used for evaluation considered this precision while
comparing the values.
But while executing the prepared statement with the
parameters supplied, the decimal precision was not set. Thus
a different comparison function was chosen, one that looked
at the absolute values for comparison.
Fix:
----
The fix is to set the 'decimals' field of Item_param to the
default value, which is the maximum number of decimals
(NOT_FIXED_DEC). This is the value used for cases where
strings are converted to numeric form within certain
functions. Thus the value is not rounded off during
comparison, ensuring correct evaluation.
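A minimal sketch of the fix, assuming the string-parameter
setup path of Item_param (the exact spot in item.cc may
differ):

  /* Strings converted to numeric form inside certain functions
     must keep the maximum number of decimals, so the value is
     not rounded off during comparison. */
  decimals= NOT_FIXED_DEC;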
NO ERRORS REPORTED
Problem:
=======
Errors from my_b_fill are ignored. MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: 0 can also indicate an
error, and that error is ignored. Other callers of my_b_fill
don't check for errors either: my_b_copy_to_file, and maybe
my_b_gets.
Fix:
===
An error handler is already present to check the "cache"
error that is reported during the "MYSQL_BIN_LOG::write_cache"
call. Hence error handlers are added for "my_b_copy_to_file"
and "my_b_gets".
During a my_b_fill() call, info->error= -1 is set when the
cache read fails. Hence a check for "info->error" is added
to the above two callers after my_b_fill() returns.
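A sketch of the caller-side check (my_b_fill() and the
IO_CACHE error member are real; the surrounding loop is
illustrative):

  /* 'info' is the IO_CACHE being drained. */
  size_t bytes;
  while ((bytes= my_b_fill(info)) != 0)
  {
    /* consume 'bytes' bytes starting at info->read_pos */
  }
  /* 0 can mean end-of-cache OR a failed read: disambiguate. */
  if (info->error == -1)
    goto err;                    /* report instead of ignoring */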
TABLE/KEY RELATIONS
Problem:
When there are many tables, linked together through the foreign key
constraints, then loading one table will recursively open other tables. This
can sometimes lead to thread stack overflow. In such situations the server
will exit.
I see the stack overflow problem when the thread_stack is 196608 (the default
value for 32-bit systems). I don't see the problem when the thread_stack is
set to 262144 (the default value for 64-bit systems).
Solution:
Currently, in InnoDB, there is a macro DICT_FK_MAX_RECURSIVE_LOAD which defines
the maximum number of tables that will be loaded recursively because of foreign
key relations. This is currently set to 250. We can reduce this number to 33
(anything more than 33 does not solve the problem for the default value). We
can keep it small enough so that thread stack overflow does not happen for the
default values. Reducing the DICT_FK_MAX_RECURSIVE_LOAD will not affect the
functionality of InnoDB. The tables will eventually be loaded.
rb#2058 approved by Marko
The GIS WKB reader was checking for the presence of enough
data by first multiplying the number of elements read by
their size (a multiplication that can overflow) and only then
comparing the result to the number of bytes available. The
overflow effectively turns the check off.
Fixed by:
1. Introducing a new function that does division only so
no overflow is possible.
2. Using the proper macros and parenthesizing them.
3. Doing an in-line division check in the only place where
the boundary check is done over a data structure other
than a dense points array.
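A hedged before/after sketch of the check (identifier names
are illustrative):

  /* Before: the multiplication can wrap around, making the
     comparison pass for a huge, bogus element count. */
  if (n_items * ITEM_SIZE > (size_t) (end - cur))
    return 1;   /* not enough data */

  /* After: division cannot overflow. */
  if (n_items > (size_t) (end - cur) / ITEM_SIZE)
    return 1;   /* not enough data */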
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the
server is started with "--binlog-ignore-db some database" and
fully qualified table names are used in an ALTER TABLE
statement that alters a table outside the current database
context.
Analysis:
========
The above-mentioned problem affects not only "ALTER TABLE"
statements but all kinds of statements. Once the current
default database becomes "NULL", none of the statements are
binlogged.
The current behaviour is that if the user has specified
restrictions on which databases are to be replicated and the
default db is not specified, the statement is not replicated.
This means that "NULL" is considered equivalent to everything
(default db = NULL implies "ignore: don't log the
statement").
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
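A sketch of the changed decision (the function and the
db_is_ignored() helper are illustrative, not the actual
filter code):

  bool binlog_statement(const char *default_db)
  {
    /* Old: default_db == NULL matched every --binlog-ignore-db
       rule, so the statement was silently dropped.
       New: NULL matches no ignore rule, so log the statement. */
    if (default_db == NULL)
      return true;
    return !db_is_ignored(default_db);  /* hypothetical helper */
  }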
When logging the first Query event referring to a user
variable, the slave missed logging the user variable. It
appears that at execution of a User_var event, the slave
applier considered the variable as already logged.
The reason for the misjudgement is a coincidence of query
ids: the one that the thread holds at User_var execution and
the one that the thread sees when applying the Query. While
the two are naturally different in the regular execution
branch (the two computational events are separate, individual
events), in the deferred-applying case the User_var execution
effectively belongs to its Query's processing.
Fixed by storing the query id at User_var parsing time (when
the decision to defer is taken) and temporarily substituting
it for the actual query id at User_var execution time (along
with its query). This manipulation mimics the behaviour of
the regular applying branch.
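A sketch of the substitution (member and variable names are
illustrative, not the actual patch):

  /* 'deferred_query_id' was saved when the User_var event was
     parsed and the decision to defer was taken. */
  query_id_t saved_id= thd->query_id;
  thd->query_id= deferred_query_id;  /* mimic the regular branch */
  user_var_ev->do_apply_event(rli);  /* the user var gets logged */
  thd->query_id= saved_id;           /* restore the Query's id */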
Bug#13243248 CHECK FOR "STACK OVERRUN" DOESN'T WORK WITH GCC-4.6, SERVER CRASHES
The existing check for stack direction may give wrong results
for new versions of gcc at high optimization levels.
Solution: Backport the stack-direction check from 5.5
The current size limit of the 'url' field of the help_topic
table is no longer sufficient for the contents of
fill_help_tables-5.1.sql, so loading the contents into the
table might result in a warning (or an error with stricter
modes).
Updated the type of the 'url' field of the help_topic as
well as the help_category table from char(128) to text.
Problem:
=======
Found using AddressSanitizer testing.
The mysqlbinlog utility may result in out-of-bound heap
buffer reads and thus, undefined behaviour, when processing
RBR events in the old (pre-5.1 GA) format.
The following code in process_event() would only be correct
if Rows_log_event were the base class for the
Write/Update/Delete_rows_log_event_old classes:
case PRE_GA_WRITE_ROWS_EVENT:
case PRE_GA_DELETE_ROWS_EVENT:
case PRE_GA_UPDATE_ROWS_EVENT:
...
Rows_log_event *e= (Rows_log_event*) ev;
Table_map_log_event *ignored_map=
print_event_info->m_table_map_ignored.get_table(e->get_table_id());
...
if (e->get_flags(Rows_log_event::STMT_END_F))
{
...
}
However, Rows_log_event is only the base class for the
Write/Update/Delete_rows_log_event family of classes, not
for their *_old counterparts. So the above typecasts are
incorrect for the old-format RBR events and may result (and
do result, according to AddressSanitizer reports) in reading
memory outside of the previously allocated heap buffer.
Fix:
===
The above-mentioned invalid type casts have been replaced
with casts to the appropriate old counterpart.
Note: the above-mentioned issue is present only in mysql-5.1
and 5.5. It is fixed in mysql-5.6 and above as part of
Bug#55790. Hence a few of the relevant changes of Bug#55790
are being backported to fix the current issue.
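After the backport the handling looks roughly as follows (a
sketch; in the 5.6 code, Old_rows_log_event is the base class
of the *_old events):

  case PRE_GA_WRITE_ROWS_EVENT:
  case PRE_GA_DELETE_ROWS_EVENT:
  case PRE_GA_UPDATE_ROWS_EVENT:
  ...
  Old_rows_log_event *e= (Old_rows_log_event*) ev;
  Table_map_log_event *ignored_map=
    print_event_info->m_table_map_ignored.get_table(e->get_table_id());
  ...
  if (e->get_flags(Old_rows_log_event::STMT_END_F))
  {
    ...
  }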
INTERACTIVE MODE
In interactive mode, libedit/readline allocates memory for
every new line entered, and the allocated memory is never
freed.
Fixed by freeing the allocated memory blocks appropriately.
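The fix follows the standard readline()/libedit contract (a
sketch; the actual client loop is more involved):

  char *line;
  while ((line= readline(prompt)) != NULL)
  {
    /* ... process the line ... */
    free(line);  /* readline() hands back malloc()'ed memory
                    that the caller must free */
  }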
Item_func_group_concat::copy_or_same() creates a copy of original object.
It also creates a copy of ORDER structure because ORDER struct elements may
be modified in find_order_in_list() called from Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next
elements still point to the original ORDER structs. Thus
find_order_in_list() called from an EXECUTE stmt modifies the
original ORDER item pointers so that they point to runtime
items; these items are freed after execution, so the original
ORDER structure becomes invalid.
The fix is to properly update the ORDER::next fields so that
they point to the new ORDER elements.
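A sketch of the re-linking, assuming the copied ORDER structs
sit in a contiguous array (the actual layout in
Item_func_group_concat differs):

  /* After memcpy(), order_copy[i].next still points into the
     original list; re-link it to the copied elements. */
  for (uint i= 0; i < elements; i++)
    order_copy[i].next= (i + 1 < elements) ? &order_copy[i + 1]
                                           : NULL;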
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method inside the
'String' class is not copying the str_charset member from
the R.H.S. object to the L.H.S. object. Hence the charset is
wrongly set when using string assignments.
Analysis: The above-mentioned problem was identified while
doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e. "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Copy the str_charset member in the operator '=' overload
method.
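A sketch of the fix (simplified; the other member assignments
are elided):

  String& String::operator=(const String &s)
  {
    if (&s != this)
    {
      ...
      str_charset= s.str_charset;  /* previously not copied */
    }
    return *this;
  }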
For a fresh insert, page_zip_available() was counting some fields twice.
In the worst case, the compressed page size grows by PAGE_ZIP_DIR_SLOT_SIZE
plus the size of the record that is being inserted. The size of the record
already includes the fields that will be stored in the uncompressed portion
of the compressed page.
page_zip_get_trailer_len(): Remove the output parameter entry_size,
because no caller is interested in it.
page_zip_max_ins_size(), page_zip_available(): Assume that the page grows
by PAGE_ZIP_DIR_SLOT_SIZE and the record size (which includes the fields
that would be stored in the uncompressed portion of the page).
rb#2169 approved by Sunny Bains
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
The coincidences caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record
set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of LEFT JOIN ... ON
FALSE of const table row with a row of nulls);
* we evaluate Item_is_not_null_test::val_int() as a part
of fake HAVING expression of the transformed subquery;
* as far as the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return empty set);
* however, during the evaluation of the
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag -- that
forced the subquery to return UNKNOWN instead of empty
set.
That caused a wrong result.
The problem is a regression from the small cleanup in
the fix for bug11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for bug11752543.
Before that regression the Item_is_not_null_test items
were never constants.
The fix is the rollback of Item_is_not_null_test parts
of the bug11827369 fix.
page_zip_compress_node_ptrs(): Do not attempt to invoke
deflate() when c_stream->avail_in is 0, because it would
result in Z_BUF_ERROR (and page_zip_compress() failure and
unnecessary further splits of the node pointer page). A node
pointer record can have an empty payload, provided that all
key fields are empty.
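A sketch of the guard (zlib's deflate() returns Z_BUF_ERROR
when no progress is possible, e.g. on empty input without a
flush):

  if (c_stream->avail_in)     /* skip deflate() on an empty
                                 node pointer payload */
  {
    err= deflate(c_stream, Z_NO_FLUSH);
    if (UNIV_UNLIKELY(err != Z_OK))
      break;                  /* let page_zip_compress() fail */
  }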
Approved by Jimmy Yang
GRANT STATEMENT
Description: A missing length check causes a problem while
copying the source to the destination when
lower_case_table_names is set to a value
other than 0. This patch fixes the issue
by ensuring that the required bounds check is
performed.
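A sketch of the kind of bounded copy involved (strmake() is
the usual mysys helper; the buffer and its size here are
illustrative):

  char lc_name[NAME_LEN + 1];
  /* Copy at most sizeof(lc_name) - 1 bytes and NUL-terminate,
     instead of an unbounded strcpy()/strmov(). */
  strmake(lc_name, src, sizeof(lc_name) - 1);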
Problem:
When the VALUES() function is used inappropriately in a SET
statement, the server exits:
set port = values(v);
This happens because the values(v) will be parsed as an Item_insert_value by
the parser. Both Item_field and Item_insert_value return the type as
FIELD_ITEM. But for Item_insert_value the field_name member is NULL. In
set_var constructor, when the type of the item is FIELD_ITEM we try to access
the non-existent field_name.
The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value
The Item_ident::field_name is NULL for Item_insert_value.
Solution:
In the parsing stage, in the set_var constructor, if the item
type is FIELD_ITEM and the field_name is NULL, then it is
probably an Item_insert_value. So leave it as it is for later
evaluation.
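A sketch of the guard in the set_var constructor (simplified;
'...' elides the unchanged handling):

  if (value_arg->type() == Item::FIELD_ITEM)
  {
    Item_field *item= (Item_field*) value_arg;
    if (item->field_name)
      ...                  /* plain identifier: as before */
    else
      value= value_arg;    /* field_name is NULL: probably an
                              Item_insert_value (VALUES());
                              leave it for later evaluation */
  }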
rb://2004 approved by Roy and Norvald.
HOST HAS '_' IN THE HOSTNAME
Problem:
=======
'_' and '%' are treated as a wildcards by the ACL code and
this is documented in the manual. The problem with
mysql_install_db is that it does not take this into account
when creating the initial GRANT tables:
--- cut ---
REPLACE INTO tmp_user SELECT @current_hostname,'root','','Y',
'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y',
'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',
0,0,0,0 FROM dual WHERE LOWER( @current_hostname) != 'localhost';
--- cut ---
If @current_hostname contains any wildcard characters, then
a wildcard entry will be defined for the 'root' user,
which is a flaw.
Analysis:
========
As per the bug description, when we have a hostname with a
wildcard character in it, it allows clients from several other
hosts with a similar name pattern to connect to the server as
root. For example, if the hostname is like 'host_.com', then
that name is logged in the mysql.user table. This allows
'root' users from other hosts like 'host1.com', 'host2.com'
... to connect to the server as the root user.
While creating the initial GRANT tables we do not have a
check for wildcard characters in the hostname.
Fix:
===
As part of the fix, the escape character "\" is added before
a wildcard character to make it a plain character, so that
one and only one host, with the exact name, will be able to
connect to the server.
OPENSSL
Description: Specify the preference to disable compression
while using the OpenSSL library. OpenSSL uses
zlib compression by default, which may
lead to some problems.
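One way to express that preference with the OpenSSL API
(SSL_OP_NO_COMPRESSION exists since OpenSSL 1.0.0; whether
the patch uses exactly this flag is an assumption here):

  /* Disable the zlib compression that OpenSSL may otherwise
     negotiate by default. */
  SSL_CTX_set_options(ssl_ctx, SSL_OP_NO_COMPRESSION);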
PLATFORM= MACOSX10.6 X86_64 MAX
Problem: The test was failing on pb2's mac machine because
it was not cleaned up properly. The test checks if
the command 'start slave until' throws a proper
error when issued with a wrong number/type of
parameters. After this, the replication stream was
stopped using the include file 'rpl_end.inc'.
The errors thrown earlier left the slave in an
inconsistent state for the include file to close,
and the mac machine caught this.
Fix: Started slave by invoking start_slave.inc to have a
working slave before calling rpl_reset.inc
Problem: The test file was not in good shape. It tested
the start slave until relay log file/pos combination
wrongly. A couple of commands were executed on the
master and replicated to the slave. Next, the
coordinates in terms of relay log file and pos
were noted down, followed by reset slave and start
slave until the saved relay log file/pos. Reset slave
deletes all relay log files and makes the slave
forget its replication position. So, using the
saved coordinates after reset slave is wrong.
Fix: Split the test in two parts:
a) Test for start slave until master log file/pos and
checking for correct errors in the failure
scenarios.
b) Test for start slave until relay log file/pos.
Problem: The variables auto_increment_increment and
auto_increment_offset were set in the include
file rpl_init.inc. This was only configured for
some connections that are rarely used by test
cases, so it was likely to cause confusion.
If replication tests want to set up these variables
they should do so explicitly.
Fix:
a) Removed the code that sets the variables
auto_increment_increment and auto_increment_offset
in the include file.
b) Updated the test files accordingly.
In method mysql_binlog_send, right after detecting an EOF in
the read event loop, and before deciding if we should change
to a new binlog file, there is an execution window where new
events can be written to the binlog and a rotation can
happen. When reaching the test, the function will then change
to a new binlog file, ignoring all the events written in this
window. This results in events not being replicated.
Only when the binlog is detected as deactivated in the event
loop of the dump thread can we really know that no more
events remain. For this reason, this test is now made under
the log lock at the beginning of the event loop when reading
the events.
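A sketch of the reordering (lock and function names loosely
follow the server code):

  /* At the top of the event loop, before reading: */
  mysql_mutex_lock(log_lock);
  bool binlog_is_active= mysql_bin_log.is_active(log_file_name);
  /* ... read the next event under the lock ... */
  mysql_mutex_unlock(log_lock);
  /* On EOF: switch to the next binlog file only if
     !binlog_is_active, i.e. no more events can appear here. */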
TLS AND DTLS RECORD PROTOCOLS
Description: In yaSSL, the decryption phase in the TLS
protocol depends on the type of padding. This
patch removes this dependency and makes the
error generation/decryption process independent
of the padding type.
Post-push fix:
rpl_stm_until.test was disabled because of
this bug. Enabled and fixed it.
Removed a part of the test that was obsolete:
it tested replication from a 4.0 master to a
5.0 slave.
FROM SHOW CREATE
Problem: The length of the internally generated foreign key name
is not checked.
Solution: The length of the internally generated foreign key name is
checked. If it is greater than the allowed limit, an error message
is reported. Also, the constraint name is printed in the same manner
as the table name, using the system charset information.
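A sketch of the added check (the limit and the error code are
illustrative of the idea, not necessarily the exact ones
used):

  if (strlen(fk_name) > NAME_LEN)
  {
    my_error(ER_TOO_LONG_IDENT, MYF(0), fk_name);
    return TRUE;
  }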
rb://1969 approved by Marko.