Commit graph

29120 commits

Luis Soares
fcf33b601c BUG#17066269
- Automerged from bug branch into latest mysql-5.5.
- Fixed trailing whitespaces.
- Updated the copyright notice year to 2014.
2014-01-09 12:53:49 +00:00
mithun
672f18c157 Bug #17307201 : FAILING ASSERTION: PREBUILT->TRX->CONC_STATE == 1
FROM SUBSELECT
ISSUE         : In function find_all_keys, if the selected row
                does not satisfy the condition, we call unlock_row
                to release the locked row. However, if the condition
                contains a subquery and an InnoDB error occurs during
                its execution, unlock_row must not be called: if the
                error is a deadlock, InnoDB rolls back the transaction,
                and calling unlock_row without a transaction is invalid,
                hence the assertion failure.
SOLUTION      : We call unlock_row only if no error occurred
                previously.
                The solution is backported from 5.6,
                defect number 14226481.


sql/filesort.cc:
  Now we call unlock_row only if there is no
  previous error.
2014-01-09 11:17:51 +05:30
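
A minimal standalone sketch of the idea behind this fix; Handler, Row and find_matching_keys are illustrative stand-ins, not the actual filesort code:

#include <vector>

struct Row { int key; bool matches; };

struct Handler {
  bool error_raised = false;   // e.g. set after InnoDB rolled back on deadlock
  void unlock_row() { /* would release the row lock taken by the scan */ }
};

// Collect the keys of matching rows; release locks on non-matching rows
// only while no storage-engine error has been raised.
std::vector<int> find_matching_keys(Handler &h, const std::vector<Row> &rows) {
  std::vector<int> keys;
  for (const Row &r : rows) {
    if (r.matches)
      keys.push_back(r.key);
    else if (!h.error_raised)  // the fix: skip unlock_row after an error,
      h.unlock_row();          // the transaction may already be rolled back
  }
  return keys;
}
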
Nisha Gopalakrishnan
df1df7eaae BUG#17324415:GETTING MYSQLD --HELP AS ROOT EXITS WITH 1
Analysis
--------

Running 'mysqld --help --verbose' as the root user without
the '--user' option displays the help contents but
aborts at the end with exit code 1.

While starting the server, a validation is performed to
ensure that when the server is started as the root user, the
'--user' option is supplied; otherwise we abort. In the case
of --help, we dump the help contents and then abort.

Fix:
---
During the validation, we skip aborting the server in case
the help option is used under the condition mentioned
above.

NOTE: Test case has not been added since it requires using 
      'root' user.
2014-01-08 10:04:05 +05:30
Bharathy Satish
c1fa843d65 Bug #17503460 MYSQL READ ONLY DOESN'T WORK FOR DROP TRIGGER
Problem: DROP TRIGGER succeeds even after setting the read_only
variable to ON.
Fix: Report the standard error (ER_OPTION_PREVENTS_STATEMENT)
when the global read_only variable is set to ON.
2014-01-07 15:11:05 +05:30
Murthy Narkedimilli
c92223e198 Updated/added copyright headers 2014-01-06 10:52:35 +05:30
Luis Soares
7481cf6c29 BUG#17066269: AUTO_INC VALUE NOT PROPERLY GENERATED WITH RBR AND
AUTO_INC COLUMN ONLY ON SLAVE

In RBR, if the slave's table has one additional auto_inc column,
it will insert the value 0 instead of generating the next
auto_inc number.

We fix this by checking whether an extra auto_inc column exists
when compared to the column data of the row event; if so, we
explicitly set it to NULL and flag the engine that a nulled
auto_inc column will be inserted.
2013-12-18 11:17:24 +00:00
Venkatesh Duggirala
11c0805e1d Bug#17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Post Push: Fixing Werror compiler issue
2013-12-18 13:52:49 +05:30
Venkatesh Duggirala
5fa9664b07 Bug#17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Problem: If the slave receives a corrupted row event,
the slave server crashes.

Analysis: When the slave unpacks the row event, it does
not validate the data before applying the event. If the
data is corrupted, e.g. the length of a field is wrong,
it could end up reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog with the '-v' option. Due to -v,
the tool tries to print the values of all the fields, and a
corrupted field length can lead to a crash.

Fix: Before unpacking a field, its length is verified.
Only if it falls within the event range is the field
unpacked. Otherwise the "ER_SLAVE_CORRUPT_EVENT" error
is thrown. In the mysqlbinlog -v case, the field value is
not printed and processing of the file is stopped.

sql/field.h:
  Removed a function which is not required anymore
sql/log_event.cc:
  Adding a validation on the field length before
  the tool tries to print the value.
sql/log_event.h:
  Changing unpack_row call according to the new arguments
sql/log_event_old.h:
  Changing unpack_row call according to the new arguments
sql/rpl_record.cc:
  Adding a new argument 'row_end' which gives
  the end position of the complete data in the
  row event. It will be used for validation
  before unpacking a field.
sql/rpl_record.h:
  Adding a new argument 'row_end' which gives
  the end position of the complete data in the
  row event. It will be used for validation
  before unpacking a field.
sql/rpl_utility.cc:
  Now calc_field_size() is required for client too.
2013-12-17 22:11:22 +05:30
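
The shape of such a bounds check, reduced to a standalone sketch (the buffer layout and function names here are illustrative, not the actual unpack_row() interface):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Copies a length-prefixed field out of [ptr, row_end) only when the
// recorded length fits inside the event; returns false for corrupt data.
bool unpack_field(const uint8_t *ptr, const uint8_t *row_end,
                  char *out, size_t out_size) {
  if (row_end - ptr < 1) return false;            // no room for the length byte
  size_t len = *ptr++;                            // 1-byte length prefix
  if (static_cast<size_t>(row_end - ptr) < len || len + 1 > out_size)
    return false;                                 // corrupted length: reject
  memcpy(out, ptr, len);
  out[len] = '\0';
  return true;
}

int main() {
  const uint8_t event[] = {250, 'a', 'b', 'c'};   // claims 250 bytes, has 3
  char buf[64];
  if (!unpack_field(event, event + sizeof(event), buf, sizeof(buf)))
    puts("corrupt event rejected");               // the ER_SLAVE_CORRUPT_EVENT path
}
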
Guilhem Bichot
c90cdf5d49 Bug#16539979 - BASIC SELECT COUNT(DISTINCT ID) IS BROKEN
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK 

Problem:
COUNT DISTINCT gives an incorrect result when it uses a Unique
tree and its last inserted record has a NULL value.

Here is how COUNT DISTINCT is processed, given that this query is not
using loose index scan.

When a row is produced as a result of joining tables (there is only
one table here), we store the SELECTed value in a Unique tree. This
allows elimination of any duplicates, and thus implements DISTINCT.

When we have processed all rows like this, we walk the Unique tree,
counting its elements, in Aggregator_distinct::endup() (tree->walk());
for each element we call Item_sum_count::add(). This function wants to
ignore any NULL value, and for that it checks
item_sum->args[0]->null_value. That is a mistake: when walking the
Unique tree, the value to be aggregated is not item_sum->args[0] but
rather table->field[0].

Solution:
Instead of item_sum->args[0]->null_value, use arg_is_null(), which
knows where to look (as in the fix for bug 57932).

As a consequence of this solution, we have to make arg_is_null() a
little more general:
1) Because it was so far only used for AVG() (which always has a
single argument), this function was looking at a single argument; now
that it has to work with COUNT(DISTINCT expression1,expression2), it
must look at all arguments.
2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in
Item_sum_count::add(), it implies that we are also using it for
COUNT(no DISTINCT) (same add()). For COUNT(no DISTINCT), the
nullness to check is that of item_sum->args[0]. But the null_value
of such an item is reliable only if val_*() has been called on it. So
far arg_is_null() was always used after a call to arg_val*(), so it
could rely on null_value; but for COUNT, there is no call to arg_val*(),
so arg_is_null() has to call is_null() instead.

Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by
Xiaobin Lin from Taobao.
2013-12-04 12:32:42 +01:00
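
A toy illustration of the point that nullness must be tested on the element being walked, not on a stale item member; std::set and std::optional stand in for the Unique tree and a nullable field, this is not the server's Aggregator_distinct code:

#include <iostream>
#include <optional>
#include <set>

int main() {
  // The "Unique tree": duplicates are already eliminated, NULL is one element.
  std::set<std::optional<int>> unique_tree = {1, 2, std::nullopt, 2, 1};

  long long count = 0;
  for (const std::optional<int> &v : unique_tree) {
    if (v.has_value())      // test the walked value itself, not cached state
      ++count;
  }
  std::cout << count << '\n';   // prints 2: NULL is excluded from COUNT(DISTINCT)
}
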
Anirudh Mangipudi
df20283086 Bug#12428404 MYSQLD.EXE CRASHES WHEN EXTRACTVALUE() IS CALLED WITH
MALFORMED XPATH EXP
Problem:
A malformed XPATH expression in an ExtractValue query causes
a server crash. The malformed XPATH expression results when
the position argument of the substring function begins
with "..".
Solution:
The original crash happens because the "../" is evaluated
prematurely: it tries to access the XML while it hasn't been
parsed yet. The premature evaluation happens because the
val_nodeset functions are marked as constant, in which case we
proceed to evaluate them in the JOIN::prepare stage only. The
solution is to mark the val_nodeset functions as non-constant.
This forces us to evaluate the function in the JOIN::exec stage
and thus avoids any premature evaluation of the XML strings.
2013-11-25 13:50:19 +05:30
Anirudh Mangipudi
f80d565377 Bug#12428404 MYSQLD.EXE CRASHES WHEN EXTRACTVALUE() IS CALLED
WITH MALFORMED XPATH EXP
Problem:
A malformed XPATH expression in an ExtractValue query causes
a server crash. The malformed XPATH expression results when
the position argument of the substring function begins
with "..".
Solution:
The original crash happens because the "../" is evaluated
prematurely: it tries to access the XML while it hasn't been
parsed yet. The premature evaluation happens because the
val_nodeset functions are marked as constant, in which case we
proceed to evaluate them in the JOIN::prepare stage only. The
solution is to mark the val_nodeset functions as non-constant.
This forces us to evaluate the function in the JOIN::exec stage
and thus avoids any premature evaluation of the XML strings.
2013-11-25 13:49:07 +05:30
Mattias Jonsson
dc7db7991a backport of Bug#17401628
revid:mattias.jonsson@oracle.com-20131119103616-u6t82s8cpgp0q3ex

Use of uninitialized memory in the priority queue used for returning records
in sorted order.

It happens if no partition has returned a row since the
beginning of index_init, an index_read* call returned
HA_ERR_KEY_NOT_FOUND for all partitions (otherwise the record
buffer/priority queue would be initialized), and an index_next/prev
call where all partitions return HA_ERR_END_OF_FILE.
2013-11-20 13:13:18 +01:00
mithun
020edb1cab Bug #17708621 : EXCEEDING SORT_BUFFER_SIZE (FILE SORT)
WITH SORT ABORTED LEAKS FILE DESCRIPTORS

ISSUE : The IO_CACHE used for an index_merge quick select
is freed only on successful retrieval of all rows
from the index merge.
If this row retrieval is interrupted or fails (for
example by a KILL_QUERY signal), the IO_CACHE
resources allocated by the index_merge quick select
are not freed, and hence the temp file associated
with it is also not closed.
This leads to a file descriptor leak.

SOLUTION : As part of the filesort operation we now always
free the IO_CACHE allocated by the index_merge quick select.

sql/filesort.cc:
  In the filesort function we free any IO_CACHE allocated
  by the index_merge quick select if it has not yet been
  freed.
2013-11-18 18:12:01 +05:30
Atanu Ghosh
e9854f5826 Bug #17049656 : MYSQLD --LOCAL-SERVICE PARAMETER DOES NOT WORK
Problem: The --local-service parameter does not perform as expected on,
         at least, Windows.

Fix: A NULL pointer was dereferenced, which caused a crash. A check for
     a NULL string was introduced before dereferencing it. No test case
     was written as this is a bug during installation.
2013-11-14 14:27:31 +05:30
Venkatesh Duggirala
e0efc2c39a Bug#17641586 INCORRECTLY PRINTED BINLOG DUMP INFORMATION
Problem:
When log_warnings is greater than 1, the master prints binlog
dump thread information in the mysqld.1.err file.
The information contains the slave server id, binlog file and
binlog position. The slave server id is a uint32 but the print
format was wrongly specified (%d instead of %u).
Hence a server id greater than 2 billion is printed as
a negative value.
Eg: Start binlog_dump to slave_server(-1340259414),
pos(mysql-bin.001663, 325187493)

Fix: Changed the uint32 format specifier to %u.
2013-11-12 22:09:10 +05:30
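
The formatting mistake is easy to reproduce in isolation with plain printf (this is not the server's logging call, just the %d-versus-%u behaviour):

#include <cstdint>
#include <cstdio>

int main() {
  uint32_t server_id = 2954707882u;      // a server id above 2^31
  printf("wrong: %d\n", server_id);      // prints -1340259414
  printf("right: %u\n", server_id);      // prints 2954707882
}
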
Sujatha Sivakumar
81fd7f8ab1 Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Fixing post push test issue. 
Changing the debug simulation.
2013-11-07 17:30:57 +05:30
Neeraj Bisht
88680a99c6 Bug#16691598 - ORDER BY LOWER(COLUMN) PRODUCES OUT-OF-ORDER RESULTS
Problem:-
We have created a table with the UTF8_BIN collation.
When the query has an ORDER BY clause over a function
call, we get the result in an incorrect order.
Note: the bug is not there in 5.5.

Analysis:
In 5.5, for UTF16_BIN, the minimum and maximum multi-byte lengths
are 2 and 4 respectively. In make_sortkey(), for a 2-byte character
we assume that the resultant length will be 2 bytes per character.
But when we use my_strnxfrm_unicode_full_bin(), we store sorting
weights using 3 bytes per character. This results in a truncated
sort key.

The same thing happens for UTF8MB4, where the minimum multi-byte
length is 1 byte and the maximum is 4 bytes. We assume the resultant
data is 1 byte per character, which again results in truncation.

Solution:-
Use strnxfrm (i.e. the MY_CS_STRNXFRM macro) for sorting, in
which the resultant length does not depend on the source length.
2013-11-07 16:46:24 +05:30
Sujatha Sivakumar
2a2641ad7f Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Problem:
========
The server crashed after executing "show binlog events in
'mysql-bin.000005' from 99"; the crash happened randomly.

Analysis:
========
During construction of a LOAD EVENT or NEW LOAD EVENT object,
if the starting offset is provided as an incorrect value then
all the object members that are retrieved from that offset
are also invalid.  Sometimes this leads to out-of-bound
address offsets.  In the bug scenario, the file name is
extracted from an invalid address and then fed to the
strlen(fname) function. Passing an invalid address to strlen
leads to a crash.

Fix:
===
Validate whether the given offset falls within the event
boundary or not.

sql/log_event.cc:
  Added code to validate fname's address. "fname" should
  be within the event boundary. Added code to detect invalid
  events.
2013-11-06 15:00:49 +05:30
Aditya A
5f83a7fbf8 Bug#17588348: INDEX MERGE USED ON PARTITIONED TABLE
CAN RETURN WRONG RESULT SET

PROBLEM
-------
In ha_partition::cmp_ref() we were only calling the
underlying cmp_ref() of the storage engine if the records
are in the same partition; otherwise we sort by partition and
return the result. But the index merge intersect
algorithm expects sorting by row-id first and
then by partition id.

FIX
---
Compare the references first using the storage engine's cmp_ref,
and then, if the references are equal (which only happens if a
non-clustered index is used), sort by partition id.

[Approved by Mattiasj #rb3755]
2013-11-05 19:25:26 +05:30
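
The ordering that the intersect algorithm expects can be sketched as a plain comparator; RowRef is a made-up struct, not ha_partition's real ref format:

#include <algorithm>
#include <cstdint>
#include <vector>

struct RowRef {
  uint64_t row_id;        // storage-engine reference part
  uint32_t partition_id;
};

// Order by row-id first; use the partition id only as a tie-breaker.
bool cmp_ref(const RowRef &a, const RowRef &b) {
  if (a.row_id != b.row_id)
    return a.row_id < b.row_id;            // engine-level comparison first
  return a.partition_id < b.partition_id;  // then by partition id
}

int main() {
  std::vector<RowRef> refs = {{7, 2}, {3, 5}, {7, 1}};
  std::sort(refs.begin(), refs.end(), cmp_ref);   // {3,5}, {7,1}, {7,2}
}
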
Tor Didriksen
7ad0e1c527 Bug#12368495 CRASH AND/OR VALGRIND ERRORS WITH REVERSE FUNCTION AND CHARSET CONVERTS
Item_func_trim::val_str: we were using the non-mb algorithm for skipping leading spaces
in a multibyte-charset string.
2013-11-05 10:02:57 +01:00
Tor Didriksen
7d05f76483 merge 5.1 => 5.5 2013-11-01 16:52:21 +01:00
Tor Didriksen
e0f3a0fae9 Bug#17617945 BUFFER OVERFLOW IN GET_MERGE_MANY_BUFFS_COST WITH SMALL SORT_BUFFER_SIZE
get_cost_calc_buff_size() could return wrong value for the size of imerge_cost_buff.
2013-11-01 16:39:19 +01:00
Tor Didriksen
85d51dd13f remerge 5.1 => 5.5 2013-10-29 19:55:38 +01:00
Tor Didriksen
a44794d05e Bug#17326567 MYSQL SERVER FILESORT IMPLEMENTATION HAS A VERY SERIOUS BUG
The filesort implementation needs space for at least 15 records
(plus some internal overhead) in its main sort buffer.
2013-10-29 17:26:20 +01:00
Aditya A
c5896384bd Bug #16051817 GOT ERROR 124 FROM STORAGE ENGINE
ON DELETE FROM A PARTITIONED TABLE

PROBLEM
-------

The user first disables all the non-unique indexes
in the table and then rebuilds one partition.
During the rebuild, the indexes on that particular
partition are enabled. Now when we issue a query,
the optimizer is unaware that the indexes are enabled
on only one partition, and if the optimizer selects
such an index, MyISAM thinks that the index is not
active and gives an error.

FIX
---

Before rebuilding a partition, check whether the non-unique
indexes are disabled on the partitions.
If they are disabled, then after the rebuild disable
the index on the partition.

[Approved by Mattiasj #rb3469]
2013-10-21 12:07:02 +05:30
Anirudh Mangipudi
18079ac9b8 Bug #17357535 BACKPORT BUG#16241992 TO 5.5
Problem:
COM_CHANGE_USER allows brute-force attempts to crack a password at a very
high rate because it does not cause any significant delay after a login
attempt has failed. This issue was reproduced using the John the Ripper
password cracking tool, through which about 5000 passwords per second
could be attempted.

Solution:
The non-GA version's solution was to disconnect the connection when a
login attempt failed. Since our aim is to reduce the rate at which
passwords can be tested, we introduced a sleep(1) after every failed
login attempt. This significantly increases the time needed to crack
the password.
2013-10-18 17:14:39 +05:30
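
The throttling idea, reduced to a standalone sketch; check_password and authenticate are placeholders, the real change is in the COM_CHANGE_USER handling:

#include <chrono>
#include <string>
#include <thread>

static bool check_password(const std::string &pwd) {
  return pwd == "expected-password";   // placeholder credential check
}

// Delay the reply after a failed attempt so a brute-force client cannot
// test thousands of passwords per second over one connection.
bool authenticate(const std::string &pwd) {
  if (check_password(pwd))
    return true;
  std::this_thread::sleep_for(std::chrono::seconds(1));   // the added sleep(1)
  return false;
}
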
Aditya A
94630ddd32 Bug#17559867 AFTER REBUILDING, A MYISAM PARTITION ENDS UP
AS AN INNODB PARTITION.
[Merged from 5.1]
2013-10-18 13:49:03 +05:30
Aditya A
cd6f3b55da Bug#17559867 AFTER REBUILDING, A MYISAM PARTITION ENDS UP
AS AN INNODB PARTITION.

PROBLEM
-------
The correct engine_type was not being set during
rebuild of the partition, due to which the handler
was always created with the default engine, which
is InnoDB for 5.5+. Therefore even if the
table was MyISAM, after rebuilding, the partitions
ended up as InnoDB partitions.

FIX
---
Set the correct engine type during rebuild.  

[Approved by mattiasj #rb3599]
2013-10-18 12:26:28 +05:30
Venkatesh Duggirala
633cc16e7c Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF
REPLICATION FILTERS ARE USED.
Merging fix from mysql-5.1
2013-10-16 22:15:59 +05:30
Venkatesh Duggirala
2b07397b20 Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF
REPLICATION FILTERS ARE USED.

Problem:
When a filtered slave applies an Int_var_log_event and then
tries to write the event to its own binlog, the LAST_INSERT_ID
value is written wrongly.

Analysis:
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is a variable which is set when LAST_INSERT_ID() is used by
a statement. If it is set, first_successful_insert_id_in_
prev_stmt_for_binlog will be stored in the statement-based
binlog. This variable is CUMULATIVE along the execution of
a stored function or trigger: if one substatement sets it
to 1 it will stay 1 until the function/trigger ends,
thus making sure that first_successful_insert_id_in_
prev_stmt_for_binlog does not change anymore and is
propagated to the caller for binlogging. This is achieved
using the following code
if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
{
  /* It's the first time we read it */
  first_successful_insert_id_in_prev_stmt_for_binlog=
    first_successful_insert_id_in_prev_stmt;
  stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
}

After receiving an Int_var_log_event from the master, the slave
server sets
stmt_depends_on_first_successful_insert_id_in_prev_stmt
to true (*which is wrong*) without setting
first_successful_insert_id_in_prev_stmt_for_binlog. Because
of this, when the actual DML statement with
LAST_INSERT_ID() is parsed by the slave SQL thread,
first_successful_insert_id_in_prev_stmt_for_binlog is not
set. Hence the value zero (the default) is written to the
slave's binlog.

Why only a *filtered slave* is affected when the code is
in a common place:
-------------------------------------------------------
In Query_log_event::do_apply_event,
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is reset to zero at the end of the function. In the case of a
normal slave (no filters), this variable will be reset.
On a filtered slave, the SQL thread defers the execution of all
IRU events until the IRU's Query_log_event is received. Once it
receives the Query_log_event it executes all pending IRU events
and then executes the Query_log_event itself. Hence the variable
is not reset to 0, causing this bug.

Fix: As described above, the root cause was setting
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
when an Int_var_log_event was executed by the SQL thread. Hence
the problematic line is removed from the code.
2013-10-16 22:12:23 +05:30
Venkata Sidagam
e84d48742e Bug#16900358 FIX FOR CVE-2012-5611 IS INCOMPLETE
Merging from mysql-5.1 to mysql-5.5
2013-10-16 14:16:32 +05:30
Venkata Sidagam
de0e8a02d1 Bug#16900358 FIX FOR CVE-2012-5611 IS INCOMPLETE
Description: Fix for bug CVE-2012-5611 (bug 67685) is 
incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and 
check_grant_db() can be overflown by up to two bytes. That's 
probably not enough to do anything more serious than crashing 
mysqld.
Analysis: In acl_get(), when "copy_length" is calculated it
just adds the variable lengths. But when we use them
with strmov() we add +1 to each. This leads to a
three byte buffer overflow (i.e. two +1's at strmov() and one
byte for the null added by the strmov() function). The same
happens in the check_grant_db() function as well.
Fix: We need to add "+2" to "copy_length" in acl_get()
and "+1" to "copy_length" in check_grant_db().
2013-10-16 14:14:44 +05:30
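
A self-contained analogue of the sizing rule: the length used for the overflow check has to count the separators the copy will write, not just the raw string lengths (the buffer and field names here are illustrative):

#include <cstring>
#include <vector>

// Builds "a\0b\0c" into a buffer sized from copy_length.
std::vector<char> build_key(const char *a, const char *b, const char *c) {
  // +2 for the two '\0' separators written between fields,
  // +1 for the final terminator: mirrors adding "+2" to copy_length.
  size_t copy_length = strlen(a) + strlen(b) + strlen(c) + 2 + 1;
  std::vector<char> key(copy_length);
  char *p = key.data();
  strcpy(p, a); p += strlen(a) + 1;   // keep the '\0' as a separator
  strcpy(p, b); p += strlen(b) + 1;
  strcpy(p, c);                       // fits because copy_length counted it
  return key;
}
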
Sujatha Sivakumar
4522a8704f Bug#17429677:LAST ARGUMENT OF LOAD DATA ...SET ...STATEMENT
REPEATED TWICE IN BINLOG

Problem:
=======
If LOAD DATA ... SET ... is used, the last argument of SET is
repeated twice in the replication binlog.

Analysis:
========
LOAD DATA statements are reconstructed once again before
they are written to the binary log. When SET clauses are
specified as part of a LOAD DATA statement, these SET clause
user command strings need to be stored in order to rebuild
the original user command. During parsing, each column and
the value in the SET command are stored in two different
lists. All the values are stored in a string list.

When SET expression has more than one value as shown in the
following example:
SET a = @a, b = CONCAT(@b, '| 123456789');

The parser extracts values in the following manner, i.e. the item
name, the value string, and the actual length of the item's value
within the string.

Item a:
Value for a:"= @a, b = CONCAT(@b, '| 123456789')
str_length = 4
Item b:
Value for b:"= CONCAT(@b, '| 123456789')
str_length = 27

While reconstructing the LOAD DATA command, the above
strings are retrieved as is and appended to the LOAD DATA
statement. Hence it becomes as shown below.

SET `a`= @a, b = CONCAT(@b, '| 123456789'),
`b`= CONCAT(@b, '| 123456789')

Fix:
===
During reconstruction of the SET command, retrieve the exact item
value string rather than reading up to the end of the string.

sql/sql_load.cc:
  Added code to extract the exact Item value.
2013-10-16 11:49:00 +05:30
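
A reduced sketch of the difference between appending the rest of the statement text and appending exactly str_length characters of each item's value; plain std::string handling, not the actual sql_load.cc rewrite code:

#include <iostream>
#include <string>
#include <vector>

struct SetItem {
  std::string name;
  std::string raw;      // text starting at this item's "=" in the statement
  size_t str_length;    // length of just this item's "= <value>" part
};

std::string rebuild_set_clause(const std::vector<SetItem> &items) {
  std::string out = "SET ";
  for (size_t i = 0; i < items.size(); ++i) {
    if (i) out += ", ";
    // The fix: take exactly str_length characters, not the whole tail.
    out += "`" + items[i].name + "`" + items[i].raw.substr(0, items[i].str_length);
  }
  return out;
}

int main() {
  std::vector<SetItem> items = {
    {"a", "= @a, b = CONCAT(@b, '| 123456789')", 4},
    {"b", "= CONCAT(@b, '| 123456789')", 27},
  };
  std::cout << rebuild_set_clause(items) << '\n';
  // SET `a`= @a, `b`= CONCAT(@b, '| 123456789')
}
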
Praveenkumar Hulakund
c66a037dca Bug#17474166 - EXECUTING STATEMENT LIKE 'SHOW ENGINE INNODB'
AND 'KILL SESSION' LEAD TO CRASH               

Analysis:
--------
This situation occurs when one connection executes the query
"show engine innodb status" and that connection is killed by
another connection executing "kill <con>".

In the function "innodb_show_status", the function "stat_print"
is called to print the status, but its return value
is not checked.  After the connection is killed, if the write to
the connection fails then an error is returned and is set
in the Diagnostics area. Since FALSE is still returned from
"innodb_show_status", the assert that checks that no error
is set in the function "set_eof_status" (called from
my_eof) fails.

Fix:
----
Changed the code to check the return value of "stat_print"
in "innodb_show_status".
2013-10-09 13:32:31 +05:30
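
The pattern of the fix in isolation; stat_print_fn and show_status are simplified placeholders rather than the real handlerton interface:

#include <cstdio>

using stat_print_fn = bool (*)(const char *section, const char *text);

// Returns true on error. Propagates the print callback's failure instead
// of ignoring it, so the caller never reports success with an error pending.
bool show_status(stat_print_fn stat_print) {
  if (stat_print("InnoDB", "status text"))
    return true;                 // the fix: check the return value and bail out
  return false;
}

int main() {
  stat_print_fn ok   = [](const char *, const char *) { return false; };
  stat_print_fn fail = [](const char *, const char *) { return true; };
  std::printf("%d %d\n", show_status(ok), show_status(fail));   // prints 0 1
}
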
Praveenkumar Hulakund
c904103b3f Bug#11745656 - KILL THREAD -> ERROR: "SERVER SHUTDOWN IN PROGRESS"
Description:
------------
There are 2 issues reported in the bug report,

1. One session runs a "long" select; then, from another
session, you kill the first one while the select is
running, and it receives the message "Server shutdown in
progress".
Reported Date: 02-Apr-2006

=> It looks like this issue was already fixed in 2009 by the patch
   pushed for bug 28141.

2. Killing query which goes to filesort, logs error entries like:

120416  9:17:28 [ERROR] mysqld: Sort aborted: Server shutdown in
                                              progress 
120416  9:18:48 [ERROR] mysqld: Sort aborted: Server shutdown in
                                              progress 
120416  9:19:39 [ERROR] mysqld: Sort aborted: Server shutdown in
                                              progress 
Reported Date: 16-Apr-2012                                              

=> This issue was introduced in 5.5+ versions. It is fixed
   by this patch.


Analysis:
---------
In the function "filesort()", we log an error message on error.
The message related to THD::killed_errno is also appended to the
error message, if it is set (the THD::kill_errno value is obtained
by calling the member function THD::killed_errno).

In the scenario mentioned in this bug report, when we kill the
connection, THD::kill_errno is set to THD::KILL_CONNECTION.
The enum value THD::KILL_CONNECTION corresponds to
ER_SERVER_SHUTDOWN. Because of this, "Server shutdown in ...." is
appended to the logged message.

Fix:
----
Modified code of "filesort()" function to append "KILL_QUERY"
status to error message when thread is killed and server
shutdown is not in progress.
2013-10-05 15:29:02 +05:30
Tor Didriksen
b9ec183741 Bug#16482467 ORDER BY IGNORED IN SOME SITUATIONS FOR UPDATE QUERY
For queries like
update t1 set ... where <unique key> order by ... limit ...
we need to handle the fact that unique keys allow NULL
values, and hence can return more than one row.


sql/opt_range.cc:
  When the unique key has multiple key parts,
  check each key_part for nullability, rather than the first key part.
  (s/key->part ==/key_tree->part ==/)
  
  Also: revert the if() test, for better readability.
2013-09-10 11:20:29 +02:00
Libing Song
514b8261b5 Bug#17402313 DUMP THREAD SENDS SOME EVENTS MORE THAN ONCE
The dump thread may encounter an error when reading events from the active
binlog file. However, the errors may be temporary, so the dump thread will
try to read the event again. But the dump thread seeked to a wrong position,
which caused some events to be sent twice.

To fix the bug, prev_pos is defined outside the while loop and is set to the
correct position after every event is read correctly.

This patch also makes binlog_can_be_corrupted more accurate: only binlogs
that were not closed normally are marked binlog_can_be_corrupted.

Finally, two warnings are added for when the dump thread encounters these
temporary errors.
2013-09-10 09:35:49 +08:00
Tor Didriksen
27c6c4e8ac Bug#17296644 CONV(X, INT_MIN, INT_MIN) SEGFAULTS THE SERVER
Do not call abs(INT_MIN) as the result is undefined.
2013-09-09 14:20:50 +02:00
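
The undefined case is two lines of standalone code; the server-side fix lives in the CONV() item implementation and is not reproduced here:

#include <climits>
#include <cstdlib>

int main() {
  long long ok = std::llabs(static_cast<long long>(INT_MIN));  // 2147483648
  // std::abs(INT_MIN) would be undefined behaviour: -INT_MIN does not fit in int.
  return ok == 2147483648LL ? 0 : 1;
}
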
Tor Didriksen
28278b1410 Bug#16870783 RECENT REGRESSION: CRASH WITH GROUP_CONCAT AND INVALID SEPARATOR
Add missing initialization in lex_start()
2013-09-09 12:43:08 +02:00
Nisha Gopalakrishnan
14976fbe8a BUG#16032946 - PLEASE GIVE A MESSAGE FOR "THREAD_CONCURRENCY DOESN'T
DO WHAT YOU EXPECT"

Fix info:
--------

Backport of the deprecation bug fix (WL#5265) for global variable
'THREAD_CONCURRENCY' from mysql-5.6 to mysql-5.5

Note: With this backport, certain additional deprecation warnings
      would be reported under error conditions while setting the
      global/session variables.
2013-09-05 13:40:27 +05:30
Neeraj Bisht
e203951cb1 Bug#17222452 - SELECT COUNT(DISTINCT A,B) INCORRECTLY COUNTS ROWS
CONTAINING NULL

Problem:-
In MySQL, we can obtain the number of distinct expression
combinations that do not contain NULL by giving a list of
expressions in COUNT(DISTINCT).
However, rows with NULL values are
incorrectly included in the count when loose index scan is
used.

Analysis:-
In the case of loose index scan, we check whether the field is null
or not and increase the count in Item_sum_count::add().
But there we check only the first field in COUNT(DISTINCT),
not every field. This causes an incorrect result.

Solution:-
Check all fields in Item_sum_count::add() for null values,
and increment the count only if none of them are null.
2013-09-04 10:45:55 +05:30
Neeraj Bisht
277697b81f Bug#16346241 - SERVER CRASH IN ITEM_PARAM::QUERY_VAL_STR
Problem:-
The second execution of a prepared statement for a query with a
parameter in the LIMIT clause causes an assert when using
connectors (e.g., Connector/C).


Analysis:-
In a prepared statement, LIMIT parameters can be
specified using '?' markers. The value for the parameter can
be supplied while executing the prepared statement.

Passing string, float or double values for the LIMIT clause
works well from the command-line client. That's because, while
setting the LIMIT parameter value from a user variable,
the value is converted to an integer value.

However, when the prepared statement is executed from other
interfaces such as Connector/J or C applications,
the values for the parameters are sent to the server
with the execute command. Each item in the command has a value
and a data TYPE. So, while setting the parameter values
from this data, the value is set for all the parameters
with the same data type as passed.
Here, we have logic to convert the value and change the
state and item_type if it is part of a LIMIT parameter and
its item_type is not INT.
But when we reset this parameter we save the item_type but change
the state. So on second execution we have the old item_type but our
state has been changed, which makes us use a string type variable
in Item_param::query_val_str(). This causes an assert.

Fix:
Instead of checking the item_type of the parameter, check the
state of the parameter, as state values are reset every time
we execute the statement.
2013-08-28 14:54:53 +05:30
Dmitry Lenev
fb5d9a82a7 Fix for bug #17356954 "CANNOT USE SAVEPOINTS AFTER ER_LOCK_DEADLOCK OR
ER_LOCK_WAIT_TIMEOUT".

The problem was that, after the changes introduced by the fixes for
bug 14188793 "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR STATUS OF
ROLLBACKED TRANSACTION" and bug 17054007 "TRANSACTION IS NOT FULLY
ROLLED BACK IN CASE OF INNODB DEADLOCK", the implicit rollback of a
transaction which occurred on ER_LOCK_DEADLOCK (and on
ER_LOCK_WAIT_TIMEOUT if the innodb_rollback_on_timeout option was set)
didn't start a new transaction in @@autocommit=1 mode.

Such behavior, although consistent with the behavior of explicit
ROLLBACK, broke users' expectations and backward compatibility
assumptions.

This patch fixes the problem by reverting to starting a new transaction
in 5.5/5.6.

The plan is to keep new behavior in trunk so the code change from this
patch is to be null-merged there.
2013-08-26 14:43:12 +04:00
Praveenkumar Hulakund
72ff5686a0 Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Follow up patch, addressing a pb2 test failure.
2013-08-23 18:56:31 +05:30
Neeraj Bisht
6773c60f07 Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS
Problem:-
In a procedure, when we compare the value of a select query
with an IN clause and the two have different collations, this
causes an error on first execution and an assert on second
execution. The procedure will have a query like
set @x = ((select a from t1) in (select d from t2));<---proc1
              sel1                   sel2

Analysis:-
When we execute this proc1 (the first time), while resolving the
fields of the user variable, we call Item_in_subselect::fix_fields,
which resolves sel2. There, in
Item_in_subselect::select_transformer, we evaluate the
left expression (sel1) and store it in an Item_cache_* object
(to avoid re-evaluating it many times during subquery execution)
by making an Item_in_optimizer object.
While evaluating the left expression we prepare sel1.
After that, we put a new condition into sel2
in Item_in_subselect::select_transformer() which compares
t2.d and sel1 (which is cached in Item_in_optimizer).

Later, while checking the collation in agg_item_collations(),
we get an error and clean up the item. While cleaning up, we also
cleaned up the cached value in the Item_in_optimizer object.

When we execute the procedure a second time, we have the condition
for sel2, but in setup_cond() we cannot find the reference item,
as it was cleaned up during item cleanup. So it asserts.


Solution:-
We should not clean up the cached value of the Item_in_optimizer
object if we have put the condition into the subselect.
2013-08-23 16:56:17 +05:30
Neeraj Bisht
356b641454 Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS
Problem:-
In a procedure, when we compare the value of a select query
with an IN clause and the two have different collations, this
causes an error on first execution and an assert on second
execution. The procedure will have a query like
set @x = ((select a from t1) in (select d from t2));<---proc1
              sel1                   sel2

Analysis:-
When we execute this proc1 (the first time), while resolving the
fields of the user variable, we call Item_in_subselect::fix_fields,
which resolves sel2. There, in
Item_in_subselect::select_transformer, we evaluate the
left expression (sel1) and store it in an Item_cache_* object
(to avoid re-evaluating it many times during subquery execution)
by making an Item_in_optimizer object.
While evaluating the left expression we prepare sel1.
After that, we put a new condition into sel2
in Item_in_subselect::select_transformer() which compares
t2.d and sel1 (which is cached in Item_in_optimizer).

Later, while checking the collation in agg_item_collations(),
we get an error and clean up the item. While cleaning up, we also
cleaned up the cached value in the Item_in_optimizer object.

When we execute the procedure a second time, we have the condition
for sel2, but in setup_cond() we cannot find the reference item,
as it was cleaned up during item cleanup. So it asserts.


Solution:-
We should not clean up the cached value of the Item_in_optimizer
object if we have put the condition into the subselect.
2013-08-23 16:54:25 +05:30
Ashish Agarwal
292aa926c1 WL#7076: Backporting wl6715 to support both formats
in 5.5, 5.6, 5.7.
2013-08-23 09:07:09 +05:30
Praveenkumar Hulakund
7fffec875a Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Merging from 5.1 to 5.5
2013-08-21 10:44:22 +05:30
Praveenkumar Hulakund
3b1e98d218 Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Analysis:
----------
The problem here is that if one connection changes its
default db and at the same time another connection executes
"SHOW PROCESSLIST", then when it wants to read the db of the
other connection there is a chance of accessing invalid
memory.

The db name stored in THD is not guarded while changing the user
DB and while reading the user DB in "SHOW PROCESSLIST".
So, if THD.db is freed by the thd "owner" thread and another
thread executing "SHOW PROCESSLIST" tries to read
and copy THD.db at the same time, then we may end up with the
issue reported here.

Fix:
----------
Used the mutex "LOCK_thd_data" to guard THD.db while freeing it
and while copying it to the processlist.
2013-08-21 10:39:40 +05:30
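
A reduced model of the locking scheme; std::mutex plays the role of LOCK_thd_data and the ThdLike struct is illustrative only:

#include <mutex>
#include <string>

struct ThdLike {
  std::mutex lock_thd_data;   // stands in for LOCK_thd_data
  std::string db;             // default database name

  // Writer: the owner thread changing its default db.
  void set_db(const std::string &new_db) {
    std::lock_guard<std::mutex> guard(lock_thd_data);
    db = new_db;              // old storage is replaced under the lock
  }

  // Reader: the thread building SHOW PROCESSLIST output.
  std::string copy_db_for_processlist() {
    std::lock_guard<std::mutex> guard(lock_thd_data);
    return db;                // copied while the owner cannot free it
  }
};
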
Dmitry Lenev
fc2c669297 Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".

The problem in the first bug report was that, although a deadlock
involving metadata locks was reported using the same error code and
message as an InnoDB deadlock, it didn't roll back the transaction like
the latter. This confused users, as in some cases after ER_LOCK_DEADLOCK
the transaction could be restarted immediately and in other cases a
rollback was required.

The problem in the second bug report was that, although an InnoDB
deadlock caused a transaction rollback in all storage engines, it didn't
cause the release of metadata locks. So concurrent DDL on the tables used
in the transaction was blocked until an implicit or explicit COMMIT or
ROLLBACK was issued in the connection which got the InnoDB deadlock.

The former issue stemmed from the fact that when support for detecting
and reporting metadata lock deadlocks was added, we erroneously assumed
that InnoDB doesn't roll back the transaction on deadlock but only the
last statement (while that is actually what happens on an InnoDB lock
timeout), and so didn't implement rollback of transactions on MDL
deadlocks.

The latter issue was caused by the fact that the rollback of a transaction
due to a deadlock is carried out by setting the
THD::transaction_rollback_request flag at the point where the deadlock is
detected and performing the rollback inside the trans_rollback_stmt() call
when this flag is set. And trans_rollback_stmt() is not aware of MDL
locks, so no MDL locks are released.

This patch solves these two problems in the following way:

- In case an MDL deadlock is detected, transaction rollback is requested
  by setting the THD::transaction_rollback_request flag.

- The code performing rollback of the transaction if
  THD::transaction_rollback_request is set is moved out of
  trans_rollback_stmt(). Now we handle the rollback request on the same
  level as we call trans_rollback_stmt() and release statement/
  transaction MDL locks.
2013-08-20 13:12:34 +04:00