Commit graph

70671 commits

Author SHA1 Message Date
Mattias Jonsson
f436b188e2 bug#13949735: crash regression from bug#13694811.
There are cases where the optimizer calls ha_partition::records_in_range
when there are no matching partitions, so the DBUG_ASSERT on
!tot_used_partitions fires.

Fixed by returning 0 when no matching partitions are found.

This avoids the crash. records_in_range will then try to find the
biggest used partition, find none, and return 0, meaning no rows
can be found.

Patch contributed by Davi Arnaut at twitter.
2012-05-15 12:45:52 +02:00
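
A minimal standalone sketch of the guard described above, with illustrative
names rather than the actual ha_partition code: if partition pruning left no
used partitions, return an estimate of 0 rows instead of asserting.

  #include <cstdint>
  #include <iostream>
  #include <vector>

  struct PartitionStats {
    uint64_t rows_in_range;  // per-partition estimate for the scanned range
  };

  // Before the fix, an empty set of used partitions tripped a DBUG_ASSERT.
  uint64_t records_in_range_sketch(const std::vector<PartitionStats>& used) {
    if (used.empty())
      return 0;  // no matching partition => no rows can be found
    uint64_t total = 0;
    for (const PartitionStats& p : used)
      total += p.rows_in_range;
    return total;
  }

  int main() {
    std::vector<PartitionStats> none;              // pruning removed everything
    std::vector<PartitionStats> some{{10}, {25}};
    std::cout << records_in_range_sketch(none) << "\n";  // 0
    std::cout << records_in_range_sketch(some) << "\n";  // 35
  }
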
Tor Didriksen
02f90402c8 Bug#14039955 RPAD FUNCTION LEADS TO UNINITIALIZED VALUES WARNING IN MY_STRTOD
Rewrite the "parser" in my_strtod_int() to avoid
reading past the end of the input string.
2012-05-18 12:57:38 +02:00
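
A self-contained sketch of the kind of bound check this rewrite is about
(illustrative code, not my_strtod_int itself): the scanning loop always
compares against the end pointer before reading a character, so it cannot
read past the end of the input string.

  #include <cctype>
  #include <cstdio>

  // Parse an unsigned integer prefix of [p, end) without ever touching end.
  double parse_digits_sketch(const char* p, const char* end) {
    double value = 0.0;
    while (p < end && std::isdigit(static_cast<unsigned char>(*p))) {
      value = value * 10.0 + (*p - '0');
      ++p;
    }
    return value;
  }

  int main() {
    const char buf[] = {'4', '2'};  // deliberately not nul-terminated
    std::printf("%g\n", parse_digits_sketch(buf, buf + sizeof(buf)));  // 42
  }
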
Rohit Kalhans
b4ffce10d3 upmerge from mysql-5.1 -> mysql-5.5. 2012-05-18 14:53:23 +05:30
Mayank Prasad
0581d1c46c Bug#11766101 : 59140: LIKE CONCAT('%',@A,'%') DOESN'T MATCH WHEN @A CONTAINS LATIN1 STRING
Issue/Cause:
The issue is memory corruption. During the optimization phase, the pattern to be matched
in the WHERE clause is prepared. This is done in Item_func_concat::val_str(), which forms
the resultant string (tmp_value) and returns a pointer to it. In the caller,
Item_func_like::fix_fields, pattern is made to point to this string (tmp_value). In
further processing, tmp_value gets modified, which leaves pattern with changed/wrong
values.

Fix:
Allocate a separate memory location in the caller, copy the resultant string (tmp_value)
into it, and make pattern point to that copy. This makes sure that no further changes to
tmp_value affect pattern.
2012-05-17 22:24:23 +05:30
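
A standalone sketch of the idea behind the fix, with illustrative names rather
than the server classes: instead of keeping a pointer into a buffer the
producer will later reuse, the caller copies the value into memory it owns.

  #include <iostream>
  #include <string>

  class ConcatItem {            // stands in for Item_func_concat
    std::string buffer_;        // reused internal buffer, like tmp_value
   public:
    const char* val_str() {
      buffer_ = "%abc%";        // build the pattern
      return buffer_.c_str();
    }
    void reuse_buffer() { buffer_ = "something else entirely"; }
  };

  int main() {
    ConcatItem item;

    // Buggy pattern: keep a pointer into the item's internal buffer. Once the
    // buffer is modified below, this pointer may dangle and must not be used.
    const char* aliased = item.val_str();
    (void)aliased;

    // Fixed pattern: copy into caller-owned storage right away.
    std::string owned(item.val_str());

    item.reuse_buffer();        // later processing modifies tmp_value

    std::cout << owned << "\n"; // still "%abc%"
  }
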
Gopal Shankar
6053cb8cc1 null merge for Bug#12636001 2012-05-17 18:21:45 +05:30
unknown
b45c750200 2012-05-17 11:46:11 +01:00
unknown
1bfd04ea7c 2012-05-17 10:19:53 +05:30
Annamalai Gurusami
dfff1ff58b null merge from mysql-5.1 to mysql-5.5 2012-05-16 16:38:02 +05:30
Annamalai Gurusami
31a6bcbee4 Merge from mysql-5.1 to mysql-5.5. 2012-05-16 16:33:22 +05:30
Venkata Sidagam
79af7db585 Null merge from 5.1 to 5.5 2012-05-16 16:19:20 +05:30
Venkata Sidagam
7244e84e6d Merging the fix from mysql-5.1 to mysql-5.5 2012-05-16 16:06:07 +05:30
Annamalai Gurusami
7a6c449c62 Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check for a discarded
tablespace to avoid the crash.

rb://1041 approved by Marko Makela
2012-05-16 15:23:56 +05:30
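
A hypothetical sketch of such a guard (the identifiers below are illustrative,
not the actual InnoDB code): the index build refuses to proceed and reports an
error when the table's tablespace has been discarded.

  #include <iostream>

  struct TableDef {
    bool tablespace_discarded;  // set by ALTER TABLE ... DISCARD TABLESPACE
  };

  enum Status { STATUS_OK = 0, STATUS_TABLESPACE_MISSING = 1 };

  Status add_index_sketch(const TableDef& table) {
    if (table.tablespace_discarded)
      return STATUS_TABLESPACE_MISSING;  // report an error instead of crashing
    // ... build the index against the tablespace data here ...
    return STATUS_OK;
  }

  int main() {
    TableDef t1{true};  // tablespace was discarded in step 3
    std::cout << (add_index_sketch(t1) == STATUS_TABLESPACE_MISSING
                      ? "index build refused: tablespace discarded\n"
                      : "index built\n");
  }
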
Annamalai Gurusami
453fcc0cf5 Merge from mysql-5.1 to mysql-5.5. 2012-05-16 14:10:18 +05:30
Venkata Sidagam
0e9c1e9fc3 Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with FT index.
2) Enable concurrent inserts.
3) In multiple threads do below operations repeatedly
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. non-boolean/boolean mode

After some time we could observe two different assert core dumps

Analysis:
--------
1) assert core dump at key_read_cache():
Two select threads operate in parallel on the same key root block.
In the first select thread, block->status is set to BLOCK_ERROR
because my_pread() in read_block() returns '0'. The truncate made
the index file size 1024, and pread was asked to read a block of
1024 bytes from offset 1024, which it cannot do since that is the
end of the file; it returns '0' and sets
"my_errno= HA_ERR_FILE_TOO_SHORT". key_file_length and key_root[0]
are the same, i.e. 1024. Since the block status has BLOCK_ERROR,
the first select thread enters free_block(), sets the status to
BLOCK_REASSIGNED and waits on a condition in wait_on_readers().
The other select thread works on the same block, sees the status
BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED and
asserts the server.

2) assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock', which allows other
threads to continue, gets a pread() return value of '0' (see the
explanation above), and then waits trying to reacquire the
'keycache->cache_lock' lock.
The insert thread requests the block; the block is assigned from the
hash list, its page_status is set to 'PAGE_WAIT_TO_BE_READ', and the
thread goes into read_block(), waiting in the queue since other
threads are performing reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread() value
of '0', sets the block status to BLOCK_ERROR, enters free_block()
and goes to wait_for_readers().
Now the insert thread wakes up, continues, sees that block->status
is not BLOCK_READ, and asserts.

Fix:
---
In the full-text code, multiple readers of the index file are not guarded.
Hence the code below was added in _ft2_search() and walk_and_match().

To lock the key_root, _ft2_search() uses:
 if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:
 if (info->s->concurrent_insert)
   mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to _ft2_search_internal().
  _ft2_search() now takes the lock, calls _ft2_search_internal() and
  releases the lock in the case of concurrent inserts.
storage/myisam/ft_nlq_search.c:
  Added read locks code in walk_and_match()
2012-05-16 13:55:22 +05:30
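
A standalone sketch of the locking pattern described in the fix: the public
entry point takes the read lock once when concurrent inserts are possible and
the recursion happens in an internal helper, mirroring the _ft2_search() /
_ft2_search_internal() split. It uses std::shared_mutex in place of
mysql_rwlock_t purely to keep the example self-contained.

  #include <iostream>
  #include <shared_mutex>

  static std::shared_mutex key_root_lock;  // stands in for share->key_root_lock[0]
  static bool concurrent_insert = true;    // stands in for info->s->concurrent_insert

  // Recursive search; takes no lock itself, like _ft2_search_internal().
  static int ft2_search_internal(int depth) {
    if (depth == 0) return 1;
    return 1 + ft2_search_internal(depth - 1);
  }

  // Public wrapper; locks once around the whole recursion, like _ft2_search().
  static int ft2_search(int depth) {
    if (concurrent_insert) key_root_lock.lock_shared();
    int hits = ft2_search_internal(depth);
    if (concurrent_insert) key_root_lock.unlock_shared();
    return hits;
  }

  int main() { std::cout << ft2_search(3) << "\n"; }  // 4
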
Olav Sandstaa
c5dc1ea526 Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3

This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.

Using the query from bug report to explain what happens and causes
the wrong result from the query when ICP is enabled:

1. The t3 table contains four records. The outer query will read
   these and for each of these it will execute the subquery.

2. Before the first execution of the subquery it will be optimized. In
   this case the important part is what happens to the first table t1:
   -make_join_select() will call the range optimizer, which decides
    that t1 should be accessed using a range scan on the k1 index.
    It creates a QUICK_RANGE_SELECT object for this.
   -As the last part of optimization the ICP code pushes the
    condition down to the storage engine for table t1 on the k1 index.

   This produces the following information in the explain for this table:

     2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort

   Note the use of filesort.

3. Due to the need for sorting, the first execution of the subquery
   does (among other things) the following:
   a. Call create_sort_index() which again will call find_all_keys():
   b. find_all_keys() will read the required keys for all qualifying
      rows from the storage engine. To do this it checks if it has a
      quick-select for the table. It will use the quick-select for
      reading records. In this case it will read four records from the
      storage engine (based on the range criteria). The storage engine
      will evaluate the pushed index condition for each record.
   c. At the end of create_sort_index() there is code that cleans up a
      lot of stuff on the join tab. One of the things that is cleaned
      is the select object. The result of this is that the
      quick-select object created in make_join_select is deleted.

4. The second execution of the subquery does the same as the first but
   the result is different:
   a. Call create_sort_index() which again will call find_all_keys()
      (same as for the first execution)
   b. find_all_keys() will read the keys from the storage engine. To
      do this it checks if it has a quick-select for the table. Now
      there is NO quick-select object(!) (since it was deleted in
      step 3c). So find_all_keys defaults to reading the table using a
      table scan instead. So instead of reading the four relevant records
      in the range it reads the entire table (6 records). It then
      evaluates the table's condition (and here it goes wrong). Since
      the entire condition has been pushed down to the storage engine
      using ICP all 6 records qualify. (Note that the storage engine
      will not evaluate the pushed index condition in this case since
      it was pushed for the k1 index and now we do a table scan
      without any index being used).
      The result is that here we return six qualifying key values
      instead of four due to not evaluating the table's condition.
   c. As above.

5. The last two executions of the subquery will also produce wrong results
   for the same reason.

Summary: The problem occurs because all executions of the subquery
except the first are done as a table scan without evaluating the
table's condition (which is pushed to the storage engine on a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.

Note that, in addition to causing wrong results, this bug can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.

The fix for this problem is to ensure that the quick-select object
the optimizer created is not deleted when create_sort_index() does
its clean-up of the join-tab. This ensures that the quick-select
object and the corresponding pushed index condition remain available
and are used by all following executions of the subquery.


sql/sql_select.cc:
  Fix for Bug#12667154: Change how create_sort_index() cleans up the
  join_tab's select and quick-select objects in order to avoid deleting a
  quick-select object created outside of create_sort_index().
2012-05-16 09:49:23 +02:00
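
A minimal sketch of the clean-up rule described in the fix (illustrative
names, not the sql_select.cc code): create_sort_index() may delete a
quick-select it created itself, but must leave one created earlier by the
optimizer in place so that later executions of the subquery can still use it.

  #include <iostream>
  #include <memory>

  struct QuickSelect { const char* origin; };

  struct JoinTab {
    std::shared_ptr<QuickSelect> quick;  // may have been set by the optimizer
  };

  void create_sort_index_sketch(JoinTab& tab) {
    bool created_here = false;
    if (!tab.quick) {  // build a quick-select only if none exists yet
      tab.quick = std::make_shared<QuickSelect>(QuickSelect{"create_sort_index"});
      created_here = true;
    }
    // ... filesort would read rows through tab.quick here ...
    if (created_here)
      tab.quick.reset();  // clean up only what this function created
    // otherwise keep the optimizer's quick-select for the next execution
  }

  int main() {
    JoinTab tab;
    tab.quick = std::make_shared<QuickSelect>(QuickSelect{"optimizer"});
    create_sort_index_sketch(tab);
    std::cout << (tab.quick ? tab.quick->origin : "deleted") << "\n";  // optimizer
  }
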
unknown
3780f47339 2012-05-16 09:18:57 +02:00
Nuno Carvalho
f0e93ac2bf BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
Marko Mäkelä
a6da754a31 Merge mysql-5.1 to mysql-5.5. 2012-05-15 15:11:34 +03:00
Georgi Kodinov
bef6c0c161 merge 5.1->5.5 2012-05-15 13:18:42 +03:00
Georgi Kodinov
fcb033053d Bug #11761822: yassl rejects valid certificate which openssl accepts
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this 
particular certificate.
Added a test case with the certificate itself.
2012-05-15 13:12:22 +03:00
Bjorn Munch
0e66663159 Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
Bjorn Munch
e72278fd42 Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
Joerg Bruehe
d8faf3f179 Raise version after cloning 5.5.25 2012-05-14 15:15:13 +02:00
Annamalai Gurusami
326b40c9c8 Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
Annamalai Gurusami
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other index
we did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function.
It is implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operations
required, setting ha_innobase->prebuilt->select_lock_type correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
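
A hedged sketch of the shape of that fix, using illustrative types rather than
the real handler API: the derived clone() reuses the generic clone and then
copies the lock mode, so that both index scans of a ROR-merged read lock
consistently.

  #include <iostream>
  #include <memory>

  enum LockType { LOCK_NONE, LOCK_S, LOCK_X };

  struct Handler {
    LockType select_lock_type = LOCK_NONE;
    // Like a generic base clone: builds a fresh handler; engine-specific
    // state such as the lock mode is not carried over.
    virtual std::unique_ptr<Handler> clone() const {
      return std::unique_ptr<Handler>(new Handler());
    }
    virtual ~Handler() = default;
  };

  struct InnodbHandler : Handler {
    std::unique_ptr<Handler> clone() const override {
      std::unique_ptr<Handler> copy = Handler::clone();  // base-class clone first
      copy->select_lock_type = select_lock_type;         // then fix up the lock mode
      return copy;
    }
  };

  int main() {
    InnodbHandler h;
    h.select_lock_type = LOCK_X;           // the original does a locking read
    std::unique_ptr<Handler> c = h.clone();
    std::cout << (c->select_lock_type == LOCK_X ? "locking read\n"
                                                : "consistent read\n");
  }
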
Georgi Kodinov
63dac66356 merge 2012-05-09 17:08:44 +03:00
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Joerg Bruehe
5be07ceadd Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
Venkata Sidagam
1d47bbe3bf Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5

mysql-test/t/mysqldump.test:
  There is a difference in the test case added as part of this
  fix compared with mysql-5.1. In mysql-5.5 and mysql-5.6,
  dropping the mysql database fails when logging is enabled,
  hence those lines were removed.
2012-05-07 16:51:26 +05:30
Venkata Sidagam
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not produce dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log
and slow_log tables, because dumping the data of those tables takes
LOCKs, which is not allowed for log tables.

Fix:
----
The approach is that instead of taking both the meta data and the data
dump for those tables, we take only the meta data dump, which doesn't
need LOCKs.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the meta data dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
statements for those tables were removed.

After the fix we may still observe "Table 'mysql.general_log'
doesn't exist" errors initially. That is because in the customer
scenario the mysql database is dropped while logging is enabled,
so those errors are expected. Once we apply the dump taken before
the "drop database mysql", the errors will not be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the
  general_log and slow_log tables, because those statements fail when logging is
  enabled. Also replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those
  tables, to make sure the CREATE statement doesn't fail now that the DROP
  statements have been removed.
  In dump_all_tables_in_db() added code to call get_table_structure() for 
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
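
An illustrative sketch of the dump rules described above (not mysqldump's
actual code): for the general_log and slow_log tables only the schema is
emitted, no DROP TABLE, and CREATE TABLE IF NOT EXISTS is used so the
statement succeeds even while logging is enabled.

  #include <iostream>
  #include <set>
  #include <string>

  static const std::set<std::string> kLogTables = {"general_log", "slow_log"};

  std::string dump_table_structure(const std::string& table,
                                   const std::string& create_body) {
    std::string out;
    if (kLogTables.count(table)) {
      // No DROP TABLE: it would fail while logging is enabled.
      out += "CREATE TABLE IF NOT EXISTS `" + table + "` " + create_body + ";\n";
    } else {
      out += "DROP TABLE IF EXISTS `" + table + "`;\n";
      out += "CREATE TABLE `" + table + "` " + create_body + ";\n";
      // (data rows would follow here for ordinary tables)
    }
    return out;
  }

  int main() {
    std::cout << dump_table_structure("general_log", "(event_time TIMESTAMP)")
              << dump_table_structure("db", "(Host CHAR(60) NOT NULL)");
  }
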
Venkata Sidagam
daafaa0f86 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not produce dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log
and slow_log tables, because dumping the data of those tables takes
LOCKs, which is not allowed for log tables.

Fix:
----
The approach is that instead of taking both the meta data and the data
dump for those tables, we take only the meta data dump, which doesn't
need LOCKs.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the meta data dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
statements for those tables were removed.

After the fix we may still observe "Table 'mysql.general_log'
doesn't exist" errors initially. That is because in the customer
scenario the mysql database is dropped while logging is enabled,
so those errors are expected. Once we apply the dump taken before
the "drop database mysql", the errors will not be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the
  general_log and slow_log tables, because those statements fail when logging is
  enabled. Also replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those
  tables, to make sure the CREATE statement doesn't fail now that the DROP
  statements have been removed.
  In dump_all_tables_in_db() added code to call get_table_structure() for 
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-04 18:33:34 +05:30
Annamalai Gurusami
b757c13078 In Perl, to break out of a foreach loop we need to use
the keyword "last" and not "break".  Fixing the failing
test case.
2012-05-04 12:29:49 +05:30
Alexander Nozdrin
95205bbaff Third attempt to do a follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND
TO "LOCALHOST" IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

Previous commit comments were wrong. The default value has always been NULL.
The original patch for Bug#12762885 just makes it visible in the logs.

This patch uses "0.0.0.0" string if bind-address is not set.
2012-04-27 21:14:35 +04:00
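
A tiny sketch of the behaviour described above, using an illustrative helper
rather than the server's actual option handling: when no bind-address was
configured, fall back to the documented default "0.0.0.0".

  #include <iostream>
  #include <string>

  // Assumed helper for illustration only: pick the effective bind address.
  std::string effective_bind_address(const char* configured) {
    return (configured && *configured) ? configured : "0.0.0.0";
  }

  int main() {
    std::cout << effective_bind_address(nullptr) << "\n";  // 0.0.0.0
    std::cout << effective_bind_address("::1") << "\n";    // ::1
  }
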
Alexander Nozdrin
476762bd7b Revert two follow-ups for Bug#12762885:
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t
  - alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l.
2012-04-27 21:07:53 +04:00
Alexander Nozdrin
25cdec81e0 Proper follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND TO "LOCALHOST"
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address option,
so the default value became NULL. By coincidence NULL resolves
to 0.0.0.0 and ::, and since the server chooses the first IPv4 address,
0.0.0.0 is chosen. So there was no change in the behaviour.

This patch restores the default value of the bind-address option to "0.0.0.0".
2012-04-27 19:14:28 +04:00
Alexander Nozdrin
6fa011056a Follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND TO "LOCALHOST"
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address option,
so the default value became NULL. By coincidence NULL resolves
to 0.0.0.0 and ::, and since the server chooses the first IPv4 address,
0.0.0.0 is chosen. So there was no change in the behaviour.

This patch restores the default value of the bind-address option to "0.0.0.0".
2012-04-27 18:42:27 +04:00
Yasufumi Kinoshita
96b62513f6 Bug#11758510 (#50723): INNODB CHECK TABLE FATAL SEMAPHORE WAIT TIMEOUT POSSIBLY TOO SHORT FOR BI
Fixed so that the timeout is not checked during CHECK TABLE.
2012-04-27 19:40:12 +09:00
Yasufumi Kinoshita
08e7444e54 Bug#11758510 (#50723): INNODB CHECK TABLE FATAL SEMAPHORE WAIT TIMEOUT POSSIBLY TOO SHORT FOR BI
Fixed so that the timeout is not checked during CHECK TABLE.
2012-04-27 19:38:13 +09:00
irana
02522904bc merge from 5.1 2012-04-26 08:21:25 -07:00
irana
1a2cf649dc InnoDB: Adjust error message when a dropped tablespace is accessed. 2012-04-26 08:17:14 -07:00
Manish Kumar
3f98e95354 BUG#13812374 - RPL.RPL_REPORT_PORT FAILS OCCASIONALLY ON PB2
Problem - The failure on PB2 is possibly due to the port number still being in
          use even after the server restarts, which is not reflected in the
          server restart.

Fix - The problem is fixed by starting the servers forcefully using the option
      file, and the parameters for the server restart are now passed correctly.

mysql-test/suite/rpl/t/rpl_report_port-master.opt:
  Option file for the master.
2012-04-26 19:34:03 +05:30
Inaam Rana
08ead005b1 Bug#13990648: 65061: LRU FLUSH RATE CALCULATION IS BASED ON INVALID VALUES
rb://1043
approved by: Sunny Bains

Two internal counters were incremented twice for a single
operation. The counters are:
srv_buf_pool_flushed
buf_lru_flush_page_count
2012-04-23 22:15:29 -04:00
Georgi Kodinov
b872ce0312 Fixed a cmake compile problem because of the 2.8.8 fix. 2012-04-23 17:18:55 +03:00
Georgi Kodinov
e881e03e01 Bug #59148 - made tests experimental
Fixed a cmake 2.8.8 compilation problem.
2012-04-23 16:26:22 +03:00
Kent Boortz
18e2bee9d1 Merge 2012-04-23 13:03:42 +02:00
Kent Boortz
678f221cb9 Allow Windows absolute paths in N:\ format for the --vardir option 2012-04-23 12:52:14 +02:00
Inaam Rana
2415d955c8 Bug#12677594 - 61575: INNODB: WARNING: IO_SETUP() FAILED WITH EAGAIN.
rb://1033
approved by: Marko Makela

Check return value from os_aio_init() and refuse to start if it fails.
2012-04-23 06:39:16 -04:00
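
A minimal sketch of the start-up pattern in the fix (illustrative names; the
real call is os_aio_init() during InnoDB start-up): check the initialisation
result and refuse to start instead of continuing with a half-initialised AIO
subsystem.

  #include <cstdio>
  #include <cstdlib>

  // Pretend AIO initialisation that can fail, e.g. io_setup() returning EAGAIN.
  bool aio_init_sketch() { return false; }

  int startup_sketch() {
    if (!aio_init_sketch()) {
      std::fprintf(stderr, "error: AIO subsystem could not be initialised\n");
      return EXIT_FAILURE;  // refuse to start
    }
    std::puts("storage engine started");
    return EXIT_SUCCESS;
  }

  int main() { return startup_sketch(); }
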
Andrei Elkin
d8ac252f0e merge from 5.1 (null) 2012-04-23 12:08:24 +03:00
Andrei Elkin
f189ccf530 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
Andrei Elkin
fd9e7e2737 null merge from 5.1. 2012-04-23 11:56:23 +03:00