Commit graph

29199 commits

Author SHA1 Message Date
Nuno Carvalho
09cb0649e5 BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
Nuno Carvalho
8bb98d7535 BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Bjorn Munch
375afcf1df Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
Bjorn Munch
e7b735fb4b Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
Annamalai Gurusami
33d9d40ccd Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
Annamalai Gurusami
b76a59f5a6 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and the generic handler::clone()
does not take care of ha_innobase->prebuilt->select_lock_type. As a
consequence, one index was read with a locking read while the other
index was read with a non-locking (consistent) read.

The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the base class
handler::clone() and then does the additional work required, setting
ha_innobase->prebuilt->select_lock_type correctly.

rb://1060 approved by Marko
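
For illustration, a minimal sketch of the shape of such a clone
override (handler::clone() and prebuilt->select_lock_type are named
above; the body below is an assumption, not the exact 5.5 patch):

    handler *ha_innobase::clone(const char *name, MEM_ROOT *mem_root)
    {
      /* Reuse the generic clone, then copy the state it misses. */
      ha_innobase *new_handler =
        static_cast<ha_innobase*>(handler::clone(name, mem_root));
      if (new_handler)
      {
        /* Without this, one merged index scan could take locks while
           its sibling scan read without them (the bug above). */
        new_handler->prebuilt->select_lock_type =
          prebuilt->select_lock_type;
      }
      return new_handler;
    }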
2012-05-10 10:18:31 +05:30
Sunanda Menon
d37a28c9b0 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Joerg Bruehe
ad1e123f47 Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
Venkata Sidagam
066dc9a281 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5
2012-05-07 16:51:26 +05:30
Venkata Sidagam
14aa2c020e Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and
slow_log tables, a side effect of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, the general_log and slow_log tables
were skipped when dumping, because dumping the data of those tables
takes locks, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those
tables, take only the metadata dump, which does not need locks.
The dump algorithm changes as follows.

Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables in the tables list and do
   'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables in the tables list and do
   'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" statement is replaced with "CREATE TABLE IF NOT EXISTS".
This is because "DROP TABLE" is skipped for those tables: "DROP TABLE"
fails for them if logging is enabled, and the customer applies the
dump with logging enabled, so a dump containing "DROP TABLE" would
fail. Hence the "DROP TABLE" statements for those tables are removed.

After the fix, "Table 'mysql.general_log' doesn't exist" errors can
still be observed initially; in the customer scenario the mysql
database is dropped while logging is enabled, so those errors are
expected. Once the dump taken before the "drop database mysql" is
applied, the errors go away.
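
For illustration, a hedged fragment of how the metadata-only dump
could be emitted (the CREATE TABLE IF NOT EXISTS rewrite is described
above; the helper name and its surroundings are assumptions, not
mysqldump's actual code):

    static void dump_log_table_definition(MYSQL *mysql, FILE *sql_file,
                                          const char *table)
    {
      char query[64];
      MYSQL_RES *result;
      MYSQL_ROW row;

      snprintf(query, sizeof(query), "SHOW CREATE TABLE mysql.%s", table);
      if (mysql_query(mysql, query) ||
          !(result = mysql_store_result(mysql)))
        return;                               /* error handling elided */
      if ((row = mysql_fetch_row(result)))
        /* row[1] is "CREATE TABLE `general_log` (...)"; re-emit it so
           it is re-runnable against an existing table, with no
           preceding DROP TABLE and no data dump (hence no locks). */
        fprintf(sql_file, "CREATE TABLE IF NOT EXISTS %s;\n",
                row[1] + sizeof("CREATE TABLE ") - 1);
      mysql_free_result(result);
    }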
2012-05-07 16:46:44 +05:30
Venkata Sidagam
41cdad9868 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and
slow_log tables, a side effect of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, the general_log and slow_log tables
were skipped when dumping, because dumping the data of those tables
takes locks, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those
tables, take only the metadata dump, which does not need locks.
The dump algorithm changes as follows.

Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables in the tables list and do
   'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables in the tables list and do
   'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" statement is replaced with "CREATE TABLE IF NOT EXISTS".
This is because "DROP TABLE" is skipped for those tables: "DROP TABLE"
fails for them if logging is enabled, and the customer applies the
dump with logging enabled, so a dump containing "DROP TABLE" would
fail. Hence the "DROP TABLE" statements for those tables are removed.

After the fix, "Table 'mysql.general_log' doesn't exist" errors can
still be observed initially; in the customer scenario the mysql
database is dropped while logging is enabled, so those errors are
expected. Once the dump taken before the "drop database mysql" is
applied, the errors go away.
2012-05-04 18:33:34 +05:30
Annamalai Gurusami
f619c8ce83 In Perl, to break out of a foreach loop we need to use
the keyword "last", not "break". Fixed the failing
test case.
2012-05-04 12:29:49 +05:30
Alexander Nozdrin
5f4c6942bf Revert two follow-ups for Bug#12762885:
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t
- alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l
2012-04-27 21:07:53 +04:00
Alexander Nozdrin
3885dc55dd Proper follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND TO "LOCALHOST"
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address
option, so the default became NULL. By coincidence NULL resolves to
0.0.0.0 and ::, and since the server chooses the first IPv4 address,
0.0.0.0 is chosen; so there was no change in behaviour.

This patch restores the default value of the bind-address option to "0.0.0.0".
2012-04-27 19:14:28 +04:00
Manish Kumar
4b917744a2 BUG#13812374 - RPL.RPL_REPORT_PORT FAILS OCCASIONALLY ON PB2
Problem - The failure on PB2 is possibly due to the port number still
          being in use even after the server restarts, which the
          restart does not reflect.

Fix - Start the servers forcefully using the option file, and pass
      the parameters for the server restart correctly.
2012-04-26 19:34:03 +05:30
Kent Boortz
9927a07bc1 Allow Windows absolute paths in N:\ format for the --vardir option 2012-04-23 12:52:14 +02:00
Andrei Elkin
a891de4221 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
Andrei Elkin
d5925c2044 BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
Andrei Elkin
ec2caa37ba merge bug11754117-45670 fixes from 5.1: fixing result files. 2012-04-21 14:19:06 +03:00
Andrei Elkin
bf66e3ab63 merge bug11754117-45670 fixes from 5.1. 2012-04-21 13:24:39 +03:00
Mayank Prasad
e748999eb5 BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Details:
 - test case bug12427262.test was failing on Windows because
   '/' was not recognized on Windows, and it was used in the
   LIKE clause of the query being run in this test case.

Fix:
 - Windows needs '\\\\' as the path separator in mysql. I was
   not sure how to keep a single query with two different
   syntaxes based on platform, so the query is modified to make
   sure it runs correctly on both platforms.
2012-04-21 05:23:09 +05:30
Nuno Carvalho
8ac39aa8e0 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
Merge from 5.1 into 5.5.

Conflicts:
 * sql/log.h
 * sql/sql_repl.cc
2012-04-20 23:35:53 +01:00
Nuno Carvalho
ca33df2094 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;' whose address is assigned to thd->current_linfo;
however, this variable goes out of scope and is destroyed before
thd->current_linfo is cleared.

The problem is solved by moving 'LOG_INFO linfo;' to function scope.
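
A simplified, hedged illustration of the lifetime bug (not the actual
sql_repl.cc code):

    bool mysql_show_binlog_events(THD *thd)
    {
      {
        LOG_INFO linfo;                /* before the fix: block scope */
        thd->current_linfo = &linfo;   /* published to other threads  */
        /* ... locate the log and send events to the client ... */
      }                                /* linfo destroyed here, but
                                          thd->current_linfo still
                                          points at it: dangling.     */
      /* The fix declares linfo at function scope instead, so it
         outlives every use of thd->current_linfo in the function.    */
      return false;
    }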
2012-04-20 22:25:59 +01:00
Andrei Elkin
f3509d1d67 BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
An INSERT into an autoincrement field that is not the first part of a
composite primary key is unsafe by autoincrement logging design. The
case is specific to the MyISAM engine because InnoDB does not allow
such a table definition.

However, no warning was issued and no row-format logging was done in
MIXED mode; that is fixed.

Intvar, Rand, and User-var log events were not filtered along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.

Fixed by deferring their execution until the parent query executes.

******
Bug#11754117 

Post review fixes.
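
A hedged sketch of that deferral mechanism (class and method names are
patterned on the description above; the details are assumptions, not
the actual rpl code):

    #include <vector>

    /* Side-effect events are queued instead of applied immediately;
       the queue is replayed only when the parent query executes, and
       simply discarded if the parent query is filtered out. */
    class Deferred_log_events
    {
      std::vector<Log_event*> m_array;
    public:
      void add(Log_event *ev) { m_array.push_back(ev); }
      bool execute(Relay_log_info *rli)     /* run by the parent query */
      {
        bool error = false;
        for (size_t i = 0; !error && i < m_array.size(); i++)
          error = (m_array[i]->apply_event(rli) != 0);
        rewind();
        return error;
      }
      void rewind() { m_array.clear(); }    /* parent filtered: drop all */
    };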
2012-04-20 19:41:20 +03:00
Mayank Prasad
2786a6e232 BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Details:
- Merge : 5.1 -> 5.5
- Added a new test case which was not added in 5.1 because prepared
  statements were not there in 5.1.
2012-04-19 15:59:46 +05:30
Tor Didriksen
d612986b36 Backport 5.5=>5.1 Patch for Bug#13805127:
Stored program cache produces wrong result in same THD.
2012-04-18 13:14:05 +02:00
Annamalai Gurusami
46c51c40e1 Bug #12902967 Creating self referencing fk on same index unhandled,
confusing error. Updated the test script to work properly on
the Windows platform.
2012-04-18 15:16:11 +05:30
Nuno Carvalho
0eea06c5d0 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Merge from 5.1 into 5.5.
2012-04-18 10:12:19 +01:00
Nuno Carvalho
a9a7e6ea24 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this 
output - for instance MEM generates the SUM of the sizes of the logs 
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an
account that holds open the connection whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections: the reserved SUPER connection slot is already taken
by the "monitor" account.

As SHOW MASTER STATUS and all other replication-related statements
work with either REPLICATION CLIENT or SUPER, this worklog makes SHOW
MASTER LOGS and SHOW BINARY LOGS consistent with them, allowing both
commands with either SUPER or REPLICATION CLIENT.
Monitoring tools thus no longer require the SUPER privilege, which is
safer in overloaded situations as well as more secure, since lighter
privileges can be given to the users of such tools or scripts.
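
A hedged sketch of the privilege-check change (check_global_access,
SUPER_ACL and REPL_CLIENT_ACL are existing server identifiers; the
exact call site is an assumption):

    /* In the dispatch for SHOW BINARY LOGS / SHOW MASTER LOGS:       */
    /* before: if (check_global_access(thd, SUPER_ACL)) goto error;   */
    if (check_global_access(thd, SUPER_ACL | REPL_CLIENT_ACL))
      goto error;                    /* either privilege now suffices */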
2012-04-18 10:08:01 +01:00
Chaithra Gopalareddy
2479f3cb7b Merge from 5.1 to 5.5 2012-04-18 11:34:36 +05:30
Chaithra Gopalareddy
81058259c7 Bug#12713907:STRANGE OPTIMIZE & WRONG RESULT UNDER
ORDER BY COUNT(*) LIMIT.

PROBLEM:
With respect to the problem in the bug description, the two tables
presented behave differently because InnoDB statistics (rec_per_key
in this case) are updated for the first table and not for the
second one. As a result the query plan gets changed in
test_if_skip_sort_order to use an 'index' scan, hence the
difference in the explain output. (NOTE: the problem can be
reproduced with the first table by reducing the number of tuples
and changing the table structure.)

The wrong output for the query on the second table is a consequence
of the query plan change. When a query plan is changed to use an
'index' scan, we set keyread to TRUE immediately after the call to
test_if_skip_sort_order. If we later drop this index scan in favour
of a filesort, we fetch only the keys, not the entire tuple, so
junk values appear in the result set.

Following is the code flow:

Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan

Call test_if_skip_sort_order second time
-Index is not chosen (note that we do not pass the
actual limit value second time. Hence we do not choose
index scan second time which in itself is a bug fixed
in 5.6 with WL#5558)
-goto filesort

Call filesort
-Create a quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of
the index
-As a result the required columns are not fetched

FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.
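
For illustration, a hedged fragment of the resulting shape (the
covering-index check inside the access function is an assumption
about its structure, not the exact diff):

    static int join_read_first(JOIN_TAB *tab)
    {
      TABLE *table = tab->table;
      /* Keyread is enabled only here, once the scan actually runs
         with a covering index, never speculatively during planning. */
      if (table->covering_keys.is_set(tab->index) && !table->key_read)
        table->set_keyread(TRUE);
      /* ... position on the first index entry and read it ... */
      return 0;
    }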
2012-04-18 11:25:01 +05:30
Annamalai Gurusami
30b2e00584 Bug #12902967 Creating self referencing fk on same index unhandled,
confusing error. I have added the not_embedded.inc in the test file.
2012-04-18 09:21:23 +05:30
Annamalai Gurusami
6fc42540f8 Bug #12902967 CREATING SELF REFERENCING FK ON SAME INDEX
UNHANDLED, CONFUSING ERROR

The main confusion with the error message is that "it 
implies that your data dictionary may now be out of 
sync".  This patch will remove the unwanted and the 
misleading error message by not doing an unnecessary 
operation in the error handling code.  

rb://980 approved by: Dmitry Lenev
2012-04-17 16:54:02 +05:30
Georgi Kodinov
d59986d974 merge mysql-5.5->mysql-5.5-security 2012-04-12 14:04:12 +03:00
Sujatha Sivakumar
79579f5681 BUG#12662190:COM_COMMIT IS NOT INCREMENTED FROM THE BINARY LOGS ON SLAVE, COM_BEGIN IS
PROBLEM:
--------
When binary log statements are replayed on the slave, BEGIN is
reflected in the com_counters but COMMIT is not. Similarly, in 'ROW'
based replication the 'INSERT', 'UPDATE', and 'DELETE' com_counters
are not incremented when the binary log statements are replayed on
the slave.

ANALYSIS:
---------
In 'ROW' based replication, the COMMIT, INSERT, UPDATE and DELETE
operations are carried by the special events Xid_log_event,
Write_rows_log_event, Update_rows_log_event and Delete_rows_log_event.

These events do not go through the parser, where the com_counters
are incremented.


FIX:
-----
Increment statements are added at appropriate events.
Respective functions are listed below.

'Xid_log_event::do_apply_event'
'Write_rows_log_event::do_before_row_operations'
'Update_rows_log_event::do_before_row_operations'
'Delete_rows_log_event::do_before_row_operations'
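
A hedged one-line illustration for the COMMIT case
(status_var_increment and com_stat exist in the server; the exact
placement is an assumption):

    int Xid_log_event::do_apply_event(Relay_log_info const *rli)
    {
      /* A replayed transaction commit now shows up in Com_commit. */
      status_var_increment(thd->status_var.com_stat[SQLCOM_COMMIT]);
      /* ... commit the transaction as before ... */
      return 0;
    }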
2012-04-12 11:07:39 +05:30
gopal.shankar@oracle.com
796fad1424 Bug#11815557 60269: MYSQL SHOULD REJECT ATTEMPTS TO CREATE SYSTEM
TABLES IN INCORRECT ENGINE

PROBLEM:
  CREATE/ALTER TABLE can currently move system tables like
mysql.db, user, host etc. to engines other than MyISAM. This is not
completely supported by mysqld as of now. When some system tables
like plugin, servers, event, func, *_priv, time_zone* are moved
to InnoDB, mysqld crashes on restart. Currently system tables
can even be moved to BLACKHOLE!

ANALYSIS:
  The problem is that there is no check before creating or moving
a system table to some particular engine.

  System tables are supposed to reside in MyISAM. We could restrict
system tables to exist only in MyISAM, but there could be future
needs for these system tables to be part of other engines by design.
For example, NDB Cluster expects some tables to be on the InnoDB or
NDB engine. This calls for a solution by which system tables can be
supported by any desired engine with minimal effort.

FIX:
  The solution provides a handlerton interface using which the
mysqld server can query a particular storage engine handlerton for
the system tables it supports. This way each storage engine
layer can define its own system database and system tables.

  The check_engine() function uses the new handlerton function
ha_check_if_supported_system_table() to check if db.tablename
provided in the DDL is supported by the SE.

Note: This fix has modified a test in help.test, which was moving
mysql.help_* to innodb. The primary intention of the test was not
to move them between engines.
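
A hedged sketch of how check_engine() might consult the new hook
(ha_check_if_supported_system_table is named above; the signature,
the helper and the error code are assumptions):

    /* Inside check_engine(), before accepting the requested engine: */
    if (is_system_table(db, table_name) &&     /* hypothetical helper */
        !ha_check_if_supported_system_table(*new_engine, db, table_name))
    {
      my_error(ER_UNSUPPORTED_ENGINE, MYF(0),  /* error code assumed  */
               ha_resolve_storage_engine_name(*new_engine),
               db, table_name);
      return true;   /* reject CREATE/ALTER into an unsupported engine */
    }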
2012-04-11 15:53:17 +05:30
Georgi Kodinov
e6704d116d merge mysql-5.5->mysql-5.5-security 2012-04-10 14:23:17 +03:00
Georgi Kodinov
6e1c96db9a merge mysql-5.1->mysql-5.1-security 2012-04-10 14:21:57 +03:00
Manish Kumar
8465e5cf88 BUG#13812374 - RPL.RPL_REPORT_PORT FAILS OCCASIONALLY ON PB2
Problem - This failure occurred in the test added for the fix of
          bug-13333431. The basic problem was that the value of
          report_port persisted even after the end of the test
          (i.e. rpl_end.inc), causing the assertion in the test to
          fail if it is executed again.

Fix - Restart the server with the default value passed to report_port
      after testing the two expected cases, so that the next run of
      the test does not encounter the previous value of report_port.
2012-04-10 11:44:17 +05:30
Venkata Sidagam
baec942a3f Merged from 5.1 to 5.5 2012-04-09 16:43:54 +05:30
Venkata Sidagam
17743904ba Bug #11766072 59107: MYSQLSLAP CRASHES IF STARTED WITH NO ARGUMENTS ON WINDOWS
This bug is a duplicate of Bug #31173, which was pushed to the 
mysql-trunk 5.6 on 4th Aug, 2010. This is just a back-port of 
the fix.
2012-04-09 16:42:41 +05:30
Venkata Sidagam
3ee454f9b6 Bug #11766072 59107: MYSQLSLAP CRASHES IF STARTED WITH NO ARGUMENTS ON WINDOWS
This bug is a duplicate of Bug #31173, which was pushed to the 
mysql-trunk 5.6 on 4th Aug, 2010. This is just a back-port of 
the fix.
2012-04-09 14:51:46 +05:30
Sergey Glukhov
eb790303d8 5.1-security -> 5.5-security merge 2012-04-04 14:19:00 +04:00
Sergey Glukhov
17817a3009 Bug#11766300 59387: FAILING ASSERTION: CURSOR->POS_STATE == 1997660512 (BTR_PCUR_IS_POSITIONE
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation of key length during
creation of the reference for the sort order index. The problem is
that keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled but
the used_tables parameter (of the create_ref_for_key() function)
does not have it. So key parts which have OUTER_REF_TABLE_BIT are
omitted, which could lead to an incorrect key length calculation
(zero key length).
2012-04-04 13:29:45 +04:00
Rohit Kalhans
fe9352454f BUG#11765650 - 58637: MARK UPDATES THAT DEPEND ON ORDER OF TWO KEYS UNSAFE
Description: When the table has more than one unique or primary key,
 an INSERT ... ON DUP KEY UPDATE statement is sensitive to the order
 in which the storage engine checks the keys. Depending on this order,
 the storage engine may return different rows to mysql, and hence
 mysql can update different rows on master and slave.

 Solution: We mark INSERT ... ON DUP KEY UPDATE on a table with more
 than one unique key as unsafe, therefore the event will be logged in
 row format if that is available (ROW/MIXED). If only STATEMENT format
 is available, a warning is issued.
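
A hedged sketch of the marking (LEX::set_stmt_unsafe() is the
server's existing pattern for this; the enum value and the
unique-key test here are assumptions):

    /* When resolving INSERT ... ON DUPLICATE KEY UPDATE: */
    if (duplicate_handling == DUP_UPDATE &&
        count_unique_keys(table) > 1)          /* hypothetical helper */
      thd->lex->set_stmt_unsafe(
          LEX::BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS);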
2012-03-30 18:35:53 +05:30
Tor Didriksen
71261282b1 Patch for Bug#13805127: Stored program cache produces wrong result in same THD.
Background:

  - as described in MySQL Internals Prepared Stored
    (http://forge.mysql.com/wiki/MySQL_Internals_Prepared_Stored),
    the Optimizer sometimes does destructive changes to the parsed
    LEX-object (Item-tree), which makes it impossible to re-use
    that tree for PS/SP re-execution.

  - in order to be able to re-use the Item-tree, the destructive
    changes are remembered and rolled back after the statement execution.

The problem, discovered by this bug, was that the objects representing
the GROUP-BY clause were not restored after query execution. So the
GROUP-BY part of the statement could not be properly re-initialized
for re-execution after destructive changes.

Those objects do not take part in the Item-tree, so they can not be
saved using the Item-tree approach.

The fix is as follows:

  - introduce a new array in st_select_lex to store the original
    ORDER pointers, representing the GROUP-BY clause;

  - initialize this array in fix_prepare_information();

  - restore the list of GROUP-BY items in reinit_stmt_before_use().
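
A hedged structural sketch of those steps (member names are patterned
on the description above; the allocation and relinking details are
assumptions):

    /* 1) New member in st_select_lex:
          ORDER **group_list_ptrs;  -- saved original GROUP BY order  */

    /* 2) fix_prepare_information(): remember the original list once
          (allocation from the statement MEM_ROOT elided).            */
    uint n = 0;
    for (ORDER *ord = sl->group_list.first; ord; ord = ord->next)
      sl->group_list_ptrs[n++] = ord;

    /* 3) reinit_stmt_before_use(): re-link the GROUP BY list from the
          saved pointers, undoing destructive optimizer rewiring.     */
    for (uint ix = 0; ix + 1 < n; ++ix)
      sl->group_list_ptrs[ix]->next = sl->group_list_ptrs[ix + 1];
    if (n)
    {
      sl->group_list_ptrs[n - 1]->next = NULL;
      sl->group_list.first = sl->group_list_ptrs[0];
    }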
2012-03-29 15:07:54 +02:00
Sunny Bains
90436a096c Bug #13817703 - auto_increment_offset != 1 + innodb_autoinc_lock_mode=1 => bulk inserts fail
Fix the calculation of the next autoinc value when offset > 1. Some of the
results have changed due to the changes in the allocation calculation. The
new calculation will result in slightly bigger gaps for bulk inserts.
  
rb://866 Approved by Jimmy Yang.
Backported from mysql-trunk (5.6)
2012-03-29 18:02:08 +11:00
Gleb Shchepa
4a03cdb7b6 Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE

An infinite loop in the subselect_indexsubquery_engine::exec()
function caused a server hang with 100% CPU usage.

The BLACKHOLE storage engine didn't update the handler's
table->status variable after index operations, which caused
"while (!table->status)" to loop forever.

Index access methods of the BLACKHOLE engine handler
have been updated to set the table->status variable to
STATUS_NOT_FOUND or 0 when such a method returns a
HA_ERR_END_OF_FILE error or 0, respectively.
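
A hedged fragment in the shape of that fix (the method shown is one
of several BLACKHOLE index access methods; treating it as
representative is an assumption):

    int ha_blackhole::index_read_map(uchar *buf, const uchar *key,
                                     key_part_map keypart_map,
                                     enum ha_rkey_function find_flag)
    {
      int rc = HA_ERR_END_OF_FILE;         /* BLACKHOLE stores nothing */
      /* New: keep table->status in sync so callers polling it make
         progress instead of spinning in "while (!table->status)". */
      table->status = rc ? STATUS_NOT_FOUND : 0;
      return rc;
    }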
2012-03-28 12:22:31 +04:00
Praveenkumar Hulakund
86ad81b3af Merge from 5.1 to 5.5 2012-03-28 13:35:08 +05:30
Praveenkumar Hulakund
19c375c94c Bug#11763507 - 56224: FUNCTION NAME IS CASE-SENSITIVE
Analysis:
-------------------------------
According to the Manual
(http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
"Column, index, stored routine, and event names are not case sensitive on any
platform, nor are column aliases."

In other words, 'lower_case_table_names' does not affect the behaviour of 
those identifiers.

On the other hand, trigger names are case sensitive on some platforms,
and case insensitive on others. 'lower_case_table_names' does not affect
the behaviour of trigger names either.

The bug was that SHOW statements performed case-sensitive comparison
for stored procedure / stored function / event names.

Fix:
Modified the code so that the comparison is case insensitive for
routines and events in SHOW operations.

This commit only fixes the test failures caused by the actual code fix.
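
A hedged illustration of the comparison change (my_strcasecmp and
system_charset_info are existing server identifiers; the call site is
an assumption):

    /* before (case sensitive):  strcmp(item_name, search_name) == 0  */
    /* after (case insensitive, matching the manual's rule):          */
    bool match = my_strcasecmp(system_charset_info,
                               item_name, search_name) == 0;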
2012-03-28 12:05:31 +05:30