Commit graph

29240 commits

Author SHA1 Message Date
Marc Alff
e5f925b066 Bug#14100113 - PERFSCHEMA.FUNC_MUTEX FAILS WITH RESULT CONTENT MISMATCH OCCASIONALLY ON PB2
Improved the robustness of the func_mutex test.
2012-09-07 11:09:22 +02:00
Annamalai Gurusami
4473cf902a Merge from mysql-5.1 to mysql-5.5. 2012-09-03 11:57:25 +05:30
Annamalai Gurusami
3f0e739e3e Merge from mysql-5.1 to mysql-5.5. 2012-09-01 11:27:53 +05:30
Annamalai Gurusami
f3a6816fe5 Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN
THOUGH IT IS NOT.

The following error message is misleading because it claims 
that the BLOB space is not counted.  

"ERROR 1118 (42000): Row size too large. The maximum row size for 
the used table type, not counting BLOBs, is 8126. You have to 
change some columns to TEXT or BLOBs"

When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row.  So
the above error message is changed as follows, depending on
the row format used:

For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format, 
BLOB prefix of 0 bytes is stored inline."

For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."

rb://1252 approved by Marko Makela
2012-08-31 15:42:00 +05:30
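A minimal SQL sketch of the limit behind these messages (hypothetical table; assumes the default 16K page size, innodb_strict_mode=ON so the check fires at CREATE time, and a Barracuda file format where DYNAMIC is mentioned):

  -- Eleven TEXT columns: under COMPACT/REDUNDANT each column may keep a
  -- 768-byte prefix inline, so the maximum row size exceeds 8126 bytes and
  -- the second error message above is raised.
  CREATE TABLE t_wide (
    c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
    c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT
  ) ENGINE=InnoDB ROW_FORMAT=COMPACT;   -- fails with ERROR 1118

  -- The same definition with ROW_FORMAT=DYNAMIC stores a BLOB prefix of
  -- 0 bytes inline (only a 20-byte pointer per column) and is accepted.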
Aditya A
30aadd6b41 Bug#14145950 AUTO_INCREMENT ON DOUBLE WILL FAIL ON WINDOWS
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)

  Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
  
  The assertion introduced in the fix for Bug#13817703
  is too strong: a negative number can be greater
  than the column max value when the column value is
  a negative number.
  
  rb://978 Approved by Jimmy Yang.

rb:1236 approved by Marko Makela
2012-08-27 15:42:11 +05:30
Martin Hansson
456409022d Bug#14498355: Merge 2012-08-24 11:56:14 +02:00
Martin Hansson
df2bdd6063 Bug#14498355: DEPRECATION WARNINGS SHOULD NOT CONTAIN MYSQL VERSION
NUMBERS

If a system variable was declared as deprecated without mention of an
alternative, the message would look funny, e.g. for @@delayed_insert_limit:

Warning 1287 '@@delayed_insert_limit' is deprecated and
will be removed in MySQL .

The message was meant to display the version number, but it's not
possible to give one when declaring a system variable.

The fix does two things:

1) The definition of the message
ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does
not display a version number. I.e. in English the message now reads:

Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and
will be removed in a future version.

2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in
favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change
was already done in versions 5.6 and above as part of wl#5265. This
part is simply back-ported from the worklog.
2012-08-24 10:17:08 +02:00
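A short SQL sketch of how the reworded warning surfaces (illustrative; assumes a server version in which this variable is flagged as deprecated, as in the example above):

  SET GLOBAL delayed_insert_limit = 100;
  SHOW WARNINGS;
  -- Warning 1287: The syntax '@@delayed_insert_limit' is deprecated and
  --               will be removed in a future version.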
Marc Alff
9bc328a41a Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
WARNING

This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.

This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.

See the bug comments for details.
2012-08-24 10:01:59 +02:00
Venkata Sidagam
3f8a9984f6 Bug #13115401: -SSL-KEY VALUE IS NOT VALIDATED AND IT ALLOWS INSECURE
CONNECTIONS IF SPE

Merged from mysql-5.1 to mysql-5.5
2012-08-11 15:52:11 +05:30
Venkata Sidagam
18087b049e Bug #13115401: -SSL-KEY VALUE IS NOT VALIDATED AND IT ALLOWS INSECURE
CONNECTIONS IF SPE

Problem description: the --ssl-key value is not validated; you can assign any
bogus text to --ssl-key, it is not verified that the file exists, and, more
importantly, the client is still allowed to connect to mysqld.

Fix: Added proper validation checks for --ssl-key.

Note:
1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in the
   sections listed below:

 http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
    and
 REQUIRE SSL section of
 http://dev.mysql.com/doc/refman/5.6/en/grant.html

2) A client using the '--ssl' option should be able to get an SSL connection.
This will be implemented as part of a separate fix in 5.6 and trunk.
2012-08-11 15:43:04 +05:30
Venkata Sidagam
e130d9efbf Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Fixed the federated/include folder missing at the time of preparing the
package distribution; the issue happens only in 5.1.
2012-07-27 12:05:37 +05:30
Venkata Sidagam
fe7a28e759 Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Merged pb2 test failure fix from mysql-5.1 to mysql-5.5
2012-07-26 23:27:01 +05:30
Venkata Sidagam
b6ecca263c Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Fix for pb2 test failure.
2012-07-26 23:23:04 +05:30
Venkata Sidagam
fdaafddd80 Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Merged from mysql-5.1 to mysql-5.5
2012-07-26 15:29:19 +05:30
Venkata Sidagam
aef1982be0 Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Problem description:
Table 't' is created with two columns and a compound index on both
columns under the InnoDB/MyISAM engine on the remote machine. On the local
machine the same table is created under the FEDERATED engine.
A SELECT with a WHERE clause combining conditions with 'AND' gives wrong
results on the local machine.

Analysis:
The given query is wrongly transformed by the
ha_federated::create_where_from_key() function and sent in that form to
the remote machine. Hence the local machine shows wrong results.

Given query: "select c1 from t where c1 <= 2 and c2 = 1;"
The query, after transformation by ha_federated::create_where_from_key(), is:
SELECT `c1`, `c2` FROM `t` WHERE  (`c1` IS NOT NULL ) AND
( (`c1` >= 2)  AND  (`c2` <= 1) ), and this is what is sent to real_query().
In the above, the '<=' and '=' conditions were transformed to '>=' and
'<=' respectively.

ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to build the "(`c1` IS NOT NULL )" part of the WHERE clause; this
transformation is correct. The end_key is used to build "( (`c1` >= 2)
AND  (`c2` <= 1) )", which is wrong: here the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key contains {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}.

The store_length has the value '5'. Based on the store_length and length
values, the condition is applied in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' case applies only to the last part of the end_key;
for the previous parts execution goes to the 'HA_READ_KEY_OR_NEXT' case,
where '>=' gets added as the condition instead of '<='.

Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it applies
to all parts of the end_key, i.e. 'i > 0' is used for the end_key; hence it
was added to the if condition.


mysql-test/suite/federated/federated.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
  modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
  moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
  moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
  moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
  updated the 'if condition' in ha_federated::create_where_from_key() 
  function.
2012-07-26 15:09:22 +05:30
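A compact SQL sketch of the scenario (table, server and user names hypothetical):

  -- On the remote server:
  CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1), (2, 1), (3, 1);

  -- On the local server:
  CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=FEDERATED
    CONNECTION='mysql://user@remote_host:3306/test/t';

  -- Before the fix, create_where_from_key() rewrote the WHERE clause as
  -- "(`c1` IS NOT NULL) AND ((`c1` >= 2) AND (`c2` <= 1))" on the remote
  -- side, so the local server returned the wrong rows:
  SELECT c1 FROM t WHERE c1 <= 2 AND c2 = 1;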
Annamalai Gurusami
1383660024 Bug #13113026 INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU FROM 5.6 BACKPORT
Backporting the WL#5716, "Information schema table for InnoDB 
buffer pool information". Backporting revisions 2876.244.113, 
2876.244.102 from mysql-trunk.

rb://1175 approved by Jimmy Yang.
2012-07-25 13:51:39 +05:30
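A quick way to exercise the backported tables (a sketch; the table and column names follow the INFORMATION_SCHEMA definitions from the worklog):

  -- Summarise the buffer pool LRU list by page type
  SELECT page_type, COUNT(*) AS pages
  FROM information_schema.INNODB_BUFFER_PAGE_LRU
  GROUP BY page_type
  ORDER BY pages DESC;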
Annamalai Gurusami
1c6f78e337 Bug #13113026 INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU FROM 5.6 BACKPORT
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.

rb://1177 approved by Jimmy Yang.
2012-07-25 10:48:16 +05:30
Chaithra Gopalareddy
0ef427ae3f Merge from 5.1 to 5.5 2012-07-18 15:18:15 +05:30
Chaithra Gopalareddy
ddcd6867e9 Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
"ORDER BY" AND "LIMIT BY" CLAUSE

PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.

ANALYSIS:
With respect to the query specified, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the 'group by' clause.

While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).

FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing and hence all the rows will be needed.

This fix is backported from 5.6. This problem is fixed in 5.6 as
part of the changes for work log #5558.


mysql-test/r/subselect.result:
  Changes for Bug#11762052 results in the correct number of rows.
sql/sql_select.cc:
  Do not pass the actual 'limit' value if 'need_tmp' is true.
2012-07-18 14:36:08 +05:30
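A hypothetical schema and query matching the shape described in the analysis (a range on 'a', an index on 'c,b,a', and an ORDER BY on a different column than the GROUP BY prefix):

  CREATE TABLE t1 (
    a INT, b INT, c INT,
    KEY k_a (a),          -- picked by the range optimizer
    KEY k_cba (c, b, a)   -- wrongly preferred when the LIMIT was considered
  ) ENGINE=InnoDB;

  SELECT c, b FROM t1
  WHERE a > 10
  GROUP BY c, b
  ORDER BY b
  LIMIT 5;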
Nuno Carvalho
8dc96fc7aa BUG#14310067: RPL_CANT_READ_EVENT_INCIDENT AND RPL_BUG41902 FAIL ON 5.5
rpl_cant_read_event_incident:
The slave applies updates from the bug11747416_32228_binlog.000001 file, which
contains a CREATE TABLE t statement and an incident. When the SQL thread is
running slowly, the IO thread may reach the incident before the SQL thread
executes the create table statement.
Execute "drop table if exists t" and also perform a RESET MASTER to
clean the slave binary logs.

rpl_bug41902:
The suppression for the error "MYSQL_BIN_LOG::purge_logs was called with file
./master-bin.000001 not listed in the index." does not consider the Windows
path, which is ".\master-bin.000001".
Changed the suppression to "MYSQL_BIN_LOG::purge_logs was called with file
..master-bin.000001 not listed in the index", to match both ".\" and "./".
2012-07-13 10:04:59 +01:00
Sunanda Menon
3dd7dd8307 Merge from mysql-5.5.25a-release 2012-07-06 11:35:46 +02:00
Rohit Kalhans
91c8e79fcd BUG#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
This is a follow-up patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in mysql-trunk and mysql-5.5 because in the release
build mysqlbinlog was not debug compiled whereas mysqld was.
Since the have_debug.inc script only checks whether mysqld is debug
compiled, the test was not being skipped on release builds.

We resolve this problem by creating a new include file,
mysqlbinlog_have_debug.inc, which checks exclusively whether mysqlbinlog
is debug compiled; if not, it skips the test.
 

mysql-test/include/mysqlbinlog_have_debug.inc:
  new inc file to check if mysqlbinlog is debug compiled.
2012-07-03 18:00:21 +05:30
Joerg Bruehe
2fc2d9a232 Added some extra optional path to test suites. 2012-07-02 13:09:33 +02:00
Bjorn Munch
5f5df5091e Fix mysql_plugin test to handle version XXa 2012-06-29 14:19:31 +02:00
Georgi Kodinov
06f6e4fe95 Bug #12998841: libmysql divulges plaintext password upon request in 5.5
1. The clear text password client plugin is disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN that,
when set to something starting with '1', 'Y' or 'y', enables the clear text
plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is non-zero
the clear text plugin is enabled for this connection only.
4. Added an enable-cleartext-plugin config file option that takes a numeric
argument. If the numeric argument is non-zero, the clear text plugin is
enabled for the connection.
5. Added a boolean command line option "--enable_cleartext_plugin" to
mysql, mysqlslap and mysqladmin. When specified it calls mysql_options()
with the effect of #3.
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified it enables the cleartext plugin for usage.
7. Added test cases and updated existing ones that need the clear text
plugin.
2012-07-05 09:55:20 +03:00
Georgi Kodinov
9ce35ffc86 Bug #11753490: 44939: sql dumps containing broad views fail when
executing

The problem is that mysql lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus it needs to create "stand-in" MyISAM tables for each view while
dumping the tables; it later drops them and replaces them with the actual
view definition.
But since views can have many more columns than an actual table, creating
these stand-in tables may be problematic.

There's no way to portably find out how many columns a MyISAM table
can have. It's a complicated formula depending on internal server constants.
Thus we can't have a reliable error check without repeating the logic and
the formula inside mysqldump.

1. Changed the type of the columns of the stand-in tables mysqldump
makes to satisfy view dependencies from the original type to smallint
to save on row space.

2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if the columns of a view exceed 1000.

3. Added a test case.
2012-07-04 17:48:58 +03:00
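A sketch of the stand-in sequence in a dump after this change (object names hypothetical; the point is the placeholder's smallint columns):

  -- Stand-in emitted first so later objects that reference the view can be
  -- created; after the fix its columns are smallint regardless of the view's
  -- real column types.
  CREATE TABLE `v1` (
    `id` smallint,
    `name` smallint
  ) ENGINE=MyISAM;

  -- Later in the dump the stand-in is dropped and replaced by the real view.
  CREATE TABLE `t1` (`id` INT, `name` VARCHAR(32)) ENGINE=InnoDB;
  DROP TABLE IF EXISTS `v1`;
  CREATE VIEW `v1` AS SELECT `id`, `name` FROM `t1`;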
Rohit Kalhans
77c599eb1e upmerge from mysql-5.1 to mysql-5.5 2012-07-03 18:08:31 +05:30
Mayank Prasad
608c2c018e Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Details:
 - Modified the test case to make sure it runs for all storage engines
   and not only for the Archive storage engine.
2012-07-03 09:55:51 +05:30
Gleb Shchepa
4661d95433 manual merge (WL6219)
sql/sql_yacc.yy:
  manual merge (backport of WL6219)
2012-06-29 14:12:21 +04:00
Gleb Shchepa
767501fb54 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
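For example (any display width other than 4 triggers the note quoted above):

  CREATE TABLE t_year (y YEAR(2));
  SHOW WARNINGS;
  -- Note: YEAR(2) is deprecated and will be removed in a future release.
  --       Please use YEAR(4) instead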
unknown
775293b898 BUG #13946716: FEDERATED_PLUGIN TEST CASE FAIL ON 64BIT ARCHITECTURES 2012-06-14 17:07:49 +05:30
Manish Kumar
db982ec87f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Upmerge from mysql-5.1 -> mysql-5.5
2012-06-12 12:59:56 +05:30
Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========

Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.

The reason this failure occurs is that the event length, after addition
of the max_event_header length, exceeds the max_allowed_packet of the
DUMP thread.
This causes the Dump thread to break replication and throw an error.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is increased to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using the new server option (slave_max_allowed_packet),
    which is used by the DBA to regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows the large packets to be received by the slave and applied
    successfully.

sql/log_event.cc:
  The max_allowed_packet is not evaluated to the new option 
  slave_max_allowed_packet after the fix.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increasing the session max_allowed_packet to a large value,
  i.e. not taking global(max_allowed) into consideration, for the slave's threads.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit
  which makes it independent and it now reads whatever gets 
  logged in the binary log.
2012-06-12 12:59:13 +05:30
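A minimal sketch of the new option in use on the slave (assuming it is exposed as a settable global variable, which is how such options are normally surfaced):

  -- Allow replication events of up to 1GB, independently of the
  -- max_allowed_packet used for ordinary client traffic.
  SET GLOBAL slave_max_allowed_packet = 1073741824;
  SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';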
Tor Didriksen
6e2ce8739c Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Followup patch: wrong result unless archive storage engine is available.
2012-06-05 12:27:20 +02:00
Jon Olav Hauglid
1c0388a5be Bug#13982017: ALTER TABLE RENAME ENDS UP WITH ERROR 1050 (42S01)
Fixed by backport of:
    ------------------------------------------------------------
    revno: 3402.50.156
    committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
    branch nick: mysql-trunk-test
    timestamp: Wed 2012-02-08 14:10:23 +0100
    message:
      Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
      
      This assert could be triggered if an InnoDB table was being moved
      to a different database using ALTER TABLE ... RENAME, while this
      database concurrently was being dropped by DROP DATABASE.
      
      The reason for the problem was that no metadata lock was taken
      on the target database by ALTER TABLE ... RENAME.
      DROP DATABASE was therefore not blocked and could remove
      the database while ALTER TABLE ... RENAME was executing. This
      could cause the assert in InnoDB to be triggered.
      
      This patch fixes the problem by taking an IX metadata lock on
      the target database before ALTER TABLE ... RENAME starts
      moving a table to a different database.
      
      Note that this problem did not occur with RENAME TABLE which
      already takes the correct metadata locks.
      
      Also note that this patch slightly changes the behavior of
      ALTER TABLE ... RENAME. Before, the statement would abort and
      return an error if a lock on the target table name could not
      be taken immediately. With this patch, ALTER TABLE ... RENAME
      will instead block and wait until the lock can be taken 
      (or until we get a lock timeout). This also means that it is
      possible to get ER_LOCK_DEADLOCK errors in this situation
      since we allow ALTER TABLE ... RENAME to wait and not just
      abort immediately.
2012-06-01 09:31:24 +02:00
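A minimal reproduction sketch of the race described above (database and table names hypothetical):

  CREATE DATABASE db1;
  CREATE DATABASE db2;
  CREATE TABLE db1.t1 (a INT) ENGINE=InnoDB;

  -- Session 1: with the fix, this first takes an IX metadata lock on db2.
  ALTER TABLE db1.t1 RENAME db2.t1;

  -- Session 2, concurrently: before the fix this could remove db2 while the
  -- rename was executing and trigger the InnoDB assert; now it waits for the
  -- metadata lock (or times out / reports ER_LOCK_DEADLOCK).
  DROP DATABASE db2;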
Manish Kumar
a9ca9403a7 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========

Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.

The reason this failure occurs is that the event length, after addition
of the max_event_header length, exceeds the max_allowed_packet of the
DUMP thread.
This causes the Dump thread to break replication and throw an error.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is increased to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using the new server option (slave_max_allowed_packet),
    which is used by the DBA to regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows the large packets to be received by the slave and applied
    successfully.

sql/log_event.cc:
  The max_allowed_packet is not evaluated to the new option 
  slave_max_allowed_packet after the fix.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increasing the session max_allowed_packet to a large value,
  i.e. not taking global(max_allowed) into consideration, for the slave's threads.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit
  which makes it independent and it now reads whatever gets 
  logged in the binary log.
2012-05-30 10:10:52 +05:30
Mayank Prasad
b2888adb70 PB2 Failure Fix: Disabled the test case in the correct manner. 2012-05-25 15:44:27 +05:30
Mayank Prasad
efe42792b3 Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Details:
 - Archive storage engine file accesses were not instrumented and thus
   were not shown in PS tables.

Fix:
 - Added instrumentation code by using the PS APIs for I/O.
2012-05-24 23:00:32 +05:30
Norvald H. Ryeng
889ce03108 WL#6311 Remove --safe-mode
Print a deprecation warning if the --safe-mode command line option is
used.
2012-05-23 12:27:32 +02:00
Marc Alff
be7a6d93cc Improved the test performance_schema.func_file_io,
so that investigating test failures is easier.

Detect cases when @before_count / @after_count is NULL.
2012-05-23 10:21:35 +02:00
Annamalai Gurusami
8aedf7d30f After the fix for Bug #12752572, there is a pb2 failure. Fixing it by
updating the result file.  Because a multi-row insert now reserves the
auto-increment values beforehand, if any explicitly specified auto-increment
values are present, some of the reserved values are lost.
2012-05-23 10:12:52 +05:30
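A small SQL sketch of the behaviour behind the updated result file (hypothetical table; the exact ids handed out depend on the auto-increment lock mode):

  CREATE TABLE t_ai (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
  -- The multi-row insert reserves a block of ids up front; the explicit
  -- value in the middle means some of the reserved ids are never used.
  INSERT INTO t_ai (id, v) VALUES (NULL, 1), (10, 2), (NULL, 3);
  SELECT id, v FROM t_ai ORDER BY id;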
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========

SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.

The reason this failure occurs is that the event length is
larger than max_allowed_packet + the max_event_header
length. Since the event length exceeds this size, the master Dump
thread is unable to send the packet on to the slave.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option which is used to
regulate the max_allowed_packet of the slave threads (IO/SQL).
This allows the large packets to be received by the slave and applied
successfully.

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increasing the session max_allowed_packet to a large value,
  i.e. not taking global(max_allowed) into consideration, for the slave's threads.
2012-05-21 12:57:39 +05:30
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Nuno Carvalho
f0e93ac2bf BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
Bjorn Munch
0e66663159 Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
Bjorn Munch
e72278fd42 Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
Annamalai Gurusami
326b40c9c8 Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
Annamalai Gurusami
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTIO

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB does not provide a clone operation:
there is no ha_innobase::clone(), and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type.  Because of
this, for one index we do a locking read, while
for the other index we were doing a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function.
It is implemented similarly to ha_myisam::clone().  It calls the
base class handler::clone() and then does any additional operations
required, setting ha_innobase->prebuilt->select_lock_type
correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
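A hypothetical query shape that exercises this code path: a locking read whose plan may merge rowid-ordered scans over two indexes, so init_ror_merged_scan() clones the handler once per index:

  CREATE TABLE t (
    a INT, b INT, c INT,
    PRIMARY KEY (c), KEY ka (a), KEY kb (b)
  ) ENGINE=InnoDB;

  -- If the optimizer chooses an index_merge (ROR-intersection) plan here,
  -- the cloned handler used for the second index previously did a
  -- non-locking read even though FOR UPDATE was requested.
  SELECT c FROM t WHERE a = 1 AND b = 2 FOR UPDATE;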
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Joerg Bruehe
5be07ceadd Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00