Commit graph

70622 commits

Author SHA1 Message Date
Jon Olav Hauglid
6094d19095 Bug#13586591 RQG GRAMMAR CONF/ENGINES/ENGINE_STRESS.YY
CRASHES INNODB | TRX_STATE_NOT_STARTED

The problem was that if a DELETE with a subselect caused a
deadlock inside InnoDB, the deadlock was not properly
handled by the SQL layer. The SQL layer would then try to
unlock the row after InnoDB had already rolled back the
transaction, which triggered an assertion inside InnoDB.

This patch fixes the problem by checking for errors
reported by SQL_SELECT::skip_record() and not calling
unlock_row() if any errors have been reported.
2012-06-12 15:04:57 +02:00
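
A minimal standalone sketch of the pattern described in the commit
above (hypothetical types; the skip results and unlock_row() stand in
for SQL_SELECT::skip_record() and handler::unlock_row()): the row is
only unlocked when no error was reported, so a deadlock that the
engine has already rolled back is never followed by a stray unlock.

  #include <cstdio>

  enum SkipResult { SKIP_ERROR = -1, SKIP_NO = 0, SKIP_YES = 1 };

  struct Handler {
    void unlock_row() { std::puts("unlock_row"); }  // must not run after a rollback
  };

  int process_row(Handler &h, SkipResult res) {
    if (res == SKIP_ERROR)
      return 1;         // deadlock already rolled back: propagate, do not unlock
    if (res == SKIP_YES)
      h.unlock_row();   // row not needed and no error reported: safe to unlock
    return 0;
  }

  int main() {
    Handler h;
    return process_row(h, SKIP_ERROR);  // returns 1 without calling unlock_row()
  }
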
Manish Kumar
db982ec87f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Upmerge from mysql-5.1 -> mysql-5.5
2012-06-12 12:59:56 +05:30
Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the
master Dump thread's max_allowed_packet.

The failure occurs because the event length, plus the
max_event_header length, exceeds the Dump thread's max_allowed_packet.
This causes the Dump thread to break replication and throw an error.

This can happen, for example, with row-based replication in an
Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side, the max_allowed_packet for the slave's threads
    (IO/SQL) is increased to 1GB.

    This is done through a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the slave
    threads (IO/SQL) and facilitates sending large packets from the
    master to the slave.

    This allows the slave to receive large packets and apply them
    successfully.

sql/log_event.cc:
  After the fix, the packet-size check here is based on the new
  slave_max_allowed_packet option rather than max_allowed_packet.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the
  slave's threads, i.e. without taking the global max_allowed_packet
  into consideration.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit,
  which makes it independent; it now reads whatever gets logged in
  the binary log.
2012-06-12 12:59:13 +05:30
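
A hedged sketch of the two-step fix described above (hypothetical
names and structures, not the actual server code): the dump thread
reads with the protocol's upper bound, while the slave IO/SQL threads
take their session limit from the new slave_max_allowed_packet option
rather than the global max_allowed_packet.

  #include <cstdint>
  #include <cstdio>

  static const uint32_t kPacketUpperLimit = 1024U * 1024U * 1024U;  // 1GB

  struct Options {
    uint32_t max_allowed_packet = 4U * 1024U * 1024U;       // global default
    uint32_t slave_max_allowed_packet = kPacketUpperLimit;  // new option
  };

  // Dump thread: read limit pinned to the upper bound, so any event that
  // was written to the binary log can be read back and sent.
  uint32_t dump_thread_read_limit(const Options &) { return kPacketUpperLimit; }

  // Slave IO/SQL threads: session limit regulated by the new option only.
  uint32_t slave_session_limit(const Options &opt) {
    return opt.slave_max_allowed_packet;
  }

  int main() {
    Options opt;
    std::printf("dump=%u slave=%u\n", (unsigned)dump_thread_read_limit(opt),
                (unsigned)slave_session_limit(opt));
  }
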
Bjorn Munch
d74c3df6ce Backport SuSE 11 fix to RPM spec file 2012-06-08 16:31:03 +02:00
Narayanan Venkateswaran
164080e4a8 WL#6161 Integrating with InnoDB codebase in MySQL 5.5
Changes in the InnoDB codebase required to compile and
integrate the MEB codebase with MySQL 5.5.

@ storage/innobase/btr/btr0btr.c
  Excluded buffer pool usage from MEB build.
 
  buf_pool_from_bpage calls are in buf0buf.ic, and
  the buffer pool functions from that file are
  disabled in MEB.
@ storage/innobase/buf/buf0buf.c
  Disabling more buffer pool functions unused in MEB.
@ storage/innobase/dict/dict0dict.c
  Disabling dict_ind_free that is unused in MEB.
@ storage/innobase/dict/dict0mem.c
  The include

  #include "ha_prototypes.h"

  was causing conflicts with definitions in my_global.h.

  Linking C executable mysqlbackup
  libinnodb.a(dict0mem.c.o): In function `dict_mem_foreign_table_name_lookup_set':
  dict0mem.c:(.text+0x91c): undefined reference to `innobase_get_lower_case_table_names'
  libinnodb.a(dict0mem.c.o): In function `dict_mem_referenced_table_name_lookup_set':
  dict0mem.c:(.text+0x9fc): undefined reference to `innobase_get_lower_case_table_names'
  libinnodb.a(dict0mem.c.o): In function `dict_mem_foreign_table_name_lookup_set':
  dict0mem.c:(.text+0x96e): undefined reference to `innobase_casedn_str'
  libinnodb.a(dict0mem.c.o): In function `dict_mem_referenced_table_name_lookup_set':
  dict0mem.c:(.text+0xa4e): undefined reference to `innobase_casedn_str'
  collect2: ld returned 1 exit status
  make[2]: *** [mysqlbackup] Error 1

  innobase_get_lower_case_table_names and innobase_casedn_str are
  functions that are part of ha_innodb.cc, which is not part of the build.

  The function dict_mem_foreign_table_name_lookup_set is not in the
  current MEB codebase, meaning we do not use it in MEB.
@ storage/innobase/fil/fil0fil.c
  The srv_fast_shutdown variable is declared in srv0srv.c, which is
  not compiled in the mysqlbackup codebase.

  This results in an 'undeclared identifier' error.

  From the Manual
  ---------------

  innodb_fast_shutdown
  --------------------

  The InnoDB shutdown mode. The default value is 1
  as of MySQL 3.23.50, which causes a “fast” shutdown
  (the normal type of shutdown). If the value is 0,
  InnoDB does a full purge and an insert buffer merge
  before a shutdown. These operations can take minutes,
  or even hours in extreme cases. If the value is 1,
  InnoDB skips these operations at shutdown.

  Ideally this does not matter for mysqlbackup.
@ storage/innobase/ha/ha0ha.c
  In file included from /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/ha/ha0ha.c:34:0:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/btr0sea.h:286:17: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
  make[2]: *** [CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/ha/ha0ha.c.o] Error 1
  make[1]: *** [CMakeFiles/innodb.dir/all] Error 2
  make: *** [all] Error 2

  # include "sync0rw.h" is excluded from hotbackup compilation in dict0dict.h

  This causes extern rw_lock_t*	btr_search_latch_temp; to throw a failure because
  the definition of rw_lock_t is not found.
@ storage/innobase/include/buf0buf.h
  Excluding buffer pool functions that are unused from the
  MEB codebase.
@ storage/innobase/include/buf0buf.ic
  replicated the exclusion of

  #include "buf0flu.h"
  #include "buf0lru.h"
  #include "buf0rea.h"

  by looking at the current codebase in <meb-trunk>/src/innodb
@ storage/innobase/include/dict0dict.h
  dict_table_x_lock_indexes, dict_table_x_unlock_indexes, dict_table_is_corrupted,
  dict_index_is_corrupted, and buf_block_buf_fix_inc_func are unused in MEB and
  were leading to compilation errors, hence they are excluded.
@ storage/innobase/include/dict0dict.ic
  dict_table_x_lock_indexes, dict_table_x_unlock_indexes, dict_table_is_corrupted,
  dict_index_is_corrupted, and buf_block_buf_fix_inc_func are unused in MEB and
  were leading to compilation errors, hence they are excluded.
@ storage/innobase/include/log0log.h
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/log0log.h: At top level:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/log0log.h:767:2: error: expected specifier-qualifier-list before ‘mutex_t’

  mutex_t definitions were excluded, following the surrounding code;
  hence the definition of log_flush_order_mutex is excluded as well.
@ storage/innobase/include/os0file.h
  Bug in InnoDB code: create_mode should have been create.
@ storage/innobase/include/srv0srv.h
  In file included from /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/buf/buf0buf.c:50:0:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/srv0srv.h: At top level:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/srv0srv.h:120:16: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘srv_use_native_aio’

  srv_use_native_aio: MEB does not use the OS's native AIO anyway, and
  MEB does not compile InnoDB with this option. Hence it is disabled.
@ storage/innobase/include/trx0sys.h
  [ 56%] Building C object CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c.o
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c: In function ‘trx_sys_read_file_format_id’:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c:1499:20: error: ‘TRX_SYS_FILE_FORMAT_TAG_MAGIC_N’   undeclared (first use in this function)
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c:1499:20: note: each undeclared identifier is reported only once for  each function it appears in
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c: At top level:
  /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/buf0buf.h:607:1: warning: ‘buf_block_buf_fix_inc_func’ declared ‘static’ but never defined
  make[2]: *** [CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c.o] Error 1

  Unused calls are excluded to enable compilation.
@ storage/innobase/mem/mem0dbg.c
    excluding #include "ha_prototypes.h" that lead to definitions in ha_innodb.cc
@ storage/innobase/os/os0file.c
    InnoDB is not compiled with AIO support for MEB anyway, hence this is
    excluded from the compilation.
@ storage/innobase/page/page0zip.c
  page0zip.c:(.text+0x4e9e): undefined reference to `buf_pool_from_block'
  collect2: ld returned 1 exit status

  buf_pool_from_block is defined in buf0buf.ic; most of that file is excluded from the MEB compilation.
@ storage/innobase/ut/ut0dbg.c
  excluding #include "ha_prototypes.h" since it leads to definitions in ha_innodb.cc
  innobase_basename(file) is defined in ha_innodb.cc. Hence excluding that also.
@ storage/innobase/ut/ut0ut.c
  cal_tm is unused in MEB and was leading to warnings, hence it is disabled for MEB.
2012-06-07 19:14:26 +05:30
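
A minimal sketch of the exclusion pattern used throughout the commit
above: code that does not compile or link in the MEB/mysqlbackup build
is placed behind a preprocessor guard. UNIV_HOTBACKUP is the guard
InnoDB uses for hot-backup builds; the helper below is a hypothetical
stand-in, not actual server code.

  #include <cstdio>

  #ifndef UNIV_HOTBACKUP
  // Uses buffer-pool / handler symbols that are absent from the MEB build,
  // so the whole definition is compiled out there.
  static void buf_pool_dependent_helper() { std::puts("full server build only"); }
  #endif  // !UNIV_HOTBACKUP

  int main() {
  #ifndef UNIV_HOTBACKUP
    buf_pool_dependent_helper();
  #else
    std::puts("MEB build: helper excluded");
  #endif
    return 0;
  }
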
Tor Didriksen
71b101fcb3 merge 5.1 => 5.5 2012-06-05 16:12:22 +02:00
Tor Didriksen
2b085e1fba Bug#14051002 VALGRIND: CONDITIONAL JUMP OR MOVE IN RR_CMP / MY_QSORT
Patch for 5.1 and 5.5: fix typo in byte comparison in rr_cmp()
2012-06-05 15:53:39 +02:00
Tor Didriksen
6e2ce8739c Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Followup patch: wrong result unless archive storage engine is available.
2012-06-05 12:27:20 +02:00
Annamalai Gurusami
f2d7e1ff9e Null merge from mysql-5.1 to mysql-5.5. 2012-06-01 14:14:10 +05:30
Annamalai Gurusami
a28a2ca798 Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a
user-requested interruption, no spurious error message must be logged
in the server log.  This patch removes the error message from
the log.

approved by joh and tatjana
2012-06-01 14:12:57 +05:30
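
A minimal sketch of the suppression described above (hypothetical
names, not the server code): a read error that results from the user
killing the query is not written to the error log, while genuine read
errors still are.

  #include <cstdio>

  enum ReadStatus { READ_OK = 0, READ_ERROR = -1 };

  struct Session { bool killed = false; };

  void report_read_result(const Session &s, ReadStatus st) {
    if (st == READ_ERROR && s.killed)
      return;  // user-requested interruption: log nothing
    if (st == READ_ERROR)
      std::fprintf(stderr, "[ERROR] Got error %d when reading table\n", st);
  }

  int main() {
    Session killed;
    killed.killed = true;
    report_read_result(killed, READ_ERROR);  // silent
    Session normal;
    report_read_result(normal, READ_ERROR);  // logs the error
  }
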
Jon Olav Hauglid
1c0388a5be Bug#13982017: ALTER TABLE RENAME ENDS UP WITH ERROR 1050 (42S01)
Fixed by backport of:
    ------------------------------------------------------------
    revno: 3402.50.156
    committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
    branch nick: mysql-trunk-test
    timestamp: Wed 2012-02-08 14:10:23 +0100
    message:
      Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
      
      This assert could be triggered if an InnoDB table was being moved
      to a different database using ALTER TABLE ... RENAME, while this
      database concurrently was being dropped by DROP DATABASE.
      
      The reason for the problem was that no metadata lock was taken
      on the target database by ALTER TABLE ... RENAME.
      DROP DATABASE was therefore not blocked and could remove
      the database while ALTER TABLE ... RENAME was executing. This
      could cause the assert in InnoDB to be triggered.
      
      This patch fixes the problem by taking an IX metadata lock on
      the target database before ALTER TABLE ... RENAME starts
      moving a table to a different database.
      
      Note that this problem did not occur with RENAME TABLE which
      already takes the correct metadata locks.
      
      Also note that this patch slightly changes the behavior of
      ALTER TABLE ... RENAME. Before, the statement would abort and
      return an error if a lock on the target table name could not
      be taken immediately. With this patch, ALTER TABLE ... RENAME
      will instead block and wait until the lock can be taken 
      (or until we get a lock timeout). This also means that it is
      possible to get ER_LOCK_DEADLOCK errors in this situation
      since we allow ALTER TABLE ... RENAME to wait and not just
      abort immediately.
2012-06-01 09:31:24 +02:00
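
A hedged sketch of the behavioural change described above (hypothetical
types, not the MDL subsystem): the rename path now waits for the lock
on the target name, up to a timeout, instead of aborting as soon as a
try-lock fails.

  #include <chrono>
  #include <cstdio>
  #include <mutex>

  struct NameLock {
    std::timed_mutex lock;  // stand-in for a metadata lock on a db/table name
  };

  // Old behaviour: give up immediately if the lock is not free right now.
  bool rename_try_only(NameLock &target) { return target.lock.try_lock(); }

  // New behaviour: block until the lock is granted or the timeout expires
  // (the timeout corresponds to the lock-wait timeout mentioned above).
  bool rename_wait(NameLock &target, std::chrono::seconds timeout) {
    return target.lock.try_lock_for(timeout);
  }

  int main() {
    NameLock target_db;
    if (rename_wait(target_db, std::chrono::seconds(5))) {
      std::puts("lock acquired, rename may proceed");
      target_db.lock.unlock();
    }
  }
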
Annamalai Gurusami
4acca3431f Merge from mysql-5.1 to mysql-5.5. 2012-06-01 11:49:07 +05:30
unknown
85c4d21632 2012-05-31 22:34:08 +05:30
unknown
2ebd927ec0 2012-05-31 22:28:18 +05:30
Marc Alff
41d39f1830 Local merge 2012-05-31 11:49:03 +02:00
Marc Alff
cf48fcfba3 Bug 14116252 - CANNOT BUILD ARCHIVE ENGINE WHEN WITH_PERFSCHEMA_STORAGE_ENGINE=0
Fixed a build break when compiling without the performance schema;
instrumentation should be protected by HAVE_PSI_INTERFACE.
2012-05-31 11:47:13 +02:00
Rohit Kalhans
e7d969bdaa upmerge from mysql-5.1 => mysql-5.5 2012-05-31 14:37:23 +05:30
unknown
70d01f5182 2012-05-31 14:32:29 +05:30
Nuno Carvalho
367de4de2d BUG#14135691: MISSING INITIALIZATION OF MYSQL_BIN_LOG::SYNC_COUNTER
MYSQL_BIN_LOG::sync_counter is not initialized on MYSQL_BIN_LOG object
creation.

Added missing sync_counter initialization.
2012-05-30 14:55:13 +01:00
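
A minimal sketch of the fix pattern (a hypothetical class, not the
actual MYSQL_BIN_LOG definition): the counter is given an explicit
value when the object is created, so later arithmetic never starts
from an indeterminate value.

  #include <cstdio>

  class BinlogStats {
   public:
    BinlogStats() : sync_counter(0) {}  // the missing initialization, added
    unsigned long sync_counter;         // e.g. writes since the last sync (illustrative)
  };

  int main() {
    BinlogStats stats;
    std::printf("sync_counter starts at %lu\n", stats.sync_counter);
  }
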
Tor Didriksen
f20a4c912d Bug#12845091 .EMPTY FILE IN /DATA/TEST PREVENTS USERS FROM DROPPING TEST DB ON 5.5 AND 5.6
For those who build in-source, we need to change .bzrignore:
s/.empty/dummy.bak/
2012-05-30 12:47:29 +02:00
Rohit Kalhans
71c4f94dd5 upmerge from mysql-5.1 branch -> mysql-5.5 branch 2012-05-30 14:27:27 +05:30
unknown
f8a6521789 2012-05-30 14:00:29 +05:30
Rohit Kalhans
96eb519eb7 Fixing the build failure on Windows debug build. 2012-05-30 13:54:15 +05:30
Manish Kumar
a9ca9403a7 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the
master Dump thread's max_allowed_packet.

The failure occurs because the event length, plus the
max_event_header length, exceeds the Dump thread's max_allowed_packet.
This causes the Dump thread to break replication and throw an error.

This can happen, for example, with row-based replication in an
Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side, the max_allowed_packet for the slave's threads
    (IO/SQL) is increased to 1GB.

    This is done through a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the slave
    threads (IO/SQL) and facilitates sending large packets from the
    master to the slave.

    This allows the slave to receive large packets and apply them
    successfully.

sql/log_event.cc:
  After the fix, the packet-size check here is based on the new
  slave_max_allowed_packet option rather than max_allowed_packet.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the
  slave's threads, i.e. without taking the global max_allowed_packet
  into consideration.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit,
  which makes it independent; it now reads whatever gets logged in
  the binary log.
2012-05-30 10:10:52 +05:30
Annamalai Gurusami
a2bc9b3669 Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a
user-requested interruption, no spurious error message must be logged
in the server log.  This patch removes the error message from
the log.

approved by joh and tatjana
2012-05-30 10:05:04 +05:30
Tor Didriksen
31a93ea75d Bug#12845091 .EMPTY FILE IN /DATA/TEST PREVENTS USERS FROM DROPPING TEST DB ON 5.5 AND 5.6 2012-05-29 10:54:57 +02:00
Rohit Kalhans
484a79415b upmerge from mysql-5.1 branch -> mysql-5.5 branch 2012-05-29 12:21:17 +05:30
Rohit Kalhans
d8b2d4a069 Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
Problem: mysqlbinlog exits without any error code in case of a
file write error. This is because calls to the Log_event::print()
method do not return a value, and thus any errors were being ignored.

Resolution: We resolve this problem by checking for
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution.

client/mysqlbinlog.cc:
  - handled error conditions during event->print() calls
  - added check for error in end_io_cache()
mysys/my_write.c:
  Added debug code to simulate a file write error.
  The error returned is ENOSPC (no space left on the device).
sql/log_event.cc:
  Added debug code to simulate file write error, by reducing the size of io cache.
2012-05-29 12:11:30 +05:30
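
A hedged sketch of the resolution (hypothetical cache and event types,
not the real IO_CACHE or Log_event classes): after every print call
the cache's error flag is inspected, and processing stops with a
non-zero exit code on the first write failure.

  #include <cstdio>
  #include <vector>

  struct Cache { int error = 0; };  // -1 signals a failed write

  struct Event {
    void print(Cache &c) const { (void)c; }  // a real event writes to the cache
                                             // and may set c.error = -1 on failure
  };

  int dump_events(const std::vector<Event> &events, Cache &cache) {
    for (const Event &ev : events) {
      ev.print(cache);
      if (cache.error == -1) {  // the check added by the fix
        std::fprintf(stderr, "write error, aborting\n");
        return 1;
      }
    }
    return 0;
  }

  int main() {
    Cache cache;
    return dump_events(std::vector<Event>(2), cache);
  }
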
Praveenkumar Hulakund
b2c3acc987 Bug#14003080:65104: MAX_USER_CONNECTIONS WITH PROCESSLIST EMPTY
Analysis:
-------------
If the server is started with limits of MAX_CONNECTIONS and
MAX_USER_CONNECTIONS, then any particular user can open at most
MAX_USER_CONNECTIONS connections, and at most MAX_CONNECTIONS clients
in total can be connected to the server.

The server maintains a counter for the total number of connections and
a counter for the connections from each particular user.

Here, MAX_CONNECTIONS connections are created to the server. Among
them are connections from a particular user (say USER1), fewer than
MAX_USER_CONNECTIONS. Then one more connection request arrives from
USER1. Since USER1 has not yet reached MAX_USER_CONNECTIONS, the
server increments the per-user connection counter. Because the server
already has MAX_CONNECTIONS connections, the subsequent check on the
total connection count fails. In this case control is returned WITHOUT
decrementing the per-user counter. So the per-user connection counter
keeps incrementing for each attempt until current connections are
closed, and eventually reaches MAX_USER_CONNECTIONS. From then on,
connections from USER1 always fail with the MAX_USER_CONNECTIONS limit
error, even when the total number of connections to the server is less
than MAX_CONNECTIONS.

Fix:
-------------
This issue occurred because the counters were not handled properly in
the server. Changed the code to handle the per-user connection counters
properly.
2012-05-28 11:14:43 +05:30
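
A minimal sketch of the counter handling (hypothetical names, not the
server code): when the global connection check fails after the
per-user counter has already been incremented, the per-user counter is
rolled back, so failed attempts no longer accumulate.

  #include <cstdio>
  #include <map>
  #include <string>

  struct Server {
    int max_connections = 3;
    int max_user_connections = 2;
    int total_connections = 0;
    std::map<std::string, int> per_user;

    bool try_connect(const std::string &user) {
      if (per_user[user] >= max_user_connections) return false;
      ++per_user[user];                 // optimistic per-user increment
      if (total_connections >= max_connections) {
        --per_user[user];               // the fix: roll back on failure
        return false;
      }
      ++total_connections;
      return true;
    }
  };

  int main() {
    Server s;
    s.total_connections = s.max_connections;  // server is already full
    for (int i = 0; i < 5; ++i) s.try_connect("USER1");
    // Without the rollback, USER1's counter would now be stuck at its limit.
    std::printf("USER1 counter = %d\n", s.per_user["USER1"]);  // prints 0
  }
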
Mayank Prasad
b2888adb70 PB2 Failure Fix: Disabled the test case in the correct manner. 2012-05-25 15:44:27 +05:30
unknown
1c1d19af8c 2012-05-25 13:19:44 +05:30
Mayank Prasad
efe42792b3 Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Details:
 - Archive storage engine file accesses were not instrumented and thus
   were not shown in PS tables.
      
Fix:
 - Added instrumentation code by using PS APIs for I/O.
2012-05-24 23:00:32 +05:30
Inaam Rana
b91be27733 merge from 5.1 2012-05-24 12:44:27 -04:00
Inaam Rana
01748ce128 Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK
rb://1088
approved by: Marko Makela

This bug was introduced in the early stages of the plugin. We were not
checking for an implicit lock on a secondary index record for the
trx_id stamped on the current version of the clustered index record,
in the case where the clustered index record has a previous
delete-marked version.
2012-05-24 12:37:03 -04:00
Sujatha Sivakumar
dcd4fa3fd5 Bug#13833962:DISABLE [NOTE] START BINLOG_DUMP TO SLAVE_SERVER(0) MESSAGES
IN THE ERROR LOG
      
Problem:
Using mysqlbinlog with the --read-from-remote-server option as shown
below prints a message in the error log for each call. This happens in
versions 5.5 and above.
      
mysqlbinlog -uroot -p --read-from-remote-server --host=localhost test
      
Message in error log file is given below:
120312 10:27:57 [Note] Start binlog_dump to slave_server(0), pos(test, 4)
      
The problem is that it can fill up the error log if the command is called
very often.
      
Analysis:
The print function mentioned below is called from the
"mysql_binlog_send" function, which causes the "Start binlog_dump..."
string to be printed in the error log file.

sql_print_information("Start binlog_dump to master_thread_id(%lu) 
slave_server(%d)..."
      
Fix:
A condition has been added so that 'sql_print_information'
is invoked only when the "log_warnings" variable is set to a value
greater than 1; otherwise 'sql_print_information' is not called.
2012-05-24 16:25:07 +05:30
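
A minimal sketch of the guard described above (a hypothetical logger
variable stands in for the server's log_warnings system variable and
sql_print_information()): the informational note is emitted only when
the verbosity is greater than 1.

  #include <cstdio>

  static int log_warnings = 1;  // stand-in for the server system variable

  void note_binlog_dump(unsigned long thread_id, int server_id) {
    if (log_warnings > 1)       // the condition added by the fix
      std::printf("[Note] Start binlog_dump to master_thread_id(%lu) "
                  "slave_server(%d)\n", thread_id, server_id);
  }

  int main() {
    note_binlog_dump(42, 0);    // silent with the default setting
    log_warnings = 2;
    note_binlog_dump(42, 0);    // now the note is printed
  }
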
Norvald H. Ryeng
889ce03108 WL#6311 Remove --safe-mode
Print deprecation warning if the --safe-mode command line option is
used.
2012-05-23 12:27:32 +02:00
Marc Alff
be7a6d93cc Improved the test performance_schema.func_file_io,
so that investigating test failures is easier.

Detect cases when @before_count / @after_count is NULL.
2012-05-23 10:21:35 +02:00
Annamalai Gurusami
8aedf7d30f After the fix for Bug #12752572, there is a pb2 failure. Fixing it by
updating the result file.  Because a multi-row insert now reserves the
auto increment values beforehand, if any explicitly specified auto
increment values are present, then some of the reserved values are lost.
2012-05-23 10:12:52 +05:30
Annamalai Gurusami
41c37c77b4 Merge from mysql-5.1 to mysql-5.5 2012-05-21 17:27:21 +05:30
Annamalai Gurusami
e979417c06 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

Modified the fix based on review comment by Svoj.
2012-05-21 17:25:40 +05:30
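
A hedged sketch of the reservation idea (a hypothetical counter, not
the InnoDB autoinc code): all values for a multi-row insert are
reserved in one step while the lock is held, so the rows get
contiguous values even though the lock is released before the end of
the statement.

  #include <cstdio>
  #include <mutex>

  struct AutoInc {
    std::mutex lock;
    unsigned long long next = 1;

    // Reserve `rows` contiguous values and return the first one.
    unsigned long long reserve(unsigned long long rows) {
      std::lock_guard<std::mutex> guard(lock);
      unsigned long long first = next;
      next += rows;
      return first;
    }
  };

  int main() {
    AutoInc ai;
    // e.g. INSERT INTO t VALUES (...),(...),(...) reserves three values up front.
    unsigned long long first = ai.reserve(3);
    std::printf("rows get %llu..%llu\n", first, first + 2);
  }
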
Tor Didriksen
a73d2ae02c Bug#13986705 CRASH IN GET_INTERVAL_VALUE() WITH DATE CALCULATION WITH UTF32 INTERVALS
This is a followup to the fix for Bug#12340997.
get_interval_value() was trying to parse the input string,
looking for a leading '-' while skipping whitespace.
The macro my_isspace() does not work for utf32 character set,
since my_charset_utf32_general_ci.ctype == NULL.

Solution: convert input to ASCII before parsing,
and use the character set of the returned ASCII string.
2012-05-21 10:47:12 +02:00
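
A hedged sketch of the conversion approach (illustrative only, not the
server's charset API): the UTF-32 input is first narrowed to ASCII,
and the sign/whitespace parsing is done on the ASCII copy, where
byte-oriented classification is well defined.

  #include <cctype>
  #include <cstdio>
  #include <string>

  // Keep only code points below 128; anything else becomes '?'.
  std::string to_ascii(const std::u32string &in) {
    std::string out;
    for (char32_t c : in) out.push_back(c < 128 ? static_cast<char>(c) : '?');
    return out;
  }

  // Report whether a leading '-' follows any whitespace in the input.
  bool has_leading_minus(const std::u32string &input) {
    std::string ascii = to_ascii(input);
    size_t i = 0;
    while (i < ascii.size() && std::isspace(static_cast<unsigned char>(ascii[i]))) ++i;
    return i < ascii.size() && ascii[i] == '-';
  }

  int main() {
    std::printf("negative=%d\n", has_leading_minus(U"   -10:20"));  // prints 1
  }
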
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
The failure occurs because the event length is more than
max_allowed_packet + the max_event_header length. Since the event
length exceeds this size, the master Dump thread is unable to send
the packet on to the slave.

This can happen, for example, with row-based replication in an
Update_rows event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option that is used to regulate the
max_allowed_packet of the slave threads (IO/SQL).
This allows the slave to receive large packets and apply them
successfully.

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the
  slave's threads, i.e. without taking the global max_allowed_packet
  into consideration.
2012-05-21 12:57:39 +05:30
Rohit Kalhans
781137c0dd BUG#14005409 - 64624
Problem: After the fix for Bug#12589870, a new field that
stores the length of the db name was added in the buffer that
stores the query to be executed. Unlike for the plain user
session, the replication execution did not allocate the
necessary chunk in the Query-event constructor. This caused an
invalid read while accessing this field.
      
Solution: We fix this problem by allocating the necessary chunk
in the buffer created in Query_log_event::Query_log_event()
and storing the length of the database name.

sql/log_event.cc:
  Added a new field in the buffer created in the
  Query_log_event constructor and stored the length
  of the database name.
2012-05-18 14:44:40 +05:30
Gopal Shankar
21faded51e Bug#12636001 : deadlock from thd_security_context
PROBLEM:
Threads end-up in deadlock due to locks acquired as described
below,

con1: Run a query on a table.
  It is important that this SELECT must back off while
  trying to open t1 and enter wait_for_condition().
  The SELECT then is blocked trying to lock mysys_var->mutex
  which is held by con3. The very significant fact here is
  that mysys_var->current_mutex will still point to LOCK_open,
  even if LOCK_open is no longer held by con1 at this point.

con2: Try dropping the table used in con1, or query some table.
  It will hold LOCK_open and be blocked trying to lock
  kernel_mutex held by con4.

con3: Try killing the query run by con1.
  It will hold THD::LOCK_thd_data belonging to con1 while
  trying to lock mysys_var->current_mutex belonging to con1.
  But current_mutex will point to LOCK_open which is held
  by con2.

con4: Get InnoDB engine status.
  It will hold kernel_mutex, trying to lock THD::LOCK_thd_data
  belonging to con1 which is held by con3.

So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
         kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query, thd->query_length
2) VIO
3) thd->mysys_var (used by KILL statement and shutdown)
4) THD during thread delete.

Among the above responsibilities, 1), 2) and (3,4) appear to be three
independent groups of responsibility. If a different lock owns
responsibility for (3,4), the above-mentioned deadlock cycle can be
avoided. This fix introduces LOCK_thd_kill to handle
responsibility (3,4), which eliminates the deadlock issue.

Note: The problem is not found in 5.5. The introduction of the MDL
subsystem moved metadata locking responsibility from the TDC/TC to the
MDL subsystem. Due to this, the responsibility of LOCK_open is reduced.
As the use of LOCK_open has been removed in open_table() and
mysql_rm_table(), the above-mentioned CYCLE does not form.
Revision IDs for the changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
2012-05-17 18:07:59 +05:30
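
A hedged sketch of the lock split described above (a hypothetical
connection struct, not the THD class): the state touched by KILL gets
its own mutex, so the kill path no longer needs the lock that
participates in the LOCK_thd_data -> LOCK_open -> kernel_mutex cycle.

  #include <mutex>
  #include <string>

  struct Connection {
    std::mutex lock_thd_data;  // still protects query text, VIO, lifetime
    std::mutex lock_thd_kill;  // new: protects kill-related state only
    std::string query;
    bool kill_requested = false;
  };

  void kill_connection(Connection &c) {
    std::lock_guard<std::mutex> guard(c.lock_thd_kill);  // no lock_thd_data here
    c.kill_requested = true;
  }

  void set_query(Connection &c, const std::string &q) {
    std::lock_guard<std::mutex> guard(c.lock_thd_data);
    c.query = q;
  }

  int main() {
    Connection c;
    set_query(c, "SELECT 1");
    kill_connection(c);
  }
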
unknown
3e1758d36a 2012-05-17 11:41:46 +01:00
unknown
932376f97a 2012-05-17 10:15:54 +05:30
Annamalai Gurusami
ce9e6b5a9a Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check for a discarded
tablespace to avoid the crash.

rb://1041 approved by Marko Makela
2012-05-16 16:36:49 +05:30
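
A minimal sketch of the added check (a hypothetical table struct, not
the InnoDB dictionary code): index creation returns an error instead
of touching a tablespace that has been discarded.

  #include <cstdio>

  struct Table { bool tablespace_discarded = false; };

  enum Result { RESULT_OK = 0, RESULT_TABLESPACE_DISCARDED = 1 };

  Result add_unique_index(const Table &t) {
    if (t.tablespace_discarded)            // the check introduced by the fix
      return RESULT_TABLESPACE_DISCARDED;  // report an error, do not crash
    // ... build the index against the existing tablespace ...
    return RESULT_OK;
  }

  int main() {
    Table t;
    t.tablespace_discarded = true;  // after ALTER TABLE ... DISCARD TABLESPACE
    std::printf("result=%d\n", add_unique_index(t));
  }
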
Venkata Sidagam
6b05e434bf Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with FT index.
2) Enable concurrent inserts.
3) In multiple threads do below operations repeatedly
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. non-boolean/boolean mode

After some time we could observe two different assert core dumps

Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key root block.
The 1st select thread's block->status is set to BLOCK_ERROR
because my_pread() in read_block() returns '0'.
Truncating the table made the index file size 1024, and pread
was asked to get a block of count bytes (1024 bytes)
from an offset of 1024, which it cannot read since that is
"end of file"; it returns '0', setting
"my_errno= HA_ERR_FILE_TOO_SHORT", and the key_file_length and
key_root[0] are the same, i.e. 1024. Since the block status is
BLOCK_ERROR, the 1st select thread enters free_block(), waits
on the conditional mutex after setting the status to
BLOCK_REASSIGNED, and goes to wait_on_readers(). The other select
thread also works on the same block, sees the status as
BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED
and asserts the server.

2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock',
which allows other threads to continue, gets the pread()
return value of '0' (see the explanation above), then
tries to re-acquire 'keycache->cache_lock' and waits
there for the lock.
The insert thread requests the block; the block is assigned
from the hash list, its page_status is set to
'PAGE_WAIT_TO_BE_READ', and it goes into read_block() and waits
in the queue since some other threads are performing
reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread()
value of '0', sets the block status to BLOCK_ERROR, goes to
free_block() and then to wait_for_readers().
Now the insert thread awakes and continues, checks that
block->status is not BLOCK_READ, and asserts.

Fix:
---
In the full-text code, multiple readers of the index file are not guarded.
Hence the code below was added in _ft2_search() and walk_and_match().

To lock the key_root, the following code is used in _ft2_search():
 if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock 
 if (info->s->concurrent_insert)
   mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to _ft2_search_internal().
  _ft2_search() now takes the lock, calls _ft2_search_internal() and
  releases the lock in the case of concurrent inserts.
storage/myisam/ft_nlq_search.c:
  Added read locks code in walk_and_match()
2012-05-16 16:14:27 +05:30
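
A hedged sketch of the wrapper pattern described above (hypothetical
types; the real code uses mysql_rwlock_* on share->key_root_lock[0]):
the public entry point takes the read lock once and the recursive
worker never touches it, so recursion cannot re-acquire or release the
lock at the wrong depth.

  #include <cstdio>
  #include <shared_mutex>

  struct Share {
    std::shared_mutex key_root_lock;
    bool concurrent_insert = true;
  };

  static int ft_search_internal(Share &share, int depth) {
    if (depth == 0) return 0;
    return 1 + ft_search_internal(share, depth - 1);  // recursion, no locking
  }

  int ft_search(Share &share, int depth) {
    if (share.concurrent_insert) share.key_root_lock.lock_shared();
    int hits = ft_search_internal(share, depth);
    if (share.concurrent_insert) share.key_root_lock.unlock_shared();
    return hits;
  }

  int main() {
    Share share;
    std::printf("hits=%d\n", ft_search(share, 3));
  }
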
Annamalai Gurusami
bcb5d73767 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

rb://1074 approved by Alexander Nozdrin
2012-05-16 11:17:48 +05:30
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00