Commit graph

67066 commits

Author SHA1 Message Date
Rohit Kalhans
d8b2d4a069 Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
Problem: mysqlbinlog exits without any error code in case of a
file write error. This is because the calls to the
Log_event::print() method do not return a value, so any
errors were being silently ignored.

Resolution: We resolve this problem by checking whether
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution if so.
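A minimal sketch of that check pattern follows; the cache name `head`, the error() helper and the ERROR_STOP status are assumptions for illustration, not quoted from client/mysqlbinlog.cc:

  ev->print(result_file, print_event_info);   /* prints into an IO_CACHE */
  if (head->error == -1)                      /* head: the output IO_CACHE (assumed name) */
  {
    error("Could not write the event to the output file.");
    return ERROR_STOP;                        /* stop processing further events */
  }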

client/mysqlbinlog.cc:
  - handled error conditions during event->print() calls
  - added check for error in end_io_cache()
mysys/my_write.c:
  Added debug code to simulate a file write error; the error
  returned will be ENOSPC (no space left on the disk).
sql/log_event.cc:
  Added debug code to simulate a file write error by reducing the size of the IO cache.
2012-05-29 12:11:30 +05:30
unknown
f45784c850 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields() implementations do not call Item_func::fix_fields() and so do not collect the with_subselect information.
2012-05-25 10:29:53 +03:00
Inaam Rana
01748ce128 Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK
rb://1088
approved by: Marko Makela

This bug was introduced in the early stages of the plugin. We were
not checking for an implicit lock on the secondary index record for
the trx_id that is stamped on the current version of the clustered
index record, in the case where the clustered index record has a
previous delete-marked version.
2012-05-24 12:37:03 -04:00
unknown
d56f5dae1e Fix bug lp:1001506
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.

Analysis:

When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.

The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains the assignments to both user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately
copy_field.

However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.

Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
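A hedged sketch of the cleanup change, using the member names from the commit text (the exact body of TMP_TABLE_PARAM::cleanup() may differ):

  void TMP_TABLE_PARAM::cleanup(void)
  {
    if (copy_field)                          /* delete the copy fields once */
    {
      delete [] copy_field;
      save_copy_field= copy_field= NULL;
      /* Also clear the end pointers so a later re-evaluation cannot walk
         a stale [copy_field, copy_field_end) range and dereference NULL. */
      save_copy_field_end= copy_field_end= NULL;
    }
  }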
2012-05-23 18:18:08 +03:00
unknown
950abd5268 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields() implementations do not call Item_func::fix_fields() and so do not collect the with_subselect information.
2012-05-22 08:48:10 +03:00
Annamalai Gurusami
e979417c06 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

Modified the fix based on review comment by Svoj.
2012-05-21 17:25:40 +05:30
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
The reason this failure occurs is that the event length exceeds
the total of max_allowed_packet + the maximum event header
length. Since the event length exceeds this size, the master's dump
thread is unable to send the packet on to the slave.

This can happen, e.g., with row-based replication in an Update_rows event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option that regulates the
max_allowed_packet of the slave threads (IO/SQL).
This allows the large packets to be received and applied by the
slave successfully.
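A hedged sketch of the idea in sql/slave.cc; the option name slave_max_allowed_packet and the exact fields set are assumptions, not quoted from the patch:

  /* During IO/SQL slave thread initialization (sketch only): */
  thd->variables.max_allowed_packet= slave_max_allowed_packet;  /* assumed option, default 1GB */
  thd->net.max_packet_size= slave_max_allowed_packet;           /* let oversized events through */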

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the slave's
  threads, i.e. not taking the global max_allowed_packet into consideration.
2012-05-21 12:57:39 +05:30
Sergei Golubchik
280fcf0808 5.1 merge 2012-05-18 14:23:05 +02:00
Sergei Golubchik
57f824b099 post-merge fixes
sql/slave.cc:
  add mutex protection, like in sql_parse.cc
2012-05-18 12:42:06 +02:00
Rohit Kalhans
781137c0dd BUG#14005409 - 64624
Problem: After the fix for Bug#12589870, a new field that
stores the length of the db name was added to the buffer that
stores the query to be executed. Unlike for a plain user
session, the replication execution did not allocate the
necessary chunk in the Query_log_event constructor. This caused
an invalid read while accessing this field.
      
Solution: We fix this problem by allocating the necessary chunk
in the buffer created in Query_log_event::Query_log_event()
and storing the length of the database name.

sql/log_event.cc:
  Added a new field in the buffer created in the
  Query_log_event constructor and stored the length
  of the database name.
2012-05-18 14:44:40 +05:30
Gopal Shankar
21faded51e Bug#12636001 : deadlock from thd_security_context
PROBLEM:
Threads end-up in deadlock due to locks acquired as described
below,

con1: Runs a query on a table.
  It is important that this SELECT backs off while trying to
  open t1 and enters wait_for_condition(). The SELECT is then
  blocked trying to lock mysys_var->mutex, which is held by con3.
  The very significant fact here is that mysys_var->current_mutex
  still points to LOCK_open, even though LOCK_open is no longer
  held by con1 at this point.

con2: Tries to drop the table used in con1, or queries some table.
  It holds LOCK_open and is blocked trying to lock
  kernel_mutex, which is held by con4.

con3: Tries to kill the query run by con1.
  It holds THD::LOCK_thd_data belonging to con1 while
  trying to lock mysys_var->current_mutex belonging to con1.
  But current_mutex points to LOCK_open, which is held
  by con2.

con4: Gets the InnoDB engine status.
  It holds kernel_mutex while trying to lock THD::LOCK_thd_data
  belonging to con1, which is held by con3.

So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
         kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query, thd->query_length
2) VIO
3) thd->mysys_var (used by KILL statement and shutdown)
4) THD during thread delete.

Among the above responsibilities, 1), 2) and (3,4) appear to be three
independent groups of responsibility. If a different lock owns
responsibility (3,4), the above-mentioned deadlock cycle can be
avoided. This fix introduces LOCK_thd_kill to handle
responsibility (3,4), which eliminates the deadlock.

Note: The problem is not found in 5.5. The introduction of the MDL
subsystem moved the metadata locking responsibility from the TDC/TC to
the MDL subsystem. Due to this, the responsibility of LOCK_open is reduced.
As the use of LOCK_open was removed in open_table() and
mysql_rm_table(), the above-mentioned CYCLE does not form.
Revision ID for changes,
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
2012-05-17 18:07:59 +05:30
unknown
3e1758d36a 2012-05-17 11:41:46 +01:00
Sergei Golubchik
0a8c9b98f6 merge with mysql-5.1.63 2012-05-17 12:12:33 +02:00
unknown
5a47413934 fix of LP bug#998321
The problem is that we can't check the null_value field of a non-basic constant without executing the item.
2012-05-17 10:13:25 +03:00
unknown
932376f97a 2012-05-17 10:15:54 +05:30
Annamalai Gurusami
ce9e6b5a9a Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check for a
discarded tablespace to avoid the crash.

rb://1041 approved by Marko Makela
2012-05-16 16:36:49 +05:30
Venkata Sidagam
6b05e434bf Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with FT index.
2) Enable concurrent inserts.
3) In multiple threads do below operations repeatedly
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. non-boolean/boolean mode

After some time we could observe two different assert core dumps

Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key
root block.
In the 1st select thread, block->status is set to BLOCK_ERROR
because my_pread() in read_block() returns '0'.
TRUNCATE TABLE made the index file size 1024, and pread
was asked to read a block of count bytes (1024 bytes)
from offset 1024, which it cannot read since that is
end of file; it returns '0', setting
my_errno= HA_ERR_FILE_TOO_SHORT, while key_file_length and
key_root[0] are the same, i.e. 1024. Since the block status is
BLOCK_ERROR, the 1st select thread enters free_block(), sets the
status to BLOCK_REASSIGNED and waits on the condition mutex in
wait_on_readers(). The other select thread works on the same
block, sees the status BLOCK_ERROR, enters free_block(), checks
for BLOCK_REASSIGNED and asserts the server.

2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock', which allows
other threads to continue; it gets a pread() return value of '0'
(see the explanation above), then tries to re-acquire the
'keycache->cache_lock' and waits there for the lock.
The insert thread requests the block; the block is assigned
from the hash list, its page_status is set to
'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting
in the queue since some other threads are performing
reads on the same block.
The select thread which was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread()
value of '0', sets the block status to BLOCK_ERROR, enters
free_block() and goes to wait_for_readers().
Now the insert thread wakes up, continues, finds
block->status is not BLOCK_READ, and asserts.

Fix:
---
In the full-text code, multiple readers of the index file are not
guarded. Hence the code below was added in _ft2_search() and
walk_and_match().

To lock the key_root, the following code is used in _ft2_search():
 if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:
 if (info->s->concurrent_insert)
   mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to _ft2_search_internal().
  _ft2_search() now takes the lock, calls _ft2_search_internal() and
  releases the lock in case of concurrent inserts (see the sketch after
  these file notes).
storage/myisam/ft_nlq_search.c:
  Added read-lock code in walk_and_match().
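As referenced in the ft_boolean_search.c note above, a hedged sketch of that wrapper shape (the parameter list of _ft2_search() is an assumption; the lock calls are the ones quoted in the fix):

  static int _ft2_search(FTB *ftb, FTB_WORD *ftbw, my_bool init_search)
  {
    int r;
    MYISAM_SHARE *share= ftb->info->s;
    if (ftb->info->s->concurrent_insert)
      mysql_rwlock_rdlock(&share->key_root_lock[0]);
    r= _ft2_search_internal(ftb, ftbw, init_search);   /* the original recursive body */
    if (ftb->info->s->concurrent_insert)
      mysql_rwlock_unlock(&share->key_root_lock[0]);
    return r;
  }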
2012-05-16 16:14:27 +05:30
Annamalai Gurusami
bcb5d73767 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

rb://1074 approved by Alexander Nozdrin
2012-05-16 11:17:48 +05:30
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved the random number filtering verification in the
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Marko Mäkelä
5917c58a5d Bug#14025221 FOREIGN KEY REFERENCES FREED MEMORY AFTER DROP INDEX
dict_table_replace_index_in_foreign_list(): Replace the dropped index
also in the foreign key constraints of child tables that are
referencing this table.

row_ins_check_foreign_constraint(): If the underlying index is
missing, refuse the operation.

rb:1051 approved by Jimmy Yang
2012-05-15 15:04:39 +03:00
Georgi Kodinov
fcb033053d Bug #11761822: yassl rejects valid certificate which openssl accepts
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this 
particular certificate.
Added a test case with the certificate itself.
2012-05-15 13:12:22 +03:00
Bjorn Munch
e72278fd42 Added some extra optional paths to test suites 2012-05-15 09:14:44 +02:00
Sergey Petrunya
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
Sergey Petrunya
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
unknown
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
2012-05-11 09:35:46 +03:00
unknown
6fc863c749 Fixed typo 2012-05-10 09:00:21 +03:00
Annamalai Gurusami
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we do a locking read, while for the other index
we were doing a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then performs any additional
operations required, setting ha_innobase->prebuilt->select_lock_type
correctly.
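A hedged sketch of what such a clone() member function can look like; the handler::clone() signature varies between versions and the names below are assumptions, not the literal patch:

  handler *ha_innobase::clone(const char *name, MEM_ROOT *mem_root)
  {
    /* Let the base class create and open the cloned handler. */
    ha_innobase *new_handler=
      static_cast<ha_innobase*>(handler::clone(name, mem_root));
    if (new_handler)
    {
      /* Copy the lock type so both index scans of a ROR-merged read
         use the same (locking vs. consistent) read mode. */
      new_handler->prebuilt->select_lock_type= prebuilt->select_lock_type;
    }
    return new_handler;
  }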

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
Vladislav Vaintroub
54534a6984 MDEV-262 : log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The reason for the failures is sleep() calls with a short duration (10ms), which is less than the default system timer resolution of the various WaitForXXXObject functions (15.6 ms) and thus cannot work reliably.
The fix is to make the sleeps a tiny bit longer (20ms instead of 10ms) in the test.
2012-05-08 12:38:22 +02:00
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Vladislav Vaintroub
597e98bc83 MDEV-261 : mysqltest crashes when assigning a variable to the result of a select, like
let x = `SELECT <something>`

The fix is to detect the "no active connection" condition, report an error and die.
Note that the check for no active connection was already in place for ordinary commands,
and was missing only for the assign-variable command.
2012-05-08 00:26:41 +02:00
Venkata Sidagam
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include the dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the
general_log and slow_log tables, because the data dump of those tables
takes LOCKS, which is not allowed for log tables.

Fix:
----
The approach is that, instead of taking both the metadata
and the data dump for those tables, we take only the metadata
dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
statements were removed for those tables.
  
After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because, in the customer
scenario, the mysql database is dropped with logging
enabled, so those errors are expected. Once we apply the
dump which was taken before the "drop database mysql", the errors
will not be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the
  general_log and slow_log tables, because when logging is enabled those statements
  will fail; and replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those
  tables, to make sure the CREATE statement for those tables doesn't fail since
  we removed the DROP statements for them.
  In dump_all_tables_in_db(), added code to call get_table_structure() for the
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
unknown
8065143637 Fix for LP bug#993726
The optimization of aggregate functions detected a constant under max() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
unknown
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
2012-05-07 11:02:58 +03:00
unknown
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in case of
      BLOB fields during an attempt to copy a BLOB field key image into the record
      buffer (the record buffer is too small to hold the BLOB key part image).
      note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key values;
      however this case is filtered out by the covering_keys() check
      in get_best_group_min_max(), as BLOBs always require a key length
      modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field which is not used
for index access. Copying the blob field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
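A hedged sketch of the before/after call described in the original comment; the buffer and KEY variable names are illustrative, not copied from the patch:

  /* Before: length 0 made key_copy() use the full key length, including the
     BLOB part that index access never uses, overrunning the record buffer. */
  key_copy(tmp_record, record, key_info, 0);

  /* After: copy only the bytes the index access actually uses. */
  key_copy(tmp_record, record, key_info, max_used_key_length);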
2012-05-03 14:49:52 +03:00
Sergei Golubchik
167ad4c4a5 update the result file 2012-05-02 22:00:31 +02:00
Oleksandr Byelkin
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected with AND.
2012-05-02 18:11:02 +02:00
Sergei Golubchik
b192f7a2e7 5.1 merge 2012-05-02 17:06:30 +02:00
Vladislav Vaintroub
26fcc55017 LP993103: Wrong result with LAST_DAY('0000-00-00 00:00:00') IS NULL in WHERE condition
The fix is to set the maybe_null flag for Item_func_last_day.
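A hedged sketch of the one-line nature of the fix; the base class and the method the flag is set in are assumptions:

  void Item_func_last_day::fix_length_and_dec()
  {
    Item_date::fix_length_and_dec();   /* assumed base class / method */
    /* LAST_DAY() can return NULL for invalid dates, so the function must be
       marked nullable or the IS NULL predicate is optimized away. */
    maybe_null= 1;
  }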
2012-05-02 16:53:02 +02:00
Yasufumi Kinoshita
08e7444e54 Bug#11758510 (#50723): INNODB CHECK TABLE FATAL SEMAPHORE WAIT TIMEOUT POSSIBLY TOO SHORT FOR BI
Fixed so that the timeout is not checked during CHECK TABLE.
2012-04-27 19:38:13 +09:00
irana
1a2cf649dc InnoDB: Adjust error message when a dropped tablespace is accessed. 2012-04-26 08:17:14 -07:00
Vladislav Vaintroub
feb4776ecf MDEV233 - Support Wix3.6 for MSI 2012-04-25 15:30:19 +02:00
Sergei Golubchik
a37768e11d lp:986120 Problem installing mariadb 5 on solaris 10
remove a redundant line in Makefile.am
2012-04-24 17:29:03 +02:00
Andrei Elkin
f189ccf530 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
Andrei Elkin
bd0c38064f BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
Nuno Carvalho
cdaae1692b BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleaned up.

The problem is solved by moving 'LOG_INFO linfo;' to function scope.
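A hedged, simplified sketch of the shape of the problem and the fix (placeholder comments stand for the elided body; not the literal function):

  /* Before (sketch): linfo lives in an inner block and dies too early. */
  bool mysql_show_binlog_events(THD *thd)
  {
    /* ... */
    {
      LOG_INFO linfo;                 /* inner scope */
      thd->current_linfo= &linfo;     /* the pointer outlives the object */
    }                                 /* linfo destroyed here */
    /* thd->current_linfo now dangles until it is cleaned up */
  }

  /* After (sketch): declare linfo at function scope so it outlives every use. */
  bool mysql_show_binlog_events(THD *thd)
  {
    LOG_INFO linfo;
    /* ... */
    thd->current_linfo= &linfo;
    /* ... */
  }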
2012-04-20 22:25:59 +01:00
Andrei Elkin
49e484c8cd BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
INSERT into an autoincrement field which is not the first part of the composed primary key
is unsafe by autoincrement logging design. The case is specific to the MyISAM engine
because InnoDB does not allow such a table definition.

However, no warning was issued and no row-format logging in MIXED mode was done;
that is fixed.

Int-, Rand- and User-var log events were not filtered along with their parent
query, which made it possible for them to corrupt the execution context of the
following query.

Fixed by deferring their execution until that of the parent query.

******
Bug#11754117 

Post review fixes.

mysql-test/suite/rpl/r/rpl_auto_increment_bug45679.result:
  a new result file is added.
mysql-test/suite/rpl/r/rpl_filter_tables_not_exist.result:
  results updated.
mysql-test/suite/rpl/t/rpl_auto_increment_bug45679.test:
  regression test for BUG#11754117-45670 is added.
mysql-test/suite/rpl/t/rpl_filter_tables_not_exist.test:
  regression test for filtering issue of BUG#11754117 - 45670 is added.
sql/log_event.cc:
  Logic is added for deferring and executing events associated
  with the Query event.
sql/log_event.h:
  Interface to deferred events batch execution is added.
sql/rpl_rli.cc:
  initialization for new RLI members is added.
sql/rpl_rli.h:
  New members are added to RLI to facilitate deferred event gathering
  and execution control;
  two general-purpose RLI cleanup methods are constructed.
sql/rpl_utility.cc:
  Deferred_log_events methods are defined.
sql/rpl_utility.h:
  A new class Deferred_log_events is defined to implement
  IRU events gathering, execution and cleanup.
sql/slave.cc:
  Necessary changes to initialize `rli->deferred_events' and prevent
  deferred event deletion in the main read-exec branch.
sql/sql_base.cc:
  A new safe-check function for multi-part pk with auto-increment is defined
  and deployed in lock_tables().
sql/sql_class.cc:
  Initialization for a new member and replication cleanups are added
  to THD class.
sql/sql_class.h:
  THD class receives a new member to hold a specific execution
  context for slave applier.
sql/sql_parse.cc:
  Execution of the deferred events is started prior to their parent query.
2012-04-20 19:41:20 +03:00
Mayank Prasad
bf4161adae BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Reason:
 This is a regression that happened because of changes made during the code
 refactoring in 5.1 from 5.0.

Issue: 
 While doing "Show tables" lex->verbose was being checked to avoid opening
 FRM files to get table type. In case of "Show full table", lex->verbose
 is true to indicate table type is required. In 5.0, this check was
 present which got missing in >=5.5.

Fix:
 Added the required check to avoid opening FRM files unnecessarily in the case
 of "SHOW TABLES".
2012-04-19 14:57:34 +05:30
Sergei Golubchik
ed4ead3a98 lp:982664 there are a few broken clients that lie about their capabilities
(for example, one of them sets client capabilities by copying server capabilities)

We cannot fix them - let's tolerate them
2012-04-18 20:04:50 +02:00
Tor Didriksen
892106db92 new header file must be listed in Makefile.am 2012-04-18 14:13:13 +02:00
Tor Didriksen
11b2cf4f03 Backport 5.5=>5.1 Patch for Bug#13805127:
Stored program cache produces wrong result in same THD.
2012-04-18 13:14:05 +02:00