Commit graph

67229 commits

Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks when the event length exceeds the size of the
master Dump thread's max_allowed_packet.

This failure occurs because the event length, after the
max_event_header length is added, exceeds the Dump thread's
max_allowed_packet. This causes the Dump thread to break replication
and throw an error.
                      
This can happen, e.g., with row-based replication in an Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the
    upper limit, i.e. the Dump thread reads whatever gets logged in
    the binary log.

2.) On the slave side, the max_allowed_packet for the slave's
    threads (IO/SQL) is increased to 1GB.

    This is done through a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the slave
    threads (IO/SQL) and facilitates sending large packets from the
    master to the slave.

    This allows the slave to receive the large packets and apply
    them successfully.
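
    As a rough illustration of the two caps involved (a minimal
    sketch with hypothetical names and values, not the server's
    actual code), the slave threads apply their own, larger packet
    limit instead of the global client-oriented one:

      /* Sketch only: hypothetical constants, not the server sources. */
      #include <stdio.h>

      #define MAX_ALLOWED_PACKET       (16UL * 1024 * 1024)   /* client cap */
      #define SLAVE_MAX_ALLOWED_PACKET (1024UL * 1024 * 1024) /* 1GB slave cap */

      /* Returns 1 if a replication thread may accept an event. */
      static int slave_can_accept(unsigned long event_len)
      {
          return event_len <= SLAVE_MAX_ALLOWED_PACKET;
      }

      int main(void)
      {
          unsigned long event_len = 20UL * 1024 * 1024; /* 20MB row event */

          /* Rejected under the ordinary client cap ... */
          printf("client cap ok: %d\n", event_len <= MAX_ALLOWED_PACKET);
          /* ... but accepted under the slave threads' larger cap. */
          printf("slave cap ok:  %d\n", slave_can_accept(event_len));
          return 0;
      }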

sql/log_event.cc:
  After the fix, max_allowed_packet is now evaluated against the new
  option slave_max_allowed_packet.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the
  slave's threads (IO/SQL), i.e. without taking the global
  max_allowed_packet into consideration.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit
  which makes it independent and it now reads whatever gets 
  logged in the binary log.
2012-06-12 12:59:13 +05:30
Igor Babaev
01cd123592 Merge. 2012-06-12 00:09:20 -07:00
Igor Babaev
7b32d88c05 Fixed LP bug #1008293.
One of the reported problems manifested itself in the scenario where
one thread tried to get statistics on a key cache while a second
thread had not yet finished initializing the key cache structure.
The problem was resolved by forcing serialization of such operations
on key caches.

To serialize calls that perform certain operations on a key cache,
a new mutex associated with the key cache is now used. It is stored
in the op_lock field of the KEY_CACHE structure and is locked while
the operation is performed. Some of the serialized key cache
operations call other key cache operations. To avoid recursive
locking of op_lock, new functions performing key cache
initialization, destruction and re-partitioning were introduced,
each with an additional parameter saying whether the operations on
op_lock are to be performed or omitted. The old functions for key
cache initialization, destruction and re-partitioning now simply
call the corresponding new functions with this parameter set to
true, requesting the use of op_lock, while all other calls of the
new functions pass false.
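
A minimal sketch of that wrapper pattern (generic pthreads code, not
the actual key cache sources):

  /* Public entry point takes op_lock; internal callers pass
   * use_op_lock == 0 because op_lock is already held. */
  #include <pthread.h>
  #include <stdio.h>

  typedef struct {
      pthread_mutex_t op_lock;   /* serializes key cache operations */
      int initialized;
  } KEY_CACHE_SKETCH;

  static void resize_internal(KEY_CACHE_SKETCH *kc, int use_op_lock)
  {
      if (use_op_lock)
          pthread_mutex_lock(&kc->op_lock);
      /* ... do the work; nested *_internal() calls pass 0 ... */
      printf("resizing (initialized=%d)\n", kc->initialized);
      if (use_op_lock)
          pthread_mutex_unlock(&kc->op_lock);
  }

  /* The old public function keeps its signature and always locks. */
  static void resize_key_cache(KEY_CACHE_SKETCH *kc)
  {
      resize_internal(kc, 1);
  }

  int main(void)
  {
      static KEY_CACHE_SKETCH kc = { PTHREAD_MUTEX_INITIALIZER, 1 };
      resize_key_cache(&kc);
      return 0;
  }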

Another problem reported in the bug entry concerned the operation of
assigning an index to a key cache. This operation can be called while
the key cache structures are not yet initialized. In this case any
call of flush_key_blocks() should return without taking any action.

No test case is provided with this patch.
2012-06-11 22:12:47 -07:00
Sergey Petrunya
ae51b5b698 Merge 2012-06-10 14:04:21 +04:00
Sergey Petrunya
a7229e8c20 BUG#1010351: New "via" keyword in 5.2+ can't be used as identifier anymore
- Add the VIA_SYM token into the keyword_sp list, which allows it to
  be used as an identifier and SP label.
2012-06-10 13:50:21 +04:00
Tor Didriksen
2b085e1fba Bug#14051002 VALGRIND: CONDITIONAL JUMP OR MOVE IN RR_CMP / MY_QSORT
Patch for 5.1 and 5.5: fix typo in byte comparison in rr_cmp()
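
The essence of such a comparator (an illustrative sketch, not the
actual rr_cmp() source) is that each byte index must be compared
against the same index on the other side; a single wrong index reads
stray bytes and breaks the ordering:

  #include <stdio.h>
  #include <stdlib.h>

  /* qsort() comparator for fixed-size 8-byte keys. */
  static int byte_cmp(const void *va, const void *vb)
  {
      const unsigned char *a = va, *b = vb;
      size_t i;
      for (i = 0; i < 8; i++)
          if (a[i] != b[i])                    /* same i on both sides */
              return (int) a[i] - (int) b[i];
      return 0;
  }

  int main(void)
  {
      unsigned char keys[3][8] = {
          {0,0,0,0,0,0,0,3}, {0,0,0,0,0,0,0,1}, {0,0,0,0,0,0,0,2}
      };
      qsort(keys, 3, 8, byte_cmp);
      printf("%u %u %u\n", keys[0][7], keys[1][7], keys[2][7]);
      return 0;
  }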
2012-06-05 15:53:39 +02:00
Sergei Golubchik
41d860ef53 5.1 merge 2012-06-01 23:45:54 +02:00
Sergei Golubchik
34f2f8ea41 MDEV-256 lp:995501 - mysqltest attempts to parse Perl code inside a block
with a false condition, gets confused and throws wrong errors
2012-06-01 17:53:59 +02:00
Annamalai Gurusami
a28a2ca798 Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
WHEN KILLING

Suppose there is a query waiting for a lock.  If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file.  Since this is a
user-requested interruption, no spurious error message must be logged
in the server log.  This patch removes the error message from
the log.

approved by joh and tatjana
2012-06-01 14:12:57 +05:30
unknown
2ebd927ec0 2012-05-31 22:28:18 +05:30
unknown
70d01f5182 2012-05-31 14:32:29 +05:30
unknown
f8a6521789 2012-05-30 14:00:29 +05:30
Rohit Kalhans
96eb519eb7 Fixing the build failure on Windows debug build. 2012-05-30 13:54:15 +05:30
Rohit Kalhans
d8b2d4a069 Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
Problem: mysqlbinlog exits without any error code in case of a
file write error. This is due to the fact that calls to the
Log_event::print() method do not return a value, and thus any
errors were being ignored.

Resolution: We resolve this problem by checking for
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution.
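
The pattern of the fix, sketched with a generic cache struct rather
than the real IO_CACHE API: test the cache's error flag after every
print call and stop with a non-zero exit code.

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct { int error; } CACHE_SKETCH;

  static void print_event(CACHE_SKETCH *cache, int event_no)
  {
      if (event_no == 3)         /* simulate a write failure, cf. the
                                    debug code that simulates ENOSPC */
          cache->error = -1;
      else
          printf("event %d written\n", event_no);
  }

  int main(void)
  {
      CACHE_SKETCH cache = { 0 };
      int i;
      for (i = 1; i <= 5; i++)
      {
          print_event(&cache, i);
          if (cache.error == -1)          /* the check the fix adds */
          {
              fprintf(stderr, "write error after event %d\n", i);
              exit(EXIT_FAILURE);
          }
      }
      return 0;
  }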

client/mysqlbinlog.cc:
  - handled error conditions during event->print() calls
  - added check for error in end_io_cache()
mysys/my_write.c:
  Added debug code to simulate a file write error; the error
  returned will be ENOSPC, i.e. no space left on the device.
sql/log_event.cc:
  Added debug code to simulate file write error, by reducing the size of io cache.
2012-05-29 12:11:30 +05:30
unknown
f45784c850 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields() implementations do not call Item_func::fix_fields and therefore do not collect with_subselect information.
2012-05-25 10:29:53 +03:00
Inaam Rana
01748ce128 Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK
rb://1088
approved by: Marko Makela

This bug was introduced in the early stages of the plugin. We were
not checking for an implicit lock on a secondary index record for the
trx_id that is stamped on the current version of the clustered index
record, in the case where the clustered index record has a previous
delete-marked version.
2012-05-24 12:37:03 -04:00
unknown
d56f5dae1e Fix bug lp:1001506
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.

Analysis:

When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.

The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately
copy_field.

However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.

Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
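
The invariant at stake, sketched generically below (not the actual
TMP_TABLE_PARAM code): code that walks from copy_field to
copy_field_end must see both pointers cleared together, otherwise the
(ptr != end) loop runs from NULL.

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct {
      int *copy_field;
      int *copy_field_end;
  } PARAM_SKETCH;

  static void cleanup(PARAM_SKETCH *p)
  {
      free(p->copy_field);
      p->copy_field     = NULL;
      p->copy_field_end = NULL;  /* the fix: clear the end pointer too */
  }

  static void walk(PARAM_SKETCH *p)
  {
      int *f;
      /* Safe after cleanup() only because both pointers are NULL:
         the loop body is never entered. */
      for (f = p->copy_field; f != p->copy_field_end; f++)
          printf("%d\n", *f);
  }

  int main(void)
  {
      PARAM_SKETCH p;
      p.copy_field     = malloc(4 * sizeof(int));
      p.copy_field_end = p.copy_field + 4;
      cleanup(&p);
      walk(&p);   /* no crash: NULL != NULL is false */
      return 0;
  }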
2012-05-23 18:18:08 +03:00
unknown
950abd5268 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields() implementations do not call Item_func::fix_fields and therefore do not collect with_subselect information.
2012-05-22 08:48:10 +03:00
Annamalai Gurusami
e979417c06 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert statement like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto-inc lock is released
before the end of the statement.  So to make the autoincrement
values contiguous for a given statement, we need to reserve them
at the beginning of the statement.
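
A minimal sketch of the reserve-up-front idea (plain pthreads code,
not the InnoDB autoinc implementation): take the lock once, claim a
contiguous range for the whole statement, release the lock, then
assign values from the reserved range.

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t autoinc_lock = PTHREAD_MUTEX_INITIALIZER;
  static unsigned long next_value = 1;

  /* Reserve `count` consecutive values; returns the first one. */
  static unsigned long reserve_autoinc(unsigned long count)
  {
      unsigned long first;
      pthread_mutex_lock(&autoinc_lock);
      first = next_value;
      next_value += count;       /* whole range claimed at once */
      pthread_mutex_unlock(&autoinc_lock);
      return first;
  }

  int main(void)
  {
      /* "insert into t values (1),(2),(3)" needs 3 contiguous ids. */
      unsigned long first = reserve_autoinc(3), i;
      for (i = 0; i < 3; i++)
          printf("row %lu gets autoinc %lu\n", i + 1, first + i);
      return 0;
  }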

Modified the fix based on review comment by Svoj.
2012-05-21 17:25:40 +05:30
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
This failure occurs because the event length is larger than the
total of max_allowed_packet + max_event_header length. Since the
event length exceeds this size, the master Dump thread is unable
to send the packet to the slave.
                      
This can happen, e.g., with row-based replication in an Update_rows event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option that regulates the
max_allowed_packet of the slave threads (IO/SQL).
This allows the slave to receive the large packets and apply
them successfully.

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increased the session max_allowed_packet to a large value for the
  slave's threads (IO/SQL), i.e. without taking the global
  max_allowed_packet into consideration.
2012-05-21 12:57:39 +05:30
Sergei Golubchik
280fcf0808 5.1 merge 2012-05-18 14:23:05 +02:00
Sergei Golubchik
57f824b099 post-merge fixes
sql/slave.cc:
  add mutex protection, like in sql_parse.cc
2012-05-18 12:42:06 +02:00
Rohit Kalhans
781137c0dd BUG#14005409 - 64624
Problem: After the fix for Bug#12589870, a new field that
stores the length of the db name was added to the buffer that
stores the query to be executed. Unlike for a plain user
session, replication execution did not allocate the
necessary chunk in the Query_log_event constructor. This
caused an invalid read while accessing this field.

Solution: We fix this problem by allocating the necessary chunk
in the buffer created in Query_log_event::Query_log_event()
and storing the length of the database name there.
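
A generic sketch of the buffer-layout issue (hypothetical layout, not
the actual Query_log_event code): once the writer appends a
db-name-length field after the query text, every constructor sizing
that buffer must allocate room for it too.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static char *build_event_buf(const char *query, const char *db,
                               size_t *out_len)
  {
      size_t qlen = strlen(query);
      /* +1 for the trailing db-length field -- the chunk that was
         missing in the replication path. */
      size_t len = qlen + 1;
      char *buf = malloc(len);
      memcpy(buf, query, qlen);
      buf[qlen] = (char) strlen(db);   /* db name length field */
      *out_len = len;
      return buf;
  }

  int main(void)
  {
      size_t len;
      char *buf = build_event_buf("SELECT 1", "test", &len);
      printf("event is %zu bytes, db len field = %d\n",
             len, buf[len - 1]);
      free(buf);
      return 0;
  }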

sql/log_event.cc:
  Added a new field to the buffer created in the
  Query_log_event constructor and stored the length
  of the database name.
2012-05-18 14:44:40 +05:30
Gopal Shankar
21faded51e Bug#12636001 : deadlock from thd_security_context
PROBLEM:
Threads end up in a deadlock due to locks acquired as described
below.

con1: Runs a query on a table.
  It is important that this SELECT backs off while
  trying to open t1 and enters wait_for_condition().
  The SELECT is then blocked trying to lock mysys_var->mutex,
  which is held by con3. The very significant fact here is
  that mysys_var->current_mutex will still point to LOCK_open,
  even though LOCK_open is no longer held by con1 at this point.

con2: Tries dropping the table used in con1 or querying some table.
  It will hold LOCK_open and be blocked trying to lock
  kernel_mutex, which is held by con4.

con3: Tries killing the query run by con1.
  It will hold THD::LOCK_thd_data belonging to con1 while
  trying to lock mysys_var->current_mutex belonging to con1.
  But current_mutex points to LOCK_open, which is held
  by con2.

con4: Gets the InnoDB engine status.
  It will hold kernel_mutex while trying to lock
  THD::LOCK_thd_data belonging to con1, which is held by con3.

So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
         kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query and thd->query_length
2) the VIO
3) thd->mysys_var (used by the KILL statement and shutdown)
4) the THD during thread delete.

Among the above responsibilities, 1), 2) and (3,4) are three
independent groups. If a different lock owns responsibility
for (3,4), the deadlock cycle described above can be avoided.
This fix introduces LOCK_thd_kill to handle responsibility (3,4),
which eliminates the deadlock issue.
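
A generic sketch of that split (plain C, not the server's THD
definition): the kill path locks only LOCK_thd_kill and never needs
the mutex that participates in the cycle.

  #include <pthread.h>

  typedef struct {
      pthread_mutex_t LOCK_thd_data;  /* protects query text and VIO   */
      pthread_mutex_t LOCK_thd_kill;  /* protects mysys_var and delete */
      const char *query;
      void *mysys_var;
  } THD_SKETCH;

  static void kill_connection(THD_SKETCH *thd)
  {
      /* Only the kill-related state is locked; threads holding or
         waiting on LOCK_thd_data are not part of this path. */
      pthread_mutex_lock(&thd->LOCK_thd_kill);
      thd->mysys_var = 0;   /* signal the victim, wake it up, etc. */
      pthread_mutex_unlock(&thd->LOCK_thd_kill);
  }

  int main(void)
  {
      static THD_SKETCH thd = { PTHREAD_MUTEX_INITIALIZER,
                                PTHREAD_MUTEX_INITIALIZER,
                                "SELECT 1", 0 };
      kill_connection(&thd);
      return 0;
  }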

Note: The problem is not found in 5.5. The introduction of the MDL
subsystem moved metadata locking responsibility from the TDC/TC to
the MDL subsystem. Due to this, the responsibility of LOCK_open is
reduced. As the use of LOCK_open was removed from open_table() and
mysql_rm_table(), the above-mentioned CYCLE does not form.
Revision IDs for the changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
2012-05-17 18:07:59 +05:30
unknown
3e1758d36a 2012-05-17 11:41:46 +01:00
Sergei Golubchik
0a8c9b98f6 merge with mysql-5.1.63 2012-05-17 12:12:33 +02:00
unknown
5a47413934 fix of LP bug#998321
The problem is that we can't check the null_value field of a non-basic constant without executing the item.
2012-05-17 10:13:25 +03:00
unknown
932376f97a 2012-05-17 10:15:54 +05:30
Annamalai Gurusami
ce9e6b5a9a Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check for a
discarded tablespace to avoid the crash.

rb://1041 approved by Marko Makela
2012-05-16 16:36:49 +05:30
Venkata Sidagam
6b05e434bf Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with an FT index.
2) Enable concurrent inserts.
3) In multiple threads, do the operations below repeatedly:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode

After some time we could observe two different assert core dumps.

Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key
root block.
The 1st select thread's block->status is set to BLOCK_ERROR
because my_pread() in read_block() returns '0'.
Truncating the table made the index file size 1024, and pread
was asked to read a block of count bytes (1024 bytes) from
offset 1024, which it cannot do since that is the end of the
file; it returns '0', setting
"my_errno= HA_ERR_FILE_TOO_SHORT", and key_file_length and
key_root[0] are the same, i.e. 1024. Since the block status is
BLOCK_ERROR, the 1st select thread enters free_block() and
waits on the conditional mutex, setting the status to
BLOCK_REASSIGNED and calling wait_on_readers(). The other select
thread also works on the same block, sees the status as
BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED
and asserts the server.

2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock',
which allows other threads to continue, gets the pread()
return value of '0' (please see the explanation above), then
tries to reacquire 'keycache->cache_lock' and waits
for the lock.
The insert thread requests the block; the block is assigned
from the hash list, the page_status is set to
'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting
in the queue since some other threads are performing
reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread()
value of '0', sets the block status to BLOCK_ERROR, and goes into
free_block() and wait_for_readers().
Now the insert thread wakes up, continues, checks that
block->status is not BLOCK_READ, and asserts.

Fix:
---
In the full-text code, multiple readers of the index file were not
guarded. Hence the code below was added in _ft2_search() and
walk_and_match().

To lock the key_root, the following code is used in _ft2_search():
  if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:
  if (info->s->concurrent_insert)
    mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to
  _ft2_search_internal(). _ft2_search() now takes the lock, calls
  _ft2_search_internal() and releases the lock in the case of
  concurrent inserts.
storage/myisam/ft_nlq_search.c:
  Added read-lock code in walk_and_match().
2012-05-16 16:14:27 +05:30
Annamalai Gurusami
bcb5d73767 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert statement like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto-inc lock is released
before the end of the statement.  So to make the autoincrement
values contiguous for a given statement, we need to reserve them
at the beginning of the statement.

rb://1074 approved by Alexander Nozdrin
2012-05-16 11:17:48 +05:30
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification in the
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Marko Mäkelä
5917c58a5d Bug#14025221 FOREIGN KEY REFERENCES FREED MEMORY AFTER DROP INDEX
dict_table_replace_index_in_foreign_list(): Replace the dropped index
also in the foreign key constraints of child tables that are
referencing this table.

row_ins_check_foreign_constraint(): If the underlying index is
missing, refuse the operation.

rb:1051 approved by Jimmy Yang
2012-05-15 15:04:39 +03:00
Georgi Kodinov
fcb033053d Bug #11761822: yassl rejects valid certificate which openssl accepts
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing of this
particular certificate.
Added a test case with the certificate itself.
2012-05-15 13:12:22 +03:00
Bjorn Munch
e72278fd42 Added some extra optional paths to test suites 2012-05-15 09:14:44 +02:00
Sergey Petrunya
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
Sergey Petrunya
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
unknown
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was
inherited from Item_func by mistake. It made the optimizer think that
subquery predicates with ALL/ANY/IN were null-rejecting. This could
trigger invalid conversions of outer joins into inner joins.
2012-05-11 09:35:46 +03:00
unknown
6fc863c749 Fixed typo 2012-05-10 09:00:21 +03:00
Annamalai Gurusami
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and the base handler::clone() does
not take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read, while for the other index
we were doing a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operations
required, setting ha_innobase->prebuilt->select_lock_type
correctly.
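
The general shape of the pitfall, as a plain C sketch (not the actual
handler API): the generic clone copies only the base state, so the
specialized clone must also carry over its own fields -- here, the
lock type used for reads.

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct { int open_count; } base_handler;
  typedef struct {
      base_handler base;
      int select_lock_type;   /* e.g. none vs. exclusive */
  } engine_handler;

  static void base_clone(base_handler *dst, const base_handler *src)
  {
      *dst = *src;   /* generic part only */
  }

  static engine_handler *clone_handler(const engine_handler *src)
  {
      engine_handler *dst = calloc(1, sizeof *dst);
      base_clone(&dst->base, &src->base);
      /* Without this line, one of the merged scans would use a
         different locking mode than the original handler. */
      dst->select_lock_type = src->select_lock_type;
      return dst;
  }

  int main(void)
  {
      engine_handler h = { {1}, 2 };
      engine_handler *c = clone_handler(&h);
      printf("lock type: %d\n", c->select_lock_type);
      free(c);
      return 0;
  }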

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
Vladislav Vaintroub
54534a6984 MDEV-262 : log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The reason for
the failure is sleep() calls with a short duration of 10ms, which is
less than the default system timer resolution of various
WaitForXXXObject functions (15.6 ms) and thus can't work reliably.
The fix is to make the sleeps slightly longer (20ms instead of 10ms)
in the test.
2012-05-08 12:38:22 +02:00
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Vladislav Vaintroub
597e98bc83 MDEV-261 : mysqltest crashes when assigning a variable to the result of a select, like
let x = `SELECT <something>`

The fix is to detect the "no active connection" condition, report an error and die.
Note that the check for no active connection was already in place for ordinary commands
and was missing only for the assign-variable command.
2012-05-08 00:26:41 +02:00
Venkata Sidagam
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not produce dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence,
after dropping the mysql database and applying the dump with
logging enabled, "'general_log' table not found" errors are logged
in the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log
and slow_log tables, because the data dump of those tables takes
LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those
tables, we take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
has "DROP TABLE" it will fail. Hence, the "DROP TABLE"
stmts for those tables were removed.
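
A sketch of that dump-output decision (illustrative C, not the actual
mysqldump code): skip DROP TABLE for the log tables and soften their
CREATE TABLE to CREATE TABLE IF NOT EXISTS.

  #include <stdio.h>
  #include <string.h>

  static int is_log_table(const char *name)
  {
      return !strcmp(name, "general_log") || !strcmp(name, "slow_log");
  }

  static void dump_table_ddl(const char *name)
  {
      if (!is_log_table(name))
          printf("DROP TABLE IF EXISTS `%s`;\n", name);
      /* For log tables a plain CREATE would fail on restore if the
         table already exists, since it cannot be dropped while
         logging is enabled. */
      printf("CREATE TABLE %s`%s` (...);\n",
             is_log_table(name) ? "IF NOT EXISTS " : "", name);
  }

  int main(void)
  {
      dump_table_ddl("db");
      dump_table_ddl("general_log");
      return 0;
  }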
  
After the fix we could still observe "Table 'mysql.general_log'
doesn't exist" errors initially; that is because in the customer
scenario the mysql database is dropped while logging is enabled,
so those errors are expected. Once we apply the dump which was
taken before the "drop database mysql", the errors will no
longer be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE stmts for
  the general_log and slow_log tables, because when logging is enabled
  those stmts fail. Also replaced CREATE TABLE with CREATE TABLE IF NOT
  EXISTS for those tables, to make sure the CREATE stmt for those
  tables doesn't fail now that the DROP stmts are removed.
  In dump_all_tables_in_db(), added code to call get_table_structure()
  for the general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
unknown
8065143637 Fix for LP bug#993726
Optimization of aggregate functions detected a constant under max()
and evaluated it, but the condition in the WHERE clause (which is
always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
unknown
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
2012-05-07 11:02:58 +03:00
unknown
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in the
      case of BLOB fields during an attempt to copy a BLOB field key
      image into the record buffer (the record buffer is too small to
      hold the BLOB key part image).
      note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key
      values; however, this case is filtered out by the covering_keys()
      check in get_best_group_min_max(), as BLOBs always require a key
      length modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to
key_copy() that copied the whole key, including the BLOB field which
is not used for index access. Copying the BLOB field overwrote memory
as far as the function parameter 'key_info'. As a result the contents
of key_info were all 0, which resulted in a crash when this key_info
was accessed a few lines below in key_cmp().
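
A generic sketch of the bounded copy the fix performs (not the actual
key_copy() call): copy only the key prefix the index access actually
uses, instead of the whole key image including the BLOB part that the
small record buffer cannot hold.

  #include <stdio.h>
  #include <string.h>

  #define RECORD_BUF_SIZE 16

  int main(void)
  {
      unsigned char key_image[64];        /* includes the BLOB part */
      unsigned char record_buf[RECORD_BUF_SIZE];
      size_t max_used_key_length = 8;     /* non-BLOB key prefix */

      memset(key_image, 0xAB, sizeof key_image);
      /* Copying sizeof key_image here would overrun record_buf and
         the memory after it; the fix bounds the copy by the used
         key length instead of the full image. */
      memcpy(record_buf, key_image, max_used_key_length);
      printf("copied %zu of %d buffer bytes\n",
             max_used_key_length, RECORD_BUF_SIZE);
      return 0;
  }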
2012-05-03 14:49:52 +03:00
Sergei Golubchik
167ad4c4a5 update the result file 2012-05-02 22:00:31 +02:00
Oleksandr Byelkin
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was
used. In the test suite it marked the t2_1.c part constant despite the
fact that it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected
by AND.
2012-05-02 18:11:02 +02:00
Sergei Golubchik
b192f7a2e7 5.1 merge 2012-05-02 17:06:30 +02:00