Commit graph

26798 commits

Aditya A
cdf5f4538d Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION
PROBLEM 
-------

optimize on a partition will recreate the whole table
instead of just the partition.

ANALYSIS
--------

At present InnoDB doesn't support the optimize option, so we rebuild the
whole table and then call analyze() on it. Presently, for any optimize()
option (on a table or partition) we display the following info to the user:

"Table does not support optimize, doing recreate + analyze instead".

FIX
---

It was decided that for GA versions (5.1 and 5.5), whenever the user tries to
optimize one or more partitions, we will display the following info to the user:

"Table does not support optimize on partitions.
All partitions will be rebuilt and analyzed."

Earlier, partitions were not analyzed. Now all partitions will be analyzed.

If the user wants to optimize the whole table, we will display the
previous info to the user, i.e.

"Table does not support optimize, doing recreate + analyze instead"

For 5.6+ versions we will raise a new bug to support optimize() options
in InnoDB.
2012-11-06 18:35:03 +05:30
Nuno Carvalho
03572c5b82 BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
Moved explicit instantiation of available_buffer and valid_buffer_range 
template functions to sql/log_event.cc.
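
For context, a minimal sketch (hypothetical template, not the server's
code) of the explicit-instantiation pattern involved here: the definition
lives in one .cc file, which also instantiates the template for the
argument types other translation units link against.

  // range_check.h: declaration only, visible to all users
  template <typename T> bool in_range(T pos, T start, T end);

  // range_check.cc: definition plus explicit instantiation
  template <typename T>
  bool in_range(T pos, T start, T end)
  {
    return start <= pos && pos < end;
  }

  // Emits the symbol other object files link against.
  template bool in_range<const char*>(const char*, const char*, const char*);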
2012-10-21 20:28:19 +01:00
Nuno Carvalho
a08ccb57e3 BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
Merge bundle on mysql-5.1.
2012-10-21 15:34:38 +01:00
Neeraj Bisht
4aaadc1236 Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB
Problem:-
When we execute a query whose subquery has GROUP BY, ORDER BY and a
BLOB column, a memory leak results.

Analysis:-
In the case of a subquery that has GROUP BY on a BLOB and ORDER BY on
another field, where the BLOB is not a key, we allocate a tmp buffer to
copy_field to take care of the BLOB value. This copy_field value can have
copies of itself in two join objects, so while freeing this copy_field we
have to take care that it is not deleted twice.
The double deletion of tmp_table_param.copy_field is handled by two patches.

One by Kostja :
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.

and other by Oleksandr
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).

Both of these patches were committed in different branches, and on
merging both were kept; however, Kostja's patch is not needed, as
Oleksandr's patch handles this.
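
A minimal sketch in plain C++ (not the server's code) of the
double-deletion hazard and a common guard: clear the pointer after the
first delete so a second cleanup becomes a no-op.

  struct CopyField { /* holds the tmp BLOB buffer */ };

  struct JoinCtx {
    CopyField *copy_field= nullptr;
    void cleanup()
    {
      delete copy_field;      // deleting a null pointer is a no-op
      copy_field= nullptr;    // guard: a second cleanup() is harmless
    }
  };

  int main()
  {
    JoinCtx outer, inner;
    outer.copy_field= inner.copy_field= new CopyField; // shared pointer
    outer.cleanup();
    inner.copy_field= nullptr;  // the copy must not be freed again
    inner.cleanup();
    return 0;
  }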
2012-10-18 23:45:15 +05:30
Mattias Jonsson
60f5f31468 Bug#14589559: ASSERTION `FILE_ENTRY_BUF[2] == 0'
FAILED IN DEACTIVATE_DDL_LOG_ENTRY

deallocate_ddl_log_entry() can be called without having
locked LOCK_gdl. It uses a global buffer for reading and
writing entries in the ddl_log, and since it is not protected
by any mutex, two concurrent threads can overwrite the
content in the global buffer, so it can be different from
what was read.
Thread a reads entry 1 into the global
buffer, thread b reads entry 2 into the global buffer,
thread a writes from the global buffer into entry 1
-> entry 1 now has the content of entry 2.

This is especially bad for replace entries, which uses
two phases, and does not deactivate the whole entry
after the first phase, but increases the phase instead.

Fixed by using thread local storage (stack) instead of global
storage (global buffer).

Also added buffer and size arguments to
read/write_ddl_log_file_entry.

Also only read/write first bytes in entries in
deactivate_ddl_log_entry.

Also fixed the scenario where it will try to recover from a server
compiled with a different value of IO_SIZE (very uncommon!)

Updated the patch with set_ddl_log_entry_from_buf
and removed read_ddl_log_entry.

Manually tested, no test case included.
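
A minimal sketch (hypothetical names, in-memory stand-ins for the file
I/O) of the fix pattern: the read/write helpers take the caller's buffer
and size as arguments, and each caller passes a buffer on its own stack,
so concurrent threads can no longer clobber a shared global.

  #include <cstddef>
  #include <cstring>

  static const size_t IO_SIZE= 128;    // the real value is build-dependent
  static char fake_file[16][IO_SIZE];  // stand-in for the ddl_log file

  // After the fix: buffer and size are arguments, not a shared global.
  static int read_ddl_log_file_entry(unsigned no, char *buf, size_t len)
  {
    memcpy(buf, fake_file[no], len);
    return 0;
  }

  static int write_ddl_log_file_entry(unsigned no, const char *buf,
                                      size_t len)
  {
    memcpy(fake_file[no], buf, len);
    return 0;
  }

  void deactivate_entry(unsigned no)
  {
    char file_entry_buf[IO_SIZE];      // stack storage, private per thread
    if (read_ddl_log_file_entry(no, file_entry_buf, sizeof(file_entry_buf)))
      return;
    file_entry_buf[0]= 'i';            // touch only the leading state byte
    write_ddl_log_file_entry(no, file_entry_buf, sizeof(file_entry_buf));
  }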
2012-10-18 11:59:47 +02:00
Neeraj Bisht
c55dd6bd4a Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED
Problem:-
using last_insert_id() on an auto_incremented bigint unsigned does
not work for values which are greater than the maximum signed BIGINT.

Analysis:-
last_insert_id() returns the first auto_incremented value for a column
and an auto_incremented value can have only positive values.

In our code, when we initialize a last_insert_id object, we treat it as
a signed BIGINT, so when the auto_incremented value exceeds the maximum
signed BIGINT, last_insert_id() gives a negative result.

Solution:
When fetching the value from last_insert_id, we now set the
unsigned_flag, so that it takes only unsigned BIGINT values.
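
A minimal standalone illustration (not the server's code) of why the
flag matters: the same 64-bit pattern prints as a negative number when
interpreted as signed.

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    uint64_t auto_inc= 18446744073709551615ULL; // max BIGINT UNSIGNED
    printf("as signed:   %lld\n", (long long) auto_inc);   // prints -1
    printf("as unsigned: %llu\n", (unsigned long long) auto_inc);
    return 0;
  }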
2012-10-16 23:18:48 +05:30
Marc Alff
c8f6ab2916 Bug#14629232 SECURITY VULNERABILITY WITH SHOW PROFILE
This fix resolves a security vulnerability of SHOW PROFILE.

See the bug report for details.
2012-10-12 19:38:45 +02:00
Nuno Carvalho
0cdd810bee BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
This bug had two problems:
 P1) Reads out of bounds;
 P2) Writes out of bounds.

PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing range
checks when using the name_len and val_len variables to walk the event
buffer.

Added range checks to User_var_log_event unmarshalling to prevent
unmarshalling errors.

PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which
caused stack frame errors when the value was bigger than the thread
stack size.

The value is now allocated on heap memory.
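
A minimal sketch (hypothetical function, not the event code itself) of
the range-check pattern added for P1: every length field is validated
against the end of the event buffer before it is used for reading.

  #include <cstdint>
  #include <cstring>

  bool unpack_name(const char *buf, size_t buf_len,
                   char *name_out, size_t name_cap)
  {
    const char *end= buf + buf_len;
    if ((size_t) (end - buf) < 4)
      return false;                 // not even room for name_len itself
    uint32_t name_len;
    memcpy(&name_len, buf, 4);
    buf+= 4;
    if (name_len > (size_t) (end - buf))
      return false;                 // P1: would walk past the buffer
    if (name_len >= name_cap)
      return false;                 // caller's buffer too small
    memcpy(name_out, buf, name_len);
    name_out[name_len]= '\0';
    return true;
  }

For P2 the corresponding change is to place large values in heap memory
(e.g. new char[val_len]) instead of a stack array.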
2012-10-12 08:32:10 +01:00
Tor Didriksen
4d29767e6c Bug#14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
2012-10-01 13:12:38 +02:00
Tor Didriksen
abdb79062e Backport
Bug 57135: CRASH IN ITEM_FUNC_CASE::FIND_ITEM WITH CASE WHEN
Bug 57692: Crash in item_func_in::val_int() with ZEROFILL
2012-09-25 16:03:05 +02:00
Jon Olav Hauglid
139c8ed591 Bug#14621627 THREAD CACHE IS UNFAIR
When a client connects to a MySQL server, first a THD object is created.
If there are any idle server threads waiting, the THD object is then added
to a list and a server thread is woken up. This thread then retrieves the 
THD object from the list and starts executing.

The problem was that this list of THD objects waiting for a server thread
was not working in a FIFO fashion, but rather LIFO. This is unfair, as it means
that the last THD added (= the last client connected) will be assigned a server
thread first.

Note however that for this to be a problem, several clients must be able
to connect and have THD objects constructed before any server thread
manages to be woken up. This is not a very likely scenario.

This patch fixes the problem by changing the THD list to work FIFO
rather than LIFO.

This is the 5.1/5.5 version of the patch.
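
A minimal sketch (std::deque as a stand-in for the server's internal
list) contrasting the unfair LIFO hand-off with the FIFO fix:

  #include <deque>

  struct THD {};                 // stand-in for the connection descriptor

  std::deque<THD*> waiting;      // guarded by a mutex in the real server

  void enqueue(THD *thd) { waiting.push_back(thd); }

  // Before: the newest connection is served first (LIFO, unfair).
  THD *pick_lifo()
  {
    THD *thd= waiting.back();
    waiting.pop_back();
    return thd;
  }

  // After: the oldest connection is served first (FIFO, fair).
  THD *pick_fifo()
  {
    THD *thd= waiting.front();
    waiting.pop_front();
    return thd;
  }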
2012-09-25 13:09:53 +02:00
Raghav Kapoor
5c089b0874 BUG#13864642: DROP/CREATE USER BEHAVING ODDLY
BACKGROUND:
In certain situations DROP USER fails to remove all privileges
belonging to the user being dropped from in-memory structures.
The current workaround is to run DROP USER twice in the scenario below,
OR to run FLUSH PRIVILEGES after DROP USER.

ANALYSIS:
In MySQL, when we grant stored routine privileges to a
user, they are stored in their respective hash.
When doing DROP USER, all the stored routine privilege entries
associated with that user have to be deleted from their respective
hash.
The root cause of this bug is that some entries from the hash
are not getting deleted.
The problem is that the code that deletes entries from the hash tries
to do so while iterating over it, without taking enough measures
to address the fact that such deletion can reshuffle elements in
the hash. If the user/administrator creates the same user again,
an error 'Error 1396 ER_CANNOT_USER' is thrown by MySQL.
This prompts the user to either do FLUSH PRIVILEGES or do DROP USER
again. This behaviour is not desirable, as it is a workaround and
does not solve the problem mentioned above.

FIX:
This bug is fixed by introducing a dynamic array to store the
pointers to all stored routine privilege objects that either have
to be deleted or updated. This is done in 3 steps.
Step 1: Fetching the element from the hash and checking whether 
it is to be deleted or updated.
Step 2: Storing the pointer to that privilege object in dynamic array.
Step 3: Traversing the dynamic array to perform the appropriate action 
either delete or update.
This is a much cleaner way to delete or update the privilege entries 
associated with some user and solves the problem mentioned above.
Also, the code has been refactored a bit by introducing an enum
instead of the hard-coded numbers used for the respective dynamic arrays
and hashes in the handle_grant_struct() function.
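
A minimal sketch of the three-step pattern, with std::unordered_multimap
standing in for the server's privilege hash: erasing while iterating can
reshuffle entries, so matching elements are collected first and erased
afterwards.

  #include <string>
  #include <unordered_map>
  #include <vector>

  typedef std::unordered_multimap<std::string, int> PrivHash;

  void drop_user_privs(PrivHash &hash, const std::string &user)
  {
    std::vector<PrivHash::iterator> to_drop;
    // Steps 1-2: scan the hash, only record what must be deleted.
    for (auto it= hash.begin(); it != hash.end(); ++it)
      if (it->first == user)
        to_drop.push_back(it);
    // Step 3: perform the deletions after iteration has finished.
    for (auto it : to_drop)
      hash.erase(it);
  }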
2012-09-25 15:58:46 +05:30
Rohit Kalhans
6d0574d306 BUG#14548159: Followup patch to fix some issues on PB2 2012-09-23 15:45:22 +05:30
Rohit Kalhans
5f003eca00 BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER
QUOTING IN REPLICATION 

Problem: Misquoted or unquoted identifiers may cause incorrect
statements to be logged to the binary log.

Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
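
A minimal sketch (hypothetical helper, not the server's actual function)
of appending a quoted identifier: wrap it in backticks and double any
embedded backtick so the logged statement parses unambiguously.

  #include <string>

  std::string quote_identifier(const std::string &ident)
  {
    std::string out= "`";
    for (char c : ident)
    {
      if (c == '`')
        out+= "``";        // escape the quote character by doubling it
      else
        out+= c;
    }
    out+= '`';
    return out;
  }

  // quote_identifier("my`table") yields `my``table`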
2012-09-22 17:50:51 +05:30
Harin Vadodaria
9f0727804c Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
INC_HOST_ERRORS() IS CALLED.

Issue       : Sequence of calling inc_host_errors()
              and reset_host_errors() required some
              changes in order to maintain correct
              connection error count.

Solution    : Call to reset_host_errors() is shifted
              to a location after which no calls to
              inc_host_errors() are made.
2012-09-17 17:02:17 +05:30
Sujatha Sivakumar
4bfeb52d92 Bug#11750014:ASSERTION TRX_DATA->EMPTY() IN BINLOG_CLOSE_CONNECTION
Problem:
=======

trx_data->empty() assert happens at `binlog_close_connection'

Analysis:
========

The trx_data->empty() function checks that there are no pending events
and that the transaction cache is empty. It returns
"true" if no pending events are present and the cache is empty;
otherwise it returns false. `binlog_close_connection'
expects this function to return true; if the
return value is false, the assert is raised.

This bug was reproducible in a disk-full scenario: an insert
operation creates a new pending event, and flushing this pending
event fails. Due to this failure the server goes down
and invokes `binlog_close_connection' for clean closure.
Since the pending event still remains, the assert is raised.
This assert is raised only for non-transactional databases.


Fix:
===

In a disk-full scenario, when the insertion fails, the
transaction is rolled back and `binlog_end_trans'
is called to flush the pending events. But the flush operation
fails as the disk is full, and the function simply returns
`1' without taking any action to delete the pending event.

This leaves the event in place until the connection is closed.
A `delete pending' statement has been added to
do the required cleanup.
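
A minimal sketch (hypothetical types, not the server's code) of the
cleanup: when the flush fails, the pending event is deleted before the
error is returned, so trx_data->empty() holds again by the time the
connection closes.

  struct Event {};

  struct TrxData
  {
    Event *pending= nullptr;
    bool empty() const { return pending == nullptr; }
  };

  static bool flush(Event*) { return false; }  // disk full: flush fails

  int binlog_end_trans(TrxData &trx)
  {
    if (trx.pending && !flush(trx.pending))
    {
      delete trx.pending;     // the added cleanup: drop the pending event
      trx.pending= nullptr;   // empty() is true again at close time
      return 1;
    }
    return 0;
  }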
2012-09-17 11:48:02 +05:30
Tor Didriksen
06e4a9696b Backport Bug#13724099 2012-09-12 08:36:12 +02:00
Andrei Elkin
7d115e1c2b Bug#14597605 Issue with Null-value user on slave
An "orthographic" typo in User_var::set_deferred() was made in fixes for
bug@14275000. While editing the signature of the initial patch to remove
the only argument, the assigned value of the argument remained in the body ... 
to be successfully compiled (!) thanks to names coincidence:
the arg to User_var method and its member.

Fixed with correcting the typo.
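
A minimal illustration (hypothetical class, not the actual one) of how
such a typo slips through: once the parameter is removed, the left-over
assignment resolves to the member of the same name and becomes a silent
self-assignment.

  struct User_var_sketch
  {
    bool deferred= false;

    // Before the edit:
    //   void set_deferred(bool deferred) { this->deferred= deferred; }
    void set_deferred()
    {
      deferred= deferred;  // still compiles: member assigned to itself
    }
  };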
2012-09-10 17:32:04 +03:00
Tor Didriksen
fe36ad9778 Bug#13734987 MEMORY LEAK WITH I_S/SHOW AND VIEWS WITH SUBQUERY
In fill_schema_table_by_open(): free item list before restoring active arena.
2012-09-05 17:40:13 +02:00
Mattias Jonsson
a0fd666041 Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
pre-push fix, removed unused variable.
2012-08-20 12:39:36 +02:00
Mattias Jonsson
a144799c05 merge 2012-08-20 11:18:17 +02:00
Mattias Jonsson
5abec3c287 Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
Additional patch to remove the part_id -> ref_buffer offset.

The partition id and the associated record buffer can
be found without having to calculate them.

By initializing it for each used partition and then reusing
the key buffer from the queue, there is no need for
such a map.
2012-08-17 14:25:32 +02:00
Alexander Barkov
3fe9686201 Backporting Bug 14100466 from 5.6. 2012-08-17 13:14:04 +04:00
Mattias Jonsson
8c72fd8b78 Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
The buffer for the current read row from each partition
(m_ordered_rec_buffer) used for sorted reads was
allocated on open and freed when the ha_partition handler
was closed or destroyed.

For tables with many partitions and big records this could
take up too much valuable memory.

The solution is to allocate the memory only when it is needed
and free it when no longer needed, i.e. allocate it in
index_init and free it in index_end (and, to handle failures,
also free it on reset, close, etc.).

Also, only the needed memory is allocated, according to
partition pruning.

Manually tested that it does not use as much memory and
releases it after queries.
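
A minimal sketch (hypothetical handler, not ha_partition itself) of the
allocate-late/free-early pattern: the buffer exists only for the duration
of an ordered index scan.

  #include <cstdlib>

  struct PartitionHandlerSketch
  {
    unsigned char *m_ordered_rec_buffer= nullptr;

    int index_init(size_t needed)   // allocate only when the scan starts
    {
      m_ordered_rec_buffer= (unsigned char*) malloc(needed);
      return m_ordered_rec_buffer ? 0 : 1;  // report allocation failure
    }

    void index_end()                // also invoked from reset()/close()
    {
      free(m_ordered_rec_buffer);   // free(NULL) is a no-op
      m_ordered_rec_buffer= nullptr;
    }
  };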
2012-08-15 14:31:26 +02:00
Sujatha Sivakumar
c6d8569d41 Bug#13596613:SHOW SLAVE STATUS GIVES WRONG OUTPUT WITH
MASTER-MASTER AND USING SET USE

Problem:
=======
In a master-master set-up, a master can show a wrong
'SHOW SLAVE STATUS' output.

Requirements:
- master-master
- log_slave_updates

This is caused by using SET user-variables and then using
them to perform writes. From then on, the master that performed
the insert will have a SHOW SLAVE STATUS that is wrong, and
it will never get updated until a write happens on the other
master. On "Master A" the "exec_master_log_pos" is not
getting updated.

Analysis:
========
The slave receives a "User_var" event from the master, and after
applying the event, when the "log_slave_updates" option is
enabled, the slave tries to write this applied event into
its own binary log. At the time of writing this event the
slave should use the "originating server-id". But in the
above case the server always logs the "user var events"
by using its global server-id. Due to this, in a
"master-master" replication, when the event comes back to the
originating server the "User_var_event" doesn't get skipped.
"User_var_events" are context-based events and are always
followed by a query event which marks their end of group.
Due to the above-mentioned problem with "User_var_event"
logging, the "User_var_event" never gets skipped whereas
its corresponding "query_event" gets skipped. Hence the
"User_var" event always waits for the next "query event"
and the "Exec_master_log_position" does not get updated
properly.

Fix:
===
The `MYSQL_BIN_LOG::write' function is used to write events
into the binary log. Within this function a new object for
"User_var_log_event" is created, and this new object is used
to write the "User_var" event into the binlog. The "User var"
event is inherited from "Log_event", which has
different overloaded constructors. When a "THD" object
is present, the "Log_event(thd,...)" constructor should be used
to initialise the object; only in the absence of a valid
"THD" object should the minimal "Log_event()" constructor be
used. In the above-mentioned problem the default minimal
constructor was always used, which is incorrect. This minimal
constructor is replaced with "Log_event(thd,...)".
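
A minimal sketch (hypothetical classes) of the constructor difference:
the THD-aware constructor carries the originating server id, while the
minimal constructor falls back to the global one, which is what made the
re-logged event impossible to skip.

  #include <cstdint>

  static uint32_t global_server_id= 1;

  struct THD { uint32_t server_id; };   // carries the originating id

  struct Log_event_sketch
  {
    uint32_t server_id;
    Log_event_sketch() : server_id(global_server_id) {}  // minimal ctor
    explicit Log_event_sketch(const THD *thd)            // THD-aware ctor
      : server_id(thd->server_id) {}
  };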
2012-08-14 14:11:01 +05:30
Sergey Glukhov
af3fdefca5 Bug MEMORY LEAK WHEN REFERENCING OUTER FIELD IN HAVING
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
2012-08-09 15:34:52 +04:00
Sunanda Menon
dd8a7a93bc Merge from mysql-5.1.65-release 2012-08-09 08:50:43 +02:00
Nirbhay Choubey
d4e4538b2d Bug#13928675 MYSQL CLIENT COPYRIGHT NOTICE MUST
SHOW 2012 INSTEAD OF 2011

* Added a new macro to hold the current year :
  COPYRIGHT_NOTICE_CURRENT_YEAR
* Modified ORACLE_WELCOME_COPYRIGHT_NOTICE macro
  to take the initial year as a parameter and pick
  the current year from the above-mentioned macro.
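
A minimal sketch of the macro arrangement described above (the real
notice text lives in the server's headers; the wording here is
illustrative only):

  #include <cstdio>

  #define COPYRIGHT_NOTICE_CURRENT_YEAR "2012"

  #define ORACLE_WELCOME_COPYRIGHT_NOTICE(first_year) \
    "Copyright (c) " first_year ", " COPYRIGHT_NOTICE_CURRENT_YEAR \
    ", Oracle and/or its affiliates. All rights reserved."

  int main()
  {
    puts(ORACLE_WELCOME_COPYRIGHT_NOTICE("2000"));
    return 0;
  }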
2012-08-07 18:58:19 +05:30
Chaithra Gopalareddy
00fff1e12e Bug : EXPORT_SET CRASHES DUE TO OVERALLOCATION OF MEMORY
Backport the fix from 5.6 to 5.1
Base bug number : 11765562
2012-08-05 16:29:28 +05:30
Tor Didriksen
611484f9a5 Bug#14111180 HANDLE_FATAL_SIGNAL IN PTR_COMPARE_1 / QUEUE_INSERT
Space available for merging was calculated incorrectly.
2012-07-27 09:13:10 +02:00
Praveenkumar Hulakund
bb64579de8 BUG#13868860 - LIMIT '5' IS EXECUTED WITHOUT ERROR WHEN '5'
IS PLACE HOLDER AND USE SERVER-SIDE 

Analysis:
LIMIT always takes nonnegative integer constant values. 

http://dev.mysql.com/doc/refman/5.6/en/select.html

So parsing of value '5' for LIMIT in SELECT fails.

But within a prepared statement, LIMIT parameters can be
specified using '?' markers, and the value for the parameter can
be supplied when executing the prepared statement.

Passing string, float or double values for LIMIT works well
from the CLI, because while setting the values
for the parameters from the variable list (added using
SET), if the value is for a LIMIT parameter it is
converted to an integer value.

But when a prepared statement is executed from other
interfaces such as Connector/J, C applications, etc.,
the values for the parameters are sent to the server
with the execute command. Each item in the list has a value and
a data TYPE. So, while setting a parameter's value
from this list, the value is set with the same data type as
passed. The logic to convert the value to an integer type
when it is for a LIMIT parameter was missing here.
Because of this, the string '5' is set for LIMIT,
and the same is logged into the binlog file too.

Fix:
When executing a prepared statement with a parameter from the
CLI it worked fine, as the value set for the parameter
is converted to an integer. It failed in other
interfaces such as Connector/J, C applications, etc., as this
conversion was missing.

So, as a fix, a check was added while setting values for the
parameters: if the parameter is for a LIMIT value, it
is converted to an integer value.
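
A minimal sketch (hypothetical types, not the server's parameter code)
of the added conversion: a value bound to a parameter that feeds a LIMIT
clause is coerced to an integer regardless of the type the client sent.

  #include <cstdint>
  #include <cstdlib>
  #include <string>

  struct Param
  {
    bool is_limit;           // parameter is used in a LIMIT clause
    bool is_int= false;
    int64_t int_val= 0;
    std::string raw_val;
  };

  void bind_value(Param &p, const std::string &value)
  {
    if (p.is_limit)
    {
      // The fix: LIMIT accepts only nonnegative integers, so convert.
      p.is_int= true;
      p.int_val= strtoll(value.c_str(), nullptr, 10);
    }
    else
      p.raw_val= value;      // other parameters keep the type as sent
  }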
2012-07-26 23:44:43 +05:30
Tor Didriksen
76386edb83 Backport of Bug#14171740 65562: STRING::SHRINK SHOULD BE A NO-OP WHEN ALLOCED=0 2012-07-26 15:05:24 +02:00
Alexander Barkov
882e9381cc Fixing wrong copyright. Index.xml was modified in 2005,
while the copyright notice still mentioned 2003.
2012-07-24 09:27:00 +04:00
Chaithra Gopalareddy
a56c4692d4 Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
"ORDER BY" AND "LIMIT BY" CLAUSE

PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
Without the 'limit' clause, however, the optimizer chooses
the right index.

ANALYSIS:
With respect to the query specified, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the 'group by' clause.

While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).

FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, which will need all the rows.

This fix is backported from 5.6. This problem is fixed in 5.6 as   
part of changes for work log 
2012-07-18 14:36:08 +05:30
Annamalai Gurusami
e612f55237 Bug 58157: INNODB LOCKS AN UNMATCHED ROW EVEN THOUGH USING
RBR AND RC

Description: When scanning and locking rows with < or <=, InnoDB locks
the next row even though row-based binary logging and read committed
are used.

Solution: In the handler, when the row is identified to fall outside
of the range (as specified in the query predicates), then request the
storage engine to unlock the row (if possible). This is done in
handler::read_range_first() and handler::read_range_next().
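
A minimal sketch (hypothetical handler methods with stub bodies) of the
change: once a fetched row falls outside the requested range, the handler
asks the engine to release that row's lock instead of keeping it.

  static const int SKETCH_END_OF_FILE= 137;  // stand-in error code

  struct HandlerSketch
  {
    int rows_left= 3;
    bool row_in_range() { return rows_left > 1; } // stand-in predicate
    void unlock_row() {}    // engine may release the last-read row's lock
    int fetch_next() { return rows_left-- > 0 ? 0 : -1; }

    int read_range_next()
    {
      int err= fetch_next();
      if (err)
        return err;
      if (!row_in_range())
      {
        unlock_row();       // the fix: don't keep the unmatched row locked
        return SKETCH_END_OF_FILE;
      }
      return 0;
    }
  };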
2012-07-12 16:42:07 +05:30
Bjorn Munch
0731ba8194 Merge unpushed changes from 5.1.64-release 2012-07-12 10:00:14 +02:00
Andrei Elkin
04a244138e merge from 5.1 repo. 2012-07-10 13:00:03 +03:00
Sujatha Sivakumar
cf858b71ce BUG#11762670:MY_B_WRITE RETURN VALUE IGNORED
Problem:
=======
The return value from my_b_write is ignored by: `my_b_write_quoted',
`my_b_write_bit', `Query_log_event::print_query_header'

Most callers of `my_b_printf' ignore the return value. `log_event.cc' 
has many calls to it. 

Analysis:
========
`my_b_write' is used to write data into a file. If the write fails, it
sets an appropriate error number and error message through a my_error()
call and sets IO_CACHE::error to -1.
`my_b_printf' is also used to write data into a file; it
internally invokes my_b_write to do the write operation. Upon
success it returns the number of characters written to the file; on error
it returns -1, sets the error through my_error(), and also sets
IO_CACHE::error to -1. Most of the event-specific print functions,
for example `Create_file_log_event::print', `Execute_load_log_event::print'
etc., are the ones which make several calls to the above two functions and
do not check the return value after the 'print' call. All the above-
mentioned abuse cases deal with the client side.

Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been added
at a very high level, after the call to the 'print' function. There are
a few more places where the return value of "my_b_write" is ignored;
those are mentioned below.

+++ mysys/mf_iocache2.c    2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
           memset(buffz, '0', minimum_width - length2);
         else
           memset(buffz, ' ', minimum_width - length2);
-        my_b_write(info, buffz, minimum_width - length2);

+++ sql/log.cc	2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
     {
       end= strxmov(buff, "# administrator command: ", NullS);
       buff_len= (ulong) (end - buff);
-      my_b_write(&log_file, (uchar*) buff, buff_len);

At these places appropriate return value handlers have been added.
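
A minimal sketch (hypothetical caller, not the server's code) of the
high-level check described above: after the print call, the cache's error
flag is inspected to catch any write that failed inside it.

  struct Cache_sketch { int error= 0; };  // stand-in for IO_CACHE

  static void print_event(Cache_sketch &cache)
  {
    /* many my_b_write/my_b_printf calls; may set cache.error= -1 */
  }

  int dump_event(Cache_sketch &cache)
  {
    print_event(cache);
    if (cache.error == -1)  // the added check: some write inside failed
      return 1;
    return 0;
  }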
2012-07-10 14:23:17 +05:30
Andrei Elkin
dbd63d008f Bug#14275000
Fixes for BUG11761686 left a flaw that managed to slip away from testing:
only the effective filtering branch was actually tested, with a regression
test added to rpl_filter_tables_not_exist.
The reason for the failure is the too-early destruction of
mem-root-allocated memory at the end of the deferred User-var's
do_apply_event().

Fixed by bypassing free_root() in the deferred execution branch.
Deallocation of the items created in do_apply_event() is done by the base
code through THD::cleanup_after_query() -> free_items(), which the parent
Query can't miss.
2012-07-05 14:37:48 +03:00
Gleb Shchepa
0b847cd4de minor update to make MSVS happy 2012-06-29 18:24:43 +04:00
Georgi Kodinov
428ff7f8a0 Bug : malformed resultset packet crashes client
Several fixes :

* sql-common/client.c
Added a validity check of the fields metadata packet sent
by the server.
Now libmysql will check if the length of the data sent by
the server matches what's expected by the protocol before
using the data (see the sketch after this list).

* client/mysqltest.cc
Fixed the error handling code in mysqltest to avoid sending
new commands when reading the result set failed (and
there is unread data in the pipe).

* sql_common.h + libmysql/libmysql.c + sql-common/client.c
unpack_fields() now generates a proper error when it fails.
Added a new argument to this function to support the error 
generation.

* sql/protocol.cc
Added a debug trigger to cause the server to send a NULL
instead of the packet expected by the client, for testing
purposes.
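
A minimal sketch (hypothetical reader, not the actual client code) of
the validity check added to libmysql: the advertised length must fit
inside the packet before the data is dereferenced.

  #include <cstddef>
  #include <cstdint>

  bool read_counted_string(const uint8_t *&pos, const uint8_t *end,
                           const uint8_t *&str, size_t &len)
  {
    if (pos >= end)
      return false;        // no room even for the length byte
    len= *pos++;           // (the real protocol also has wider forms)
    if ((size_t) (end - pos) < len)
      return false;        // server's length does not fit the packet
    str= pos;
    pos+= len;
    return true;
  }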
2012-06-28 18:38:55 +03:00
Jon Olav Hauglid
4358669767 Bug#14238406 NEW COMPILATION WARNINGS WITH GCC 4.7 (-WERROR=NARROWING)
This patch fixes various compilation warnings of the type
"error: narrowing conversion of 'x' from 'datatype1' to
'datatype2'".
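
A minimal example of the class of warning being fixed: brace-initializing
a narrower type from a wider one is ill-formed in C++11, which GCC 4.7
began enforcing; an explicit cast states the intent.

  #include <cstdint>

  int main()
  {
    long big= 42;
    // uint8_t small{big};                    // narrowing conversion error
    uint8_t small{static_cast<uint8_t>(big)}; // fix: explicit conversion
    return small == 42 ? 0 : 1;
  }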
2012-06-29 13:25:57 +02:00
Gleb Shchepa
ba966cff98 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
Norvald H. Ryeng
537f770633 Merge. 2012-06-28 14:34:49 +02:00
Kent Boortz
3de8d224cf Solve a linkage problem with "libmysqld" on several Solaris platforms:
a multiple definition of 'THD::clear_error()' in (at least)
libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o).

Patch provided by Ramil Kalimullin.
2012-06-26 16:30:15 +02:00
Joerg Bruehe
c45bd024eb Fixing wrong comment syntax (discovered by Kent) 2012-06-21 16:26:50 +02:00
Harin Vadodaria
78dce2d5c2 Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
INC_HOST_ERRORS() IS CALLED.

Description: Reverting patch 3755 for bug#11753779
2012-06-19 12:56:40 +05:30
Norvald H. Ryeng
5f61cc438b Bug#13003736 CRASH IN ITEM_REF::WALK WITH SUBQUERIES
Problem: Some queries with subqueries and a HAVING clause that
consists only of a column not in the select or grouping lists cause
the server to crash.

During parsing, an Item_ref is constructed for the HAVING column. The
name of the column is resolved when JOIN::prepare calls fix_fields()
on its having clause. Since the column is not mentioned in the select
or grouping lists, a ref pointer is not found and a new Item_field is
created instead. The Item_ref is replaced by the Item_field in the
tree of HAVING clauses. Since the tree consists only of this item, the
pointer that is updated is JOIN::having. However,
st_select_lex::having still points to the Item_ref as the root of the
tree of HAVING clauses.

The bug is triggered when doing filesort for create_sort_index(). When
find_all_keys() calls select->cond->walk() it eventually reaches
Item_subselect::walk() where it continues to walk the having clauses
from lex->having. This means that it finds the Item_ref instead of the
new Item_field, and Item_ref::walk() tries to dereference the ref
pointer, which is still null.

The crash is reproducible only in 5.5, but the problem lies latent in
5.1 and trunk as well.

Fix: After calling fix_fields on the having clause in JOIN::prepare(),
set select_lex::having to point to the same item as JOIN::having.
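
A minimal sketch (hypothetical structs) of the fix: when fix_fields()
replaces the root of the HAVING tree, both owners of that pointer are
updated so no stale root survives.

  struct Item { virtual ~Item() {} };

  struct SelectLex { Item *having= nullptr; };

  struct Join
  {
    SelectLex *select_lex= nullptr;
    Item *having= nullptr;

    static Item *fix_fields(Item *root) { return root; } // may replace root

    void prepare()
    {
      having= fix_fields(having);
      select_lex->having= having;  // the fix: keep both pointers in sync
    }
  };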

This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
query is executed as a prepared statement. The Item_field is created
in the runtime arena when the query is prepared, and the pointer to
the item is saved by st_select_lex::fix_prepare_information() and
brought back as a dangling pointer when the query is executed, after
the runtime arena has been reclaimed.

Fix: Backport fix from trunk that switches to the permanent arena
before calling Item_ref::fix_fields() in JOIN::prepare().
2012-06-18 09:20:12 +02:00
Harin Vadodaria
3ec0a7eb04 Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
INC_HOST_ERRORS() IS CALLED.

Issue       : Sequence of calling inc_host_errors()
              and reset_host_errors() required some
              changes in order to maintain correct
              connection error count.

Solution    : Call to reset_host_errors() is shifted
              to a location after which no calls to
              inc_host_errors() are made.
2012-06-13 16:03:58 +05:30
Manish Kumar
1211b5d50b BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.

The failure occurs because the event length, once the
max_event_header length is added, exceeds the max_allowed_packet of the
Dump thread. This causes the Dump thread to break replication and throw
an error.

That can happen e.g. with row-based replication in an Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the binary log.

2.) On the slave side the max_allowed_packet for the slave's threads
    (IO/SQL) is increased to 1GB.

    This is done using the new server option (slave_max_allowed_packet),
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows the large packets to be received by the slave and applied
    successfully.
2012-06-12 12:59:13 +05:30