Commit graph

1222 commits

Author SHA1 Message Date
Ritheesh Vedire
bebb3427f2 Bug#16814264: FILTER OUT THE PERFORMANCE_SCHEMA RELAY LOG EVENTS FROM RELAY LOG
Performance schema tables are local to a server, so events on them should
  not be executed by the slave from the relay log.
  From 5.6.10 onwards, P_S events are not written to the binary log.
  But prior to that, from MySQL 5.5 onwards, P_S events were written
  to the binary log by the master.
  The following are problematic scenarios:
      
  1. Master 5.5 -> Slave 5.5
     ========================
    A) RBR: Slave crashes
    B) SBR: P_S statements are replicated.
      
  2. Master 5.5 -> Slave 5.6
    ========================
    A) RBR: SQL thd generates error
    B) SBR : P_S statements are replicated
      
  3. 5.5 binlog executed on a server 5.5 using mysqlbinlog|mysql
     =================================================================
     A) RBR: Server crash  (because of BINLOG'... statement)
     B) SBR: P_S statements are executed
      
  4. 5.5 binlog executed on server 5.6 using mysqlbinlog|mysql
     ================================================================
     A) RBR: SQL error (because of BINLOG'... statement)
     B) SBR: P_S statements are executed.
      
      
    The generalized behaviour should be:
    a) The slave SQL thread must ignore P_S events read from
       the relay log (see the sketch below).
    b) mysqlbinlog|mysql should replay the binlog successfully.
2014-01-31 04:25:39 +05:30
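
Below is a minimal standalone sketch of rule (a), the P_S filtering described
in this commit; the C++ is illustrative only (is_perf_schema_event and the
event's database list are invented names, not server code):

  #include <iostream>
  #include <string>
  #include <vector>

  // Events touching performance_schema tables are server-local and must
  // not be applied from the relay log.
  static bool is_perf_schema_event(const std::vector<std::string> &dbs)
  {
    for (const std::string &db : dbs)
      if (db == "performance_schema")
        return true;
    return false;
  }

  int main()
  {
    std::vector<std::string> touched= {"performance_schema"};
    if (is_perf_schema_event(touched))
      std::cout << "skipping event: P_S tables are server-local\n";
    return 0;
  }
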
Sujatha Sivakumar
8765bec5a2 Bug#17081415: >=4GB ROW EVENT CRASHES SERVER WITH WILD MEMCPY
OF ROW DATA

Problem:
========
Inserting a row larger than 4GB when the server uses RBR leads
to a crash.

Analysis:
========
Row-based binary logging records changes to individual table
rows. During the execution of DML statements in RBR, the actual
row data is stored in the "m_rows_buf" buffer, and the buffer's
contents are then written to the binary log. "m_rows_buf" is
prepared in "Rows_log_event::do_add_row_data".

When a huge row is specified, as in this bug scenario where the
row size is 4294971520 > UINT_MAX (4294967295), "m_rows_buf" is
reallocated to accommodate the row data and the row is then
copied into the buffer. During this realloc call the length is
cast to "uint", which overflows. Because of the overflow, the
reallocated buffer is smaller than what was requested, and the
copy of the row data into it crashes the server.

Hence rows of size > 4GB cannot be written to the binary log.
The event length is stored in 4 bytes by default, which caps how
large an event can grow, so such large rows cannot be replicated
using row-based replication.

Fix:
===
An error is now generated if the row size exceeds 4GB (see the
sketch below).

sql/log_event.cc:
  An error is generated if the row size exceeds 4GB.
  Debug simulations are added to test the fix.
2014-01-10 15:11:56 +05:30
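
A minimal standalone sketch of the overflow and the guard described in this
commit (illustrative C++; variable names are invented, not the server's):

  #include <cstdint>
  #include <cstdio>
  #include <limits>

  int main()
  {
    uint64_t row_size= 4294971520ULL;          // > UINT_MAX (4294967295)
    uint32_t truncated= (uint32_t) row_size;   // wraps to 4224, so a realloc
                                               // would be far smaller than asked
    std::printf("requested=%llu truncated=%u\n",
                (unsigned long long) row_size, truncated);

    // The guard the fix adds: refuse rows whose size cannot be represented,
    // instead of letting the cast overflow.
    if (row_size > std::numeric_limits<uint32_t>::max())
      std::fprintf(stderr, "error: rows larger than 4GB cannot be binlogged\n");
    return 0;
  }
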
Luis Soares
fcf33b601c BUG#17066269
- Automerged from bug branch into latest mysql-5.5.
- Fixed trailing whitespaces.
- Updated the copyright notice year to 2014.
2014-01-09 12:53:49 +00:00
Luis Soares
7481cf6c29 BUG#17066269: AUTO_INC VALUE NOT PROPERLY GENERATED WITH RBR AND
AUTO_INC COLUMN ONLY ON SLAVE

In RBR, if the slave's table has one additional auto_inc column,
then it will insert the value 0 instead of generating the next
auto_inc number.

We fix this by checking whether an extra auto_inc column exists
when compared to the column data of the row event; if so, we
explicitly set it to NULL and flag the engine that a nulled
auto_inc column will be inserted (see the sketch below).
2013-12-18 11:17:24 +00:00
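
A standalone sketch of the rule this fix applies (illustrative C++; the
Field type and the engine flag are invented stand-ins for the server's
internals):

  #include <cstdio>

  struct Field { bool is_auto_increment; bool is_null; };

  // Columns beyond those present in the row event get filled in on the
  // slave; for an auto_inc column the fix sets NULL and tells the engine,
  // so it generates the next sequence value instead of storing 0.
  static void fill_extra_column(Field &f, bool &engine_generates_value)
  {
    if (f.is_auto_increment)
    {
      f.is_null= true;              // explicit NULL, not 0
      engine_generates_value= true; // engine produces the next auto_inc value
    }
  }

  int main()
  {
    Field extra= {true, false};
    bool from_engine= false;
    fill_extra_column(extra, from_engine);
    std::printf("null=%d engine_generates=%d\n", extra.is_null, from_engine);
    return 0;
  }
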
Venkatesh Duggirala
5fa9664b07 Bug#17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Problem: If the slave receives a corrupted row event,
the slave server crashes.

Analysis: When the slave unpacks a row event, it does not
validate the data before applying the event. If the data is
corrupted, e.g. the length of a field is wrong, it can end up
reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog with the '-v' option. Because of the
-v option the tool tries to print the values of all the fields,
and a corrupted field length can lead to a crash.

Fix: Before a field is unpacked, its length is verified. Only if
it falls within the event range is the field unpacked; otherwise
the "ER_SLAVE_CORRUPT_EVENT" error is thrown (see the sketch
below).
In the mysqlbinlog -v case, the field value is not printed and
the processing of the file is stopped.

sql/field.h:
  Removed a function which is not required anymore
sql/log_event.cc:
  Adding a validation on the field length before
  the tool tries to print the value.
sql/log_event.h:
  Changing unpack_row call according to the new arguments
sql/log_event_old.h:
  Changing unpack_row call according to the new arguments
sql/rpl_record.cc:
  Adding a new argument 'row_end' which tells
  the end position of the complete data in the
  row event. It will be used to do validation
  before doing 'unpack' field.
sql/rpl_record.h:
  Adding a new argument 'row_end' which tells
  the end position of the complete data in the
  row event. It will be used to do validation
  before doing 'unpack' field.
sql/rpl_utility.cc:
  Now calc_field_size() is required for client too.
2013-12-17 22:11:22 +05:30
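
A minimal sketch of the length validation described in this commit
(illustrative C++; field_fits and the buffer layout are invented stand-ins,
not the real unpack_row/row_end API):

  #include <cstdint>
  #include <cstdio>

  // Before unpacking a field, verify its claimed length stays inside the
  // row event's data; a corrupted length must not be dereferenced.
  static bool field_fits(const uint8_t *pos, size_t field_len,
                         const uint8_t *row_end)
  {
    return field_len <= (size_t) (row_end - pos);
  }

  int main()
  {
    uint8_t row[16]= {0};
    const uint8_t *pos= row;
    const uint8_t *row_end= row + sizeof(row);
    size_t corrupted_len= 1000;   // field length read from a corrupted event
    if (!field_fits(pos, corrupted_len, row_end))
      std::fprintf(stderr, "ER_SLAVE_CORRUPT_EVENT: field length out of range\n");
    return 0;
  }
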
Sujatha Sivakumar
81fd7f8ab1 Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Fixing a post-push test issue.
Changing the debug simulation.
2013-11-07 17:30:57 +05:30
Sujatha Sivakumar
2a2641ad7f Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Problem:
========
mysqld crashed after executing "show binlog events in
'mysql-bin.000005' from 99"; the crash happened randomly.

Analysis:
========
During construction of a LOAD EVENT or NEW LOAD EVENT object,
if the starting offset provided is incorrect, then all the
object members retrieved from that offset are also invalid.
Sometimes this leads to out-of-bounds addresses. In the bug
scenario, the file name is extracted from an invalid address
and fed to the strlen(fname) function; passing an invalid
address to strlen leads to a crash.

Fix:
===
Validate that the given offset falls within the event boundary
(see the sketch below).

sql/log_event.cc:
  Added code to validate fname's address: "fname" must lie
  within the event boundary. Added code to detect invalid
  events.
2013-11-06 15:00:49 +05:30
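
A standalone sketch of the boundary validation described in this commit
(illustrative C++; offset_in_event is an invented name, not the server's):

  #include <cstdio>

  // The file name is located via an offset inside the event buffer.
  // Validate the offset before forming a pointer and calling strlen().
  static bool offset_in_event(size_t fname_off, size_t event_len)
  {
    return fname_off < event_len;
  }

  int main()
  {
    const size_t event_len= 64;
    std::printf("ok=%d bad=%d\n",
                offset_in_event(10, event_len),   // valid offset
                offset_in_event(99, event_len));  // would read out of bounds
    return 0;
  }
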
Venkatesh Duggirala
633cc16e7c Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF
REPLICATION FILTERS ARE USED.
Merging fix from mysql-5.1
2013-10-16 22:15:59 +05:30
Venkatesh Duggirala
2b07397b20 Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF
REPLICATION FILTERS ARE USED.

Problem:
When a filtered slave applies an Int_var_log_event and then
tries to write the event to its own binlog, the LAST_INSERT_ID
value is written incorrectly.

Analysis:
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is a variable which is set when LAST_INSERT_ID() is used by
a statement. If it is set, first_successful_insert_id_in_
prev_stmt_for_binlog will be stored in the statement-based
binlog. This variable is CUMULATIVE along the execution of
a stored function or trigger: if one substatement sets it
to 1 it will stay 1 until the function/trigger ends,
thus making sure that first_successful_insert_id_in_
prev_stmt_for_binlog does not change anymore and is
propagated to the caller for binlogging. This is achieved
using the following code
if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
{
  /* It's the first time we read it */
  first_successful_insert_id_in_prev_stmt_for_binlog=
    first_successful_insert_id_in_prev_stmt;
  stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
}

After receiving an Int_var_log_event from the master, the slave
sets
stmt_depends_on_first_successful_insert_id_in_prev_stmt
to true (*which is wrong*) without setting
first_successful_insert_id_in_prev_stmt_for_binlog. Because
of this, when the actual DML statement with
LAST_INSERT_ID() is parsed by the slave SQL thread,
first_successful_insert_id_in_prev_stmt_for_binlog is not
set, and the default value of zero is written to the
slave's binlog.

Why only a *filtered slave* is affected when the code is
in a common place:
-------------------------------------------------------
In Query_log_event::do_apply_event,
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
is reset to zero at the end of the function. On a normal slave
(no filters), this variable is therefore reset.
On a filtered slave, the SQL thread defers the execution of all
IRU events until their Query_log_event is received. Once it
receives the Query_log_event it executes all pending IRU events
and then executes the Query_log_event itself. Hence the variable
is never reset to 0, causing this bug.

Fix: As described above, the root cause was setting
THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
when an Int_var_log_event was executed by the SQL thread. Hence
the problematic line is removed from the code.
2013-10-16 22:12:23 +05:30
Libing Song
514b8261b5 Bug#17402313 DUMP THREAD SENDS SOME EVENTS MORE THAN ONCE
The dump thread may encounter an error when reading events from the
active binlog file. Such errors may be temporary, so the dump thread
tries to read the event again. But it seeked to a wrong position,
which caused some events to be sent twice.

To fix the bug, prev_pos is defined outside the while loop and is set
to the correct position after every event is read correctly (see the
sketch below).

This patch also makes binlog_can_be_corrupted more accurate: only
binlogs that were not closed normally are marked
binlog_can_be_corrupted.

Finally, two warnings are added for when the dump thread encounters
these temporary errors.
2013-09-10 09:35:49 +08:00
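
A minimal sketch of the retry pattern described in this commit
(illustrative C++; the reader API is an invented stand-in for the dump
thread's event loop):

  #include <cstdio>

  // Stub reader: returns 0 on success, -1 on a (possibly temporary) error.
  static int read_event(long pos, long *next_pos)
  {
    *next_pos= pos + 100;   // pretend every event is 100 bytes long
    return 0;
  }

  int main()
  {
    long prev_pos= 4;   // first event starts after the 4-byte magic header
    long next_pos= 0;
    for (int i= 0; i < 3; i++)
    {
      if (read_event(prev_pos, &next_pos) != 0)
        continue;             // temporary error: retry from prev_pos, not
                              // from a stale position saved before the loop
      prev_pos= next_pos;     // advance only after a successful read
      std::printf("sent event ending at %ld\n", prev_pos);
    }
    return 0;
  }
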
Dmitry Lenev
fc2c669297 Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".

The problem in the first bug report was that although a deadlock
involving metadata locks was reported using the same error code and
message as an InnoDB deadlock, it didn't roll back the transaction
the way the latter does. This caused confusion, as after
ER_LOCK_DEADLOCK the transaction could in some cases be restarted
immediately and in other cases a rollback was required.

The problem in the second bug report was that although InnoDB deadlock
caused transaction rollback in all storage engines it didn't cause release
of metadata locks. So concurrent DDL on the tables used in transaction was
blocked until implicit or explicit COMMIT or ROLLBACK was issued in the
connection which got InnoDB deadlock.

The former issue stemmed from the fact that when support for
detecting and reporting metadata lock deadlocks was added, we
erroneously assumed that on deadlock InnoDB rolls back only the last
statement and not the whole transaction (while that is actually what
happens on InnoDB lock timeout), and so we didn't implement rollback
of transactions on MDL deadlocks.

The latter issue was caused by the fact that rollback of a
transaction due to deadlock is carried out by setting the
THD::transaction_rollback_request flag at the point where the
deadlock is detected, and performing the rollback inside the
trans_rollback_stmt() call when this flag is set. Since
trans_rollback_stmt() is not aware of MDL locks, no MDL locks were
released.

This patch solves these two problems in the following way:

- When an MDL deadlock is detected, transaction rollback is requested
  by setting the THD::transaction_rollback_request flag.

- The code performing rollback of the transaction when
  THD::transaction_rollback_request is set is moved out of
  trans_rollback_stmt(). We now handle the rollback request on the
  same level as we call trans_rollback_stmt() and release statement/
  transaction MDL locks.
2013-08-20 13:12:34 +04:00
Sujatha Sivakumar
9a1d651b0e Merge from mysql-5.1 to mysql-5.5 2013-03-28 14:18:51 +05:30
Sujatha Sivakumar
d054027c4b Bug#14324766: PARTIALLY WRITTEN INSERT STATEMENT IN BINLOG
NO ERRORS REPORTED
      
Problem:
=======
Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
code assumes that 0 returned from my_b_fill always means
end-of-cache, but that is incorrect: it can also indicate an error,
and that error is ignored. Other callers of my_b_fill don't check
for errors either: my_b_copy_to_file, and possibly my_b_gets.
      
Fix:
===
An error handler is already present to check the "cache" error
reported during the "MYSQL_BIN_LOG::write_cache" call. Hence error
handlers are added for "my_b_copy_to_file" and "my_b_gets".
When a cache read fails inside my_b_fill(), info->error= -1 is set.
Hence a check for "info->error" is added in the above two callers
upon return (see the sketch below).

mysys/mf_iocache2.c:
  Added a check for "cache->error" and simulation of cache read failure
mysys/my_read.c:
  Simulation of read failure
sql/log_event.cc:
  Added debug simulation
sql/sql_repl.cc:
  Added a check for cache error
2013-03-28 14:14:39 +05:30
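
A sketch of the caller-side check described in this commit (illustrative
C++; io_cache and fill are invented stand-ins for IO_CACHE and my_b_fill,
not the real my_sys API):

  #include <cstdio>

  struct io_cache { int error; };

  // Stand-in for my_b_fill(): returns bytes read; 0 can mean either
  // end-of-cache or a failed read, so callers must also inspect
  // info->error (set to -1 on read failure).
  static size_t fill(io_cache *info)
  {
    info->error= -1;   // simulate a failed cache read
    return 0;
  }

  int main()
  {
    io_cache cache= {0};
    size_t n= fill(&cache);
    if (n == 0 && cache.error == -1)   // the check the fix adds
      std::fprintf(stderr, "cache read failure, not end-of-cache\n");
    return 0;
  }
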
Nuno Carvalho
53cebed3b6 BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
Merge from mysql-5.1 into mysql-5.5.
2013-03-27 11:22:25 +00:00
Nuno Carvalho
84bd6fec76 BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
Fixed possible uninitialized variable.
2013-03-27 11:19:29 +00:00
Andrei Elkin
e8feb8d56a merge from 5.1 2013-03-26 20:52:01 +02:00
Andrei Elkin
9eb64ec5c0 Bug#16541422 LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES
When logging the first Query event that refers to a user variable,
the slave failed to log the user variable: at execution of the
User_var event the slave applier considered the variable as already
logged.
The misjudgement comes from a coincidence of query ids: the one the
thread holds at User_var execution and the one it sees when applying
the Query. While the two are naturally different in the regular
execution branch (the two computational events are separate
individual events), in the deferred applying case the User_var
execution effectively belongs to its Query's processing.

Fixed by storing the query id from User_var parsing time (where the
decision to defer is taken) and temporarily substituting it for the
actual query id at User_var execution time (along with its query);
see the sketch below. This manipulation mimics the behaviour of the
regular applying branch.

sql/log_event.cc:
  Store the parse-time query id in a new member of the event and
  temporarily substitute it for the actual query id at User-var
  execution time.
sql/log_event.h:
  Storage for keeping the query id in the User-var instance is added.
2013-03-26 19:24:01 +02:00
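
A sketch of the save/substitute/restore manipulation described in this
commit (illustrative C++; thd_state is an invented stand-in for THD):

  #include <cstdio>

  struct thd_state { unsigned long query_id; };

  // Execute the deferred User-var under the query id captured at parse
  // time, then restore the current one, mimicking the regular branch
  // where the two ids naturally differ.
  static void apply_deferred_user_var(thd_state *thd, unsigned long parse_qid)
  {
    unsigned long saved= thd->query_id;
    thd->query_id= parse_qid;   // substitute the parse-time query id
    /* ... execute the User-var event and log the variable here ... */
    thd->query_id= saved;       // restore the actual query id
  }

  int main()
  {
    thd_state thd= {42};
    apply_deferred_user_var(&thd, 41);
    std::printf("query_id restored to %lu\n", thd.query_id);
    return 0;
  }
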
Pedro Gomes
91e043c304 BUG#13545447: RPL_ROTATE_LOGS FAILS DUE TO CONCURRENCY ISSUES IN REP. CODE
Merge from mysql 5.1 to mysql 5.5
2013-02-15 22:18:37 +00:00
Pedro Gomes
7e8c887786 BUG#13545447: RPL_ROTATE_LOGS FAILS DUE TO CONCURRENCY ISSUES IN REP. CODE
In the method mysql_binlog_send, right after detecting an EOF in the
read-event loop, and before deciding whether to switch to a new
binlog file, there is an execution window where new events can be
written to the binlog and a rotation can happen. When reaching
the test, the function would then switch to a new binlog file,
ignoring all the events written in this window. This results
in events not being replicated.

Only when the binlog is detected as deactivated in the event loop
of the dump thread can we really know that no more events
remain. For this reason, this test is now made under the log lock
at the beginning of the event loop when reading the events (see
the sketch below).
2013-02-15 21:57:35 +00:00
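
A minimal sketch of making the test under the log lock (illustrative C++;
the mutex and flag names are invented, not the server's locking scheme):

  #include <cstdio>
  #include <mutex>

  std::mutex log_lock;
  bool binlog_active= true;   // cleared, under log_lock, when the file rotates

  // Dump-thread loop: the activity test is made while holding the lock,
  // so no event can be written (and no rotation can happen) between the
  // test and the decision to move to the next file.
  static bool should_switch_file()
  {
    std::lock_guard<std::mutex> guard(log_lock);
    return !binlog_active;    // checked atomically with respect to writers
  }

  int main()
  {
    std::printf("switch=%d\n", should_switch_file());
    return 0;
  }
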
Nuno Carvalho
ff0c072913 BUG#16200555: EMPTY NAME FOR USER VARIABLE IS ALLOWED AND BREAKS STATEMENT BINARY LOGGING
Merge from mysql-5.1 into mysql-5.5.
2013-01-28 19:08:50 +00:00
Nuno Carvalho
d1378565bb BUG#16200555: EMPTY NAME FOR USER VARIABLE IS ALLOWED AND BREAKS STATEMENT BINARY LOGGING
A previous fix incorrectly treated user variables with a zero-length
name as event corruption, even though the server allows them.

Fix this wrong assumption by again allowing user variables with
zero-length names in the binary log.
2013-01-28 19:05:09 +00:00
Venkatesh Duggirala
e4f4e50648 BUG#11752707-SLAVE CRASHES IF RBR HAS AS DESTINATION A VIEW RATHER
THAN A TABLE.
Merging fix from mysql-5.1
2013-01-19 06:07:08 +05:30
Venkatesh Duggirala
bc21e8cd69 Bug#11752707-SLAVE CRASHES IF RBR HAS AS DESTINATION A VIEW
RATHER THAN A TABLE

Problem: In RBR, if a table is converted into a view on the slave
(i.e., "drop table 'object1'" & "create view 'object1'"), then any
DML operation on the table at the master causes a crash on the slave.

Analysis: The slave prepares a list of tables to be opened for DML
when it receives Table_map_log_event(s), and the same list is passed
to the open_table function. The open_table logic assumes that if the
list contains a view object, it also contains the "select_lex" object
of that view. In this special case the table object does not contain
a 'select_lex', as it is a base table on the master. Since it is a
view on the slave, the open_table logic goes into 'mysql_make_view()',
which assumes that a 'select_lex' exists for the object.

Fix: While preparing the 'tables to be opened' list, we make sure
that the required table type is 'base table'. If the object is not a
base table when it is opened, mysql_make_view will throw an error
similar to 'object is not a base table'.

sql/log_event.cc:
  Require that all of a Table_map_log_event's objects are
  base tables on the slave as well.
2013-01-19 06:01:46 +05:30
Libing Song
775b5c0269 Auto Merge 2012-12-01 08:06:45 +08:00
Libing Song
bdfc4dc6c6 Bug#11764602 ASSERTION IN
FORMAT_DESCRIPTION_LOG_EVENT::CALC_SERVER_VERSION_SPLIT

Problem: When reading a Format_description_log_event, the code assumes
the MySQL version is always valid and uses a DBUG_ASSERT to check the
version number. However, a user may give a wrong binlog offset, or even
a faked binary event that includes an invalid MySQL version. This
crashes the server.

Fix: The assertions are removed, and an error is reported if the MySQL
version in the Format_description_log_event is invalid (see the sketch
below).
2012-12-01 08:04:33 +08:00
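
A sketch of replacing the assertion with a reported error (illustrative
C++; the parsing is simplified and is not the server's actual
version-split code):

  #include <cstdio>

  // Split "X.Y.Z" into numbers; return false instead of asserting when
  // the string from a (possibly faked) event is not a valid version.
  static bool parse_version(const char *s, int v[3])
  {
    return std::sscanf(s, "%d.%d.%d", &v[0], &v[1], &v[2]) == 3
           && v[0] > 0 && v[1] >= 0 && v[2] >= 0;
  }

  int main()
  {
    int v[3];
    const char *from_event= "garbage";   // as read from a bad offset
    if (!parse_version(from_event, v))
      std::fprintf(stderr, "error: invalid server version in "
                           "Format_description_log_event\n");
    return 0;
  }
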
Praveenkumar Hulakund
d912a758b0 Bug#14466617 - INVALID WRITES AND/OR CRASH WITH USER
VARIABLES 

Analysis:
-------------
After executing the query, the new value of the user-defined
variables is set in the function "select_dumpvar::send_data".
"select_dumpvar::send_data" first calls
"Item_func_set_user_var::save_item_result()". This function
checks the nullness of the Item_field passed to it as a
parameter and saves it; the nullness of the item is stored in
arg[0]'s null_value flag. Then "select_dumpvar::send_data" calls
"Item_func_set_user_var::update()", which notices the saved null
result and calls "Item_func_set_user_var::update_hash". But here
null_value is not set, and args[0] is different from the one
given to "Item_func_set_user_var::save_item_result()". This
leads "Item_func_set_user_var::update_hash" to believe that it
is getting a non-null value: "user_var_entry::length" is set
to 0, so "user_var_entry::value" is made to point to the
extra_area allocated in "user_var_entry", and
"Item_func_set_user_var::update_hash" then tries to write beyond
extra_area for result type DECIMAL. Valgrind reports this as an
invalid write.

Before this bug was introduced, we avoided this problem by
creating the "Item_func_set_user_var" object with the same
Item_field both as arg[0] and as the parameter to
Item_func_set_user_var::save_item_result(). But now they refer
to different args[0]. Because of this, the null_value flag set
on the parameter Item_field in
"Item_func_set_user_var::save_item_result()" is not reflected
in the "Item_func_set_user_var" object.

Fix:
------------
This issue was reported on version 5.5.24. The issue does not
exist in 5.5.23, 5.1, 5.6, or trunk.

This issue was introduced by
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
bug #12408412), which was pushed into 5.5 and later releases. This patch
has later been reversed in 5.6 and trunk by
revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
bug #14664077). Backported this patch in 5.5 also to fix this issue.


sql/item_func.cc:
  Here an unsigned value is converted to a signed value.
sql/item_func.h:
  last_insert_id() gives an auto-incremented value, which can only
  be positive, so it is defined as an unsigned longlong and the
  unsigned_flag is set to 1.
2012-11-07 19:08:33 +05:30
Nuno Carvalho
700b24db62 BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
Merge from mysql-5.1 into mysql-5.5.
2012-10-21 20:34:41 +01:00
Nuno Carvalho
62503f9cf5 BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
Moved explicit instantiation of available_buffer and valid_buffer_range 
template functions to sql/log_event.cc.
2012-10-21 20:28:19 +01:00
Nuno Carvalho
922792fdff BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
Merge from mysql-5.1 into mysql-5.5.
2012-10-12 08:36:09 +01:00
Nuno Carvalho
f1d3b0f190 BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS
This bug had two problems:
 P1) Reads out of bounds;
 P2) Writes out of bounds.

PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing
range checks when using the name_len and val_len variables to walk
the event buffer.

Range checks were added to User_var_log_event unmarshalling to
prevent unmarshalling errors (see the sketch below).

PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which
caused stack frame errors when the value was bigger than the thread
stack size.

The value is now allocated on the heap.
2012-10-12 08:32:10 +01:00
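
A minimal sketch of the P1 range checks described in this commit
(illustrative C++; the buffer layout is simplified and the names are
invented, not the event's real wire format):

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  // Walk the event buffer using name_len only after verifying it stays
  // within [buf, buf + event_len); val_len would be checked the same way.
  static bool unmarshal(const uint8_t *buf, size_t event_len)
  {
    if (event_len < 4)
      return false;                       // not even room for name_len
    uint32_t name_len;
    std::memcpy(&name_len, buf, 4);
    if (name_len > event_len - 4)
      return false;                       // name would run past the buffer
    /* ... continue similarly for val_len ... */
    return true;
  }

  int main()
  {
    uint8_t evt[8]= {0xff, 0xff, 0xff, 0xff};   // corrupted name_len
    std::printf("valid=%d\n", unmarshal(evt, sizeof(evt)));
    return 0;
  }
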
Annamalai Gurusami
378a7d1ef5 Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH
CONSISTENT SNAPSHOT OPTION

A transaction is started with a consistent snapshot.  After 
the transaction is started new indexes are added to the 
table.  Now when we issue an update statement, the optimizer
chooses an index.  When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports 
the error code HA_ERR_TABLE_DEF_CHANGED, with message 
stating that "insufficient history for index".

This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is therefore not updated with the correct error
status (it remains Diagnostics_area::DA_EMPTY); see the sketch
after this entry.

Hence the following check in Protocol::end_statement()
fails.

 516   case Diagnostics_area::DA_EMPTY:
 517   default:
 518     DBUG_ASSERT(0);
 519     error= send_ok(thd->server_status, 0, 0, 0, NULL);
 520     break;

The fix is to backport the fix of bugs 14365043, 11761652 
and 11746399. 

14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED

rb://1227 approved by guilhem and mattiasj.
2012-10-08 19:40:30 +05:30
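
A sketch of the missing step, recording the engine error in the statement
diagnostics area (illustrative C++; report_error is an invented stand-in
for my_error, and the error code value is only illustrative):

  #include <cstdio>

  enum da_status { DA_EMPTY, DA_ERROR };
  static da_status stmt_da= DA_EMPTY;

  // Stand-in for my_error(): records the failure in the statement
  // diagnostics area so end_statement() sees DA_ERROR, not DA_EMPTY.
  static void report_error(int code)
  {
    stmt_da= DA_ERROR;
    std::fprintf(stderr, "error %d recorded in diagnostics area\n", code);
  }

  int main()
  {
    int engine_err= 159;          // e.g. HA_ERR_TABLE_DEF_CHANGED (illustrative)
    if (engine_err)
      report_error(engine_err);   // the call that was effectively missing
    std::printf("DA status=%d\n", (int) stmt_da);
    return 0;
  }
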
Rohit Kalhans
7c671a7ead BUG#14548159: Followup patch to fix some issues on PB2 2012-09-23 15:45:22 +05:30
Rohit Kalhans
5530c5e38d BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER
QUOTING IN REPLICATION 

Problem: Misquoted or unquoted identifiers could lead to
incorrect statements being logged to the binary log.

Fix: We use specialized functions to append quoted identifiers in
the statements generated by the server.
2012-09-22 17:50:51 +05:30
Rohit Kalhans
2d4b31da9b bug#14548159: upmerge from mysql-5.1 -> mysql-5.5 2012-09-23 15:51:23 +05:30
Rohit Kalhans
4802faad3f upmerge to bug#14548159 2012-09-22 18:07:04 +05:30
Andrei Elkin
f4dc921596 merge from 5.5 repo. 2012-07-10 13:53:19 +03:00
Andrei Elkin
eca29d5f33 merge from 5.1 repo. 2012-07-10 13:00:03 +03:00
Sujatha Sivakumar
2e84cf8674 merge from 5.1 to 5.5 2012-07-10 14:37:35 +05:30
Sujatha Sivakumar
13f7f00288 BUG#11762670: MY_B_WRITE RETURN VALUE IGNORED
Problem:
=======
The return value from my_b_write is ignored by `my_b_write_quoted',
`my_b_write_bit' and `Query_log_event::print_query_header'.

Most callers of `my_b_printf' ignore its return value; `log_event.cc'
has many calls to it.

Analysis:
========
`my_b_write' is used to write data into a file. If the write fails, it
sets an appropriate error number and message through a my_error()
call and sets IO_CACHE::error to -1.
`my_b_printf' is also used to write data into a file; it internally
invokes my_b_write to do the write operation. Upon success it returns
the number of characters written to the file; on error it returns -1,
sets the error through my_error(), and also sets IO_CACHE::error to
-1. Most of the event-specific print functions, for example
`Create_file_log_event::print' and `Execute_load_log_event::print',
make several calls to the above two functions and do not check the
return value after the 'print' call. All of the above-mentioned abuse
cases are on the client side.

Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been
added at a very high level, after the call to the 'print' function
(see the sketch below). There are a few more places where the return
value of "my_b_write" is ignored; those are shown below.

+++ mysys/mf_iocache2.c    2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
           memset(buffz, '0', minimum_width - length2);
         else
           memset(buffz, ' ', minimum_width - length2);
-        my_b_write(info, buffz, minimum_width - length2);

+++ sql/log.cc	2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
     {
       end= strxmov(buff, "# administrator command: ", NullS);
       buff_len= (ulong) (end - buff);
-      my_b_write(&log_file, (uchar*) buff, buff_len);

At these places appropriate return value handlers have been added.

client/mysqlbinlog.cc:
  check for IO_CACHE::error == -1 has been added after the call to
  the event specific print functions
mysys/mf_iocache2.c:
  Added handler to check the written value of `my_b_write'
sql/log.cc:
  Added handler to check the written value of `my_b_write'
sql/log_event.cc:
  Added error simulation statements in `Create_file_log_event::print`
  and `Execute_load_query_log_event::print'
sql/rpl_utility.h:
  Removed the extra ';'
2012-07-10 14:23:17 +05:30
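
A sketch of the return-value handler pattern described in this commit
(illustrative C++; b_write is an invented stand-in, not the real
my_b_write signature):

  #include <cstdio>

  struct io_cache { int error; };

  // Stand-in for my_b_write(): non-zero return and error == -1 on failure.
  static int b_write(io_cache *info, const char *, size_t)
  {
    info->error= -1;   // simulate a failed write
    return 1;
  }

  int main()
  {
    io_cache cache= {0};
    // The fix: propagate the failure instead of discarding the result.
    if (b_write(&cache, "# administrator command: ", 25))
    {
      std::fprintf(stderr, "write failed (IO_CACHE::error=%d)\n", cache.error);
      return 1;
    }
    return 0;
  }
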
Andrei Elkin
5841af19f7 merge bug14275000 fixes to 5.5 2012-07-05 16:14:47 +03:00
Andrei Elkin
e3c8fb4a1c Bug#14275000
The fixes for BUG11761686 left a flaw that managed to slip past
testing: only the effective filtering branch was actually tested,
with a regression test added to rpl_filter_tables_not_exist.
The reason for the failure is a premature destruction of
mem-root-allocated memory at the end of the deferred User-var's
do_apply_event().

Fixed by bypassing free_root() in the deferred execution branch.
Deallocation of items created in do_apply_event() is done by the
base code through THD::cleanup_after_query() -> free_items(), which
the parent Query cannot miss.


sql/log_event.cc:
  Do not call free_root() in the deferred User-var case.
  The necessary methods are added to the User-var class and
  do_apply_event() is refined.
sql/log_event.h:
  The necessary methods to avoid destroying mem-root-based memory
  when applying a User-var are defined.
2012-07-05 14:37:48 +03:00
Manish Kumar
db982ec87f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Upmerge from mysql-5.1 -> mysql-5.5
2012-06-12 12:59:56 +05:30
Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the
master Dump thread's max_allowed_packet.

The failure occurs because the event length, plus the
max_event_header length, exceeds the max_allowed_packet of the DUMP
thread. This causes the Dump thread to break replication and throw
an error.

That can happen e.g. with row-based replication in an Update_rows
event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading events is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using a new server option (slave_max_allowed_packet)
    that lets the DBA regulate the max_allowed_packet of the slave
    threads (IO/SQL) and facilitates the sending of large packets
    from the master to the slave.

    The large packets are thus received by the slave and applied
    successfully.

sql/log_event.cc:
  After the fix, the event length is checked against the new option
  slave_max_allowed_packet instead of max_allowed_packet.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increase the session max_allowed_packet to a large value,
  i.e. not taking the global max_allowed_packet into consideration,
  for the slave's threads.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit,
  which makes it independent; it now reads whatever gets
  logged in the binary log.
2012-06-12 12:59:13 +05:30
Rohit Kalhans
71c4f94dd5 upmerge from mysql-5.1 branch -> mysql-5.5 branch 2012-05-30 14:27:27 +05:30
Rohit Kalhans
96eb519eb7 Fixing the build failure on Windows debug build. 2012-05-30 13:54:15 +05:30
Manish Kumar
a9ca9403a7 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the
master Dump thread's max_allowed_packet.

The failure occurs because the event length, plus the
max_event_header length, exceeds the max_allowed_packet of the DUMP
thread. This causes the Dump thread to break replication and throw
an error.

That can happen e.g. with row-based replication in an Update_rows
event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading events is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using a new server option (slave_max_allowed_packet)
    that lets the DBA regulate the max_allowed_packet of the slave
    threads (IO/SQL) and facilitates the sending of large packets
    from the master to the slave.

    The large packets are thus received by the slave and applied
    successfully.

sql/log_event.cc:
  After the fix, the event length is checked against the new option
  slave_max_allowed_packet instead of max_allowed_packet.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increase the session max_allowed_packet to a large value,
  i.e. not taking the global max_allowed_packet into consideration,
  for the slave's threads.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit,
  which makes it independent; it now reads whatever gets
  logged in the binary log.
2012-05-30 10:10:52 +05:30
Rohit Kalhans
484a79415b upmerge from mysql-5.1 branch -> mysql-5.5 branch 2012-05-29 12:21:17 +05:30
Rohit Kalhans
d8b2d4a069 Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
Problem: mysqlbinlog exits without any error code in case of a
file write error. This is because calls to the Log_event::print()
method do not return a value, and thus any errors were being
ignored.

Resolution: We resolve this problem by checking for
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution (see the sketch below).

client/mysqlbinlog.cc:
  - handled error conditions during event->print() calls
  - added check for error in end_io_cache()
mysys/my_write.c:
  Added debug code to simulate a file write error;
  the error returned is ENOSPC (no space left on device).
sql/log_event.cc:
  Added debug code to simulate file write error, by reducing the size of io cache.
2012-05-29 12:11:30 +05:30
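
A sketch of the check-after-print pattern described in this commit
(illustrative C++; print_event is an invented stand-in for
Log_event::print()):

  #include <cstdio>

  struct io_cache { int error; };

  // Stand-in for Log_event::print(): returns nothing, so the only
  // failure signal is the cache's error flag.
  static void print_event(io_cache *out)
  {
    out->error= -1;   // simulate a file write failure
  }

  int main()
  {
    io_cache out= {0};
    print_event(&out);
    if (out.error == -1)   // the check the fix adds after each print()
    {
      std::fprintf(stderr, "mysqlbinlog: file write error\n");
      return 1;            // exit with a non-zero error code
    }
    return 0;
  }
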
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.

The failure occurs because the event length is more than
max_allowed_packet plus the max_event_header length. Since the event
length exceeds this size, the master Dump thread is unable to send
the packet on to the slave.

That can happen e.g. with row-based replication in an Update_rows
event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option, included to regulate
the max_allowed_packet of the slave threads (IO/SQL).
The large packets are thus received by the slave and applied
successfully.

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increase the session max_allowed_packet to a large value,
  i.e. not taking the global max_allowed_packet into consideration,
  for the slave's threads.
2012-05-21 12:57:39 +05:30
Rohit Kalhans
781137c0dd BUG#14005409 - 64624
Problem: After the fix for Bug#12589870, a new field that
stores the length of the db name was added to the buffer that
stores the query to be executed. Unlike for a plain user
session, replication execution did not allocate the
necessary chunk in the Query_log_event constructor. This caused
an invalid read while accessing this field.

Solution: We fix this problem by allocating the necessary chunk
in the buffer created in Query_log_event::Query_log_event()
and storing the length of the database name (see the sketch
below).

sql/log_event.cc:
  Added a new field in the buffer created in the
  Query_log_event constructor and stored the length
  of the database name.
2012-05-18 14:44:40 +05:30
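
A sketch of reserving the extra length field in the buffer (illustrative
C++; the layout is invented and much simpler than Query_log_event's real
one):

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  // The buffer that holds the query now also carries the db-name length;
  // the constructor must allocate that extra chunk before storing it.
  int main()
  {
    const char *db= "test";
    const char *query= "INSERT INTO t VALUES (1)";
    size_t db_len= std::strlen(db), q_len= std::strlen(query);
    // +1 for the length byte that the fix reserves
    char *buf= (char *) std::malloc(1 + db_len + q_len);
    if (!buf)
      return 1;
    buf[0]= (char) db_len;                      // store the db-name length
    std::memcpy(buf + 1, db, db_len);           // then the db name
    std::memcpy(buf + 1 + db_len, query, q_len);// then the query itself
    std::printf("db_len byte=%d\n", buf[0]);
    std::free(buf);
    return 0;
  }
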