1. The test now uses the fake_relay_log primitive
2. Added RESET SLAVE to include/setup_fake_relay_log.inc to remove the relay log info file
3. Added RESET SLAVE to include/cleanup_fake_relay_log.inc as well (sketched after this list)
4. Test moved to rpl suite as rpl_binlog_auto_inc_bug33029.test
5. Updated result file
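A minimal sketch of the pattern described in items 2 and 3, assuming the usual
mysqltest include-file layout (the real .inc files contain additional steps):

    # inside include/setup_fake_relay_log.inc and include/cleanup_fake_relay_log.inc
    RESET SLAVE;   # discards stale relay log info so the fake relay log is picked up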
--slave-load-tm
The MDL_SHARED lock was introduced for an object in 5.4, but the 'TABLE_LIST'
object was not initialized with the MDL_SHARED lock when applying an event
that loads data with LOAD DATA INFILE into a table. The failure was therefore
triggered when the MDL_SHARED lock was checked for the object.
To fix the problem, the 'TABLE_LIST' object is now initialized with the
MDL_SHARED lock when such an event is applied.
We found that there are some tests that are not cleaning
up properly:
1. rpl_tmp_table_and_DDL
2. rpl_do_grant
3. rpl_sync
For #1 and #2 we found that, in some cases, the slave would not
replicate all the instructions the master processed in the cleanup
section. We fix these by deploying some synchronization commands in
the test cases so that the slave processes all cleanup instructions
(see the sketch below).
As for #3, this is tracked as part of another bug
(BUG@50442).
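For #1 and #2, a minimal sketch of the synchronization pattern described
above, assuming the usual master/slave connections of the rpl suite (the
cleanup statement itself is only an illustration):

    connection master;
    DROP TABLE IF EXISTS t1;    # cleanup statement executed on the master
    sync_slave_with_master;     # wait until the slave has applied it
    connection master;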
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/suite/rpl/r/rpl_slow_query_log.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Conflict adding files to server-tools. Created directory.
Conflict because server-tools is not versioned, but has versioned children. Versioned directory.
Conflict adding files to server-tools/instance-manager. Created directory.
Conflict because server-tools/instance-manager is not versioned, but has versioned children. Versioned directory.
Contents conflict in server-tools/instance-manager/options.cc
Text conflict in sql/mysqld.cc
into slow log
While processing a statement, down the mysql_parse execution
stack, thd->enable_slow_log can be assigned the value of
opt_log_slow_admin_statements, depending on whether one is executing
administrative statements such as ALTER TABLE, OPTIMIZE or
ANALYZE. This can have an impact on slow logging for
statements that are executed after an administrative statement
execution is completed.
When executing statements directly from the user this is fine
because thd->enable_slow_log is reset right at the beginning
of the dispatch_command function, i.e., every time a new statement
is set to execute.
On the other hand, for the slave SQL thread (sql_thd) the story is a
bit different. When in SBR, the sql_thd applies statements by
calling mysql_parse. Right after, it calls the log_slow_statement
function to log them if they take too long. Calling mysql_parse
directly is fine, but it also means that the dispatch_command function
is bypassed. As a consequence, thd->enable_slow_log does not get
a chance to be reset before the next statement to be executed by
the sql_thd. If the statement just executed by the sql_thd was an
administrative statement and logging of admin statements was
disabled, this means that sql_thd->enable_slow_log will be set to
0 (disabled) from that moment on. End result: sql_thd stops
logging slow statements.
We fix this by resetting the value of sql_thd->enable_slow_log to
the value of opt_log_slow_slave_statements right after
log_slow_statement is called by the sql_thd.
To 5.x Release
Notes
=====
This is a backport of BUG#23300 into 5.1 GA.
Original cset revid (in betony):
luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h
Description
===========
When using replication, the slave will not write to the slow query
log any queries replicated from the master, even if the
option "--log-slow-slave-statements" is set and these take more
than "long_query_time" to execute.
In order to log slow queries executed by the replication SQL thread,
one needs to set --log-slow-slave-statements, so that the SQL thread
is initialized with the correct switch. Although setting this flag
correctly configures the slave thread option to log slow queries,
there is an issue with the condition that is used to check
whether to log the slow query or not. When replaying binlog
events the statement contains the SET TIMESTAMP clause which will
force the slow logging condition check to fail. Consequently, the
slow query logging will not take place.
This patch addresses this issue by removing the second condition
from log_slow_statement, as it prevents slow queries from being
logged and seems to be deprecated.
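As a hedged illustration of the slave-side setup this description assumes
(not part of the patch; option availability depends on the server version):

    # slave started with:
    #   --log-slow-slave-statements --slow-query-log=1
    SET GLOBAL long_query_time = 1;
    # replicated statements taking longer than long_query_time should now
    # appear in the slave's slow query log.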
fails in PB sporadically)
The IO thread can concurrently access the relay log IO_CACHE
while another thread is performing an FLUSH LOGS procedure.
FLUSH LOGS closes and reopens the relay log and while doing so it
(re)initializes its IO_CACHE. During this procedure the IO_CACHE
mutex is also reinitialized, which can cause problems if some
other thread (namely the IO THREAD) is concurrently accessing it
at the time.
This patch fixes the problem by extending the interface of the
flush_master_info function to also include a second parameter,
"need_relay_log_lock", stating whether the thread should grab the
relay log lock or not before actually flushing the relay log.
Also, the IO thread now calls flush_master_info with this flag set
when it flushes the master info within the read_event loop.
Finally, we also increase the loop time in the rpl_heartbeat_basic test
case, so that the number of calls to FLUSH LOGS doubles, stressing
this part of the code a little more.
Fix Bug#50555 "handler commands crash server in my_hash_first()"
as a post-merge fix (the new handler tests are not passing
otherwise).
- in hash.c, don't call calc_hash if ! my_hash_inited().
- add tests and results for the test case for Bug#50555
Original revision:
------------------------------------------------------------
revision-id: li-bing.song@sun.com-20100130124925-o6sfex42b6noyc6x
parent: joro@sun.com-20100129145427-0n79l9hnk0q43ajk
committer: <Li-Bing.Song@sun.com>
branch nick: mysql-5.1-bugteam
timestamp: Sat 2010-01-30 20:49:25 +0800
message:
Bug #48321 CURRENT_USER() incorrectly replicated for DROP/RENAME USER;
REVOKE/GRANT; ALTER EVENT.
The following statements support the CURRENT_USER() where a user is needed.
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
However, when these statements are binlogged, CURRENT_USER() is binlogged
simply as 'CURRENT_USER()'; it is not expanded to the real user name. When the
slave executes the log event, 'CURRENT_USER()' is expanded to the user of the
slave SQL thread, but the SQL thread's user name is always NULL. This breaks
replication.
After this patch, all of the above statements are rewritten when they are
binlogged: CURRENT_USER() is expanded to the real user's name and host.
------------------------------------------------------------
Linux x86_64 debug
Two test cases fail because the suppression for the unsafe
warning needs to be updated (BUG@39934 refactored this part and
these changes are only in mysql-next-mr - this is why we notice
them now when merging in next-mr). This is the case for
rpl_nondeterministic_functions and rpl_misc_functions test
cases. rpl_stm_binlog_direct test case is not needed in version >
5.1. The rpl_heartbeat_basic test case fails because the patch for
BUG@50397 removed the CHANGE MASTER in the slave that would set
its period to 1/10 of the master's. This would cause the test
assertion to fail.
The fixes for the issues described above are:
- rpl_misc_functions - updated suppression message
- rpl_nondeterministic_functions - updated suppression message
- rpl_stm_binlog_direct - removed the test case (it is not
needed in versions > 5.1)
- rpl_heartbeat_basic - deployed instruction:
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD=0.1;
Conflicts:
- mysql-test/r/mysqld--help-win.result
- sql/sys_vars.cc
Original revision (in next-mr-bugfixing):
------------------------------------------------------------
revno: 2971 [merge]
revision-id: alfranio.correia@sun.com-20100121210527-rbuheu5rnsmcakh1
committer: Alfranio Correia <alfranio.correia@sun.com>
branch nick: mysql-next-mr-bugfixing
timestamp: Thu 2010-01-21 21:05:27 +0000
message:
BUG#46364 MyISAM transbuffer problems (NTM problem)
It is well-known that due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional and
non-transactional tables.
In a nutshell, the current code-base tries to preserve causality among the
statements by writing non-transactional statements to the txn-cache which
is flushed upon commit. However, modifications done to non-transactional
tables on behalf of a transaction become immediately visible to other
connections but may not immediately get into the binary log and therefore
consistency may be broken.
In general, it is impossible to automatically detect causality/dependency
among statements by just analyzing the statements sent to the server. This
happens because dependencies may be hidden in the application code and it is
necessary to know a priori all the statements processed in the context of
a transaction such as in a procedure. Moreover, even for the few cases that
we could automatically address in the server, the computation effort
required could make the approach infeasible.
So, in this patch we introduce the option
- "--binlog-direct-non-transactional-updates", which can be used to bypass
the current behavior and write statements that change non-transactional
tables directly to the binary log.
It is also used to enable WL#2687, which is disabled by default.
------------------------------------------------------------
revno: 2970.1.1
revision-id: alfranio.correia@sun.com-20100121131034-183r4qdyld7an5a0
parent: alik@sun.com-20100121083914-r9rz2myto3tkdya0
committer: Alfranio Correia <alfranio.correia@sun.com>
branch nick: mysql-next-mr-bugfixing
timestamp: Thu 2010-01-21 13:10:34 +0000
message:
BUG#46364 MyISAM transbuffer problems (NTM problem)
It is well-known that due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional and
non-transactional tables.
In a nutshell, the current code-base tries to preserve causality among the
statements by writing non-transactional statements to the txn-cache which
is flushed upon commit. However, modifications done to non-transactional
tables on behalf of a transaction become immediately visible to other
connections but may not immediately get into the binary log and therefore
consistency may be broken.
In general, it is impossible to automatically detect causality/dependency
among statements by just analyzing the statements sent to the server. This
happens because dependencies may be hidden in the application code and it is
necessary to know a priori all the statements processed in the context of
a transaction such as in a procedure. Moreover, even for the few cases that
we could automatically address in the server, the computation effort
required could make the approach infeasible.
So, in this patch we introduce the option
- "--binlog-direct-non-transactional-updates", which can be used to bypass
the current behavior and write statements that change non-transactional
tables directly to the binary log.
It is also used to enable WL#2687, which is disabled by default.
Add a wait-for graph based deadlock detector to the
MDL subsystem.
Fixes bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock" and
bug #37346 "innodb does not detect deadlock between update and
alter table".
The first bug manifested itself as an unwarranted abort of a
transaction with ER_LOCK_DEADLOCK error by a concurrent ALTER
statement, when this transaction tried to repeat use of a
table which it had already used in a similar fashion before
the ALTER started.
The second bug showed up as a deadlock between table-level
locks and InnoDB row locks, which was "detected" only after
innodb_lock_wait_timeout timeout.
A transaction would start using the table and modify a few
rows.
Then ALTER TABLE would come in, and start copying rows
into a temporary table. Eventually it would stumble on
the modified records and get blocked on a row lock.
The first transaction would try to do more updates, and get
blocked on thr_lock.c lock.
This situation of circular wait would only get resolved
by a timeout.
Both these bugs stemmed from inadequate solutions to the
problem of deadlocks occurring between different
locking subsystems.
In the first case we tried to avoid deadlocks between metadata
locking and table-level locking subsystems, when upgrading shared
metadata lock to exclusive one.
Transactions holding the shared lock on the table and waiting for
some table-level lock used to be aborted too aggressively.
We also allowed ALTER TABLE to start in the presence of transactions
that modify the subject table. ALTER TABLE acquires a
TL_WRITE_ALLOW_READ lock at start, and that blocks all writes
against the table (naturally, we don't want any writes to be lost
when switching the old and the new table). The TL_WRITE_ALLOW_READ
lock, in turn, would block the already started transactions on the
thr_lock.c lock, should they do more updates. This, again, led to
the need to abort such transactions.
The second bug occurred simply because we didn't have any
mechanism to detect deadlocks between the table-level locks
in thr_lock.c and row-level locks in InnoDB, other than
innodb_lock_wait_timeout.
This patch solves both these problems by moving lock conflicts
which are causing these deadlocks into the metadata locking
subsystem, thus making it possible to avoid or detect such
deadlocks inside MDL.
To do this we introduce new type-of-operation-aware metadata
locks, which allow the MDL subsystem to know not only that a
transaction has used or is going to use some object, but also what
kind of operation it has carried out or is going to carry out on
the object.
This, along with the addition of a special kind of upgradable
metadata lock, allows ALTER TABLE to wait until all transactions
which have updated the table have gone away.
This solves the second issue.
Another special type of upgradable metadata lock is acquired
by LOCK TABLE WRITE. This second lock type allows us to solve the
first issue, since aborting table-level locks in the event of
DDL under LOCK TABLES also becomes unnecessary.
Below follows the list of incompatible changes introduced by
this patch:
- From now on, ALTER TABLE and CREATE/DROP TRIGGER SQL (i.e. those
statements that acquire TL_WRITE_ALLOW_READ lock)
wait for all transactions which have *updated* the table to
complete (illustrated after this list).
- From now on, LOCK TABLES ... WRITE, REPAIR/OPTIMIZE TABLE
(i.e. all statements which acquire TL_WRITE table-level lock) wait
for all transactions which *updated or read from* the table
to complete.
As a consequence, innodb_table_locks=0 option no longer applies
to LOCK TABLES ... WRITE.
- DROP DATABASE, DROP TABLE, RENAME TABLE no longer abort
statements or transactions which use tables being dropped or
renamed, and instead wait for these transactions to complete.
- Since LOCK TABLES WRITE now takes a special metadata lock,
not compatible with reads or writes against the subject table
and transaction-wide, thr_lock.c deadlock avoidance algorithm
that used to ensure absence of deadlocks between LOCK TABLES
WRITE and other statements is no longer sufficient, even for
MyISAM. The wait-for graph based deadlock detector of MDL
subsystem may sometimes be necessary and is involved. This may
lead to ER_LOCK_DEADLOCK error produced for multi-statement
transactions even if these only use MyISAM:
session 1:                        session 2:
begin;
update t1 ...
                                  lock table t2 write, t1 write;
                                  -- gets a lock on t2, blocks on t1
update t2 ...
(ER_LOCK_DEADLOCK)
- Finally, support of LOW_PRIORITY option for LOCK TABLES ... WRITE
was abandoned.
LOCK TABLE ... LOW_PRIORITY WRITE from now on has the same
priority as the usual LOCK TABLE ... WRITE.
SELECT HIGH_PRIORITY no longer trumps LOCK TABLE ... WRITE in
the wait queue.
- We do not take upgradable metadata locks on implicitly
locked tables. So if one has, say, a view v1 that uses
table t1, and issues:
LOCK TABLE v1 WRITE;
FLUSH TABLE t1; -- (or just 'FLUSH TABLES'),
an error is produced.
In order to be able to perform DDL on a table under LOCK TABLES,
the table must be locked explicitly in the LOCK TABLES list.
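As an illustration of the first incompatible change in the list above, in the
same two-session style as the earlier example (table and column names are
made up):

    session 1:                        session 2:
    begin;
    update t1 set a = a + 1;
                                      alter table t1 add column b int;
                                      -- waits for session 1 to complete,
                                      -- instead of aborting it
    commit;
                                      -- the ALTER TABLE now proceeds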
The root cause of the crash is that a TranxNode is freed before it is used.
A TranxNode is allocated and inserted into the active list each time
a log event is written and flushed into the binlog file.
The memory for TranxNode is allocated with thd_alloc and will be freed
at the end of the statement. The after_commit/after_rollback callback
was supposed to be called before the end of each statement and remove the node from
the active list. However, this assumption is not correct in all cases (e.g. calling
'CREATE TEMPORARY TABLE myisam_t SELECT * FROM innodb_t' in a transaction,
or deleting all temporary tables automatically when a session is closed),
which can cause the memory allocated for a TranxNode to be freed
before it is removed from the active list. The TranxNode pointer in the active
list then becomes a wild pointer and causes the crash.
After this patch, we have a class called TranxNodeAllocator which manages the
memory for allocating and freeing TranxNodes. It uses my_malloc to allocate memory.
REVOKE/GRANT; ALTER EVENT.
The following statements support the CURRENT_USER() where a user is needed.
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
However, when these statements are binlogged, CURRENT_USER() is binlogged
simply as 'CURRENT_USER()'; it is not expanded to the real user name. When the
slave executes the log event, 'CURRENT_USER()' is expanded to the user of the
slave SQL thread, but the SQL thread's user name is always NULL. This breaks
replication.
After this patch, all of the above statements are rewritten when they are
binlogged: CURRENT_USER() is expanded to the real user's name and host.
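As an illustration of the behavior after the patch (user, host and database
names are made up; the exact text written to the binary log may be formatted
differently):

    -- executed on the master while connected as 'bob'@'localhost':
    GRANT SELECT ON db1.* TO CURRENT_USER();
    -- is now written to the binary log with CURRENT_USER() expanded,
    -- roughly as:  GRANT SELECT ON db1.* TO 'bob'@'localhost'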
removed in MySQL 6.0
CREATE TABLE... TYPE= returns the warning "The syntax
'TYPE=storage_engine' is deprecated and will be removed in
MySQL 6.0. Please use 'ENGINE=storage_engine' instead"
This syntax has already been deprecated since version 5.4.4, so
the message has been changed.
In addition, the deprecation macro was changed to reflect
the ServerPT decision not to include version number in the
warning message.
A number of test result files have been changed as a
consequence of the change in the deprecation macro.
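For reference, an illustration of the deprecated syntax and its replacement
(table names are made up; applies to servers that still accept TYPE=):

    CREATE TABLE t1 (a INT) TYPE=MyISAM;    -- raises the deprecation warning
    SHOW WARNINGS;
    CREATE TABLE t2 (a INT) ENGINE=MyISAM;  -- the recommended form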
The 'rpl_get_master_version_and_clock' test verifies that the slave I/O
thread tries to reconnect to the master when it tries to get the values of
UNIX_TIMESTAMP and SERVER_ID from the master during a network disconnection.
The master server is therefore restarted to create the transient network
disconnection; during that period, COM_REGISTER_SLAVE failures are
written to the server log file when the slave I/O thread tries to
register on the master.
To fix the problem, the COM_REGISTER_SLAVE failures are suppressed in the
server log file with an mtr suppression, because they are expected.
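A hedged sketch of the kind of suppression described above (the exact pattern
used by the test may differ):

    # in rpl_get_master_version_and_clock.test, on the slave connection:
    call mtr.add_suppression("Slave I/O: Master command COM_REGISTER_SLAVE failed");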
In RBR, a DDL statement switches the binlog format to a non-row-based
format before it is binlogged, but the binlog format was not being
restored afterwards; manipulating a temporary table could then not
correctly reset the binlog format back to row-based, so the manipulating
statement was binlogged in statement-based format.
To fix the problem, the state of the binlog format is restored after the
DDL statement is binlogged.
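An illustrative sequence for the scenario described above (not the exact
statements of the bug's test case; table names are made up):

    SET SESSION binlog_format = ROW;
    CREATE TEMPORARY TABLE tmp1 (a INT);
    CREATE TABLE t1 (a INT);       -- DDL, written to the binlog in statement format
    INSERT INTO tmp1 VALUES (1);   -- before the fix, this temporary-table statement
                                   -- could end up binlogged statement-based instead
                                   -- of following the row-based rules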
cant find record
Some engines return data for the record. Despite the fact that
the null bit is set for some fields, their old value may still be in
the row. This can happen when unpacking an after image (AI) from the
binlog on top of a previous record in which a field that previously
contained a value is set to NULL. Ultimately, this may cause the
comparison of records to fail when the slave is doing an index or
range scan.
We fix this by deploying a call to reset() for each field that is
set to null while unpacking a row from the binary log.
Furthermore, we also add a mixed-mode test case to cover the
scenario where a field is updated and set to NULL through a
Query event and later searched through a rows event, which must
succeed.
Finally, we also change the reset() method of the Field_bit class
so that it takes into account bits stored among the null bits, and
not only the ones stored in the record.
Resetting the master before stopping the slave was generating the message
"[ERROR] Slave I/O: Got fatal error 1236 from master when reading data from
binary log: 'could not find next log', Error_code: 1236". In consequence,
the test case was failing because the message had not been suppressed.
To circumvent the failure, we rewrote the test to stop the slave before
resetting the master. We prefer this alternative rather than suppressing
the message.
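A minimal sketch of the reordered cleanup described above (connection names
follow the usual rpl-suite convention):

    connection slave;
    STOP SLAVE;
    connection master;
    RESET MASTER;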
It is well-known that due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional and
non-transactional tables.
In a nutshell, the current code-base tries to preserve causality among the
statements by writing non-transactional statements to the txn-cache which
is flushed upon commit. However, modifications done to non-transactional
tables on behalf of a transaction become immediately visible to other
connections but may not immediately get into the binary log and therefore
consistency may be broken.
In general, it is impossible to automatically detect causality/dependency
among statements by just analyzing the statements sent to the server. This
happens because dependencies may be hidden in the application code and it is
necessary to know a priori all the statements processed in the context of
a transaction such as in a procedure. Moreover, even for the few cases that
we could automatically address in the server, the computation effort
required could make the approach infeasible.
So, in this patch we introduce the option
- "--binlog-direct-non-transactional-updates", which can be used to bypass
the current behavior and write statements that change non-transactional
tables directly to the binary log.
It is also used to enable WL#2687, which is disabled by default.
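A hedged illustration of the option in use (table names are made up;
myisam_tbl is a non-transactional table, innodb_tbl a transactional one):

    SET @@session.binlog_direct_non_transactional_updates = ON;
    BEGIN;
    UPDATE myisam_tbl SET a = a + 1;   -- with the option ON, written directly
                                       -- to the binary log
    UPDATE innodb_tbl SET b = b + 1;   -- still goes through the transaction cache
    COMMIT;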
enabled binary
The test case injects an error in the server by deleting the
temporary file that it uses during the load data statement
execution. The error consisted of closing, deleting and setting
the file descriptor to -1 right before calling mysql_file_write.
Although this error injection seems to work fine in Unix-like
environments, on Windows it would cause the server to hit an
assertion in 'my_get_open_flags':
DBUG_ASSERT(fd >= MY_FILE_MIN && fd < (int)my_file_limit)
We fix this by changing the error injection to just call the
macro my_delete_allow_opened, instead of the close + delete + set
fd=-1. The macro deletes the file and is platform
independent. Additionally, this required some changes to how the
assertion is handled in the test case to make it cope with this
change.
It is well-known that due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional and
non-transactional tables in statement and mixed modes.
In a nutshell, the current code-base tries to preserve causality among the
statements by writing non-transactional statements to the txn-cache which
is flushed upon commit. However, modifications done to non-transactional
tables on behalf of a transaction become immediately visible to other
connections but may not immediately get into the binary log and therefore
consistency may be broken.
In general, it is impossible to automatically detect causality/dependency
among statements by just analyzing the statements sent to the server. This
happens because dependencies may be hidden in the application code and it is
necessary to know a priori all the statements processed in the context of
a transaction such as in a procedure. Moreover, even for the few cases that
we could automatically address in the server, the computation effort
required could make the approach infeasible.
So, in this patch we introduce the option
- "--binlog-direct-non-transactional-updates", which can be used to bypass
the current behavior and write statements that change non-transactional
tables directly to the binary log.
The test case was failing because it contained instructions
to close/reopen files while they were in use. This raises
problems on Windows. Example of such an instruction:
---exec echo "failure" > $MYSQLD_SLAVE_DATADIR/$file
The test also contains commands that are not platform
agnostic. Example:
--exec cat $MYSQLD_SLAVE_DATADIR/master.backup > \
$MYSQLD_SLAVE_DATADIR/master.info
We fix this by just truncating the necessary file and writing
"failure" into it (i.e., without closing the file). The
platform-specific instruction is removed from the test
case as it seems redundant.
A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement was causing 'CREATE
TEMPORARY TABLE ...' to be written to the binary log in row-based
mode (a.k.a. RBR) when there was a temporary table with the same name.
This was because the 'CREATE TABLE ... SELECT' statement was executed as an
'INSERT ... SELECT' into the temporary table. Since in RBR mode no
other statements related to temporary tables are written into the binary log,
this sometimes broke replication.
This patch changes the behavior of 'CREATE TABLE [IF NOT EXISTS] ... SELECT ...':
it now ignores the existence of a temporary table with the
same name as the table being created and is interpreted
as an attempt to create/insert into the base table. This makes the behavior of
'CREATE TABLE [IF NOT EXISTS] ... SELECT' consistent with
how ordinary 'CREATE TABLE' and 'CREATE TABLE ... LIKE' behave.
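An illustration of the new behavior (table names are made up):

    CREATE TABLE t1 (a INT);              -- base table
    CREATE TEMPORARY TABLE t1 (a INT);    -- temporary table shadowing it
    CREATE TABLE IF NOT EXISTS t1 (a INT)
      SELECT 2 AS a;                      -- now targets the *base* table t1,
                                          -- ignoring the temporary table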
variable
The User_var_log_event was not serializing the unsigned
flag. This would cause the slave to always assume signed values.
We fix this by extending the User_var_log_event to also contain
information on the unsigned_flag, meaning that it gets into the
binlog as well, so the slave will receive this information too.
Events without information on the unsigned flag (old events)
are treated as they were before (always signed: unsigned_flag=
FALSE).
The information on the unsigned_flag is shipped in an extra byte
appended to the end of the User_var_log_event by this
patch. This extra byte holds values for general-purpose
User_var_log_event flags, which are now packed in the binlog as
well. One of these flags indicates whether the
value is signed or unsigned (currently this extra byte is only
used to hold the unsigned flag; in the future we can use
it to pack extra flags if the need arises).
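An illustration of the kind of statement affected (the table name is made up
and assumed to have a BIGINT UNSIGNED column; the value is the largest
BIGINT UNSIGNED):

    SET @v := CAST(18446744073709551615 AS UNSIGNED);
    INSERT INTO t1 VALUES (@v);   -- @v is replicated via a User_var_log_event;
                                  -- with the extra flag byte the slave now knows
                                  -- that the value is unsigned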
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_log.result
Text conflict in mysql-test/t/mysqlbinlog.test
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_servers.cc
Text conflict in sql/sql_update.cc
Text conflict in support-files/mysql.spec.sh
BUG#49481: RBR: MyISAM and bit fields may cause slave to stop on delete:
cant find record
BUG#49482: RBR: Replication may break on deletes when MyISAM tables +
char field are used
When using MyISAM tables, despite the fact that the null bit is
set for some fields, their old value is still in the row. This
can cause the comparison of records to fail when the slave is
doing an index or range scan.
We fix this by avoiding memcmp for MyISAM tables when comparing
records. Additionally, when comparing field by field, we first
check if both fields are not null and if so, then we compare
them. If just one field is null we return failure immediately. If
both fields are null, we move on to the next field.
Problem: When RAND() is binlogged in statement mode, the seed is
binlogged too, so the replication slave generates the same
sequence of random numbers. This makes replication work in many
cases, but not in all cases: the order of rows is not guaranteed
for, e.g., UPDATE or INSERT...SELECT statements, so the row data
will be different if master and slave retrieve the rows in
different orders.
Fix: Mark RAND() as unsafe. It will generate a warning if
binlog_format=STATEMENT and switch to row logging if
binlog_format=MIXED.
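An illustration of the visible effect (table name is made up):

    SET SESSION binlog_format = STATEMENT;
    CREATE TABLE t1 (a DOUBLE);
    INSERT INTO t1 VALUES (RAND());   -- now raises the unsafe-statement warning
    SHOW WARNINGS;
    -- with binlog_format=MIXED the same statement is logged in row format instead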
- mysqld--help-win
Updated result so that it contains missing
value for slave-type-conversions
- rpl_idempotency
This seems to be a bad merge. In BUG#39934, the contents of
this file had been split into rpl_row_idempotency and
rpl_idempotency. The patch was pushed to 5.1-rep+3, which
was later merged into rep+2-delivery1, which in turn was
merged into 5.1-rpl-merge. Now, while merging next-mr into
5.1-rpl-merge, the file got back its old content (which
is in rpl_row_idempotency now because of BUG#39934). This
cset reverts the bad merge:
bzr merge -r revid:dao-gang.qu@sun.com-20100112120709-ioxp11yl9bvquaqd..\
before:revid:dao-gang.qu@sun.com-20100112120709-ioxp11yl9bvquaqd\
suite/rpl/t/rpl_idempotency.test
- sys_vars.all_vars:
Added test case for slave_type_conversions variable
- rpl_row_idempotency
Removed ER_SLAVE_AMBIGOUS_EXEC_MODE (which was removed by WL 4738)
from the test case. Using ER_WRONG_VALUE_FOR_VAR instead.
- mysqld--help-win
Added the missing help text for --slave-type-conversions to the
result file.
The test case did not start with fresh binlogs, so in some
cases, depending on the order in which MTR runs the tests, it would
try to show binlog contents from invalid positions (the binary log
would contain unexpected events from a previous test).
We fix this by deploying a RESET MASTER at the beginning of the
test case.
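A minimal sketch of the fix described above:

    # at the very beginning of the test case:
    RESET MASTER;
    # the binary log now starts at a known file and position, so later
    # SHOW BINLOG EVENTS calls see only this test's events.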
Manually deleting one or more entries from 'master-bin.index' will
cause the master to loop infinitely, re-sending one binlog file.
When starting a dump session, the master opens the index file and searches
for the binlog file which is being requested by the slave. The position of
the binlog file within the index file is recorded; it will be used to find
the next binlog file when the current binlog file has been dumped completely.
As only the position is used, the master may not get the correct file if
some entries have been removed manually from the index file:
if it cannot get the next binlog file's name from the index file, it will
reopen the current binlog file, which has already been dumped completely,
and dump it again. This is obviously a logical error.
Even though it is allowed to manually change the index file,
it is not recommended. So, after this patch, the master
sends a fatal error to the slave and closes the dump session if a new binlog
file has been generated and the master cannot find it in the index file.
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/extra/rpl_tests/rpl_loaddata.test
Text conflict in mysql-test/r/mysqlbinlog2.result
Text conflict in mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
Text conflict in mysql-test/suite/binlog/r/binlog_unsafe.result
Text conflict in mysql-test/suite/rpl/r/rpl_insert_id.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_auto_increment_bug33029.result
Text conflict in mysql-test/suite/rpl/r/rpl_udf.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Text conflict in sql/field.h
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysql_priv.h
Text conflict in sql/share/errmsg.txt
Text conflict in sql/sp.cc
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_base.cc
Text conflict in sql/sql_class.h
Text conflict in sql/sql_db.cc
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_insert.cc
Text conflict in sql/sql_lex.cc
Text conflict in sql/sql_lex.h
Text conflict in sql/sql_load.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_update.cc
Text conflict in sql/sql_view.cc
Conflict adding files to storage/innobase. Created directory.
Conflict because storage/innobase is not versioned, but has versioned children. Versioned directory.
Conflict adding file storage/innobase. Moved existing file to storage/innobase.moved.
Conflict adding files to storage/innobase/handler. Created directory.
Conflict because storage/innobase/handler is not versioned, but has versioned children. Versioned directory.
Contents conflict in storage/innobase/handler/ha_innodb.cc
For tables with metadata sizes ranging from 251 to 255 the size
of the event data (m_data_size) was being improperly calculated
in the Table_map_log_event constructor. This was due to the fact
that when writing the Table_map_log_event body (in
Table_map_log_event::write_data_body) a call to net_store_length
is made for packing the m_field_metadata_size. It happens that
net_store_length uses *one* byte for storing
m_field_metadata_size when it is smaller than 251 but *three*
bytes when it is 251 or greater. BUG 42749 had already
pinpointed and fixed this fact, but the fix was incomplete, as the
calculation in the Table_map_log_event constructor considers 255
instead of 251 as the threshold for incrementing m_data_size by
three. Hence, the window for having a mismatch between the
number of bytes written and the number of bytes accounted for in the
event length (m_data_size) was left open for
m_field_metadata_size values between 251 and 255.
We fix this by changing the condition in the Table_map_log_event
constructor to match the one in net_store_length, i.e.,
increment by one byte if m_field_metadata_size < 251 and by three
bytes otherwise.