TO INCONSISTENCY
PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovery we cannot access the table.
We cannot even drop the table, because the drop
algorithm requires the .par file to read the partition
information.
FIX
---
1. We move the deletion of the .par file to after all
the table data has been deleted from the storage engine.
2. If, during the drop operation, we detect that the
.par file is missing, we delete the .frm file, since
there is no way of recovering without the .par file.
[Approved by Mattias rb#2576]
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).
Also, enum/set changed from latin1 ci based hash calculation to
binary hashing between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.
Also, since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, listing
the row's partitioning fields. This avoids the asserts in InnoDB and
also alerts the user that there is a misplaced row. A detailed error
message will be given, including an entry in the error log with the
table name, partition and row content (the PK if one exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields
use binary hashing, ENUM/SET use charset hashing) and N = 2 uses the
same hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it
defaults to 2.
This new syntax should probably be ignored by NDB.
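For illustration, a table pinned to the 5.1 hashing could be created
like this (a minimal sketch; the table name, columns and partition
count are placeholders):
CREATE TABLE t1 (a INT, b DATE)  -- t1, a, b are placeholder names
PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 4;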
3) Since there is a demand for avoiding a scan through the full
table during upgrade, the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before. This allows manually
upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
If the MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
If default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.
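As a sketch, the enhanced checks and repairs can be invoked like
this (t1 and p0 are placeholder names):
CHECK TABLE t1 FOR UPGRADE;
CHECK TABLE t1 EXTENDED;
REPAIR TABLE t1;
ALTER TABLE t1 CHECK PARTITION p0;   -- per-partition variant
ALTER TABLE t1 REPAIR PARTITION ALL;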
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE and run the ALTER statement
instead of running REPAIR.
This allows mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without a rebuild.
Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
is not set, it is not possible to know whether it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N and, to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without having to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could instead use the backed-up .frm, change to a new N,
and run CHECK again to see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
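Putting the steps together, the whole sequence could look like this
(a sketch only; t, column a and the partition count stand in for the
table's actual partitioning clause):
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 4;
CHECK TABLE t;
REPAIR TABLE t;  -- only if CHECK reported misplaced rows
UNLOCK TABLES;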
The partitioning engine did not implement index_next for partitions
that returned HA_ERR_KEY_NOT_FOUND in index_read_map.
If HA_ERR_KEY_NOT_FOUND was returned by a partition during
index_read_map, that partition would not be included in following
calls to index_next. If no partition returned a row in index_read_map,
then the subsequent call to index_next would try to use a non-existing
handler (index out of bounds). Even after fixing the index out of
bounds, partitions that returned HA_ERR_KEY_NOT_FOUND were still
skipped if at least one partition did return a row.
So these are really two connected bugs:
1) crash due to index out of bounds (-1 unsigned).
2) not including partitions that returned HA_ERR_KEY_NOT_FOUND.
Fixed by recording the partitions that returned HA_ERR_KEY_NOT_FOUND
and including them too when doing handle_ordered_next the first time.
Additional patch to remove the part_id -> ref_buffer offset map.
The partition id and the associated record buffer can be found
without having to calculate them: by initializing the buffer for
each used partition and then reusing the key buffer from the queue,
no such map is needed.
The buffer for the current read row from each partition
(m_ordered_rec_buffer), used for sorted reads, was
allocated on open and freed when the ha_partition handler
was closed or destroyed.
For tables with many partitions and big records this could
take up too much valuable memory.
The solution is to only allocate the memory when it is needed
and free it when no longer needed, i.e. allocate it in
index_init and free it in index_end (and, to handle failures,
also free it on reset, close, etc.).
Also, only the memory actually needed is allocated, according
to partition pruning.
Manually tested that it does not use as much memory and
releases it after queries.
PARTITION STATISTICS
Problem was the fix for bug#11756867; it always used the first
partitions, and stopped after it had checked 10 [sub]partitions
(or until it found a partition which would contain a match).
This results in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (e.g. when the first 10
partitions contain only a few rows in total).
The solution is to take statistics from the partitions containing
the most rows instead:
Added an array of partition ids, sorted by number of records
in descending order.
This array is used in records_in_range to cover as many records as
possible in as few calls as possible.
Also changed the limit on how many partitions to use for the statistics
from a static maximum of 10 partitions to a dynamic model:
the maximum number of partitions is now log2(total number of
partitions), taken from the ordered array (so for a table with 64
partitions, at most 6 are consulted).
It will continue calling the partitions' records_in_range until it has
checked:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
Also reverted the changes to ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to before
the fix of bug#11756867, since they are not as slow as
records_in_range.
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with
the partitioning engine.
There is no good way to implement support for the QC because:
1) a static callback for ha_partition would need to have access
to all partition names and call the underlying callback for each
[sub]partition with the correct name.
2) pruning would be impossible, even if one used the ulonglong
engine_data, since if engine_data is changed, the table is
invalidated by the QC.
So the only viable solution to avoid incorrect data is to not allow
caching of queries that use partitioned tables.
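A sketch of the observable effect, assuming the query cache is
enabled and tp is a placeholder for a partitioned table:
SELECT * FROM tp;  -- tp is a placeholder name
SHOW STATUS LIKE 'Qcache_queries_in_cache';
-- the count does not grow: queries on partitioned tables are not cached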
(There are some extra changes due to the removal of \r as a line break.)
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update the .FRM. If this happened, InnoDB
and the server got out of sync with regard to which indexes
actually existed. Therefore the patch for Bug#11815600 disabled
concurrent reads again.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(): the former creates indexes without
making them visible, and the latter commits (i.e. makes visible)
the new indexes or reverts the changes.
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
Update for the previous patch according to the reviewers' comments.
Updated the constructors for ha_partition to use the common
init_handler_variables function.
Added use of defines for sizes and offsets to improve readability of
the code that reads and writes the .par file. Also refactored the
get_from_handler_file function.
When executing row-ordered-retrieval index merge,
the handler was cloned, but it used the wrong
memory root: instead of allocating memory
on the thread/query's mem_root, it used the table's
mem_root, resulting in memory in the table object
that was not released until the table was closed.
Solution was to ensure that memory used during the cloning
of a handler is allocated from the correct memory root.
This was implemented by changing handler::clone() to also
take a name argument, so it can be used with partitioning.
In ha_partition, only the ha_partition's own ref is allocated,
and then clone() is called on the original ha_partition's
partitions and the results are set as the cloned partitions.
Fix of .bzrignore on Windows with VS 2010
but the statement is written to binlog
TRUNCATE PARTITION was written to the binlog
even if it failed before calling any partition's
truncate function.
Solved by adding an argument to truncate_partition
that flags whether it should be written to the binlog or not.
It should be written to the binlog once a call to any
partition's truncate function has been made.
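For reference, the affected statement takes this form (t1 and p0 are
placeholder names):
ALTER TABLE t1 TRUNCATE PARTITION p0;
With the fix, the statement is only binlogged once some partition's
truncate function has actually been called.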
mysql-test/r/partition_binlog.result:
New result file
mysql-test/t/partition_binlog.test:
New test file, including DROP PARTITION binlog test
sql/ha_partition.cc:
Added an argument to avoid binlogging a failed truncate_partition
that has not yet changed any data.
sql/ha_partition.h:
Added argument to avoid excessive binlogging
sql/sql_partition_admin.cc:
Avoid binlogging TRUNCATE PARTITION if it fails before
any partition has tried to truncate.
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows no longer
reflects the actual number of rows in any case.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key
checks with SET foreign_key_checks=0 before the truncate.
Alternatively, if foreign key checks are necessary, please use a
DELETE statement without a WHERE condition.
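A sketch of that workaround (parent is a placeholder for the table
referenced by a foreign key):
SET foreign_key_checks = 0;  -- disable FK checks for this session
TRUNCATE TABLE parent;
SET foreign_key_checks = 1;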
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated in the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked, and an error is thrown, if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
mysql-test/suite/innodb/t/innodb-truncate.test:
Add test cases for truncate and foreign key checks.
Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
FK is not necessary, test is related to auto-increment.
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
Use delete instead of truncate, test is used to check
the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/t/mdl_sync.test:
Modify test case to reflect and ensure that truncate takes
an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
sql/ha_partition.cc:
Reorganize the various truncate methods. delete_all_rows is now
passed directly to the underlying engines, as is truncate. The
code responsible for truncating individual partitions is moved
to ha_partition::truncate_partition, which is invoked when an
ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.
Since the partition truncate can no longer be invoked via
delete, the bitmap operations are no longer necessary. The
explicit reset of the auto-increment value is also removed,
as the underlying engines are now responsible for resetting
the value.
sql/handler.cc:
Wire up the handler truncate method.
sql/handler.h:
Introduce and document the truncate handler method. It assumes
certain use cases of delete_all_rows.
Add method to retrieve the list of foreign keys referencing a
table. Method is used to avoid truncating tables that are
parent in a foreign key relationship.
sql/share/errmsg-utf8.txt:
Add error message for truncate and FK.
sql/sql_lex.h:
Introduce a flag so that the partition engine can detect when
a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
Change the truncate table implementation to use the new truncate
handler method and to not rely on row-by-row delete anymore.
The truncate handler method is always invoked with an exclusive
metadata lock. Also, it is no longer possible to truncate a
table that is parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc:
Rename method as the description indicates that in the future
this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
Implement truncate as no operation for the blackhole engine in
order to remain compatible with older releases.
storage/federated/ha_federated.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/heap/ha_heap.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/innobase/handler/ha_innodb.cc:
Rename delete_all_rows to truncate. InnoDB now does truncate
under an exclusive metadata lock.
Introduce and reorganize the methods used to retrieve the list
of foreign keys referenced by or referencing a table.
storage/myisammrg/ha_myisammrg.cc:
Introduce truncate method that invokes delete_all_rows.
This is required in order to remain compatible with earlier
releases where truncate would resort to a row-by-row delete.
LOAD DATA into partitioned MyISAM table
Problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto-increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment handling.
Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
Problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached
but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.
Solution was to also cache the latter call and forward it when moving
to a new partition to scan.
mysql-test/r/partition.result:
test result
mysql-test/t/partition.test:
New test from bug report.
sql/ha_partition.cc:
cache the HA_EXTRA_PREPARE_FOR_UPDATE just like HA_EXTRA_CACHE.
sql/ha_partition.h:
Added cache flag for HA_EXTRA_PREPARE_FOR_UPDATE
In bug-28430, HA_PRIMARY_KEY_REQUIRED_FOR_POSITION
was disabled in the partitioning engine by the first patch.
That bug was later fixed a second time, but the flag
was not removed.
There is no need to disable this flag, as doing so leads to bad
choices in row-based replication.
sql/ha_partition.h:
No longer disabling the HA_PRIMARY_KEY_REQUIRED_FOR_POSITION flag.
Updated the comment (it has nothing to do with hidden keys).
sql/handler.h:
Updated the comments about HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
The handler function for reading one row from a specific index
was not optimized in the partitioning handler, since it
used the default implementation.
No test case, since this is performance only; verified by hand.
sql/ha_partition.cc:
Implemented an optimized version of index_read_idx_map
for the case where the find flag == HA_READ_KEY_EXACT,
which is the common case.
sql/ha_partition.h:
Declared ha_partition::index_read_idx_map
This patch:
- Moves all definitions from the mysql_priv.h file into
header files for the component where the variable is
defined
- Creates header files if the component lacks one
- Eliminates all include directives from mysql_priv.h
- Eliminates all circular include cycles
- Rename time.cc to sql_time.cc
- Rename mysql_priv.h to sql_priv.h
into partitioned MyISAM table
Problem was that the ha_data structure was introduced in 5.1
and used only for partitioning at first, but with the intention
of being of use for other engines as well; when used by another
engine, it would clash if the table was also partitioned.
Solution is to move the partitioning-specific data to a separate
structure, with its own mutex (which is used for auto_increment).
Also renamed PARTITION_INFO to PARTITION_STATS, since there
already exists a class named partition_info, and cleaned up
some related variables.
mysql-test/r/partition_binlog_stmt.result:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
New result file
mysql-test/t/partition_binlog_stmt.test:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
New test file
sql/ha_ndbcluster.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
sql/ha_ndbcluster.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
sql/ha_partition.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
Removed some dead code.
sql/ha_partition.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed some dead code.
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
sql/handler.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
sql/handler.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
sql/mysql_priv.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
Added key_PARTITION_LOCK_auto_inc for instrumentation.
sql/mysqld.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
Added key_PARTITION_LOCK_auto_inc for instrumentation.
sql/partition_info.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed part_state* since it was not in use.
sql/sql_partition.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed part_state* since it was not in use.
sql/sql_partition.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Cleaned up old commented out code.
Removed part_state* since it was not in use.
sql/sql_show.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Rename of PARTITION_INFO to PARTITION_STATS to better
match the use (and there is also a class named
partition_info...)
Renamed partition_info to partition_info_str, since
partition_info is the name of a class.
sql/sql_table.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Renamed partition_info to partition_info_str, since
partition_info is the name of a class.
sql/table.cc:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
Renamed partition_info to partition_info_str, since
partition_info is the name of a class.
Removed part_state* since it was not in use.
sql/table.h:
Bug#51851: Server with SBR locks mutex twice on LOAD DATA
into partitioned MyISAM table
Removed the partitioning engine's use of ha_data in
TABLE_SHARE and added ha_part_data instead, since
they collide if used at the same time.
Renamed partition_info to partition_info_str, since
partition_info is the name of a class.
Removed part_state* since it was not in use.
mysql-test/t/disabled.def:
Restore disabled ssl tests: SSL certificates were updated.
Disable sp_sync.test, the test case can't work in next-4284.
mysql-test/t/partition_innodb.test:
Disable parsing of the test case for Bug#47343,
the test cannot work in next-4284.
mysql-test/t/ps_ddl.test:
Update results (CREATE TABLE IF NOT EXISTS takes
into account existence of the temporary table).
-------------------------------------------------------------------
revno: 2630.6.6
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-3726
timestamp: Tue 2008-05-27 16:15:44 +0400
message:
Implement code review fixes for WL#3726 "DDL locking for all
metadata objects": clean up the code by removing share->mutex
acquisitions, which are now obsolete.
sql/ha_partition.cc:
Rename share->mutex to share->LOCK_ha_data. The mutex never
protected the entire share. The right way to protect a share
is to acquire an MDL lock.
sql/ha_partition.h:
Rename share->mutex to share->LOCK_ha_data.
sql/sql_base.cc:
Remove LOCK_table_share. Do not acquire share->mutex when
deleting a table share or incrementing its ref_count.
All these operations are and must continue to be protected by LOCK_open
and respective MDL locks.
sql/sql_view.cc:
Remove acquisition of share->mutex when incrementing share->ref_count.
sql/table.cc:
Simplify release_table_share() by removing dead code related
to share->mutex from it.
sql/table.h:
Rename TABLE_SHARE::mutex to TABLE_SHARE::LOCK_ha_data
to better reflect its purpose.
storage/myisam/ha_myisam.cc:
Rename share->mutex to share->LOCK_ha_data.
Problem was that ha_partition::records_in_range called
records_in_range for all non-pruned partitions, even if
only an estimate was needed.
Solution is to only use 1/3 of the partitions (up to 10) for
records_in_range and estimate the total from this subset.
(And to continue until a non-zero return value is given by a called
partition's records_in_range, since 0 means no rows
will match.)
sql/ha_partition.cc:
Bug#48846: Too much time spent in ha_partition::records_in_range if not able to prune
estimate_rows_upper_bound and records_in_range are very similar
(the only difference is the function, and its parameters, to use),
so I created a common function for them.
Since these calls from the optimizer are only estimates, it is
not necessary to call them for every partition; a much smaller
subset of the used partitions can be used instead,
which improves performance for selects.
sql/ha_partition.h:
Bug#48846: Too much time spent in ha_partition::records_in_range if not able to prune
Added two private functions to help some
optimizer calls.
Conflicts
=========
Text conflict in .bzr-mysql/default.conf
Text conflict in libmysqld/CMakeLists.txt
Text conflict in libmysqld/Makefile.am
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/extra/rpl_tests/rpl_row_sp006.test
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
Text conflict in mysql-test/suite/rpl/r/rpl_row_create_table.result
Text conflict in mysql-test/suite/rpl/r/rpl_row_sp006_InnoDB.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_log.result
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_circular_simplex.result
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_sp006.result
Text conflict in mysql-test/t/mysqlbinlog.test
Text conflict in sql/CMakeLists.txt
Text conflict in sql/Makefile.am
Text conflict in sql/log_event_old.cc
Text conflict in sql/rpl_rli.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_binlog.cc
Text conflict in sql/sql_lex.h
21 conflicts encountered.
NOTE
====
mysql-5.1-rpl-merge has been made a mirror of mysql-next-mr:
- "mysql-5.1-rpl-merge$ bzr pull ../mysql-next-mr"
This is the first cset (merge/...) committed after pulling
from mysql-next-mr.