mariadb/sql/rpl_utility.h


/* Copyright (C) 2006 MySQL AB
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */
#ifndef RPL_UTILITY_H
#define RPL_UTILITY_H
#ifndef __cplusplus
#error "Don't include this C++ header file from a non-C++ file!"
#endif
#include "sql_priv.h"
#include "m_string.h" /* bzero, memcpy */
#ifdef MYSQL_SERVER
#include "table.h" /* TABLE_LIST */
#endif
#include "mysql_com.h"
class Relay_log_info;
/**
A table definition from the master.
The responsibilities of this class are:
- Extract and decode table definition data from the table map event
- Check if table definition in table map is compatible with table
definition on slave
*/
class table_def
{
public:
/**
Constructor.
@param types Array of types, each stored as a byte
@param size Number of elements in array 'types'
@param field_metadata Array of extra information about fields
@param metadata_size Size of the field_metadata array
@param null_bitmap The bitmap of fields that can be null
@param flags Table flags
*/
table_def(unsigned char *types, ulong size, uchar *field_metadata,
int metadata_size, uchar *null_bitmap, uint16 flags);
~table_def();
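/*
Construction sketch (hedged): on the slave, a table_def is normally
built from the column arrays carried by a table map event. The event
member names below (m_coltype, m_colcnt, ...) are assumptions about
that event class, not declared in this header:

  table_def def(ev->m_coltype, ev->m_colcnt, ev->m_field_metadata,
                ev->m_field_metadata_size, ev->m_null_bits, ev->m_flags);
*/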
/**
Return the number of fields there is type data for.
@return The number of fields that there is type data for.
*/
ulong size() const { return m_size; }
/**
Return a representation of the type data for one field.
@param index Field index to return data for
@return Will return a representation of the type data for field
<code>index</code>. Currently, only the type identifier is
returned.
*/
enum_field_types type(ulong index) const
{
DBUG_ASSERT(index < m_size);
/*
If the source type is MYSQL_TYPE_STRING, it can in reality be
either MYSQL_TYPE_STRING, MYSQL_TYPE_ENUM, or MYSQL_TYPE_SET, so
we might need to modify the type to get the real type.
*/
enum_field_types source_type= static_cast<enum_field_types>(m_type[index]);
uint16 source_metadata= m_field_metadata[index];
switch (source_type)
{
case MYSQL_TYPE_STRING:
{
int real_type= source_metadata >> 8;
if (real_type == MYSQL_TYPE_ENUM || real_type == MYSQL_TYPE_SET)
source_type= static_cast<enum_field_types>(real_type);
break;
}
/*
This type has not been used since before row-based replication,
so we can safely assume that it really is MYSQL_TYPE_NEWDATE.
*/
case MYSQL_TYPE_DATE:
source_type= MYSQL_TYPE_NEWDATE;
break;
default:
/* Do nothing */
break;
}
return source_type;
}
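/*
Worked example of the rule above: an ENUM column arrives in the table
map as MYSQL_TYPE_STRING with its real type in the high byte of the
metadata, so type() transparently reports the real type ('def' is a
hypothetical table_def instance):

  if (def.type(i) == MYSQL_TYPE_ENUM)
    ... // column i is really an ENUM although m_type[i] says STRING
*/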
/*
Return the extra field metadata from the table map for a given
field. If the column type stores field metadata, the value for the
column at position 'index' is returned; otherwise, or if there is
no extra metadata at all, zero (0) is returned. This method is used
in the unpack() methods of the corresponding fields to properly
extract the data from the binary log in the event that the master's
field is smaller than the slave's.
*/
uint16 field_metadata(uint index) const
{
DBUG_ASSERT(index < m_size);
if (m_field_metadata_size)
return m_field_metadata[index];
else
return 0;
}
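/*
Intended-use sketch (the Field::unpack() call shown is an assumption
about the field classes, not declared here): the metadata travels from
the table map into the unpack routine so that a value packed with the
master's width is read correctly on the slave:

  field->unpack(field->ptr, from, def.field_metadata(i));
*/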
/*
This function returns whether the field on the master can be null.
This value is derived from field->maybe_null().
*/
my_bool maybe_null(uint index) const
{
DBUG_ASSERT(index < m_size);
return ((m_null_bits[(index / 8)] &
(1 << (index % 8))) == (1 << (index % 8)));
}
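/*
Worked example: if m_null_bits[0] == 0x05 (binary 00000101), then
maybe_null(0) and maybe_null(2) return TRUE while maybe_null(1)
returns FALSE, i.e. columns 0 and 2 are nullable on the master.
*/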
/*
This function returns the field size in raw bytes based on the type
and the encoded field data from the master's raw data. This method can
be used for situations where the slave needs to skip a column (e.g.,
WL#3915) or needs to advance the pointer for the fields in the raw
data from the master to a specific column.
*/
uint32 calc_field_size(uint col, uchar *master_data) const;
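/*
Skipping sketch (hedged): assuming 'row' points at packed row data in
column order and none of the skipped columns is NULL (NULL columns
occupy no bytes in the row image), the first 'n' columns can be
stepped over without unpacking them:

  const uchar *ptr= row;
  for (uint col= 0; col < n; col++)
    ptr+= def.calc_field_size(col, const_cast<uchar*>(ptr));
*/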
/**
Decide if the table definition is compatible with a table.
A table definition is compatible with a table if:
- The column types of the table definition are a (not
necessarily proper) prefix of the column types of the table,
or the other way around.
- Each column on the master that also exists on the slave can be
converted according to the current settings of @c
SLAVE_TYPE_CONVERSIONS.
@param thd Thread context.
@param rli Pointer to relay log info.
@param table Pointer to table to compare with.
@param[out] conv_table_var Pointer to temporary table for holding
conversion table.
@retval 1 if the table definition is not compatible with @c table
@retval 0 if the table definition is compatible with @c table
*/
#ifndef MYSQL_CLIENT
bool compatible_with(THD *thd, Relay_log_info *rli, TABLE *table,
TABLE **conv_table_var) const;
/**
Create a virtual in-memory temporary table structure.
The table structure has records and a field array so that a row
can be unpacked into the record for further processing.
In the virtual table, each field that requires conversion will
have a non-NULL value, while fields that do not require
conversion will have a NULL value.
Some information that is missing from the events, such as the
character set for string types, is taken from the table that the
field is going to be pushed into; hence the target table that the
data eventually needs to be pushed into has to be supplied.
@param thd Thread to allocate memory from.
@param rli Relay log info structure, for error reporting.
@param target_table Target table for fields.
@return A pointer to a temporary table with memory allocated in the
thread's mem_root, NULL if the table could not be created
*/
TABLE *create_conversion_table(THD *thd, Relay_log_info *rli, TABLE *target_table) const;
#endif
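/*
Usage sketch for the two methods above (the surrounding code is
hypothetical; the real call site lives in the rows-event code):

  TABLE *conv_table= NULL;
  if (def.compatible_with(thd, rli, table, &conv_table))
    ... // definitions are incompatible: stop applying the event
  // On success, conv_table is non-NULL only if some column needs
  // conversion; it is then built via create_conversion_table().
*/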
private:
ulong m_size; // Number of elements in the types array
unsigned char *m_type; // Array of type descriptors
uint m_field_metadata_size;
uint16 *m_field_metadata;
uchar *m_null_bits;
uint16 m_flags; // Table flags
uchar *m_memory;
};
#ifndef MYSQL_CLIENT
/**
Extend the normal table list with a few new fields needed by the
slave thread, but nowhere else.
*/
struct RPL_TABLE_LIST
: public TABLE_LIST
{
bool m_tabledef_valid;    // TRUE if m_tabledef below has been filled in
table_def m_tabledef;     // Table definition from the table map event
TABLE *m_conv_table;      // Virtual conversion table, or NULL
};
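/*
Illustrative sketch (assumed flow, based on the fields above): when the
slave sees a table map event it fills in one of these entries so that
later row events can check compatibility:

  RPL_TABLE_LIST *tptr= ...; // allocated by the slave thread
  tptr->m_tabledef_valid= TRUE;  // m_tabledef has been initialized
  tptr->m_conv_table= NULL;      // set later by compatible_with() if needed
*/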
/* Anonymous namespace for template functions/classes */
namespace {
/*
Smart pointer that will automatically call my_afree (a macro) when
the pointer goes out of scope. This is used so that I do not have
to remember to call my_afree() before each return. There is no
overhead associated with this, since all functions are inline.
I (Matz) would prefer to use the free function as a template
parameter, but that is not possible when the "function" is a
macro.
*/
template <class Obj>
class auto_afree_ptr
{
Obj* m_ptr;
public:
auto_afree_ptr(Obj* ptr) : m_ptr(ptr) { }
~auto_afree_ptr() { if (m_ptr) my_afree(m_ptr); }
void assign(Obj* ptr) {
/* Only to be called if it hasn't been given a value before. */
DBUG_ASSERT(m_ptr == NULL);
m_ptr= ptr;
}
Obj* get() { return m_ptr; }
};
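/*
Typical usage sketch: pair a my_alloca() allocation with a guard object
so that my_afree() runs on every return path ('bufsize' is illustrative):

  uchar *buf= static_cast<uchar*>(my_alloca(bufsize));
  auto_afree_ptr<uchar> buf_guard(buf); // freed when buf_guard leaves scope
*/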
}
#endif
// NB. number of printed bit values is limited to sizeof(buf) - 1
#define DBUG_PRINT_BITSET(N,FRM,BS) \
do { \
char buf[256]; \
uint i; \
for (i = 0 ; i < min(sizeof(buf) - 1, (BS)->n_bits) ; i++) \
buf[i] = bitmap_is_set((BS), i) ? '1' : '0'; \
buf[i] = '\0'; \
DBUG_PRINT((N), ((FRM), buf)); \
} while (0)
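/*
Example (debug builds; sketch): print which bits are set in a table's
read set, e.g. "read_set: 1010" for a four-column bitmap:

  DBUG_PRINT_BITSET("info", "read_set: %s", table->read_set);
*/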
#endif /* RPL_UTILITY_H */