Rename method so as not to hide a base class method.
Reorder attribute initialization.
Remove unused variable.
Rework code to silence a warning due to an assignment used as a truth value.
Can't find record.
Some engines return data for the record. Despite the fact that
the null bit is set for some fields, their old value may still be in
the row. This can happen when unpacking an after-image (AI) from the
binlog on top of a previous record in which a field that previously
contained a value is set to NULL. Ultimately, this may cause the
comparison of records to fail when the slave is doing an index or
range scan.
We fix this by adding a call to reset() for each field that is
set to NULL while unpacking a row from the binary log.
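A minimal sketch of the idea (the real loop lives in unpack_row(); the
null-bitmap bookkeeping and the unpack() arguments are simplified here):

  // For each field whose null bit is set in the row image, also wipe
  // the stale value bytes left in the record buffer, so that later
  // record comparisons (index/range scans on the slave) do not see
  // leftovers from a previous row.
  if (null_bit_is_set)    // this field's bit in the event's null bitmap
  {
    field->set_null();
    field->reset();       // clear old value bytes in the record
  }
  else
  {
    field->set_notnull();
    pack_ptr= field->unpack(field->ptr, pack_ptr);
  }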
Furthermore, we also add a mixed-mode test case to cover the
scenario where a field is updated and set to NULL through a
Query event and a later search for it through a rows event
succeeds.
Finally, we also change the reset() method of the Field_bit class
so that it takes into account bits stored among the null bits and
not only the ones stored in the record.
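For illustration, a sketch of the adjusted reset() (assuming the
5.1-era Field_bit members bit_ptr/bit_ofs/bit_len/bytes_in_rec and the
clr_rec_bits() helper; simplified):

  int Field_bit::reset(void)
  {
    bzero(ptr, bytes_in_rec);      // bits stored in the record
    if (bit_ptr && bit_len > 0)    // bits stored among the null bits
      clr_rec_bits(bit_ptr, bit_ofs, bit_len);
    return 0;
  }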
Backporting BUG#43789 to mysql-5.1-bugteam
Replication was generating corrupted data, warning messages on Valgrind
and aborts in debug mode while replicating a "null" to "not null" field.
Specifically, the unpack_row routine considered the slave's table
definition and tried to retrieve a field value where there was nothing to
be retrieved, ignoring the fact that the value was defined as "null" by
the master.
To fix the problem, we proceed as follows:
1 - If it is not STRICT sql_mode, implicit default values are used,
regardless of whether it is a multi-row or single-row statement.
2 - However, if it is STRICT mode, then we do the following:
2.1 If it is a transactional engine, we do a rollback on the first NULL
that is to be set into a NOT NULL column and return an error.
2.2 If it is a non-transactional engine and it is the first row to be
inserted with multi-row, we also return the error. Otherwise, we proceed
with the execution, use implicit default values and print out warning
messages.
Unfortunately, the current patch cannot mimic the behavior shown by the
master for updates on multi-tables and multi-row inserts. This happens
because such statements are unfolded into different row events. For
instance, consider the following updates under strict mode:
(master)
create table t1 (a int);
create table t2 (a int not null);
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
t1 would have (10) and t2 would have (0) as this would be handled as a
multi-row update. On the other hand, if we had the following updates:
(master)
create table t1 (a int);
create table t2 (a int);
(slave)
create table t1 (a int);
create table t2 (a int not null);
(master)
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
On the master t1 would have (10) and t2 would have (NULL). On
the slave, t1 would have (10) but the update on t2 would fail.
Backporting BUG#38173 to mysql-5.1-bugteam
The cause of the bug was slave behaviour incompatible with the
master side. An INSERT query on the master is allowed to insert into a
table without specifying values for DEFAULT-less fields if sql_mode is
not strict.
Fixed by having the SQL thread check sql_mode to decide how to react.
A non-strict sql_mode should allow the Write_rows event to complete.
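In sketch form, the decision looks roughly like this (hedged: the flags
MODE_STRICT_TRANS_TABLES/MODE_STRICT_ALL_TABLES and the error code
ER_NO_DEFAULT_FOR_FIELD are real, push_warning_like() is a stand-in for
the actual warning call):

  // Slave SQL thread: a field has no default and the row event
  // does not supply a value for it.
  if (thd->variables.sql_mode &
      (MODE_STRICT_TRANS_TABLES | MODE_STRICT_ALL_TABLES))
    return ER_NO_DEFAULT_FOR_FIELD;  // strict: fail as the master would
  // non-strict: keep the implicit default and continue applying
  push_warning_like(thd, ER_NO_DEFAULT_FOR_FIELD);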
Todo: warnings can be shown via SHOW SLAVE STATUS; still, how to show
warnings for the slave threads is a separate, rather general issue.
When using mixed mode, the record values stored inside the storage
engine differed from the ones computed from the row event. This
happened because the prepare_record function was calling the
empty_record macro, causing some don't-care bits to be left set.
Replacing empty_record plus explicitly setting defaults with
restore_record, which restores the record's default values, fixes this.
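In sketch form (restore_record is the existing macro for copying a saved
record image, and s->default_values is the table's default-values image):

  // Before: empty_record(table) zeroed record[0] but left some
  // don't-care filler bits set, and defaults were filled in by hand.
  // After: copy the default-values image, which has every bit,
  // including the filler bits, in a well-defined state.
  restore_record(table, s->default_values);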
The error message issued for a missing default value of an extra field
was not as informative as it should be.
Fixed by improving the scheme of gathering, propagating and reporting
errors in applying rows events.
The scheme is as follows.
Any error incident in processing a row event is to be registered with
my_error().
In the end Rows_log_event::do_apply_event() invokes rli->report() with the
message to display consisting of all the errors.
This mimics `show warnings' displaying.
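A self-contained toy of the gathering scheme (illustrative only; in the
server, my_error() feeds the list and rli->report() emits the combined
message):

  #include <string>
  #include <vector>

  struct RowEventErrors
  {
    std::vector<std::string> errors;  // filled as my_error() would fire
    void add(const std::string &msg) { errors.push_back(msg); }
    std::string report() const        // one message for rli->report()
    {
      std::string out;
      for (const std::string &e : errors)
        out+= e + "; ";
      return out;
    }
  };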
A simple test checks three errors in processing an event.
Two hunks - a user-level error and pushing it into the list -
have been devoted to the already fixed BUG#31702.
Some open issues relating to this artifact are listed on the BUG#21842
page and on WL#3679.
Todo: synchronize with the WL#3228 spec the statement in the tests'
comments that Update and Delete events may not stop when an extra field
does not have a default.
Storage engines do not set the unused null bits (i.e., the
filler bits and the X bit) correctly. Also adding some casts
to debug printouts to eliminate compiler warnings.
Refactoring code to add parameter to pack() and unpack() functions with
purpose of indicating if data should be packed in little-endian or
native order. Using new functions to always pack data for binary log
in little-endian order. The purpose of this refactoring is to allow
proper implementation of endian-agnostic pack() and unpack() functions.
Eliminating several versions of the virtual pack() and unpack() functions
in favor of one single virtual function which is overridden in
subclasses.
Implementing pack() and unpack() functions for some field types that
packed data in native format regardless of the value of the
st_table_share::db_low_byte_first flag.
The field types that were packed in native format regardless are:
Field_real, Field_decimal, Field_tiny, Field_short, Field_medium,
Field_long, Field_longlong, and Field_blob.
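As an illustration of the new parameter, a sketch in the style of the 5.1
code (sint4korr/int4store read and write little-endian, while
longget/longstore use native order; the real signature and the read-side
check of st_table_share::db_low_byte_first are simplified away):

  uchar *Field_long::pack(uchar *to, const uchar *from,
                          bool low_byte_first)
  {
    int32 val;
    longget(val, from);      // read in the engine's native order
    if (low_byte_first)
      int4store(to, val);    // binlog: always little-endian
    else
      longstore(to, val);    // e.g. ORDER BY: native order
    return to + sizeof(val);
  }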
Before the patch, row-based logging wrote the rows incorrectly on
big-endian machines where the storage engine defined its own
low_byte_first() to be FALSE on big-endian machines (the default
is TRUE), while little-endian machines wrote the fields in correct
order. The only known storage engine that does this is NDB. In effect,
this means that row-based replication from or to a big-endian
machine where the table was using NDB as storage engine failed if the
other engine was either non-NDB or on a little-endian machine.
With this patch, row-based logging is now always done in little-endian
order, while ORDER BY uses the native order if the storage engine
defines low_byte_first() to return FALSE for big-endian machines.
In addition, the max_data_length() function available in Field_blob
was generalized to the entire Field hierarchy to give the maximum
number of bytes that Field::pack() will write.
using TPC-B):
Problem: An RBR event can contain incomplete row data (only the key value
and fields which have been changed). In that case, when the row is
unpacked into the record and written to a table, the missing fields get
incorrect NULL values, leading to master-slave inconsistency.
Solution: Use values found in slave's table for columns which are not given
in the rows event. The code for writing a single row uses the following
algorithm:
1. unpack row_data into table->record[0],
2. try to insert record,
3. if duplicate record found, fetch it into table->record[0],
4. unpack row_data into table->record[0],
5. write table->record[0] into the table.
Where row_data is the row as stored in the data area of a rows event.
Thus:
a) unpacking of row_data happens at the time when the row is written into
a table,
b) when unpacking (in step 4), only columns present in row_data are
overwritten - all other columns remain as they were found in the table.
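To illustrate point (b), a self-contained toy of the overlay step (the
names and the record layout are ours, not the server's):

  #include <cstddef>
  #include <vector>

  struct Row { std::vector<long> cols; };

  // Step 4: overwrite only the columns present in the event's column
  // bitmap; absent columns keep the values found in the slave's table.
  void overlay(Row &record, const Row &row_data,
               const std::vector<bool> &present)
  {
    for (std::size_t i= 0; i < record.cols.size(); ++i)
      if (present[i])
        record.cols[i]= row_data.cols[i];
  }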
Since all data needed for the above algorithm is stored inside
Rows_log_event class, functions which locate and write rows are turned
into methods of that class.
replace_record() -> Rows_log_event::write_row()
find_and_fetch_row() -> Rows_log_event::find_row()
Both methods take row data from the event's data buffer - the row being
processed is pointed to by m_curr_row. They unpack the data as needed into
the table's record buffers record[0] or record[1]. When a row is unpacked,
m_curr_row_end is set to point at the next row in the data buffer.
Other changes introduced in this changeset:
- Change signature of unpack_row(): don't report errors and don't
set up the table's rw_set here. Errors can happen only when setting
default values in the prepare_record() function and are detected there.
- In Rows_log_event and derived classes, don't pass arguments to
the execution primitives (do_...() member functions) but use class
members instead.
- Move old row handling code into log_event_old.cc to be used by
*_rows_log_event_old classes.
Also, a new test rpl_ndb_2other is added which tests basic replication
from a master using ndb tables to a slave storing the same tables using
a (possibly) different engine (myisam, innodb).
Test is based on existing tests rpl_ndb_2myisam and rpl_ndb_2innodb.
However, these tests don't work for various reasons and are currently
disabled (see BUG#19227).
The new test differs from the ones it is based on as follows:
1. A single test exercises replication with different storage engines on
the slave (myisam, innodb, ndb).
2. Include file extra/rpl_tests/rpl_ndb_2multi_eng.test containing the
original tests is replaced by extra/rpl_tests/rpl_ndb_2multi_basic.test,
which doesn't contain tests using partitioned tables as these don't work
currently. Instead, it tests replication to a slave which has more or
fewer columns than the master.
3. Include file include/rpl_multi_engine3.inc is replaced with
include/rpl_multi_engine2.inc. The latter differs by performing slightly
different operations (updating more than one row in the table) and
clearing the table with a "TRUNCATE TABLE" statement instead of "DELETE
FROM", as replication of "DELETE" doesn't work well in this setting.
4. The slave must use option --log-slave-updates=0 as otherwise execution
of replication events generated by ndb fails if the table uses a
different storage engine on the slave (see BUG#29569).
This patch corrects a problem found during testing on Solaris. The code
changes how length values are retrieved on big-endian machines. The
patch allows the rpl_extraColmaster tests to run on these machines.
This patch adds the ability to store extra field metadata in the table
map event. This data can include pack_length() or field_length() for
fields such as CHAR or VARCHAR, enabling developers to add code that
can check for compatibility between master and slave columns. More
importantly, the extra field metadata can be used to store data from the
master correctly should a VARCHAR field on the master be <= 255 bytes
while the same field on the slave is > 255 bytes.
The patch also includes the needed changes to unpack to ensure that data
which is smaller on the master can be unpacked correctly on the slave.
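A sketch of the unpack side (unpack_varchar() is a hypothetical helper;
the point is that the width of the length prefix follows the master's
declared maximum, not the slave's):

  #include <stddef.h>
  #include <stdint.h>
  #include <string>

  const uint8_t *unpack_varchar(const uint8_t *from,
                                uint16_t master_max_bytes,
                                std::string *out)
  {
    size_t len;
    if (master_max_bytes > 255)
    {                              // master used a 2-byte length prefix
      len= (size_t) from[0] | ((size_t) from[1] << 8);
      from+= 2;
    }
    else
    {                              // master used a 1-byte length prefix
      len= from[0];
      from+= 1;
    }
    out->assign((const char *) from, len);
    return from + len;             // next field in the row data
  }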
WL#3915 : (NDB) master's cols > slave
The slave starts accepting and handling rows of master's tables which have
more columns. The most important part of the implementation is how to
calculate the number of bytes to skip for a column unknown to the slave.
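A toy sketch of that calculation for two common types (type codes 3 =
MYSQL_TYPE_LONG and 15 = MYSQL_TYPE_VARCHAR are real; the actual code has
to cover the full type/metadata matrix):

  #include <stddef.h>
  #include <stdint.h>

  size_t bytes_to_skip(uint8_t type, uint16_t metadata,
                       const uint8_t *ptr)  // ptr: start of the value
  {
    switch (type)
    {
    case 3:                  // MYSQL_TYPE_LONG: fixed 4 bytes
      return 4;
    case 15:                 // MYSQL_TYPE_VARCHAR
      if (metadata > 255)    // 2-byte length prefix
        return 2 + (size_t) (ptr[0] | (ptr[1] << 8));
      return 1 + ptr[0];     // 1-byte length prefix
    default:
      return 0;              // other types omitted in this sketch
    }
  }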
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use
size_t*.
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm(), writefrm(), ha_discover and handler::discover()
the type of the argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string
lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be
done explicitly as this conflict was often hidden by casting the function
to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Included zlib.h in some files as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not be depending on uint size
(portability fix)
- Removed Windows-specific code to restore the cursor position as this
causes slowdown on Windows, and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated the function comment to
reflect this. Changed the function that depended on the original behavior
of my_pwrite() to itself restore the cursor position (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root() + memcpy() with
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
done in previous patches.
There is an error in the Sun CC compiler that treats parameters that
differ only in a qualifier as different, even though this is not
allowed by the standard (ISO/IEC 14882:2003, Section 13.1).