"Rows not deleted from innodb partitioned tables if --innodb_autoinc_lock_mode=0"
Due to a previous bugfix which initializes a previously uninitialized
variable, ha_partition::get_auto_increment() may fail to operate
correctly when the storage engine reports that it is only reserving
one value and one or more partitions have a different 'next-value'.
Currently, this only affects InnoDB's new-style auto-increment code, which
reserves larger blocks of values and has less inter-thread contention.
mysql-test/suite/rpl/r/rpl_innodb_bug28430.result:
Fix results - previous results showed symptoms of Bug30919
sql/ha_partition.cc:
Bug30919
ha_partition::write_row()
Do not insert a row if a failure occurred while generating
auto-increment value.
ha_partition::get_auto_increment()
If there is an empty 'intersection' of auto-increment values, perform
a second pass before failing because partitions may have different
auto-increment 'next-value' attributes.
storage/innobase/handler/ha_innodb.cc:
Bug30919
Only set *first_value if it is less than autoinc value. This allows
a higher value to be hinted when operating as a partitioned table.
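As an aside, a minimal sketch of the intended interplay, using hypothetical
types and helper names (Partition, reserve_autoinc) rather than the actual
ha_partition/ha_innobase code: a partition only honours a hint at or above its
own next value, and the partition handler retries with the highest value seen
when the first pass yields no common reservation.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Partition {
      uint64_t next_value;                    // partition's auto-increment "next value"
    };

    // Mirror of "only set *first_value if it is less than the autoinc value":
    // never reserve below the partition's own next value, but accept a higher hint.
    static uint64_t reserve_autoinc(Partition &p, uint64_t hint) {
      uint64_t v = std::max(hint, p.next_value);
      p.next_value = v + 1;                   // reserving exactly one value
      return v;
    }

    // First pass asks every partition with the caller's hint; if the partitions
    // disagree (empty 'intersection'), a second pass re-reserves with the highest
    // value seen instead of failing.
    static uint64_t get_auto_increment(std::vector<Partition> &parts, uint64_t hint) {
      uint64_t max_seen = hint;
      for (Partition &p : parts)
        max_seen = std::max(max_seen, reserve_autoinc(p, hint));
      if (max_seen != hint)
        for (Partition &p : parts)
          reserve_autoinc(p, max_seen);
      return max_seen;
    }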
mysql-test/suite/rpl/r/rpl_innodb_bug30919.result:
New BitKeeper file ``mysql-test/suite/rpl/r/rpl_innodb_bug30919.result''
mysql-test/suite/rpl/t/rpl_innodb_bug30919-master.opt:
New BitKeeper file ``mysql-test/suite/rpl/t/rpl_innodb_bug30919-master.opt''
mysql-test/suite/rpl/t/rpl_innodb_bug30919.test:
New BitKeeper file ``mysql-test/suite/rpl/t/rpl_innodb_bug30919.test''
- problem is the database name accessed in Rows_log_event write... get_db() which is a pointer to the share string...
- point to table map instead?
- or copy it?
- or make sure that anything interacting with the share happens _after_ the epoch
Problem with flush is that STMT_END_F may not be included as it should...
sql/ha_ndbcluster.cc:
Bug#20872 master*.err: miscellaneous error messages
- only allocate share if fully successful
sql/ha_ndbcluster_binlog.cc:
Bug#20872 master*.err: miscellaneous error messages
- only allocate share if fully successful
- no need to print error, my_errno is set
sql/ha_ndbcluster_binlog.h:
Bug#20872 master*.err: miscellaneous error messages
- only allocate share if fully successful
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.1-new-ndb
sql/ha_ndbcluster.cc:
Auto merged
storage/ndb/src/kernel/blocks/dbtc/Dbtc.hpp:
Auto merged
storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp:
Auto merged
storage/ndb/src/ndbapi/NdbBlob.cpp:
Auto merged
storage/ndb/src/ndbapi/NdbTransaction.cpp:
Auto merged
storage/ndb/test/ndbapi/testIndex.cpp:
Auto merged
storage/ndb/src/kernel/blocks/ERROR_codes.txt:
manual merge
storage/ndb/src/ndbapi/ndberror.c:
manual merge
storage/ndb/test/run-test/daily-basic-tests.txt:
manual merge
sql/ha_ndbcluster.cc:
remove warning for table exists in mysqld error log
sql/ha_ndbcluster_binlog.cc:
remove warning for table exists in mysqld error log
into dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb-bj.merge
mysql-test/Makefile.am:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
storage/ndb/tools/restore/Restore.cpp:
Auto merged
storage/ndb/tools/restore/restore_main.cpp:
Auto merged
mysql-test/suite/ndb/r/ndb_restore_compat.result:
Auto merged
mysql-test/suite/ndb/t/ndb_restore_compat.test:
Auto merged
Removing unguarded read of slave_running field from inside
terminate_slave_threads(). This could cause premature exit in the event
that the slave thread was already shutting down but had not yet finished.
The fields slave_running, io_thd, and sql_thread are guarded by an
associated run_lock. A read of these fields was not guarded inside
terminate_slave_threads(), which caused an assertion to fire. The
assertion was removed, and the code reorganized slightly.
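A minimal sketch of the locking rule, using pthread primitives and a stand-in
slave_info struct rather than the real Master_info/Relay_log_info types;
skip_lock mirrors the new terminate_slave_thread() parameter and is assumed
here to mean the caller already holds run_lock:

    #include <pthread.h>

    struct slave_info {
      pthread_mutex_t run_lock;     // guards slave_running and the thread handles
      pthread_cond_t  stop_cond;    // signalled when the thread has fully stopped
      bool            slave_running;
    };

    static void terminate_slave_thread(slave_info *si, bool skip_lock) {
      if (!skip_lock)
        pthread_mutex_lock(&si->run_lock);
      // No unguarded "if (!slave_running) return;" before taking the lock, and no
      // assertion on the flag: the thread may legitimately still be shutting down.
      while (si->slave_running)
        pthread_cond_wait(&si->stop_cond, &si->run_lock);
      if (!skip_lock)
        pthread_mutex_unlock(&si->run_lock);
    }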
sql/slave.cc:
Changing signature of terminate_slave_thread() to accept a skip_lock
parameter instead of two mutexes. This mimics the signature of the
terminate_slave_threads() function. Code is also changed as a result
of this.
Removing unguarded check of slave_running field in the master info and
relay log info structure since that could cause premature exit of
terminate_slave_threads().
The thread variable for each of the slave threads can change before
acquiring the run_lock mutex inside terminate_slave_thread(). Hence
an assertion was removed that read the variable without guarding it
with run_lock.
Code that checked *slave_running status inside terminate_slave_thread()
was reorganized slightly.
sql/slave.h:
Moving terminate_slave_thread() to use internal linkage.
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.1-new-ndb-merge
mysql-test/suite/ndb/r/ndb_restore.result:
Auto merged
mysql-test/suite/ndb/t/ndb_restore.test:
Auto merged
sql/ha_ndbcluster.cc:
manual merge
- the listed file_names are not necessarily on disk, so we need to discover them if they aren't
mysql-test/t/ndb_restore.test:
Bug #30667 ndb table discovery does not work correctly with information schema
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.1-new-ndb-merge
sql/field.cc:
Auto merged
sql/log_event.cc:
Auto merged
sql/log_event.h:
Auto merged
sql/rpl_record.cc:
Auto merged
sql/rpl_utility.cc:
Auto merged
sql/rpl_utility.h:
Auto merged
Exclude Rows_log_event members used in event application if the server is
not compiled as a replication server - a fix from the rpl clone now applied
to the 5.1.22 tree.
sql/log_event.cc:
Exclude Rows_log_event members used in event application if
not compiled as a replication server.
sql/log_event.h:
Don't initialize Rows_log_event members used in event application if
not compiled as a replication server.
Initialize thd->variables.pseudo_thread_id when a new embedded
thd is created.
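An illustrative stand-in only (simplified THD and id generation are
assumptions) showing the invariant the embedded path must establish:
pseudo_thread_id mirrors the thread id, exactly as the normal connection
path does for real clients.

    #include <atomic>
    #include <cstdint>

    struct SystemVariables { uint64_t pseudo_thread_id = 0; };
    struct THD { uint64_t thread_id = 0; SystemVariables variables; };

    static std::atomic<uint64_t> next_thread_id{1};

    // Give an embedded THD its own id and mirror it into pseudo_thread_id,
    // which is what CONNECTION_ID() reports.
    static void init_embedded_thd(THD *thd) {
      thd->thread_id = next_thread_id++;
      thd->variables.pseudo_thread_id = thd->thread_id;
    }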
libmysqld/lib_sql.cc:
Add comment regarding duplication of code in create_embedded_thd()
vs. create_new_thread() and prepare_new_connection_state(). This
duplication was a cause of the pseudo_thread_id variable not being
properly initialized.
mysql-test/r/func_misc.result:
Add test case to ensure connection_id() returns a sane value
mysql-test/t/func_misc.test:
Add test case to ensure connection_id() returns a sane value
sql/mysqld.cc:
Add comment warning of the duplication of code between create_new_thread()
and create_embedded_thd()
sql/sql_connect.cc:
Add comment warning of the duplication of code between
prepare_new_connection_state() and create_embedded_thd()
Incorrect handling of the table->record[0] and table->record[1] buffers inside
the Rows_log_event::find_row() function. The patch fixes this.
sql/log_event.cc:
Use table->record[0] to read records from table and table->record[1] to
store a copy of the original record for comparisons.
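A minimal illustration of the buffer discipline, using fixed-size stand-in
buffers rather than the real TABLE class: record[0] is the row being read or
written, record[1] keeps an untouched copy of the original row for comparison.

    #include <algorithm>
    #include <array>

    struct TableBuffers {
      std::array<unsigned char, 64> record0{};  // row read from the table / row to write
      std::array<unsigned char, 64> record1{};  // saved copy of the original row
    };

    // Snapshot record[0] into record[1] before modifying record[0], mirroring the
    // buffer roles described above for Rows_log_event::find_row().
    static void save_original_row(TableBuffers &t) {
      std::copy(t.record0.begin(), t.record0.end(), t.record1.begin());
    }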
A local variable may be used uninitialized in
ha_partition::get_auto_increment(). Initialize it properly.
sql/ha_partition.cc:
Initialize first_value_part in ha_partition::get_auto_increment() with *first_value before
it's used in the underlying table handler. Thanks to Antony for digging up this fix.
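A sketch of the shape of this fix with a simplified signature (the real
handler API takes more parameters): the per-partition output variable now
starts from the caller's *first_value rather than stack garbage.

    static void get_auto_increment_sketch(unsigned long long *first_value) {
      unsigned long long first_value_part = *first_value;  // previously uninitialized
      // ... each partition handler writes its reservation into first_value_part ...
      if (first_value_part > *first_value)
        *first_value = first_value_part;  // hand the highest value back to the caller
    }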
sql/field.cc:
- always pack in little endian, irrespective of storage engine native format
- always unpack as if it is stored in little endian, and unpack it to storage engine native format
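A hedged illustration of that rule for a 32-bit value (helper names are
invented; the real code lives in the Field pack/unpack methods): the packed
form is always little endian, and unpacking converts back to whatever the
engine/host needs.

    #include <cstdint>

    // Pack in little-endian byte order regardless of host/engine byte order.
    static void pack_le32(unsigned char *to, uint32_t v) {
      to[0] = (unsigned char)(v);
      to[1] = (unsigned char)(v >> 8);
      to[2] = (unsigned char)(v >> 16);
      to[3] = (unsigned char)(v >> 24);
    }

    // Unpack assuming little endian, yielding a host-order value the caller
    // then stores in the engine's native format.
    static uint32_t unpack_le32(const unsigned char *from) {
      return (uint32_t)from[0]
           | ((uint32_t)from[1] << 8)
           | ((uint32_t)from[2] << 16)
           | ((uint32_t)from[3] << 24);
    }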
MySQL replicates the time zone only when operations that involve
it are performed. This is controlled by a flag, but the flag is set only
on successful operations. The flag must also be set when there is an error
that involves a time zone (so that the master replicates the error to the
slaves).
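A hedged sketch of the ordering change only, with stand-in types and a
fabricated conversion helper (the real change is in field.cc and time.cc):
the time-zone-used flag is raised before the conversion, so it is also set
when the conversion fails.

    struct THD { bool time_zone_used = false; };

    // Stand-in for the real conversion; "fails" for negative input.
    static bool convert_with_time_zone(long long v) { return v >= 0; }

    static bool store_time_value(THD *thd, long long value) {
      thd->time_zone_used = true;            // set before, not after, the operation
      return convert_with_time_zone(value);  // may fail; the flag stays set either way
    }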
mysql-test/suite/rpl/r/rpl_timezone.result:
repush of Bug 29536 for 5.1.22 tree: test case
mysql-test/suite/rpl/t/rpl_timezone.test:
repush of Bug 29536 for 5.1.22 tree: test case
sql/field.cc:
re-push of Bug 29536 for 5.1.22: move setting of the flag before the operation
(so it applies to errors as well).
sql/time.cc:
re-push of Bug 29536 for 5.1.22: move setting of the flag before the operation
(so it applies to errors as well).
using TPC-B):
Problem: An RBR event can contain incomplete row data (only the key value and
the fields which have been changed). In that case, when the row is unpacked
into the record and written to a table, the missing fields get incorrect NULL
values, leading to master-slave inconsistency.
Solution: Use the values found in the slave's table for columns which are not
given in the rows event. The code for writing a single row uses the following
algorithm:
1. unpack row_data into table->record[0],
2. try to insert record,
3. if duplicate record found, fetch it into table->record[0],
4. unpack row_data into table->record[0],
5. write table->record[0] into the table.
Where row_data is the row as stored in the data area of a rows event.
Thus:
a) unpacking of row_data happens at the time when row is written into
a table,
b) when unpacking (in step 4), only columns present in row_data are
overwritten - all other columns remain as they were found in the table.
Since all data needed for the above algorithm is stored inside
Rows_log_event class, functions which locate and write rows are turned
into methods of that class.
replace_record() -> Rows_log_event::write_row()
find_and_fetch_row() -> Rows_log_event::find_row()
Both methods take row data from the event's data buffer - the row being
processed is pointed to by m_curr_row. They unpack the data as needed into the
table's record buffers record[0] or record[1]. When a row is unpacked,
m_curr_row_end is set to point at the next row in the data buffer.
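A compact sketch of the write path described above, with stand-in types
(Row, Table, find_duplicate) in place of the real MySQL classes; it only
illustrates how the non-destructive unpack lets existing column values
survive when the event row is incomplete.

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct Row { std::vector<std::optional<int>> cols; };   // value, or "absent from event"

    struct Table {
      Row record0;                                           // plays the role of record[0]
      std::vector<Row> rows;
      // Hypothetical duplicate lookup on a single key column (column 0).
      Row *find_duplicate(const Row &r) {
        for (Row &x : rows)
          if (!x.cols.empty() && !r.cols.empty() && x.cols[0] == r.cols[0])
            return &x;
        return nullptr;
      }
    };

    // Non-destructive unpack: only columns present in the event overwrite
    // record[0]; all other columns keep whatever the slave's table already had.
    static void unpack_current_row(Table &t, const Row &event_row) {
      if (t.record0.cols.size() < event_row.cols.size())
        t.record0.cols.resize(event_row.cols.size());
      for (size_t i = 0; i < event_row.cols.size(); ++i)
        if (event_row.cols[i].has_value())
          t.record0.cols[i] = event_row.cols[i];
    }

    static void write_row(Table &t, const Row &event_row) {
      unpack_current_row(t, event_row);               // step 1: unpack into record[0]
      if (Row *dup = t.find_duplicate(t.record0)) {   // steps 2-3: insert finds a duplicate
        t.record0 = *dup;                             // fetch the existing row
        unpack_current_row(t, event_row);             // step 4: overlay event columns only
        *dup = t.record0;                             // step 5: write the merged row back
      } else {
        t.rows.push_back(t.record0);                  // plain insert succeeded
      }
    }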
Other changes introduced in this changeset:
- Change signature of unpack_row(): don't report errors and don't
set up the table's rw_set here. Errors can happen only when setting default
values in the prepare_record() function and are detected there.
- In Rows_log_event and derived classes, don't pass arguments to
the execution primitives (do_...() member functions) but use class
members instead.
- Move old row handling code into log_event_old.cc to be used by
*_rows_log_event_old classes.
Also, a new test rpl_ndb_2other is added which tests basic replication
from a master using ndb tables to a slave storing the same tables using a
(possibly) different engine (myisam, innodb).
Test is based on existing tests rpl_ndb_2myisam and rpl_ndb_2innodb.
However, these tests don't work for various reasons and are currently
disabled (see BUG#19227).
The new test differs from the ones it is based on as follows:
1. Single test tests replication with different storage engines on slave
(myisam, innodb, ndb).
2. Include file extra/rpl_tests/rpl_ndb_2multi_eng.test containing
original tests is replaced by extra/rpl_tests/rpl_ndb_2multi_basic.test
which doesn't contain tests using partitioned tables, as these don't
currently work. Instead, it tests replication to a slave which has more or
fewer columns than the master.
3. Include file include/rpl_multi_engine3.inc is replaced with
include/rpl_multi_engine2.inc. The latter differs by performing slightly
different operations (updating more than one row in the table) and
clearing table with "TRUNCATE TABLE" statement instead of "DELETE FROM"
as replication of "DELETE" doesn't work well in this setting.
4. Slave must use option --log-slave-updates=0 as otherwise execution of
replication events generated by ndb fails if table uses a different
storage engine on slave (see BUG#29569).
sql/log_event.cc:
- Initialization of new Rows_log_event members.
- Fixing some typos in documentation.
In Rows_log_event::do_apply_event:
- Set COMPLETE_ROWS_F flag (when master and slave have the same number of
columns and all columns are present in the row)
- Move initialization of tables write/read sets here, outside the rows
processing loop (and out of unpack_row() function).
- Remove calls to do_prepare_row() - no longer needed.
- Add code managing m_curr_row and m_curr_row_end pointers.
- Change signatures of row processing methods of Rows_log_event and its
descendants - now most arguments are taken from class members.
- Remove do_prepare_row() methods which are no longer used.
- The auto_afree_ptr template is moved to rpl_utility.h (so that it can
be used in log_event_old.cc).
- Removed copy_extra_fields() function - no longer used.
In Rows_log_event::write_row (former replace_record):
- The old code is moved to log_event_old.cc.
- Use prepare_record() and non-destructive unpack_current_row() to fill record
with data.
- In case a record being inserted already exists on the slave and the row data
is incomplete, use the record found and the non-destructive unpack_current_row() to
combine new column values with existing ones.
- More debug info added.
In Rows_log_event::find_row (former find_and_fetch_row function):
- The old code is moved to log_event_old.cc.
- Unpacking of the row is moved here.
- In case of search using PK, the key data is prepared here.
- More debug info added.
- Remove initialization of Rows_log_event::m_after_image buffer which is no
longer used.
- Use new row unpacking methods in Update_rows_log_event::do_exec_row() to
create before and after image.
Note: all existing code used by Rows_log_event::do_apply_event() has been moved
to log_event_old.cc to be used by *_rows_log_event_old classes.
sql/log_event.h:
- Add new COMPLETE_ROWS_F flag in Rows_log_event.
- Add Rows_log_event members describing the row being processed.
- Add a pointer to key buffer which is used in derived classes.
- Add new methods: find_row(), write_row() and unpack_current_row().
- Change signatures of do_...() methods (replace method arguments by
class members).
- Remove do_prepare_row() method which is no longer used.
- Update method documentation.
- Add Old_rows_log_event class, which contains the old row processing code, as
a friend of Rows_log_event so that it can access all members of an event
instance.
sql/log_event_old.cc:
Move here old implementation of Rows_log_event::do_apply_event() and
helper methods.
sql/log_event_old.h:
- Define new class Old_rows_log_event encapsulating old version of
Rows_log_event::do_apply_event() and the helper methods.
- Add the Old_rows_log_event class as a base for *_old versions of RBR event
classes, ensure that the old version of do_apply_event() is called.
- For *_old classes, declare the helper methods used in the old version of
do_apply_event().
sql/rpl_record.cc:
- Make unpack_row non-destructive for columns not present in the row.
- Don't fill read/write set here as it is done outside these functions.
- Move initialization of a record with default values to a separate
function prepare_record().
sql/rpl_record.h:
- Change signature of unpack_row().
- Declare function prepare_record().
sql/rpl_utility.cc:
Make table_def::calc_field_size() a const method.
sql/rpl_utility.h:
Make table_def::calc_field_size() a const method.
Move auto_afree_ptr template here so that it can be re-used (currently
in log_event.cc and log_event_old.cc). The same applies to the
DBUG_PRINT_BITSET macro.
mysql-test/extra/rpl_tests/rpl_ndb_2multi_basic.test:
Modification of rpl_ndb_2multi_eng test. Tests with partitioned tables
are removed and a setup with slave having different number of columns
than master is added.
mysql-test/include/rpl_multi_engine2.inc:
Modification of rpl_multi_engine3.inc which operates on more rows and
replaces "DELETE FROM t1" with "TRUNCATE TABLE t1" as the first form
doesn't replicate in NDB -> non-NDB setting (BUG#28538).
mysql-test/suite/rpl_ndb/r/rpl_ndb_2other.result:
Results of the test.
mysql-test/suite/rpl_ndb/t/rpl_ndb_2other-slave.opt:
Test options. --log-slave-updates=0 is compulsory as otherwise non-NDB
slave applying row events from NDB master will fail when trying to log
them.
mysql-test/suite/rpl_ndb/t/rpl_ndb_2other.test:
Test replication of NDB table to slave using other engine. The main test
is in extra/rpl_tests/rpl_ndb_2multi_basic.test. It is included here
several times with different settings of default storage engine on slave.
Recommit to 5.1.22.
The bug caused memory corruption for some queries with top OR level
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular it was calculated incorrectly when the
WHERE condition was an OR formula with disjuncts being AND formulas
including equalities and other sargable predicates.
mysql-test/r/select.result:
Added a test case for bug #30396.
Recommit to 5.1.22.
mysql-test/t/select.test:
Added a test case for bug #30396.
Recommit to 5.1.22.
sql/item_cmpfunc.h:
Removed max_members from the COND_EQUAL class as not useful anymore.
Recommit to 5.1.22.
sql/sql_base.cc:
Added the max_equal_elems field to the st_select_lex structure.
Recommit to 5.1.22.
sql/sql_lex.cc:
Added the max_equal_elems field to the st_select_lex structure.
Recommit to 5.1.22.
sql/sql_lex.h:
Added the max_equal_elems field to the st_select_lex structure.
The field contains the maximal number of elements in multiple equalities
built for the query conditions.
Recommit to 5.1.22.
sql/sql_select.cc:
Fixed bug #30396.
Recommit to 5.1.22.
The bug caused memory corruption for some queries with top OR level
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular it was calculated incorrectly when the
WHERE condition was an OR formula with disjuncts being AND formulas
including equalities and other sargable predicates.
The max_equal_elems field to the st_select_lex structure is used now
to calculate the above mentioned upper bound. The field contains the
maximal number of elements in multiple equalities built for the query
conditions.
Recommit to 5.1.22.
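The exact sizing expression lives in sql_select.cc; the sketch below only
conveys the idea with assumed field names (cond_count, between_count,
max_equal_elems), not the precise formula.

    #include <algorithm>
    #include <cstddef>

    struct SelectLexCounts {
      size_t cond_count;        // sargable predicates found while analysing the WHERE
      size_t between_count;     // BETWEEN predicates (each yields two bounds)
      size_t max_equal_elems;   // largest multiple equality built for the condition
    };

    // Each predicate may expand into one KEY_FIELD/SARGABLE_PARAM entry per member
    // of a multiple equality, so the buffer is sized by the largest one seen.
    static size_t key_field_buffer_elems(const SelectLexCounts &c) {
      size_t m = std::max<size_t>(c.max_equal_elems, 1);
      return (c.cond_count * 2 + c.between_count) * m + 1;
    }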
Killing a SELECT query with KILL QUERY or KILL CONNECTION
causes a server crash if the query cache is enabled.
Normal evaluation of a query may be interrupted by a
KILL QUERY/CONNECTION statement; in this case the mysql_execute_command
function returns TRUE and the thd->killed flag is set.
In this case the result of the query may
be cached incompletely (the call to query_cache_insert inside
the net_real_write function is omitted), and the next call to
query_cache_end_of_result may lead to a server crash.
Thus, the query_cache_end_of_result function has been modified to abort the
query cache in the case of a killed thread.
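A sketch of the control flow only, with stand-in THD/Net types (the real
logic is the Query_cache code in sql_cache.cc): a killed or failed statement
causes the partially written cache block to be dropped instead of finalized.

    struct Net { void *query_cache_query = nullptr; };          // in-progress cache block
    struct THD { bool killed = false; bool is_error = false; Net net; };

    static void query_cache_abort(Net *net) { net->query_cache_query = nullptr; }

    static void query_cache_end_of_result(THD *thd) {
      if (thd->net.query_cache_query == nullptr)
        return;                             // nothing was being cached
      if (thd->killed || thd->is_error) {
        query_cache_abort(&thd->net);       // drop the incomplete query block
        return;
      }
      // ... otherwise mark the cached result block as complete ...
    }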
sql/sql_cache.cc:
Fixed bug #30201.
Recommit to 5.1.22.
The query_cache_end_of_result function has been modified to abort query
cache in the case of query execution failure. Also, this function has been
modified to remove the incomplete query block.
Recommit to 5.1.22.
The server created temporary tables for filesort in the working directory
instead of the specified tmpdir directory.
sql/item.cc:
Fixed bug #30287.
Recommit to 5.1.22.
The Item_field::set_field method has been modified to reset the any_privileges
flag to false in the case of a system temporary table. This modification
prevents the server from unnecessarily checking user privileges when accessing
system temporary tables.
sql/sql_select.cc:
Fixed bug #30287.
Recommit to 5.1.22.
Bugfix for #29015 has been removed: TABLE_SHARE::table_name of system
temporary tables contains full path to table file basename again.
sql/sql_view.cc:
Fixed bug #30287.
Recommit to 5.1.22.
Commentary has been added.
- let the receiving injector thread decide what to do
(recommit for 5.1.22 target)
sql/ha_ndbcluster.cc:
BUG#30017 log-slave-updates incorrect behavior for cluster
- let the receiving injector thread decide what to do
sql/ha_ndbcluster_binlog.cc:
BUG#30017 log-slave-updates incorrect behavior for cluster
- let the receiving injector thread decide what to do
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.1-new-ndb-merge
sql/field.cc:
Auto merged
sql/log_event.cc:
Auto merged
sql/log_event.h:
Auto merged
sql/set_var.cc:
Auto merged