into poseidon.:/home/tomas/mysql-5.1-new-ndb
storage/ndb/tools/restore/consumer_restore.cpp:
Auto merged
storage/ndb/tools/restore/restore_main.cpp:
Auto merged
into perch.ndb.mysql.com:/home/jonas/src/mysql-5.1-new-ndb
storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp:
Auto merged
storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp:
Auto merged
Allow readTablePk to stumble on a scan+deleted tuple,
reporting no-match instead of crashing (in case the scan is the lock owner)
storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp:
Allow readTablePk to stumble on a scan+deleted tuple,
reporting no-match instead of crashing
storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp:
Allow readTablePk to stumble on a scan+deleted tuple,
reporting no-match instead of crashing
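Illustrative sketch only: the code below uses hypothetical names and types, not the real DBACC/DBTUP interfaces. It shows the pattern of the fix: when the caller is a scan that owns the lock and the tuple it points at has already been deleted, report "no match" instead of hitting an assertion.

    #include <cassert>

    enum class TupleState { Alive, Deleted };   // hypothetical stand-in

    // Returns the number of primary-key words read, or 0 for "no match".
    int readTablePkSketch(TupleState state, bool callerIsScanLockOwner)
    {
      if (state == TupleState::Deleted)
      {
        // Before the fix an assertion fired here and took the node down.
        // A scan that is the lock owner may legitimately run into a
        // deleted tuple, so report "no match" instead of crashing.
        assert(callerIsScanLockOwner);
        return 0;
      }
      // ... copy the primary key into the caller's buffer ...
      return 1;  // placeholder length
    }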
into dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb-merge
storage/ndb/src/ndbapi/NdbBlob.cpp:
Auto merged
storage/ndb/test/ndbapi/testBlobs.cpp:
Auto merged
into perch.ndb.mysql.com:/home/jonas/src/mysql-5.1-new-ndb
storage/ndb/src/kernel/blocks/suma/Suma.cpp:
Auto merged
storage/ndb/test/run-test/daily-basic-tests.txt:
Auto merged
Fix bug in SUMA::resend_bucket which could cause mysqld to crash
storage/ndb/src/kernel/blocks/suma/Suma.cpp:
Remove the *len* part from sz,
or an extra word will (sometimes) be sent, which will make the event API barf
storage/ndb/test/ndbapi/test_event.cpp:
test program for bug#27169
storage/ndb/test/run-test/daily-basic-tests.txt:
test program for bug#27169
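A minimal sketch of the sizing mistake, with hypothetical names rather than the real Suma::resend_bucket code: the length field must not be counted twice when computing how many words to send.

    // Hypothetical illustration: sz must cover header + data only.
    unsigned words_to_send(unsigned header_words, unsigned data_words)
    {
      // Buggy version: header_words + data_words + 1 (len counted again)
      // -> an extra word is (sometimes) sent and the event API barfs.
      return header_words + data_words;   // fixed: drop the *len* part
    }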
TABLE ... WRITE".
Memory and CPU hogging occurred when a connection that had to wait for a table
lock was serviced by a thread that had previously serviced a connection that
was killed (note that connections can reuse threads if the thread cache is
enabled).
One possible scenario that exposed this problem was when the thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through a
different thread/connection.
The problem also occurred when one killed a particular query in a connection
(using KILL QUERY) and this connection later had to wait for some table
lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset afterwards. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch fixes the problem by properly resetting this member.
There is no test case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test suite.
sql/mysqld.cc:
We should not forget to reset THD::mysys_var::abort after a kill operation
if we are going to reuse the thread to which this operation was applied for
handling other connections.
sql/sp_head.cc:
We should not forget to reset THD::mysys_var::abort after a kill operation
if we are going to use the thread to which this operation was applied for
handling further statements.
sql/sql_parse.cc:
We should not forget to reset THD::mysys_var::abort after a kill operation
if we are going to use the thread to which this operation was applied for
handling further statements.
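The sketch below is simplified and self-contained, with hypothetical stand-in types; it is not the verbatim server code. It only illustrates the one-line nature of the fix: clear the mysys-level abort flag once the kill has been handled, before the thread serves further statements or another connection.

    #include <pthread.h>

    // Simplified stand-ins; not the real st_my_thread_var / THD.
    struct mysys_var_sketch {
      pthread_mutex_t mutex;
      int abort;          // set by KILL; tells mysys-level waits to give up
    };

    struct THD_sketch {
      mysys_var_sketch *mysys_var;
    };

    void reset_abort_after_kill(THD_sketch *thd)
    {
      pthread_mutex_lock(&thd->mysys_var->mutex);
      // Without this reset the flag is "inherited" by later statements and
      // by other connections reusing the same physical thread, so every
      // wait for a table lock aborts immediately and the SQL layer spins.
      thd->mysys_var->abort = 0;
      pthread_mutex_unlock(&thd->mysys_var->mutex);
    }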
TABLE ... WRITE".
CPU hogging occurred when a connection that had to wait for a table lock was
serviced by a thread that had previously serviced a connection that was killed
(note that connections can reuse threads if the thread cache is enabled).
One possible scenario that exposed this problem was when the thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through a
different thread/connection.
In 5.* versions, memory hogging was added on top of the CPU hogging. Moreover,
in those versions the problem also occurred when one killed a particular query
in a connection (using KILL QUERY) and this connection later had to wait for
some table lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset afterwards. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch fixes the problem by properly resetting this member.
There is no test case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test suite.
sql/mysqld.cc:
We should not forget to reset THD::mysys_var::abort after a kill operation
if we are going to reuse the thread to which this operation was applied for
handling other connections.
into perch.ndb.mysql.com:/home/jonas/src/mysql-5.1-new-ndb
storage/ndb/test/tools/listen.cpp:
Auto merged
storage/ndb/src/kernel/blocks/suma/Suma.cpp:
merge
Handle API failure during resend
An API failure could cause release of the table object, which will make resend
crash when dereferencing the table object.
Solution: use table_id+hash+schemaversion instead of a *raw* pointer in resend
storage/ndb/src/kernel/blocks/suma/Suma.cpp:
Handle API failure during resend
An API failure could cause release of the table object, which will make resend
crash when dereferencing the table object.
Solution: use table_id+hash+schemaversion instead of a *raw* pointer in resend
storage/ndb/test/tools/listen.cpp:
add new events
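A minimal sketch of the approach with hypothetical names, not the real SUMA data structures: keep the identifying triple by value in the resend state and re-resolve it on use, so a table object released after an API failure is simply skipped instead of being dereferenced through a stale pointer.

    #include <cstdint>

    struct Table {                 // hypothetical
      uint32_t schema_version;
    };

    struct TableRef {              // stored in the resend state by value
      uint32_t table_id;
      uint32_t hash;
      uint32_t schema_version;
    };

    // Stub standing in for the block's lookup by table_id+hash.
    static Table *find_table(uint32_t /*table_id*/, uint32_t /*hash*/)
    {
      return nullptr;
    }

    Table *resolve_for_resend(const TableRef &ref)
    {
      Table *tab = find_table(ref.table_id, ref.hash);
      if (tab == nullptr || tab->schema_version != ref.schema_version)
        return nullptr;            // table released or recreated: skip resend
      return tab;                  // safe to use, unlike a cached raw pointer
    }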
into mysql.com:/home/kent/bk/tmp/mysql-5.1-build
configure.in:
Auto merged
storage/ndb/src/ndbapi/NdbBlob.cpp:
Auto merged
storage/ndb/test/ndbapi/testBlobs.cpp:
Auto merged
Make sure not to handle API_FAILREQ if it has already been handled
storage/ndb/src/kernel/blocks/suma/Suma.cpp:
Make sure not to handle API_FAILREQ if it has already been handled
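The guard is essentially an idempotence check; a generic sketch with hypothetical names (the bit would be cleared again when the node reconnects, not shown):

    #include <bitset>

    static std::bitset<256> api_fail_handled;   // per failed API node id

    void on_api_failreq(unsigned failed_node)
    {
      if (api_fail_handled.test(failed_node))
        return;                                 // already handled: ignore
      api_fail_handled.set(failed_node);
      // ... clean up subscriptions/buckets for failed_node ...
    }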
Make sure the head after undo execution does not point to the last page of the
file, as this will confuse the next write to the group
storage/ndb/src/kernel/blocks/lgman.cpp:
Make sure the head after undo execution does not point to the last page of the
file, as this will confuse the next write to the group
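A hypothetical sketch of the invariant, not the real LGMAN code: if the head computed after executing the undo log lands on the last page of a log file, advance it so the next write to the log file group does not get confused.

    struct LogHeadSketch {
      unsigned file_no;   // log file within the group
      unsigned page_no;   // page within that file
    };

    LogHeadSketch fix_head_after_undo(LogHeadSketch head,
                                      unsigned pages_per_file,
                                      unsigned file_count)
    {
      if (head.page_no == pages_per_file - 1)
      {
        // Assumption for illustration: move to the first page of the next
        // file instead of leaving the head on a file's last page.
        head.file_no = (head.file_no + 1) % file_count;
        head.page_no = 0;
      }
      return head;
    }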
The previous two patches for this bug worked together so that
no permanent table was memory mapped. The first patch tried to
avoid mapping while a table is in use. It allowed mapping only
if there was exactly one lock on the table, assuming that the
calling thread owned it. During mi_open(), a different call to
memory mapping was coded, which did not have this limitation.
The second patch tried to remove the code duplication and just
called mi_extra() from mi_open() and thus inherited the limitation.
But on open, a thread does not have a lock on the table...
A possible solution would be to check for zero or one lock.
But since I learned that it is safe to memory map a file while
normal file I/O is done on it, I removed the restriction altogether
and now allow memory mapping while a table is in use.
No test case. I do not see a way to verify with the test suite
which kind of I/O is used on a table.
storage/myisam/mi_extra.c:
Bug#25460 - High concurrency MyISAM access causes severe mysqld crash.
Allow memory mapping while the table is in use.
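The observation that memory mapping a file is safe while normal file I/O is performed on it can be tried out with a small standalone POSIX program. Nothing below is MyISAM code; the file path is made up, and the coherence comment assumes a unified page cache as on Linux.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
      const char *path = "/tmp/mmap_demo.dat";
      int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
      if (fd < 0 || ftruncate(fd, 4096) != 0)
        return 1;

      // Map the file while it is also accessed with regular file I/O.
      char *map = static_cast<char *>(
          mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, 0));
      if (map == MAP_FAILED)
        return 1;

      // Normal pwrite() on the same file descriptor is safe while the
      // mapping exists; with a unified page cache (e.g. Linux) the data
      // is also visible through the MAP_SHARED mapping.
      const char msg[] = "written via pwrite";
      if (pwrite(fd, msg, sizeof(msg), 0) < 0)
        return 1;

      printf("through mapping: %s\n", map);

      munmap(map, 4096);
      close(fd);
      unlink(path);
      return 0;
    }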