The patch for Bug 26379 (Combination of FLUSH TABLE and
REPAIR TABLE corrupts a MERGE table) fixed this bug too.
However, it revealed a new bug: flushing a MERGE table at the moment
when it is between the open and the attach of its children crashed
the server.
The flushing thread wants to abort locks on the flushed table.
It calls ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
on the TABLE object of the other thread.
Changed ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
to accept non-attached children. ha_myisammrg::lock_count() returns
the number of MyISAM tables in the MERGE table so that
get_lock_data() allocates enough memory for the locks, even if the
children become attached before ha_myisammrg::store_lock() is
called. ha_myisammrg::store_lock() does not store any locks if the
children are not attached.
This is, however, a change in the handler interface: lock_count()
may now return a higher number than the number of locks that
store_lock() actually stores. This is safer than the reverse would
be.
get_lock_data() in the SQL layer is adjusted accordingly. It sets
MYSQL_LOCK::lock_count based on the number of locks returned by
the handler::store_lock() calls, not based on the numbers returned
by the handler::lock_count() calls. The latter are only used for
allocation of memory now.
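For illustration, a minimal sketch of the adjusted counting pattern
(simplified names; allocate_lock() is a hypothetical helper, not the
actual server code):

  THR_LOCK_DATA **locks_start, **locks_end;
  uint alloc_count= 0;

  /* lock_count() now only provides an upper bound for allocation. */
  for (uint i= 0; i < table_count; i++)
    alloc_count+= tables[i]->file->lock_count();

  MYSQL_LOCK *sql_lock= allocate_lock(alloc_count);  /* hypothetical */

  sql_lock->lock_count= 0;
  locks_end= sql_lock->locks;
  for (uint i= 0; i < table_count; i++)
  {
    locks_start= locks_end;
    locks_end= tables[i]->file->store_lock(thd, locks_start, lock_type);
    /* Count what store_lock() actually stored, which may be fewer
       locks than lock_count() promised. */
    sql_lock->lock_count+= (uint) (locks_end - locks_start);
  }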
No test case. The test suite cannot reliably run FLUSH between
lock_count() and store_lock() of another thread. The bug report
contains a program that can reproduce the problem with some
probability.
ha_partition::update_create_info() just called update_create_info()
on the first partition, so it picked up only that partition's
auto-increment maximum, and SHOW CREATE TABLE could show a too small
AUTO_INCREMENT value.
Fixed by implementing ha_partition::update_create_info() the way
other handlers do.
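A minimal sketch of that behaviour, assuming the usual partition
handler members and HA_STATUS_AUTO semantics (not the exact patch):

  void ha_partition::update_create_info(HA_CREATE_INFO *create_info)
  {
    /*
      info(HA_STATUS_AUTO) aggregates the auto-increment value over
      all partitions, so the maximum is used instead of the value of
      the first partition only.
    */
    info(HA_STATUS_AUTO);

    if (!(create_info->used_fields & HA_CREATE_USED_AUTO))
      create_info->auto_increment_value= stats.auto_increment_value;
  }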
HA_ARCHIVE: stats.auto_increment handling made consistent with other engines
INSERT/UPDATE against a CSV table created by a MySQL version earlier
than 5.1.23 with a NULL-able column resulted in a server crash in
debug builds.
Starting with 5.1.23, CSV does not permit creation of NULL-able
columns, but it is still possible to get such a table from older
MySQL versions.
Fixed by removing an excessive DBUG_ASSERT().
After reading the last record from a freshly opened archive table
(e.g. after FLUSH TABLES, or if there is no room in the table cache),
the table was reported as crashed.
The problem was that azio wrongly invalidated the azio_stream when
it hit EOF.
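For illustration, a generic sketch of the intended behaviour (made-up
types, not azio's actual code): hitting EOF on a read is reported as
a short read, while only real I/O errors invalidate the stream:

  #include <cstdio>

  struct Stream            /* stand-in for azio_stream */
  {
    FILE *file;
    bool at_eof;
    bool failed;
  };

  size_t stream_read(Stream *s, void *buf, size_t len)
  {
    size_t n= fread(buf, 1, len, s->file);
    if (n < len)
    {
      if (ferror(s->file))
        s->failed= true;   /* a real error invalidates the stream */
      else
        s->at_eof= true;   /* EOF is remembered but is not an error */
    }
    return n;
  }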
"Table is already up to date" vs. "OK"
On MacOSX 10.5, casting a value to "bool" (the built-in C type)
yields 0 or 1, instead of preserving values 0-255 as it apparently
did with older compilers.
Fixed by removing the typecast (it is not needed).
No test case needed: existing tests already cover this.
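A small illustration of the difference (hypothetical values, not the
original code): with a genuine bool type the cast collapses the value
to 1, so comparing it against the original nonzero code fails:

  #include <cstdio>

  int main()
  {
    int rc= 0x40;                                /* some nonzero code */
    bool as_bool= (bool) rc;                     /* becomes exactly 1 */
    unsigned char as_char= (unsigned char) rc;   /* keeps 0x40 */

    /* Prints "0 1": the bool cast breaks the comparison, while the
       char-like "bool" of older compilers did not. */
    printf("%d %d\n", as_bool == 0x40, as_char == 0x40);
    return 0;
  }

Dropping the cast keeps the full value and behaves the same on all
compilers.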
Fixes the following bug:
- Bug #32125: Database crash due to ha_innodb.cc:3896: ulint convert_search_mode_to_innobase
When an unknown find_flag is encountered in
convert_search_mode_to_innobase(), do not call assert(0); instead
queue a MySQL error using my_error() and return the error code
PAGE_CUR_UNSUPP. Change the functions that call
convert_search_mode_to_innobase() to handle that error code by
"canceling" execution and returning an appropriate error code
further upstream.
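A condensed sketch of that pattern (not the actual InnoDB source; the
concrete error symbols passed to my_error() and returned to the SQL
layer are illustrative only):

  ulint
  convert_search_mode_to_innobase(enum ha_rkey_function find_flag)
  {
    switch (find_flag) {
    case HA_READ_KEY_EXACT:  return(PAGE_CUR_GE);
    case HA_READ_AFTER_KEY:  return(PAGE_CUR_G);
    /* ... other supported flags ... */
    default:
      /* Instead of assert(0): queue a MySQL error and return a
         sentinel value that callers can detect. */
      my_error(ER_CHECK_NOT_IMPLEMENTED, MYF(0), "this search mode");
      return(PAGE_CUR_UNSUPP);
    }
  }

  /* Caller side: cancel execution and propagate an error upstream. */
  mode= convert_search_mode_to_innobase(find_flag);
  if (mode == PAGE_CUR_UNSUPP)
    DBUG_RETURN(HA_ERR_UNSUPPORTED);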