ha_partition::update_create_info() just calls update_create_info()
of the first partition, so it only gets the auto-increment maximum
of the first partition, and SHOW CREATE TABLE can show a too-small
AUTO_INCREMENT value.
Fixed by implementing ha_partition::update_create_info() the way
other handlers do.
HA_ARCHIVE: stats.auto_increment handling made consistent with other engines.
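A minimal sketch of the fixed approach (standalone C++ with hypothetical
types, not the actual handler code): the table-level value must be the
maximum auto-increment over all partitions, not whatever the first
partition reports.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Stand-in for a per-partition handler (illustrative only).
    struct PartitionHandler {
      uint64_t auto_increment;  // next auto-increment value in this partition
    };

    // Buggy behaviour: ask only the first partition, as the old
    // ha_partition::update_create_info() effectively did.
    uint64_t auto_inc_first_only(const std::vector<PartitionHandler> &parts) {
      return parts.front().auto_increment;
    }

    // Fixed behaviour: take the maximum across all partitions, so
    // SHOW CREATE TABLE reports a value no partition already exceeds.
    uint64_t auto_inc_all_parts(const std::vector<PartitionHandler> &parts) {
      uint64_t max_val = 0;
      for (const PartitionHandler &p : parts)
        max_val = std::max(max_val, p.auto_increment);
      return max_val;
    }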
In the patch for BUG#21842, the code for handling old rows events was
refactored. There was a bug in the refactored code (possibly introduced
after the patch for BUG#21842) that caused the refactored old events
to read a column bitmap for the after image even though there is no such
bitmap for old events. As a result, the reading got out of sync and
started reading invalid data.
This patch removes all traces of the after-image column bitmap from the
refactored old events and removes functions that are no longer needed
because they are empty.
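A minimal sketch of the framing issue (standalone C++ with a hypothetical
event layout, not the actual replication code): the old rows-event format
carries no after-image column bitmap, so reading one consumes bytes that
belong to the row data and desynchronizes every later read.

    #include <cstddef>
    #include <cstdint>

    struct RowsEventHeader {
      bool old_format;          // old events carry no after-image bitmap
      std::size_t n_columns;
    };

    // Returns the offset of the row data inside the event body.
    std::size_t row_data_offset(const RowsEventHeader &ev) {
      std::size_t bitmap_len = (ev.n_columns + 7) / 8;
      std::size_t pos = bitmap_len;  // before-image bitmap: always present
      if (!ev.old_format)
        pos += bitmap_len;           // after-image bitmap: new format only
      // The bug was an unconditional second read: for old events it
      // shifted this offset into the middle of the row data.
      return pos;
    }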
Removed the auto-detection and use of Solaris "libmtmalloc", as it
caused the regression reported in Bug#18322. The removed code also
prevented building without this library. Users can still compile with
"libmtmalloc" by configuring with "--with-mysqld-libs=-lmtmalloc".
Additional fixes for 64-bit platforms.
---
Merge mysql.com:/misc/mysql/31177/50-31177
into mysql.com:/misc/mysql/31177/51-31177
---
Bug#31177: Server variables can't be set to their current values
additional 5.1 fixes (for plugins)
(also fixes Bug#29320, Bug#29493, and Bug#30536)
Problem: Partitioning did not handle unordered scans correctly
for engines with unordered read order.
Solution: do not stop scanning if a record is out of range, since
there can be more records within the range afterwards.
Note: this is the patch that fixes the bug, but since no storage engine
with unordered read order ships with MySQL 5.1 (Falcon comes in 6.0),
there are no test cases (they are in a separate patch that only goes
into 6.0).
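A minimal sketch of the fix (standalone C++ with a hypothetical row type,
not the actual partitioning code): with an unordered read order, an
out-of-range row must be skipped rather than treated as the end of the
range.

    #include <vector>

    struct Row { int key; };

    // Collect all rows with min_key <= key <= max_key from a partition
    // whose engine returns rows in no particular order.
    std::vector<Row> scan_range(const std::vector<Row> &partition,
                                int min_key, int max_key) {
      std::vector<Row> result;
      for (const Row &row : partition) {
        if (row.key < min_key || row.key > max_key)
          continue;  // fixed: skip and keep scanning; stopping here
                     // (the bug) would lose in-range rows further on
        result.push_back(row);
      }
      return result;
    }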
Anti-patch. This patch undoes the previously pushed patch. It is
null-merged into versions 5.1 and above, where the original patch
is still desired.
When executing a DROP VIEW statement on the master, the statement is not
written to the binary log if any error occurs; this could cause
master-slave inconsistency if any view had already been dropped.
Now, if an error occurred and no view was dropped, the statement is not
binlogged; if at least one view was dropped, the query is binlogged,
possibly with an error.
When executing a DROP VIEW statement on the master, the statement is
written to the binary log without checking for possible errors, so the
statement would always be binlogged with the error code cleared even if
an error occurred, for example, when one of the views being dropped does
not exist. This would cause the statement to fail on the slave.
The binary log is now written after the check for errors: if at least
one view has been dropped, the query is binlogged, possibly with an
error.
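A minimal sketch of the fixed control flow (standalone C++ with stand-ins
for the server's catalog and binary log, not the actual server code): the
binlog write happens only after all drops have been attempted.

    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    std::set<std::string> g_views = {"v1", "v2"};  // stand-in catalog

    void write_binlog(const std::string &query, int error_code) {
      std::cout << "binlog: " << query << " (error=" << error_code << ")\n";
    }

    void drop_views(const std::string &query,
                    const std::vector<std::string> &names) {
      bool something_dropped = false;
      int error_code = 0;
      for (const std::string &n : names) {
        if (g_views.erase(n))
          something_dropped = true;
        else
          error_code = 1;  // e.g. a view being dropped does not exist
      }
      // Log only after the error check, and only if at least one view
      // was actually dropped; keep the error code so the slave sees
      // the same partial failure.
      if (something_dropped)
        write_binlog(query, error_code);
    }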