In the early development of delete buffering, we allowed B-tree pages
to become empty as a result of buffered deletes. That caused fundamental
problems. The fix was to refuse to buffer purge operations unless
the page is guaranteed to remain nonempty. Remove an attempt to
cope with empty pages when merging inserts.
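A minimal C sketch of the nonempty-page rule described above; page_info_t
and ibuf_should_buffer_purge() are hypothetical stand-ins, not the actual
InnoDB insert buffer code:

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical summary of what the insert buffer knows about a page;
     the real InnoDB code tracks this differently. */
  typedef struct {
      int n_records;          /* records currently on the page */
      int n_buffered_deletes; /* deletes already buffered for it */
  } page_info_t;

  /* Buffer a purge only if the page is guaranteed to stay nonempty
     after this delete and all previously buffered deletes are merged. */
  static bool ibuf_should_buffer_purge(const page_info_t *info)
  {
      return info->n_records - info->n_buffered_deletes - 1 >= 1;
  }

  int main(void)
  {
      page_info_t page = { .n_records = 2, .n_buffered_deletes = 1 };
      /* 2 - 1 - 1 = 0 records would remain: refuse to buffer this purge. */
      printf("buffer purge: %s\n",
             ibuf_should_buffer_purge(&page) ? "yes" : "no");
      return 0;
  }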
Remove non-applicable licensing files.
storage/innobase/COPYING is already in the MySQL top-level directory, and
storage/innobase/COPYING.Sun_Microsystems is no longer applicable
now that Oracle and Sun are one company.
------------------------------------------------------------
revno: 3550
revision-id: marko.makela@oracle.com-20100824081003-v4ecy0tga99cpxw2
parent: marko.makela@oracle.com-20100823102854-t1clrojqis2ley36
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Tue 2010-08-24 11:10:03 +0300
message:
Bug#55832: selects crash too easily when innodb_force_recovery>3
dict_update_statistics_low(): Create bogus statistics for those
indexes that cannot be accessed because of the innodb_force_recovery
setting.
ha_innobase::info(): Calculate statistics for each index, even if
innodb_force_recovery is set. Fill in bogus data for those indexes
that are not accessed because of the innodb_force_recovery setting.
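A rough C sketch of the idea, with simplified, hypothetical names
(index_stats_t, update_statistics(), and the force-recovery threshold are
illustrative, not the real dict_update_statistics_low() code): indexes that
cannot be read get well-defined placeholder statistics instead of
uninitialized values.

  #include <stdio.h>

  /* Hypothetical, simplified per-index statistics. */
  typedef struct {
      const char   *name;
      int           accessible;          /* 0 if force recovery hides it */
      unsigned long stat_n_leaf_pages;
      unsigned long stat_index_size;
  } index_stats_t;

  /* Fill in bogus but well-defined statistics for indexes that cannot
     be accessed, so callers such as ha_innobase::info() never read
     uninitialized values. The threshold of 3 mirrors the bug title
     and is illustrative. */
  static void update_statistics(index_stats_t *idx, int force_recovery)
  {
      if (!idx->accessible && force_recovery > 3) {
          idx->stat_n_leaf_pages = 1;   /* bogus placeholder */
          idx->stat_index_size   = 1;   /* bogus placeholder */
          return;
      }
      /* ... real statistics would be gathered from the index here ... */
  }

  int main(void)
  {
      index_stats_t idx = { "idx_hidden", 0, 0, 0 };
      update_statistics(&idx, 4);
      printf("%s: leaf_pages=%lu size=%lu\n",
             idx.name, idx.stat_n_leaf_pages, idx.stat_index_size);
      return 0;
  }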
and above the general requirement free. We call them
BUF_FLUSH_EXTRA_MARGIN. With multiple buffer pools we may end up keeping
this number of pages free for each buffer pool. This patch, diagnosed and fixed
by Michael, throttles flushing in such cases.
rb://435
bug#54346
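Because the message above is truncated, the following C sketch only
illustrates the stated shape of the problem and fix, with made-up names and
numbers (total_extra_margin(), should_throttle_flush()): the extra free-page
margin is kept per buffer pool instance, so the aggregate grows with the
number of instances, and flushing can be throttled once enough pages are
already free.

  #include <stdbool.h>
  #include <stdio.h>

  /* The per-pool free-page margin is kept for each buffer pool
     instance, so the aggregate scales with the number of instances. */
  static unsigned long total_extra_margin(unsigned long margin_per_pool,
                                          unsigned long n_pool_instances)
  {
      return margin_per_pool * n_pool_instances;
  }

  /* Throttle flushing once the free-page requirement plus the extra
     margin is already satisfied. */
  static bool should_throttle_flush(unsigned long free_pages,
                                    unsigned long required_free,
                                    unsigned long extra_margin)
  {
      return free_pages >= required_free + extra_margin;
  }

  int main(void)
  {
      unsigned long margin = total_extra_margin(200, 8);
      printf("total extra margin: %lu pages\n", margin);
      printf("throttle flushing: %s\n",
             should_throttle_flush(5000, 3000, margin) ? "yes" : "no");
      return 0;
  }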
Issue an error message to the error log when
trx->dict_operation_lock_mode == RW_X_LATCH in
srv_suspend_mysql_thread(). Transactions that modify InnoDB
data dictionary tables must be free of lock waits, because they
must be holding the data dictionary latch in exclusive mode.
The transactions must not be accessing any tables other than
the data dictionary tables.
The handling of RW_X_LATCH was accidentally added in the InnoDB Plugin
as an incorrect fix for an assertion failure. (Fast index creation was accessing
both data dictionary tables and user tables in the same transaction.)
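A simplified C sketch of the added diagnostic; trx_t and suspend_thread()
below are stand-ins for the real trx_t and srv_suspend_mysql_thread(), and
the latch constant is illustrative:

  #include <stdio.h>

  /* Illustrative constant; the real value lives in the InnoDB headers. */
  enum { RW_X_LATCH = 2 };

  /* Simplified stand-in for the transaction object. */
  typedef struct {
      int dict_operation_lock_mode;
  } trx_t;

  /* Sketch of the diagnostic: a transaction that holds the data
     dictionary latch in exclusive mode must not wait for a lock,
     so report the situation to the error log. */
  static void suspend_thread(const trx_t *trx)
  {
      if (trx->dict_operation_lock_mode == RW_X_LATCH) {
          fprintf(stderr, "InnoDB: Error: a data dictionary transaction"
                          " is waiting for a lock.\n");
      }
      /* ... the normal lock-wait suspension would follow here ... */
  }

  int main(void)
  {
      trx_t trx = { RW_X_LATCH };
      suspend_thread(&trx);
      return 0;
  }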
revno: 3545
revision-id: marko.makela@oracle.com-20100818110110-zfs0i1vfrccfb4yw
parent: vasil.dimov@oracle.com-20100817193934-1yl7zz2odikxauf8
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Wed 2010-08-18 14:01:10 +0300
message:
Bug#55626: MIN and MAX reading a delete-marked record from secondary index
Remove a bogus debug assertion that triggered the bug.
Add assertions precisely where records must not be delete-marked.
Also add a comment to clarify when the record is allowed to be delete-marked.
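A small C sketch of the kind of targeted assertion described; rec_t,
read_record(), and the condition guarding the assertion are purely
illustrative, since the exact call sites are not spelled out here:

  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical record; the real code reads the delete-mark bit
     from the record header. */
  typedef struct {
      bool delete_marked;
  } rec_t;

  /* MIN()/MAX() may legitimately see a delete-marked secondary index
     record, so assert only where a delete mark is truly impossible.
     The guard condition here is invented for illustration. */
  static void read_record(const rec_t *rec, bool delete_mark_impossible)
  {
      if (delete_mark_impossible) {
          assert(!rec->delete_marked);
      }
      /* Otherwise a delete-marked record is allowed and is skipped. */
  }

  int main(void)
  {
      rec_t rec = { true };
      read_record(&rec, false); /* allowed: no assertion fires */
      return 0;
  }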
Added InnoDB to the 'default' plugin group, and modified
the autoconf script so the 'default' group is actually
built by default.
(i.e. ./configure.am is now equivalent to ./configure.am --with-plugins=default,
instead of ./configure.am --with-plugins=none.)
Fix Bug#53761 RANGE estimation for matched rows may be 200 times different
Improve the range estimation algorithm.
Previously:
For a given level the algo knows the number of pages in the requested range
and the number of records on the leftmost and the rightmost page. Then it
assumes all pages in between contain the average between the two border pages
and multiplies this average number by the number of intermediate pages.
With this change:
Same idea, but peek a few (10) of the intermediate pages to get a better
estimate of the average number of records per page. If there are fewer than 10
intermediate pages then all of them will be scanned and the result will be
precise, not an estimation.
In the bug report one of the examples has a btree with a snippet of the leaf
level like this:
page1(899 records), page2(1 record), page3(1 record), page4(1 record)
so when trying to estimate, the previous algo assumed there are on average
(899+1)/2=450 records per page, which went terribly wrong. With this change
page2 and page3 will be read and the exact number of records will be returned.
Approved by: Sunny (rb://401)
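A self-contained C sketch of the estimation idea described above; the
function and the sampling index arithmetic are illustrative, not the actual
InnoDB range estimation code:

  #include <stdio.h>

  #define N_SAMPLE_PAGES 10  /* number of intermediate pages to peek at */

  /* Estimate the records in a range spanning n_pages leaf pages, given
     per-page record counts; an illustrative stand-in for the B-tree walk. */
  static unsigned long estimate_range(const unsigned long *recs_per_page,
                                      int n_pages)
  {
      unsigned long total = recs_per_page[0] + recs_per_page[n_pages - 1];
      int n_intermediate = n_pages - 2;

      if (n_intermediate <= 0) {
          return total;
      }
      if (n_intermediate <= N_SAMPLE_PAGES) {
          /* Few intermediate pages: count them all; the result is exact. */
          for (int i = 1; i < n_pages - 1; i++) {
              total += recs_per_page[i];
          }
          return total;
      }
      /* Many intermediate pages: sample N_SAMPLE_PAGES of them and scale
         their average up, instead of averaging only the border pages. */
      unsigned long sampled = 0;
      for (int i = 0; i < N_SAMPLE_PAGES; i++) {
          int idx = 1 + i * n_intermediate / N_SAMPLE_PAGES;
          sampled += recs_per_page[idx];
      }
      return total + sampled * n_intermediate / N_SAMPLE_PAGES;
  }

  int main(void)
  {
      /* The example from the bug report: 899, 1, 1, 1 records per page. */
      unsigned long pages[] = { 899, 1, 1, 1 };
      printf("estimated records: %lu\n", estimate_range(pages, 4));
      return 0;
  }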