Problem: trailing spaces were stripped using 8-bit code,
so the truncation result length was incorrect, which led
to an assertion failure.
Fix: use multi-byte safe code.
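For illustration only, a standalone sketch of the idea for a fixed-width
two-byte charset such as UCS-2, where a space is the byte pair 0x00 0x20;
the actual fix goes through the server's charset handlers, and the helper
below is not the real code:

    #include <stddef.h>

    /* Walk backwards one character (two bytes) at a time, never one byte
       at a time, so character boundaries stay intact and the returned
       length is correct. */
    static size_t
    strip_trailing_spaces_ucs2(const unsigned char *s, size_t len_bytes)
    {
        while (len_bytes >= 2
               && s[len_bytes - 2] == 0x00
               && s[len_bytes - 1] == 0x20) {
            len_bytes -= 2;
        }
        return len_bytes; /* byte length with trailing spaces removed */
    }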
Before this fix, some tests failed due to lack of instrumentation slots
in the performance schema, because the default sizing was too low.
Now that more code has been instrumented, the default sizing has to be adjusted
to match the current instrumentation consumption.
This change:
- increases the number of rwlock classes from 20 to 30,
- increases the number of rwlock and mutex instances to 1 million.
Both are to account for the volume of data instrumented
when the InnoDB storage engine is used (because of the InnoDB buffer pool).
Adjusted the test output accordingly.
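For reference, the new defaults correspond to explicitly configuring
something like the following (the change itself only adjusts the built-in
defaults; no configuration change is required):

    [mysqld]
    performance_schema_max_rwlock_classes   = 30
    performance_schema_max_rwlock_instances = 1000000
    performance_schema_max_mutex_instances  = 1000000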
Added InnoDB to the 'default' plugin group, and modified
the autoconf script so the 'default' group is actually
built by default.
(i.e. ./configure.am == ./configure.am --with-plugins=default,
instead of ./configure.am --with-plugins=none)
Fix Bug#53761 RANGE estimation for matched rows may be 200 times different
Improve the range estimation algorithm.
Previously:
For a given level the algorithm knows the number of pages in the requested range
and the number of records on the leftmost and the rightmost page. Then it
assumes all pages in between contain the average between the two border pages
and multiplies this average number by the number of intermediate pages.
With this change:
Same idea, but peek at a few (10) of the intermediate pages to get a better
estimate of the average number of records per page. If there are fewer than 10
intermediate pages then all of them will be scanned and the result will be
exact, not an estimate.
In the bug report one of the examples has a btree with a snippet of the leaf
level like this:
page1(899 records), page2(1 record), page3(1 record), page4(1 record)
so when trying to estimate, the previous algorithm assumed there are on average
(899+1)/2=450 records per page, which went terribly wrong. With this change
page2 and page3 will be read and the exact number of records will be returned.
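A minimal standalone sketch of the sampling idea (not the actual InnoDB
code; read_page_recs() is a hypothetical callback returning the record
count of the i-th intermediate page):

    #define N_PAGES_TO_SAMPLE 10

    static unsigned long long
    estimate_records_in_range(unsigned long n_pages,      /* pages in range */
                              unsigned long left_recs,    /* leftmost page  */
                              unsigned long right_recs,   /* rightmost page */
                              unsigned long (*read_page_recs)(unsigned long))
    {
        unsigned long long total = left_recs + right_recs;
        unsigned long n_mid = (n_pages > 2) ? n_pages - 2 : 0;
        unsigned long i;

        if (n_mid == 0) {
            return total;
        }

        if (n_mid <= N_PAGES_TO_SAMPLE) {
            /* Few intermediate pages: read them all, the result is exact. */
            for (i = 0; i < n_mid; i++) {
                total += read_page_recs(i);
            }
            return total;
        }

        /* Many intermediate pages: sample N_PAGES_TO_SAMPLE of them and
           extrapolate their average to the whole range, instead of
           averaging only the two border pages. */
        {
            unsigned long long sampled = 0;
            for (i = 0; i < N_PAGES_TO_SAMPLE; i++) {
                sampled += read_page_recs(i * (n_mid / N_PAGES_TO_SAMPLE));
            }
            total += sampled * n_mid / N_PAGES_TO_SAMPLE;
        }
        return total;
    }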
Approved by: Sunny (rb://401)
Reduce ibuf_mutex and ibuf_pessimistic_insert_mutex contention further.
Protect ibuf->empty by the insert buffer root page latch, not ibuf_mutex.
ibuf_tree_root_get(): Assert that ibuf_mutex is owned by the
caller. Assert that the stamped page number is correct. Assert that
ibuf->empty agrees with the root page.
ibuf_size_update(): Do not update ibuf->empty.
ibuf_init_at_db_start(): Update ibuf->empty while holding the root page latch.
ibuf_add_free_page(): Return TRUE/FALSE instead of DB_SUCCESS/DB_STRONG_FAIL.
ibuf_remove_free_page(): Release ibuf_pessimistic_insert_mutex as
early as possible.
ibuf_contract_ext(): Rely on a dirty read of ibuf->empty, unless the
server is being shut down. Never acquire ibuf_mutex. Eliminate n_stored.
ibuf_contract_after_insert(): Never acquire ibuf_mutex. Perform dirty
reads of ibuf->size and ibuf->max_size.
ibuf_insert_low(): Only acquire ibuf_mutex for mode==BTR_MODIFY_TREE.
Perform dirty reads of ibuf->size and ibuf->max_size. Update
ibuf->empty while holding the root page latch.
ibuf_delete_rec(): Update ibuf->empty while holding the root page latch.
ibuf_is_empty(): Release ibuf_mutex earlier.
regression in Bug #54914, but it does speed up the execution for
innodb_change_buffering=inserts.
ibuf_add_ops(), ibuf_merge_or_delete_for_page(),
ibuf_delete_for_discarded_space(): Use atomic built-ins instead of
ibuf_mutex, when available (see the sketch after this list).
ibuf_add_free_page(), ibuf_remove_free_page(), ibuf_contract_ext():
Release ibuf_mutex earlier.
ibuf_free_excess_pages(): Release ibuf_mutex before a conditional branch.
ibuf_insert_low(): Release ibuf_mutex before a conditional
branch. Create ibuf_entry before re-acquiring ibuf_mutex. Simplify a
loop to reduce code footprint. Release ibuf_mutex before mtr_commit()
[btr_pcur_close()].
ibuf_is_empty(): Release ibuf_mutex before mtr_commit().
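As a sketch only, assuming a GCC-style compiler that provides the __sync
built-ins, this is the kind of counter update meant by "atomic built-ins
instead of ibuf_mutex" above (the names below are illustrative, not the
actual InnoDB macros):

    #include <pthread.h>

    static unsigned long    ibuf_ops_count;   /* illustrative counter */
    static pthread_mutex_t  ibuf_mutex_sketch = PTHREAD_MUTEX_INITIALIZER;

    static void
    ibuf_add_ops_sketch(unsigned long n)
    {
    #if defined(__GNUC__)
        /* Atomic built-ins available: no mutex round trip needed. */
        __sync_add_and_fetch(&ibuf_ops_count, n);
    #else
        /* Fall back to the mutex when atomics are not available. */
        pthread_mutex_lock(&ibuf_mutex_sketch);
        ibuf_ops_count += n;
        pthread_mutex_unlock(&ibuf_mutex_sketch);
    #endif
    }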
Reverted the ulong->uint diff.
Re-applied the first diff.
The original commit message follows:
enum plugin system variables are ulong internally, not int.
On systems where long is not the same size as int this causes
problems.
Fixed by correct typecasting. Removed the test from the
experimental list.
The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms on which
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
Removed the test from the experimental list.
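A small standalone illustration (not the actual sys_var code) of why
sizeof(int) != sizeof(long) matters here: reading a ulong-backed value
through an int-sized access sees only part of the storage, and the result
depends on the platform's endianness.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned long stored = 2;   /* enum value kept internally as ulong */
        unsigned int  seen;

        /* Copying only sizeof(int) bytes mimics an int-typed read of the
           ulong storage; on a big-endian LP64 system this prints 0, not 2. */
        memcpy(&seen, &stored, sizeof seen);
        printf("stored=%lu seen=%u\n", stored, seen);
        return 0;
    }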
When the caller specifies the number of pages that it wants to flush,
we should honor that value and not go beyond it in our eagerness to
flush the neighbors of the selected victim.
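A sketch of the intended behaviour, with illustrative names only (not the
actual buffer-pool flushing code): stop flushing neighbors as soon as the
caller's page budget is reached.

    static unsigned long
    flush_neighbors_sketch(unsigned long n_to_flush,  /* caller's budget */
                           unsigned long n_flushed,   /* flushed so far  */
                           unsigned long n_neighbors,
                           int (*flush_page)(unsigned long))
    {
        unsigned long i;

        for (i = 0; i < n_neighbors; i++) {
            if (n_flushed >= n_to_flush) {
                /* Honor the caller's limit; do not flush more neighbors. */
                break;
            }
            if (flush_page(i)) {
                n_flushed++;
            }
        }
        return n_flushed;
    }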
The problem was that the optimize method of the ARCHIVE storage
engine was not preserving the FRM embedded in the ARZ file when
rewriting the ARZ file for optimization. The ARCHIVE engine stores
the FRM in the ARZ file so it can be transferred from machine to
machine without also copying the FRM -- the engine restores the
embedded FRM during discovery.
The solution is to copy over the FRM when rewriting the ARZ file.
In addition, some initial error checking is performed to ensure
garbage is not copied over.
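A rough sketch of the approach, using hypothetical callbacks that stand in
for the engine's real ARZ stream routines (this is not the actual ARCHIVE
engine code):

    /* Hypothetical I/O callbacks standing in for the real ARZ stream API. */
    typedef long (*read_frm_fn)(unsigned char *buf, long buf_size);
    typedef int  (*write_frm_fn)(const unsigned char *buf, long len);

    static int
    copy_frm_on_optimize(read_frm_fn read_frm, write_frm_fn write_frm)
    {
        unsigned char frm[64 * 1024];   /* illustrative fixed buffer */
        long          len = read_frm(frm, sizeof frm);

        /* Basic sanity check so garbage is not copied over. */
        if (len <= 0 || len > (long) sizeof frm) {
            return -1;
        }
        return write_frm(frm, len);
    }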