(Backport to 5.3)
(Attempt #2)
- Don't attempt to use BKA for materialized derived tables. The
  table is neither filled nor fully opened yet, so an attempt to
  call handler->multi_range_read_info() causes a crash.
(Backport to 5.3)
(variant #2, with fixed coding style)
- Make Mrr_ordered_index_reader::resume_read() restore index position
only if it was saved before with Mrr_ordered_index_reader::interrupt_read().
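A minimal sketch of the guard pattern (class and flag names here are
illustrative, not the actual reader class):

    // Illustrative sketch only: restore state in resume_read() solely
    // when interrupt_read() actually saved it.
    class IndexReader {
      bool position_saved= false;
    public:
      void interrupt_read() {
        // ... save the current index position ...
        position_saved= true;
      }
      void resume_read() {
        if (position_saved) {      // nothing to restore otherwise
          // ... restore the saved index position ...
          position_saved= false;
        }
      }
    };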
- TABLE::create_key_part_by_field() should not set PART_KEY_FLAG in field->flags
= The reason is that it is used by the hash join code, which calls it to create a hash
  table lookup structure. It doesn't create a real index.
= Another caller of the function is TABLE::add_tmp_key(). Made it set the flag itself
  (a minimal sketch follows below).
- The differences in join_cache.result could also be observed before this patch: one
could put "FLUSH TABLES" before the queries and get exactly the same difference.
Merged Facebook commit ec1aac68c74f3c1e558d057c4c9fcfe6edbbea93
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
In C++11, string literal juxtaposition is not parsed as before: a string
literal glued directly to the next token (typically a macro that expands to
a string) can be taken as a user-defined literal suffix, so "A""B" is no
longer reliably the same as "AB". Instead, whitespace is required, like: "A" "B"
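A minimal example of the pattern this fixes (SUFFIX here is a hypothetical
stand-in for macros such as PRIu64):

    #include <cstdio>
    #define SUFFIX "B"          /* stand-in for macros such as PRIu64 */

    int main() {
      /* printf("A"SUFFIX "\n");   C++11: "A"SUFFIX parses as a
                                   user-defined literal -> compile error */
      printf("A" SUFFIX "\n");  /* OK: whitespace separates the tokens */
      return 0;
    }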
Merged Facebook commit dd2d11be7aaf3be270e740fb95cbc4eacb52f4d7
authored by Rongrong Zhong from https://github.com/facebook/mysql-5.6
This fixes MySQL Bug #68220 innodb_rows_updated is misleading on slave
http://bugs.mysql.com/bug.php?id=68220
Added innodb_system_rows_read/inserted/updated/deleted counters
that are the equivalent of innodb_rows_* but that only account for
changes made to system databases (mysql, information_schema and
performance_schema). These counters will be used on slaves to
differentiate the updates made on system databases from those made on
user databases.
innodb_rows_* status counters are not updated when innodb_system_rows_*
are updated.
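A minimal sketch of the intended accounting split (counter and helper names
here are assumptions for illustration, not the exact patch):

    #include <atomic>
    #include <string>

    // Sketch: route row counters by schema type so the system counter
    // moves instead of the user counter, never both.
    static std::atomic<long> innodb_rows_updated{0};
    static std::atomic<long> innodb_system_rows_updated{0};

    static bool is_system_database(const std::string &db) {
      return db == "mysql" || db == "information_schema" ||
             db == "performance_schema";
    }

    void account_row_updated(const std::string &db) {
      if (is_system_database(db))
        innodb_system_rows_updated++;  // new counter (assumed name)
      else
        innodb_rows_updated++;         // existing counter, unchanged
    }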
Merged Facebook commit ecff018632c6db49bad73d9233c3cdc9f41430e9
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
This change is to fix: http://bugs.mysql.com/62534
This makes innodb_max_dirty_pages_pct a double with min/default/max values
of 0.001, 75, and 99.999.
This also makes innodb_max_dirty_pages_pct_lwm and adaptive_flushing_lwm
doubles, as these sysvars are inter-dependent.
Added more to the BUFFER POOL AND MEMORY section of SHOW INNODB STATUS:
Percent pages dirty: X.X
This is n_dirty_pages / used_pages.
Percent all pages dirty: X.X
This is n_dirty_pages / all_pages.
Max dirty pages percent: X.X
This is innodb_max_dirty_pages_pct.
Also changed all of buf from 2 to 3 digits of precision (%.2f -> %.3f).
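A minimal sketch of how the reported ratios are computed (variable names
assumed, not the exact patch):

    #include <cstdio>

    // Sketch: the two dirty-page ratios reported in SHOW INNODB STATUS.
    // used_pages excludes free pages; all_pages is the whole buffer pool.
    void print_dirty_page_stats(double n_dirty_pages, double used_pages,
                                double all_pages, double max_dirty_pct) {
      printf("Percent pages dirty: %.3f\n",
             100.0 * n_dirty_pages / used_pages);
      printf("Percent all pages dirty: %.3f\n",
             100.0 * n_dirty_pages / all_pages);
      printf("Max dirty pages percent: %.3f\n", max_dirty_pct);
    }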
Merge Facebook commit f981a51a47519b0ba527917887f8adc6df9ae147
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6.
This just moves some structure definitions from inside a
single .cc file to a shared .h file, with a few tweaks to
allow these structures to be shared.
On its own, it should have no actual effect. This is needed later.
Merge Facebook commit 25295d003cb0c17aa8fb756523923c77250b3294
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
This adds a pointer to the trx to each mtr.
This allows the trx to be accessed in parts of the code
where it was otherwise not available. This is needed later.
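A minimal sketch of the shape of the change (field name assumed):

    // Sketch: carry the owning transaction in each mini-transaction
    // so deeper layers of the code can reach it.
    struct trx_t;      // the transaction, declared elsewhere

    struct mtr_t {
      trx_t *trx;      // new back-pointer; null for background mtrs
      // ... existing mini-transaction state ...
    };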
compressed pages
buf_flush_LRU() returns the number of pages processed. There are
two types of processing that can happen. A page can get evicted or
a page can get flushed. These two numbers are quite distinct and
should not be mixed.
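A minimal sketch of reporting the two counts separately (names assumed):

    // Sketch: report evictions and flushes separately instead of one
    // mixed "pages processed" total.
    struct flush_counts_t {
      unsigned long evicted;   // clean pages removed from the LRU list
      unsigned long flushed;   // dirty pages written out to disk
    };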
was not reset, causing rows to be lost.
modified:
storage/connect/connect.cc
storage/connect/ha_connect.cc
storage/connect/tabxcl.cpp
- Typo
modified:
storage/connect/connect.cc
storage/connect/ha_connect.cc
Merge Facebook commit 926a077b14b73c14094de7fc7aa913241b801b4d
authored by Inaam Rana from https://github.com/facebook/mysql-5.6.
This is a fix for upstream bugs
http://bugs.mysql.com/bug.php?id=71988
http://bugs.mysql.com/bug.php?id=70500
page_cleaner should work whether or not there is server activity.
Its iterations become a no-op when there is no work to do, but we
should not tie it to server activity.
The page_cleaner thread does spurious background flushing
because of the conditional sleep between iterations. The solution
is to make the sleep unconditional instead of dependent on server
activity.
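A minimal sketch of the intended loop shape (function names hypothetical):

    #include <chrono>
    #include <thread>

    // Sketch: sleep unconditionally each iteration; the work itself
    // is a no-op when there is nothing to flush, so idle iterations
    // stay cheap without gating on server activity.
    void page_cleaner_loop(volatile bool &shutdown) {
      while (!shutdown) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        // flush_dirty_pages_if_needed();  // hypothetical: no-op when idle
      }
    }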
Merged Facebook commit 6e06bbfa315ffb97d713dd6e672d6054036ddc21
authored by Inaam Rana from https://github.com/facebook/mysql-5.6.
Fixes MySQL bug http://bugs.mysql.com/bug.php?id=72123
The lock_timeout thread works in a tight loop, waking up every second
and checking for lock_wait_timeout. In addition, when a mysql
thread is forced to wait on a lock, it signals the lock_timeout thread
as well. This call is not required. In a heavily contended workload,
each thread going to wait will signal the lock_timeout thread, making
it work all the time. As the lock_timeout thread scans the array of
waiting threads under lock_sys::wait_mutex, which is already very
hot in contended loads, these extra scans can cause a significant
performance regression.
Also, in various code paths the lock_timeout thread was signalled where
the actual intention was to signal the InnoDB monitor thread.
Merged Facebook commit bdab302a7e3c37da21a1bffe1550cdbe6c906695
by Inaam Rana from https://github.com/facebook/mysql-5.6.
In os_event_wait_time_low() the logic to calculate abs_time
for wait is broken. The bug has been present at least since
5.5. It becomes acute when sub-second wait intervals
are passed. It is particularly relevant to us because the
page_cleaner thread will mostly request sub-second wait
intervals. This can potentially lead to near tight-loop
behaviour of page_cleaner, with much less sleep than what we'd
actually expect.
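A minimal sketch of the usual fix for this class of bug, assuming the
problem is unnormalized microsecond overflow when building the absolute
deadline (not necessarily the exact patch):

    #include <sys/time.h>
    #include <time.h>

    // Sketch: build an absolute deadline 'time_in_usec' from now,
    // carrying microsecond overflow into seconds so that sub-second
    // waits stay correct.
    struct timespec deadline_after_usec(unsigned long long time_in_usec) {
      struct timeval now;
      gettimeofday(&now, NULL);

      unsigned long long usec =
          (unsigned long long) now.tv_usec + time_in_usec;

      struct timespec abstime;
      abstime.tv_sec  = now.tv_sec + (time_t)(usec / 1000000);
      abstime.tv_nsec = (long)(usec % 1000000) * 1000;
      return abstime;
    }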
the beginning. Defining the STRING class and beginning to use it (MYSQL)
2) Change the xtrace, use_tempfile and exact_info connect variables from
GLOBAL to SESSION. The remaining GLOBAL variables have been made read-only.
3) Take care of LEX_STRING variables. The .str should not be regarded as
always being 0-terminated. This is handled by the Strz functions, which
make sure to return 0-terminated strings (see the sketch below).
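A minimal sketch of such a helper (LEX_STRING is redefined here for
self-containment; the exact signature in the patch may differ):

    #include <cstring>
    #include <cstdlib>

    // Sketch: copy a length-delimited LEX_STRING into a buffer that
    // is guaranteed to be zero-terminated.
    struct LEX_STRING { char *str; size_t length; };

    char *Strz(const LEX_STRING &ls) {
      char *p = (char *) malloc(ls.length + 1);
      memcpy(p, ls.str, ls.length);
      p[ls.length] = '\0';
      return p;
    }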
Bug fix:
- When inserting into a MYSQL table with special column(s), a query such as:
insert into t2 values(0,4,'new04'),(0,5,'new05');
failed, saying: column id (the special column) not found in t2.
Such a query is now accepted, but the special column must be counted in the
values (these 0 values are ignored).
- ROWID was returning 0-based row numbers. It is now 1-based.
modified:
storage/connect/array.cpp
storage/connect/blkfil.cpp
storage/connect/colblk.cpp
storage/connect/connect.cc
storage/connect/filamap.cpp
storage/connect/filamdbf.cpp
storage/connect/filamfix.cpp
storage/connect/filamtxt.cpp
storage/connect/filamvct.cpp
storage/connect/filamzip.cpp
storage/connect/filamzip.h
storage/connect/filter.cpp
storage/connect/global.h
storage/connect/ha_connect.cc
storage/connect/ha_connect.h
storage/connect/libdoc.cpp
storage/connect/mycat.cc
storage/connect/myconn.cpp
storage/connect/odbconn.cpp
storage/connect/plgdbutl.cpp
storage/connect/plugutil.c
storage/connect/reldef.cpp
storage/connect/tabcol.cpp
storage/connect/tabdos.cpp
storage/connect/tabfix.cpp
storage/connect/tabfmt.cpp
storage/connect/table.cpp
storage/connect/tabmul.cpp
storage/connect/tabmysql.cpp
storage/connect/tabmysql.h
storage/connect/taboccur.cpp
storage/connect/tabodbc.cpp
storage/connect/tabpivot.cpp
storage/connect/tabsys.cpp
storage/connect/tabtbl.cpp
storage/connect/tabutil.cpp
storage/connect/tabvct.cpp
storage/connect/tabwmi.cpp
storage/connect/tabwmi.h
storage/connect/tabxcl.cpp
storage/connect/tabxml.cpp
storage/connect/user_connect.cc
storage/connect/valblk.cpp
storage/connect/value.cpp
storage/connect/value.h
storage/connect/xindex.cpp
storage/connect/xobject.cpp
storage/connect/xobject.h
storage/connect/xtable.h
The test case waits for other threads to complete, but the wait is only 2
seconds. This is likely to sometimes be too little on our heavily loaded
buildbot VMs, which can easily stall for more than 2 seconds from time to time.
So let's try to increase the timeout (to about 40 seconds) and see if it
helps.
The test case runs SHOW SLAVE HOSTS. The output of this is only stable after
all slaves have had time to register with the master; this happens
asynchronously.
The test was waiting for the slave with server_id=3 to appear in the output,
but it was missing a similar wait for server_id=2. Thus, if server_id=2 was
much slower to connect for some reason, it could be missing from the output,
causing the test to fail.