subquery when ORDER BY is present
Currently, when setting r_limit, we divide by the number of iterations for which the dependent
subquery is invoked. This is not needed in the case of LIMIT. For varying limits we state in the
output that the limit varies between executions.
There is also a typo for filtered: we forgot to multiply by 100, as it is reported as a percentage.
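A minimal sketch of the filtered fix (the variable names here are hypothetical, not the actual
tracker members): the tracker accumulates a fraction of rows, while the output column reports a
percentage, so the value has to be scaled by 100 before printing.

  /* Hypothetical names, for illustration only */
  double rows_fetched=     1000.0;  /* rows read by the access method          */
  double rows_after_where=  250.0;  /* rows that passed the attached condition */

  double r_filtered=     rows_after_where / rows_fetched;  /* fraction, 0..1          */
  double r_filtered_pct= r_filtered * 100.0;               /* 25.0, the value printed */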
- Part 3: Adding mem_root to push_back() and push_front() (usage sketch below)
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
- Added mem_root to all calls to new Item
- Added private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves us one call to current_thd per Item creation.
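A usage sketch of the new overloads from Part 3 (assuming, as described above, that
List<T>::push_back() and push_front() now accept an explicit MEM_ROOT and return true on
out-of-memory like the single-argument versions; item, first_item and thd are assumed to be
in scope):

  List<Item> args;
  if (args.push_back(item, thd->mem_root) ||       /* list node allocated on mem_root,  */
      args.push_front(first_item, thd->mem_root))  /* no sql_alloc()/current_thd lookup */
    return true;                                   /* out of memory */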
Added a mandatory thd parameter to the constructors of Item (and all derived classes).
Added thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
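Roughly, Item creation after this change looks like the sketch below (Item_int is just one
example subclass; the exact constructor arguments vary per class). Both the MEM_ROOT used by
the placement operator new and the thd argument come from the caller, so no current_thd
lookup is needed:

  /* Sketch: allocate the Item on an explicit mem_root and pass thd down */
  Item *one= new (thd->mem_root) Item_int(thd, 1);
  if (!one)
    return true;                                   /* out of memory */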
Fixes over the original patch:
- Fix variable/class/other names
- Fix the JSON output to be in line with the output of other JSON
constructs we produce
Added a THD argument to select_result and all derived classes.
This reduces the number of pthread_getspecific() calls from 796 to 776 per OLTP RO
transaction.
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called implicitly dozens of times, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced many implicit sql_alloc() calls with direct mem_root allocation,
passing the THD pointer through wherever it is needed (see the sketch below).
Number of sql_alloc() calls reduced 345 -> 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
(Based on original patch by Sanja Byelkin)
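The difference, roughly (a sketch assuming the usual MEM_ROOT API; len and thd are assumed
to be in scope):

  /* Before: sql_alloc() has to find THD::mem_root via pthread_getspecific() */
  char *buf= (char*) sql_alloc(len);

  /* After: the caller already has thd, so allocate on its mem_root directly */
  char *buf2= (char*) alloc_root(thd->mem_root, len);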
Make the code that produces JSON output handle LooseScan quick select.
The output we produce is compatible with MySQL 5.6.
- Remove ANALYZE's timing code from the execution path of regular SELECTs.
- Improve the tracker that tracks counts/execution times of SELECTs or
DML statements:
= regular execution just increments counters
= ANALYZE will also collect timings.
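A rough sketch of the tracker idea (class and member names are illustrative, not the ones
used in the tree): counting is always on and cheap, while the timer is read only when the
statement is run under ANALYZE:

  /* Illustrative only */
  class Exec_tracker_sketch
  {
    ulonglong count;
    ulonglong cycles;    /* accumulated timer ticks, ANALYZE only */
    bool timed;          /* true when running under ANALYZE */
  public:
    Exec_tracker_sketch(bool timed_arg)
      : count(0), cycles(0), timed(timed_arg) {}
    void start()
    {
      count++;                          /* regular execution: counter only */
      if (timed)
        cycles-= my_timer_cycles();     /* ANALYZE: also start the timer   */
    }
    void stop()
    {
      if (timed)
        cycles+= my_timer_cycles();     /* accumulate the elapsed ticks    */
    }
  };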
Show total execution time (r_total_time_ms) for various parts of the
query:
1. time spent in SELECTs
2. time spent reading rows from storage engines
#2 currently gets the data from P_S.
- Changed 0x%lx -> %p
array.c:
- Static (preallocated) buffer can now be anywhere
my_sys.h
- Define MY_INIT_BUFFER_USED
sql_delete.cc & sql_lex.cc
- Use memroot when allocating classes (avoids call to current_thd)
sql_explain.h:
- Use preallocated buffers
sql_explain.cc:
- Use preallocated buffers and memroot
sql_select.cc:
- Use multi_alloc_root() instead of many separate alloc_root() calls (sketch below)
- Update calls to Explain
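The multi_alloc_root() change replaces several separate allocations with one call that carves
all pieces out of a single block. A sketch (the particular arrays and table_count are made up
for illustration; the argument list is pointer/size pairs terminated by NullS):

  JOIN_TAB *stat;
  JOIN_TAB **stat_ref;
  if (!multi_alloc_root(thd->mem_root,
                        &stat,     sizeof(JOIN_TAB)  * table_count,
                        &stat_ref, sizeof(JOIN_TAB*) * table_count,
                        NullS))
    return true;                                   /* out of memory */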
- Switch Explain data structure from "flat" representation of
SJ-Materialization into nested one.
- Update functions that print tabular output to operate on the
nested structure.
- Add function to generate JSON output.
- Basic support for JOIN buffering
- The output is not polished but catches the main point:
tab->select_cond and tab->cache_select->cond are printed separately.
- Hash join support is poor still.
- Also fixed indentation in JOIN_TAB::save_explain_data
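A rough outline of the nested representation (member names are approximations of what is in
sql_explain.h, not a verbatim copy): the table that wraps an SJ-Materialization nest owns a
child join object, instead of the nest's tables being flattened into the parent's table list.

  class Explain_basic_join;

  class Explain_table_access : public Sql_alloc
  {
  public:
    /* ... the regular per-table EXPLAIN columns ... */
    Explain_basic_join *sjm_nest;      /* non-NULL for an SJ-Materialization nest */
  };

  class Explain_basic_join : public Sql_alloc
  {
  public:
    Explain_table_access **join_tabs;  /* tables inside this (possibly nested) join */
    uint n_join_tabs;
  };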
Writing JSON:
- Fix a bug in Single_line_formatting_helper
- Add Json_writer_nesting_guard - safety class
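The guard is an RAII safety helper for debug builds. A sketch of the idea (not the exact
implementation; the real class is a friend of Json_writer, and the depth member below is a
stand-in for whatever counter the writer keeps):

  class Json_writer_sketch
  {
  public:
    int indent_level;                  /* current nesting depth of the JSON document */
    /* start_object()/end_object() etc. adjust indent_level */
  };

  class Json_writer_nesting_guard_sketch
  {
    Json_writer_sketch *writer;
    int saved_depth;
  public:
    Json_writer_nesting_guard_sketch(Json_writer_sketch *w)
      : writer(w), saved_depth(w->indent_level) {}
    ~Json_writer_nesting_guard_sketch()
    {
      /* catch unbalanced start_/end_ calls made inside the enclosing scope */
      DBUG_ASSERT(saved_depth == writer->indent_level);
    }
  };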
EXPLAIN JSON support
- Add basic subquery support
- Add tests for UNION/UNION ALL.