"Range Checked for Each Record" should be only employed when the other
option would be cross-product join (i.e. the other option is so bad that
we hardly risk anything).
Previous logic was: use RCfER if there are no possible quick selects, or the
quick select would read > 100 rows. It also didn't always work as expected,
because the range optimizer changes table->quick_keys while we were looking
at sel->quick_keys.
Another angle is that recent versions have enabled use of Join Buffering
in e.g. outer joins. This further reduces the range of cases where RCfER
should be used.
We are still unable to estimate the cost of RCfER with any precision, so the
condition "no quick select or quick->records > 100" is now changed to a
hopefully better one: "no quick select or the quick select would cost more
than a full table scan".
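A minimal sketch of the revised rule; the struct and names here
(AccessEstimates, quick_cost, full_scan_cost) are illustrative stand-ins,
not the optimizer's actual API:

  struct AccessEstimates
  {
    bool   has_quick_select;   // range optimizer found a usable quick select
    double quick_cost;         // estimated cost of using the quick select
    double full_scan_cost;     // estimated cost of a full table scan
  };

  // Old rule: no quick select, or the quick select reads > 100 rows.
  // New rule: no quick select, or the quick select costs more than a full scan.
  static bool use_range_checked_for_each_record(const AccessEstimates &est)
  {
    return !est.has_quick_select || est.quick_cost > est.full_scan_cost;
  }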
Step#5: change the function remove_eq_conds() into a virtual method in Item.
This removes 6 virtual calls to Item_func::type(), and adds only 2
virtual calls to Item***::remove_eq_conds().
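A simplified sketch of the shape of the change (real signatures carry more
arguments, and Item_func_isnull is just one example of an overriding class):

  class Item
  {
  public:
    virtual ~Item() {}
    // Default: nothing to fold away, return the condition unchanged.
    virtual Item *remove_eq_conds() { return this; }
  };

  class Item_func_isnull : public Item
  {
  public:
    // Overriders do their specific rewrite (elided here) without being
    // probed through type().
    Item *remove_eq_conds() override { return this; }
  };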
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called implicitly dozens of times, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing the THD pointer through wherever it is needed.
Number of sql_alloc() calls reduced 345 -> 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
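A minimal sketch of the pattern, with simplified stand-ins for MEM_ROOT/THD
and a malloc-backed alloc_root() just to keep the example self-contained:

  #include <cstddef>
  #include <cstdlib>

  struct MEM_ROOT { /* arena state elided */ };
  struct THD { MEM_ROOT *mem_root; };

  // Stand-in for the real arena allocator.
  static void *alloc_root(MEM_ROOT *, std::size_t size) { return malloc(size); }

  // Before: void *p= sql_alloc(size);  // pthread_getspecific() lookup inside
  // After: the THD is passed down, so the hot path allocates directly.
  static void *list_node_alloc(THD *thd, std::size_t size)
  {
    return alloc_root(thd->mem_root, size);
  }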
Step #3: splitting the function check_equality() into a method in Item.
Implementing Item::check_equality() and Item_func_eq::check_equality().
Implementing Item_func_eq::build_equal_items() in addition to
Item_func::build_equal_items(), and moving the call to check_equality()
from Item_func::build_equal_items() to Item_func_eq::build_equal_items().
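A simplified sketch of the dispatch (the real methods take more parameters);
the point is that only the equality item implements the probe and calls it:

  class Item
  {
  public:
    virtual ~Item() {}
    virtual bool check_equality() { return false; }   // default: not an equality
  };

  class Item_func_eq : public Item
  {
  public:
    bool check_equality() override
    {
      // Decide whether this "a = b" can join a multiple-equality (body elided).
      return true;
    }
    Item *build_equal_items()
    {
      // The call site moved here from the generic Item_func implementation.
      check_equality();
      return this;
    }
  };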
Step 2c:
After discussion with Igor, it appeared that Item_field and Item_ref
could not appear in this context in the old function build_equal_items_for_cond():
  else if (cond->type() == Item::FUNC_ITEM ||
           cond->real_item()->type() == Item::FIELD_ITEM)
The part of the condition checking for FIELD_ITEM was dead code.
- Moving the implementation of Item_ident_or_func_or_sum::build_equal_items()
  to Item_func::build_equal_items().
- Restoring the derivation of Item_ident and Item_func_or_sum from
  Item_result_field. Removing Item_ident_or_func_or_sum.
Step#2:
1. Removes the function build_equal_items_for_cond() and
introduces a new method Item::build_equal_items() instead,
with specific implementations in the following Items:
Item (the default implementation)
Item_ident_or_func_or_sum
Item_cond
Item_cond_and
2. Adds a new abstract class Item_ident_or_func_or_sum,
a common parent for Item_ident and Item_func_or_sum,
as they have exactly the same build_equal_items().
3. Renames Item_cond_and::cond_equal to Item_cond_and::m_cond_equal,
to avoid confusion between the member and local variables named
"cond_equal".
- Remove ANALYZE's timing code from the execution path of regular
  SELECTs.
- Improve the tracker that tracks counts/execution times of SELECTs or
DML statements:
= regular execution just increments counters
= ANALYZE will also collect timings.
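An illustrative sketch of the split (class and member names are made up, not
the server's tracker classes): counting is unconditional, timing happens only
when the statement runs under ANALYZE:

  #include <chrono>
  #include <cstdint>

  class Time_and_counter_tracker_sketch
  {
    uint64_t count= 0;
    std::chrono::steady_clock::duration total{};
    std::chrono::steady_clock::time_point started;
    bool timed;                               // true only for ANALYZE
  public:
    explicit Time_and_counter_tracker_sketch(bool timed_arg) : timed(timed_arg) {}
    void start_tracking()
    {
      count++;                                // regular execution: counter only
      if (timed)
        started= std::chrono::steady_clock::now();
    }
    void stop_tracking()
    {
      if (timed)
        total+= std::chrono::steady_clock::now() - started;
    }
  };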
Backport from mysql-5.5 to mysql-5.1
Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY
Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.
Solution:
Change the ref_by address for an order_by field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
Backport from mysql-5.5 to mysql-5.1
Bug #19612819 : FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0
Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder uses the real_item. As a result, the
server fails later with an assert.
Solution:
Do not use real_item to get the temp table field.
Instead, use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
JOIN::cur_dups_producing_tables was not maintained correctly in
cases of greedy optimization (search_depth < n_tables).
Moved it to the POSITION structure, where it will be maintained automatically.
Removed POSITION::prefix_dups_producing_tables since its value can now
be calculated.
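A generic illustration (not the optimizer's real structures, and the OR below
is only a placeholder for the real derivation) of why per-POSITION state is
safe under greedy search: extending a prefix derives the value from the
previous position, and abandoning a prefix needs no undo step at all:

  #include <cstdint>
  #include <vector>

  typedef uint64_t table_map;

  struct Position_sketch
  {
    table_map dups_producing_tables;   // value for the join prefix ending here
  };

  static void extend_prefix(std::vector<Position_sketch> &plan, table_map added)
  {
    table_map prev= plan.empty() ? 0 : plan.back().dups_producing_tables;
    plan.push_back(Position_sketch{ prev | added });
  }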
Show total execution time (r_total_time_ms) for various parts of the
query:
1. time spent in SELECTs
2. time spent reading rows from storage engines
#2 currently gets the data from P_S.
This bug is caused by wrong computation and evaluation of
keyinfo->key_length. The issues were:
* Treating table->file->max_key_length() as an upper bound that a key must
  stay strictly below, while it actually is the maximum number of bytes
  allowed for a table key.
* Incorrectly computing keyinfo->key_length during KEY_PART_INFO
  creation: key metadata such as the field length (for strings) was
  added twice.
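One plausible reading of the first point, sketched with illustrative names
(this is not the actual check in the source):

  #include <cstddef>

  static bool key_length_is_acceptable(std::size_t key_length,
                                       std::size_t engine_max_key_length)
  {
    // Treating the limit as "must not be reached" (<) rejects valid keys;
    // it is the maximum possible size, so the check must be inclusive (<=).
    return key_length <= engine_max_key_length;
  }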
Temporary table count fix. The temporary table counter was incremented
even when the table was not actually created (when do_not_open was passed
as TRUE to create_tmp_table).
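A minimal sketch of the counting rule (the function name and counter are
illustrative, not the server's): bump the status variable only when the
temporary table is actually created:

  static void count_tmp_table_if_created(bool do_not_open,
                                         unsigned long *created_tmp_tables)
  {
    if (!do_not_open)       // do_not_open == TRUE means nothing is materialized
      (*created_tmp_tables)++;
  }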
Problem:
find_order_by_list does not update the address of order_item
correctly after resolving.
Solution:
Change the ref_by address for an order_by field, if it is a
SUM_FUNC_ITEM, to the address of the field present in
all_fields.
Redefine FT_KEYPART so that it does not conflict with Hash Join.
Hash join stores field->field_index in KEYUSE::keypart, so we must
use a value of FT_KEYPART that's greater than MAX_FIELDS.
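A sketch of the constraint (the exact constant chosen in the source may
differ); any value above MAX_FIELDS can never collide with a stored
field_index:

  #define MAX_FIELDS  4096               /* stand-in for the server's column limit */
  #define FT_KEYPART  (MAX_FIELDS + 10)  /* outside the valid field_index range */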
Problem:
While getting the temp table field for a REF_ITEM,
make_sortorder uses the real_item. As a result, the
server fails later with an assert.
Solution:
Do not use real_item to get the temp table field.
Instead, use the REF_ITEM itself, as temp table fields
are created for the REF_ITEM, not the real_item.
- Fixed compiler warnings
- Added include/wait_for_binlog_checkpoint.inc, as suggested by JonasO
- Updated 'build-tags' to work with git (Patch by Serg)