Variant #4 of the fix.
Make ORDER BY optimization functions take into account multiple
equalities. This is done in several places:
- remove_const() checks whether we can sort the first table in the
join, or whether we need to put the rows into a temporary table and then sort.
- test_if_order_by_key() checks whether there are indexes that
can be used to produce the required ordering
- make_unireg_sortorder() constructs sort criteria for filesort.
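As a rough illustration of the idea, here is a simplified C++ sketch with hypothetical types; MultipleEquality, order_item_is_constant() and index_column_matches() are stand-ins for the Item_equal-based logic in remove_const()/test_if_order_by_key(), not the server's real API:

  // Simplified model: a "multiple equality" is a set of columns that are all
  // equal, optionally also equal to a constant.
  #include <optional>
  #include <set>
  #include <string>
  #include <vector>

  struct MultipleEquality {
    std::set<std::string> columns;      // t1.a = t2.b = ...
    std::optional<int> bound_constant;  // ... = const, if present
  };

  // remove_const()-style check: an ORDER BY column can be dropped if some
  // multiple equality ties it to a constant (it is the same in every row).
  bool order_item_is_constant(const std::string &col,
                              const std::vector<MultipleEquality> &eqs) {
    for (const auto &e : eqs)
      if (e.bound_constant && e.columns.count(col))
        return true;
    return false;
  }

  // test_if_order_by_key()-style check: an index column can produce the
  // required ordering for col if both belong to the same multiple equality.
  bool index_column_matches(const std::string &col, const std::string &key_col,
                            const std::vector<MultipleEquality> &eqs) {
    if (col == key_col)
      return true;
    for (const auto &e : eqs)
      if (e.columns.count(col) && e.columns.count(key_col))
        return true;
    return false;
  }

  int main() {
    MultipleEquality e1{{"t1.a", "t2.b"}, 10};            // t1.a = t2.b = 10
    MultipleEquality e2{{"t1.c", "t3.d"}, std::nullopt};  // t1.c = t3.d
    std::vector<MultipleEquality> eqs{e1, e2};
    bool drop   = order_item_is_constant("t2.b", eqs);        // true
    bool usable = index_column_matches("t1.c", "t3.d", eqs);  // true
    return (drop && usable) ? 0 : 1;
  }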
1. The same message text is used for INSERT and INSERT IGNORE.
2. No new warnings in UPDATE IGNORE yet (a big change for 5.5).
Also replace a commonly used expression with a named constant.
Case: table with a NOT NULL field, BEFORE UPDATE trigger,
and UPDATE with a subquery that uses GROUP BY on that
NOT NULL field, and needs a temporary table for it.
Because of the BEFORE trigger, the field temporarily becomes nullable,
but its Item_field (used in GROUP BY) does not.
When working with the temptable, some code looked at item->maybe_null
and some at field->null_ptr.
The fix: make Item_field nullable when its field is.
That, however, triggers an assert: the group key size is calculated
before the item is made nullable, so the group key doesn't have a
null byte. The additional fix: make fields/items nullable before the
group key size is calculated.
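A minimal sketch of the key-length issue, assuming simplified stand-in types (FakeField/FakeItemField are hypothetical, not the server's Field/Item_field):

  #include <cassert>
  #include <cstddef>

  struct FakeField {
    bool   nullable;   // becomes true while a BEFORE trigger may write NULL
    size_t data_len;
  };

  struct FakeItemField {
    FakeField *field;
    bool       maybe_null;  // must mirror field->nullable
  };

  // One extra byte in the group key stores the NULL flag -- but only if the
  // item is known to be nullable when the key length is computed.
  size_t group_key_part_length(const FakeItemField &it) {
    return it.field->data_len + (it.maybe_null ? 1 : 0);
  }

  int main() {
    FakeField f{false, 4};
    FakeItemField item{&f, f.nullable};

    // The fix, in this model: make the field *and* the item nullable first,
    // and only then compute the group key size, so the NULL byte is included.
    f.nullable = true;
    item.maybe_null = f.nullable;

    size_t key_len = group_key_part_length(item);
    assert(key_len == 5);  // 4 data bytes + 1 NULL flag byte
    (void)key_len;
    return 0;
  }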
IGNORE does not suppress FOREIGN KEY CONSTRAINT violations.
Analysis
=======
INSERT and UPDATE operations that use the IGNORE keyword and cause
FOREIGN KEY constraint violations report an error despite the IGNORE
keyword. Foreign key violation errors were not ignored; they were
reported as errors instead of warnings even when IGNORE was set.
Fix
===
Added code to ignore the foreign key violation errors and
report them as warnings when the IGNORE keyword is used.
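A hedged sketch of the intended behaviour, with hypothetical names (handle_row_error() and the error number constant are illustrative, not the real handler/THD interface):

  #include <iostream>

  enum class RowStatus { kOk, kError, kSkippedWithWarning };

  // Illustrative error number for "foreign key constraint fails".
  constexpr int kErrNoReferencedRow = 1452;

  RowStatus handle_row_error(int errcode, bool ignore_flag) {
    if (ignore_flag && errcode == kErrNoReferencedRow) {
      std::cerr << "Warning: foreign key constraint violated, row skipped\n";
      return RowStatus::kSkippedWithWarning;  // statement keeps running
    }
    std::cerr << "Error " << errcode << ": statement aborted\n";
    return RowStatus::kError;
  }

  int main() {
    handle_row_error(kErrNoReferencedRow, /*ignore_flag=*/true);   // warning
    handle_row_error(kErrNoReferencedRow, /*ignore_flag=*/false);  // hard error
    return 0;
  }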
NOT NULL constraint must be checked *after* the BEFORE triggers.
That is, for INSERT and UPDATE statements even NOT NULL fields must
be able to store a NULL temporarily, at least while BEFORE
INSERT/UPDATE triggers are running.
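A minimal sketch of that ordering, with hypothetical types (FakeColumn and store_row() are illustrative only):

  #include <functional>
  #include <optional>
  #include <stdexcept>

  struct FakeColumn {
    bool declared_not_null;
    std::optional<int> value;  // empty == NULL, allowed temporarily
  };

  void store_row(FakeColumn &col, std::optional<int> incoming,
                 const std::function<void(FakeColumn &)> &before_trigger) {
    col.value = incoming;  // may be NULL even for a NOT NULL column
    before_trigger(col);   // the trigger may replace NULL with a real value
    if (col.declared_not_null && !col.value)  // checked only *after* the trigger
      throw std::runtime_error("Column cannot be NULL");
  }

  int main() {
    FakeColumn c{true, std::nullopt};
    // The trigger supplies a value, so the deferred NOT NULL check passes.
    store_row(c, std::nullopt, [](FakeColumn &col) { col.value = 42; });
    return 0;
  }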
MDEV-8938 Server Crash on Update with joins
Do the unique-table check after setup_fields() of the UPDATE, because the unique-table check can materialize a table and field resolving is not needed after materialization.
Assertion `inited==INDEX' failed in int handler::ha_index_first(uchar*)
The crash happened because errors from init_read_record_idx() were not handled.
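Roughly, the pattern of the fix, using a hypothetical reader type (FakeIndexReader is not the real handler class):

  #include <cstdio>

  struct FakeIndexReader {
    bool inited = false;
    int init(bool fail) { inited = !fail; return fail ? 1 : 0; }
    int first_row() {
      // Corresponds to the "inited == INDEX" assertion firing in ha_index_first().
      if (!inited) { std::fprintf(stderr, "reader not initialized\n"); return 1; }
      return 0;
    }
  };

  int read_all(FakeIndexReader &r, bool simulate_init_error) {
    if (int err = r.init(simulate_init_error))
      return err;  // propagate instead of going on to first_row()
    return r.first_row();
  }

  int main() {
    FakeIndexReader r;
    read_all(r, /*simulate_init_error=*/true);  // error propagated, no crash
    return 0;
  }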
- Part 3: Adding mem_root to push_back() and push_front()
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
- Added mem_root to all calls to new Item
- Added private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves one call to current_thd per Item creation.
Added mandatory thd parameter to Item (and all derivative classes) constructor.
Added thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
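A compressed sketch of both changes above, using hypothetical Arena/FakeItem/FakeList types (the real code uses MEM_ROOT, Item and List, and the real List also allocates its nodes on the mem_root):

  #include <cstddef>
  #include <vector>

  struct Arena {  // stands in for MEM_ROOT
    std::vector<char *> blocks;
    void *alloc(size_t n) { blocks.push_back(new char[n]); return blocks.back(); }
    ~Arena() { for (char *b : blocks) delete[] b; }
  };

  struct FakeItem {
    // The only way to allocate an item is on an arena, mirroring the private
    // operator new(size_t) plus the mem_root-taking placement form.
    static void *operator new(size_t n, Arena *a) { return a->alloc(n); }
    static void operator delete(void *, Arena *) {}  // matching placement delete
    explicit FakeItem(Arena * /*session_arena*/) {}  // "THD" passed explicitly
   private:
    static void *operator new(size_t);  // plain new: private and never defined
  };

  struct FakeList {
    // The caller hands in the arena instead of the list fetching it from a
    // thread-local (a std::vector is used here only to keep the sketch short).
    std::vector<FakeItem *> elems;
    bool push_back(FakeItem *it, Arena * /*a*/) { elems.push_back(it); return true; }
  };

  int main() {
    Arena arena;
    FakeList list;
    list.push_back(new (&arena) FakeItem(&arena), &arena);
    return 0;
  }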
[Attempt #] Make the code that handles "Prepare" phase for multi-table
UPDATE statements handle non-merged semijoins. It can encounter them when
a prepared statement is executed for the second time.
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or when there were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some places where current_thd is not necessarily defined.
- Removed calls to current_thd when we have access to thd.
Part of this is an optimization (not calling current_thd when not needed),
but part is a bug fix for error conditions where current_thd is not defined
(for example during startup and shutdown of mysqld).
Notable renames were done, as otherwise a lot of functions would have had to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed the old 'tab' prefix in JOIN_TAB::save_explain_data() and used members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
Step#5: changing the function remove_eq_conds() into a virtual method in Item.
It removes 6 virtual calls for Item_func::type(), and adds only 2
virtual calls for Item***::remove_eq_conds().
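The refactoring pattern, sketched with hypothetical classes (Node and simplify_conditions() stand in for Item and remove_eq_conds()):

  #include <memory>

  struct Node {
    virtual ~Node() = default;
    // One virtual call replaces the repeated type() checks a free function
    // would need; subclasses decide how to simplify themselves.
    virtual Node *simplify_conditions() { return this; }
  };

  struct AlwaysTrue : Node {
    Node *simplify_conditions() override { return nullptr; }  // drop condition
  };

  struct Comparison : Node {
    Node *simplify_conditions() override { return this; }     // keep as-is
  };

  int main() {
    std::unique_ptr<Node> n = std::make_unique<AlwaysTrue>();
    Node *simplified = n->simplify_conditions();  // nullptr: condition removed
    return simplified == nullptr ? 0 : 1;
  }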
Added THD argument to select_result and all derivative classes.
This reduces the number of pthread_getspecific() calls from 796 to 776 per OLTP RO
transaction.
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing the THD pointer through wherever it is needed.
The number of sql_alloc() calls was reduced from 345 to 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped from 0.76 to 0.59;
sql_alloc() overhead dropped from 0.25 to 0.06.
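A rough sketch of the cost argument, with hypothetical code (Arena, indirect_alloc() and direct_alloc() stand in for MEM_ROOT, sql_alloc() and direct mem_root allocation):

  #include <cstddef>
  #include <vector>

  struct Arena {  // stands in for MEM_ROOT
    std::vector<char *> blocks;
    void *alloc(size_t n) { blocks.push_back(new char[n]); return blocks.back(); }
    ~Arena() { for (char *b : blocks) delete[] b; }
  };

  thread_local Arena *current_session_arena = nullptr;

  // sql_alloc()-like path: an out-of-line call plus a thread-local read per
  // allocation -- the overhead the numbers above refer to.
  void *indirect_alloc(size_t n) { return current_session_arena->alloc(n); }

  // Direct path: the caller already holds the arena (e.g. via a THD argument).
  inline void *direct_alloc(Arena *a, size_t n) { return a->alloc(n); }

  int main() {
    Arena arena;
    current_session_arena = &arena;
    indirect_alloc(16);        // needs the TLS lookup
    direct_alloc(&arena, 16);  // arena passed explicitly
    return 0;
  }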
- Remove ANALYZE's timing code from the execution path of regular
SELECTs.
- Improve the tracker that tracks counts/execution times of SELECTs or
DML statements:
= regular execution just increments counters
= ANALYZE will also collect timings.
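A sketch of the split, using a hypothetical tracker class (ExecTracker is illustrative, not the server's tracker):

  #include <chrono>
  #include <cstdint>

  class ExecTracker {
    bool timed_;                    // true only when running under ANALYZE
    std::uint64_t count_ = 0;
    std::chrono::nanoseconds total_{0};
    std::chrono::steady_clock::time_point start_;
   public:
    explicit ExecTracker(bool timed) : timed_(timed) {}
    void begin() {
      ++count_;                     // regular execution pays only for this
      if (timed_) start_ = std::chrono::steady_clock::now();
    }
    void end() {
      if (timed_)
        total_ += std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::steady_clock::now() - start_);
    }
    std::uint64_t count() const { return count_; }
    std::chrono::nanoseconds elapsed() const { return total_; }
  };

  int main() {
    ExecTracker regular(false), analyze(true);
    regular.begin(); regular.end();  // just increments the counter
    analyze.begin(); analyze.end();  // also accumulates wall-clock time
    return (regular.count() == 1 && analyze.count() == 1) ? 0 : 1;
  }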
Implement a new mode for parallel replication. In this mode, all transactions
are optimistically applied in parallel. In case of conflicts, the offending
transaction is rolled back and retried later, non-parallel.
This is an early-release patch to facilitate testing; more changes to the user
interface / options are to be expected. The new mode is not enabled by default.
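Very roughly, the retry scheme looks like this (a hypothetical sketch, unrelated to the real replication interfaces; conflict detection and the actual parallel execution are left out):

  #include <functional>
  #include <vector>

  struct Txn {
    std::function<bool()> apply;   // returns false on conflict
    std::function<void()> rollback;
  };

  void apply_optimistically(std::vector<Txn> &batch) {
    std::vector<Txn *> retry_serially;
    for (Txn &t : batch) {             // imagine these running in parallel
      if (!t.apply()) {                // conflict detected
        t.rollback();
        retry_serially.push_back(&t);  // defer to the non-parallel pass
      }
    }
    for (Txn *t : retry_serially)      // ordered, one at a time
      t->apply();
  }

  int main() {
    std::vector<Txn> batch = {
      {[] { return true; }, [] {}},                       // applies cleanly
      {[] { static int n = 0; return n++ > 0; }, [] {}},  // conflicts once
    };
    apply_optimistically(batch);
    return 0;
  }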
Reset default fields not for every modified row, but only once, at the
beginning, as the set of modified fields doesn't change.
Exception: INSERT ... ON DUPLICATE KEY UPDATE, where the set of fields
does change per row; in that case we reset default fields per row.
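A small sketch of the change, with hypothetical helpers (reset_default_fields() and insert_rows() are illustrative only):

  #include <vector>

  struct Row {};

  void reset_default_fields() { /* mark unassigned columns to take their defaults */ }
  void write_row(const Row &) { /* store one row */ }

  void insert_rows(const std::vector<Row> &rows, bool on_duplicate_key_update) {
    if (!on_duplicate_key_update)
      reset_default_fields();    // field set is fixed: reset once, up front
    for (const Row &r : rows) {
      if (on_duplicate_key_update)
        reset_default_fields();  // the written-field set can differ per row
      write_row(r);
    }
  }

  int main() {
    insert_rows({Row{}, Row{}}, /*on_duplicate_key_update=*/false);
    insert_rows({Row{}, Row{}}, /*on_duplicate_key_update=*/true);
    return 0;
  }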