table->move_fields wasn't undone in case of error.
1. move_fields is unconditionally undone even when an error has occurred
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
Before the FRM is written, walk vcol expressions through
check_table_name_processor() and check that the (db, table_name)
qualifiers of field items match the table being created.
We cannot do this in check_vcol_func_processor() because expressions
written to and loaded from the FRM no longer contain table name
qualifiers.
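A hedged illustration of the kind of statements this check affects
(table and column names are made up):

CREATE TABLE t1 (a INT, b INT AS (t1.a));  -- qualifier matches the table being created: accepted
CREATE TABLE t2 (a INT, b INT AS (t1.a));  -- qualifier names another table: now rejected before the FRM is written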
This feature adds support for ignorable indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The INFORMATION_SCHEMA.STATISTICS table gets a new column named IGNORED
that stores whether an index is ignored or not.
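A minimal sketch of the intended usage, following the syntax described
above (table, column and index names are illustrative):

CREATE TABLE t1 (
  a INT,
  b INT,
  KEY k_a (a),
  KEY k_b (b) IGNORE     -- k_b is created and maintained, but ignored
);
CREATE INDEX k_b2 ON t1 (b) NOT IGNORE;
SELECT index_name, ignored
FROM information_schema.statistics
WHERE table_name = 't1';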
Implement a different fix for
"MDEV-19232: Floating point precision / value comparison problem"
Instead of truncating decimal values after every division,
truncate them for comparison purposes.
This reverts commit 62d73df6b2 but keeps the test.
Add proper error handling of innobase_get_computed_value results in
row_upd_store_row/row_upd_store_v_row.
Also add an assertion in row_vers_build_clust_v_col to fail during row
purge.
Add one more assertion in row_sel_sec_rec_is_for_clust_rec to help catch
possible future issues.
The issue occurs when the subquery_cache is enabled.
On a cache miss the division produced a value with scale 9, while on a
cache hit the cached value was returned with a different scale; because
the scales differed, the WHERE condition evaluated to FALSE and the
output was incomplete.
To fix this, round the decimal to the scale specified by Item::decimals,
which makes sure the values are compared with the same scale.
update_virtual_field() is called as part of the index rebuild in
ha_myisam::repair() (MDEV-5800), which is done when a bulk INSERT
finishes.
The assertion in update_virtual_field() was added as part of MDEV-16222
because update_virtual_field() returns in_use->is_error(). The idea: the
error status already set before update_virtual_field() is called and the
status returned by update_virtual_field() have different semantics and
were wrongly mixed, so the former can falsely influence the latter.
The code erroneously allowed both:
INSERT INTO t1 (vcol) VALUES (DEFAULT);
INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));
The former is OK, but the latter is not.
Adding a new virtual method in Item:
virtual bool vcol_assignment_allowed_value() const { return false; }
Item_null, Item_param and Item_default_value override it.
Item_default_value overrides it so that it:
- allows DEFAULT
- disallows DEFAULT(col)
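A minimal sketch of the resulting behaviour (the table and its virtual
column are illustrative):

CREATE TABLE t1 (
  a INT,
  vcol INT AS (a + 1) VIRTUAL
);
INSERT INTO t1 (vcol) VALUES (DEFAULT);      -- still accepted
INSERT INTO t1 (vcol) VALUES (DEFAULT(a));   -- now rejected with an error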
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0. In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering() where we didn't use
keyread if the index was changed
- Fixed a bug where we didn't use index only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries.
- Fixed that matching_candidates_in_table() uses the same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
Remove usage of the deprecated variable storage_engine. It was deprecated
in 5.5 but never issued a deprecation warning. Make it issue a warning in
10.5.1. Replaced with default_storage_engine.
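For example (session scope shown; the global variable behaves the same
way):

SET SESSION storage_engine = InnoDB;          -- deprecated, now issues a warning
SET SESSION default_storage_engine = InnoDB;  -- replacement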
Add support for referential constraints directly in column definitions:
create table t1 (id1 int primary key);
create table t2 (id2 int references t1(id1));
The referenced field name can be omitted if it is equal to the foreign
field name:
create table t1 (id int primary key);
create table t2 (id int references t1);
Until 10.5 this syntax was understood by the parser but was silently
ignored.
For generated columns this syntax is disabled at the parser level with
ER_PARSE_ERROR. Note that a separate FOREIGN KEY clause for generated
columns is disabled at the storage engine level.
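Assuming the constraint is now actually created (as implied above), a
sketch of the effect with InnoDB (values are illustrative):

create table t1 (id int primary key) engine=innodb;
create table t2 (id int references t1) engine=innodb;
insert into t1 values (1);
insert into t2 values (1);   -- ok: referenced row exists in t1
insert into t2 values (2);   -- fails: foreign key constraint is violated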
Conversion to a temporal data type resulting in a lower precision
depends on TIME_ROUND_FRACTIONAL. This dependency is now taken into
account in:
- indexed generated virtual column expressions
- persistent virtual column expressions
A warning is now issued if the conversion from the generation expression
to the column data type depends on TIME_ROUND_FRACTIONAL.
The warning will be changed to an error in 10.5.
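A hypothetical example of an affected definition: the generation
expression has microsecond precision while the column stores whole
seconds only, so the stored value depends on the TIME_ROUND_FRACTIONAL
sql_mode flag (truncate vs. round), and creating the column now raises
the warning:

CREATE TABLE t1 (
  a DATETIME(6),
  b DATETIME AS (a) PERSISTENT
);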