Updated tests: cases that hit bugs or that cannot be run under the
cursor protocol were excluded with
"--disable_cursor_protocol"/"--enable_cursor_protocol".
Fix for v.10.5
Move table open result processing to the caller
* st_schema_table::process_table doesn't have to check whether the table
was opened successfully
* It also doesn't have to check for a thd error and convert it to a warning
* This simplifies adding new tables into information_schema
* A callback can still output some info to the user in case of an
  error. To do this, I_S_EXTENDED_ERROR_HANDLING should be specified in
  i_s_requested_object.
Two new information_schema views are added:
* PERIOD table -- columns TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME,
PERIOD_NAME, START_COLUMN_NAME, END_COLUMN_NAME.
* KEY_PERIOD_USAGE -- works similarly to KEY_COLUMN_USAGE, but for periods.
Columns CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME,
TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, PERIOD_NAME
Two new columns are added to the COLUMNS view:
IS_SYSTEM_TIME_PERIOD_START, IS_SYSTEM_TIME_PERIOD_END -- contain
YES/NO. A usage sketch follows.
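A usage sketch, assuming the views are addressable under the names
listed above (schema, table and column data are illustrative):

  CREATE TABLE p1 (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, p WITHOUT OVERLAPS)
  );

  SELECT period_name, start_column_name, end_column_name
    FROM information_schema.period
   WHERE table_schema = 'test' AND table_name = 'p1';

  SELECT constraint_name, table_name, period_name
    FROM information_schema.key_period_usage
   WHERE table_schema = 'test' AND table_name = 'p1';

  CREATE TABLE v1 (
    a INT,
    row_start TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
    row_end TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME(row_start, row_end)
  ) WITH SYSTEM VERSIONING;

  SELECT column_name, is_system_time_period_start,
         is_system_time_period_end
    FROM information_schema.columns
   WHERE table_schema = 'test' AND table_name = 'v1';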
According to the standard, the autoincrement column (i.e. the *identity
column*) should be advanced on each insert implicitly made by
UPDATE/DELETE ... FOR PORTION.
This is very inconvenient in several notable cases. Consider a
WITHOUT OVERLAPS key with an autoincrement column:
id int auto_increment, unique(id, p without overlaps)
An update or delete with FOR PORTION creates the expectation that id
will remain unchanged in such a case.
The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, but the
generation rules differ in many ways. For example, there is also the
notion of an autoincrement index, which is bound to the autoincrement
field.
We will define our own generation rule for the PORTION OF operations
involving AUTO_INCREMENT:
* If the autoincrement index contains a WITHOUT OVERLAPS specification,
  then a new value should not be generated; otherwise it should.
Apart from WITHOUT OVERLAPS there is another notable case, mentioned by
the reporter: a unique key that contains an autoincrement column and a
field from the period specification:
id int auto_increment, unique(id, s), period for p(s, e)
For this case no exception is made, and autoincrementing proceeds
according to the standard (i.e. the value is advanced on implicit
inserts). Both behaviours are sketched below.
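A sketch of both behaviours under the rule above (column names and data
values are illustrative):

  CREATE TABLE t1 (
    id INT AUTO_INCREMENT,
    x INT,
    s DATE,
    e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, p WITHOUT OVERLAPS)
  );
  INSERT INTO t1 (x, s, e) VALUES (1, '2000-01-01', '2001-01-01');
  -- Splits the row; with WITHOUT OVERLAPS on the autoincrement key the
  -- implicitly inserted rows keep id = 1.
  UPDATE t1 FOR PORTION OF p FROM '2000-03-01' TO '2000-06-01'
    SET x = 2;

  CREATE TABLE t2 (
    id INT AUTO_INCREMENT,
    x INT,
    s DATE,
    e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, s)
  );
  INSERT INTO t2 (x, s, e) VALUES (1, '2000-01-01', '2001-01-01');
  -- No WITHOUT OVERLAPS on the autoincrement key: the implicit inserts
  -- receive fresh id values, as the standard prescribes.
  UPDATE t2 FOR PORTION OF p FROM '2000-03-01' TO '2000-06-01'
    SET x = 2;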
`ha_heap::clone` was creating a handler via the share's handlerton,
which for a partitioned table is the partition handlerton.
The handler's own handlerton should be used instead.
Here in particular, the HEAP handlerton will be used and it will create
a `ha_heap` handler.
The bug was fixed by the MDEV-22599 bugfix, which changed the
`Field::cmp` call to `Field::cmp_prefix` in
`TABLE::check_period_overlaps`.
The trick is that `Field_bit::cmp` apparently calls
`Field_bit::cmp_key`, which treats its argument as an actual pointer to
the data. That is not correct for `Field_bit`, which stores its data
through `bit_ptr` at the beginning of the record, so using `ptr`
directly is incorrect (we access the value through the `ptr_in_record`
call).
After Sergei's cleanup this assertion is no longer relevant -- we
cannot predict whether the handler was used for a lookup, especially in
a multi-update scenario.
`position(old_data)` is called earlier in `ha_check_overlaps`, so it is
guaranteed that we compare the right refs.
The problem here was that ha_check_overlaps internally uses
ha_index_read, which on failure overwrites table->status. Even though
the handlers are different, they share a common table, so the value is
spoiled anyway.
This is bad: table->status is badly designed and overloaded with
functionality, but nothing can be done about it, since the code around
this logic is ancient and extracting it would take unreasonable effort.
So let's just save and restore the value in ha_update_row around the
checks.
Other operations like INSERT and plain UPDATE are not at risk, since
they do not use this table->status approach.
DELETE does not do any unique checks, so it is also safe.
INSERT worked incorrectly as well. RocksDB used table->record[0]
internally to store intermediate results for key conversion, during
index searches among other operations.
So table->record[0] was spoiled during ha_index_read_map in
ha_check_overlaps, and in turn the broken record data was inserted.
The fix is to store the RocksDB intermediate result in its own buffer
instead of table->record[0].
The `rocksdb` MTR suite was checked and runs fine.
No additional tests are needed: the existing overlaps.test covers the
case completely. However, I am not adding anything RocksDB-related to
the suite, to keep it free of extra dependencies.
To run the tests with the RocksDB engine, one can add the following to
engines.combinations:
[rocksdb]
plugin-load=$HA_ROCKSDB_SO
default-storage-engine=rocksdb
* The overlaps check is implemented at the handler level, per row
  command. It creates a separate cursor (actually, another handler
  instance) and caches it inside the original handler when
  ha_update_row or ha_write_row is issued. The cursor is closed when
  the handler is unlocked.
* Having the same key in the index already means a unique constraint
  violation in the usual sense. So we fetch the left and right
  neighbours and check that they have the same key prefix, excluding
  only the period part of the key. If it does not match, there is no
  such neighbour and the check passes. Otherwise, we check whether that
  neighbour's period intersects with the considered key's period.
* The check does not introduce a new error and fails with an
  ER_DUPP_KEY error. This might break the REPLACE workflow and should
  be fixed separately. A behavioural sketch follows.
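A behavioural sketch of the constraint enforced by the check (table
name and data values are illustrative):

  CREATE TABLE t (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, p WITHOUT OVERLAPS)
  );
  INSERT INTO t VALUES (1, '2000-01-01', '2000-06-01');
  -- Same key prefix (id = 1) and an intersecting period:
  -- fails with a duplicate-key error.
  INSERT INTO t VALUES (1, '2000-03-01', '2000-09-01');
  -- Same key prefix, but the periods only touch: passes.
  INSERT INTO t VALUES (1, '2000-06-01', '2000-12-01');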