Commit graph

55 commits

Author SHA1 Message Date
Marko Mäkelä
ddd7d5d8e3 MDEV-24035 Failing assertion: UT_LIST_GET_LEN(lock.trx_locks) == 0 causing disruption and replication failure
Under unknown circumstances, the SQL layer may wrongly disregard an
invocation of thd_mark_transaction_to_rollback() when an InnoDB
transaction had been aborted (rolled back) due to one of the following errors:
* HA_ERR_LOCK_DEADLOCK
* HA_ERR_RECORD_CHANGED (if innodb_snapshot_isolation=ON)
* HA_ERR_LOCK_WAIT_TIMEOUT (if innodb_rollback_on_timeout=ON)

Such an error used to cause a crash of InnoDB during transaction commit.
These changes aim to catch and report the error earlier, so that not
only can this crash be avoided, but the original root cause can also be
found and fixed more easily later.

The idea of this fix is from Michael 'Monty' Widenius.

HA_ERR_ROLLBACK: A new error code that will be translated into
ER_ROLLBACK_ONLY, signalling that the current transaction
has been aborted and the only allowed action is ROLLBACK.

trx_t::state: Add TRX_STATE_ABORTED that is like
TRX_STATE_NOT_STARTED, but noting that the transaction had been
rolled back and aborted.
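
As a sketch (illustrative enum, not the real trx0trx.h):

    // Simplified sketch of the state set; TRX_STATE_ABORTED behaves like
    // TRX_STATE_NOT_STARTED except that it remembers the forced rollback,
    // so commit-time code can report HA_ERR_ROLLBACK instead of crashing.
    enum trx_state_sketch {
      TRX_STATE_NOT_STARTED,
      TRX_STATE_ABORTED,      // rolled back and aborted; only ROLLBACK allowed
      TRX_STATE_ACTIVE,
      TRX_STATE_PREPARED,
      TRX_STATE_COMMITTED_IN_MEMORY
    };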

trx_t::is_started(): Replaces trx_is_started().

ha_innobase: Check the transaction state in various places.
Simplify the logic around SAVEPOINT.

ha_innobase::is_valid_trx(): Replaces ha_innobase::is_read_only().

The InnoDB logic around transaction savepoints, commit, and rollback
was unnecessarily complex and might have contributed to this
inconsistency. So, we are simplifying that logic as well.

trx_savept_t: Replace with const undo_no_t*. When we roll back to
a savepoint, all we need to know is the number of undo log records
that must survive.

trx_named_savept_t, DB_NO_SAVEPOINT: Remove. We can store undo_no_t
directly in the space allocated at innobase_hton->savepoint_offset.
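
The idea reduces to the following sketch (hypothetical simplified types,
not the actual InnoDB code):

    #include <cstdint>
    #include <vector>

    typedef uint64_t undo_no_t;        // assumption: undo record counter type

    struct trx_sketch {
      std::vector<int> undo_records;   // stand-in for the real undo log
      undo_no_t undo_no() const { return undo_records.size(); }
    };

    // Taking a savepoint just records the current undo record count; that
    // number fits in the space reserved via innobase_hton->savepoint_offset.
    undo_no_t savepoint_take(const trx_sketch &trx) { return trx.undo_no(); }

    // Rolling back to the savepoint keeps exactly that many undo records.
    void savepoint_rollback(trx_sketch &trx, undo_no_t savept) {
      while (trx.undo_no() > savept)
        trx.undo_records.pop_back();   // undo and discard the newest record
    }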

fts_trx_create(): Do not copy previous savepoints.

fts_savepoint_rollback(): If a savepoint was not found, roll back
everything after the default savepoint of fts_trx_create().
The test innodb_fts.savepoint is extended to cover this code.

Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
2024-12-12 18:02:00 +02:00
Lena Startseva
0a5e4a0191 MDEV-31005: Make working cursor-protocol
Updated tests: cases with bugs, or cases that cannot be run
with the cursor protocol, were excluded using
"--disable_cursor_protocol"/"--enable_cursor_protocol".

Fix for v.10.5
2024-09-18 18:39:26 +07:00
Sergei Golubchik
22b3ba9312 MDEV-25102 UNIQUE USING HASH error after ALTER ... DISABLE KEYS
on disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not, and so the engine disables it.

Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
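
The shape of the change, sketched (signatures approximated from this
description, not copied from the tree):

    #include <bitset>

    typedef std::bitset<64> key_map;   // assumption: simplified key_map

    struct handler_sketch {
      // Old API shape: an enum mode; the engine had to guess which indexes
      // to disable and could not know a long unique is logically unique.
      // int disable_indexes(unsigned mode);

      // New API shape: the server computes the set of indexes that must
      // stay enabled and the engine merely applies it.
      virtual int disable_indexes(key_map indexes_to_keep) = 0;
      virtual int enable_indexes(key_map indexes_to_enable) = 0;
      virtual ~handler_sketch() = default;
    };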
2024-05-06 17:16:10 +02:00
Sergei Golubchik
4f5dea43df cleanup
* remove dead code
* simplify the check for table->s->next_number_index
* misc
2024-05-05 21:37:08 +02:00
Sergei Golubchik
947eeaa6dc MDEV-29345 update case insensitive (large) unique key with insensitive change of value - duplicate key
use collation-sensitive comparison when comparing fields
2024-05-05 21:37:08 +02:00
Nikita Malyavin
72429cad7f MDEV-30046 wrong row targeted with "insert ... on duplicate" and "replace"
When HA_DUPLICATE_POS is not supported, the row to replace was located
by ha_index_read_idx_map, which uses only the hash to navigate.

Thus, given a hash collision, it may choose an incorrect row.

handler::position would be correct and very convenient to use here.

dup_ref is already set by the handler independently of the engine
capabilities; when an extra lookup is made (for a long unique or
something else, for example WITHOUT OVERLAPS), such an error is
indicated by file->lookup_errkey != -1.
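
As a sketch of the scheme (hypothetical simplified interfaces): the
handler saves the duplicate row's exact position in dup_ref, and the
replace path fetches it by position instead of re-reading through the
colliding hash index.

    #include <cstdint>
    #include <cstring>

    struct handler_sketch {
      uint8_t dup_ref[8];              // opaque position of the duplicate row
      int lookup_errkey = -1;          // != -1 when an extra lookup found a dup

      // position(): remember the exact location of the row just read.
      void position(const uint8_t *row_pos) {
        std::memcpy(dup_ref, row_pos, sizeof dup_ref);
      }
      // rnd_pos(): fetch a row by a previously saved position (stubbed here).
      int rnd_pos(uint8_t *record, const uint8_t *pos) {
        (void)record; (void)pos; return 0;
      }
    };

    // REPLACE path: fetching by saved position cannot land on the wrong row,
    // unlike re-reading through a hash index that may collide.
    int fetch_duplicate(handler_sketch &h, uint8_t *record) {
      if (h.lookup_errkey != -1)       // duplicate came from an extra lookup
        return h.rnd_pos(record, h.dup_ref);
      return -1;                       // fall back to other strategies
    }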
2024-05-05 18:38:34 +02:00
Alexander Barkov
97fcafb9ec MDEV-32837 long unique does not work like unique key when using replace
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash-based) unique
(i.e. UNIQUE..USING HASH) constraints.

Problem:

The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.

Fix:

- If the current key is a long unique, it now works as follows:

  UPDATE can be done if the current long unique is the last
  long unique, and there are no in-engine (normal) uniques.

- For in-engine uniques nothing changes, it still works as before:

  If the current key is an in-engine (normal) unique:
  UPDATE can be done if it is the last normal unique.
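
Condensed into a sketch (hypothetical helper, not the server's code):

    // Sketch of the fixed decision; names are illustrative, not the real ones.
    // UPDATE instead of DELETE+INSERT is safe only when no unique constraint
    // after the current one can still be violated.
    bool can_use_update(bool cur_is_long_unique, int cur_key,
                        int last_long_unique_key, int last_normal_unique_key) {
      if (cur_is_long_unique)
        // last long unique, and no in-engine (normal) uniques at all
        // (< 0 meaning the table has none)
        return cur_key == last_long_unique_key && last_normal_unique_key < 0;
      // in-engine unique: unchanged rule -- it must be the last normal unique
      return cur_key == last_normal_unique_key;
    }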
2024-01-24 17:19:54 +04:00
Sergei Golubchik
a7ee3bc58b MDEV-29954 Unique hash key on column prefix is computed incorrectly
use the original, not the truncated, field in the long unique prefix,
that is, in the hash(left(field, length)) expression, because MyISAM
CHECK/REPAIR in compute_vcols() moves table->field but not the prefix
fields from keyparts.

Also, implement Field_string::cmp_prefix() so that prefix comparison
of CHAR columns works.
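
Schematically (a simplified sketch, not the server's hashing code):

    #include <functional>
    #include <string>

    // hash(left(field, length)): truncate the *original* value at evaluation
    // time rather than read a separate truncated prefix field, which
    // CHECK/REPAIR may fail to relocate.
    size_t long_unique_prefix_hash(const std::string &original_value,
                                   size_t prefix_length) {
      return std::hash<std::string>{}(original_value.substr(0, prefix_length));
    }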
2024-01-23 15:40:42 +01:00
Sergei Golubchik
e472b682e0 MDEV-32839 LONG UNIQUE gives error when used with REPLACE
calculate the auto-inc value even if the long duplicate check fails -
this is what the engine does for normal uniques.

the auto-inc value is needed if it's a REPLACE
2023-12-12 15:21:43 +01:00
Marko Mäkelä
f8f7d9de2c Merge 10.4 into 10.5 2023-09-11 11:29:31 +03:00
Sergei Golubchik
65b3c89430 MDEV-32015 insert into an empty table fails with hash unique
don't enable bulk insert when table->s->long_unique_table
2023-09-06 22:38:41 +02:00
Sergei Golubchik
382c543f53 MDEV-32012 hash unique corrupts index on virtual blobs
as always when copying record[0] aside, one needs to detach the
Field_blob::value buffers from it, and restore them when record[0]
is restored from the backup.
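
The pattern, as a sketch with a simplified ownership model (hypothetical
names):

    #include <string>
    #include <utility>
    #include <vector>

    struct field_blob_sketch {
      std::string value;            // owns the blob bytes, like Field_blob::value
    };

    // Saving record[0] aside must detach each blob buffer, otherwise the saved
    // copy keeps pointers into buffers that later work may reallocate.
    std::vector<std::string> store_blobs(std::vector<field_blob_sketch> &fields) {
      std::vector<std::string> saved;
      for (auto &f : fields)
        saved.emplace_back(std::move(f.value));  // detach from record[0]
      return saved;
    }

    // Restoring record[0] from the backup puts the buffers back.
    void restore_blobs(std::vector<field_blob_sketch> &fields,
                       std::vector<std::string> &saved) {
      for (size_t i = 0; i < fields.size() && i < saved.size(); i++)
        fields[i].value = std::move(saved[i]);
    }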
2023-09-06 22:38:41 +02:00
Oleksandr Byelkin
f291c3df2c Merge branch '10.4' into 10.5 2023-07-27 15:43:21 +02:00
Lena Startseva
9854fb6fa7 MDEV-31003: Second execution for ps-protocol
This patch adds, for "--ps-protocol", a second execution
of SELECT queries.
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases.
2023-07-26 17:15:00 +07:00
Oleksandr Byelkin
edf8ce5b97 Merge branch 'bb-10.4-release' into bb-10.5-release 2023-05-02 13:54:54 +02:00
Sergei Golubchik
bc970573b3 MDEV-22756 SQL Error (1364): Field 'DB_ROW_HASH_1' doesn't have a default value
exclude generated columns from the "has default value" check
2023-04-28 14:11:59 +02:00
Oleksandr Byelkin
7fa02f5c0b Merge branch '10.4' into 10.5 2023-01-27 13:54:14 +01:00
Sergei Golubchik
fc292f42be MDEV-29199 Unique hash key is ignored upon INSERT ... SELECT into non-empty MyISAM table
disable the bulk insert optimization if long uniques are used, because
they need to read the table (index_read) after every inserted row. And
the bulk insert optimization might disable indexes.

bulk insert is already disabled in other cases when there are chances
that the table will be read during the bulk insert.
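
The guard amounts to something like this sketch (illustrative names):

    struct table_share_sketch {
      bool long_unique_table;        // table has UNIQUE ... USING HASH keys
    };

    // Bulk insert may disable indexes and buffer writes, but long unique
    // checks must index_read the table after every inserted row, so the
    // optimization has to stay off.
    bool may_use_bulk_insert(const table_share_sketch &share) {
      return !share.long_unique_table;
    }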
2023-01-20 15:44:15 +01:00
Marko Mäkelä
a732d5e2ba Merge 10.4 into 10.5 2022-11-08 17:01:28 +02:00
Sachin
10132ad261 MDEV-23264 Unique blobs allow duplicate values upon UPDATE
Problem:-
  We were able to insert duplicate values into the table because
  cmp_binary_offset is not able to differentiate between NULL and an
  empty string, so check_duplicate_long_entry_key was never called and
  we did not check for duplicates.
Solution:-
  Added an if condition with is_null() on the field, which can
  differentiate between NULL and an empty string.
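
The essence of the fix, sketched with hypothetical types: updating a
column between NULL and '' must register as a change so that the
duplicate check runs.

    #include <string>

    struct field_sketch {
      bool null_flag;
      std::string bytes;             // payload: empty both for NULL and ''
      bool is_null() const { return null_flag; }
    };

    // A binary compare of payloads alone cannot tell NULL from '', so
    // check is_null() first.
    bool field_value_changed(const field_sketch &before,
                             const field_sketch &after) {
      if (before.is_null() != after.is_null())
        return true;                 // NULL -> '' (or back) is a real change
      if (before.is_null())
        return false;                // NULL -> NULL: nothing changed
      return before.bytes != after.bytes;
    }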
2022-11-07 09:50:59 +01:00
Oleksandr Byelkin
af143474d8 Merge branch '10.4' into 10.5 2022-08-03 07:12:27 +02:00
Aleksey Midenkov
231feabd2b MDEV-21540 Initialization of already inited long unique index on reorganize partition
Handler for existing partition was already index-inited at the
beginning of copy_partitions().

In the case of REORGANIZE PARTITION we fill new partition by calling
its ha_write_row() (handler is storage engine of new partition). From
that we go through the below conditions:

    if (this->inited == RND)
      table->clone_handler_for_update();
    handler *h= table->update_handler ? table->update_handler : table->file;

First, the above misses the meaning of the this->inited check. Here we
have the new partition, and this handler is not inited. So we assign
table->file, which is ha_partition and is not really known to be inited
or not. It is assumed that (this == table->file); otherwise we are
outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.

Second, we call check_duplicate_long_entries() for table->file, and
that calls ha_partition::index_init(), which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions() and we fail on the assertion.

The fix implies that we don't need check_duplicate_long_entries()
per partition, as we've already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION that means the existing row was
already checked by previous INSERT/UPDATE commands, so there is no need
to check it again (see NOTE in handler::ha_write_row()).

The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
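
Conceptually, the fix looks like this sketch (illustrative names):

    // The duplicate check for long uniques runs once at the ha_partition
    // level; per-partition handlers skip it.
    struct handler_sketch {
      const handler_sketch *table_file;  // what table->file points at
      bool is_root_handler() const { return this == table_file; }

      int ha_write_row() {
        if (is_root_handler()) {
          // check_duplicate_long_entries(...) -- once for the whole table
        }
        // engine-specific write; partition handlers fall through directly
        return 0;
      }
    };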
2022-08-01 19:14:46 +03:00
Sergei Golubchik
ef781162ff Merge branch '10.4' into 10.5 2022-05-09 22:04:06 +02:00
Sergei Golubchik
af810407f7 MDEV-28098 incorrect key in "dup value" error after long unique
reset errkey after using it, so that it doesn't affect
the error message of the next statement
2022-04-28 13:17:13 +02:00
Marko Mäkelä
9c18b96603 Merge 10.4 into 10.5 2021-11-09 08:50:33 +02:00
Marko Mäkelä
47ab793d71 Merge 10.3 into 10.4 2021-11-09 08:40:14 +02:00
Nikita Malyavin
1fdac57447 MDEV-26453 Assertion `0' failed in row_upd_sec_index_entry & corruption
A long UNIQUE HASH index silently creates a virtual column index, which
should be impossible for base columns featuring AUTO_INCREMENT.

Fix: add a relevant check; add a new vcol type for a prettier error
message.
2021-10-29 05:05:21 +03:00
Marko Mäkelä
0e69f601aa Merge 10.4 into 10.5 2020-06-07 12:22:06 +03:00
Sachin
eb14e073ea MDEV-22719 Long unique keys are not created when individual key_part->length < max_key_length but SUM(key_parts->length) > max_key_length
Make a UNIQUE HASH key when key_info->key_length > max_key_length
2020-06-07 12:07:41 +05:30
Sachin
e208f91ba8 MDEV-21804 Assertion `marked_for_read()' failed upon INSERT into table with long unique blob under binlog_row_image=NOBLOB
Problem:- Calling mark_columns_per_binlog_row_image() earlier may change
the result of mark_virtual_columns_for_write(), since it can set the
bitmap on for a virtual column, and hence mark_virtual_column_deps(field)
will never be called in mark_virtual_column_with_deps.

This bug is not specific to long unique; it also fails for this case:
   create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
2020-06-07 12:07:36 +05:30
Sergei Golubchik
18502f99eb MDEV-22185 Failing assertion: node->pcur->rel_pos == BTR_PCUR_ON or ER_KEY_NOT_FOUND or Assertion `inited==NONE' failed in handler::ha_index_init
long unique checks should be done for a partitioned table as a whole,
not for individual partitions.

A follow-up to f3f31eaa8e, extending it to UPDATE
2020-05-05 19:41:12 +02:00
Sergei Golubchik
fcd84da5f1 MDEV-22218 InnoDB: Failing assertion: node->pcur->rel_pos == BTR_PCUR_ON upon LOAD DATA with NO_BACKSLASH_ESCAPES in SQL_MODE and unique blob in table
`inited == NONE` at the initialization time does not always mean
that it'll be `NONE` later, at the execution time. Use a more complex
caller-specific logic to decide whether to create a cloned lookup handler.

Besides LOAD (as in the original bug report) make sure that all
prepare_for_insert() invocations are covered by tests. Add tests for
CREATE ... SELECT, multi-UPDATE, and multi-DELETE.

Don't enable write cache with long uniques.
2020-04-12 22:10:57 +02:00
Sergei Golubchik
3bb5c6b0c2 MDEV-22113 SIGSEGV, ASAN use-after-poison, Assertion `next_insert_id == 0' in handler::ha_external_lock
if the lookup_handler is allocated on the THD's memroot, it may
not live long enough to be deleted in handler::ha_external_lock()
2020-04-02 14:03:54 +02:00
Sachin
eed6d215f1 MDEV-20001 Potential dangerous regression: INSERT INTO >=100 rows fail for myisam table with HASH indexes
Problem:-

When we do a bulk insert with more than
MI_MIN_ROWS_TO_DISABLE_INDEXES (100) rows, we try to disable the indexes
to speed up the insert. But the current logic also disables the long
unique indexes.

Solution:- In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH), we do not disable the index.

This commit also refactors the mi_disable_indexes_for_rebuild function;
since this function is called at only one place, it is inlined into
start_bulk_insert.

mi_clear_key_active is added into myisamdef.h because now it is also
used in the ha_myisam.cc file.

(The same is done for the Aria storage engine.)
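
The engine-side guard, sketched with simplified structures:

    #include <vector>

    enum key_alg_sketch { KEY_ALG_BTREE, KEY_ALG_LONG_HASH };  // simplified

    struct key_sketch { key_alg_sketch algorithm; bool active; };

    // start_bulk_insert sketch: disable indexes to speed up a big insert, but
    // leave long unique (hash) indexes active so they keep enforcing
    // uniqueness; deactivation mirrors mi_clear_key_active().
    void disable_indexes_for_bulk(std::vector<key_sketch> &keys) {
      for (auto &key : keys)
        if (key.algorithm != KEY_ALG_LONG_HASH)
          key.active = false;
    }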
2020-02-03 12:44:31 +05:30
Sachin Setiya
5a6023cf6f MDEV-18791 Wrong error upon creating Aria table with long index on BLOB
If we have a long unique key for the Aria engine, return a too-long-key
error, because Aria does not support keys on virtual generated columns.
2020-02-02 13:53:26 +05:30
Sachin
ba7d33a898 MDEV-18820 Assertion `lock_table_has(trx, index->table, LOCK_IX)' failed in lock_rec_insert_check_and_lock upon INSERT into table with blob key
Don't ignore any error during index lookup, and throw a duplicate key
error only if the error is HA_ERR_FOUND_DUPP_KEY
2019-04-03 12:53:58 +05:30
Sachin
901e3ddf79 MDEV-18904 Assertion `m_part_spec.start_part >= m_part_spec.end_part' failed in ha_partition::index_read_idx_map
Remove the DBUG_ASSERT
2019-03-23 18:26:50 +05:30
Sachin
0c567648a4 Revert (MDEV-18888)2b06de8064660c5c, fix it in different way And add test case for MDEV-18953 2019-03-23 18:25:58 +05:30
Sachin
c23d4700e6 MDEV-18901 Wrong results after ADD UNIQUE INDEX(blob_column)
Add a test case for MDEV-18901, as MDEV-18967 and MDEV-18922 solve this issue
2019-03-22 11:50:09 +05:30
Sachin
625aa232a6 MDEV-18967 Load data in system version with long unique does not work
Update system versioning fields before generated columns for left out
fill_record
2019-03-21 21:24:59 +05:30
sachinsetia1001@gmail.com
3943fe5630 MDEV-18888 Server crashes in Item_field::register_field_in_read_map upon...
MODIFY COLUMN

Do not create a prefix field for a long unique key
2019-03-17 13:51:46 +05:30
sachinsetia1001@gmail.com
8995f33c0b MDEV-18889 Long unique on virtual fields crashes server
Use table->record[0] for ha_index_read_map so that vfield gets
automatically updated.
2019-03-17 13:45:57 +05:30
sachinsetia1001@gmail.com
2e34a031f8 MDEV-18809 Server crash in fields_in_hash_keyinfo or Assertion `key_info->key_part->field->flags & (1<< 30)' failed in setup_keyinfo_hash
Move the call to setup_keyinfo_hash until all 'continue' cases are
exhausted, and also call re_setup_keyinfo_hash on the 'goto err' path.
2019-03-15 16:05:01 +05:30
sachinsetia1001@gmail.com
050280ce8b MDEV-18922 Alter on long unique varchar column makes result null
Don't add long key into share->keys_for_keyread
2019-03-15 12:14:33 +05:30
sachin
62bfb2fe49 MDEV-18800 Server crash in instant_alter_column_possible or Assertion...
`!pk->has_virtual()' failed in instant_alter_column_possible upon adding key

A hash key can't be a primary key.
2019-03-13 15:15:13 +05:30
sachin
ecf323620b Add test cases for MDEV-18792 MDEV-18793 MDEV-18795 MDEV-18798 MDEV-18801
As MDEV-18799 fixes these MDEVs, add test cases for them.
2019-03-13 15:10:41 +05:30
sachin
560598c9b2 MDEV-18799 Long unique does not work after failed alter table
Restore table->key_info after calling setup_keyinfo_hash in
mysql_prepare_alter_table.
2019-03-13 15:09:16 +05:30
sachin
f9f625fb43 MDEV-18790 Server crash in fields_in_hash_keyinfo after unsuccessful...
attempt to drop BLOB with long index

Restore the long table->key_info, so that in the case of an unsuccessful
ALTER TABLE table->key_info has a correct value. In the case of a
successful ALTER TABLE the old table is flushed, which is why we don't
see this error after a successful alter.
2019-03-13 13:59:31 +05:30
Sergei Golubchik
b10340998f MDEV-18748 REPLACE doesn't work with unique blobs on MyISAM table
on long unique conflict, set table->file->dup_ref for
engines that support it
2019-02-27 23:27:43 -05:00
Sergei Golubchik
9fd3e810e8 MDEV-18747 InnoDB: Failing assertion: table->get_ref_count() == 0 upon dropping temporary table with unique blob
delete update handler clone also for temporary tables
2019-02-27 23:27:43 -05:00