Commit graph

68 commits

Sergei Golubchik
7b53672c63 Merge branch '10.5' into 10.6 2024-05-08 20:06:00 +02:00
Sergei Golubchik
22b3ba9312 MDEV-25102 UNIQUE USING HASH error after ALTER ... DISABLE KEYS
on disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not. So the engine disables it.

Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
2024-05-06 17:16:10 +02:00
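
A minimal sketch of the scenario this fixes (hypothetical repro, not the actual test case):

    CREATE TABLE t1 (a TEXT, UNIQUE(a) USING HASH) ENGINE=MyISAM;
    ALTER TABLE t1 DISABLE KEYS;   -- the server must keep the long unique enabled
    INSERT INTO t1 VALUES ('x');
    INSERT INTO t1 VALUES ('x');   -- expected: duplicate key error, despite DISABLE KEYS
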
Sergei Golubchik
947eeaa6dc MDEV-29345 update case insensitive (large) unique key with insensitive change of value - duplicate key
use collation-sensitive comparison when comparing fields
2024-05-05 21:37:08 +02:00
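
A sketch of the fixed behaviour (hypothetical repro; assumes a case-insensitive collation):

    CREATE TABLE t1 (a VARCHAR(2000) COLLATE utf8mb3_general_ci, UNIQUE(a) USING HASH);
    INSERT INTO t1 VALUES ('abc');
    UPDATE t1 SET a = 'ABC';   -- same value under the ci collation: must not be a duplicate key error
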
Nikita Malyavin
72429cad7f MDEV-30046 wrong row targeted with "insert ... on duplicate" and "replace"
When HA_DUPLICATE_POS is not supported, the row to replace was located by
ha_index_read_idx_map, which navigates by the hash alone.

Therefore, given a hash collision, it may choose an incorrect row.

handler::position would be correct and very convenient to use here.

dup_ref is already set by the handler independently of the engine
capabilities whenever an extra lookup is made (for a long unique or
something else, for example WITHOUT OVERLAPS); such an error is indicated
by file->lookup_errkey != -1.
2024-05-05 18:38:34 +02:00
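
The shape of the problem (hypothetical sketch; a real repro needs two values whose hashes collide):

    CREATE TABLE t1 (a BLOB, b INT, UNIQUE(a) USING HASH);
    INSERT INTO t1 VALUES ('x', 1), ('y', 2);   -- suppose hash('x') = hash('y')
    REPLACE INTO t1 VALUES ('x', 3);            -- must target the a='x' row, not the a='y' one
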
Sergei Golubchik
3f6038bc51 Merge branch '10.5' into 10.6 2024-01-31 18:04:03 +01:00
Alexander Barkov
97fcafb9ec MDEV-32837 long unique does not work like unique key when using replace
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash based) unique
(i.e. UNIQUE..USING HASH) constraints.

Problem:

The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.

Fix:

- If the current key is a long unique, it now works as follows:

  UPDATE can be done if the current long unique is the last
  long unique, and there are no in-engine (normal) uniques.

- For in-engine uniques nothing changes, it still works as before:

  If the current key is an in-engine (normal) unique:
  UPDATE can be done if it is the last normal unique.
2024-01-24 17:19:54 +04:00
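
A sketch of a case the corrected condition must handle (hypothetical repro): the in-engine unique is not the only constraint, so REPLACE must use DELETE+INSERT rather than the UPDATE shortcut.

    CREATE TABLE t1 (a INT, b TEXT, UNIQUE(a), UNIQUE(b) USING HASH);
    INSERT INTO t1 VALUES (1, 'one'), (2, 'two');
    REPLACE INTO t1 VALUES (1, 'two');   -- conflicts with both rows: DELETE+INSERT, not in-place UPDATE
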
Sergei Golubchik
a7ee3bc58b MDEV-29954 Unique hash key on column prefix is computed incorrectly
use the original, not the truncated, field in the long unique prefix,
that is, in the hash(left(field, length)) expression, because MyISAM
CHECK/REPAIR in compute_vcols() moves table->field but not the prefix
fields from keyparts.

Also, implement Field_string::cmp_prefix() so that prefix comparison
of CHAR columns works.
2024-01-23 15:40:42 +01:00
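
A sketch of the intended semantics (hypothetical repro; names and lengths are illustrative):

    CREATE TABLE t1 (a CHAR(100), UNIQUE(a(10)) USING HASH);   -- hash over LEFT(a,10)
    INSERT INTO t1 VALUES ('1234567890');
    INSERT INTO t1 VALUES ('1234567890xyz');   -- identical 10-char prefix: expected duplicate key error
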
Sergei Golubchik
e95bba9c58 Merge branch '10.5' into 10.6 2023-12-17 11:20:43 +01:00
Sergei Golubchik
e472b682e0 MDEV-32839 LONG UNIQUE gives error when used with REPLACE
calculate the auto-inc value even if the long duplicate check fails -
this is what the engine does for normal uniques.

the auto-inc value is needed if it's a REPLACE
2023-12-12 15:21:43 +01:00
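
A sketch (hypothetical repro):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a TEXT, UNIQUE(a) USING HASH);
    REPLACE INTO t1 (a) VALUES ('x');
    REPLACE INTO t1 (a) VALUES ('x');   -- the hash-key duplicate check fails, but the auto-inc
                                        -- value must still be generated for the replacing row
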
Marko Mäkelä
0dd25f28f7 Merge 10.5 into 10.6 2023-09-11 14:46:39 +03:00
Marko Mäkelä
f8f7d9de2c Merge 10.4 into 10.5 2023-09-11 11:29:31 +03:00
Sergei Golubchik
65b3c89430 MDEV-32015 insert into an empty table fails with hash unique
don't enable bulk insert when table->s->long_unique_table
2023-09-06 22:38:41 +02:00
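
A sketch of the regressed case (hypothetical repro):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    CREATE TABLE t2 (b TEXT, UNIQUE(b) USING HASH);
    INSERT INTO t2 SELECT a FROM t1;   -- bulk path into an empty table must not fail spuriously
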
Sergei Golubchik
382c543f53 MDEV-32012 hash unique corrupts index on virtual blobs
as always when copying record[0] aside, one needs to detach
the Field_blob::value's from it, and restore them when record[0]
is restored from the backup.
2023-09-06 22:38:41 +02:00
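
One plausible shape of the scenario (hypothetical repro with a virtual blob under the hash unique):

    CREATE TABLE t1 (a VARCHAR(100), v TEXT AS (a) VIRTUAL, UNIQUE(v) USING HASH, KEY(a));
    INSERT INTO t1 (a) VALUES ('x'), ('y');
    UPDATE t1 SET a = 'z' WHERE a = 'x';
    CHECK TABLE t1;   -- expected: OK, the index on a must not be corrupted
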
Oleksandr Byelkin
043d69bbcc Merge branch '10.5' into 10.6 2023-05-03 09:51:25 +02:00
Oleksandr Byelkin
edf8ce5b97 Merge branch 'bb-10.4-release' into bb-10.5-release 2023-05-02 13:54:54 +02:00
Sergei Golubchik
bc970573b3 MDEV-22756 SQL Error (1364): Field 'DB_ROW_HASH_1' doesn't have a default value
exclude generated columns from the "has default value" check
2023-04-28 14:11:59 +02:00
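
A sketch (hypothetical repro, strict mode):

    SET sql_mode = 'STRICT_ALL_TABLES';
    CREATE TABLE t1 (a TEXT, UNIQUE(a) USING HASH);
    INSERT INTO t1 (a) VALUES ('x');   -- must not fail with "Field 'DB_ROW_HASH_1'
                                       -- doesn't have a default value"
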
Oleksandr Byelkin
c3a5cf2b5b Merge branch '10.5' into 10.6 2023-01-31 09:31:42 +01:00
Oleksandr Byelkin
7fa02f5c0b Merge branch '10.4' into 10.5 2023-01-27 13:54:14 +01:00
Sergei Golubchik
fc292f42be MDEV-29199 Unique hash key is ignored upon INSERT ... SELECT into non-empty MyISAM table
disable the bulk insert optimization if long uniques are used, because they
need to read the table (index_read) after every inserted row, and the bulk
insert optimization might disable indexes.

bulk insert is already disabled in other cases when there is a chance
that the table will be read during the bulk insert.
2023-01-20 15:44:15 +01:00
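
A sketch of the broken case (hypothetical repro; assumes the sequence plugin for seq_1_to_5):

    CREATE TABLE t1 (a TEXT, UNIQUE(a) USING HASH) ENGINE=MyISAM;
    INSERT INTO t1 VALUES ('3');
    INSERT INTO t1 SELECT seq FROM seq_1_to_5;   -- must stop with a duplicate key error on '3'
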
Marko Mäkelä
2ac1edb1c3 Merge 10.5 into 10.6 2022-11-08 17:37:22 +02:00
Marko Mäkelä
a732d5e2ba Merge 10.4 into 10.5 2022-11-08 17:01:28 +02:00
Sachin
10132ad261 MDEV-23264 Unique blobs allow duplicate values upon UPDATE
Problem:
  We are able to insert a duplicate value into the table because cmp_binary_offset
  is not able to differentiate between NULL and an empty string, so
  check_duplicate_long_entry_key is never called and we don't check for
  duplicates.
Solution:
  Added an if condition with is_null() on the field, which can differentiate
  between NULL and an empty string.
2022-11-07 09:50:59 +01:00
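
A sketch of the fixed behaviour (hypothetical repro):

    CREATE TABLE t1 (a BLOB, UNIQUE(a) USING HASH);
    INSERT INTO t1 VALUES (NULL), ('');
    UPDATE t1 SET a = '' WHERE a IS NULL;   -- must fail with a duplicate key error
                                            -- instead of creating a second '' row
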
Marko Mäkelä
44fd2c4b24 Merge 10.5 into 10.6 2022-09-20 16:53:20 +03:00
Alexander Barkov
fe844c16b6 Merge remote-tracking branch 'origin/10.4' into 10.5 2022-09-14 16:24:51 +04:00
Marko Mäkelä
18795f5512 Merge 10.3 into 10.4 2022-09-13 16:36:38 +03:00
Oleksandr Byelkin
d2f1c3ed6c Merge branch '10.5' into bb-10.6-release 2022-08-03 12:19:59 +02:00
Oleksandr Byelkin
af143474d8 Merge branch '10.4' into 10.5 2022-08-03 07:12:27 +02:00
Aleksey Midenkov
231feabd2b MDEV-21540 Initialization of already inited long unique index on reorganize partition
The handler for an existing partition was already index-inited at the
beginning of copy_partitions().

In the case of REORGANIZE PARTITION we fill the new partition by calling
its ha_write_row() (the handler is the storage engine of the new
partition). From there we go through the conditions below:

    if (this->inited == RND)
      table->clone_handler_for_update();
    handler *h= table->update_handler ? table->update_handler : table->file;

First, the above misses the point of the this->inited check: here we are
in the new partition and this handler is not inited, so we assign
table->file, which is the ha_partition handler and is not actually known
to be inited or not. It is assumed that (this == table->file), otherwise
we are outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.

Second, we call check_duplicate_long_entries() for table->file, which
calls ha_partition::index_init(), which in turn calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions(), and we fail on an assertion.

The fix implies that we don't need check_duplicate_long_entries() per
partition, as we have already done check_duplicate_long_entries() for the
ha_partition. For REORGANIZE PARTITION that means an existing row was
already checked by previous INSERT/UPDATE commands, so there is no need
to check it again (see the NOTE in handler::ha_write_row()).

The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition, since
it was already called for the ha_partition. Besides, the per-partition
duplicate check is not really usable.
2022-08-01 19:14:46 +03:00
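
One plausible repro shape (hypothetical; a long unique on a partitioned table, then a reorganize that copies rows):

    CREATE TABLE t1 (id INT, a TEXT, UNIQUE(id, a) USING HASH)
    PARTITION BY RANGE(id)
      (PARTITION p0 VALUES LESS THAN (10),
       PARTITION p1 VALUES LESS THAN MAXVALUE);
    INSERT INTO t1 VALUES (1, 'x'), (20, 'y');
    ALTER TABLE t1 REORGANIZE PARTITION p1 INTO
      (PARTITION p1 VALUES LESS THAN (30),
       PARTITION p2 VALUES LESS THAN MAXVALUE);   -- copy_partitions() used to hit the
                                                  -- double index_init assertion here
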
Sergei Golubchik
3bc98a4ec4 Merge branch '10.5' into 10.6 2022-05-10 14:01:23 +02:00
Sergei Golubchik
ef781162ff Merge branch '10.4' into 10.5 2022-05-09 22:04:06 +02:00
Sergei Golubchik
af810407f7 MDEV-28098 incorrect key in "dup value" error after long unique
reset errkey after using it, so that it doesn't affect
the error message of the next statement
2022-04-28 13:17:13 +02:00
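
A sketch of the symptom (hypothetical repro):

    CREATE TABLE t1 (a TEXT, b INT, UNIQUE(a) USING HASH, UNIQUE(b));
    INSERT INTO t1 VALUES ('x', 1);
    INSERT INTO t1 VALUES ('x', 2);   -- duplicate on key 'a'
    INSERT INTO t1 VALUES ('y', 1);   -- must report key 'b', not a stale key
                                      -- left over from the previous statement
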
Marko Mäkelä
25ac047baf Merge 10.5 into 10.6 2021-11-09 09:11:50 +02:00
Marko Mäkelä
9c18b96603 Merge 10.4 into 10.5 2021-11-09 08:50:33 +02:00
Marko Mäkelä
47ab793d71 Merge 10.3 into 10.4 2021-11-09 08:40:14 +02:00
Nikita Malyavin
1fdac57447 MDEV-26453 Assertion `0' failed in row_upd_sec_index_entry & corruption
A long UNIQUE HASH index silently creates a virtual column index, which
should be impossible for base columns featuring AUTO_INCREMENT.

Fix: add the relevant check; add a new vcol type for a prettier error message.
2021-10-29 05:05:21 +03:00
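
A sketch of what the new check rejects (hypothetical; the exact error text may differ):

    CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b TEXT, UNIQUE(a, b) USING HASH);
    -- expected after the fix: an error, because the hidden hash is a virtual column
    -- and virtual columns cannot depend on an AUTO_INCREMENT base column
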
Oleksandr Byelkin
a3099a3b4a MDEV-24312 master_host has 60 character limit, increase to 255 bytes
Also increase the user name limit up to 128.

The work was started by Rucha Deodhar <rucha.deodhar@mariadb.com>,
contains audit plugin fixes by Alexey Botchkov <holyfoot@askmonty.org>.
2021-04-20 16:36:56 +02:00
Varun Gupta
f691d9865b MDEV-7317: Make an index ignorable to the optimizer
This feature adds ignorability for indexes.
Indexes are not ignored by default.

To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.

Primary keys (explicit or implicit) cannot be made ignorable.

The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED
that stores whether an index is to be ignored or not.
2021-03-04 22:50:00 +05:30
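
A sketch of the feature (in released versions the keyword is spelled IGNORED):

    CREATE TABLE t1 (a INT, b INT, KEY k1 (b) IGNORED);
    SELECT index_name, ignored FROM information_schema.statistics
      WHERE table_name = 't1';
    ALTER TABLE t1 ALTER INDEX k1 NOT IGNORED;   -- flip ignorability on an existing index
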
Sergei Golubchik
25d9d2e37f Merge branch 'bb-10.4-release' into bb-10.5-release 2021-02-15 16:43:15 +01:00
Sergei Golubchik
00a313ecf3 Merge branch 'bb-10.3-release' into bb-10.4-release
Note: the fix for "MDEV-23328 Server hang due to Galera lock conflict resolution"
was null-merged. The 10.4 version of the fix is coming up separately.
2021-02-12 17:44:22 +01:00
Marko Mäkelä
0e69f601aa Merge 10.4 into 10.5 2020-06-07 12:22:06 +03:00
Sachin
eb14e073ea MDEV-22719 Long unique keys are not created when individual key_part->length < max_key_length but SUM(key_parts->length) > max_key_length
Make a UNIQUE..USING HASH key when key_info->key_length > max_key_length
2020-06-07 12:07:41 +05:30
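
A sketch (hypothetical; with latin1, each 1000-byte key part fits alone, but the 4000-byte sum exceeds the maximum key length):

    CREATE TABLE t1 (a VARCHAR(1000), b VARCHAR(1000), c VARCHAR(1000), d VARCHAR(1000),
                     UNIQUE(a, b, c, d)) ENGINE=InnoDB CHARSET=latin1;
    SHOW CREATE TABLE t1;   -- expected: the key is created as UNIQUE ... USING HASH
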
Sachin
e208f91ba8 MDEV-21804 Assertion `marked_for_read()' failed upon INSERT into table with long unique blob under binlog_row_image=NOBLOB
Problem: Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, and then mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.

This bug is not specific to long uniques; it also fails for this case:
   create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
2020-06-07 12:07:36 +05:30
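
The failing statement for the quoted table would be something like (hypothetical sketch):

    SET SESSION binlog_row_image = NOBLOB;
    CREATE TABLE t2 (id INT PRIMARY KEY, a BLOB, b VARCHAR(20) AS (LEFT(a, 2)));
    INSERT INTO t2 (id, a) VALUES (1, 'xy');   -- column a must stay marked for read
                                               -- so that b can be computed
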
Sergei Golubchik
18502f99eb MDEV-22185 Failing assertion: node->pcur->rel_pos == BTR_PCUR_ON or ER_KEY_NOT_FOUND or Assertion `inited==NONE' failed in handler::ha_index_init
long unique checks should be done for a partitioned table as a whole,
not for individual partitions.

Followup to f3f31eaa8e, extending it to UPDATE
2020-05-05 19:41:12 +02:00
Sergei Golubchik
fcd84da5f1 MDEV-22218 InnoDB: Failing assertion: node->pcur->rel_pos == BTR_PCUR_ON upon LOAD DATA with NO_BACKSLASH_ESCAPES in SQL_MODE and unique blob in table
`inited == NONE` at initialization time does not always mean
that it will be `NONE` later, at execution time. Use more complex,
caller-specific logic to decide whether to create a cloned lookup handler.

Besides LOAD (as in the original bug report) make sure that all
prepare_for_insert() invocations are covered by tests. Add tests for
CREATE ... SELECT, multi-UPDATE, and multi-DELETE.

Don't enable write cache with long uniques.
2020-04-12 22:10:57 +02:00
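
The shape of the original report (hypothetical sketch; '/tmp/data.txt' stands in for any input file):

    SET sql_mode = 'NO_BACKSLASH_ESCAPES';
    CREATE TABLE t1 (a BLOB, UNIQUE(a) USING HASH);
    LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;   -- needs the cloned lookup handler
                                                      -- for the duplicate check
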
Sergei Golubchik
3bb5c6b0c2 MDEV-22113 SIGSEGV, ASAN use-after-poison, Assertion `next_insert_id == 0' in handler::ha_external_lock
if the lookup_handler is allocated on the THD's memroot, it may
not live long enough to be deleted in handler::ha_external_lock()
2020-04-02 14:03:54 +02:00
Marko Mäkelä
8b6cfda631 Merge 10.4 into 10.5 2020-02-07 08:51:20 +02:00
Sachin
eed6d215f1 MDEV-20001 Potential dangerous regression: INSERT INTO >=100 rows fail for myisam table with HASH indexes
Problem:

When we do a bulk insert with more than MI_MIN_ROWS_TO_DISABLE_INDEXES (100)
rows, we try to disable the indexes to speed up the insert. But the current
logic also disables the long unique indexes.

Solution: In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH), we do not disable it.

This commit also refactors the mi_disable_indexes_for_rebuild function:
since the function is called in only one place, it is inlined into
start_bulk_insert.

mi_clear_key_active is added to myisamdef.h because it is now also used
in the ha_myisam.cc file.

(The same is done for the Aria storage engine.)
2020-02-03 12:44:31 +05:30
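
A sketch (hypothetical repro; assumes the sequence plugin and that the bulk-insert path is taken):

    CREATE TABLE t1 (a TEXT, UNIQUE(a) USING HASH) ENGINE=MyISAM;
    INSERT INTO t1 SELECT seq FROM seq_1_to_200;   -- >100 rows: the hash unique must stay active
    INSERT INTO t1 VALUES ('1');                   -- expected: duplicate key error
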
Sachin Setiya
5a6023cf6f MDEV-18791 Wrong error upon creating Aria table with long index on BLOB
If we have a long unique key for the Aria engine, return a "key too long"
error, because Aria does not support keys on virtual generated columns.
2020-02-02 13:53:26 +05:30
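
A sketch (hypothetical; behaviour as of this fix):

    CREATE TABLE t1 (a BLOB, UNIQUE(a)) ENGINE=Aria;   -- expected: "Specified key was too long" error
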
Sachin
ba7d33a898 MDEV-18820 Assertion `lock_table_has(trx, index->table, LOCK_IX)' failed in lock_rec_insert_check_and_lock upon INSERT into table with blob key
Don't ignore any error during the index lookup, and throw a duplicate key
error only if the error is HA_ERR_FOUND_DUPP_KEY.
2019-04-03 12:53:58 +05:30
Sachin
901e3ddf79 MDEV-18904 Assertion `m_part_spec.start_part >= m_part_spec.end_part' failed in ha_partition::index_read_idx_map
Remove the DBUG_ASSERT
2019-03-23 18:26:50 +05:30