MDEV-371 Unique Index for long columns
This patch implements an engine-independent unique hash index.
Usage:- A unique HASH index is created automatically for a blob/varchar/text column whose key
length is > handler->max_key_length(), or it can be specified explicitly.
Automatic creation:-
create table t1 (a blob unique);
Explicit creation:-
create table t1 (a int, unique(a) using HASH);
Internal KEY_PART representations:-
A long unique key_info has 2 representations.
(Let's understand this with an example: create table t1(a blob, b blob, unique(a, b));)
1. User-given representation:- The key_info->key_part array matches what the user has defined.
So in the example it has 2 key_parts (a, b).
2. Storage engine representation:- Here there is only one key_part, and it points to the
HASH_FIELD. This key_part always comes after the user-defined key_parts.
So:- User-given representation      [a] [b] [hash_key_part]
     key_info->key_part ----^
     Storage engine representation  [a] [b] [hash_key_part]
     key_info->key_part ------------^
table->s->key_info holds the user-given representation, while table->key_info holds the storage
engine representation. The representations can be converted into each other by calling the
setup_keyinfo_hash/re_setup_keyinfo_hash functions.
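The two views above can be sketched as follows. This is a minimal Python illustration, not MariaDB code: the KeyInfo class, the list-based key_part array, and the offset field are stand-ins for the real C structures; only the idea of one shared array with two start points comes from the text.

```python
# Sketch: one contiguous key_part array, two views onto it.
# The user-given view starts at the first user-defined part; the
# storage engine view starts at (and contains only) the hash part.

class KeyInfo:
    def __init__(self, user_parts, hash_part):
        # User-defined parts first, the hash key_part always last.
        self.parts = user_parts + [hash_part]
        self.user_defined_key_parts = len(user_parts)
        self.offset = 0  # 0 = user-given view

    def setup_keyinfo_hash(self):
        # Switch to the storage engine view by advancing past the
        # user-defined key_parts.
        self.offset = self.user_defined_key_parts

    def re_setup_keyinfo_hash(self):
        # Restore the user-given view.
        self.offset = 0

    def key_part(self):
        return self.parts[self.offset:]

key = KeyInfo(["a", "b"], "hash_key_part")
assert key.key_part() == ["a", "b", "hash_key_part"]  # user-given view
key.setup_keyinfo_hash()
assert key.key_part() == ["hash_key_part"]            # engine view
key.re_setup_keyinfo_hash()
assert key.key_part() == ["a", "b", "hash_key_part"]
```

The design point this models: no key_part data is copied when switching views; only the start pointer changes, which is why the two representations can be toggled cheaply around each storage engine call.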
Working:-
1. When the user specifies USING HASH, or the key length is > handler->max_key_length(), then in
mysql_prepare_create_table one extra vfield is added (for each long unique key) and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are set (fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of
Item_fields; when an explicit prefix length is given by the user, Item_func_left is used to truncate
the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which creates the hash
key from table->record[0] and then calls ha_index_read_map; if a duplicate hash is found, the rows
are compared field by field.
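Step 4 can be sketched as follows. This is a minimal Python illustration of the hash-then-verify lookup, not MariaDB code: a plain dict stands in for the engine index, and row_hash stands in for Item_func_hash (it is not MariaDB's actual hash function).

```python
import zlib

def row_hash(row, cols):
    # Stand-in for Item_func_hash: hash the key columns of a row.
    h = 0
    for c in cols:
        h = zlib.crc32(repr(row[c]).encode(), h)
    return h

def check_duplicate_long_entry_key(index, new_row, cols):
    """True if new_row duplicates an existing row on cols."""
    h = row_hash(new_row, cols)
    # Analogue of ha_index_read_map: fetch rows with the same hash.
    for existing in index.get(h, []):
        # Hashes can collide, so verify field by field.
        if all(existing[c] == new_row[c] for c in cols):
            return True   # a real duplicate
    return False          # unique (or a mere hash collision)

def write_row(index, row, cols):
    # Analogue of ha_write_row: reject duplicates, then store.
    if check_duplicate_long_entry_key(index, row, cols):
        raise ValueError("Duplicate entry")
    index.setdefault(row_hash(row, cols), []).append(row)

idx = {}
write_row(idx, {"a": "RUC"}, ["a"])
try:
    write_row(idx, {"a": "RUC"}, ["a"])
except ValueError as e:
    print(e)  # Duplicate entry
```

The field-by-field comparison is what makes the scheme engine-independent and correct: the unique constraint is enforced on the actual column values, while the hash index only narrows the search to candidate rows.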
create table t1(a blob unique) engine= InnoDB;
insert into t1 values('RUC');
insert into t1 values ('RUC');
ERROR 23000: Duplicate entry 'RUC' for key 'a'
drop table t1;
create table t1 (a blob unique , c int unique) engine=innodb;
show create table t1;
Table	Create Table
t1	CREATE TABLE `t1` (
  `a` blob DEFAULT NULL,
  `c` int(11) DEFAULT NULL,
  UNIQUE KEY `c` (`c`),
  UNIQUE KEY `a` (`a`) USING HASH
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
drop table t1;
#test for concurrent insert of long unique in innodb
create table t1(a blob unique) engine= InnoDB;
show create table t1;
Table	Create Table
t1	CREATE TABLE `t1` (
  `a` blob DEFAULT NULL,
  UNIQUE KEY `a` (`a`) USING HASH
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
connect 'con1', localhost, root,,;
connect 'con2', localhost, root,,;
connection con1;
set innodb_lock_wait_timeout= 2;
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values('RUC');
connection con2;
set innodb_lock_wait_timeout= 2;
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values ('RUC');
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
connection con1;
commit;
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values('RC');
connection con2;
commit;
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values ('RC');
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
commit;
connection con1;
commit;
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values('RR');
connection con2;
commit;
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values ('RR');
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
connection con1;
commit;
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values('S');
connection con2;
commit;
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values ('S');
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
commit;
connection con1;
commit;
select * from t1;
a
RUC
RC
RR
S
drop table t1;
create table t1(a blob unique) engine=Innodb;
connection con1;
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values('RUC');
connection con2;
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values ('RUC');;
connection con1;
rollback;
connection con2;
commit;
connection con1;
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values('RC');
connection con2;
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values ('RC');;
connection con1;
rollback;
connection con2;
commit;
connection con1;
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values('RR');
connection con2;
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values ('RR');;
connection con1;
rollback;
connection con2;
commit;
connection con1;
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values('S');
connection con2;
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values ('S');;
connection con1;
rollback;
connection con2;
commit;
connection default;
drop table t1;
disconnect con1;
disconnect con2;
# MDEV-20131 Assertion `!pk->has_virtual()' failed
create table t1 (a text, primary key(a(1871))) engine=innodb;
ERROR 42000: Specified key was too long; max key length is 1536 bytes