mariadb/mysql-test/main/long_unique_innodb.test

# MDEV-371 Unique Index for long columns (commit of 2019-02-19)
#
# This patch implements an engine-independent unique hash index.
#
# Usage: a unique HASH index is created automatically for a blob/varchar/text
# column whose key length exceeds handler->max_key_length(), or it can be
# requested explicitly.
#   Automatic creation: CREATE TABLE t1 (a BLOB UNIQUE);
#   Explicit creation:  CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);
#
# Internal KEY_PART representations: a long unique key_info has two
# representations (take CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b)); as
# the running example):
#   1. User-given representation: the key_info->key_part array mirrors what
#      the user defined, i.e. two key_parts (a, b) in the example.
#   2. Storage engine representation: a single key_part pointing to the
#      HASH_FIELD, always placed after the user-defined key_parts.
#
#     User-given representation       Storage engine representation
#     [a] [b] [hash_key_part]         [a] [b] [hash_key_part]
#      ^                                       ^
#      key_info->key_part                      key_info->key_part
#
# table->s->key_info holds the user-given representation, while
# table->key_info holds the storage engine representation; they are converted
# into each other by calling setup_keyinfo_hash()/re_setup_keyinfo_hash().
#
# How it works:
#   1. When the user specifies USING HASH, or the key length exceeds
#      handler->max_key_length(), mysql_prepare_create_table() adds one extra
#      vfield per long unique key and sets key_info->algorithm to
#      HA_KEY_ALG_LONG_HASH.
#   2. init_from_binary_frm_image() fills in the values for the hash key_part
#      (fieldnr, field, flags).
#   3. parse_vcol_defs() creates HASH_FIELD->vcol_info: an Item_func_hash over
#      the list of Item_fields; when the user gives an explicit key length,
#      Item_func_left is used to cut the Item_field values to that length
#      before hashing.
#   4. ha_write_row()/ha_update_row() call check_duplicate_long_entry_key(),
#      which builds the hash key from table->record[0] and calls
#      ha_index_read_map(); when a duplicate hash is found, the rows are
#      compared field by field.
--source include/have_innodb.inc
#
# MDEV-371 Unique indexes for blobs
#
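
# A minimal sketch of the explicit USING HASH form described in the header
# above (the table name t_hash_expl is illustrative, not part of the original
# test): SHOW CREATE TABLE should display the HA_KEY_ALG_LONG_HASH key as
# "USING HASH", and the write-path duplicate check
# (check_duplicate_long_entry_key) raises ER_DUP_ENTRY.
create table t_hash_expl (a int, unique (a) using hash) engine=InnoDB;
show create table t_hash_expl;
insert into t_hash_expl values (1);
--error ER_DUP_ENTRY
insert into t_hash_expl values (1);
drop table t_hash_expl;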
create table t1(a blob unique) engine=InnoDB;
insert into t1 values('RUC');
--error ER_DUP_ENTRY
insert into t1 values ('RUC');
drop table t1;
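
# A hedged sketch of step 4 in the header (table name t2 and the values are
# illustrative): the index stores only a hash of the value, so uniqueness is
# decided by comparing the rows field by field when hashes match. Distinct
# long values should both insert, even if their hashes were to collide; only
# an exact duplicate is rejected.
create table t2 (a blob unique) engine=InnoDB;
insert into t2 values (repeat('x', 5000));
insert into t2 values (repeat('y', 5000));
--error ER_DUP_ENTRY
insert into t2 values (repeat('x', 5000));
drop table t2;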
create table t1 (a blob unique, c int unique) engine=InnoDB;
show create table t1;
drop table t1;
--echo # Test concurrent inserts of a long unique value in InnoDB
# In each isolation level, con1 inserts a value inside an open transaction;
# con2's insert of the same value must wait on con1's lock and, with
# innodb_lock_wait_timeout=2, fails with ER_LOCK_WAIT_TIMEOUT.
create table t1(a blob unique) engine=InnoDB;
show create table t1;
connect ('con1', localhost, root,,);
connect ('con2', localhost, root,,);
--connection con1
set innodb_lock_wait_timeout= 2;
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values('RUC');
--connection con2
set innodb_lock_wait_timeout= 2;
set transaction isolation level READ UNCOMMITTED;
start transaction;
--error ER_LOCK_WAIT_TIMEOUT
insert into t1 values ('RUC');
--connection con1
commit;
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values('RC');
--connection con2
commit;
set transaction isolation level READ COMMITTED;
start transaction;
--error ER_LOCK_WAIT_TIMEOUT
insert into t1 values ('RC');
commit;
--connection con1
commit;
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values('RR');
--connection con2
commit;
set transaction isolation level REPEATABLE READ;
start transaction;
--error ER_LOCK_WAIT_TIMEOUT
insert into t1 values ('RR');
--connection con1
commit;
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values('S');
--connection con2
commit;
set transaction isolation level SERIALIZABLE;
start transaction;
--error ER_LOCK_WAIT_TIMEOUT
insert into t1 values ('S');
commit;
--connection con1
commit;
select * from t1;
drop table t1;
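
# Same four isolation levels again, but this time con1 rolls back instead of
# committing: con2's insert of the same value is issued with --send, blocks
# on con1's lock, and is expected to succeed once the rollback releases it
# (collected with --reap).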
create table t1(a blob unique) engine=InnoDB;
--connection con1
set transaction isolation level READ UNCOMMITTED;
start transaction;
insert into t1 values('RUC');
--connection con2
set transaction isolation level READ UNCOMMITTED;
start transaction;
--send insert into t1 values ('RUC');
--connection con1
rollback;
--connection con2
--reap
commit;
--connection con1
set transaction isolation level READ COMMITTED;
start transaction;
insert into t1 values('RC');
--connection con2
set transaction isolation level READ COMMITTED;
start transaction;
--send insert into t1 values ('RC');
--connection con1
rollback;
--connection con2
--reap
commit;
--connection con1
set transaction isolation level REPEATABLE READ;
start transaction;
insert into t1 values('RR');
--connection con2
set transaction isolation level REPEATABLE READ;
start transaction;
--send insert into t1 values ('RR');
--connection con1
rollback;
--connection con2
--reap
commit;
--connection con1
set transaction isolation level SERIALIZABLE;
start transaction;
insert into t1 values('S');
--connection con2
set transaction isolation level SERIALIZABLE;
start transaction;
--send insert into t1 values ('S');
--connection con1
rollback;
--connection con2
--reap
commit;
connection default;
drop table t1;
disconnect con1;
disconnect con2;