With MAX_INDEXES=64 (the default), key_map = Bitmap<64> is just a wrapper
around ulonglong and is thus "trivial" (it can be bzero-ed or memcpy-ed and
still stays valid).
With MAX_INDEXES=128, key_map = Bitmap<128> is not a "trivial" type
anymore. The implementation uses MY_BITMAP, and MY_BITMAP contains pointers,
which make a Bitmap invalid when it is memcpy-ed/bzero-ed.
The problem in 10.4 is that there are many new key_map members inside TABLE
and KEY, and those are often memcpy-ed and bzero-ed.
The fix makes Bitmap "trivial" by inlining most of the MY_BITMAP
functionality; pointers/heap allocations are not used anymore.
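As an illustration only (not the actual class; the real Bitmap<N> has many
more operations), the idea is to keep the storage as a fixed inline array of
64-bit words, so the type stays trivially copyable regardless of MAX_INDEXES:

  // Minimal sketch with hypothetical names: a fixed-size bitmap with inline
  // storage and no heap pointers, so memcpy()/bzero() keep it valid.
  #include <cstdint>
  #include <cstring>

  template <unsigned N>
  class TrivialBitmap
  {
    static const unsigned WORDS= (N + 63) / 64;
    uint64_t buf[WORDS];                        // inline storage, no pointers
  public:
    void clear_all()           { memset(buf, 0, sizeof(buf)); }
    void set_bit(unsigned n)   { buf[n / 64] |=  (uint64_t(1) << (n % 64)); }
    void clear_bit(unsigned n) { buf[n / 64] &= ~(uint64_t(1) << (n % 64)); }
    bool is_set(unsigned n) const
    { return buf[n / 64] & (uint64_t(1) << (n % 64)); }
  };

An object like this can be memcpy-ed or bzero-ed as part of a larger struct
without becoming invalid, which is exactly the property the 128-index
key_map needs.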
If a derived table has SELECT DISTINCT, provide index statistics for it so that the join optimizer in the
upper select knows that ref access to the table will produce one row.
This patch fixes an invalid read in fill_effective_table_privileges,
triggered by a grant_version increase between the PREPARE and the EXECUTE
of a statement creating a view from I_S.
A tmp table was created and freed while preparing the statement;
TABLE_LIST::table_name was set to point to the tmp table's
TABLE_SHARE::table_name, which no longer existed after preparing was
done.
The grant version increase made fill_effective_table_privileges, called
during EXECUTE, try to fetch the updated grant info, and this is where
the dangling table name was used.
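The failure mode is a plain use-after-free: a pointer kept in a long-lived
structure refers to a name buffer owned by a short-lived tmp table. A
stripped-down, hypothetical illustration (not the server code):

  // Hypothetical sketch of the lifetime bug: tl.table_name outlives the
  // tmp share that owns the string it points to.
  #include <cstring>

  struct Tmp_share  { char table_name[64]; };
  struct Table_list { const char *table_name; };

  int main()
  {
    Tmp_share *tmp= new Tmp_share;
    strcpy(tmp->table_name, "tmp_is_table");

    Table_list tl;
    tl.table_name= tmp->table_name;   // set while preparing the statement

    delete tmp;                       // tmp table freed once PREPARE is done

    // A later EXECUTE that re-fetches grant info would dereference
    // tl.table_name here -- an invalid read into freed memory.
    return 0;
  }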
Fix partitioning for trx_id-versioned tables.
`partition by hash`, `range` and others now work.
`partition by system_time` is forbidden.
Currently we cannot use row_start and row_end in `partition by`, because
the insertion of the versioned fields, as well as setting up the
row_start/row_end values (which are transaction ids), is done by the
engine's handler -- so this is also forbidden.
The drawback is that it's now impossible to use `partition by key()`
without parameters for such tables, because it references row_start and
row_end implicitly.
* add handler::vers_can_native()
* drop Table_scope_and_contents_source_st::vers_native()
* drop partition_element::find_engine_flag as unused
* forbid versioning partitioning for trx_id as not supported
* adapt vers tests for trx_id partitioning
* forbid any row_end referencing in `partition by` clauses,
including implicit `by key()`
This patch implements engine-independent unique hash indexes.
Usage:- A unique HASH index can be created automatically for a blob/varchar/text column whose key
length > handler->max_key_length(),
or it can be explicitly specified.
Automatic Creation:-
CREATE TABLE t1 (a blob unique);
Explicit Creation:-
CREATE TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART Representations:-
A long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1 (a blob, b blob, unique(a, b));)
1. User Given Representation:- the key_info->key_part array will be similar to what the user has defined.
So in the case of the example it will have 2 key_parts (a, b).
2. Storage Engine Representation:- In this case there will be only one key_part and it will point to the
HASH_FIELD. This key_part will always be after the user defined key_parts.
So:-
  User Given Representation      [a] [b] [hash_key_part]
  key_info->key_part -------------^
  Storage Engine Representation  [a] [b] [hash_key_part]
  key_info->key_part ---------------------^
table->s->key_info will have the User Given Representation, while table->key_info will have the Storage Engine
Representation. Representations can be changed into each other by calling the re/setup_keyinfo_hash functions.
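Conceptually both representations share one key_part array; switching between
them only moves the key_part pointer past the user-defined parts and adjusts
the part count. A hypothetical sketch of that idea (the real
re/setup_keyinfo_hash functions in the server do more bookkeeping):

  // Simplified stand-ins for KEY/KEY_PART_INFO; names are illustrative.
  struct Key_part { const char *field_name; };

  struct Key
  {
    Key_part *key_part;              // points into a shared array
    unsigned  user_defined_key_parts;
  };

  // Storage engine view: a single key_part pointing at the hash field.
  static void to_engine_view(Key *key, unsigned user_parts)
  {
    key->key_part+= user_parts;      // skip [a][b], land on [hash_key_part]
    key->user_defined_key_parts= 1;
  }

  // Back to the user-given view: [a][b] visible again.
  static void to_user_view(Key *key, unsigned user_parts)
  {
    key->key_part-= user_parts;
    key->user_defined_key_parts= user_parts;
  }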
Working:-
1. When the user specifies a HASH index, or the key_length is > handler->max_key_length(), then in
mysql_prepare_create_table one extra vfield is added (for each long unique key), and key_info->algorithm is
set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, values for the hash key_part are set (like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of Item_fields;
when an explicit key length is given by the user, Item_func_left is applied to the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which creates the hash key from
table->record[0] and then calls ha_index_read_map; if we find a duplicate hash, we compare the result
field by field (see the sketch below).
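The same flow, modelled outside the server as a self-contained toy sketch of
step 4 (index read by hash, then field-by-field comparison on a hash hit);
the types and the hash function are illustrative, not the server's
Item_func_hash or handler API:

  #include <cstdint>
  #include <functional>
  #include <string>
  #include <unordered_map>
  #include <vector>

  using Row = std::vector<std::string>;           // values of the unique columns

  // Stand-in for the generated hash vfield (the server uses Item_func_hash).
  static uint64_t long_hash(const Row &r)
  {
    uint64_t h= 0;
    for (const std::string &v : r)
      h= h * 1000003u + std::hash<std::string>{}(v);
    return h;
  }

  struct Toy_long_unique_index
  {
    std::unordered_multimap<uint64_t, Row> idx;   // hash key part -> row

    // Mirrors the described check: index read by hash, then compare fields.
    bool is_duplicate(const Row &rec) const
    {
      auto range= idx.equal_range(long_hash(rec));
      for (auto it= range.first; it != range.second; ++it)
        if (it->second == rec)                    // hash collision? verify fields
          return true;
      return false;
    }

    bool write_row(const Row &rec)                // ha_write_row-like
    {
      if (is_duplicate(rec))
        return false;                             // real duplicate -> error
      idx.emplace(long_hash(rec), rec);
      return true;
    }
  };

Rows whose unique columns hash to the same value but differ in content are not
rejected, which is why the field-by-field comparison after the index read is
required.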