Commit d00f19e832 in MariaDB/server (https://github.com/MariaDB/server.git):
This patch implements an engine-independent unique HASH index.

Usage: a unique HASH index is created automatically for a blob/varchar/text column whose key length exceeds handler->max_key_length(), or it can be specified explicitly.

Automatic creation:

    CREATE TABLE t1 (a BLOB UNIQUE);

Explicit creation:

    CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);

Internal KEY_PART representations: a long unique key_info has two representations. (Let's walk through them with the example CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b));)

1. User-given representation: the key_info->key_part array mirrors what the user defined, so in the example it has two key_parts (a, b).
2. Storage engine representation: there is only one key_part and it points to the HASH_FIELD. This key_part always comes after the user-defined key_parts.

So:

    User-given representation
        [a] [b] [hash_key_part]
         ^
         key_info->key_part

    Storage engine representation
        [a] [b] [hash_key_part]
                 ^
                 key_info->key_part

table->s->key_info holds the user-given representation, while table->key_info holds the storage engine representation. One representation is converted into the other by calling the setup_keyinfo_hash / re_setup_keyinfo_hash functions.

How it works:

1. When the user specifies a HASH index, or the key length is greater than handler->max_key_length(), mysql_prepare_create_table adds one extra vfield (for each long unique key) and sets key_info->algorithm to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are set (fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with the list of Item_fields; when the user gives an explicit key length, Item_func_left is applied to the Item_field values.
4. ha_write_row / ha_update_row call check_duplicate_long_entry_key, which builds the hash key from table->record[0] and then calls ha_index_read_map; if a duplicate hash is found, the rows are compared field by field.
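As a quick illustration of the behaviour described above, here is a small SQL sketch. The table names and the REPEAT() payload are invented for the example, and exact error messages and SHOW CREATE TABLE output may vary by version:

```sql
-- Automatic: the BLOB key cannot fit in a normal index, so the server
-- builds a hidden hash key (HA_KEY_ALG_LONG_HASH) behind the UNIQUE.
CREATE TABLE t1 (a BLOB UNIQUE);

-- Explicit: request the hash-based unique even for a short column.
CREATE TABLE t2 (a INT, UNIQUE KEY (a) USING HASH);

INSERT INTO t1 VALUES (REPEAT('x', 10000));

-- The duplicate value hashes to the same key; the collision is then
-- verified field by field, so this statement fails with a
-- duplicate-key error.
INSERT INTO t1 VALUES (REPEAT('x', 10000));

-- The hidden hash index is reported as UNIQUE KEY ... USING HASH.
SHOW CREATE TABLE t1;
```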
Directory listing:

- cmake
- mysql-test/oqgraph
- CMakeLists.txt
- graphcore-config.h
- graphcore-graph.cc
- graphcore-graph.h
- graphcore-types.h
- graphcore.cc
- graphcore.h
- ha_oqgraph.cc
- ha_oqgraph.h
- oqgraph_config.h.in
- oqgraph_judy.cc
- oqgraph_judy.h
- oqgraph_probes.d
- oqgraph_shim.cc
- oqgraph_shim.h
- oqgraph_thunk.cc
- oqgraph_thunk.h
- README
OQGraph storage engine v3
Copyright (C) 2007-2014 Arjen G Lentz & Antony T Curtis for Open Query, & Andrew McDonnell

The Open Query GRAPH engine (OQGRAPH) is a computation engine allowing hierarchies and more complex graph structures to be handled in a relational fashion. In a nutshell, tree structures and friend-of-a-friend style searches can now be done using standard SQL syntax, and the results joined onto other tables.

Based on a concept by Arjen Lentz.
v3 implementation by Antony Curtis, Arjen Lentz, Andrew McDonnell.

For more information, documentation, support, enhancement engineering, see http://openquery.com/graph or contact graph@openquery.com

INSTALLATION

OQGraph requires at least version 1.40.0 of the Boost Graph library. To obtain a copy of the Boost library, see http://www.boost.org/
This can be obtained in Debian Wheezy by `apt-get install libboost-graph-dev`

OQGraph requires libjudy - http://judy.sourceforge.net/
This can be obtained in Debian Wheezy by `apt-get install libjudy-dev`

BUILD (example)

    cd path/to/maria/source
    mkdir build        # use symlink to scratch
    cd build
    CONFIGURE="-DWITH_EXTRA_CHARSETS=complex -DWITH_PLUGIN_ARIA=1 -DWITH_READLINE=1 -DWITH_SSL=bundled -DWITH_MAX=1 -DWITH_EMBEDDED_SERVER=1"
    cmake .. $CONFIGURE
    make -j5
    mysql-test-run --suite oqgraph
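To make the "standard SQL syntax" claim concrete, here is a minimal usage sketch. It follows the commonly documented OQGraph v3 table layout; the backing-table name, column names, latch value and example data are assumptions for illustration, not taken from this README:

```sql
-- Ordinary table that stores the edges of the graph (assumed name/columns).
CREATE TABLE graph_base (
  from_id INT UNSIGNED NOT NULL,
  to_id   INT UNSIGNED NOT NULL,
  PRIMARY KEY (from_id, to_id),
  KEY (to_id)
) ENGINE=InnoDB;

-- OQGraph v3 table: a fixed column layout acting as a view over the
-- backing table named in data_table.
CREATE TABLE graph (
  latch  VARCHAR(32) NULL,
  origid BIGINT UNSIGNED NULL,
  destid BIGINT UNSIGNED NULL,
  weight DOUBLE NULL,
  seq    BIGINT UNSIGNED NULL,
  linkid BIGINT UNSIGNED NULL,
  KEY (latch, origid, destid) USING HASH,
  KEY (latch, destid, origid) USING HASH
) ENGINE=OQGRAPH
  data_table='graph_base' origid='from_id' destid='to_id';

INSERT INTO graph_base VALUES (1,2), (2,3), (3,4), (1,4);

-- Friend-of-a-friend style search: path from node 1 to node 3, returned
-- one row per hop, joinable onto other tables like any result set.
SELECT * FROM graph
WHERE latch = 'breadth_first' AND origid = 1 AND destid = 3
ORDER BY seq;
```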